Organizational behavior


Authors: Jonathan Grudin
Posted: Mon, June 23, 2014 - 11:36:18

Two books strongly affected my view of organizations—those I worked in, studied, and developed products for. One I read 27 years ago; the other I just finished, although it came out 17 years ago.

Encountering Henry Mintzberg’s typology of organizational structure

In 1987, an “Organizational Science Discussion Group” was formed by psychologists and computer scientists at MCC, the Microelectronics and Computer Technology Corporation. We had no formal training in organizational behavior but realized that software was being designed to support increasingly complex organizational settings. Of the papers we discussed, two made lasting impressions. “A Garbage Can Model of Organizational Choice” humorously described the anarchy found in universities. It may primarily interest academics; it didn’t seem relevant to my experiences in industry. The discussion group’s favorite was a dense one-chapter condensation [1] of Henry Mintzberg’s 1979 The Structuring of Organizations: A Synthesis of the Research.

Mintzberg observed that organizations have five parts, each with its own goals, processes, and methods of evaluation. Three are the top management, middle management, and workers, which he labels strategic apex, middle line, and operating core. A fourth group designs workplace rules and processes, such as distribution channels, forms, and assembly lines. This he calls technostructure, although technology is not necessarily central. Finally there is everyone else: the support staff, including IT staff, attorneys, custodians, cafeteria workers, and so on.

Mintzberg argues that these groups naturally vie for influence, with one or another usually becoming particularly powerful. There are thus five “organizational forms.” Some are controlled by executives, but in divisionalized companies the middle line has strong autonomy, as when several product lines are managed with considerable independence. In an organization of professionals, such as a university, the workers—faculty—have wide latitude in organizing their work. In organizations highly reliant on regulations or manufacturing processes, the technostructure is powerful. And an “adhocracy,” such as a film company, relies on a collection of people in support roles.

When I left MCC, I was puzzled to find that Mintzberg’s analysis was not universally highly regarded. Where was the supporting evidence, people asked. What could you do with it? Only then did I realize why we had been so impressed: our conviction arose from the unique origin of MCC.

An act of Congress enabled MCC to open its doors in 1984. In response to a prominent Japanese “Fifth Generation” initiative, antitrust laws were modified to permit 20 large U.S. companies to collaborate on pre-competitive research. MCC was a civilian effort headed by Bobby Ray Inman, previously the NSA Director and CIA Deputy Director. It employed about 500 people, some from the shareholder companies and many hired directly. Our small discussion group drew from the software and human interface programs; MCC also had programs on artificial intelligence, databases, parallel processing, CAD, and packaging (hardware).

Consider this test of Mintzberg’s hypotheses: Create an organization of several hundred people spread across the five organizational parts, give them an ambiguous charter, let them loose, and see what happens. MCC was that experiment.

To a breathtaking degree, we supported Mintzberg’s thesis. Each group fought for domination. The executives tried to control low-level decisions. Middle managers built small fiefdoms and strove for autonomy. Individual contributors maneuvered for an academic “professional bureaucracy” model. Employees overseeing the work processes burdened us with restrictive procedural hurdles, noting, for example, that because different shareholder companies funded different programs, our interactions should be regulated. Even the support staff felt they should run things—and not without reason. Several were smart technicians from shareholder companies; seeing researchers running amok on LISP machines, some thought, “We know what would be useful to the shareholders; these guys sure as hell don’t.”

Mintzberg didn’t write about technology design per se. We have to make the connections. Central to his analysis is that each part of the organization works differently. Executives, middle managers, individual contributors, technostructure, and support staff have different goals, priorities, tasks, and ways to measure and reward work. Their days are organized differently. Time typically spent in meetings, ability to delegate, and the sensitivity of their work differ. Individual contributors spend more time in informal communication, managers rely more on structured information—documents, spreadsheets, slide decks—and executives coordinate the work of groups that rarely communicate directly.

Such distinctions determine which software features will help and which may hinder. Preferences can sharply conflict. When designing a system or application that will be used by people in different organizational parts, it is important to consult or observe representatives of these groups during requirements analysis and design testing.

At MCC we did not pursue the implications, but I was prepared when Constance Perin handed me an unpublished paper [2] in 1988. I had previously seen the key roles in email as those of senders and receivers; she showed that enterprise adoption could hinge on differences between managers, who liked documents and hated interruptions, and individual contributors, who engaged in informal communication and interruption. Over the next 25 years, studying organizational adoption of a range of technologies, I repeatedly found differences among members of Mintzberg’s groups. If it was confirmation bias, it was subtle, because, somewhat obtusely, I didn’t look for it and was surprised each time. The pattern can also be seen in other reports of enterprise technology adoption. This HICSS paper and this WikiSym paper provide a summary and a recent example.

Clayton Christensen and disruptive technologies

In 1997, Clayton Christensen published The Innovator’s Dilemma. Thinking it was a business professor’s view of issues facing a lone inventor, I put off reading it until now. But it is a nice analysis of organizational behavior based on economics and history, and is a great tool for thinking about the past and the present.

I have spent years looking into HCI history [3], piecing together patterns, some of which Christensen lays out more fully and elegantly. The Innovator’s Dilemma deepened my interpretations of HCI history and reframed my current work on K-12 education. Before covering recent criticism of this short, easily read book and indicating why it is a weak tool for prediction, I will outline its thesis and discuss how I found Mintzberg and Christensen to be useful.

Christensen describes fields as diverse as steel manufacture, excavation equipment, and diabetes treatment, arguing that products advance through sustaining innovations that improve performance and satisfy existing customers. Eventually a product provides more capability than most customers need, setting the stage for a disruptive innovation that has less capability and a lower price—for example, a 3.5” disk drive when most computers used 5.25” drives, or a small off-road motorbike when motorcycles were designed for highway use. The innovation is dismissed by existing customers, but if new customers happy with less are found, the manufacturer can improve the product over time and then enter the mainstream market. For example, minicomputers were initially positioned for small businesses that could not afford mainframes, then became more capable and undermined the mainframe industry. Later, PCs and workstations, initially too weak to do much, grew more capable and destroyed the once-lucrative minicomputer market.

An interesting insight is that established companies can fail despite being well managed. Many made rational decisions: they listened to their customers and built share in profitable product lines rather than diverting resources into speculative products with no established markets.

Some firms that successfully embraced disruptive innovations learned to survive on low sales volumes and thin profit margins. Because dominant companies are structured around high volumes and high margins, Christensen concludes that a large company can best embrace a disruptive innovation by creating an autonomous entity, as IBM did when it located its PC development team in Florida.

Using the insights of Mintzberg and Christensen for understanding

For decades, Mintzberg’s analysis has helped me understand the results of quantitative and qualitative research, mine and others’, as described in the papers cited above and two handbook chapters [4]. Reading The Innovator’s Dilemma, I reevaluated my experiences at Wang Laboratories, a successful minicomputer company that, like the others, underestimated PCs and Unix-based workstations. It also made sense of more recent experiences at Microsoft, as well as events in HCI history.

For example, a former Xerox PARC engineer recounted his work on the Alto, the first GUI-based computer intended for commercial sale. A quarter century later he still seemed exasperated with Xerox marketers for pricing the Alto to provide the same high-margin return as photocopiers. With a lower price, the Alto could have found a market and created the personal computer industry. The marketing decision seems clueless in hindsight, but in Christensen’s framework it was sensible for anything except a disruptive innovation—which the personal computer turned out to be.

A colleague said, “An innovator’s dilemma book could be written about Microsoft.” Indeed. It would describe successes and failures. Not long after The Innovator’s Dilemma was published, Xbox development began. The team was located far from the main Redmond site, reportedly to let them develop their own approach, as Christensen would recommend. Unsuccessful efforts are less easily discussed, but Courier might be a possibility.

Using (or avoiding) the frameworks as a basis for predictions

Mintzberg’s typology has proven relevant so often that I would recommend including members of each of his groups when assessing requirements or testing designs. His detailed analysis could suggest design features, but because of the complex, rapidly evolving interdependencies in how technology is used in organizations, empirical assessment is necessary.

Christensen is more prescriptive, arguing that sustaining innovations require one approach and timely disruptive innovations another. But if disruptiveness is a continuum, rather than either-or, choosing the approach could be difficult. And getting the timing right could be even trickier. Can one accurately assess disruptiveness? My intuition is, rarely.

Christensen courageously concluded the 1997 book by analyzing a possible disruptive innovation, the electric car. His approach impressed me—methodical, logical, building on his lessons from history. He concluded that the electric car was disruptive and provided guidance for its marketing. In my view, this exercise revealed the challenges. He projected that only in 2020 would electric vehicle acceleration intersect mainstream demands (0 to 60 mph in 10 seconds). Reportedly the Nissan Leaf has already achieved that, and Tesla has reached five seconds. On cruising range he was also pessimistic. Unfortunately, his recommendations depend on the accuracy of these and other trend projections. He suggested a new low-end market (typical of the disruptive innovations he studied), such as high school students, who decades earlier fell in love with the disruptive Honda 50 motorcycle; instead, electric car makers have focused on appealing to existing high-end drivers. And a hybrid approach by established manufacturers, which failed for his mechanical excavator companies, has been a major automobile innovation success story.

Christensen reverse-engineered success cases, a method with weaknesses that I described in an earlier blog post. We are not told how often plausible disruptive innovations failed or were developed too soon. Christensen says that innovators must be willing to fail a couple of times before succeeding. Unfortunately, there is no way to differentiate two failures of an innovation that will eventually succeed from two failures of a bad or premature idea. Is it “third time’s a charm” or “three strikes and you’re out”? If two-thirds of possible disruptive innovations pan out in a reasonable time frame, an organization would be foolish not to plan for one. If only one in 100 succeeds, it could be better to cross your fingers and invest the resources in sustaining innovations.
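To see why the unknown success rate dominates the decision, here is a minimal expected-value sketch. The payoffs, costs, and probabilities are illustrative assumptions of mine, not figures from Christensen:

```python
# Back-of-the-envelope expected value of an innovation bet.
# All numbers are hypothetical, chosen only to show how the
# unknown success rate flips the conclusion.

def expected_value(p_success: float, payoff: float, cost: float) -> float:
    """Expected net return: chance of success times payoff, minus the bet."""
    return p_success * payoff - cost

# A sustaining innovation: modest payoff, high odds of success.
sustaining = expected_value(p_success=0.9, payoff=10.0, cost=5.0)    # 4.0

# A disruptive bet if two-thirds of such bets pan out...
disruptive_optimist = expected_value(p_success=2/3, payoff=100.0, cost=20.0)   # ~46.7

# ...versus the same bet if only one in 100 pans out.
disruptive_pessimist = expected_value(p_success=0.01, payoff=100.0, cost=20.0)  # -19.0

print(sustaining, disruptive_optimist, disruptive_pessimist)
```

The same bet swings from compelling to value-destroying based on a single input that, as noted above, no one knows how to estimate in advance.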

Our field is uniquely positioned to explore these challenges. Most industries studied by Christensen had about one disruptive innovation per century. Disk drives, which Christensen describes as the fruit flies of the business world, were disrupted every three or four years. He never mentions Moore’s law. He was trying to build a general case, but semiconductor advances do guarantee a flow of disruptive innovation. New markets appear as prices fall and performance rises. A premature effort can be rescued by semiconductor advances: The Apple Macintosh, a disruptive innovation for the PC market, was released in 1984. It failed, but models in late 1985 and early 1986 with more memory and processor power succeeded.

Despite the assistance of Moore’s law, the success rate for innovative software applications has been estimated to be 10%. Many promising, potentially disruptive applications failed to meet expectations for two or three decades: speech recognition and language understanding, desktop videoconferencing, neural nets, workflow management systems, and so on. The odds of correctly timing a breakthrough in a field that has one each century are worse. Someone will nail it, but how many will try too soon and be forgotten?

The weakness of Christensen’s historical analysis as a tool for prediction is emphasized by Harvard historian Jill Lepore in a New Yorker article appearing after this post was drafted. Some of Christensen’s cases are more ambiguous when examined closely, although Christensen did describe exceptions in his chapter notes. Lepore objects to the subsequent use of the disruptive innovation framework by Christensen and others to make predictions in diverse fields, notably education.

These are healthy concerns, but I see a lot of substance in the analysis. No mainframe company succeeded in the minicomputer market. No minicomputer company succeeded in efforts to make PCs. The minicomputer companies were many, they were highly profitable, and, save IBM, they disappeared.

I’ll take the plunge by suggesting that a disruptive innovation is unfolding in K-12 education. The background is in posts that I wrote before reading Christensen: “A Perfect Storm” and “True Digital Natives.” In Christensen’s terms, 1:1 device-per-student deployments transform the value network. They enable new pedagogical and administrative approaches, along with high-resolution digital pens, advanced note-taking tools, and handwriting recognition software for searching notes. As with many disruptive innovations at the outset, the market for 1:1 deployments is too small to attract mainstream sales and marketing. But appropriate pedagogy has been developed, prices are falling fast, and infrastructure is being built out. Proven benefits make widespread deployment inevitable. The question is, when? The principal obstacle in the U.S. is declining state support for professional development for teachers.

Conclusion: the water we swim in

Many of my cohort have worked in several organizations over our careers. Young people are told to expect greater volatility. It makes sense to invest in learning about organizations. If you start a discussion group, you now have two recommendations.

Endnotes

1. Published in D. Miller & P. H. Friesen (Eds.), Organizations: A Quantum View, Prentice-Hall, 1984, and reprinted in R. Baecker (Ed.), Readings in Computer Supported Cooperative Work and Groupware, Morgan Kaufmann, 1995.

2. A modified version appeared as “Electronic social fields in bureaucracies.”

3. A moving target: The evolution of HCI. In J. Jacko (Ed.), Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications (3rd edition). Taylor & Francis, 2012. An updated version is available on my web page.

4. J. Grudin & S. Poltrock, 2012. Taxonomy and theory in Computer Supported Cooperative Work. In S.W. Kozlowski (Ed.), Handbook of Organizational Psychology, 1323-1348. Oxford University Press. Updated version on my web page; 
J. Grudin, 2014. Organizational adoption of new communication technologies. In H. Topi (Ed.), Computer Science Handbook, Vol. II. Chapman & Hall / CRC Press.

Thanks to John King and Gayna Williams for discussions.




Jonathan Grudin

Jonathan Grudin has been active in CHI and CSCW since each was founded. He has written about the history of HCI and challenges inherent in the field’s trajectory, the focus of a course given at CHI 2022. He is a member of the CHI Academy and an ACM Fellow. [email protected]


