Cover story

XXV.6 November–December 2018
Page: 26

The HCI innovator’s dilemma


Author:
Chris Harrison


Generating ideas and solutions, evaluating and refining them, and hopefully putting them into practice is the essence of an HCI professional’s life, whether it be in software or hardware. It is why we’re always armed with Post-it notes and dry erase markers—brainstorming, affinity diagramming, and sketching our way to a better future. As inventors and problem solvers, innovation is our game, and for this reason it is interesting to consider the lifecycle of HCI ideas.


At first we might think that HCI, as a fundamentally technology-driven field, would largely follow along with other technology trends. Fortunately, many people have written extensively on the business and history of technology, creating a cornucopia of curves and charts to draw upon. In this article, I pull together a few notable ones to make the case that HCI does not follow a standard innovation lifecycle.


Let’s start with a 1960s classic: the Technology Lifecycle S-Curve [1] (Figure 1). In short, there’s an embryonic period of uncertainty, missteps, and hard work before commercial feasibility is achieved, at which point the product enters a period of rapid improvement and proliferation. Money is made! Eventually, as the technology and market mature, improvement and adoption slow and there are diminishing returns. The product ages, and may even die, but for the sake of simplicity we’ll focus on the earlier periods.

Figure 1. Technology Lifecycle S-Curve, adapted from [1].

Figure 1 is a reasonable generalization of a complex process—it’s still being taught today. In such graphs, the axes are important to note, as they reveal what the chart’s designer values. In this case, I’ve chosen a popular formulation that plots market size over time, but you’ll also find variants that use product sales, revenue, number of problems solved, technical maturity, and so on. These are all important metrics, but they also almost exclusively view the process of innovation through a business or technical lens.

For many HCI innovators, impact is a professional and personal goal. Impact, of course, is also an imprecise word. Affecting millions of lives (happiness, health, etc.) through an app or feature that you developed is without doubt impactful. This type of impact largely follows that S-curve, as a larger user base correlates with greater influence. But many in HCI, especially academics, also seek a very specific form of impact: intellectual impact. These are the clever interaction techniques, enlightening study results, novel sensing hardware, and other tasty morsels we reveal or invent that potentially create and shape new computing experiences.

The greatest opportunity to have HCI intellectual impact is in the very earliest stages of a product category. This is when the intellectual landscape is wide open, and questions and problems abound. Everything is new and there is no doctrine. This is when HCI innovators can come in, establish user needs, shape attitudes, define features, invent new lingo and methods, and fill capability voids with new enabling technologies. As with all innovation-driven pursuits, the rate and impact of contributions start high and naturally taper off as the subfield or product category matures. James Utterback and William Abernathy described this “innovation dynamic” back in 1975 [2] (Figure 2).

Figure 2. Innovation rate over time, adapted from [2], overlaid approximately onto our earlier technology lifecycle epochs.

Unfortunately, I believe this gentle innovation roll-off that occurs synchronously with product maturation is inaccurate for HCI. The real curve is much uglier and tells the story of a dilemma we as innovators in HCI must face. To be frank, I derive this thesis not from extensive research or interviews, but rather from my own views and experiences as an academic who’s had some luck in transferring technologies to industry, and also as an entrepreneur with my startup Qeexo [3], which has software on more than 150 million smartphones to date.

The Dilemma

First off, we need to extend our timeline, considering the period of work before embryonic activity, when the concept is a mere glint in the HCI community’s eye. This almost always predates commercial activity, as borne out in Brad Myers’s 1998 Interactions article “A Brief History of Human Computer Interaction Technology” [4]. This holds true for today’s emerging HCI areas, like virtual and augmented reality (first researched in the late 1960s), voice-driven interfaces (also 1960s), and the Internet of Things (1980s). Figure 3 offers examples of when research started and when commercial products began to ship.

Figure 3. Research and commercial development timelines of various HCI subjects, adapted and extended from [4].

The gap between early research (glint stage) and early commercial activity (embryonic stage) is often several decades, because the idea starts as something so impractical and challenging that no one is thinking about productization. People are mostly trying to figure out if it is possible at all, or at least interesting to study. Academics have the unique luxury of setting practicality aside to probe the distant future (myself included, with research like on-skin projected interfaces [5,6], unlikely to be feasible anytime soon, if ever!). This is a mentality (and in many ways a method) that allows HCI researchers to be way ahead of the game, before there is any thought, let alone hope, of commercialization—the latter being a prerequisite for most businesses. This significant separation in time led Bill Buxton to assert that “any technology that is going to have significant impact in the next 10 years is at least 10 years old.” This long period of incubation is what Buxton describes as a “long nose of innovation” [7], which I’ll fit under a new extended S-curve in Figure 4.

Figure 4. Extended Technology Lifecycle S-Curve, integrating a new “glint” stage and Buxton’s long nose of innovation.

This 20-plus-year gap can be very deceptive. Attendees at venues like ACM CHI often lament that no HCI research ever goes into product. I would argue that HCI is at the vanguard of innovation and has repeatedly influenced industry. But this is not the direct path we all wish existed from paper to product. Instead, HCI research has much greater impact in identifying opportunities in the first place, establishing the science and methods, building a shared vision, and developing a pipeline of human talent.

For this reason, few people can confidently say, “That feature was based on my paper!” Similarly, there are extraordinarily few startups that have come out of CHI (including technical HCI communities like UIST and UbiComp). Instead, the collective weight of an area’s research propels the idea out of academia and into the collective consciousness of both industry and the public imagination. This is often when we start to see embryonic commercial activity; non-researchers are convinced by the idea, and there is light at the end of the profitability tunnel.

For most of this embryonic period, things are good. HCI innovators are consulted and hired to help push the idea toward commercialization, leveraging their deep expertise from potentially decades of work in the domain. This is why industrial research labs popped up at places like Apple and Microsoft in the 1980s and 90s, and Oculus and Snap in this decade. In many respects, this is the golden era for HCI innovators, where there are finally resources to tackle big, interesting problems, and there is the real potential for research to escape into the real world. This was Xerox PARC in the 1970s working on the Alto system, which defined the desktop graphical user interface experience, and Apple in the 2000s working on the first iPhone, which defined multitouch mobile computing. Both were built on glint-stage research from decades earlier—Doug Engelbart’s oN-Line System (demonstrating windows, the mouse, word processing, hypertext, and other GUI mainstays) was developed in the 1960s [8], and Steve Jobs visited multitouch researchers for demos as far back as 1985 [9].

As money and resources pour in, teams grow, and ideas balloon into actual products, the intellectual impact of HCI research rapidly falls away (Figure 5). We’ve entered the growth phase. There’s now a critical mass of expertise, and the community commercializing the idea dwarfs the original research community, some of whom may have even moved on to different research areas in the intervening decades. The lineage of ideas grows obscure, with most of the good ideas from the literature now on people’s lips, origin unknown, fostering the belief that everything was invented in-house. As the commercial stakes mount, there is also a trend to firewall ideas from escaping and entering organizations, creating intellectual echo chambers. Steve Jobs proclaimed (and may have even believed), “We have invented a new technology called ‘multitouch’ which is phenomenal.” That said, I do suspect that many good HCI ideas are reinvented at this stage. I also suspect this reinvention wouldn’t happen in a vacuum, without that research having been done decades prior and floating in the ether since. As Buxton wryly noted, “There are no new ideas. Just refinements of old ones, iterating until some amorphous perfect storm wave sweeps them to overnight success!”

Figure 5. HCI Intellectual Impact Curve.

The growth phase is when a dominant design emerges [2] and products begin to ossify. In my view, this transition from growth to maturity is the most painful time for HCI innovators, as it feels like industry has reached escape velocity and no longer wants or needs outside ideas. What’s worse is that new and good ideas are often rejected, as products have to satisfy an existing customer base. This is the HCI innovator’s Trough of Disillusion (Figure 5): A product has reached peak success, yet we have little influence and get little credit. It isn’t until a product ages that the spigot of new ideas opens slightly, when companies have exhausted in-house ideas and begin to once again welcome outside ones to reinvigorate tired products.

Returning to my earlier examples, we saw this effect in desktop GUIs, which, after a period of intense innovation and market growth in the 1980s and 90s, largely cemented around the same concepts: a desktop, overlapping windows, icons, hierarchical file systems, a cursor, and so on. It does not matter if you run macOS, Windows, or some flavor of Linux—they are all basically the same thing in different skins. Likewise with smartphone interfaces: grids of app icons, a shelf with favorites, full-screen apps, a notification dropdown, and app-centric file organization. Of course, this hasn’t stopped the HCI community (including me) from cranking out hundreds of papers a year on refinements and extensions to desktop and mobile GUIs. This is fine, and even good research, but we should be honest with ourselves about the potential for impact at this point in these categories’ lifecycles.

When companies have tried to innovate mature user experiences, it tends to go poorly. Perhaps the canonical example of a mature product with a large user base is Windows. Microsoft launched a dramatically redesigned interface with Windows 8, which led to so much customer consternation that Microsoft had to revert the design in subsequent versions to a more classic desktop experience. Just this year, Snapchat, with its hundreds of millions of users, had to roll back a substantive redesign after its user base revolted. While you may have strong opinions on these interfaces in hindsight, I can assure you that each was designed and vetted by hundreds of experts before release. No doubt these designs were good in some ways, but they were also different, and ordinary customers (reminder: you are not the user) don’t like different once they’ve integrated a product into their lives and businesses. Change is hard, then, which is why the HCI research community is right to lament its inability to get HCI innovations into products. It’s true: If it can be called a product, it is almost certainly too late. The window of opportunity is before it is a product, and probably before most people think it can be a product.

Thus, we’re faced with a dilemma as a community: When ideas have real users and real value, our pitches for HCI innovations tend to fall on deaf ears. I do not mean to say it is impossible, just very challenging. Even if you are well positioned in industry, I’m sure you would agree that bringing new features, let alone new products, to market is a huge battle. On the flip side, when HCI innovators invest effort early, before products and markets exist, the work can feel speculative and decoupled from real-world problems. I’ve certainly felt this in my own research—why again do people want an unwieldy computer strapped to their shoulder projecting onto their arms when a smartphone is so much more practical?

The Dilemma Zone

The Technology Lifecycle S-Curve is limited in that it considers only one generation of innovation, but technology and society are constantly reinventing themselves, so the progress of technologies and their markets is very much a series of S-curves. Innovation enables new products while killing old ones. Think Blockbuster to Netflix, CDs to streaming, or taxis to ride sharing. It’s extremely difficult to keep a large, mature user base happy while also rapidly evolving a product. Instead, companies with mature products tend to focus on sustaining innovation—improvements that make their existing products better and their customers happy. Newcomers, with fewer expectations and smaller user bases, can disruptively innovate. This is the basic premise of Clayton Christensen’s Innovator’s Dilemma, articulated in the eponymous 1997 book [10]. In between two S-curves is a dilemma zone (Figure 6, left). If the disruptive innovation is led by another company, the mature company might fold. If the mature company is fortunate enough to be the disruptive innovator, it might tank its own successful product and market. Either is risky, hence the dilemma.

Figure 6. Christensen’s Innovator’s Dilemma [10] versus the HCI Innovator’s Dilemma.

As before, we need to consider the perspective behind Christensen’s graph. His innovator is a business, and the disruption happens to that business. The dilemma zone is a business-centric view of opportunity and risk. I think the opposite relationship is true for HCI innovators: The zone between the two S-curves is an HCI innovator’s opportunity zone, where intellectual impact peaks, and everywhere else is our dilemma zone, where we’re out of cycle with the hype and growth of products (Figure 6, right).

Disruptive Opportunities

If we want to maximize our chances for intellectual impact, we need to strike fast as disruptive opportunities emerge. Sometimes the HCI community can make its own luck, as we did with things like desktop GUIs and multitouch, but more reliable is to be on the lookout for opportunities enabled by other fields and businesses. We did not invent the Internet, email, social media, Wikipedia, or MMORPGs, but boy did we study them to death.

One good signal I use is to look for new computing modalities that are enabled through a confluence of technological improvements. We have seen this most recently with smartwatches and VR/AR headsets, which became commercially practical only in the past decade. Both are now in their late embryonic or early growth phases, which is reflected in their interface design. Unlike desktop and smartphone GUIs, there are few commonalities across different offerings. Smartwatches have screens big and small, square and round, full color and grayscale; some have accessory mechanical controls, while others are touch only. It’s the Wild West, with no dominant design having won out… yet. The same is true in VR/AR, with Sony, Oculus, HTC, Microsoft, and others each having a different take on how to interact in virtual environments; there’s a wide array of gloves, controllers, hand gestures, voice commands, and heads-up interfaces battling it out for design supremacy. In both cases, there is still time to get innovations into product, and there is a large HCI community doing so. However, if history is any guide, our window of influence will wane in the next five or so years, as products go mass market.

There are also a lot of nascent computing modalities that deserve our attention and innovation. For instance, the Internet of Things is still amorphous and needs our help in shaping it. The fact that I need to pull out a smartphone and swipe through pages of apps to change the brightness of lights in my living room shows how far we need to go. So what is the interface for a smart home going to be? I have no idea, and that’s great! There’s a lot of research to be done. In my own lab, we’ve been working on an “infobulb” that combines a computer, projector, and depth camera into one self-contained device that screws into standard light fixtures (Figure 7, left). This lightbulb 2.0 allows your kitchen countertop or office desk to become an expansive multitouch interface [11], offering rich communication, creation, and information retrieval capabilities (Figure 7, right). Such a platform opens a host of questions, from how projected interfaces should gracefully cohabit surfaces with physical objects, to what I would want in the app store for my house. We’ve been working on this topic since 2012 [12], building on research from the 1990s [13,14,15]. We’re now at that two-decade mark, and there are commercial rumblings that I wish I could tell you more about.

Figure 7. Proof-of-concept “infobulb” (left) able to project multitouch interfaces onto everyday surfaces (right).

Conclusion

I want to leave you with one final chart: the Hype Cycle, created by Gartner [16], which I’ve populated with HCI areas from a 10,000-foot perspective (Figure 8). Some people hate this chart, but I find it to be a useful visual shorthand to see where things lie, and where HCI innovation can have the most influence. I encourage readers to plot their own areas of interest on this chart (or email me angry messages if I put your field somewhere awkward).

Figure 8. HCI topics plotted along a Hype Cycle (opinions the author’s), adapted from [16].

The astute reader might notice that this hype cycle curve is not all that dissimilar from the HCI Intellectual Impact Curve I proposed earlier. Indeed, they track fairly well, except for one key difference: In the earliest stages, external expectations are low, but HCI innovation is highest. This mismatch often means the HCI community is the biggest champion for a technology vision and in no small part contributes to the eventual peak of inflated expectations. Getting people excited about technology and drawing them into our human-centered vision of the future is an enviable position, and so I have no qualms about being off-cycle from the business innovator’s dilemma. Yes, it does mean that sometimes our research and innovations get labeled as esoteric and impractical, which has significant implications for taxpayer-funded science and education, as well as for the sustainability of industrial research labs. But it also means we get to shape the narrative and cast light on important human-computer problems, hopefully with broader impacts in mind [17]. This is a great responsibility that we should exercise and celebrate as a community. So let’s pull out those Post-its and markers and get to work.

References

1. Rogers, E.M. Diffusion of Innovations (1st ed.). Free Press of Glencoe, New York, 1962.

2. Utterback, J.M. and Abernathy, W.J. A dynamic model of product and process innovation. Omega 3, 6 (1975), 639–656.

3. Qeexo, Inc.; http://www.qeexo.com

4. Myers, B.A. A brief history of human computer interaction technology. Interactions 5, 2 (Mar. 1998), 44–54.

5. Harrison, C., Tan, D., and Morris, D. Skinput: Appropriating the body as an input surface. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2010, 453–462. DOI: https://doi.org/10.1145/1753326.1753394

6. Xiao, R., Cao, T., Guo, N., Zhuo, J.J., Zhang, Y., and Harrison, C. LumiWatch: On-arm projected graphics and touch input. Proc. of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2018, Paper 95; https://doi.org/10.1145/3173574.3173669

7. Buxton, B. The long nose of innovation. Business Week. Jan. 2, 2008 (Rev. May 30, 2014).

8. Engelbart, D.C. and English, W.K. A research center for augmenting human intellect. Proc. of the December 9–11, 1968, Fall Joint Computer Conference, AFIPS '68 Fall. ACM, New York, 1968, 395–410. DOI: http://dx.doi.org/10.1145/1476589.1476645

9. O’Connell, K. The untold history of multitouch. The Link. Summer 2017 issue. https://www.cs.cmu.edu/sites/default/files/TheLink_Summer2017.pdf

10. Christensen, C.M. The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Harvard Business Review Press, 1997.

11. Xiao, R., Hudson, S.E., and Harrison, C. Supporting responsive cohabitation between virtual interfaces and physical objects on everyday surfaces. Proc. of the 9th ACM SIGCHI Symposium on Engineering Interactive Computing Systems. ACM, New York, 2017, Article 11.

12. Xiao, R., Harrison, C., and Hudson, S.E. WorldKit: Rapid and easy creation of ad-hoc interactive applications on everyday surfaces. Proc. of the 31st Annual SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2013, 879–888.

13. Raskar, R., Welch, G., Cutts, M., Lake, A., Stesin, L., and Fuchs, H. The office of the future: A unified approach to image-based modeling and spatially immersive displays. Proc. of the 25th Annual Conference on Computer Graphics and Interactive Techniques. ACM, New York, 1998, 179–188; http://doi.acm.org/10.1145/280814.280861

14. Underkoffler, J., Ullmer, B., and Ishii, H. Emancipated pixels: Real-world graphics in the luminous room. Proc. of the 26th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley Publishing Co., New York, 1999, 385–392; http://dx.doi.org/10.1145/311535.311593

15. Wellner, P. The DigitalDesk calculator: Tangible manipulation on a desk top display. Proc. of the 4th Annual ACM Symposium on User Interface Software and Technology. ACM, New York, 1991, 27–33; http://doi.acm.org/10.1145/120782.120785

16. Gartner Hype Cycle; https://www.gartner.com/technology/research/methodologies/hype-cycle.jsp

17. Hecht, B., Wilcox, L., Bigham, J.P., Schöning, J., Hoque, E., Ernst, J., Bisk, Y., De Russis, L., Yarosh, L., Anjum, B., Contractor, D., and Wu, C. It’s time to do something: Mitigating the negative impacts of computing through a change to the peer review process. ACM Future of Computing Blog. Mar. 29, 2018; https://acm-fca.org/2018/03/29/negativeimpacts/

Author

Chris Harrison is the Habermann Chair and an assistant professor of human-computer interaction at Carnegie Mellon University, directing the Future Interfaces Group (www.figlab.com). His lab broadly investigates novel sensing and interactive technologies, especially those that empower people to interact with small computing devices in big ways. chris.harrison@cs.cmu.edu

Footnotes

www.chrisharrison.net


Copyright held by author. Publication rights licensed to ACM.

