In his seminal book The Psychology of Everyday Things, Donald Norman outlined a world of things around us that are poorly designed because their designers did not apply psychology in the design process. The idea that psychologists can answer questions about design, through a user-centered design process, is a thesis that has guided our field for several decades. However, if we examine what the world’s top industrial designers, such as Yves Béhar, Jonathan Ive, Karim Rashid, and Philippe Starck, actually do, it becomes clear they work quite differently. To them, thinking about function is like thinking about three-dimensional form. While this may equate to designing for Gibsonian affordance, arriving at a shape that works is not a scientifically driven process. Many top designers follow a more intuitive process that, at least at the conceptual stage, involves many pencil sketches of potential shapes and materials. Some time ago, interaction designers were guided in what may, in the near future, turn out to be the wrong direction. We were taught that if we followed a well-defined user-centered design process, this would provide a near-guarantee of a successful product.
But the art of user interface design is about to undergo revolutionary changes, with a renewed focus on the material, three-dimensional shape of things. We are at the dawn of a new age: display materials with unprecedented capabilities in terms of three-dimensional form. Flexible Organic Light Emitting Diodes (FOLEDs) and Flexible Electrophoretic Ink (E Ink) present a third revolution in display technologies that will greatly alter the way computer interfaces are designed. Instead of being constrained to the flat (or rather, somewhat rounded) surfaces of the Cathode Ray Tube (CRT), or the truly flat surfaces of the Liquid Crystal Displays (LCDs) in Dynabooks and tablet PCs, we will have the ability to shrink-wrap displays around any three-dimensional object, and thus, potentially, every everyday thing. Note that this does not equate to virtual or augmented reality, in that these objects will truly live in the real world: You will order your morning coffee through a display on the skin of your beverage container, your newspaper will be made out of flexible digital E Ink, and your car dashboard will consist of a large curved display that shows an image of the road ahead, as if there were no car body.
These interfaces will have a great number of interesting shapes that will be mostly three-dimensional, like organic life forms found in nature. This is where the term Organic User Interfaces (OUIs) originates: the idea that user interfaces are no longer constrained to (x,y) input coordinates on flat surfaces. OUIs are computer interfaces that use non-flat, optionally flexible display technologies. These interfaces optimize the expression of the sensory and motor capacities of the entire human body, in three dimensions. Note that OUIs differ from Tangible User Interfaces in that their surface is always coated with a high-resolution display. The capacity for OUIs to mix with real-world objects, as Pierre Wellner suggested 20 years ago with his Digital Desk, will be much greater than that of Graphical User Interfaces with flat-panel displays. This means OUI software will need to be designed using processes similar to those currently used for designing regular everyday things. Your morning newspaper, your car, your dresses, or your kitchen plates will perhaps retain their everyday functionality, but they will all be augmented with a seamless, interactive, full-color display skin in the future.
It comes as no surprise, then, that interaction designers trained for software design may not be the most suitable designers for this new form of interface. These “computational things” will need to be designed by artists who understand three-dimensional form as well as the nature of materials. Just as Web design, once performed by people who understood the technology, was gradually taken over by graphic designers, these computational things will need to be designed by industrial designers if they are truly to become everyday. This does not mean interaction designers will lose their jobs. On the contrary, our success as a field may be greater than ever. Markets will simply expand into areas never touched by interaction design, with computational things possibly embodying the majority of computer interfaces by 2030. Interaction design will thus be everywhere. When computers become so ubiquitous that they are no longer considered technologies, we will have reached a major milestone. They will just be everyday computational things.
What will these everyday computational things look like, and how will they act? One example is a digital paper computer recently designed out of Flexible E Ink at the Human Media Lab (see Figure 1). Rather than relying on a single display that limits our awareness of the world around us, and its relationship to the digital world, we will have many of these thin-film displays. The practicality of having many displays is that they are sufficiently thin to be tossed around a desk like real paper, and carried the same way, in stacks. Software windows will literally become hardware displays, with all of the navigational properties of software and hardware. If you want a window to pop up through the stack, you could shuffle it to the top of the stack like regular paper, or you could simply touch its title bar, with the information migrating to the top display automatically. At the same time, the information in these physical windows will have some of the excellent properties of paper, such as a low energy footprint, haptics, and natural physical behaviors. These paper computers will be incredibly portable and foldable, allowing screen real estate to adjust with the task, as well as the shape of the body when pocketed. You will page forward by simply bending the display with the thumb, as one navigates the pages of a book, but one would never actually have to turn a page, as that is a physical limitation of the book that we need not copy.
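The window-management behavior described above can be sketched in a few lines. The following is a purely illustrative model, not code from the Human Media Lab's system; the class and method names (`DisplayStack`, `touch_title_bar`) are invented for this sketch:

```python
# Sketch of "physical windows": a stack of thin-film displays where
# touching a document's title bar migrates its content to the topmost
# display, instead of physically reshuffling the paper-like screens.

class DisplayStack:
    def __init__(self, documents):
        # documents[0] is the content shown on the top physical display.
        self.documents = list(documents)

    def touch_title_bar(self, doc):
        # The touched document's information migrates to the top display
        # automatically; the physical stack of displays never moves.
        self.documents.remove(doc)
        self.documents.insert(0, doc)

    def top(self):
        return self.documents[0]

stack = DisplayStack(["news", "email", "map"])
stack.touch_title_bar("map")
print(stack.top())  # "map" now occupies the top display
```

The design point is that navigation works both ways: the user can reorder the physical stack by hand, like paper, or let software reroute content across displays, like windows.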
A second example is that of DisplayObjects: regular everyday products coated with curved or freeform interactive display skins, like the Dynacan (see Figure 2). This pop-can computer has a curved interactive display. Its projected touch skin, made out of easily reusable photons in this embodiment, demonstrates one of the most important design challenges with everyday computational objects: designing display skins that are reusable. We must avoid a world in which FOLEDs are used as disposable, or even merely recycled, wrappers on products. In a world of carbon credits, the usability of computational things will have to equate with stimulating the reusability of products. When flexible display interfaces are integrated into sufficiently compelling reusable form factors, users may be better motivated to keep these reusable computational things with them, thus potentially reducing the negative environmental impact of their design.
The iPod nano design that Apple recently introduced is beginning to approach the idea of products with a skin that is made out of touch screens. This comes very close to work by David Merrill on Siftables, tiny computers that are mostly just touch displays. Another potential environmental impact of products wrapped with high-resolution touch screens, then, is that users no longer need to upgrade their hardware just to alter its look and feel. This may save precious environmental resources at various levels of the manufacturing process, but it will require a new business model, one that is based on selling a new hardware look and feel through a software application. We are now beginning to see this model emerge with GPS/car-navigation companies, such as TomTom, selling smartphone apps rather than hardware “bricks.” Since most smartphones now more or less utilize the same brick form factor, whether one chooses to run a BlackBerry or Android platform could be a mere software option.
So the brick form factor is just that: a hardware form factor that may be utilized to embody certain software functions to which it is suited. There are, however, more interesting examples that involve more intricate shapes. The Industrial Design group at the Technical University of Eindhoven has been making great progress toward light sculptures that respond to touch (see Figure 3). The entire skin of these light appliances is sculpted out of high-powered LEDs. The light source is directed toward a certain area, and thus a certain function, through touching its LED skin and dragging the illuminated LED “pixels” with the hand.
A slightly more advanced, and perhaps more futuristic, example in this category is that of toys with skins that are made out of high-resolution FOLEDs. By seamlessly integrating into the body of the toy, curved displays allow certain features of the toy, like the face, to be animated in software. This not only allows for more advanced storytelling or interactive features, but also improves the “keepability” of the toy. When they get bored with the look and feel, children may personalize the appearance of their physical Barbie in a way they already do in software like “The Sims”. There are many other real-world physical products that could improve their (re)usability in some way by including an organically shaped touch display in an environmentally intelligent way.
One thing that is evident, however, is that the interfaces that run on these embedded displays will not be designed for generic computing activities. Instead, their functionality will be focused on a few functions associated with their hardware form factor. Bill Buxton used to refer to the difference between our current smartphone functionalities and those of a truly ubiquitous computer as akin to the difference between a Swiss Army knife, a generic tool that is poor at everything, and a steak knife, which excels at only one thing: cutting. OUIs will take this distinction to new levels by hypercontextualizing their interface. A Barbie doll with interactive FOLED skin may not provide an appropriate interface for browsing the Web, and therefore should not be designed to do so.
A FOLED integrated into a cup of lemonade would perhaps be able to answer questions pertaining to lemons or lemonade but not coffee. It might provide information about ingredients, such as where they came from, their nutritional value, and the like. It would perhaps throw in a lemonade-stand game, but it would never provide any information about coffee beans, or at least not until the cup was reused to serve coffee. A credit card with a thin-film display would know everything about the balance on your card, or perhaps its security features, but it would not provide you with recipes. It might throw in a subway map on the back for good measure, but its functionality would be limited largely to supporting mobile financial transactions.
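The contextual gating behind these examples can be made concrete with a small sketch. This is a hypothetical model of hypercontextualization, not any real OUI toolkit; all names (`CupSkin`, `CONTEXT_FUNCTIONS`) and the function lists are invented for illustration:

```python
# Hypercontextualization sketch: the display skin offers only the
# functions relevant to what the object currently holds. Reusing the
# object for new contents swaps in a new function set.

CONTEXT_FUNCTIONS = {
    "lemonade": ["ingredient origins", "nutritional value",
                 "lemonade-stand game"],
    "coffee": ["bean origins", "roast profile"],
}

class CupSkin:
    def __init__(self, contents):
        self.contents = contents

    def refill(self, new_contents):
        # Reuse changes the context, and therefore the interface.
        self.contents = new_contents

    def available_functions(self):
        # Functions outside the current context are never offered at all.
        return CONTEXT_FUNCTIONS.get(self.contents, [])

cup = CupSkin("lemonade")
print(cup.available_functions())  # lemonade functions only, nothing about coffee
cup.refill("coffee")
print(cup.available_functions())  # coffee functions appear only after reuse
```

The point of the sketch is that hiding complexity is structural, not cosmetic: the unoffered functions do not exist in the interface at all, rather than being buried in a menu.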
The best thing that could happen to user interface design, then, is for computers to stop being technological devices and become just like real everyday things, including some limitation of functionality. Their hypercontextualized OUIs will feature the same kind of printed skins that we find on products today, but with interactivity. This also means the same people are likely to design them. A first step in this direction is seen in the emergence of apps on the iPhone and other smartphone platforms, which serve to focus their interface on a single task. From the point of view of interface design, hypercontextualization hides complexity from the user, ensuring each computational thing excels at only one or perhaps two functions at a time. This will greatly simplify the design of the user interface, to the point where it may not be necessary to follow our bloated user-centered design processes anymore.
One consequence of Moore’s Law is that it has delivered a world in which technology and interactive displays have become a mere commodity to industrial designers: computational materials, not unlike plastics or wood. Such a world is perfectly well understood by industrial designers. When user interface technologies become so integrated into everyday products that their interface truly disappears, the art of interface design will become truly ubiquitous. This may mean we will lose the initiative and the prerogative on the interface design front. Rather than cling to old mantras, let us embrace that wave of success in which interaction design is going to be in everything. Let us, for example, learn how we might effectively sketch designs using computational materials.
For this, it is time to start a real dialogue with industrial designers: one that allows us to better understand how we might design this brave new world of everyday computational things.
5. Akaoka, E., Ginn, T., and Vertegaal, R. DisplayObjects: Prototyping functional physical interfaces on 3-D Styrofoam, paper or cardboard models. Proc. of TEI’10 (Cambridge, MA, Jan. 25-27). ACM, New York, 2010, 49-56.
9. Toering, E., Man, P., De Jong, F., Overbeeke, K., Hummels, C., Ross, P., and Kirsh, D. Dance Rail: An interactive installation that provokes aesthetic movement. Proc. of DESFORM’09 (Taipei, Taiwan, Oct. 26-29). 2009.
Roel Vertegaal is an associate professor of human-computer interaction at Queen’s University in Canada, where he directs the Human Media Laboratory (www.humanmedialab.org).
Figure 2. Dynacan is an interactive curved computer on a pop can. To underscore the issues with recycling such computational objects, its skin was made out of projected reusable photons, rather than FOLEDs.
©2011 ACM 1072-5220/11/0100 $10.00