
Implicit interaction design


Author: Mikael Wiberg
Posted: Wed, February 19, 2014 - 10:41:19

I guess it's not an overstatement to say that 99.9 percent of all interaction design projects still end up as screen-based solutions in one form or another. Most of these solutions are still built around the desktop computer as a model for how we should interact with computers. This model assumes that we interact with a computer via a screen and some explicit input peripherals (for instance, a keyboard, a mouse, or a touch screen). Despite all recent claims that we're already in the third wave of HCI, in which the promise of ubiquitous computing ("interaction anytime, anywhere, and in any form") is fulfilled, we still force a desktop computing model on just about any interaction design project.

Let me take a simple example to illustrate "explicit interaction" and how the desktop model of interaction is introduced over and over again in new contexts. At the gas station close to where I live, they recently introduced new "modern" gas pumps. These pumps include not only a credit card reader but also a touch screen, so I can select whether I want to pay at the machine or inside the station. It also lets me enter the PIN code for my credit card and choose whether I want a receipt. One small problem with this particular solution, given where I live (in the northern part of Sweden), is that we have a long and quite cold winter here, and the touch screen does not work if I keep my gloves on. However, that is actually not the real problem. In fact, I would argue it is just an effect of the explicit "desktop computer" interaction model chosen in the first place. You see, although I do not drive to the gas station with the explicit goal of interacting with a computer, that is still the situation I face when I get there. The thing is, and I do not think this comes as a surprise, I want to fill my car with gas. That's my goal. But somehow, when going there, I find myself in front of a quite ordinary computer (screen, keypad, turn-taking between user and machine, and so on), and although I have this clear goal related to my car, it is the computer I need to adjust myself to (take off my gloves, insert my credit card, enter my PIN code, etc.).

My point is that this is not a rare case. Although this new gas pump could be understood as yet another example of ubiquitous computing (a computer built into the gas pump), I see it more as yet another example of how "explicit interaction" is introduced into yet another use context. When I say explicit, I mean that in order to use the gas pump I first need to explicitly interact with the computer built into it.

So, where can we go from here if we do think there are alternatives to screen-based interaction? If we do think ubiquitous computing is a good idea? And if we do think third-wave HCI can offer alternative ways of introducing computing into our everyday lives? Well, I think we need to look for other, fundamentally different interaction models and question whether the desktop computing model, including its visual UIs, is really the only possible approach for every interaction design project.

Let me offer you a first alternative. My example above illustrates how desktop computing is introduced in the context of gas stations, but more fundamentally it illustrates, once again (we can see this in so many contexts right now), how we assume that every interaction design project needs to end in a solution in which the user explicitly uses a computer in one form or another. By explicit I mean that the user, whatever the main activity they want to carry out (for instance, filling up their car with gas), is still forced into human-computer interaction as a turn-taking activity: looking at a screen, typing on some input peripheral, looking at the screen again, and so on.

But what if we consider alternative models for interaction design? What if we design interaction without the false necessity of introducing a screen and a keyboard as design elements in every solution? What if, for instance, I could just go to a gas station where a camera reads my license plate (OCR), checks the number against a cloud-based service to see if I am a member, whether I have a valid credit card on file, and so on? Then the only thing I need to do at the gas station is fill my car with gas. Payment happens automatically, as simply as when I download a new app to my phone. Interaction with the system happens while I do the things I really want to do. The interaction design is aligned with my core activities rather than being a separate, explicit session with a computer. This is interaction design from the viewpoint of the implicit. Sometimes I think about alternative solutions like this in terms of scripts and services, even under the notion of "scripted materialities," and at other times simply as "implicit interaction" design.
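To make the shape of such a solution concrete, here is a minimal Python sketch of that implicit flow. Everything in it is an illustrative assumption: the plate format, the in-memory MEMBERS dictionary standing in for the cloud-based service, and the function names are all hypothetical, and a real pump would of course run ANPR/OCR software on camera images rather than take a string.

    import re
    from typing import Optional

    # Hypothetical in-memory stand-in for the cloud-based membership
    # service mentioned above; all names and data are illustrative.
    MEMBERS = {
        "ABC123": {"card_valid": True},
    }

    PLATE_PATTERN = re.compile(r"^[A-Z]{3}\d{3}$")  # simplified Swedish-style plate

    def recognize_plate(camera_frame: str) -> Optional[str]:
        # Stand-in for the OCR step: a real system would recognize
        # the plate in a camera image to produce this string.
        plate = camera_frame.strip().upper()
        return plate if PLATE_PATTERN.match(plate) else None

    def authorized(plate: str) -> bool:
        # Check the plate against the (simulated) cloud service:
        # is this a member with a valid credit card on file?
        member = MEMBERS.get(plate)
        return bool(member and member["card_valid"])

    def pump_session(camera_frame: str) -> None:
        plate = recognize_plate(camera_frame)
        if plate and authorized(plate):
            print(plate + ": pump unlocked; fill up, payment is handled automatically")
        else:
            print("Not recognized; fall back to explicit payment at the terminal")

    pump_session("abc123")  # a member drives up: no screen, no gloves off
    pump_session("XYZ999")  # an unknown car: graceful fallback to the old model

The point of the sketch is the control flow, not the code: the only explicit interaction left is the fallback case, where the old desktop-style terminal takes over.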

Implicit interaction design foregrounds the human while putting the technology in the background. It is not about decreasing the value of interaction, but about creating solutions in which interaction is not something extra you first need to do with a computer before you can move on to what you really want to do. I view implicit interaction design as a promising approach for fully entangling interactive systems with our everyday activities! "Doing computing while doing the things you truly want!"




Mikael Wiberg

Mikael Wiberg is a full professor in informatics at Umeå University, Sweden. Wiberg's main work is within the areas of interactivity, mobility, materiality, and architecture. He is a co-editor in chief of ACM Interactions, and his most recently published book is The Materiality of Interaction: Notes on the Materials of Interaction Design (MIT Press, 2018). [email protected]