Authors: Mikael Wiberg
Posted: Wed, March 13, 2013 - 10:51:34
The landscape of available interaction technologies and interfaces is growing at an amazing pace. Almost every day we see new interactive technologies launched on the market, and almost every day we see new examples of HCI that at first glance seem to have removed any trace of a traditional computer. No longer is interaction restrained to the act of sitting in front of a computer. No longer are peripherals a necessity for interacting with computers when gestures, our bodies, our eyes, our skin, our position, or even our fingertips can do the job for us. Today computing can be done without the traditional screen, mouse, and keyboard setup.
In this blog post I suggest that while this technological development is indeed amazing, it also challenges us on a more conceptual level to rethink and maybe ultimately change our understanding and framing of the notion of “interface.” As I will argue, we might need to reexamine this notion’s focus on interfaces as pointing to the visual “face” of the computer, and it might actually be more fruitful to think of it as a verb, “to interface,” i.e., to think about interaction as a practice in which we interface with the computer through a wide range of modalities. This at a time when the interfaces we have become used to increasingly seem to disappear—visually and physically.
An example
To exemplify what I am saying we can look at the Nintendo Wii. When this game console was launched it redefined what video game interaction was all about. Instead of playing video games mediated via a game controller through which the movement of the user's thumbs represented movement of the player's body/avatar in the game, the Wii introduced a new game peripheral that enabled the user to engage on a bodily level in the game—e.g., to play tennis you actually have to swing the arm holding this peripheral in your hand. The Wii was not only a new kind of video game, but it was also a game changer for how we conceptualized interaction with the interface of video games. The Kinect was a second game changer in this area. The Kinect platform again helped us to reimagine the interaction with the video game and accordingly reimagine the form of the interface. On a concrete level the Kinect completely removed the game controller as part of the interface. Instead, the user’s body was suggested to be the game controller and, accordingly, bodily gestures, movements, and position were introduced as the interaction modality/language for a conversation with the game machine. Visually and physically, the game controller disappeared. Still, if the user knew just how to act in front of the system, the interface was still there.
Interfaces as obvious vs. magic
For anyone familiar with these video games and how they work, the whole thing is kind of obvious. However, for anyone new to these reconceptualizations of the interface, it is tricky to understand where the interface is, how it works, and how to operate it. In order to get it to work the user needs to subscribe to how it works, to play along, and to interface with the modalities it supports. It is as such less about “facing the interface” than about trying it out and getting a sense of what works and what doesn’t. It’s about interfacing with the machine. As interfaces visually dissolve, this act becomes tricky and heavily reliant on the user’s previous experiences of similar interaction modalities.
The replacement of the game controller with our bodies serves as a good example of the disappearing interface. I would like to argue that this is a development only in its early stages. From my perspective it is likely to continue. Soon interaction with computers will be more about understanding that there is an interface around and knowing how to operate it than about actually seeing its face, the face of the interface. It’s the old usability guideline of “recognition rather than recall” in reverse. It’s “the matrix” for users who know how to operate the interfaces around them, and it’s like magic for people who experience them without knowing exactly how they work.
As we construct more complex, integrated, and advanced interaction landscapes [1], it is becoming increasingly hard to recognize just where the interfaces are in these landscapes. For instance, you snap your fingers in mid-air and somehow this action is connected to storing away the documents you have been working on today… or it is scripted to imply shifting to the next track played on your online audio device. The interfaces are becoming increasingly hard to see at a time when we’re simultaneously witnessing an increasing interest in visual cultural studies, in media studies, and in understanding computing and interaction through a material lens (see, e.g., [2]). This blog post is about this apparent paradox.
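To make the finger-snap example above a bit more concrete, here is a minimal sketch, in Python, of what such a scripted binding between an invisible gesture and an action might look like. The gesture names and handler functions are hypothetical placeholders, not a real gesture-recognition API; the point is only that the “interface” here is a mapping the user cannot see.

```python
# Hypothetical sketch: an invisible interface as a scripted gesture-to-action mapping.

def archive_todays_documents():
    print("Archiving today's working documents...")

def skip_to_next_track():
    print("Skipping to the next track...")

# The "interface" itself: which gesture has been scripted to mean which action.
bindings = {
    "finger_snap": archive_todays_documents,
    "swipe_right": skip_to_next_track,
}

def on_gesture(name):
    """Dispatch a recognized gesture to whatever it has been scripted to mean."""
    action = bindings.get(name)
    if action is None:
        print(f"No binding for gesture: {name}")
    else:
        action()

if __name__ == "__main__":
    # Simulate a recognizer reporting a mid-air finger snap.
    on_gesture("finger_snap")
```

For the user who knows the binding, snapping their fingers "just works"; for anyone else, nothing about the room reveals that this mapping exists.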
Interface aesthetics—three lenses
In our previous work [3] we have suggested that the development of interfaces has roughly followed three different aesthetics. In short, the development of the graphical user interface (GUI) is guided by an aesthetics of visual appearance; ubiquitous computing, on the other hand, follows a counter-aesthetics that is ultimately about making the computer disappear. Tangible computing, or tangible user interfaces (TUIs), might be the middle way forward, hiding the computer while manifesting computing physically in our world. No matter which aesthetics we subscribe to, all three still push a computer-oriented view (showing or hiding “the box”), and we have continued to think about “an interface” that could be visually apparent, hidden, or physically manifested, but still representing the face of the computational box. And of course, this box-focused view of computing, with interfaces as the visual appearance of the box and physical input/output peripherals for interacting with the box, makes sense at a time when visual culture and material understandings of our world are growing as perspectives. Accordingly, we find it natural to also conceptualize interaction and interfaces in a material vocabulary: interface implies an interFACE. That is, it implies something we can “walk up to” and use, and of course, it should be ready-made for use and easy to use. However, with the disappearing interface this is all about to change.
From interFACE to INTERface—a change of perspective
Interaction is becoming more and more abstract. As we leave the computational box behind and remove the traces of the computer in new arrangements of interactables, we end up in a situation in which interfaces are potentially everywhere. As I described in my first blog post, any object or activity is a potential interactable that can be brought into a computational composition. This enables us to shift perspective, from the ready-made interface to be used, to interfacing as an act of the user. In this, the user is actively involved in interfacing him/herself in acts of computing and is active in bringing interactables into computational compositions. In this act the user connects services, objects, and digital materials into computational things and into processes of interaction. In its simplest form, it might be the act of inserting or removing a USB stick—an act of connecting or disconnecting two computational materials—or it might be less visual and more abstract, such as using one’s position and steps taken as input to the computer when going for a run with Runkeeper running on a mobile device. It might be the active engagement of linking devices and services together, but it can also make us passive. The user is nowadays in control of interfacing any kind of interactable into an almost infinite number of compositions, and besides interactables and computing power the user has two additional tools at hand for making computational arrangements that allow for disengagement: scripts and timers. With these two tools at hand a user can, for instance, use an app on Facebook to script birthday greetings to his/her friends. A simple arrangement of a friends list, birth-date information, a script, and a timer enables this configuration; a minimal sketch of such an arrangement follows below. I like to think about such contemporary arrangements as “scripted materialities.” I think of interfacing not only as a technical possibility but also, as this example illustrates, as a new social practice in the making.
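The sketch below, in Python, illustrates the birthday-greeting example as a composition of a friends list, birth-date information, a script, and a timer. The data and the send_greeting function are hypothetical stand-ins, not a real Facebook API; the point is how little it takes to script an arrangement that acts while the user is disengaged.

```python
# Hypothetical sketch of a "scripted materiality": friends list + birth dates
# + a script + a timer, composed into an arrangement that acts on its own.
from datetime import date

friends = [
    {"name": "Alice", "birthday": (3, 13)},  # (month, day)
    {"name": "Bob", "birthday": (7, 1)},
]

def send_greeting(name):
    # Placeholder for posting a greeting via whatever service is scripted in.
    print(f"Happy birthday, {name}!")

def run_daily_check():
    """The script: greet every friend whose birthday is today."""
    today = (date.today().month, date.today().day)
    for friend in friends:
        if friend["birthday"] == today:
            send_greeting(friend["name"])

if __name__ == "__main__":
    # The "timer": in a real arrangement a scheduler (e.g., a daily cron job)
    # would invoke this check; here we simply run it once.
    run_daily_check()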
Shifting perspective—implications and reflections
As we shift perspective from interFACE to INTERface there are of course some implications. To wrap up this blog post I would like to point to three major implications.
First of all, “AN interface” objectifies the interface as an object. Already when we talk about it in terms of AN interface we assume that it is ONE interface and that it has a face that we can interact with. When the interface disappears it leaves us with a sense of confusion. However, if we shift to think about it as a verb—“to interface”—we should at the same time leave the idea of ONE interface and the implicit idea that this interface needs to be visual. Instead, interfacing is a process and a practice of constantly bringing interactables into composition.
A second implication is that we should leave our “session” orientation behind when we think about interaction with computers. The session orientation made sense when interaction meant interaction with one computational machine in front of the user. As we leave this setup behind, when note-taking on a piece of paper can be part of a later computational act, and when sessions are ongoing, everywhere, and merely computational potentials, the implications for HCI are huge. It leads us to think about HCI beyond a concern for the short turn-taking sessions between the user and the computer, to also account for how we interface activities, computational resources, and other materials over time.
A third and final implication of shifting perspective is that it challenges the distinction between design and use, and between the notions of designer and user. If interaction is not only about use, but simultaneously about interfacing as the creative act of bringing computational resources together, then design and use can no longer be held apart, either temporally or as two distinct professions. In this view the designer becomes the enabler of new interactables and the user becomes an everyday designer of his/her own interaction landscape.
In the end we see how this integrates well with HCI’s contemporary interest in craft and craftsmanship. If interaction is so much about interfacing as this act of assembling computational compositions, then joinery should be foregrounded here. Given the perspective presented here, our interest in craft should not be limited to the design and programming of new interfaces. Instead, craft and joinery might work as an approach to reconceptualizing interaction—to craft is to interface, and “to INTERface” is then this relational act of bringing pieces into formal relations. In a sense, it can be seen as an act of joinery. Joinery is, by definition, “the craft of a joiner—cabinetmaking,” and it includes types of carpentry, woodwork, and woodworking.
As we shift from objectifying interfaces as if they were the visual faces through which we can interact with computers, toward thinking along the lines of interfacing, interaction becomes an act of joinery. It might not be about building cabinets, although computing is still very much imagined as a box, and it might not be about woodwork, although we can certainly find examples of computational projects involving wood. It will be about interaction landscaping, and it will be about this craft of joinery with a range of materials at hand for footprinting any interaction we set out to imagine.
Endnotes
1. Wiberg, M. Landscapes, long tails and digital materialities—Implications for mobile HCI research. International Journal of Mobile Human Computer Interaction 4, 1 (2012).
2. Wiberg, M., Ishii, H., Rosner, D., Vallgårda, A., Dourish, P., Sundström, P., Kerridge, T., Rolston, M. Materiality matters—Experience materials. interactions 20, 2 (March + April 2013).
3. Wiberg, M. and Robles, E. Computational compositions: Aesthetics, materials, and interaction design. International Journal of Design 4, 2 (2010), 65-76.