Cover story

XIX.6 November + December 2012
Page: 40

On attention to surroundings


Author:
Malcolm McCullough


First, consider attention itself [1]. Bought and sold by websites, hoarded by overlords, and stolen by clever thieves, attention has become the coin of the realm. For attention becomes scarce as information becomes plentiful. Alas, that's a truth that is easy to know but difficult to remember. Everywhere people handle ever more information as if it didn't cost anything. So it is worth reciting the famous remark from visionary Herb Simon: "...[I]n an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: It consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention..." [2]. That quote is from 1971, right about the same time that futurist Alvin Toffler coined the term future shock—long before smartphones, long before billions of flat-panel displays, and long before advertising analytics began to use face-recognition software to measure how many people were looking. Today, in an age of ambient information, the shock of the new and its costs to attention have diversified considerably. There is more to attention than where you are looking. There are also surroundings.


Interaction designers know in the abstract that surroundings matter. Often they know quite specifically how the circumstances of engagement shape the flow of attention. They may know that when tangible circumstances become active components of mental processes, researchers call this embodied cognition. As more and more interactions move beyond the desktop into everyday life, more designers are dealing with this. Not all attention involves deliberation with symbols. There is also situational awareness.

Although you will probably never need such acute awareness as that of an athlete or a surgeon, you might nevertheless benefit from more mindfulness of situation, whether doing design work or just dealing with life. A sense of where you are and what is going on helps filter overload. It may assist and even sometimes restore the workings of attention. Many sites of life—kitchens, classrooms, boardrooms, laboratories, lobbies, sidewalk cafes, public plazas—are not only for particular activities but also about them. Sites put people into frames of mind, and often into particular spatial relationships as well, often at very carefully considered scale. So in a way, you could say that architecture and the city have always been technologies of attention.

Now there is reason to reconsider all of this. Doing so will alas take an outlay of attention. But like a highway-resurfacing project, that could be a temporary inconvenience for the sake of long-term improvement. Besides, it is well understood how the solution to too much information is often more (meta) information. Right now attention to surroundings could be worth your consideration. Right now, as the physical world fills with ever more kinds of digital media, in ever more contexts and formats, some workings of attention may be changing.

Of course, context matters more in an age of mobile, embedded, and tangible computing. Yet despite current obsessions with smartphones, mobility isn't everything; there are situated technologies, too. Layers of technology accumulate in the sites of everyday life, which they seldom replace but often transform. So, it is worth remembering that underneath today's rush to augment the city, fixed forms do persist, and noticing and working with them can improve other sensibilities.

An Inquiry into Attention

What is it like to make an amateur inquiry into attention? [3] If you're not a Zen master or a neuroscientist, how do you come away with meaningful results and not just more overload? In particular, how do you avoid neurobabble on how technology makes "us" think this way or that? What if you just want to know a bit more about attention in order to ground yourself in an age of superabundant and increasingly ambient information?

Superabundance is the word for it. You hardly meet anyone who wants to go back to life as it was before the Net. Let us rejoice: all the world's information at your fingertips, and no need to clutter your head with it! Well, information of a kind. And except when it gets in the way socially, or when corporations try to fence it off as their own. Abuses of superabundance do exist, and only time will tell which of those are correctable, much as it took 50 years to see what was wrong with car-based transportation. Incidentally, hindsight also seems helpful for remembering that people have often cursed at overload. Maybe always. For example, more than 400 years have passed since Erasmus, the first modern editor, famously lamented, "[I]s there no place to hide from this great flood of books?" [4] Today's debates on overload often begin with something like this. If you believe the old truism about the ability to keep no more than seven things in mind at once, then there must have been overload ever since there were eight.

The world itself has been both the cause and the cure of overload. People of any era have been fascinated, intimidated, irritated, or numbed by the world. Media may of course up the ante on all of this. Just about any stage in the history of information technology—such as newspapers or recorded music—has been accused of cutting people off from the world, and from each other. Yet the natural world has always saturated the senses, and even without technological assistance, the senses mediate the world. Indeed, if they ever stopped filtering, you would quickly go mad.

The counterargument has its merits. The human mind has always loved to wander, but it never had quite such exquisite means of doing so. Never before did so much experiential saturation come from artifice, with such purposeful design of interface. What has changed is how much of the world offers appeal and not menace, novelty and not tedium, immediacy and not heartbreaking distance. Never before has such a spectrum of the perceptual field been so deliberately placed, or so deliberately engineered for cognition.

An inquiry into attention thus has to question capacity. How does a brain that evolved for one set of stimuli deal with a world now made of quite another? Despite humanity's remarkable capacity for adaptive learning, how can the workings of attention ever adapt half as quickly as technology changes? How can attention adapt and change across the lifetime of an individual, a culture, or the species? For example, the case of food shows some lag. Humans have innate preferences to take on salt, sweets, and fat, since those were always rare in the past. Except now those are plentiful (at least for the luckiest billion humans) and people consume too much of them. Likewise with information, it becomes possible, even likely, to take on too much of what you want in search of what you actually need. There now exists an argument about information obesity, a state reached by relentless feeding on the equivalent of empty calories [5].

So while superabundance may be the best word overall, it seems better to talk of overconsumption than simply of overload. This puts a little more responsibility on the individual, so to speak. Superabundance makes it mandatory to know more about the workings of attention.

One usual debate on overconsumption concerns multitasking. OMG, is there anything humans do that has yet to be done while also texting? Is there any work people do that has yet to be done while also watching a movie? Misconceptions of multitasking must be costing somebody something. Of course, capacity varies. A soldier can walk and chew gum, but almost nobody can safely text and drive. At the proverbial cocktail party, you can monitor many conversations to choose how to move among them. But if two friends each speak to you simultaneously, especially if one speaks into each ear, you are going to miss something, if not everything, that each has said. No amount of practice seems to change that one.

In many cases, practice does help, of course. The question of habit seems central. To what extent does whatever you grow up with become normal? Self-described "digital natives" claim that this can be almost anything. If you grew up with technologies in a growing number of contexts and formats, they are just part of the world, and not so much a distraction as they would be to people who learned the world without all that stuff. Cognitive scientists agree that many complex brain pathways are emergent and do adapt, especially through habit. "Neuroplasticity" definitely exists.

Not all pathways adapt, however. The workings of attention involve distinctly fixed processes for allocating mental resources. Amid the hierarchy by which the brain assembles cognition, switching costs and bottlenecks inevitably occur, especially when executing tasks. Moreover, according to some famous studies, they may do so especially for people who take pleasure in switching [6]. In this regard, leading cognitive scientists contend that effective multitasking is no more than a myth. It may feel good, but switching costs a lot, as executive processes must queue and load [7]. Instead, a truer productivity might involve alertness, at some less describable level, within tasks whose perceptual frames one already occupies; it might consist of recognizing and engaging more features of context. For, of course, not all attention is deliberative.
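
To make the arithmetic of switching costs concrete, consider a toy back-of-the-envelope model. It is only an illustration and not anything from the studies cited: the per-switch penalty and slice times below are assumptions, and real executive costs vary by task and person.

    # Toy model: total time grows with every change of task, because
    # executive processes must re-load each task's context. The 0.3 s
    # penalty is an illustrative assumption, not a measured figure.
    SWITCH_PENALTY_S = 0.3

    def total_time(schedule, work_per_slice_s=1.0):
        """Sum working time plus a penalty for every change of task."""
        time, previous = 0.0, None
        for task in schedule:
            if previous is not None and task != previous:
                time += SWITCH_PENALTY_S  # queue-and-load cost
            time += work_per_slice_s
            previous = task
        return time

    batched = ["A"] * 10 + ["B"] * 10   # finish A, then B: 1 switch
    interleaved = ["A", "B"] * 10       # alternate: 19 switches
    print(round(total_time(batched), 1))      # 20.3
    print(round(total_time(interleaved), 1))  # 25.7

The work done is identical in both schedules; only the pattern of attention differs, and the interleaved schedule pays for it.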

Also easy to know but somehow difficult to remember: Not all attention is visual. For example, there are strong effects of interpersonal distance—stand a couple of inches closer to or farther from someone to see those at work. The ever-increasing use of media has made vision seem more dominant, however. (Who first said that the look and feel of technology is almost all look and almost no feel?) The jumpy nature of vision has made attention seem jumpy too. For example, one of the oldest metaphors in cognition is that of a spotlight. As vision keeps shifting, its selective focus does seem to illuminate. And the gaze does usually indicate where deliberative attention has been directed, finding things more quickly where it is expecting to find something. Because many such visual processes are relatively practical to study clinically—more so than situational awareness in the field, at least—early cognition literature may have had a bias toward them. But embodied frames of reference also matter. Vision alone does not explain how attention gets assembled, nor does it explain attention's aspects of orientation and habit, or the importance of context. More recent cognitive research thus goes beyond the spotlight metaphor, and beyond selective attention, to understand fuller roles of embodiment [8].

Form informs. To inhabit habituates. Interaction designers who have studied activity theory understand those wordplays well. Contingencies of form and context affect what can be done with them, and therefore how they are known. Perhaps people from every era have had thoughts about how the world seems manifest, and how life and especially work have assumed a particular form. But now some interaction designers spend all day at this. To them it seems axiomatic that the intrinsic structure of a situation shapes what happens there. Not all that informs has been encoded and sent. Not all action requires procedures and names. The mind is not just a disembodied linguistic processor: Neuroscience has had a paradigm shift toward embodied cognition. For an expression of that shift, philosopher and cognitive scientist Andy Clark observed in the 1990s: "In general, evolved creatures will neither store nor process information in costly ways when they can use the structure of the environment and their operations upon it as a convenient stand-in for the information-processing operations concerned." On the nature of engagement, Clark summarized: "[M]emory as pattern re-creation instead of data retrieval; problem solving as pattern completion and transformation; the environment as an active resource, and not just a domain problem; and the body as part of the computational loop, and not just an input device" [9]. This use of props and structures assists with the processes of externalization and internalization, which activity theorists have shown to be important to learning and tacit knowledge. Amid masterful, habitual, embodied actions, not only do technologies become more usable, but indeed attention may seem effortless [10].

So as an initial summary on attention itself, several common misconceptions seem easy enough to identify: Not all attention is visual or selective like a spotlight. Not all debates on attention concern multitasking. Overload has always existed, but overload isn't so much the problem as overconsumption. Superabundance is welcome, but it makes better attention practices more vital. Surroundings play a part in those practices. Not all attention is fragmented or paid; sometimes it just flows. Embodiment and orientation can be important components of attention. Not all attention involves thought. Not all interaction needs procedures and names. You don't have to be a yoga teacher to say all of this. For, as interaction designers know, affordances shape knowing. You can sense that a surface might work as a step or a table without it having been designed or declared as such.

Artifice as Environment

Next, beyond recalling those presumably familiar themes, what is it like to consider the workings of attention from the perspective of a larger environment? What is it like to go beyond the comfortable scope of the situated task into the messy inhabited world? What happens when form, resolution, location, touch, or environmental sensor data become components of making sense? As the world fills with new technologies in ever more contexts and formats, the terms of engagement vary more widely. It is worth reciting that wherever interactions become something to inhabit and not just to sit at, you can no longer be so sure who is a user.

Today's landscape of design possibilities is not just in your handheld; there are situated technologies, too. Computation increasingly becomes part of things not thought of as computers—for instance, parking meters, or even pavement [11]. Not every interaction is a portal to someplace else; many act in the here and now. Design sometimes creates persistent circumstances that people must live with, especially those that arrange people in space, and enable and so represent the institutions that occupy them. One word for that is architecture.

Despite its more specialized meanings among information scientists, that word usually refers to the built environment. There it has its own connotations about attention. Too often the main role of architecture has been assumed to be appearance: as signs and symbols, as if the city is nothing if not legible, and as if visual culture is nothing if not ironic. Thus, too often architects' approach to computation has been to find and fabricate novelties in form. That bias seems understandable in a world in which so many cultural media seek visual attention. But, alas, in the competition for eyeballs, buildings generally lose to faster media. Instead, their advantages come from persistent material embodiment. In the oft-cited paraphrase from master critic Walter Benjamin, architecture is experienced habitually, in the background.

As artifice in the background, architecture's benefits begin with physical comfort. Embodiment rightly belongs first among architecture's cognitive roles. As evidence for that primacy, traditional cultures without the benefit of high technology have tended to build comforts well, for instance, cool ceramic floors in a hot, dry climate like Morocco. Although comfort isn't exactly a form of attention, without it much finer attention gets disrupted. For some very basic overload, under which you soon notice less of anything else, wander out into the midday desert sun.

Like food or information (at least for the luckiest billion humans), built physical comfort is no longer so scarce. Modern technology has worked wonders to bring cooling and warming, and water, light, and power into unobtrusive, productive configurations. But as in so many other technological applications, this too has been taken too far, often at considerable cost to common sense. Much the way that in automobile-abiding burbs there is no place to walk, so in the glassed-in uniformity of 20th-century building there is no way to open the windows, too little contrast of light and shadow, and often nothing more interesting to look at than your screen. Much has been lost about the vividness of comfort: the warmth of a stone wall that has been in the sun; changing patterns of use over the course of a day or season; a ceiling worth contemplating, as if to look up in that manner might help other aspirations, too.

Perhaps ambient information can enable smarter, greener buildings. Perhaps networks of sensors, actuators, and microtransactions can restore variety and participation to inhabited comfort. Something has to change: Total uniformity turns out to be neither sustainable nor healthful. That has more implications for attention than you may first assume. For when a physical situation is one you would just as soon forget, you may just overconsume other media to compensate. Buildings thus illustrate a more general question for ambient information: Can you ever use technology to tune in to surroundings, or always just to tune them out?
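
What tuning in might mean in practice can be sketched in a few lines. Everything here is hypothetical (the article describes no particular system, and the sensor readings and comfort thresholds are assumptions), but it shows ambient information nudging occupants toward their surroundings rather than away from them:

    # Hypothetical sketch: an ambient display that surfaces outdoor
    # conditions instead of masking them. Thresholds are assumptions.
    def ambient_prompt(outdoor_temp_c, indoor_temp_c, wind_kmh):
        """Suggest engaging the environment when conditions allow."""
        pleasant = 18.0 <= outdoor_temp_c <= 26.0 and wind_kmh < 20.0
        if pleasant and outdoor_temp_c < indoor_temp_c:
            return "Pleasant outside: open the windows instead of cooling."
        if pleasant:
            return "Pleasant outside: consider stepping out."
        return None  # stay quiet; ambient media should not nag

    print(ambient_prompt(outdoor_temp_c=21.0, indoor_temp_c=24.0, wind_kmh=8.0))

Note the None branch: an interface meant to tune people in earns its keep partly by knowing when to stay silent.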

So far we have identified three main cognitive roles for built context. First is comfort, preferably of the kind in which ambient does not mean numbingly uniform. Second, visual significance does of course matter, and architects know their semiotics, but this is not everything. Third, form is apprehended as embodied affordance. As with objects, so with environments: Appropriate form contributes to usability. Without good form you get a lot of annoying features and instructions.

A bit more on form: What does it mean that "to inhabit is to habituate"? It is well understood that habitual use builds tacit mastery, and it is well worth contrasting that process with the use of explicit instructions amid casual, first-time competence. Fixed spatial arrangements help habituation. The more enduring the environment, the more it shapes expectations without saturating attention. When different people perceive affordances similarly, the identity of the environment is reinforced. As people learn from their settings, they come to associate them with particular states of intent. Organizational theorists use the term sensemaking to describe how people make use of context to cope with complexity and overload. Much of this context is of course social: Mindful workers cue off of one another to figure out what is going on and what they should be doing. But much sensemaking is physical, too. It discovers latently embodied affordances. As Karl Weick, a leading researcher of the sensemaking process, once observed, "[A]ction is always just a tiny bit ahead of cognition, meaning that we act our way into belated understanding" [12]. Because habitual contexts make new cues more salient sooner, it is important not to cover them over in too many new media and procedures.

Lastly, consider a fourth role of architecture, or at least a more specific one implied by these others: Physical situations help us interpret what communications are about. This is worth affirming now as pervasive media put communications back into life's scenes. Digital cities pioneer William Mitchell famously explained this design challenge as a matter of placement. To shout "Fire!" in a crowded theater means one thing, he wrote; to shout "Fire!" to a squadron of soldiers means another; to affix a "fire" label to a plumbing fixture indicates where to hook up a firehose. But "if I receive the text message 'fire' on my mobile phone, at some random moment, I can only respond with a puzzled 'huh?'" Many words refer to or take scope from the places in which they are exchanged. A dramatist would understand this as mise-en-scène: A script needs a setting; objects provide orientation. "The meaning of a local, spoken, synchronous message is a joint product of the words, the body language of the participants in the exchange, and the setting," Mitchell explained. "But the introduction of technologies for inscribing physical objects with text, and the associated practices of writing, distribution, and reading, created a new sort of urban information overlay. Literary theorists sometimes speak of text as if it were disembodied, but of course it isn't; it always shows up attached to particular physical objects, in particular spatial contexts, and those contexts—like the contexts of speech—furnish essential components of the meaning" [13].
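
Mitchell's point restates neatly in interaction-design terms: the same payload resolves to different meanings depending on the setting of its receipt. A hypothetical sketch (the mappings below are illustrative, not Mitchell's) makes the placement problem explicit:

    # Sketch of the placement problem: "fire" has no stable meaning
    # without the context of its receipt. Mappings are illustrative.
    INTERPRETATIONS = {
        ("fire", "crowded theater"): "emergency: evacuate",
        ("fire", "firing line"):     "command: shoot",
        ("fire", "standpipe label"): "equipment: hook up the hose here",
    }

    def interpret(message, context=None):
        """Resolve a message against the setting in which it arrives."""
        if context is None:
            return "huh?"  # the context-free text message
        return INTERPRETATIONS.get((message, context), "huh?")

    print(interpret("fire", "crowded theater"))  # emergency: evacuate
    print(interpret("fire"))                     # huh?

The design work, of course, lies in acquiring the context key honestly, which is exactly what situated technologies are for.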

At Street Level

As interaction designers help build new urban information overlays, that work drives new interest in the built environment. Beyond individual buildings, the city becomes the vital scale for rediscovery of surroundings. For, as ever, humanity is a political animal, and the city is the best device for realizing that nature. Look at the role of plazas in the Occupy movement, for instance. As an abstraction, urbanism unites many fields of creative and socially responsible work. Yet in the U.S., the city has long been unfamiliar and even suspect to many citizens, as it was to information technology's pioneers in their suburban office parks. Some of that bias began to change when dotcoms took up downtown warehouse digs in the 1990s, and it has accelerated as information technology has become a more worldwide endeavor. Urbanism now explodes into technological design thinking, as the time has come to build street-level apps and environmental big data for a rapidly urbanizing world.

Just 30 years ago, smart city meant fashionable dress [14]. Just 10 years ago, smart grid had yet to appear in the mainstream news media. Over the past decade, hundreds of aspiring labs have produced thousands of street-level applications, some as provocations for arts festivals such as ZeroOne and Ars Electronica, others quickly made a part of the everyday scene, such as Velib and CitiBike, urban-access apps that you might not think of as interaction design, but that might not have worked without it. So if the world is now filling with media in ever more contexts and formats, a lot of that is at street level. For though the human mind has always wandered, going for a walk has often helped it to do so. One of the most natural of human actions, walking does remarkable things with embodied cognition.

Since many Interactions readers live in the U.S., consider one other paradigm shift, also in urbanism. Instead of emphasizing mobility in cars when designing for the built environment, we are increasingly favoring access on foot. Here, the difference between mobility and access seems worth noting. You want to get somewhere, not just to move, and you want a wider demographic (i.e., not just car owners) to be able to do so too. To judge from construction-market data, North America has been turning away from its far-flung exurbs (with underwater McMansions and $4/gallon gas) back toward the sufficiencies of good city life. This isn't just nostalgia, and it isn't just about the historic zones that have long since been surrendered to tourists. Today in U.S. real estate, walkable neighborhoods are one sector where demand most surpasses supply [15].

Walkability is largely possible because the perceptual field has changed. When the city was choked with industrial pollution, it was something to flee. Imagine Pittsburgh then and now, or even New York. Urbanism invites such longer-term historic sensibility anyway. After all, the city is an amazing device for memory, for example, in traces of wear on successive layers of new technological infrastructure. Although for urbanists the word infrastructure has usually meant massive public works in concrete, there also exist situated communications infrastructures, and those often have an interesting history. Picture the mid-19th century, with widespread overconsumption of social calling cards, trade broadsides, and a huge variety of newspapers [16]. Earlier still, for those with a new interest in wayshowing and its history, consider the earliest introduction of street signs and guidebooks, in post-revolutionary Paris. Or for rich analogies to pervasive computing, it never hurts to take another look at early-20th-century electrification, which was a distinctly urban phenomenon.

The pertinence of this historian's argument for an inquiry into attention may not be obvious at first. But it ties to the big question of whether technology can ever help people tune in and not just out. For as the perceptual field shifts away from the soot, smoke, and clanking mechanical din of an industrial age, and toward a new layer of usability, interpretation, and perhaps even civic participation, the question of attention must somehow change. Self-respecting urbanists might remind you that various forms of media don't just describe urban space, but also help make it. In at least some sectors of the interaction design discipline, psychogeography has become a legitimate household word. Usability cannot be reduced to instruction. Landscapes of latent affordance build tacit knowing. Habitual casual participation is the stuff of good city life.


It is also the stuff of better attention skills. In this, something else has changed in urbanism. Throughout much of the 20th century, the mindful citizen sought mostly to tune the city out. Design schools still propagate the famous essay, "The Metropolis and Mental Life," which was written over a century ago by the pioneering sociologist Georg Simmel, amid the future shock of clanking industrial Berlin [17]. Simmel was the first to describe how technological overload led to a numbed, blasé attitude. Late-century cultural critics found Simmel's characterization of retreat into ironic personal sensibility very prescient. For, beyond the side effects of railroad-era industry, which could overwhelm just as powerfully as the desert sun, 20th-century media culture later served up totalitarian radio propaganda, mass television spectacle, and cognitively engineered inducements of consumption. Today, with half a century of hindsight, you can revisit the latter by watching Mad Men. No wonder people tuned out, and still do. And yet by the turn of the millennium, the soot and noise of industry had mostly receded in American cities, and people were moving back in. Broadcast monoculture had given way to a galaxy of smaller distractions. For a then-and-now instance, compare today's nonstop entertainments at a major league baseball park with half a century ago, when people mostly watched the game, and were glad for the empty moments it provided for reflecting on the week. Now people want to fill every second.

When you can bring your own distractions along, something much more immediate has changed. The appeal has become immense, and full-time use is now normal. (The novelist David Foster Wallace called it early: Jest would eventually become infinite.) When distraction engineering is the mainstream industry, then the new entrepreneurial opportunities—the counterbets against the market herd—may instead be about tuning in.

For Further Inquiry

Although this already might be enough to get you rethinking attention, consider two more ideas in parting, one of them social and the other individual. Each suggests further inquiry. These are more complex issues than just remembering to take mental-health breaks.

The social idea concerns persistently shared space. It is an open question: As embodied media accumulate in inhabitable situations, where people have no choice but to live with them, does that ever constitute a commons? Now, there is a topic more misunderstood than attention. Suffice it to say that a commons is neither a market nor a state but a necessary complement to each. Commons don't have to be tragic. Thinking about commons doesn't make you a socialist. Commons practices usually uphold various kinds of non-fiscal capital against the kinds of cultural collateral damage that the actions of markets and states tend to incur. (To go deeper on this, see Berkman fellow Lewis Hyde's masterful Common as Air.) For working examples, consider how cities and citizens find acceptable degrees of noise, light, and signage. Lighting is one of those happy topics where better design lets you do more with less. Noise is now recognized as pollution; Chicago impounds "boom cars." Much as you might look back on a century ago and wonder how they could just dump raw sludge into the river, somebody could look back on these times and marvel how there was music coming out of so many ceilings, showering the sidewalks, and even playing in MRI chambers. Noise Stories curator Garrett Keizer reminds us that there is all the difference in the world between cranking it like you own the place and letting the other guy say his bit too [18]. Signage practices are more complicated, as is any etiquette about what you can say where. Here lie issues for the finest philosophers. Ambient information may escalate those issues. So to rephrase that opening question: Why is there no such thing as a tangible information commons?

The individual idea concerns the adaptive benefits of noticing your surroundings. Of course life is fast and you need to get on with it. And of course solitary reflectivity is a cultural bias. Not everyone wants to be left alone with his or her thoughts. To some, the Net is not just for sharing thoughts but for actually having them in the first place. Not everyone (especially those not in the luckiest billion) regards the world as a sweet place anyway. To most cultures throughout history, the world was understood as the source of intimidating distances, danger, disease, or drought. Even so, it was never so far from everyday experience. Only in the past hundred years did so much of humanity move indoors full time.

But there is more to this than stepping outside for a mental-health break. (Of course you may want to do so sometime—stopping to smell the roses doesn't make you a Luddite.) Cognition research has demonstrated the restorative power of quiet fascination with environments [19]. The keyword there is fascination. What makes this more than a mental-health break is neuroplasticity, the brain's considerable (but sorry, not infinite) capacity to be changed by habitual, embodied, adaptive learning. It might take a Zen master or a neuroscientist to explain why, but somehow those neural pathways are both better evolved and more adaptable for habitual fascination than for nonstop casual entertainment. Especially fascination with the world. If you are fascinated enough with the city, and it has noise pollution under control, you don't need any audio feed, and it may not occur to you to turn your own music on.

One word for ongoing perceptual habits is sensibility. You can acquire tastes and dispositions; life would be dreary if you never did so. As attention psychology pioneer William James once put it, "[M]y experience is what I agree to attend to" [20]. In the age of superabundance, you more or less have to take responsibility for your attention practices. Except now, one hundred years later, cognitive science has explained how attention is not a matter of will, and especially not a matter of instantaneous choice. Much more is now known about the mix of voluntary and involuntary attention, about alertness and orientation alongside execution, about engagement and flow, and about the meditative remembering of different body states. Attention is no mere spotlight. You cannot choose what you notice at the moment. But you can alter your habits. Much more is now known about the importance of adaptive learning. The more that those habits involve fascination with some aspects of the world instead of entertainment by tuning out, the less empty overconsumption or casual attention theft you may suffer. Sensibilities to surroundings don't involve just roses, but also rooms, streets, and neighborhoods. Without those, you would be only more overloaded; there would be nothing left but your screens. It is worth remembering that the environment isn't simply someplace else in a pristine state, devoid of technology; it is all around you. It is slowly being augmented, with good design and bad. This seems worth your attention. Let's hope the temporary inconvenience of this essay has been so too.

References

1. To consider attention itself, and especially attention to surroundings, I have long been mining the literature, as an amateur but in a persistent and organized way, and with the benefit of excellent research library databases that complement the open Net. A feature article such as this one simultaneously increases the need to make generalizations and decreases the space to cite their very many origins. Please accept a disclaimer that I do know where so many of these ideas have come from, and only their juxtaposition and editorial synthesis is my own. For more detail, see my forthcoming Ambient Commons (Spring 2013, MIT Press). For these few endnotes here, please accept a limited number in abbreviated form.

2. Simon, H.A. Designing organizations for an information-rich world. In Computers, Communications, and the Public Interest. M. Greenberger, ed. Johns Hopkins Univ. Press, 1971.

3. For two good contrasting trade-press inquiries into attention, see Maggie Jackson's Distracted: The Erosion of Attention and the Coming Dark Age (2009) and Winifred Gallagher's Rapt: Attention and the Focused Life (2009).

4. On the history of early overload, search the work of Ann Blair or Geoff Nunberg.

5. On overload versus overconsumption, search the work of Linda Stone or Sherry Turkle.

6. Ophir, E., Nass, C., and Wagner, A.D. Cognitive control in media multitaskers. Proc. of the National Academy of Sciences 106, 37 (2009), 15583–15587.

7. On switching costs and the fallacies of multitasking, search the work of David Meyer.

8. On embodied cognition, search the work of Andy Clark, Lou Barsalou, Anthony Chemero, or George Lakoff and Mark Johnson.

9. Clark, A. Being There: Putting Brain, Body and World Together Again. MIT Press, 1997.

10. On embodiment and activity theory, readers of this magazine might already know the work of Paul Dourish, or Victor Kaptelinin and Bonnie Nardi.

11. On "information as a material," see Mike Kuniavsky's Smart Things (2011).

12. On sensemaking, search the work of Karl Weick.

13. Mitchell, W.J. Placing Words: Symbols, Space, and the City. MIT Press, 2005.

14. In a Lexis-Nexis word search of "smart cities" in world news media, two-thirds of the ~1,000 results from the last thirty years are from the last ten years.

15. On walkable urbanism, search the work of Chris Leinberger.

16. On social history of predigital urban information technology, search the work of Priscilla Ferguson or David Henkin.

17. Simmel, G. The metropolis and mental life. 1903. For interpretations of Simmel, search the work of David Frisby.

18. Keizer, G. The Unwanted Sound of Everything We Want: A Book About Noise. PublicAffairs, 2010. Also noisestories.org

19. Kaplan, R. and Kaplan, S. The Experience of Nature: A Psychological Perspective. Cambridge University Press, 1989. The Kaplans are generally credited with the now widely used expression "effortless attention."

20. James, W. The Principles of Psychology, Vol. 1. Henry Holt, 1890, 402.

Author

Malcolm McCullough teaches architecture at Taubman College, the University of Michigan. He has recently completed a book on which this essay is based: Ambient Commons: Attention in the Age of Embodied Information (forthcoming, MIT Press, Spring 2013). He is also the author of Digital Ground (2004) and Abstracting Craft (1996), two other books on design and embodiment that have been widely read in the interaction design discipline.

Figures

Figure UF1. The basic idea of urban computing.

Figure UF2. Augment reality, perhaps, but don't mask it!

Figure UF3. A patch of sun crossing a wall.


©2012 ACM  1072-5220/12/11  $15.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

