Technology and liberty

Authors: Jonathan Grudin
Posted: Tue, May 03, 2016 - 10:22:47

The absence of plastic microbeads in the soap led to a shower spent reflecting on how technologies can constrain liberties, such as those of microbead producers and consumers who are yearning to be clean.

Technologies that bring tremendous benefits also bring new challenges. Sometimes they create conditions conducive to oppression: oppression of the weak by the strong, the poor by the rich, or the ignorant by the clever. Efforts on behalf of the weak, poor, and ignorant often infringe on the liberty of the strong, rich, and clever. As powerful technologies proliferate, our survival may require us to get the balance right. Further constraints on liberty will balance new liberating opportunities.

Let’s start back a few million years, before beads were micro and technologies changed everything.

Fission-fusion and freedom

For millions of years our ancestors hunted and gathered in fission-fusion bands. A group grew when food was plentiful; in times of scarcity, it split into smaller groups that foraged independently. When food was again plentiful, small groups might merge… or might not. A fission-fusion pattern made it relatively easy for individuals, couples, or small groups to separate and obtain greater independence. This was common: Homo sapiens spread across the planet with extraordinary rapidity, adapting to deserts, jungles, mountaintops, and the Arctic tundra. That freedom led to the invention of diverse cultural arrangements.

1. Agriculture and the concentration of power

It is a law of nature, common to all men, which time shall neither annul nor destroy, that those who have more strength and power shall rule over those who have less. – Dionysius of Halicarnassus

Agriculture was a transformative technology. It turned roaming hunter-gatherers into settled farmers, and the resulting food surplus gave rise to large-scale social organization and an explosion of occupations. Everyone could enjoy the arts, crafts, diverse foods, sports, medicine, security, and potential religious salvation, but with these benefits came implicit contracts: Artists, craftspeople, farmers, distributors, athletes, healers, warriors, and priests were guaranteed subsistence. People were collectively responsible for each other, including many they would never meet, individuals outside their immediate kinship groups. People who wanted freedom might slip away into the wilderness, but those who reaped the benefits of civilization were expected to conform to cultural norms that often encroached on personal liberty.

The leader of a hunter-gatherer band had limited power, but agriculture repeatedly spawned empires ruled by despots—pharaohs in Egypt, Julio-Claudian emperors in Rome, and equally problematic rulers in Peru, Mesoamerica, and elsewhere. The Greek historian Dionysius lived when Rome was strong and powerful.

Why this pattern? Governments were needed for security and order: to protect against invasion and to control the violence between kinship groups that was common in hunter-gatherer settings but interfered with large-scale social organization.

The response to oppression of the weak by the powerful? Gradually, more democratic forms of government constrained emperors, kings, and other powerful figures. Today, violence control is the rule; strong individuals or groups can’t ignore social norms. Even libertarians acknowledge a role for military and police in safeguarding security and enforcing contracts that the strong might violate if they could.

2. The second industrial revolution and the concentration of wealth

Another technological revolution yielded a new problem: oppression of the poor by the wealthy. In the late 19th and early 20th centuries, monopolistic robber barons in control of railroads and mines turned workers into indentured servants. Producers could make fortunes by using railroads to distribute unhealthy or shoddy goods quickly and widely; detection and redress had been much easier when all customers were local.

The response to the oppression of the poor by the wealthy? Perhaps to offset the rise of populist or socialist movements, the United States passed anti-trust legislation in the early 20th century, giving the government a stronger hand in regulating business. Also, the interstate commerce clause of the Constitution was applied more broadly, encroaching on the liberty of monopolists and others who might use manufacturing and transportation technologies exploitatively. It has been a steady process. Ralph Nader’s 1965 book Unsafe at Any Speed identified patterns in automobile defects that had gone unnoticed and triggered additional consumer protection legislation. More recently, after a loosening of regulations enabled wealthy financiers to wreck the world economy a decade ago, the 2010 Dodd–Frank Wall Street Reform and Consumer Protection Act constrained the liberty of the wealthy, an effort to head off a recurrence that may or may not prove sufficient. Some libertarians on the political right, such as the Koch brothers, are vehemently anti-regulation, but for a century most people have accepted constraints [1].

3. Information technology and the concentration of knowledge

My libertarian friends in the tech industry believe they desire the freedom of the cave-dweller. Sort of. Not strong and powerful, they support our collective endeavors to maintain security and enforce signed contracts. They are not among the 1%, either, and they favor preventing the very wealthy from reducing the rest of us to indentured servitude in the manner of the robber baron monopolists.

However, my libertarian tech friends are clever, and they oppose limiting the ability of the intelligent to oppress the less intelligent through contracts with implications or downstream effects that the less clever cannot figure out: “The market rules, and a contract is a contract.” Technology that provides unencumbered information access gives an edge to sharp individuals. The Big Short illustrated this: banks outsmarted less astute homeowners and investors; then a few very clever fellows beat the bankers, who succeeded in passing most of their losses on to customers and taxpayers.

The response to the oppression of the slow by the quick-witted? A clear example is the 1974 U.S. Federal Trade Commission rule that designates a three-day “cooling-off period” during which anyone can undo a contract signed with a fast-talking door-to-door salesman. Europe has also instituted cooling-off periods. The U.S. law applies to any sale for over $25 made in a place other than the seller’s usual place of business. How this will be applied to online transactions is an interesting question. More generally, though, information technology provides ever more opportunities for the quick to outwit the slow. We must decide, as we did with the strong and the rich, what is equitable.

Butterfly effects

Technology has accelerated the erosion of liberty by expanding the ability of an individual to have powerful effects on other individuals. Twenty thousand years ago, a bad actor could affect only his band and perhaps a few neighboring groups. In agrarian societies, a despot’s reach could extend hundreds of miles. Today, those affected can be nearby or in distant places, with an impact that is immediate and evident, or delayed and with an obscure causal link. It can potentially affect the entire planet. It is not only those with a finger on a nuclear button who can do irreparable damage. Harmful manufactured goods can spread more quickly than a virus or a parasite. A carcinogen in a popular product can soon be in most homes.

We who do not live alone in a cave are all in this together, signatories to an implicit social contract that may be stronger than some prefer, which limits our freedom to do as we please. Constraining liberty is not an effort to deprive others of the rewards of their efforts. It is done to protect people from those who might intentionally or unintentionally, through negligence, malfeasance, oppression, or simply lack of awareness, violate the loose social contract that for thousands of years has provided our species with the invaluable freedom to experiment, innovate, and trust one another—or leave their society to build something different. If the powerful, wealthy, or clever press their advantage too hard, we risk becoming a distrustful, less productive, and less peaceful society.

Plastic microbeads in cosmetics and soaps spread quickly, accumulating by the billions in lakes and oceans, attracting toxins and adhering to fish, reminiscent of the chlorofluorocarbon buildup that once devastated the ozone layer. In 2013 the UN Environment Programme discouraged microbead use. Regional bans followed. Even an anti-regulatory U.S. Congress passed the Microbead-Free Waters Act of 2015. It applies only to rinse-off cosmetics, but some states went further. The most stringent law, in California of course, overcame opposition from Procter & Gamble and Johnson & Johnson. Our creativity has burdened us with the responsibility for eternal vigilance in detecting and addressing potential catastrophes.


1. Politicians who favor freedom for themselves but would, for example, deny women reproductive choices might not seem to fit the definition of libertarian, but some claim that mantle.

Thanks to John King and Clayton Lewis for discussions and comments, and to my libertarian friends for arguing over these issues and helping me sort out my thoughts, even if we have not bridged the gap.


Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

Oh, the places I will go

Authors: Joe Sokohl
Posted: Fri, April 29, 2016 - 1:26:11

Over the decades conferences, symposia, webinars, and summits all have formed critical portions of my professional development. I learned about controlled vocabularies and usability testing and the viscosity of information and personas and information visualization and so much more from attending two- or three-day events—and even local evening presentations from peers and leaders alike, all centered on this thing of user experience.

So along with dogwood blossoms and motorcycling weather, the conference season blooms anew. This year finds me focusing on three events, with the anticipation of meeting up with friends both met and as-yet unknown.

The Information Architecture Summit
May 4–8, 2016, Atlanta, GA

I’ve attended all but two of the summits since the event’s inception at the Boston Logan Hilton in 2000. That year, the summit saw itself as a needed discussion and intersection point between the information-design-oriented (and predominantly West Coast) information architects and the library-science-oriented East Coasters.

When it began, at the ebbing of the wave of dotcom headiness, the American Society for Information Science (and, later, Technology) decided to hold the summit only over the weekend and in an airport hotel, to reduce the impact of folks having to miss work. We had so much to do back then, didn’t we?

As it grew, the IA Summit expanded the days of the core conference and added several days of workshops before the summit itself.

Because I’ve attended all but two summits (2003 and 2004), I know a lot of the folks who have woven in and out of its tapestry. So, for me, this event is as important for inspiration from seeing who’s doing what and catching up with people as it is in learning from sessions.

But learning is a core component; last year’s summit in Minneapolis reignited an excitement for IA through Marsha Haverty’s “What We Mean by Meaning” and Andrew Hinton’s work on context and embodied cognition. 

So expect heady bouts with science, technology, and philosophy, as well as practical work in IA. Oh, and come see me speak on Sunday, if ya’d like.

Then there’s the Hallway, where the conference really takes place. From conversations to karaoke, from game night to the jam, the IA Summit creates a community outside the confines of the mere conference itself.

Enterprise User Experience
June 8–10, 2016, in San Antonio, TX

Last year’s inaugural Enterprise UX conference took me and much of the UX world by storm: a much-needed conference focused on the complexity of enterprise approaches to user experience. Two days of single-track sessions followed by a day of optional workshops provided great opportunities for learning, discussion, and debate.

Dan Willis hosted a wonderfully unique session showcasing eight storytellers rapidly recounting personal experiences in ways at once humorous and poignant.

This year, luminaries such as Steve Baty from Meld Studios in Sydney, MJ Broadbent from GE Digital, and Maria Giudice from Autodesk will be among a plethora of great speakers.

These themes guide the conference this year:

  • How to Succeed when Everyone is Your User
  • Growing UX Talent and Teams
  • Designing Design Systems
  • The Politics of Innovation

Plus, the organization of the conference is simply stellar. Props to Rosenfeld Media for spearheading this topic!

edUi
October 24–26, 2016, in Charlottesville, VA

This conference is as much a labor of love and devotion to the field of UX in EDU as anything. Also, I’ve been involved since its inception: The first year I was an attendee, the second year a speaker, and ever since I’ve been involved in programming and planning. So, yeah, I’ve a vested interest in this conference.

Virginia Foundation for the Humanities’ (VFH) Web Communications Officer Trey Mitchell and former UVA Library programmer Jon Loy were sitting around one day, thinking about conferences such as the IA Summit, the Interaction Design Association’s conference, Higher Ed Web, and other cool UX-y conferences and thought, “Why don’t we create a conference here in Virginia that we’d wanna go to?”

Well, there’s a bit more to the story, but they created a unique event focused on the .edu crowd—museums, universities, colleges, libraries, institutes, and foundations—while also providing great content for anyone in the UX space.

As the website says, “edUi is a concatenation of ‘edu’ (as in .edu) and ‘UI’ (as in user interface). You can pronounce it any way you like. Some people spell it out like ‘eee dee you eye,’ but most commonly we say it like ‘ed you eye.’”

Molly Holzschlag, Jared Spool, and Nick Gould stepped onto the podia in 2009, among many others. Since then, Trey and company have brought an amazing roster of folks. 

For the first two years, the conference was in Charlottesville. Then it moved to Richmond for four years. Last year it returned to Charlottesville and Trey led a redesign of the conference. From moving out of the hotel meeting rooms and into inspiring spaces along the downtown Charlottesville pedestrian mall to sudden surprises of street performers during the breaks, the conference became almost a mini festival where an informative conference broke out.

This year promises to continue in that vein. So if a 250-ish-person conference focused on issues of UX that lean toward (but aren’t exclusively) .edu-y sounds interesting…meet me in Charlottesville.


Joe Sokohl

For 20 years Joe Sokohl has concentrated on crafting excellent user experiences using content strategy, information architecture, interaction design, and user research. He helps companies effectively integrate user experience into product development. Currently he is the principal of Regular Joe Consulting, LLC. He’s been a soldier, cook, radio DJ, blues road manager, and reporter once upon a time. He tweets at @mojoguzzi.

Violent groups, social psychology, and computing

Authors: Juan Pablo Hourcade
Posted: Mon, April 25, 2016 - 2:55:12

About two years ago, I participated in the first Build Peace conference, a meeting of practitioners and researchers from a wide range of backgrounds with a common interest in using technologies to promote peace around the world. During one session, the presenter asked members of the audience to raise their hands if they had lived in multiple countries for an extended period of time. Most hands in the audience went up, which was at the same time a surprise and a revelation. Perhaps there is something about learning to see the world from another perspective, as long as we are receptive to it, that can lead us to see our common humanity as more binding than group allegiances.

It’s not that group allegiances are necessarily negative. They can be very useful for working together toward common goals. Moreover, most groups use peaceful methods toward constructive goals. The problems come when strong group allegiances intersect with ideologies where violence is a widely accepted method, and dominion over (or elimination of) other groups is a goal.

A recent issue of Scientific American Mind featuring several articles on terrorism highlights risk factors associated with participation in groups that support the use of violence against other groups. A consistent theme is a strong sense of belonging to a particular group, to the exclusion of other groups, in some cases including family and childhood friends, together with viewing those from other groups as outsiders to be ignored or worse.

Information filters or bubbles can play a role in isolating people so that they engage deeply with the viewpoints of only one group, and can validate extreme views with people outside their physical community. These filters and bubbles are not new to the world of social media, but they are easily realized within it as competing services attempt to capture our attention by providing us with content we are more likely to enjoy.

At the same time, interactive technology and social media can be the remedy for breaking out of these filters and bubbles. To think about what some of these remedies may be, I discuss some articles that can provide motivation, all cited in the previously mentioned Scientific American Mind issue.

The first area in which interactive technologies could help is in making us realize that our views are not always broadly shared. This would counter the “false consensus effect,” through which we often believe that our personal judgments are common among others. Perhaps providing a sense of the relative commonality (or rarity) of certain beliefs could be useful.

Sometimes we may not have strong feelings about something; when that is the case, we tend to copy the decisions of others who we feel resemble us most, while disregarding those who are different. It is important, then, to highlight experts from outside someone’s group, as well as to help people realize that those from other groups often make decisions that would work for us too.

Allegiances to groups can reach the point where people express a willingness to die for their group, when they feel their identity is fused with that of the group. Interactive technologies could help in this regard by making it easier to identify with multiple groups, so that we don’t feel solely associated with one.

As I mentioned earlier, being part of a tight group most of the time does not lead to problems, and can often be useful. But what if the group widely accepts the use of violence to achieve dominance over others? One way to bring people back from these groups is to reconnect them with memories and emotions of their earlier life, helping them reunite with family and old friends. Social media already does a good job of this, but perhaps there could be a way of highlighting the positives from the past in order to help. With a bit of content analysis, it would be possible to focus on the positive highlights.

There is obviously much more to consider and discuss within this topic. I encourage you to continue this discussion in person during the Conflict & HCI SIG at CHI 2016, on Thursday, May 12, at 11:30am in room 112. See you there!


Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.

Collateral damage

Authors: Jonathan Grudin
Posted: Tue, April 05, 2016 - 1:06:30

Researchers are rewarded for publishing, but this time, my heart wasn’t in it.

It was 2006. IBM software let an employer specify an interval—two months, six months, a year—after which an email message would disappear. This was a relatively new concept. Digital storage had been too expensive to hang onto much, but prices had dropped and capacity increased. People no longer filled their hard drives. Many saved email.

When IBM put automatic email deletion into practice, a research manager asked her IT guy to disable it. “That would be against policy,” he pointed out. She replied, “Disable it.” Another IBM acquaintance avoided upgrading to the version of the email system that included the feature. When she returned from a sick leave, she found that a helpful colleague had updated her system. Her entire email archive was irretrievably gone.

“We call it email retention, but it’s really email deletion.”

Word got around that Microsoft would deploy a new “managed email” tool to all North American employees, deleting most messages after six months (extended grudgingly to 12 when some argued email was needed in preparing for annual reviews). Because of exceptions—for example, patent-related documents must be preserved for 10 years—employees would have to file email appropriately.

Many researchers, myself included, prefer to hang onto stuff indefinitely. I paused another project to inquire and learned that a former student of mine was working on it. A pilot test with 1,000 employees was underway, he said. In a company of 100,000, it is easy not to hear about such things. He added that it was not his favorite project, and soon left the team.

Our legal division had assembled a team of about 10 to oversee the deployment. Two-thirds were women, including the group manager and her manager. They were enthusiastic. Many had voluntarily transferred from positions in records management or IT to work on it. My assumption that people embracing email annihilation were authoritarian types quickly proved wrong: it was a friendly group with bohemian streaks. They just didn’t like large piles of email.

I had assumed that the goal of deleting messages was to avoid embarrassing revelations, such as an admission that smoking is unhealthy or a threat to cut off a competitor’s air supply. Wrong again. True, some customers clamoring for this capability had figured prominently in questionable government contracting and environmental abuse. But it is a crime to intentionally delete inculpatory evidence and, I was told, litigation outcomes are based on patterns of behavior, not the odd colorful remark that draws press notice.

Why then delete email? Not everyone realized that storage costs had plummeted, but for large organizations, the primary motive was to reduce the cost of “ediscovery,” not hardware expenditures.

Major companies are involved in more litigation than you might think. Each party subpoenas the other’s correspondence, which is read by high-priced attorneys. They read their side’s documents to avoid being surprised and to identify any that need not be turned over, such as personal email, clearly irrelevant email, and any correspondence with an attorney, which as we know from film and television falls under attorney-client privilege. A large company can spend tens of millions of dollars a year reading its employees’ email. Reduce the email lying around to be discovered, the thinking went, and you reduce ediscovery expenses.

Word of researcher unhappiness over the approaching email massacre reached the ears of the company’s Chief Software Architect, Bill Gates. We were granted an exemption: A “research” category was created, similar to that for patent-related communication.

Nevertheless, I pursued the matter. I asked the team about the 1,000-employee pilot deployment. The response was, “The software works.” Great, but what was the user experience? They had no idea. The purpose of the pilot was to see that the software deleted what it should—and only what it should. The most important exception to automatic deletion is “litigation hold”: Documents of an employee involved in litigation must be preserved. Accidental deletion of email sent or received by someone on litigation hold could be catastrophic.

The deployment team was intrigued by the idea of asking the early participants about their experiences. Maybe we would find and fix problems, and identify best practices to promote. This willingness to seek out complaints was to the team’s credit, although I was realizing that they and I had very different views of the probable outcome. They believed that most employees would want to reduce ediscovery costs and storage space requirements, and about that they were right. But they also believed that saving less email would increase day-to-day operational efficiency, whereas my intuition was that it would reduce efficiency, and not by a small amount. But I had been wrong about a lot so far, a not uncommon result of venturing beyond the ivory tower walls of a research laboratory, so I was open-minded.

“Doesn’t all that email make you feel grubby?”

My new collaborators often invoked the term “business value.” The discipline of records management matured at a time when secretaries maintained rolodexes and filing cabinets organized to facilitate information retrieval. Despite such efforts, records often proved difficult to locate. A large chemical company manager told me that it was less expensive to run new tests of the properties of a chemical compound than to find the results of identical tests carried out years earlier.

To keep things manageable back then, only records that had business value were retained. To save everything would be painful and make retrieval a nightmare. Raised in this tradition, my easygoing colleagues were uncompromising in their determination to expunge my treasured email. They equated sparsity with healthy efficiency. When I revealed that I saved everything, they regarded me sadly, as though I had a disease.

I have no assistant to file documents and maintain rolodexes. I may never again wish to contact this participant in a brief email exchange—but what if five years from now I do? Adding everyone to my contact list is too much trouble, so I keep the email, and a quick search based on the topic or approximate date can retrieve her in seconds. It happens often enough.

I distributed to the pilot participants an email survey comprising multiple-choice and open-ended questions. The next step was to dig deeper via interviews. Fascinated, the deployment team asked to help. Working with inexperienced interviewers does not reduce the load, but the benefits of having the team engage with their users outweighed that consideration. I put together a short course on interview methods.

“Each informant represents a thousand other employees, a million potential customers—we want to understand the informant, not convert them to our way of thinking. For example, someone conducting a survey of voter preferences has opinions, but doesn’t argue with a voter who differs. If someone reports an intention to vote for Ralph Nader, the interviewer doesn’t shout, ‘What? Throw away your vote?’”

Everyone nodded.

“In exchange for the informant trusting us with their information, our duty is to protect them.” After they nodded again, I continued with a challenging example drawn from the email survey: “For example, if an employee says that he or she gets around the system by using gmail for work-related communication—”

White-faced, a team member interrupted me through clenched teeth, “That would be a firing offense!”

At the end of the training session, the team manager said, “I don’t think I’ll be able to keep myself from arguing with people.” Everyone laughed.

The white-faced team member dropped out. Each interview save one was conducted by a team member and me, so I could keep it on track. One interview I couldn’t attend; the team manager and another team member went. When I later asked where the data were, they looked embarrassed. “We argued with him,” the team manager reported. “We converted him.”

My intuition batting average jumps to one for three

The survey and interviews established that auto-deletion of email was disastrously inefficient. The cost of the time that employees spent categorizing email as required by the system outweighed ediscovery costs. Time was also lost reconstructing information that had only been retained in email. “I spent four hours rebuilding a spreadsheet that was deleted.”

Workarounds contrived to hide email in other places took time and made reviewing messages more difficult. Such workarounds would also create huge problems if litigants’ attorneys became aware they existed, as the company would be responsible for ferreting them out and turning everything over.

Most damning of all, I discovered that managed email would not reduce ediscovery costs much. The executives and senior managers whose email was most often subpoenaed were always on litigation hold for one case or another, so their email was never deleted and would have to be read. The 90% of employees who were never subpoenaed would bear virtually all of the inconvenience.

Finally, ediscovery costs were declining. Software firms were developing tools to pre-process and categorize documents, enabling attorneys to review them an order of magnitude more efficiently. At one such firm I saw attorneys in front of large displays, viewing clusters of documents that had been automatically categorized on several dimensions and arranged so that an attorney could dismiss a batch with one click—all email about planning lunch or discussing performance reviews—or drill down and redirect items. That firm had experimented with attorneys using an Xbox handset rather than a keyboard and mouse to manipulate clusters of documents. They obtained an additional 10% increase in efficiency. However, they feared that customers who saw attorneys using Xbox handsets would conclude that these were not the professionals they wanted to hire for a couple hundred dollars an hour, so the idea was dropped. Nevertheless, ediscovery costs were dropping fast.

Positive outcomes

At the ediscovery firm, I asked, “Are there changes in Exchange that would help improve the efficiency of your software?” Yes. A manager in Exchange told me that ediscovery firms were a significant market segment, so I connected them and a successful collaboration resulted.

We found ways to improve the interface and flow of our “email retention” software, reducing the inconvenience for anyone who would end up using it.

I learned about different facets of organizations, technology use, and people. I loved working with the records management team and the attorneys. Attorneys in tech companies are relaxed, funny, and have endless supplies of stories. They never let you record an interview, but they are invariably good company.

As the date for the deployment to 50,000 Microsoft North America employees approached, word of our study circulated. The executive vice president overseeing our legal division convened a meeting that was run with breathtaking efficiency, like a scene in The West Wing. He turned to me. “We created a ‘research’ exemption for Microsoft Research. Why are you here?” I said, “I’m doing this for Microsoft, not MSR.”

The deployment was cancelled.

The product was released. Customers wanted it. A partner at a major Bay Area law firm heard of the study and phoned me. He was interested in our analysis of efficiency, but noted that for some firms, profitability was the only issue. “Consider Philip Morris,” he said. “One of their businesses is addicting people to something that will kill them. As long as that business is profitable, they will stay in it. If it ceases being profitable, they will get out. Efficiency isn’t a concern.”

Collateral damage

I saw a cloud on the horizon. The raison d’être of the team that had welcomed me was to oversee a deployment that would not happen. What would they do? I formed a plan. When a subpoena arrives, all affected employees are put on litigation hold. Their email and documents are collected and read by attorneys to identify relevant material. Determining relevance is not easy. It could be signaled by the presence or absence of project code names that evolved over time. People involved in discussions may have left the company. It is often difficult to determine which project a short email message refers to. Some employees file information under project names, but others rely on message topic, sender, recipients, date, urgency, or a combination of features. Some don’t file much at all, relying on Inboxes or other files holding thousands of uncategorized messages. Attorneys sit at computers trying to reconstruct a history that often spans several years and scores or hundreds of people.

I thought, “Here is the opportunity for email management.” Armed with tools and procedures, the team could help attorneys sort this out by working with the employees on litigation hold: identifying attorneys with whom privileged email was exchanged, listing relevant project code names, indicating colleagues always or usually engaged in communication relevant to the subpoenaed project and those wholly unrelated, and so on. This could greatly reduce the time that expensive attorneys spent piecing this together.

I worked on a proposal, but I was not fast enough. The legal division makes an effort to reassign attorneys whose roles are no longer needed, but it does not generate jobs for surplus records managers. The team was laid off, including the manager and her manager.

They had heeded a call to take on important work for the company. The positions they had left had been filled. They had welcomed me and worked with me, and it cost them their jobs. “Our duty is to protect our informants,” I had taught them, and then I failed to do it.

A few found other positions in the company for a time. None remain today. Before leaving, they held a party. It could have been a wake, but it was labeled a project completion celebration. Lacking a “morale budget,” it was potluck. An artist in the group handed out awards. Mine has rested on my office window sill for almost a decade. At the party, someone told me quietly, “We had a discussion about whether or not to invite you. We decided that you were one of the team.” I was the one not laid off, for whom the study was a success. But not a success I felt like writing up for publication.

Posted in: on Tue, April 05, 2016 - 1:06:30

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

Extremes of user experience and design thinking: Beards and mustaches

Authors: Aaron Marcus
Posted: Fri, March 25, 2016 - 11:04:27

The characteristics that differentiate us human beings, and at the same time unite people from different regions of the world, are a subject that fascinates me.

Recently, I had a half-hour journey to a nearby Austin, Texas, hot-rod show to entertain my grandchildren. On the way, to keep the grandchildren interested, I happened to search the Internet on my phone for the world’s longest beards and moustaches on men, and hair on women (about 18 feet long was the Guinness record for women, as I recall). We enjoyed looking at these hirsute oddities.

So it was not without a previous demonstration of interest that, just after my son dropped me off at a downtown stop to catch the No. 100 bus to the airport, a fellow came up and asked whether I was at the outbound stop headed to the airport or the inbound stop, which was just around the corner and understandably confusing to a non-native. I could not help myself: I immediately became social, extroverted, and interested in his unusual beard and his unusual subset of humanity. I complimented him on his outstanding growth and mentioned that I had just been searching the world for images of long beards and moustaches to show my grandchildren in Austin.

There began a 10-minute conversation with Patrick Dawson, of Seattle. He was returning home after participating in the annual Austin Men’s Beard and Mustache Competition, which had taken place Saturday night. Had I only known! This was Patrick’s second visit to Austin for this competition, sponsored by the Austin Hair Club (!?). There were two competitions that day. The open competition, with a maximum of about 250 competitors, took place at the Mohawk Arena, which was filled to capacity with about 1,000 people. The three top winners in each category (goatees, beards, moustaches, mutton chops, etc.) then competed in the evening competition, which features lots of booze and rowdiness. Patrick had won second place for goatees and first place for mustaches in the open competition and had placed first for "partial beards and goatees" in the evening competition. He had his precious trophy in his carrying bag to prove his achievement.

There is even a Women’s Division competition, called the Whiskerina, featuring an award for the most creative strap-on beard made out of whatever the women-folk choose to use. I can imagine the creativity.

The Austin Facial Hair Competitions usually take place in February; for purposes of your future scheduling, please note the exception for next year described below.

Patrick told me more during the half-hour ride to the Austin Airport: There is also a World Beard and Moustache Competition that takes place every two years. The last one took place on 3 October 2015 in Leogang, Austria, south of Salzburg (where I hope to be on 4 April at Persuasive Technology 2016 to give a conference tutorial about mobile persuasion design). Guess what? Patrick won first place for his goatee last year! Guess what again!? The next World Beard and Moustache Competition will take place on Labor Day, Monday, 4 September 2017. Ladies and gentlemen, mark your calendars!

Patrick gave me permission to post and send his photo, and said he was very happy to participate in these competitions, which do not give money, just fame and honor, and raise money for worthy charities. He said the last one (perhaps he meant the one in Austin last weekend) raised $10,000. Not bad for some whiskers.

Well, I felt gloriously happy that I had been able to connect with him, learn these strange, exotic details, and enjoy my short time with him. His braided goatee made me nostalgically long for my 24-inch hair braid of the 1970s. Sigh…

Well, what did this teach me about user-experience design? First of all, that the exotic differences of human interests, preferences, expectations, values, signs, and rituals are more fabulously complex than one could ever imagine. At the same time, it seemed heartening that people from all walks of life, all kinds of countries, and all kinds of cultures could find shared enthusiasms and gather to honor the best of their breed or brood, while at the same time working toward humanitarian goals. It reinforced, for me, the necessity, if one is dedicated to “knowing thy user,” of doing thorough research in the target community, developing adequate personas and use scenarios, and thoughtfully considering the language, concepts, images, and activities of these groups when seeking to develop outstanding user experiences.


Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.

Designing the cognitive future, part IX: High-level impacts

Authors: Juan Pablo Hourcade
Posted: Tue, March 22, 2016 - 10:27:21

In previous blog posts I have been writing about how interactive technologies are changing or may change our cognitive processes. In this post I reflect on the high-level impact of these changes, and identify four main areas of impact, each with its own opportunities and risks: human connections and information, control, creativity, and (in)equality.

In terms of human connections and information, I identified two risk-opportunity axes. The first axis goes from social isolation to higher levels of empathy, while the second goes from bias to representative diversity in access to information and communication with others.

The first axis goes to the heart of arguments such as those in Alone Together about the risks of having technologies isolate people and cut them off from others [1]. At the same time, there is the opportunity to interact with people we would not otherwise be able to reach, and even to better understand others’ points of view. Anxieties about the impact of personal media on society are not new. For example, in early 19th-century England, there was significant concern about the growing popularity of novels; critics cited themes similar to those brought up these days with regard to interactive technologies, such as a lack of intellectual merit and the potential to cause insanity through isolation [2]. On the positive end of the axis, technologies could help us re-engage and find the time for face-to-face activities, helping form important bonds, for example, between parents and children. The challenge for interaction designers is to enable opportunities for personal enjoyment while also encouraging and designing for social uses that enable previously unavailable forms of communication.

The second axis, referring to bias versus representative diversity in information access and communication, brings about a new version of a familiar challenge. As I mentioned in my previous blog post on communication, people used to have very localized biases in terms of the information they could access and the people with whom they could communicate. We are now replacing those biases with new biases brought about by personalized experiences with interactive technologies. At the same time, the opportunities to engage with a wide, representative variety of information and people are unprecedented. This access to a more diverse set of people could potentially lower the social distance between us and those who are different from us. Social distance is often a prerequisite for supporting armed conflict [3], and interactive technologies could help in this regard. The challenge for interaction designers is to make people aware of biases while enticing them toward accessing representative sources and communicating with representative sets of people.

In terms of control, the risk-opportunity axis goes from loss of control over our information and decision-making to greater control over our lives and bodies. Loss of control may come through the relentless collection of data about our lives, together with the convenience of automating decision-making, which could lead to significant threats in terms of privacy and manipulation. On the other hand, the same data can give us greater insights into our lives and bodies, help us make better decisions, and help us lead lives that more closely resemble our goals and values. The challenge for interaction designers is to keep people in control of their information and lives, and aware of the data and options behind automated decision-making.

In terms of creativity, the risk-opportunity axis runs from uniformity to greater support for inspiration, expression, and exploration. While there is currently more variety, a few years ago it seemed as if most conference presentations looked alike, because a large majority of presenters followed the paths of least resistance within the same presentation software. This is an example of the risk of uniformity: even great tools, if most people use them the same way, can lead to very similar outcomes. There are obviously plenty of examples of ways in which interactive technologies have enabled new forms of expression, provided inspiration, lowered barriers to existing forms of expression, and made it easier to explore alternatives. The challenge for interaction designers is to do this while enabling a wide range of novel outcomes.

A final and overarching area of impact is political, social, and economic equality. The risk-opportunity axis in this case goes from having a select few with unequalled power due to their access to and use of technology to having technologies that can be used to reduce inequalities in a fair and just manner. If interactive technologies can be thought of as a way of giving people cognitive superpowers, they could potentially bring about significant imbalances, giving us the risk identified above. On the other hand, we could design interactive technologies in such a way as to encourage these superpowers to be used to make the economy and social order more fair and just. This could be accomplished through the capabilities of the technology (e.g., enabling high-income people to feel what it is like to live in low-income regions), or by designing interactive technologies that can easily reach the most disadvantaged and marginalized populations in a way that enables their participation and inclusion.

What are your thoughts on these topics? How do you feel technologies are currently affecting you across these axes?


1. Sherry Turkle. 2012. Alone Together: Why We Expect More from Technology and Less from Each Other. New York, NY: Basic Books.

2. Patrick Brantlinger. 1998. The Reading Lesson: The Threat of Mass Literacy in Nineteenth Century British Fiction. Indiana University Press.

3. Dave Grossman and Loren W. Christensen. 2007. On Combat: The Psychology and Physiology of Deadly Conflict in War and in Peace. Belleville, IL: PPCT Research Publications.


Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.

Critiquing scholarly positions

Authors: Jeffrey Bardzell
Posted: Tue, March 15, 2016 - 12:13:32

If I am right that HCI and neighboring fields will increasingly rely on the essay as a means of scholarly contribution and debate in the future, then it follows that the construction, articulation, and criticism of intellectual positions will become increasingly important.

In Humanistic HCI, we talk about the essay, the epistemic roles of positions, and how they should be peer reviewed. We defined a position thus:

[A] position is not merely a proposition; it instead holistically comprises an expert-subjective voice; a theoretical-methodological stance; its own situatedness within a domain; and a pragmatic purpose. (73)

But as I read design research papers in the fragmented and emerging subdomain of research through design (and similar practices, including constructive design, critical design, and so forth), I have been frustrated with how researchers characterize others' positions, especially ones they disagree with. 

The purpose of this post is not to discourage disagreement.

It is, rather, to seek ways to support disagreement in a scholarly manner.

I approach this topic by way of analogy. When learning social scientific methods, such as interviews, we are encouraged to write down as closely as possible, even verbatim, what participants say. We are discouraged from writing down our reactions. This is partly to minimize the risk that the reactions come to stand in for what research subjects actually said.

I think that risk is sometimes realized in research through design work, that is, that sometimes in our writings our reactions stand in for the actual positions of other researchers, and that this is hindering intellectual progress.

Recommendation: When critiquing or positioning oneself against prior work, one should summarize the position (its claim structure, voice/stance, theoretical and methodological underpinnings, and pragmatic goals/consequences) before, and as a condition of, expressing one's reaction to or critique of it.

An example

Let me exemplify what I mean.

I begin by summarizing the argument of a well-known paper theorizing research through design (RtD): Zimmerman, Stolterman, and Forlizzi's 2010 paper, "An Analysis and Critique of Research through Design: Towards a Formalization of a Research Approach."

My summary of the authors' position:

Zimmerman et al. begin with the claim that research through design is an increasingly important practice, but one that is not well theorized; as a result, it faces many practical challenges. The intended contribution of the paper is that the authors "take a step towards formalizing RtD as a legitimate method of inquiry within the HCI research community by detailing how RtD can lead to design theory" (310). To do so, they provide a critical literature review, summarize interviews with 12 RtD scholars, and analyze several "canonical" RtD projects (as identified by the interviewees). In their findings, they present several specific ways that RtD practitioners produce theory and frameworks as well as artifacts that "codify the designers’ understanding of the current state, including the relationships between the various phenomena at play therein, and the description of the preferred state as an outcome of the artifact’s construction" (314). Other findings they present are that their interviewees expressed a concern that a romantic conception of the designer-as-genius was inhibiting their ability to present RtD as research; that tacit or implicit knowledge was a key outcome of RtD and that, by definition, such knowledge is difficult to articulate; and that standards of RtD documentation were lacking. Near the end of the paper, Zimmerman et al. offer some recommendations: "there is a need for serious development of RtD into a proper research methodology that can produce relevant and rigorous theory" (316); "A need exists for more examples where the intentional choice and use of the RtD approach as a methodology and process is both described and critically examined" (317); and "Researchers who engage in RtD need to pay more attention to the work of other design researchers [...] It is of the utmost importance that RtD is analyzed and critiqued in a serious and ambitious way" (317).
The paper concludes by observing that RtD is "alive and well" and "recognized by the design and HCI communities," but that "there is still a lot to be done when it comes to establishing RtD as a recognized and well-developed research approach" (318).

I believe that this summary represents their position, as I defined it earlier:

[A] position is not merely a proposition; it instead holistically comprises an expert-subjective voice; a theoretical-methodological stance; its own situatedness within a domain; and a pragmatic purpose. (73)

I also believe that, were I to show the above summary to Zimmerman et al., they would agree that this was their position. I think most readers of that paper in the community would agree that the above summary was overall faithful to the paper. It also included their own words, set in an appropriate context.

And then at that point I could launch my critique of it, because only then are we all on the same page.

But too often, this is not what happens in design research.

Consider the following quotes from the abstract and introduction of Markussen, Krogh, and Bang's 2015 otherwise excellent paper, "On what grounds? An intra-disciplinary account of evaluation in research through design." The bold is my own; it reflects statements about the Zimmerman et al. paper I just summarized.

In the research literature that is initially reviewed in this paper two positions are located as the most dominant representing opposite opinions concerning the nature of such a methodology. One position proposes a cross-disciplinary perspective where research through design is based on models and standards borrowed from natural science, social sciences, humanities and art, while the other position claims a unique epistemology for research through design insisting on its particularities and warning against importing standards from these other disciplines. [...] This “state of the art” has led some researchers to call for a policing of the research through design label, working out a formalized approach with an agreed upon method to document knowledge (Zimmerman, Stolterman, & Forlizzi, 2010). Other researchers, however, argue for appreciating the controversies and proliferation of research programs currently characterizing the field (Gaver, 2012). In caricature it can be noted that representatives of the first group works to associate design with changing existing research traditions (natural, technical, social sciences and humanities) dependent on the deployed methodology and measures for evaluation whereas the latter works to position design outside classical research and science. (pp. 1-2)

Instead of summarizing Zimmerman et al.'s argument, this paper asserts what appears to me to be a reactive interpretation of that argument: that Zimmerman et al. advocate "models and standards borrowed from natural science, social sciences, humanities and art," that their paper "calls for a policing of the research through design label," and that it seeks to work out "a formalized approach with an agreed upon method."

The whole Markussen et al. paper is then positioned in the introduction as a response to these two prior "positions" in research through design: one exemplified by Zimmerman et al. (the police model) and the other by Gaver (the sui generis model).

But do these two positions actually exist as such?

The problem is that Zimmerman et al. do not call for "policing" that I could see. Nor do they concern themselves with "labels." And while they do use the unfortunate term "formalizing" in several key locations, including the title (!), a reading of their position—what they claim, the expert stance behind those claims, the theoretical and methodological underpinnings by which those claims were made possible, and the pragmatic goals of such claims—suggests a much less controversial project.

Zimmerman et al. spoke to RtD practitioners about their accomplishments and challenges; they sought to gather and organize the accomplishments into a theoretical perspective that others could leverage; they sought to acknowledge practitioner challenges and envision how design research might help them overcome them, in a way that reflects their voices (i.e., the interviews) and their projects (i.e., the exemplars that they co-identified). This doesn't sound like "policing" to me; neither does it sound like natural science. It also doesn't sound like "formalizing" as I understand the word—I personally wish Zimmerman et al. hadn't used that term, or at least had clarified how they were using it, because I think it opened the door to that reactive interpretation that they are "policing."

The upshot of all of this is that once they get down to the business of their own contribution, Markussen et al.'s project looks to me like a welcome extension/expansion of Zimmerman et al.'s. Markussen et al. for instance argue that research on RtD needs to do a better job of attending to "how evaluation is actually being practiced within design research itself" (p. 2). And while I would say that Zimmerman et al. in fact did pay attention to that, nonetheless Markussen et al. paid more attention to it, and offered a more substantive account of that evaluation (they identified five different methods of RtD evaluation). Theirs is an original contribution, one that I intend to try out in my practice (which is probably the highest compliment I can give). And it is based in a reasonable critique of where Zimmerman et al. left off.

What I am pointing to in this instance—but I have seen it elsewhere in design research—is a tendency to offer straw man accounts of prior work that I believe are derived from reactive interpretations rather than a sober and scholarly account of what those positions actually were.

I hope that this blog post provides some practical and actionable guidelines to help design researchers avoid this problem.

If we cannot avoid the problem, we face the consequence of obfuscating where the agreements and disagreements actually are. We are encouraged to fight hobgoblins that aren't even there. At its worst (perhaps—hopefully—not in this example), it sows controversy and division. This can undercut the research community's shared desire to find common ground and learn together.


Part of the discipline of humanistic argumentation is taking others' positions seriously, even when one wants to criticize them.

To do so, one must first adequately characterize the position as such: its claims structure, its speaking voice and stance, its theoretical and methodological underpinnings, and its pragmatic purposes (and consequences).

An important goal of doing so is to ensure that everyone—including the original authors—is on the same page, that is, that this really is what that earlier position entailed, before the critique begins.

We want a community of learning, one that can accommodate informed disagreements, but we do not want a circular firing squad.

NOTE: This post was lightly edited and reblogged from


Jeffrey Bardzell

Jeffrey Bardzell is an associate professor of human-computer interaction design and new media in the School of Informatics and Computing at Indiana University, Bloomington.

Technological determinism

Authors: Jonathan Grudin
Posted: Wed, March 09, 2016 - 1:21:43

Swords and arrows were doomed as weapons of war by the invention of a musket that anyone could load, point, and shoot. A well-trained archer was more accurate, but equipping a lot of farmers with muskets was more effective. Horse-mounted cavalry, feared for centuries, were also eliminated as a new technology swept across the globe, putting people out of work and prior technologies into museums.

Are we in control of technology, or at its mercy?

Concerns about technology predated computers, but they proliferate as digital technology spreads through workplaces and homes and into our clothing and bodies. We design technology. Do we shape how it is used?

Technological determinism, also called the technological imperative, became a computer science research focus when organizations began acquiring data processing systems half a century ago. In an excellent 1991 Communications of the ACM review titled “Examining the Computing and Centralization Debate,” Joey George and John King note that the initial studies produced conflicting hypotheses: (i) computers lead to the centralization of decision-making in organizations, and (ii) computers lead to decentralization of decision-making. This contradiction led to two new hypotheses: (iii) computerization is unrelated to centralization of organizational decision-making; (iv) management uses computerization to achieve its goals. George and King found that a fifth theory best fit the results: (v) management tries to use computerization to achieve its goals; sometimes it succeeds, but environmental forces and imperfect predictions of cause and effect influence outcomes. They concluded, “the debate over computing and centralization is over.”

In a 1992 paper in Organization Science titled “The Duality of Technology: Rethinking the Concept of Technology in Organizations,” Wanda Orlikowski applied the structuration theory of sociologist Anthony Giddens to technology use and reached a similar conclusion. Giddens argued that human agency is constrained by the structures around us—technology and sociocultural conventions—and that we in turn shape those structures. Software, malleable and capable of representing rules, is especially conducive to such analysis.

These were guardedly optimistic views of the potential for human agency. Today, media articles that raise concerns such as oppressive surveillance and the erosion of privacy, excessive advertising, and unhealthy addiction to social media conclude with calls to action that assume we are in control and not in the grip of technological imperatives. How valid is this assumption? Where can we influence outcomes, and how?

It’s time to revisit the issue. Twenty-five years ago, digital technology was a puny critter. We had no Web, wireless, or mobile computing. Few people had home computers, much less Internet access. Hard drives were expensive, filled up quickly, and crashed often. The determinism debate in the early 1990s was confined to data and information processing in organizations. The conclusion—that installing a technology in different places yielded different outcomes—ruled out only the strongest determinism: an inevitable specific effect in a short time. That was never a reasonable test.

Since then, the semiconductor tsunami has grown about a million-fold. Technology is woven ever more deeply into the fabric of our lives. It is the water we swim in; we often don’t see it; we do not link effects to their causes. Whether our goal is to control outcomes or just influence them, we must understand the forces that are at work.

Technology is sometimes in control

The march of digital technology causes extinctions at a rate rivalling asteroid collisions and global warming—photographic film, record players, VCRs, rotary dial phones, slide carousels, road maps, and encyclopedias are pushed from the mainstream to the margins.

This isn’t new. The musket was not the first disruptive technology. Agriculture caused major social changes wherever it appeared. Walter Ong, in Orality and Literacy: The Technologizing of the Word, argued that embracing reading and writing always changes a society profoundly. The introduction of money shifted how people saw the world in fairly consistent ways. With a risk of a computer professional’s hubris, I would say that if any technology has an irresistible trajectory, digital technology does. Yet some scholars who accept historical analyses that identify widespread unanticipated consequences of telephony or the interstate highway system resist the idea that today we are swept in directions we cannot control.

Why it matters

Even the most beneficial technologies can have unintended side effects that are not wonderful. Greater awareness and transparency that enable efficiency and the detection of problems (“sunlight is the best disinfectant”) can erode privacy. Security cameras are everywhere because they serve a purpose. Cell phone cameras expose deviant behavior, such as that perpetrated by repressive regimes. But opinions differ as to what is deviant; your sunshine can be my privacy intrusion.

Our wonderful ability to collaborate over distances and with more people enables rapid progress in research, education, and commerce. The inescapable side effect is that we spend less time with people in our collocated or core communities. For millions of years our ancestors lived in such communities; our social and emotional behaviors are optimized for them. Could the erosion of personal and professional communities be subtle effects of highly valued technologies?

The typical response to these and other challenges is a call to “return to the good old days,” while of course keeping technology that is truly invaluable, without realizing that benefits and costs are intertwined. Use technology to enhance privacy? Restore journals to pre-eminence and return conferences to their community-building function? Easier said than done. Such proposals ignore the forces that brought us to where we are.

Resisting the tide

We smile at the story of King Canute placing his throne on the beach and commanding the incoming tide to halt. The technological tide that is sweeping in will never retreat. Can we command a halt to consequences for jobs, privacy, social connectedness, cybercrime, and terrorist networks? We struggle to control the undesirable effects of a much simpler technology—modern musketry.

An incoming tide won’t be arrested by policy statements or mass media exhortations. We can build a massive seawall, Netherlands-style, but only if we understand tidal forces, decide what to save and what to let go, budget for the costs, and accept that an unanticipated development, like a five-centimeter rise in ocean levels, could render our efforts futile.

An irresistible force—technology—meets an immovable object—our genetic constitution. Our inherited cognitive, emotional, and social behaviors do not stand in opposition to new technology; together they determine how we will tend to react. Can we control our tendencies, build seawalls to protect against the undesirable consequences of human nature interacting with technologies it did not evolve alongside? Perhaps, if we understand the forces deeply. To assert that we are masters of our destiny is to set thrones on the beach.

Examples of impacts noticed and unnoticed

Surveillance. Intelligence agencies vs. citizens, surveillance cameras vs. criminals, hackers vs. security analysts. We are familiar with these dilemmas. More subtly, the increased visibility of activity reveals ways that we routinely violate policies, procedures, regulations, laws, and cultural norms—often for good reason. Rules may be intended only as guidelines, and they are rarely flexible enough to be efficient in every situation.

Greater visibility also reveals a lack of uniform rule enforcement. A decade ago I wrote:

Sensors blanketing the planet will present us with a picture that is in a sense objective, but often in conflict with our beliefs about the world—beliefs about the behavior of our friends, neighbors, organizations, compatriots, and even our past selves—and in conflict with how we would like the world to be. We will discover inconsistencies that we had no idea were so prevalent, divergences between organizational policies and organizational behaviors, practices engaged in by others that seem distasteful to us.

How we as a society react to seeing mismatches between our beliefs and policies on the one hand and actual behavior on the other is key. Will we try to force the world to be the way we would like it to be? Will we come to accept people the way they are?

Community. Computer scientists and their professional organizations are canaries in the coal mine: early adopters of digital technology for distributed collaboration. Could this terrific capability undermine community? The canaries are chirping.

In “Technology, Conferences, and Community” and “Journal-Conference Interaction and the Competitive Exclusion Principle,” I described how digital document preparation and access slowly morphed conferences from community-building to archival repositories, displacing journals. Technology enabled the quick production of high-quality proceedings and motivated prohibition of “self-plagiarizing” by republishing conference results in journals. To argue that they were arbiters of quality, conferences rejected so many submissions that attendance growth stalled and membership in sponsoring technical groups fell, even as the number of professionals skyrocketed. Communities fragmented as additional publication outlets appeared.

Community can be diminished by wonderful technologies in other ways. Researchers collaborate with distant partners—a great benefit—but this reduces the cohesiveness of local labs, departments, and schools. This often yields impersonal, metrics-based performance assessment and an overall work speed-up, as described in a study now being reviewed.

Technology transformed my workplaces over the years. Secretarial support declined, an observation confirmed by national statistics. In my first job, a secretary was hired for every two or three entry-level computer programmers to type, photocopy, file, handle mail, and so on. (Programs were handwritten on code sheets that a secretary passed on to a keypunch operator who produced a stack of 80-column cards.) Later at UC Irvine, our department went from one secretary for each small faculty group to a few who worked across the department. Today, I share an admin with over 100 colleagues. I type, copy, file, book travel, handle mail, file my expense reports, and so forth. 

Office automation is a technology success, but there were indirect effects. Collocated with their small groups, secretaries maintained the social fabric. They said “good morning,” remembered birthdays, organized small celebrations, tracked illnesses and circulated get-well cards, noticed mood swings, shared gossip, and (usually) admired what we did. They turned a group into a small community, almost an extended family. Many in a group were focused on building reputations across the organization or externally; the professional life of a secretary was invested in the group. When an employer began sliding toward Chapter 11, I knew I could find work elsewhere, but I continued to work hard in part because the stressed support staff, whom I liked, had an emotional investment and few comparable job possibilities.

We read that lifetime employment is disappearing; it involved building and maintaining a community. We read less about why it is disappearing, or about the possible long-term consequences of eroding loyalties for the well-being of employees, their families, and their organizations.

The road ahead

Unplanned effects of digital technology have not gone unnoticed. Communications of the ACM publishes articles decrying our shift to a conference orientation and deficiencies in our approach to evaluating researchers. Usually, the proposed solutions follow the “stop, go back” King Canute approach. Revive journals! Evaluate faculty on quality, not quantity! No consideration is given to the forces that pushed us here and may hold us tight.

Some cultures resist a technology for a time, but globalization and Moore’s law give us little time to build seawalls today. We reason badly about exponential growth. An invention may take a long time to have even a tiny effect, but once it does, that tiny effect can build with astonishing speed into a powerful one.
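Why we reason badly about exponential growth can be made concrete with a back-of-the-envelope calculation (illustrative numbers only, not data from this essay): an adoption curve that doubles yearly looks like nothing is happening for years, then crosses a majority almost overnight.

```python
# Illustrative only: a hypothetical technology adopted by 0.01% of a
# population, with adoption doubling each year.
adoption = 0.0001  # 0.01% -- an arbitrary starting point
for year in range(1, 20):
    adoption = min(1.0, adoption * 2)
    if adoption >= 0.5:
        # The majority threshold is crossed at year 13 -- yet for the
        # first nine years adoption never exceeds 5%.
        print(f"Year {year}: {adoption:.0%} -- majority reached")
        break
    print(f"Year {year}: {adoption:.2%}")
```

The long, flat early stretch is exactly what lulls a King Canute into thinking the tide can still be commanded.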

Not every perceived ill turns out to be bad. Socrates famously decried the invention of writing. He described its ill effects and never wrote anything, but despite his eloquence, he could not command the tide to stop. His student Plato mulled it over, sympathized—and wrote it all down! We will likely adjust to losing most privacy—our tribal ancestors did without it. Adapting to life without community could be more challenging. We may have to endure long enough for nature to select for people who can get by without it.

The good news is that at times we do make a difference. Muskets doomed swords and arrows, but today, different cultures manage guns differently, with significant differences in outcomes. Rather than trying to build a dike to stop a force coming at us, we might employ the martial art strategy of understanding the force and working with it to divert it in a safe direction.

Posted in: on Wed, March 09, 2016 - 1:21:43

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.
View All Jonathan Grudin's Posts

Post Comment

@ComputerWorld (2016 03 14)

I think technology makes us more human because we can control it; we are not at its mercy. I also do not think it will take everything over from humans.

How efficient can one be? Productivity and pleasure in the 21st century

Authors: Aaron Marcus
Posted: Wed, February 17, 2016 - 11:40:51

Once, I had to take part in an international conference call at 9 a.m. West Coast time, but I had also scheduled my semi-annual dental appointment at 9 a.m. What to do? Well . . . it seemed straightforward. I would bring my mobile phone to the dentist together with my new Bluetooth earpiece, and listen in to the conference call while the dental hygienist worked over my molars. Simple. A combo business-personal moment joined efficiently.

I asked my regular dental assistant, Tanya (it seemed amazing that she had been a constant in the office for five years; even the dentist in charge of the firm had retired and been replaced with another). The request seemed only slightly unusual to her; she laughed, but was fine with my trying to combine two important things at once.

So while she set up her equipment, I set up mine. The only thing I hadn’t counted on was that the ear hook of the Bluetooth earpiece was meant to be used with an ear in the vertical position. The device dangled precariously from my head in my supine repose. Nevertheless, it seemed it would stay put while I dialed in. I had informed the leader of the conference call in advance (our group was planning a worldwide event) that I would be in a dentist’s chair with my mouth preoccupied and would be only listening, not speaking. She was fine with that, also, and thought it funny. As we started the call, she explained my situation to the participants. They all tittered briefly, then we got on with our call. 

I found there were several advantages to my otherwise constrained situation. Having the phone call to distract me kept my mind preoccupied. I noticed even less than normal the dental technician’s working over my gums with that little hooked, pointed metal device that I sometimes dread. All in all, my teeth are in pretty good shape (no cavities!), so I can’t complain, but as she did her thorough, scrupulous cleaning, it was not always pleasant. 

Meanwhile, my focus on the conversation actually seemed to be improved. It seemed more concentrated, because I could not speak! I could hear “inside the speakers’ comments” and seemingly tracked about three levels of the conversation: the surface or literal level, some implications for myself, and the speaker’s likely background intentions. I suddenly realized I had better write down a few notes. Uh, oh. I had forgotten to plan for that activity. Fortunately, while the dental assistant was cleaning my bicuspids, I found the pen I keep with my phone in my left-breast shirt pocket, and I pulled a piece of scrap paper out of my wallet; I think it was a cinema ticket-charge receipt. Perfect. I jotted down some notes that I used later to write up some comments and questions to the leader after the meeting. It had all gone pretty well.

I suggested to Tanya that the dental office might suggest to patients that they combine activities to make their day more efficient, and more effective in distracting them from whatever fear or pain they might be experiencing. I suggested that manicures and pedicures might be a good additional service offering, much as some Beverly Hills dental spas, or spa-equipped dental offices, now offer.

Then I got to thinking. I had given myself my own once-a-month haircut that morning before going to the dentist. This ritual involved fitting an electric buzz-cutter with medium, short, and finally no plastic clip-on attachments, and dancing the device around my dome, progressively removing unwanted tufts of what was left of my former waves and curls. Once my hair was long, dark-brown, even braided down my back almost to my hips in the 1960s. Now, sigh, my few remaining hairs were short, gray, and mostly non-existent, with a bare patch growing ever larger starting at my crown. Ah, age. Anyway, the entire procedure takes about 15 minutes. So . . . why couldn’t this activity be done, also, while I was in the chair?

After the dentist’s visit, I had also planned to visit a women’s hairdressing and manicure salon to get a pedicure. I know. A perfect metrosexual man’s morning activity. Friends always eyed me a little suspiciously when I told them, occasionally, where I’d been. Ah, age; my feet seem so far away from my head and hands now. So after the dentist, I stopped in to see if I could get my once-a-month treatment. The place was empty. Great. I settled back into the big, comfortable chair. The assistant reminded me to turn on the back massage, and suddenly the chair sprang to life with strong, rhythmic vibrations, while the little lights on the chair’s control device blinked on and off to remind me of what was happening. As I leafed through the many women’s beauty, gossip, and healthcare magazines, looking for some additional wisdom that would explain how to understand women better, I thought: so this is why women enjoy going to beauty parlors and manicurists! It’s good to be treated like a queen, or a king, or maybe a princess or a prince, as the case may be. When the assistant began to massage my calves and feet, I was reminded: I am definitely hooked on this healthcare/beauty treat. I usually, but not always, pass up the offers of manicures; after all, a manly man can trim these things with a nail clipper or wire cutter, right? Down at the other end of my body, it’s harder to get my hands and feet in the right out-of-body position, and I’ve grown to enjoy this pedicure experience tremendously.

So . . . I began to think: wait a minute. What if all of this could be done at once? Then I could have collapsed about two to three hours into just one. What a savings of time, if not money! I began to wonder: what other activities could be added? 

Then it occurred to me: there should be some sort of Guinness Book of World Records competition for the highest number of things that someone could accomplish at once while lying in a reclining chair at some healthcare service center. (Or maybe in an Aeron office chair, for Google employees.) 

After all, many times over the past decade, when I am sitting at my desk, I am looking at my computer screen, but also above that at the high-definition video screen that is fed the DirecTV satellite or cable feeds with a few of about 1000 stations. I might be listening to classical music coming in from Europe that is broadcast over the speakers attached to the computer, while I might have two headsets on, one for each of the two phones nearby, with the mobile phone earpiece squeezed in under one of them, trying to field three conversations, while I review some email message from among the 200 to 500 that come in daily, and notice that someone is trying to reach me by voice-over-Internet. The Skype icon bleats for my attention and a new “boing” enters my consciousness, which means I have to free up at least one ear to add on the special Skype headset with microphone. 

I know. This all sounds a bit complicated. It is. Sometimes I get a bit confused about to whom I am speaking, what with having to quickly do multiple sequential mutes and un-mutes as I try to field questions or reply to comments from two or three people.

So I am not new to trying to juggle several things at once. Well, what would it take to max out the number of things happening in a reclining chair? Here’s what might be happening at once:

  • Back massage via the built-in chair massager
  • Body herbal wrap, somehow leaving the mouth, head, chin, arms, and feet exposed, as necessary
  • Conference call in one ear (at least!) via Bluetooth earpiece, with note-taking equipment on my lap
  • Dental care, but not x-rays
  • Haircut and beard trim, taking care that things don’t fall into my open eyes or mouth
  • Manicure
  • Music coming into the other ear, or perhaps the audio feed from the video image on the wall or the videophone signal coming in over the Internet
  • Pedicure

I suppose I could also be fed intravenously, or some additional skillful person could be performing liposuction or other minimally invasive laparoscopic surgery that doesn’t require general anaesthesia, but I think we are entering the extreme zone of zaniness here. 

As an additional contextual challenge, all of these services might be happening in a Spa-Bus or Spa-Limo, or even in my own vehicle, since I shan’t have to be bothered with driving, and the other passengers might be the specialists providing entertainment, nutritional, and other healthcare services.

Is there no end to this quest for efficiency combining productivity and pleasure in our lives of the early decades of the 21st century? Are there cross-cultural versions that may add additional, unexpected activities, like my simultaneously grinding white, spicy roots on shark skin to make hot, green wasabi paste or undergoing acupuncture? I am sure some mathematician or cognitive scientist may be able to prove that there is some topological limit to what can be accomplished, like the four-color map problem. Until then we can dream, can’t we, of beating the limit? We may have a new challenge to worldwide creativity. I await the latest results on some Internet blog.

Note: This text was originally composed in 2005 and has been slightly edited to update the document.

Posted in: on Wed, February 17, 2016 - 11:40:51

Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.
View All Aaron Marcus's Posts

Post Comment

No Comments Found

A dark pattern in humanistic HCI

Authors: Jeffrey Bardzell
Posted: Tue, February 09, 2016 - 3:52:11

I have noticed a dark pattern among papers that align themselves with critical or humanistic approaches to HCI. I myself have been guilty of contributing to that pattern (though I am trying to reform). But I still see it all the time as a peer reviewer and also as a Ph.D. supervisor.

And since I spend so much time evangelizing humanistic HCI, I thought it might also be good to point out one of its dark patterns, to encourage critical/humanist HCIers not to do it, and to encourage reviewers to call this out and use it as an argument against accepting the paper.

And of course I want to offer a positive way forward instead.

The dark pattern is:

"I love a critical theory/author; you in HCI should change your practice to use it, too."

Characteristic features of this dark pattern include the following:

  • An assertion of how naive HCI is for not having known this theory all along.
  • No acknowledgement of, or engagement with, the thousands of critical/humanistic papers in HCI.
  • No articulation of a research problem or question that HCI is already asking or expressing within a given HCI domain (e.g., personal informatics, HCI4D, user experience).
  • In other words, the research problem is expressed something like this: "I can't believe you ignoramuses don't know what Deleuze says about the Movement Image, but oh man, if you did, your HCI would be as hep as a Robbe-Grillet novel; happily, I am here to teach Deleuze to you; I've got a whole page devoted to his thought." (I might be exaggerating for effect, but seriously, sometimes this is how it sounds.)
  • A writing style that resembles that of a drunk Derrida trying to impress someone.
  • A references list full of Heidegger, Foucault, Merleau-Ponty, and Deleuze.
  • A references list that literally cites no HCI papers at all.
  • A response to one's inevitable rejection by saying that HCI is too stupid or too bigoted against the humanities to get it (tip: it is not).

Now, there is a light pattern version of this that I would argue for instead.

This version completely buys into the project of importing and evangelizing on behalf of this or that critical theory. But it seeks to do so in a dialogic, rather than imperialistic, way.

The light pattern is:

"I understand current HCI practice in domain D to be Dprax. I understand that Dprob is a known problem/challenge in Dprax. I hope to contribute to Dprax by turning to critical theory T to help understand better/clarify what is actionable about/reframe Dprob."

Characteristic features of this light pattern include the following:

  • Respect for prior HCI research as rigorous, informed, and of high scholarly quality, even if it doesn't seem aware of or make good use of your favorite theory or its ilk.
  • A focus on what that prior research itself articulates as a problem, gap, opportunity, challenge, or whatnot.
  • A generously cited and clearly articulated statement of the state of the art and its known challenges/problems. Your HCI readers, which might include the people you are citing, must be able to see themselves in your characterization; they must agree, within reason, that you've characterized their work and their challenges in a fair way.
  • A positioning of your theory as potentially helping others grapple with that challenge or problem. It won't solve it, so don't say it will.
  • A clear introduction to your theory in a way that is focused on the HCI problem domain you are trying to address. Do not offer a general "history of philosophy" overview of the theory; this is not Philosophy 101. Point readers to the Stanford Encyclopedia of Philosophy or a Cambridge Companion To and get to the point.
  • Write in a style that is accessible to HCI readers. It is OK to push stylistic boundaries and be somewhat challenging to readers—you want to be true to your critical-humanist self. But remember that you are inviting the community to take up your perspective; your writing should feel like an invitation, not a Gallic Howler.
  • You probably should have three different kinds of references:
    • HCI references in your domain of inquiry (personal informatics, crowdsourcing, sustainability, design fictions)
    • HCI references of a critical-humanistic nature (to align your approach with an established way of doing in HCI)
    • Primary and secondary references pertaining to your external theory

Humanistic approaches to HCI should be generous, dialogic, critical, and engaging. They should not be imperialistic power moves that condescend to HCI.

Posted in: on Tue, February 09, 2016 - 3:52:11

Jeffrey Bardzell

Jeffrey Bardzell is an associate professor of human-computer interaction design and new media in the School of Informatics and Computing at Indiana University, Bloomington.
View All Jeffrey Bardzell's Posts

Post Comment

No Comments Found

Wrong about MOOCs

Authors: Jonathan Grudin
Posted: Thu, January 28, 2016 - 4:04:01

This blog began in January 2013. There was a quid pro quo: You take the time to read my informal posts on a range of topics; I post observations only after convincing myself that they are viable. So far, only one has not held up: the first, January 2013’s “Wrong about MOOCs?” The third anniversary of the blog and the post is an occasion to review what went wrong.

In 2012, MOOCs made headlines. The concept and acronym for “massive open online course” were around earlier, but this was the year that the Coursera, Udacity, and edX platforms were founded by leading researchers from top universities. Little else was discussed in July 2012 at the biennial Snowbird conference of computer science deans and department chairs. In the opening keynote, Stanford’s President John Hennessy forecast that MOOCs would quickly decimate institutions of higher learning, leaving only a handful of research universities. He announced that by embracing MOOCs, he would see that Stanford was among the survivors.

I was sceptical. Completion rates were low. Institutions don’t change quickly. But by the time I wrote my first blog post several months later, I had drunk the Kool-Aid. Why?

There was a fateful roulette wheel spin. I randomly selected a MOOC to examine, Charles Severance’s "Internet History, Technology, and Security." I was impressed—and assumed it was typical. Yes, completion rates were low, but as we learned with the Internet and Web, attrition doesn’t matter at all when growth is exponential, and the potential for collecting data and sharing practices made contagion seem promising. The problem was that I stopped with Severance’s course. Had I examined others, I would have found that he was exceptional: experienced, talented, and dedicated. I over-generalized.

The student demographic data presented in Severance’s final lecture might have raised a flag. Most were college graduates. I thought, “Well, history is more interesting to people who lived through it.” But MOOCs still mainly attract people who have finished their formal schooling. They have not made the strong inroads into undergraduate education that Hennessy and many others, including me, expected.

I overlooked changes in the undergraduate experience since my student days decades earlier. Many of us had arrived in college less informed about career possibilities. We were exploring. More students today arrive with specific career paths, no doubt for good reason. More of them work as they study. Focusing on university requirements, they may complain about the number of required courses and not volunteer to take additional online courses.

And high schools! I thought that secondary school students would take MOOCs in favored topics and outperform college or university students, transforming their image of higher education and forcing change. This was wrong for several reasons. With few undergraduate role models in MOOCs, comparisons aren’t possible. More to the point, I went to school before AP courses existed. At that time a MOOC could have been a godsend. Today, ambitious high school students pile on AP courses and take classes at community colleges. The existing system isn’t perfect, but it won’t change quickly, and it leaves no time for MOOCs.

I concluded the essay by looking ahead nine months: “The major MOOC platforms launched in 2012 claim a few million students, but … if we count students who do the first assignment and are still participating after a week, as we do in traditional courses, enrollment is much lower... The 2013-2014 academic year will provide a sense of how this will develop… The novelty effect will be gone. Better practices will have been promulgated. If there are fewer than 10 million students, the sceptics were right. More than a hundred million? Those who haven’t yet thought hard about this will wish they had.”

How did it turn out? About 10 million at the end of 2013 and 20 million at the end of 2015. This is fine growth, but it appears to be linear. Therefore, overly generous attendance criteria and low completion rates matter. MOOCs are primarily supplemental education, not replacement. Hennessy retracted his forecast. The university administrator panic of 2012 subsided.

Cheating turned out to be more of an issue than I expected. I thought that applicants listing MOOCs would be more carefully screened to confirm that they knew the material. Interviewers could inspect MOOC content or contact instructors. For an applicant to claim knowledge and then have their ignorance exposed in an interview would indicate mendacity or poor scholarship. However, employers would rather other people do the screening, and certification became part of the business model for MOOC providers. It could save prospective employers from having to confirm knowledge, but that provides an incentive to cheat. Careful studies show significant levels of cheating. Some of it is sophisticated. Just as a game player can log on as multiple characters who give accumulated materials to one among them who then quickly becomes powerful, some people log on as multiple students, guessing at test questions to determine the correct answers, which are fed to the real “student” who does well. One person automated this process, acing courses without doing any work at all. A researcher associated with a MOOC provider noted sheepishly that this student showed considerable talent.

Although some MOOC platform founders have moved on, their companies are active, and other large-scale online education efforts have surfaced. A sustainable niche has formed. Research and applied experimentation improve practice; innovative instructors are advancing the medium. Growth may be gradual, with existing institutions accommodating rather than being disrupted by the changes.

MOOCs as a context for research

MOOCs are a great setting for experimental research. With hundreds or thousands of participants, students can often be randomly assigned to different conditions with ease and no ethical downsides. Small-group projects can employ various criteria for grouping students to try different interventions. Students must agree in advance to participate, but instructors can devise assignments and interventions, all of which might improve performance, and find out which ones do, thus not unfairly disadvantaging anyone. For example, will homogeneous or diverse groups do better? Studies in other settings have found that it can vary based on the nature of the collaboration and the dimensions on which homogeneity or heterogeneity are measured. Such issues can be explored far more easily and rapidly in MOOCs than in similar classroom research of the past. Three years ago I felt that MOOCs would thrive because of the ability to iterate and move forward. I underestimated the workload in preparing a good online course, which may leave little time to add a significant research component. But a research literature is appearing.
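The randomized assignment described above is trivial at MOOC scale. A minimal sketch (the student IDs, group size, and function name here are hypothetical illustrations, not any platform’s actual API):

```python
import random

def assign_groups(student_ids, group_size, seed=0):
    """Randomly assign students to small groups of a fixed size.

    Shuffling before slicing gives every student an equal chance of
    landing in any group -- the basic requirement for a randomized
    experiment comparing, say, homogeneous vs. diverse groupings.
    """
    rng = random.Random(seed)  # fixed seed so an assignment is reproducible
    ids = list(student_ids)
    rng.shuffle(ids)
    return [ids[i:i + group_size] for i in range(0, len(ids), group_size)]

# With a thousand enrollees, half the groups could receive one
# intervention and half another, with no student disadvantaged in
# advance (each intervention is plausibly beneficial).
groups = assign_groups(range(1000), group_size=5)
condition_a = groups[: len(groups) // 2]
condition_b = groups[len(groups) // 2 :]
```

At classroom scale, a single comparison like this would take a semester per cell; at MOOC scale, many cells run in parallel within one offering.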

Reflections on blogging

At the end of each year, I’ve reflected on the blogging experience. When I began in 2013 I thought I had about a year’s accumulation of thoughts that could be useful to others. The discipline of posting monthly was perfect, forcing me to budget time and instilling a habit of noticing an email exchange or news story that could be developed into a suitable post. Each essay has been work, but only occasionally hard work; mostly it has been fun. Had I not committed to a monthly post, the activity would have shifted to a back burner and then fallen off the stovetop altogether, as it did for most of my fellow bloggers. There is not much feedback and reinforcement for blogging, so it has to be intrinsically motivated. When the goal is to be useful for others, this is not encouraging.

I have enjoyed many Interactions blog posts by others and wish there were more. They are not peer-reviewed. This can be a benefit: one can ask friends or respected colleagues to comment. But it makes blogging an expenditure of time that does not figure in widely recognized productivity metrics. For me, it is a way to organize thoughts and consider where I could focus more rigorously in the future. And the posts are sometimes read by some people—you, for example.

Posted in: on Thu, January 28, 2016 - 4:04:01

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.
View All Jonathan Grudin's Posts

Post Comment

@Lilia Efimova (2016 02 02)

I enjoy reading your blog posts, but rarely comment - mainly because they are “food for thought” that doesn’t necessarily provoke an immediate reaction, but might surface eventually. I can imagine this could be similar for others. And, because it’s not peer-reviewed and might be tangential to one’s work it is not likely to get cited, so you don’t get feedback in a way a published article would.

All of which is next to the fact that the blogging ecosystem with RSS reading and citation seems to be almost dead, which I find a pity. I tend to agree with Hossein Derakhshan (e.g. in his Guardian article) about the dynamics of social networks killing the linking ecosystem, but would be interested to hear how you see it.

Designing functional design teams

Authors: Ashley Karr
Posted: Tue, January 19, 2016 - 4:10:30

My Top Three Takeaways from Years of Researching and Designing Functional Teams:

  1. Dysfunction in a design is a direct reflection of dysfunction in a team and/or organization.
  2. People should come before protocol, because when protocol comes before people, dysfunction occurs. 
  3. When people have control and power over their workplace and how they work, functional teams have space to grow and thrive of their own accord.

Functional Team Plan Brainstorm by Hatim Dossaji, Jill Morgan, and Mary Pouleson


For the past several years, I have been trying to find the answer to the question “How can a functional design team be created and maintained?” I have come to the conclusion that a completely functional design team is not possible, because it seems that “function” is circular rather than linear. A completely functional team would be Spock-like and thus, paradoxically, dysfunctional. However, we can minimize dysfunction and create enough function that we put out decent work, get along with our teams well enough, and at least somewhat enjoy our careers. 

The rest of this article will explain how I came to this conclusion and what I do with my teams to make them as functional as humanly possible. Please note as you read this that there is huge room for improvement in my methods and opinions, and any insights that readers have are much appreciated. Please add them to the comments section below or get in touch with me directly. Thank you in advance.


Current data and resources regarding designing functional design teams are sparse. What data and resources do exist are not very applicable and/or actionable for design professionals. Most resources that I found came from management- and business-driven studies and were predicated upon the concept of squeezing as much productivity from employees as possible to turn greater profits. As a design professional and as a humanist, I am more interested in creating team environments where people thrive and put out great work. I believe profits are side effects of compassion and quality, and so I make them my priority and bottom line—not money.

The following is a list of useful data and resources that I found and use on a regular basis:

  • The book Crucial Conversations by Joseph Grenny, Kerry Patterson, and Ron McMillan and the associated website
  • The book Let My People Go Surfing by Yvon Chouinard
  • The book Discussing Design by Aaron Irizarry and Adam Connor
  • The book Designing Together by Dan Brown


I have drawn two overarching conclusions from these resources. In order to create functional teams: 

  1. Team members should have control over their fate in the workplace.
  2. Team members should be empowered to create and generate work, collaborate, relate, interact, socialize, and handle conflict on their own terms.

This reminds me of the famous quote by General George S. Patton: “Don't tell people how to do things, tell them what to do and let them surprise you with their results.” It seems that if design professionals are empowered to have control over how they work and how they interact with their team members, things will be more functional than if they are disempowered and told exactly how to do things and how to interact with their team members by another party. 

When I first began managing teams, I tried to solve every conflict amongst my team members. As I matured as a manager, I told my team members that they were adult professionals and they should be able to handle conflict on their own. As a result, there was less conflict all around. (I do have a caveat that if they cannot resolve the conflict on their own and they have to come to me, they waive their rights, and I get to decide their fate.) 

The Functional Team Plan

Recently, I created something I call the Functional Team Plan. I liken it to a project plan that takes into account emotional intelligence. It helps teams set their emotional tone, sculpt their culture, and form their communication and conflict resolution styles. My teams must draw it up at the same time they draw up their project plan and turn both in before our first formal design evaluation, during which we go over both of these documents. The Functional Team Plan includes the following:

Section 1: Individual Style

  • Each team member’s style of stress response (see Crucial Conversations)
  • Each team member’s personality type and how this affects how they are in the workplace (see 16personalities)
  • Each team member’s creative problem solving style (see Basadur)

Section 2: Identify, Define, and Describe (IDD)

  • Dysfunctional teams
  • Functional teams
  • How to move from dysfunctional teams to functional teams
  • How to create and maintain functional teams

Section 3: Plan

  • Communication plan
  • Decision-making plan
  • Conflict resolution plan

Once I created this loose structure for my teams, I found that they were able to handle conflicts on their own and in more respectful ways so that their projects developed at a good clip and their relationships with their teammates deepened. I believe the reason why this happened is because we removed the taboos and stigmas regarding talking about and dealing with conflict. We accept the fact that working on teams can be hard and conflict is bound to happen when people interact with each other on a regular basis. We make the hard things part of the conversation from the inception of a project. This right here is the gem—the heart of a functional team.

I will wrap up this article with my outline for the activity I take my teams through to create their Functional Team Plan. If anyone reading this article decides to run this workshop and the Functional Team Plan, please contact me. I am happy to give any help I can, and I would like to know how it works out for you.

Developing the Functional Team Plan Workshop 

  1. Overview (Total workshop time 2.5 - 3 hours)
    1. Step 0 - Take the 16personalities test, complete your Basadur profile, and complete your stress assessment from Crucial Conversations before this activity begins
    2. Step 1 - IDD Dysfunctional Teams
    3. Step 2 - IDD Functional Teams
    4. Step 3 - IDD How to Move from Dysfunctional to Functional Teams
    5. Step 4 - IDD How to Create and Maintain Functional Teams
    6. Step 5 - Discuss Step 0 results
    7. Step 6 - Create your communication, decision making, and conflict resolution plans
    8. Step 7 - Submit your Functional Team Plan
  2. Step 1: IDD Dysfunctional Teams (10 minutes)
    1. Diverge - 5 minutes
      1. identify, define and describe dysfunctional teams on post-its
      2. you can use words, phrases, examples, stories
    2. Converge - 5 minutes
      1. create an affinity diagram and find emergent themes
      2. capture these themes to help you create your Functional Team Plan
  3. Step 2: IDD Functional Teams (10 minutes)
    1. Diverge - 5 minutes
      1. identify, define and describe functional teams on post-its
      2. you can use words, phrases, examples, stories
    2. Converge - 5 minutes
      1. create an affinity diagram and find emergent themes
      2. capture these themes to help you create your Functional Team Plan
  4. Step 3: IDD Dysfunctional > Functional Teams (10 minutes)
    1. Diverge - 5 minutes
      1. identify, define and describe how to move from a dysfunctional team to a functional team on post-its
      2. you can use words, phrases, examples, stories
    2. Converge - 5 minutes
      1. create an affinity diagram and find emergent themes
      2. capture these themes to help you create your Functional Team Plan
  5. Step 4: Creating and Maintaining Functional Teams (10 minutes)
    1. Diverge - 5 minutes
      1. identify, define and describe how to create and maintain a functional team on post-its
      2. you can use words, phrases, examples, stories
    2. Converge - 5 minutes
      1. create an affinity diagram and find emergent themes
      2. capture these themes to help you create your Functional Team Plan
  6. Step 5 - Discuss Step 0 Results (30 minutes)
    1. Each team member explains their results from 16personalities, Basadur, and Crucial Conversations stress assessments so team members can understand their personality type, problem solving approach, and how they react to stress.
  7. Step 6 - Create Communication, Decision Making, and Conflict Resolution Plans (30 minutes)
    1. Team members agree and put in writing how they will communicate with one another, how they will make decisions, and how they will resolve conflict.
  8. Step 7 - Submit Your Functional Team Plan (30 Minutes)
    1. Teams will write out their team plan and submit it to their manager. (The plans are usually 2 pages in length.)


Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm.


Job growth

Authors: Jonathan Grudin
Posted: Mon, January 11, 2016 - 11:03:16

Automation endangers blue and white collar work. This refrain is heard often, but could new job creation keep pace with job loss? Some leading technologists forecast that few of us will find work in fifteen years. They describe two possible paths to universal unemployment.

1. Robots or computers become increasingly capable. They have already replaced much human labor in farms, factories, and warehouses. Hundreds of thousands of telephone operators and travel agents were put out of work. Secretarial support in organizations thinned. In this view, jobs will be eradicated faster than new ones are created.

2. The technological singularity is reached: We produce a machine of human intelligence that then educates itself around the clock and designs unimaginably more powerful cousins. Human beings have nothing left to do but wonder how long machines will keep us around. Wikipedia has a nice article on the singularity. The concept arose in its current form in the mid-1960s. Many leading computer scientists predicted that the artificial intelligence explosion would occur by 1980 or 1990. Half a century later, leading proponents are more cautious. Some say ultra-intelligence will arrive before 2030. The median forecast is 2040. Ray Kurzweil, an especially fervent analyst, places it at 2045.

If the singularity is never reached, the jobs question centers on the effect of increasingly capable machines. If the singularity appears, all bets are off, so our discussion is limited to employment between now and its arrival.

My view is that the angst is misplaced: The singularity won’t appear [1] and job creation will outpace job loss. I apologize in advance for a U.S.-centric discussion. It is the history I know, but in our increasingly globalized economy much of it should generalize.

Occupational categories such as farming, fishing, and forestry are in long-term decline. Automation eliminates manufacturing jobs and reduces the need for full-time staff to handle some white collar jobs. Even when more new jobs appear than are lost, the transition will be hard on some people. Not everyone had an easy time when computerization displaced telephone operators and digital cameras eliminated the kiosks where we dropped off film canisters and picked up photos a day later. Nevertheless, jobs increased overall. Productivity rose, and could provide resources for safety nets to help us through disruptions.

The first massive employment disruption

For hundreds of thousands of years, until agriculture arose in the Fertile Crescent, China, Mesoamerica, and South America, our ancestors were hunters and gatherers. To shift from hunting to domesticating animals, from gathering to planting and tending crops, required a significant retooling of job skills. Suddenly, fewer people could produce enough food for everyone! Populations soared. With no television or social media, what would former hunters and gatherers do with their time? 

The parallel is strong. Existing jobs were not needed—more efficient new production systems could be handled by fewer people, in a time of population growth. Some people could continue to hunt and gather, and decry change. The effect was not mass unemployment, it was an unprecedented rise in new occupations.

These included working to improve agriculture and animal husbandry, breed more productive plant and animal species, and develop irrigation systems. But most new occupations were outside food production. Music, arts, and crafts flourished. Pottery and weaving reached exquisite levels; the Inca developed light tightly woven garments superior to the armor worn by the Spanish. Metallurgy flourished, useful and aesthetic. Trade in these goods employed many. Accounting systems were developed: Literacy and mathematics arose in agricultural communities. Stadiums were built for professional athletes. Surplus labor was used to build pyramids, which involved developing and applying engineering methods and management practices. Armies and navies of a scale previously unimaginable appeared on different continents. Political, religious, and medical professions arose.

Charles Mann’s 1491 describes what our species accomplished in the western hemisphere following the annihilation of traditional jobs. Before diseases arrived from Europe, western hemisphere populations were far larger than long assumed. Archaeologists have only recently discovered the extent of their accomplishments. Mann identifies fascinating distinctions between the agricultural civilizations in the south and the hunter-gatherers who held sway in the north.

Prior to the transition to agriculture, relatively primitive tool-making, healing, cave-painting, and astronomy were part- or full-time occupations for some [2]. When agriculture automated the work of hunting and gathering, side activities exploded into organized occupations. Self-sufficiency in food made possible Chinese philosophers, Greek playwrights, and Incan architects.

Industrial revolutions

I lived in Lowell, Massachusetts, where ample water power in the 1820s (somewhat before I took residence) gave rise to the first industrial revolution in the U.S., built on pirated 50-year-old British technology. The transition from hand-crafted to machine production started with textiles and came to include metals, chemicals, cement, glass, machine tools, and paper. This wide-scale automation put many craft workers out of jobs. The Luddite movement in England focused on smashing textile machines. However, efficient production also created jobs—and not only factory jobs. In Lowell, the initial shortage of workers led to the extensive hiring of women, who initially received benefits and good working conditions [3]. Over time, they were replaced by waves of immigrant men who were not treated as well. Other jobs included improving factory engineering, supplying raw materials, and product distribution and sales. Inexpensive cement and glass enabled construction to boom. Despite the toll on craft work, the first industrial revolution is credited with significantly raising the overall standard of living. Of course, pockets of poverty persisted. As is true today, wealth distribution is a political issue.

The second industrial revolution began in the late 19th century. This rapid industrialization was called “the technological revolution,” though we might like to repurpose that title for the disruption now underway. Advances in manufacturing and other forms of production led to the spread of transportation systems (railroads and cars); communication systems (telegraph and telephone); farm machinery starting with tractors; utilities including electricity, water, and sewage systems; and so on. Not only buggy whip manufacturers were put out of business. Two-thirds of Americans were still employed in agriculture at the outset; now it is 2%. The U.S. population quadrupled between 1860 and 1930, largely through immigration. Job creation largely kept pace and the overall standard of living continued to rise, although many people were adversely affected by the changes, exacerbated by economic recessions. In developed countries, democracies offset disruptions and imbalances in wealth distribution by constraining private monopolies and creating welfare systems.

Since the end of the second industrial revolution in 1930, the U.S. population has tripled. Technological advances continue to eradicate jobs. Nevertheless, unemployment is lower than it was in the 1930s. How can this be?

A conspiracy to keep people employed

Productivity increases faster than the population. People have an incentive to work and share in the overall rise in the standard of living. When machines become capable of doing what we do, we have an incentive to find something else to do. Those who own the machines benefit by employing us to do something they would like done. They do not benefit from our idle non-productivity; in fact, they could be at risk if multitudes grow dissatisfied. The excesses of the U.S. robber barons gave rise to a socialist movement. High unemployment in the Great Depression spawned radical political parties. The U.S. establishment reacted by instituting a sharply progressive tax code, Social Security, and large jobs programs (WPA, CCC), with World War II subsequently boosting employment. Should machines spur productivity and unemployment loom, much-needed infrastructure repair and improvement could employ many.

That is, if we face an employment crisis. The U.S. does not at present. The Federal Reserve raised interest rates in part to keep unemployment from falling further, fearing that wages will rise and spur inflation.

Many new jobs are in the service sector; some say these are “not good jobs.” Really? What makes a job good? Is driving a truck or working an assembly line more pleasant than interacting with people? “Good” means “pays well,” and pay is a political matter as much as anything else. Raise the minimum wage enough and many jobs suddenly get a lot better. Service jobs that are not considered great in one country are prestigious in others, with relative income the key determinant.

Where will new jobs come from?

The agricultural revolution parallel suggests that activities that already have value will be refined and professionalized and entirely new roles will develop. Risking a charge of confirmation bias, let me say that I see this everywhere. For example, in the past, parents and teachers coached Little League and high school teams for little or no compensation (and often had little expertise). Today, there is a massive industry of paid programs for swimming, gymnastics, soccer, dance (ballet, jazz, tap), martial arts, basketball, football, yoga, and other activities; if kids don’t start very young they won’t be competitive in high school. There is a growing market for paid scholastic tutors. Technology can help with such instruction, but ends up as tools for human coaches who also address key motivational elements (for both students and parents). At the other end of the age spectrum, growth is anticipated in care for elderly populations; again, machines will help, but many prefer human contact when they can afford it. For those of us who are between our first and second childhoods, there are personal trainers and personal shoppers, financial planners and event planners, Uber drivers and Airbnb proprietors, career counselors and physical therapists, website designers and remodel coaches. Watch the credits roll for Star Wars: The Force Awakens—over 1000 people, many in jobs that did not exist until recently.

My optimism is not based only on past analogies. It comes from confidence in human ingenuity and from the Web, which makes it possible to train quickly for almost any occupational niche. Documents, advice repositories, YouTube videos, and other resources facilitate expertise acquisition, whether you choose teaching tennis, preparing food, designing websites, or something else. Yes, anyone who wants to design a new website can find the know-how online, but most will hire someone who has already absorbed it. The dream of “end-user programming” has been around for decades; the reality will never arrive because however good the tools become, people who master them will have skill that merits being paid to do the work quickly and effectively. For any task, you can propose that a capable machine could do it better. But a capable machine in the hands of someone who has developed some facility will often do better still, and developing facility becomes ever easier.

For example, language translators and interpreters are projected to be a major growth area as globalization continues. Machine translation has improved, but is not error-free. Formal business discussions will seek maximum accuracy. Automatic translation will improve the efficiency of the human translators who will still be employed for many exchanges.

A challenge to the prophets of doom

When well-known technologists predict that most of their audience will live to see zero employment, I wonder what they think the political reaction to even 50% unemployment would be. The revolt of the 19th-century Luddites with torches and sledgehammers could be small potatoes compared to what would happen in the land of Second Amendment rights.

Fortunately, it won’t come to that. Instead of predicting when all the jobs will be gone, let the prophets of job loss tell us when the number of jobs will peak and begin its descent. Until that mathematically unavoidable canary sings, most of us can safely toil in our coal mines.

Let’s assume that machines grow more capable every year. It doesn’t always seem that way, but I don’t use industrial robots. The amusing Amazon warehouse robot videos do show automation of reportedly not-great jobs. Despite our more capable machines, the U.S. economy has added jobs every single month for more than five years. Millions more are working than ever before, despite fewer government workers, a smaller military, and no national work projects. Once or twice a decade a “market correction” reduces jobs temporarily, then the upward climb resumes [4].

Is it a coincidence that as the population doubled over and over again, so did the jobs? Of course not.


1. We can’t prove mathematically that the singularity will not be reached, but the chance of it happening in the 21st or 22nd century seems close to zero, a topic for a different blog post.

2. Why did these appear so late in human evolution? Possibly a necessary evolutionary step was taken. Perhaps reduction in predators and/or climate stabilization made hunting and gathering less of a full-time struggle.

3. The national park in Lowell covers the remarkable women’s movement that arose and was suppressed in the mills.

4. Use the slider on this chart:

Thanks to John King for discussions on this topic; his concerns about short-term disruptions have tempered my overall optimism.


Heartbreak House

Authors: Deborah Tatar
Posted: Thu, January 07, 2016 - 11:30:53

Years ago, when I read Bernard Shaw’s play Heartbreak House, I didn’t like it and it made me angry. Written before WWI and (not surprisingly) set in England, it focused on two groups of people, the denizens of Heartbreak House and those of Horseback Hall. Those of Heartbreak House are sensitive, aware, artistic, and unable to act. Those of Horseback Hall are confident, callous, and bellicose. And they dominate, dominate, dominate. They do not sell—we’re talking about cartoons that delineate a certain kind of British view located in a particular historical moment—but they do rule unquestioningly. In my distant memory, the play ends with the members of Heartbreak House going outside to stare at the sky while waiting impotently and impatiently for the bombs to drop, because at least that would change something. Bernard Shaw was not sufficiently prescient to anticipate the horrors of Verdun, the Somme, Gallipoli, and so forth. But he saw that war prosecuted by Horseback Hall would be truly terrible, and it was.

At that time in my life, I was not a designer. In fact, I barely knew what a designer was or did, apart from making excessively expensive clothes and accoutrements. But my response to the play was designerly in that it was founded in the need to do something. How could those who knew what was important tolerate inactivity? How could they cede power to fools? 

Despite a subsequent career in making things and, in that way, taking action, I understand the sadness of the situation of Heartbreak House better now than I did at twenty. Why were the inhabitants of Heartbreak House so passive? To act rightly, we first structure our world so that we know what actions to take, when to take them, and what they mean. We make choices that bring us to the brink of action and then it is only a little step over the brink. The inhabitants of Heartbreak House were helpless to do the right things when it counted because their scope of action was defined by what was promoted as valuable by the stentorian inhabitants of Horseback Hall.

Recently, I wrote a blog piece about the importance of Feminist Maker Spaces, as reported by Fox, Ulgado et al [1]. I don’t want to over-romanticize these, but I want to express how happy it makes me to imagine them as a kind of modern, designerly, more functional response to a kind of split that in some ways is not unlike the Heartbreak House/Horseback Hall split. Of course, it is not about war. No bombs are going to drop if Google does evil. But it is about hegemony. Feminist Maker Spaces invite small and local actions, but they also presage and forecast different kinds of larger actions and rhetoric than we commonly see in the dominant technological culture. 


1. Fox, S., Ulgado, R. R., & Rosner, D. (2015, February). Hacking Culture, Not Devices: Access and Recognition in Feminist Hackerspaces. Proc. of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 56-68). ACM.


Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.

Thoughts on the SIGCHI Accessibility Report

Authors: Jennifer Mankoff
Posted: Wed, December 30, 2015 - 11:43:54

This post is based around the SIGCHI Community Accessibility Report, posted on behalf of the SIGCHI Accessibility Community.

About 15% of people worldwide have a disability [1] and the likelihood of experiencing disability naturally increases with age. SIGCHI can attract new members, and make current members feel welcome, by making its events and resources more inclusive to those with disabilities. This in turn will enrich SIGCHI, and help it to live up to the ideal of inclusiveness central to the concept of user-centered design. It will also help to drive innovation, as accessibility efforts often drive more general technology advances (an example is speech recognition, which has many applications outside of accessibility today).

Inclusion of individuals with disabilities and accessibility have long been a focus of the community of scholars and practitioners affiliated with SIGCHI, starting in 1994, when SIGCHI’s flagship conference CHI had an accessibility chair. The CHI conference has a large number of papers dealing with accessibility (9% of papers in CHI 2015 were accessibility or disability related [2]). The inclusion of researchers and participants with disabilities within the SIGCHI community has led to advances in general technologies [3] and in research practices (e.g., Ability-Based Design [4]). 

However, written works about the accessibility of our scientific processes [5] and outputs [6], and reports and experiences from members of our community with disabilities have revealed a gap in the accessibility of conferences, research papers, and other aspects of SIGCHI. This spurred the efforts by the SIGCHI EC, which in turn encouraged the formation of the SIGCHI Accessibility Community [7], so that people with disabilities would have an avenue for helping to improve things. The mission of the Accessibility Community is to improve the accessibility of SIGCHI conferences and meetings (which includes awards ceremonies, program committee meetings, conferences, and so on) and the digital accessibility of SIGCHI websites and publications.

The first action of the Accessibility Community was to create an Accessibility Report intended to support informed decisions in the future and set goals that are responsive to the best practices and biggest problems facing our community, including specifically SIGCHI’s “physical” services (conferences and meetings) and “digital” services (websites, videos, papers, etc.), as well as its overall inclusiveness for people with disabilities. Our findings were based on input from the community at large, survey data from CHI attendees, and a survey of 17 SIGCHI conferences (only four of which had accessibility chairs in 2014). They show that many conferences and other SIGCHI resources do not adequately address accessibility. The report also sets out (hopefully) achievable goals for addressing them. 

While the report began as a well-intentioned data-collection effort, it has sparked a variety of positive and negative responses over the past few months from the people it was meant to help (SIGCHI members who face accessibility challenges) and the people it impacts (SIGCHI leaders, conference organizers, and so on). While everyone appreciates the effort that went into the report, it has functioned almost like a straw man in drawing out issues and facts that were not available (or that we did not have the insight to go after) when we were writing the report. Some of these include:

  • The inability of such a report to provide any sort of concrete handle by which disabled conference attendees (or conference organizers) can get real resources applied to problems of accessibility. This is a hot-button issue that includes a range of wishes and concerns, from legal action against inaccessible conferences to the financial bottom line of conference chairs, who frequently fear a budget deficit up until the conference is over.

  • The lack of communication between the accessibility community in general (and disabled conference attendees in particular) and SIGCHI’s/ACM’s leadership. It turns out SIGCHI is putting money toward video captioning, ACM has been working toward universal accessibility of papers for some time (and has the beginnings of a plan in place), and more is possible (but only if communication channels are open).

  • The varied problems faced by conference chairs running conferences of different sizes were not represented at all in our report (something we hope to rectify in this year’s data-collection efforts). These are complex and multi-faceted, and include trade-offs that are easy to ignore when a single advocacy goal is in place (as by the accessibility community) but impossible to ignore when running a conference.

These are just some examples, and to a disabled attendee whose career depends on successful conference networking, they probably seem irrelevant to their basic right for equal treatment. However, when it comes to the more ambiguous problem of enacting accessibility, they are primary concerns that must be dealt with. 

Which raises my HCI and interaction design antennae sky high. This is a wicked problem [8], with all of the difficulties inherent in attempting to modify a complex multi-stakeholder system. In addition, one solution will never fit all the varied contexts in which accessibility needs to be enacted. Worse, to the extent that the “designer” here is the accessibility community, it’s not clear that our conceptual understanding matches that of the “user” we are designing around (conference organizers). Value-sensitive design, mental model mismatches (between different stakeholders affected by changes intended to increase accessibility), multi-stakeholder analyses, service design... all of these frames may help with the task of making SIGCHI as inclusive as I believe we’d all like it to be. 

So unsatisfying a conclusion to reach when the people who are differentially affected deserve a straightforward solution that directly addresses their needs and their right to access. Yet well-meaning change is not enough. Well-designed change is the bar we should strive to reach. 



2. 34 of 379 papers, listed here:

3. For example, speech synthesis and OCR have early roots in the Kurzweil Reader, a reading tool for people with visual impairments.

4. Wobbrock, J. O., Kane, S. K., Gajos, K. Z., Harada, S., & Froehlich, J. (2011). Ability-based design: Concept, principles and examples. ACM Transactions on Accessible Computing (TACCESS), 3(3), 9.

5. Brady, E., Zhong, Y., & Bigham, J. P. (2015, May). Creating accessible PDFs for conference proceedings. In Proceedings of the 12th Web for All Conference (p. 34). ACM.

6. Kirkham, R., Vines, J., & Olivier, P. (2015). Being reasonable: A manifesto for improving the inclusion of disabled people in SIGCHI conferences. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '15) (pp. 601-612). ACM.


8. Rittel, H. W., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4(2), 155-169.

Posted: Wed, December 30, 2015 - 11:43:54

Jennifer Mankoff

Jennifer Mankoff is an associate professor in the Human Computer Interaction Institute at Carnegie Mellon University.

Crying wolf

Authors: Jonathan Grudin
Posted: Fri, December 11, 2015 - 3:58:59

In a stack of old papers headed for recycling was a Wall Street Journal article subtitled “Managers who fall for their office PCs could be the downside of the computer age.” In 1987, hands-on computer use was considered dangerous, for employees and employers alike!

Since Mary Shelley’s Frankenstein (1818), technology has often been viewed with dread. Woe unto us for challenging the gods with artificial intelligence, personal computers, email, the Internet, instant messaging, Wikipedia, Facebook, and Twitter.

AI is a special case. Grim outcomes at the hands of intelligent machines are a perennial favorite of some researchers, filmmakers, and the popular press, but the day of reckoning is put off by the lack of technical progress. We don’t know what an intelligent machine will do because none exist. The other technologies did exist when the hand-wringing appeared—PCs, the Internet, Facebook, and so on. The fear was not that they would defeat us, but that we would use them foolishly and perish. An “addictive drug” metaphor, not a lumbering monster.

But the predictions were wrong. Most of us find ways to use new technologies to work more effectively. Our personal lives are not adversely affected by shifting a portion of television-watching time to computer use. Does fear of technological Armageddon reflect a sense of powerlessness, our inability to slow carbon emissions and end political dysfunction? Perhaps our inner hunter-gatherers feel lost as we distance ourselves ever more from nature and magical thinking. Alternatively, it could be that each of these technologies challenged an ancient practice that strengthened in recent centuries: hierarchy.


In the article that I set aside a quarter century ago, the technology reporter from the Wall Street Journal’s San Francisco Bureau wrote of “Rapture of the Personal Computer, a scourge characterized by obsessive computer tinkering, overzealous assistance to colleagues with personal computer problems, and indifference to family, friends, and non-computer job responsibilities.” Indifference to family, friends, and responsibility is a common theme in dystopian assessments of a new technology.

“In the long run, it’s a waste of time for the organization,” an assistant vice president of Bank of America concluded. A consultant described training 600 employees of another company to use desktop computers. “About 50 pushed into the technology much deeper, becoming de facto consultants to their departments. But a short time later, 40 of the 50 were laid off.”

The horror stories emphasize bad outcomes for computer users, but on close inspection, hierarchy seems threatened more than organizational health. The author writes, “The question of how to handle the 8% to 10% of users who seem to fixate on costly machines has dogged managers up and down the organizational flow-charts.” A good manager “leads subordinates by the hand through new software packages.” “One key to getting the most from resident experts is to shorten their leashes.” A manager is quoted: “The intention is not to stamp out creativity, but the important thing is that creativity has to be managed.”

“The problem has grown so serious,” the author maintains, “that some companies are even concluding that decentralized computing—placing a little genie on every desk instead of keeping a big one chained in the basement—may not have been such a keen idea after all.” In the end, not many acted on such conclusions. Little genies grew in number through the 1980s and 1990s.

The article concludes with an object lesson, a “so-called information-systems manager,” who after seventeen years wonders how his life could have been different. Despite a degree in economics, which to the Wall Street Journal means that he could have been a contender, he “weathered endless hours of programming frustration, two detached retinas, and the indignity of most people taking his work for granted.”

Managing what we don’t understand

In 1983, I took a job in a large tech company that had an email system in place. My new manager explained why no one used it: “Email is a way that students waste time.” He noted that it was easy to contact anyone in the organization: I should write a formal memo and give it to him. He would send it up the management ladder to the lowest common manager, it would go down to the recipient, whose reply would follow the reverse path. “You should get a response quickly,” he concluded, “in three to five days.” He advised me to write it by hand or dictate it and have it typed up. “Don’t be seen using a keyboard very much, it’s not managerial.”

Technology could be threatening to managers back then, even in tech companies. Few could type. Their cadence of planned, face-to-face meetings was disrupted by short email messages arriving unpredictably. Managing software developers was as enticing as managing space aliens; promises that “automatic programming” would soon materialize delighted managers.

As email became familiar, new technologies elicited the same fears. Many companies, including IBM and Microsoft, blocked employee access to the Internet well into the 1990s. When instant messaging became popular in the early 2000s, major consulting companies warned repeatedly that IM was in essence a way that students waste time, a threat to productivity that companies should avoid. In 2003, ethnographer Tracy Lovejoy and I published the article “Messaging and Formality: Will IM Follow in the Footsteps of Email?” [1]. 

People tried several new communication technologies in the early 2000s as they looked for ways to use the computers they had acquired during the Internet bubble. This software, popular with students, also aroused management suspicion.

Studying IM, employee blogging, and the use of wiki and social networking sites in white-collar companies, I found that they primarily benefit individual contributors who rely on informal communication. Managers and executives focus more on structured information (documents, spreadsheets, slide decks) and formal communication; most saw little value in the new media. As with email in an earlier era, individual contributors using these tools can circumvent formal channels (which now often include email!) and undermine hierarchy.

However, the 2000s were not the 1980s. Managerial suspicion often ran high, but it was more short-lived. Many managers were tech users. Some found uses for new communication technologies. A manager stuck in a large meeting could IM to get information, chat privately with another participant, or work on other things. Some executives felt novel technologies could help recruit young talent. There was some enthusiasm for wikis, which offer structure and the hope of reaching the managers’ shimmering, elusive El Dorado: an all-encompassing view of a group’s activity and status. But wikis thrive mainly in relatively chaotic entrepreneurial settings; once roles are clear, simpler communication paths are more efficient. A bottom-up wiki approach competes, a little or a lot, with a clear division of labor and its coordinating hierarchy.

Knowledge and power

My daughters occasionally ask for advice on a homework assignment. If I need help, I usually start with a string search or Wikipedia. They often remind me that their teachers have drilled in that Wikipedia is not an acceptable source.

Do you recall the many denunciations of Wikipedia accuracy a decade ago? Studies showed accuracy comparable to the print encyclopedias that teachers accepted, but the controversy still rages; ironically, the best survey is found in Wikipedia’s Wikipedia entry. Schools are only slowly getting past blanket condemnations of Wikipedia.

I average two or three Wikipedia visits a day. Often I have great confidence in its accuracy, such as for presidential primary schedules. Wikipedia isn’t the last word on more specialized or complex academic topics, but it can provide a general sense and pointers to primary sources. Hearing about an interesting app or organization, I check Wikipedia before its home page. For pop culture references that I don’t want to spend time researching, a Wikipedia entry may get details wrong but will be more accurate than the supermarket tabloids on which many people seem to rely.

Why the antagonism to a source that clearly works hard to be objective? If knowledge is power, Wikipedia and the Web threaten the power of those who control access to knowledge: teachers, university professors, librarians, publishers, and other media. Hierarchy is yielding to something resembling anarchy. The traditional sources were not unimpeachable. I recall being disappointed by my parents’ response when I excitedly announced that My Weekly Reader, distributed in school, reported that we would use atomic bombs to carve out beautiful deep-sea ports. More recently, I discovered in 1491 that much of what we learned in school about early U.S. history was false. My science teachers, too, were not all immune to inventing entertaining accounts that took liberty with the facts. Heaven knows what they teach about evolution and climate change in some places. If a student relies on Wikipedia instead, I can live with that.

If a wolf does appear?

I heard Stewart Brand describe deforestation and population collapse on Easter Island, specifying the date when someone cut down the last tree, “knowing that it was the last tree.” Former U.S. Secretary of Defense Robert McNamara became a fervent advocate of total nuclear disarmament after living through three close brushes with nuclear war. Neither Brand nor McNamara was confident that we will step on the brakes before we hit the wall.

Perhaps we will succumb to a technological catastrophe, but I’m more optimistic. We may not address global warming until more damage is incurred, but then we will. We’ll rally at the edge of the abyss. Won’t we?

Musical chairs

In the meantime we have these scares. Perhaps the Wall Street Journal, Gartner, and others were right to warn managers of danger, but missed the diagnosis: The threats are to the managers’ hierarchical roles. When employees switched to working on PCs, their work was less visible to their managers. My manager in 1983 was not a micro-manager, but he got a sense of my work when my communication with others passed by him; when I used email, he lost that insight and perhaps opportunities to help. Public concern about automation focuses on the effects on workers, but the impact on managers may be greater as hierarchies crumble [2].

Consider Wikipedia again. Over time it became hierarchical, with more than 1000 administrators today. This may seem like a lot, but it is only one for every 100 active (monthly) editors and one for every 20,000 registered editors. A traditional organization would have ten times as many managers. Management spans grow, even as more work becomes invisible to managers.

Fears about online resources may ebb when management ceases to feel threatened. Concerns were raised when medical information of variable quality flooded the Web. Today, many doctors take in stride the availability of online information to patients who still consider their doctor the final authority. Dubious health websites join the village soothsayers and snake-oil salesmen who always existed, were perhaps less visible and accountable, and might sometimes have helped.

In organizations, individual contributors use technology to work more efficiently. Hierarchy remains, often diminished (especially in white-collar and professional work). Can hierarchy disappear? Perhaps, when everyone knows exactly what to do, in the organizational form that Henry Mintzberg labeled adhocracy: for example, a film project—an assembly of professionals handed a script—or a barn-raising by a group who all know their tasks. Technology can help assemble online resources and groups of trained people who can manage dependencies themselves, leaving managers to monitor for high-level breakdowns.

This is the efficiency of the swarm. An ant colony has no managers. Each worker is programmed to know what to do in any situation, with enough built-in system redundancy to withstand turnover. In our case, each worker has education, online resources, and communication tools to identify courses of action. With employee turnover on the rise, organizations build in redundancy, either in people or with online resources and tools that enable gaps to be covered quickly.

Someday a wolf may appear. In the meantime, the record indicates that each major new technology changes the current way of working and threatens those who are most comfortable with it, primarily management. Forecasts of doom are accompanied by suggestions that the tide can be ordered back, that the music can continue to play. Then, when the music stops, a few corner offices will have been converted to open-plan workspace, and work will go on.


1. Links to this and studies of employee blogging, wiki use, and social networking in organizations are in a section of my web page titled “A wave of new technologies enters organizations.”

2. In Control through Communication, JoAnne Yates described the use of pre-digital information technologies to shape modern hierarchical organizations and give them a flexibility beyond the reach of the hierarchies that had existed for millennia. She mentioned “humanizing” activities such as company parties and newsletters, which were less about information than about emotional bonding: they created an illusion of belonging to one tribe, thereby strengthening rather than undermining hierarchy.

Thanks to John King for general discussion and raising the connection to Yates’ work.


Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Authors: Deborah Tatar
Posted: Wed, November 25, 2015 - 1:42:23

I started this series of posts with concerns about the allure of “going west” to my undergraduate students. My concern is about all the students and indeed my own sons, but especially the women. Sarah Fox, Rachel Rose Ulgado, and Daniela Rosner wrote a wonderful article about feminist hackerspaces [1] for the 2015 CSCW conference. The women they studied were by-and-large employed as professionals in the high tech industry. They were successful. Yet these women did not find what they sought or imagined through work. And when they turned to hacker spaces, there was more disappointment. The male hacker spaces were imbued with what they called ****-testing. 

Fox et al.’s account gave me a kind of PTSD-y flashback. Silicon Valley was a world in which the more prestige I acquired, the less I enjoyed success. The more I encountered the ultra-confident fantasies of freedom and superiority that drove so much behavior, the less I wanted to play the game. Eventually, I escaped. In Silicon Valley, freedom is often a zero-sum game, enforced by what some social scientists call micro-aggressions. Efficiency is often a way for one person to take for him- or herself without having to think about or appreciate others.

I am so glad that the women Fox et al. report on have been able to make their own spaces, and I hope that these spaces truly help them lead the lives of whole people. But I do not think that feminist hacker spaces are going to solve the problems. 

The conditions that lead women to create feminist spaces are not the conditions that my students imagine when their eyes light up with the hope of going west. Well, that is not true. Some of my female students have a kind of untrammeled ambition. They really do seem to believe that, as one of my male students wrote in an essay on ethics some years ago, their chief obligation in life is to have a job. Anything that might threaten their job or success is just an inefficiency.

The feminist hackers were more like the other kind of student, the kind who hope to use their capabilities to be part of something bigger than themselves. This is a confusing mental space to be in. The feminist hackers exhibit an instructive, ambivalent resentment toward Sheryl Sandberg’s “Lean-In Circles.” As Fox et al. note, while the feminist hackers have long engaged in many of the behaviors that Sandberg now recommends, they resent her and her advice. Indeed, it is important to realize that the same advice, the same behavior, is not always the same. When women, for example, get together to address “imposter syndrome,” their larger attitude makes all the difference. Is the discussion a tool to understand their position in the world, or a club used to reproach women for lack of perfection? Lots of people say “Be more confident!” but no one seems to notice the cost we pay for failed attempts at assertion. I remember watching my contributions to meetings regularly being ascribed to men—and then being called “arrogant” by my boss for acting exactly as I believed the men had acted. I was devastated—and trapped.

It also makes all the difference in the world whether the women’s collective ambition is to dominate others or to connect. Sheryl Sandberg may be a perfectly lovely person, but her ability to get herself heard is also part of a willingness to profit from other people’s compliance. I don’t admire that and I don’t want to design for it.


1. Fox, S., Ulgado, R. R., & Rosner, D. (2015, February). Hacking Culture, Not Devices: Access and Recognition in Feminist Hackerspaces. Proc. of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 56-68). ACM.


Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.

Technology and nature

Authors: Jonathan Grudin
Posted: Fri, November 06, 2015 - 4:00:07

The black water mirrored the tropical forest above it so perfectly that the shoreline was impossible to pinpoint. The reflection included the palms with cannonball-clustered fruit that had stained the water. We kayaked for hours, seeing no sign of other human presence on the network of channels and lagoons that drain the marshes along the Caribbean coast. Or so we thought.

Costa Rica ranks first in sustainable ecotourism, with a plan to be the first carbon-neutral country by 2021 and a carbon-neutral airline already flying. Costa Rica ended deforestation and draws 90% of its energy from renewable sources. It long ago disbanded its army and recently banned recreational hunting—good news for its panthers, tapirs, sloths, monkeys, sea turtles, and colorful birds. It has the world’s highest proportion of national park and protected land (25%), and fifty times the world average biodiversity per hectare.

Technological support preceded our two-week visit. Anywhere Costa Rica provides online booking with an informative 24x7 chat line. It was effectively free. Our prices for hotels and services were what we would pay walking in off the street; Anywhere Costa Rica receives commissions. Telecommunication was reliable: Pickups were on time and convenient. The company even accommodated a spontaneous schedule change. 

Costa Rica’s three principal tourism zones are the Pacific coast, inland mountains including Arenal volcano, and the Caribbean coast. We skipped the Pacific with brand-name resort hotels nestled in coves. Inland and on the Caribbean, we found only small hotels and restaurants reportedly owned and operated by Costa Ricans. Fast food franchises were virtually non-existent. Technology again played a key role: The small enterprises rely heavily on reviews. The surrounding tourism-dependent communities actively contribute by providing uniformly excellent service.

Subtle uses of technology included the coordination of national park, private enterprise, and non-profit wildlife personnel in preventing poaching and enabling us to observe sea turtles beach, lay eggs, and leave, with minimal distraction. Young guides enlisted for short excursions were the best-informed and the most talented at human relations I’ve ever encountered. Many have university degrees in tourism, an attractive career option in a country where tourism accounts for more revenue than coffee and bananas.

Why we travel

Why didn’t our ancestors who moved into Europe and the Western Hemisphere all settle on the Mediterranean and California coasts? Some moved to the arctic, to deserts, or settled high in the Andes and Himalayas and deep in festering Amazon jungles. Times of scarcity that affect any region could motivate people to spread out, but which ones left? Perhaps a combination of curiosity and antisocial tendencies was decisive.

In any case, our species has always traveled, and technology changed the experience, from ships, trains, and planes to radio, telephone, and satellite telecommunication. Traveler’s cheques gave way to ATMs. For decades, a staple of my day abroad was a search for a copy of the International Herald Tribune. I paid extortionate prices for a 4-day-old copy in some remote places. Today, the search is for an Internet connection, usually not hard to find.

In 1989, there were few computers and no Internet access between Cairo and Cape Town. On a hot, sunny day that year, looking out over a bustling, colorful Mombasa street scene, I had an epiphany: “Not one of the hundreds of people before me will ever be affected by computer technology! Their lives will be unaffected by my work.” As epiphanies go, this was not impressive. Today, Mombasa’s home page has links to tourist information, insurance and shipping companies, estate agents, and the answer to the question: The jinis of Mombasa: True or myth?

Technology and travel continue to evolve in tandem, for better and worse. British travelers said that online review sites helped them overcome their timidity about complaining of poor service. But we also heard of travelers threatening a bad review to blackmail merchants who depended on ratings.

Similarities and differences

I expected to be met at Kano International Airport on my first visit to Africa, in 1983. I wasn’t. No one knew I was coming. My destination, Jos, a large city in the temperate highlands of Nigeria, had no international phone service or telex, and my letter had not arrived. Nor was there local phone service—the one phone at the airport was a long-dead relic. Later, to phone his hospitalized wife in Kenya, a university colleague of my host drove for hours at night to a telecommunications relay station.

Travel back then was often like parachuting in with the clothes on your back. It evolved. In 1998, my wife and I visited Madagascar. We saw a satellite phone in use in a remote corner of the country. While we were there, the English-language newspaper in the capital announced the country’s first public demonstration of the Internet and Web, led by the students of a technical high school. When we pushed into the interior and lost outside contact, we emerged to discover that Frank Sinatra had died and the Department of Justice was suing Microsoft. In contrast, on a 10-day African camping safari in July 2015, we had to choose to stay out of touch. (Less dramatic news greeted us on resurfacing: Greece was still struggling and Donald Trump was still rising in the polls.)

I haven’t been to Scotland

“When you have made up your mind to go to West Africa,” wrote a 19th-century traveler, “the very best thing you can do is to get it unmade and go to Scotland instead; but if your intelligence is not strong enough to do so, abstain from exposing yourself to the direct rays of the sun, take 4 grains of quinine every day, and get an introduction to the Wesleyans; they are the only people on the Gold Coast who have got a hearse with feathers” [1].

In 1983, expats still liked to talk about diseases in the region once called “White Man’s Grave.” I spent time with some who were ambulatory with malaria and worse afflictions. When my health faltered, I searched for garlic, whose curative powers were not yet recognized by medical science but which I’d come to trust a decade earlier in Guatemala. In 1983, diseases in Nigeria were rarely fatal if one could fly out for treatment by a knowledgeable doctor, but finding one could be a challenge. One colleague had returned to England with an illness, but was unable to convince doctors to prescribe the right drug until he had lost over 30 pounds and was close to dying. Expats advised me to load up on drugs—no prescriptions were required—before I returned to England, so I could self-medicate if I was harboring a parasite.

This might remain good advice, if prescriptions are still not required. This July, shortly before we left Africa for England, a spider bite sent neurotoxin into my back and around my thigh and triggered fever and hives across my body, and a tick lodged behind my wife’s ear and infected her. Web searches identified our assailants—a sac spider and African Tick Fever.

We arranged clinic visits in England. My problem was new and interesting to our young British doctor. “Is that s-a-x spider?” she asked. She told me to watch for secondary infections. She did not suggest applying a topical antibiotic to reduce the odds of needing a skin graft later, advice I found on the Web. My Neosporin, previously used on monkey scratches and bites, seemed to do the trick. There is not much else to do for a neurotoxin. The lumps and fever dissipated. Gayna was not so lucky.

The Web said doxycycline would resolve tick fever in 48 hours. The doctor prescribed amoxicillin. When my wife’s fever exceeded 103 degrees Fahrenheit a few days later, we returned. An older doctor now on duty was convinced that she had malaria. No mosquitoes survived the cold South African winter nights, and we had taken anti-malarials anyway, but he wouldn’t prescribe doxycycline. Later, we got a call—someone at the clinic had phoned around and found a doctor from South Africa. They let Gayna pick up doxycycline after working hours, and 48 hours later her fever was gone, although side effects of the antibiotic persisted for a week.

I guardedly consider this a success for technology in travel, thanks to Wikipedia and the Web.

Beyond being there

Travel to experience nature and different cultures is not the same revelation it once was. Hundreds of beautifully produced documentaries are available online. You can spend time and money to travel and look at a field, and see a field. Or you can watch a program that distils hundreds of hours of photography and micro-photography into a year in the life of a field, accompanied by expert commentary. Sure, travel provides a greater field of view, texture, nuance, and serendipity, but often less depth of understanding. In addition, distant lands now come to us—foods, music, arts, and crafts from around the world are in our malls.

The encroachment of science and technology on the natural world is eloquently lamented by Thomas Pynchon in Mason & Dixon and other works. We are indeed “winning away from the realm of the sacred, its borderlands one by one.” Agriculture, extraction of minerals, and the housing needs of growing populations fence in land that was available for animal habitation and migration. Wildlife is increasingly managed. If a zoo is a B&B for animals, national parks and reserves have become B’s: Bed is provided and the guests find their own food. With migratory paths blocked, animals that once trekked to sources of water during dry seasons are accommodated by constructing waterholes. When smart cats learn to drive dumb herbivores toward fences where they can easily be taken down, we build separate enclosures to keep some herbivores around. Vegetation along roads is burned so tourists can see animals that would be invisible if the forest came up to the road. Wild animals become accustomed to humans and willingly provide photo ops. Guides know where animals frequent and alert one another of sightings. Many animals have chip implants; geolocation and drones may soon ensure successful viewing.

These are all fine adjustments. Not everyone is content to drive for hours with no animal sightings; such drives provide a sense of wilderness and make sightings special, but viewing throngs of animals along rivers and waterholes is undeniably spectacular, and many tourists are in a hurry to check off the Big Five.

And Africa remains wild. Hippos are second only to mosquitoes as lethal animals on the continent. Hippos are aggressive, fearless, can outrun you, and they ignore protestations that as herbivores they shouldn’t chomp on you [2]. 

Implications for design

The debate is not whether to reshape the natural world; it is how we reshape it. And it turns out that this is not new. Charles Mann’s brilliant book 1491 documents humanity’s extensive transformation of the natural world centuries ago. Prior to the onslaught of European diseases, the Western Hemisphere was densely populated by peoples who carefully designed the forests and rainforests around them. Reading 1491 as we traveled in Costa Rica, I realized that prehistoric inhabitants would not have let those palms grow along the waterways. The black, reflective water conceals edible fish, lethal crocodiles and caiman, and venomous snakes and frogs. Such palms would be consigned to land far from water, their fruit used for pigments or other purposes. After the Spanish arrived and disease depopulated the region, untended palms spread to their present locations.

One reading of Mann’s message might be that since “primeval” forests were landscaped, why should we constrain change now? A better reading is that we have a powerful ability to design the natural world around us, and we should do it as thoughtfully as humanly possible. Technology can surely help with this.



2. Hippos can hoof it at 20 mph. A fast human sprints 15 mph; Olympic sprinters reach 25 mph. Since hippos don’t organize track-and-field competitions, perhaps a hippo lives somewhere that could outpace Usain Bolt. Hippos have taken boats apart to reach the occupants. Zulus considered hippos braver than lions. Crocodiles and water buffalo are also fast and lethal on land.


Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

Multiple scales interaction design

Authors: Mikael Wiberg
Posted: Mon, November 02, 2015 - 12:54:10

“Attention to details” has always been a key concern for interaction design. With our attention to details, we ensure that we as interaction designers think carefully about every little detail of how the user might interact with and through the digital technologies we design. As interaction designers, we share the belief that every tiny detail matters for the overall experience.

Although our field has evolved over several different “waves” of interaction design paradigms, this concern for interaction design at the scale of the details has remained. For instance, when we developed command-based interaction back in the ’70s and ’80s, our focus was on dialogue-based system design. When we moved to GUI design in the late ’80s and onward, we again went straight for the details and developed a whole vocabulary to enable detailed conversations about the GUI (including notions such as WYSIWYG, balance, symmetry, etc.).

Today when we talk about interaction design, it seems that attention to details might be the concern for interaction design—at least if we look at the development in industry. Whether we look at interaction design in smartphones, smart watches, smart TVs, or just about any new interactive gadget, this concern for the details—in interface design, in icon design, in menu design, in hardware design, and so on—is always present.

In relation to this contemporary concern for “the details,” some voices have called for the re-introduction of an industrial design approach to interaction design. While this approach, with its long history of craftsmanship and associated focus on details, can surely help us advance our close-up focus, I suggest there are additional demands on interaction design competence emerging at the current moment.

“Attention to details” helps the interaction designer to stay focused—on the details. This ability to stay focused on the small details that determine the overall impression of a product is a core competence for any designer. However, at the current moment there may be a need to think about attention to details not as a single-threaded focus, but as something that works across multiple different scales. Traditionally, “attention to details” has meant “close-up attention to the visual details” (of the user interface). Today, however, our digital products are also parts of greater designed wholes in many different ways. Apps, for instance, are of course designed to run on smartphones and tablets, but they are also supposed to be designed in relation to other apps (to work and look like other apps), to work in the context of an app store, and perhaps to work on several different devices and at different screen sizes. Furthermore, many apps are deeply integrated with social media, and as such they need to incorporate typical ways of interacting and communicating via those platforms (e.g., sharing, liking, or re-tweeting). For the interaction designer, this means that he or she can no longer pay attention only to the nitty-gritty details of the user interface. Beyond this, the designer needs to think about the interplay between the app and the device/hardware running it, how modes of interacting with the app correspond to ways of interacting with social media, how interaction with the app might even be part of interacting via social media, and so on.

While one could say that the contemporary interaction designer needs “split vision,” I would instead say that interaction design is beginning to work more and more like the profession of architecture. The experienced architect attends to many different things at once. Typically this is referred to as working with architecture at multiple scales. At one scale the concern might be the visual appearance of the building; at another, how the building might work as a social intervention in the context where it will be built; at yet another, attention to details concerns the building’s program or the placement of windows, doors, and hallways. It is impossible to say that one of these is more important to focus on than another. On the contrary, it is the ability to focus on all of these different scales that leads to great architecture.

In a similar way of seeing things I would say that at the current moment great interaction design demands a similar sensitivity, an ability to pay close attention to a multitude of scales operating simultaneously in any interaction design project. If only paying attention to user needs, then something else is left aside. Likewise, if only staying focused on some details in the user interface, then something else is probably overlooked.

So, what are the implications of this for the profession of interaction design? Well, we already know how to go for “attention to details.” Let’s never forget this core competence! But beyond it, I would say that any interaction designer needs to develop skills and a sensitivity for which scales are at play in a particular design project, and then learn not just how to pay attention to the details at each scale, but also how to merge those details into functional wholes—that is, into great interactive products and services. From my perspective, this is about compositional interaction design: The interaction designer moves from single-threaded attention to some details to also thinking about how to arrange those details into well-working compositions across multiple different scales—thus the proposed notion of “multiple scales interaction design.”

So, stay focused! On all the different scales of your design project!


Mikael Wiberg

Mikael Wiberg is Professor of Informatics in the Department of Informatics at Umeå University, Sweden.


@Mattias Arvola (2015 11 02)

Sounds like what Schön and Wiggins (Kinds of Seeing and their Functions in Design, Design Studies 13(2), 135–156) talked about as shifts between domains in architecture. The shifts are driven by a propagation of consequences of a design move from one domain to other domains. I have previously conceptualized it as zooming in and out between detail and abstraction levels, but perhaps it is more like transpositions between equally detailed domains. I think I need to think some more on this…

@Monica (2015 11 04)

UX and architecture have always been comparable in many ways. I very much agree with this observation. I don’t think it’s a new phase, but one that is getting more recognition. The devil is in the details, but a designer must look at the solution in a holistic manner as well. The flow throughout the ecosystem, much as in architecture, along with the structural and technical requirements, very much maps to UX. I like the comparison of the levels of attention. I would place the visual UX more alongside the interior decorating phase of a house: the last phase, but one that very much integrates with the overall intention of the structure. Great piece, thank you.

Action and research

Authors: Jonathan Grudin
Posted: Fri, October 09, 2015 - 6:01:55

Three favorite research projects at Microsoft that were never written up: automated email deletion, an asynchronous game to crowdsource answers to consulting questions, and a K-12 education tool. I expected they would be, as were projects that led to my most-cited Microsoft work: persona use in development, social networking in enterprises, and multiple monitor use. What happened to them?

The unpublished projects had research goals, but they differed in also having immediate applied goals. Achieving the applied goal was a higher priority than collecting and organizing data to satisfy reviewers. There were research findings as well, but completing the project provided a sense of closure that other projects delivered only with publication, because those projects aimed to influence practice indirectly: Publishing was the way to reach designers and developers.

Projects with immediate goals can include sensitive or confidential information, but this did not prevent publishing my studies. In my career, the professional research community has tried, and sometimes succeeded, in blocking publication far more often than industry management has.

I hadn’t thought carefully about why only some of my work was published. It seems worthwhile to examine the relationships among research, action, and our motives for publishing.

A spectrum of research goals 

Research can be driven by curiosity, by a theoretical puzzle, by a desire to address a specific problem, or by an aim to contribute to a body of knowledge. The first HCI publication, Brian Shackel’s 1959 study of the EMIac interface, addressed a specific problem. Many early CHI studies were in the last category: By examining people carrying out specific tasks, cognitive psychologists sought to construct a general theory of cognition that could forever after be used in designing systems and applications. For example, the “thinking-aloud” protocol was invented in the 1970s to obtain insight into human thought processes. Only in the 1980s did Clayton Lewis and John Gould apply it to improve designs by identifying interface flaws.

When GUIs became commercially viable, the dramatically larger space of interaction design possibilities shattered the dream of a comprehensive cognitive model. Theory retreated. Observations or experiments could seek either to improve a specific interface or to yield results that generalize across systems. The former was less often motivated by publication, and as the field became more academic, it was less likely to be judged as meriting publication.

Action research is an approach [1] that combines specific and general research goals. Whereas conventional research sets out to understand, support, or incrementally improve the current state, action research aims to change behavior substantially. Action researchers intervene in existing practice, often by introducing a system, then studying the reaction. Responses can reveal aspects of the culture and the effectiveness of the intervention. This is a good option for phenomena that can’t be studied in a lab and for which small pilot studies won’t be informative. A drawback to action research is that it is often undertaken with value-laden hypotheses that can undermine objectivity and lower defenses against the implacable enemy of qualitative researchers, confirmation bias. Action research is often employed in cultures not shared by the researchers. It is at one end of a continuum that reaches conventional research. My research was not intervention-driven; it has sought to understand first, then improve.

Publication goals

The goals of publication partly mirror the goals of research. Publication can contribute to a model, framework, or theory. It can help readers who face situations or classes of problems that the researchers encountered. Publication can enable authors to earn academic or industry positions, gain promotion or tenure, attract collaborators or students, astonish those who thought we would never amount to much, or become rich and famous. All worthy goals, and not mutually exclusive. Many are in play simultaneously—it’s nice when a single undertaking contributes to diverse goals.

An examined life

Returning to the question of why my favorite work is unpublished, aiming for an immediate effect introduces constraints and alters priorities, but it doesn’t preclude careful planning for subsequent analysis. Steve Benford and his colleagues staged ambitious public mixed reality events under exacting time and technology pressure, yet they collected data that supported publication.

Early in my career, my research was motivated by mysteries and roadblocks encountered while doing other things. There were a few exceptions—projects undertaken to correct a false conclusion in a respected journal or to apply a novel technique that impressed me. Arguably I was too driven by my context, afflicted by a professional ADHD that distracted me from building a coherent body of work. On the other hand, it ensured a degree of relevance in a dynamic field: Some who worked on building a large coherent structure found that the river had changed course and no longer flowed nearby.

My graduate work was in cognitive psychology, not HCI. I took a neuropsychology postdoc. My first HCI experiment was a side project, using a cool technique and a cool interface widget in a novel way, described below. The second aimed to counter an outrageous claim in the literature. The third explored a curious observation in one of the several conditions of the second experiment.

I left research to return to my first career, software development. There, challenges arose: Why did no one adopt our multi-user features and applications? Why were the software development practices of the mid-1980s so inappropriate for interactive software? Not finding answers in the literature, I gravitated back to research.

I persevered in researching these topics, but distractions came along. As a developer I first exhorted my colleagues to embrace consistency in interface design, but before long found myself often arguing against consistency of a wrong sort. I published three papers sorting this out. I was lured into studying HCI history by nagging questions, such as why people managing government HCI funding never attended CHI, and why professional groups engaged in related work collaborated so little.

Some computer-use challenges led to research projects. My Macs and PCs were abysmal in exploiting two-monitor setups; what could be done to fix that? As social media came into my workplace in the 2000s, would the irrational fear of email that organizations exhibited in the 1980s recur? Another intriguing method came to my attention: Design teams investing time in creating fictional characters, personas. Could this really be worthwhile? If so, when and why? And each of the three favorite projects arose from a local disturbance.

Pressure to publish. Typically a significant motivation for researchers, publication pressure was something I escaped my entire career. My first week of graduate school, my advisor said, “Be in no hurry. Sit on it. If it’s worth publishing, it will be worth publishing a year later.” Four years later, too late to affect me, he changed his mind, having noticed that job candidates were distinguished by their publications. There is no telling how publication pressure would have directed me, but the pressure to finish a dissertation seemed enough.

I published one paper as a student. My first lab assignment was to report on a Psychological Review article that proposed an exotic theory of verbal analogy solution. Graduate applications had required Miller’s Analogy Test, and I saw a simpler, more plausible explanation for the data. I carried out a few studies and an editor accepted my first draft, over the Psych Review author’s heated objection. This early success had an unfortunate consequence—for some time thereafter, I assumed that “revise and resubmit” was a polite “go away” rejection.

Publication practices pushed me away from neuropsychology. I had been inspired by A. R. Luria’s monographs on individuals with unusual brain function. I loved obtaining a holistic view of a patient—cognitive, social, emotional, and motivational. The standard research approach was to form a conjecture about the function of a brain region, devise a short test, and administer it to a large set of patients. Based on the outcome, modify or refine the conjecture and devise another test. This facilitates a publication stream but it didn’t interest me.

The early CHI and INTERACT conferences had no prestige. Proceedings were not archived; only journals were respected. There was not yet an academic field; most participants were from industry. Conferences served my goal of sharing results with other practitioners who faced similar problems. It was not difficult to get published. Management tolerated publishing but exerted no pressure to do it.

When I returned to academia years later, conferences had become prestigious in U.S. computer science. Like an early investment in a successful startup, my first HCI publications had grown sharply in value. I continued to publish, but not under pressure—I already had published enough to become full professor.

I did however encounter some pressure not to publish results along the way.

“Sometimes the larger enterprise requires sacrificing a small study.”

My first HCI study adapted an ingenious Y-maze that I saw Tony Deutsch use to enlist rats as co-experimenters when I was in grad school. It measures performance and preferences and enables the rapid identification of optimal designs and individual differences. I saw an opportunity to use it to test whether a cool UI feature designed by Allan MacLean would lure some people away from optimally efficient performance. It did.

A senior colleague was unhappy with our study. The dominant HCI paradigm at the time modeled optimal performance for a standard human operator. If visual design could trump efficiency and significant individual differences existed, confidence in the modeling endeavor might be undermined. He asked us not to publish. I have the typical first-born sibling’s desire to please authority figures, so it was stressful to ignore him. We did.

A second case involves the observations from software development that design consistency is not always a virtue. I presented them at a workshop and a small conference, expecting a positive reception. To my dismay, senior HCI peers condemned me for “attacking one of the few things we have to offer developers.” My essay was excluded from a book drawn from the workshop and a journal issue drawn from the conference. I was told that it had been discussed and decided that I could publish this work elsewhere with a different title. I didn’t change the title. “The case against user interface consistency” was the October 1989 cover article of Communications of the ACM.

Most obstacles to publishing my work came from conference reviewers who conform to acceptance quotas of 10%-25%, as though 75%-90% of our colleagues’ work is unfit to be seen. It is no secret that chance plays a major role in acceptances—review processes are inevitably imprecise. A few may deny the primacy of chance—was your paper assigned to generous Santas or annihilators? Was it discussed before lunch or after lunch? And so on, just as a few deny human involvement in climate change, and probably for similar reasons. One colleague argued that chance is OK because one can resubmit: “Noise is reduced by repeated sampling. I think nearly every good piece of work eventually appears, so that the only long-term effect of noise is to sprinkle in some bad work (and to introduce some latency in the system)” [2].

Buy enough lottery tickets and you will win. In recent years, few of my first submissions have been accepted, but no paper was rejected three times. However, there are consequences. Rejection saps energy and good will. It discourages students. It keeps away people in related fields. Resubmission increases the reviewing burden.
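The repeated-sampling argument can be put in concrete terms. A minimal sketch, with acceptance rates chosen purely for illustration (they are not data from this essay):

```python
# Illustrative only: probability that a paper is eventually accepted when
# each (re)submission is an independent draw with acceptance rate p.
# The rates and round counts below are hypothetical.

def p_accepted_within(p: float, rounds: int) -> float:
    """Chance of at least one acceptance in `rounds` submissions."""
    return 1 - (1 - p) ** rounds

for p in (0.10, 0.25):
    summary = ", ".join(
        f"{k} tries: {p_accepted_within(p, k):.0%}" for k in (1, 2, 3, 4))
    print(f"acceptance rate {p:.0%} -> {summary}")
```

Under these toy assumptions, a 25% acceptance rate gives a paper a better-than-even chance after three submissions, while at 10% even four tries leave it more likely rejected than accepted, consistent with the claim that noise mostly adds latency while taxing authors' and reviewers' time.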

The status quo satisfies academics. More cannon fodder, new students and assistant professors, replace the disheartened. But for research with an action goal, such as that which I haven’t published, the long-latency, time-consuming publication process has less of a point.

This is an intellectual assessment. There is also an emotional angle. Some may regard a completed project that strived for an immediate impact dispassionately as they turn to the next project. I don’t. Whatever our balance of success and failure, I treasure the memories and the lessons learned. Do I hand this child over to reviewers tasked with killing 75% of everything that crosses their path? Or do I instead let her mingle with friends and acquaintances in friendly settings?


1. Or set of approaches: Different adherents define it differently.

2. David Karger, email sent June 24, 2015.


Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Scripted interaction

Authors: Mikael Wiberg
Posted: Mon, October 05, 2015 - 10:51:06

"Interaction design" is a label for a field of research and for a practice. When we design interactive tools and gadgets we do interaction design. But what is it that we´re designing? And is this practice changing? Let me reflect on this a little bit.

Inter-action design

If we take this notion of interaction and split it into two words, we get inter and action. From this viewpoint, inter-action design is about designing for the inter-actions to be supported by a computer system, between a computational device and a person. This leads to a design paradigm focused on which actions a particular design should support, and on how to trigger those supported actions via an interface. In short, interaction design becomes an issue of mapping supported actions to an understandable user interface. Of course, we can add to this design paradigm that the menus, buttons, etc. designed to trigger these actions should be logically placed, easy to understand and use, and so on. Clearly, much of traditional HCI/interaction design relies on this paradigm, ranging in focus from interface design to issues of usability. From this viewpoint, interaction design is about arranging computational devices to support actions between users and their computers. Human-computer interaction (HCI) was a perfect label for describing the main entities in focus for this design paradigm—in particular, how to design interfaces for efficient interactive turn-taking between humans and machines.
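This idea of mapping supported actions to an understandable interface can be made concrete with a toy sketch. The menu labels, actions, and document names here are hypothetical illustrations, not drawn from any real system:

```python
# A toy model of the "inter-action" paradigm: the system supports a fixed
# set of actions, and the interface's job is to map understandable
# controls onto them. All names here are made up for illustration.

supported_actions = {
    "save": lambda doc: f"saved {doc}",
    "print": lambda doc: f"printed {doc}",
}

# The interface layer: what the user sees, mapped to the action it triggers.
menu = {"File > Save": "save", "File > Print": "print"}

def trigger(menu_item: str, doc: str) -> str:
    """One turn-taking step: a user gesture on the interface invokes an action."""
    return supported_actions[menu[menu_item]](doc)

print(trigger("File > Save", "report.txt"))
```

The design questions of this paradigm live almost entirely in the `menu` layer: which labels, placements, and groupings make the mapping from interface to action easy to understand.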

Interaction design

But interaction design might not only be about designing interfaces for triggering certain supported actions. As we can see from the HCI and interaction design research community, interaction design is also very much about experiences and experience design. Given such a perspective on interaction, we can think about interaction design not only in terms of designing the interfaces that trigger actions between entities, but also in terms of designing the form of the interaction per se: Should the interaction be rapid or slow? Frequent (in terms of user involvement)? Should it involve massive amounts of information? When we start to think about interaction design from the viewpoint of designing not only the interfaces for interaction but the interaction per se, we shift focus from interface design to issues of experience, perception, emotion, bodily engagement, etc. With a focus on designing the interaction, we then need to add a focus on how we experience the interaction we are designing. Again, in relation to this design paradigm, we should acknowledge that our community has a stable ground for doing interaction design from this perspective, including for instance a focus on experience design, embodied interaction design, etc.

Less and less interaction?

However, we are now facing a new trend and a new challenge for interaction design. With the advent of ubiquitous computing, robotics, the Internet of Things, and new digital services like IFTTT (If This Then That), it is likely we will interact less and less with digital products (at least in terms of the frequent and direct turn-taking between users and computers in typical interactive sessions, as we have thought about interaction design for more than a couple of decades now). In fact, we do not want to frequently interact with our robotic lawn mower, much less experience any interaction with this computational device. We just want it to work. If we are forced to interact with it, it is probably due to a breakdown. In relation to this scenario, does interaction design then become a practice of designing for less (direct) interaction, or ultimately almost no interaction at all? (For a longer and more in-depth discussion of “less and less interaction,” see the paper “Faceless Interaction” [1].) However, this does not mean that we are moving into an era where interaction design is not needed. On the contrary(!), if we are going in the direction of having less and less direct contact with the computational stuff surrounding us in our everyday lives, then we need an interaction design paradigm that can guide us toward the design of well-functioning tools, objects, and devices that can live side by side with us in our everyday lives. So, how can we go about doing such interaction design?

A proposal: A focus on scripted interaction

In HCI and interaction design research we have a long practice of studying practices and then designing interactive systems as supportive tools for those practices. Typically this has led us toward a “tool perspective” on computers, looking for ways of supporting human activities with the computer as a tool. As we switch paradigms, from the user as the active agent doing things supported by computational tools toward a paradigm that foregrounds the computer and how it does things on behalf of its user/owner, we can no longer continue to focus on designing the turn-taking with the machine (in terms of interface design, etc.), nor can we focus on designing how we should experience this turn-taking, these interfaces, or the device. Instead, and here is my proposal, we need to develop ways of doing good scripted interaction design, in terms of how the computational device can carry out certain scripted tasks. Of course, these should not be “dumb scripts,” but rather scripts that can take into account external input, sensor data, context-aware data, etc. As a community, we already have techniques for developing good scripts from thinking about linked actions. For instance, we are used to thinking about user scenarios, and have even developed techniques for doing storyboards. However, we also need techniques for thinking about chains of actions carried out by our computational devices; we need tools for simulating how various interactive tools, systems, and gadgets can work in concert; we need tools and methods for designing interaction across services; and we need interactive tools for examining and imagining how these systems will work and when breakdowns can occur. As we stand in front of this development and the design challenges ahead, we can also move forward in informed ways.
For instance, the development of scripted interaction design methods, techniques, and approaches can probably find a good point of departure in the book Plans and Situated Actions by Lucy Suchman [2]. While our community always seems to look for the next big thing in terms of tech development, we can simultaneously feel secure in the fact that our theoretical grounding will somehow show us the way forward!
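What such a script might look like can be sketched minimally. The rule engine, sensor readings, and the lawn-mower behaviors below are hypothetical illustrations, not a real device API:

```python
# A minimal sketch of "scripted interaction": rules that act on sensor and
# context data without direct user turn-taking. All sensors, conditions,
# and actions here are made up for illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # reads current sensor/context data
    action: Callable[[], str]           # what the device does when it holds

def run_rules(rules: list, sensors: dict) -> list:
    """Fire every rule whose condition holds for the current readings."""
    return [rule.action() for rule in rules if rule.condition(sensors)]

# An IFTTT-style script for a robotic lawn mower: mow only when it is dry
# and light out; return to the dock when the battery runs low.
rules = [
    Rule("mow", lambda s: not s["raining"] and s["daylight"], lambda: "mowing"),
    Rule("dock", lambda s: s["battery"] < 0.2, lambda: "returning to dock"),
]

print(run_rules(rules, {"raining": False, "daylight": True, "battery": 0.8}))
```

The interaction design work here is not an interface at all: it is choosing the conditions, the chains of actions, and what happens when rules conflict or break down, which is exactly where simulation and scenario techniques would be needed.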


1. Janlert, L-E., & Stolterman, E. (2015). Faceless Interaction - a conceptual examination of the notion of interface: past, present and future. In Human–Computer Interaction, Vol. 30, Iss. 6, 2015.

2. Suchman, L. (1987 ) Plans and Situated Actions: The Problem of Human-Machine Communication, Cambridge University Press.


Mikael Wiberg

Mikael Wiberg is Professor of Informatics in the Department of Informatics at Umeå University, Sweden.


Go West!

Authors: Deborah Tatar
Posted: Tue, September 01, 2015 - 4:18:05

As a professor, I now get to witness young people aspiring to “go West.” They know the familiar trope “Go West, young man,” ascribed to the 19th-century publisher Horace Greeley. They inherit the idea of manifest destiny, even when the term itself was buried in a single paragraph of their high-school American history textbooks. They have heard of the excitement of Silicon Valley, the freedom of San Francisco, the repute of Stanford, and perhaps experienced the beauty of the Bay with its dulcet breeze and the sun that seems to transmit energy to young, healthy human animals directly through the skin. Seattle, Portland, Vancouver: a bit darker and wetter, but all romantic technology and design destinations, infused with maker and open-source ideology, and even the tragically hip. And then there is the far, far West—although neither Taiwan [1] nor Shenzhen [2] is really on my students’ minds yet. Go West and seek your future!

Yet the advice to go West is associated with a darker kind of idealism: a tall, thin, awkward young man in WWI England haltingly explaining his enlistment, saying that he will “go West” [3]—that is, die—proudly if he must. As Siegfried Sassoon wrote, witheringly from the trenches: “You'd think, to hear some people talk/ That lads go West with sobs and curses/ … But they've been taught the way to do it/ Like Christian soldiers; not with haste/ And shuddering groans; but passing through it/ With due regard for decent taste.” (S. Sassoon, How to Die). In going West, what are my undergraduate students going into? Is it trench warfare or all Googly-eyes?  

Luckily for my conscience, my role does not really much matter. It does not much matter what I say because these undergraduates already have a fixed image. And they are not the only ones. Indeed, I started writing this in chagrin after a recent conversation with my own mother. She was excited and shocked to report that she had learned that Steve Jobs was a Deeply Flawed Human Being. I tamped my reaction to this news down to the utterance, “This is not a surprise.”

(This is the first of six or so posts that will have to do with our design choices, the settings in which we make them, and, especially, the position of women.)


1. Bardzell, S. Utopias of participation: Design, criticality, and emancipation. In Proceedings of the 13th Participatory Design Conference: Short Papers, Industry Cases, Workshop Descriptions, Doctoral Consortium Papers, and Keynote Abstracts 2 (Oct. 2014), 189–190. ACM, New York.

2. Lindtner, S., Greenspan, A. and Li, D. (2015) Designed in Shenzhen: Shanzhai manufacturing and maker entrepreneurs. In Proceedings of the 2015 Critical Alternatives 5th Decennial Aarhus Conference (2015), 85–96.

3. Cambridge Idioms Dictionary, 2nd ed. S.v. "west." (Retrieved Aug. 30, 2015 from


Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.


Building troops

Authors: Jonathan Grudin
Posted: Wed, August 19, 2015 - 10:44:32

“Will you miss Khaleesi?” asked Isobel. At that moment, the samango urinated on Eleanor’s shoulder. “Ummm, yes?” Eleanor replied.

The primates at the Riverside Wildlife Rehabilitation Centre outside Tzaneen, South Africa were abandoned or confiscated pets, survivors of massacres by farmers who left newborns behind, injured by predators or motor vehicles, and so on.

Unexpectedly, the principal task of the volunteers (including my family in early July) on whom the center relies is not rehabilitating individual animals. It is building troops—establishing contexts that encourage and strengthen appropriate social behaviors in groups of vervet and samango monkeys and baboons. Many have no history of interacting with others of their species. For most volunteers, it is deeply engaging. Many extend their planned stays or return. Alpha volunteer Ben initially came for eight weeks; after two weeks he extended his tour to several months and was partway through a year-long return visit.

More predictably, a consequence of working closely with other primates is to reflect on similarities and differences between them and us. This yielded an insight for my work on educational technology design.

The cast

We joined three permanent staff members, a handful of local contract workers, about 35 volunteers, 140 baboons, 17 samango or Syke’s monkeys, and over 300 vervet monkeys. The 30 hectares are also home to several dogs (who get along fine with the monkeys), chickens, two ostriches, and wild troops of baboons and vervets. The volunteer population fluctuates seasonally, ranging from about 5 to 35. Primate numbers usually fluctuate slowly; most animals spend years there. However, a release into the wild of a troop of 80 baboons was imminent.

Covered wire-mesh enclosures for juvenile primates [1] comfortably hold a few dozen of them and us. They include constructions of suspended two-inch diameter stripped hardwood branches and bars on which the primates can race around. Larger enclosures house the older “middles,” in which troops are formed. The still larger pre-release camps are homes to the increasingly self-sufficient troops that are approaching release into carefully planned wild locations. These animals are often out of human view. Fences with an electrified top strand surround uncovered enclosures.

Major tasks

Food prep. Five hundred primates eat a lot. Tzaneen is in a major agricultural region. Riverside gets free or discounted imperfect food. Papayas, bread, cabbage, bananas, eggplants, oranges, and grapefruit arrive in truckloads and must be offloaded, stored in crates, and sorted daily for ripeness. Morning food prep is a major task: washing food and cutting it with machetes into different sizes for delivery in large bowls and crates to juvenile, middle, and main camps, with smaller quantities for quarantine and clinic enclosures that house a handful of animals. Milk with a probiotic is prepared for seven very young baboons and Khaleesi, the infant samango. Dozens of crates, machetes, and the food prep area are scrubbed or hosed down every day.

Enclosure cleaning. Equally important and laborious is the dirty, olfactory-challenging job of cleaning juvenile enclosures: removing food remains and excrement from everything and scrubbing it all down.

Monkey time. The other major task is to play with primates, particularly the juveniles. A group of volunteers sit in a cage [2], talking for an hour or more while baboons or vervets eat, race around, climb on the branches or the volunteers, inspect and present to the volunteers, and so on. Volunteers soon recognize the personality differences of different primates—and vice versa, baboons and vervets have favorite volunteers. We are teaching troop behaviors, taking the place of adult baboons or vervets and gently keeping them in line. The power of troop identification is demonstrated in daily walks to a distant pool. A juvenile baboon that escapes an enclosure may avoid recapture for days, but when a group of us let out all 11 baboons and walk to the pool, they race around freely (or climb up a volunteer to be carried) but do not try to escape. We then sit around the pool and talk as they play, racing around, mock-fighting, climbing trees, jumping onto and off of us, stealing sunglasses or hairbands, leaping into the pool, exploring odd things they find, and getting into mischief. After an hour we walk back, and they stay with us, a protective troop, and they return without protest to their enclosure. Social behavior is constant—individual baboons approach volunteers and present in different ways; we are told how to respond and what to avoid doing. Ben, the most familiar human, often had four or five baboons climbing on him as we walked along.

Other tasks. We harvest edible plants from areas outside the enclosures to supplement what grows inside the larger ones and to familiarize the primates with what will constitute their fare in the wild. Another task is to assess troop health: We walk to the large pre-release enclosure, select a primate, and observe it for half an hour, coding its behavior in two-minute intervals. Other tasks arise less frequently, such as inserting a microchip for location monitoring into a new arrival or the ingenious process of introducing a new primate to a troop. And as discussed below, teaching newer volunteers the details of the tasks just described is itself a major task.

The fourth primate

In an unspoken parallel to baboon and monkey preparation, Riverside forms effective troops of volunteers, fostering skills that will serve them well when they leave. A month visit can be invaluable for resilient young people taking a gap year or a time-out to reflect on their careers. My family were outliers—most volunteers were single, with some young couples and siblings. Almost all were 15-35, two-thirds were women, and they came from the UK, Netherlands, USA, Belgium, Switzerland, Israel, and Norway. A chart in the dining room listed past involvement from dozens of other countries. Some volunteers had prior experience in centers focused on lions, elephants, sharks, and other species. We had previously visited a rescue center in Costa Rica.

Most days a volunteer or two left and others arrived, creating an interesting dynamic. Getting newbies up to speed on all tasks is critical. After a week, volunteers know the tasks. After two weeks, they are experts. After three weeks, most of those with more experience have left, and they become group leaders. Group leaders are responsible for training new arrivals and getting crucial tasks accomplished. This means guiding a few less-experienced volunteers who have different native languages, backgrounds, and personalities. We saw volunteers grow in confidence and leadership skills before our eyes. Rarely do people develop a sense of mastery, responsibility, and organizational dynamics in such a short time while doing work that makes a difference. As a side benefit, future parenting of a messy, temperamental, dependent infant will not be intimidating, although this could differ for volunteers in a shark or lion rehab center.

Children and adults

Years ago, I watched schoolchildren who in large numbers shared an unusually long multi-floor museum escalator with me. I did one thing—watch them—but the kids were whirlwinds of activity, talking with those alongside, behind, or in front of them; hopping up or down a few steps; taking things from backpacks to show others; looking around and spotting me looking at them; and so on. In a few minutes, most shifted their attention a dozen times or more.

Juvenile primates are like that. One found a bit of a mostly-buried metal connector next to me at the pool and pulled at it, then quickly dug out the dirt around it, pulled more, brushed it off, pulled again, and then raced off to chase another baboon. Anything they found or stole, they examined curiously as they ran off with it, then dropped it. Their main focus was each other and us. Whether leaping in the pool or climbing trees, they tended to do it in groups, chattering constantly. When one of us retrieved a large hose they were trying to drag off, they looked for an opportunity to mischievously make off with it again.

The adult baboons in the pre-release troop are different. They usually walk slowly and focus at length on one task—harvesting and eating flowers or pods, sitting and surveying the compound, and so on. They interact less frequently—usually amicably, but not playfully. Adults and juveniles seem like different species.

Adult baboon software developers would have trouble designing tools that delight juvenile baboons.

Implications for design

Watching them, I realized that although I’ve spent hours sitting in classrooms, I’d not thought holistically about a troop of children. “Ah,” you might say, “but it’s obvious, we know children are different. We were once kids. Many of us have kids.”

No, it’s not obvious. Differences are there, but we don’t see them. Painters depicted children as miniature adults for centuries before Giotto (1266?-1337), famous for naturalistic observation, painted them accurately, with proportionally larger heads. Today, a child is often seen as a partially formed adult, with some neural structures not yet wired in—a tabula rasa on which to write desirable social behaviors. This focuses on what children are not. It overlooks what they are, individually and collectively.

Without unduly stretching the analogy, children monitor “alpha males and females” and figuratively hang on to favorites much as the juvenile baboons literally hung onto Ben. Video instruction will not replace teachers for pre-teens. Kids explore feverishly, test boundaries, learn by trial and error and from cohort interactions, and get into mischief more than adults do. They wander in groups, carrying backpacks stuffed with books, folders, and tools.

I hadn’t seen these distinctions as comprising a behavioral whole. Some, I hadn’t considered at all.

Tablets. For adults, carrying tablets everywhere is inconvenient. Making do with a mobile phone is fine. Seeing students as a troop, arriving at school with stuffed backpacks, it sank in for the first time that for them, adding a tablet could be no problem. With digital books, notebooks, and tools available, a tablet can reduce the load.

Learning through trial and error. Adults, including most educational technology designers, use pens, not pencils. I collected the pencils left around our house by Eleanor and Isobel. The problem isn’t finding a pencil, it is finding one with some eraser left on. Trial and error is endemic in schoolwork; a digital stylus with easy modeless erasing fits student behavior.

Color. Adults may forget that when they were children, they too were fascinated by colors. When collecting pencils around the house, I also found many color pencils and markers of different sizes. Unlimited color space and line thickness also come with digital ink.

Cohort communication. The pace of communication among students might be more systematically considered. Teachers often first hear about new applications from students. One Washington school district designates “Tech Minions” to exploit this information network.

Conclusion: the significance of troops

More examples of student-adult distinctions that inform the design of technology for students could be identified. More broadly, this experience brought into focus the salience of social behavior. Studies of primate intelligence focus on tool use—rocks to crack nuts or thorns to spear insects. Useful, but these guys eat a lot of things. Finding food may be a lower priority than avoiding being food: for hyenas, wild dogs, lions and other cats on the ground, for eagles, snakes, and leopards when arboreal. They organize socially and observe other species to increase safety. They work as a group when threatened. They have a range of sophisticated social behaviors. Could Riverside’s process for inserting a baboon peacefully into a troop be adapted for bringing on a new development team member?

Ummm, yes?


1. At Riverside the younger primates are called “babies.” I call them “juveniles” here because, although less than a year old, they are actively curious and mischievous, climb trees, fight and play together, and recognize a range of people and conspecifics—often resembling children or young teens. My daughter Isobel firmly notes that at times they act like babies.

2. Although called cages, these enclosures of about 15’ by 15’ by 10’ are spacious given the small size of the residents and their ability to climb the walls or retreat up into the branches.

Thanks to Eleanor Grudin and Isobel Grudin for Riverside details and reports from the juvenile enclosures, Gayna Williams for planning the trip, and Michelle Vangen for an art history assist.

Posted in: on Wed, August 19, 2015 - 10:44:32

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

Central control

Authors: Jonathan Grudin
Posted: Tue, July 14, 2015 - 11:14:10

Two impressive New Yorker articles described powerful, laser-focused leaders whose vision affects technology design and use. Xi Jinping consolidated control of the world’s largest country. Jonathan Ive did so at the world’s most valuable company. Both have reputations for speaking frankly while avoiding impolitic statements. They listen, then make decisions confidently. Each is a good judge of character who assembled a highly capable and loyal inner circle. They triumphed through strategic, non-confrontational politics.

“Before Xi took power, he was described, in China and abroad, as an unremarkable provincial administrator… selected mainly because he had alienated fewer peers than his competitors” [1].  Xi volunteered for geographically remote posts that distanced him from the Beijing struggle—until a call came for a new face that would address corruption.

“Ive’s career sometimes suggests the movements of a man who, engrossed in a furrowed, deferential conversation, somehow backs onto a throne” [2].

Once enthroned, each extended his control with a clear sense of purpose.

Some analysts argue that centralized management is not viable in the complex 21st-century global economy. In the late 20th century, Ronald Reagan, Lee Kuan Yew (the autocrat who created modern Singapore), Steve Jobs, and Bill Gates had enormous impact. Today, presidents, prime ministers, sultans, and sheiks seem to struggle as large corporations drift from one CEO to another. However, the careers of People’s Republic of China President Xi Jinping and Apple Chief Design Officer Jonathan Ive suggest an effective new leadership style.

The roused lion and the Internet

Xi Jinping set out to attack corruption, address pollution and environmental threats, assert China’s place on the world stage, and maintain economic growth. The first two goals will take time. Xi chose initial steps toward them that also consolidated his power.

The understandable third and unsurprising fourth goals serve rational Chinese interests, but they will conflict at times with goals of other countries. Napoleon once described China as a sleeping lion. Xi remarked, “The lion has already awakened, but this is a peaceful, pleasant, and civilized lion.” We hope so. The treatment of China’s ethnic minorities undermines confidence. China’s efforts to control disputed islands are domestically popular but risk destabilizing the world. Should China’s economy falter, distracting the public with assertiveness abroad could be a temptation.

A few months ago China shut down all internal access to a range of Internet sites, including Bloomberg, Reuters, and The New York Times, as well as Facebook, Twitter, and Google. Filters had previously blocked these sites, but the filters could be circumvented through virtual private networks (VPNs). Communist China has long granted special privileges to a hierarchical elite group, of which Xi was a member from birth. The elite are trusted to behave. The government decided that too many untrusted actors had acquired VPN access.

Restricting information to an elite is not new. For centuries, the Catholic Church tolerated discussion and dissent by an elite whose VPN was Latin and Greek. The masses were excluded. Indeed, to prevent the public from seeing how far church practice had strayed from scripture, translating the Bible from Latin to a living language was a crime more serious than murder [3]. The first person to publish an English Bible translation was executed. Martin Luther declared war by posting proclamations in German, not Latin.

That was in the age of monarchs. How widespread is elite governance today? Chinese leaders grew up together, many British leaders attended Eton, and all nine current Supreme Court Justices plus all U.S. presidents since Reagan are alumni of Harvard or Yale [4]. Five million Americans—more than 1%!—have security clearances that permit them to access information but not discuss it publicly. Differences are mostly of degree. No doubt many in Chinese military, intelligence, and economic planning retain Internet access, even apart from those using it to hack our systems. 

To maintain economic growth, China may steal intellectual property through hacking and informants. Annoying, but again, nothing new. Americans stole carefully guarded British water power intellectual property to launch our industrial revolution. The French bugged high-price Air France seats to collect economic information. A colleague at a previous job pretended to drink alcohol at business gatherings to catch gems that fell from loosened lips. But it’s not fun when it’s your turn to be victim.

Will curtailed Internet access pay off for China? Suppressing dissent could build consensus and national spirit—would that compensate for reduced innovation and nimbleness? Mass media enjoys focusing on technology negatives, but many of us appreciate the information access provided by Wikipedia, YouTube, and Facebook. China could drive hundreds of thousands of budding entrepreneurs to emigrate. Of course, that would leave a billion people to soldier on. Mao reportedly said that China should kill one person in a thousand to remain unified. A more civilized, pleasant lion might send them into exile, perhaps buying time to prepare the country for greater openness. We’ll see.

Empowered design

Steve Jobs was forced out of Apple in 1985. When he returned in 1997, he gave Jonathan Ive, an award-winning designer who had joined in 1992, responsibility for the group that designed the iMac, iPod, iPhone, and iPad hardware. After Jobs’ death, Ive’s control extended to software and architecture. Ian Parker describes Ive working tirelessly with a tight-knit group and a clear vision, focusing intensely on Apple’s first major post-Jobs product, the Apple watch.

In April, the watch was available for pre-order. In May, Ive became Chief Design Officer, Apple’s third chief officer. Lieutenants assumed responsibility for the day-to-day hardware and software operations. Ive’s focus was said to be Apple’s ambitious building project, but that seemed well underway months earlier. Is Ive focused on an unannounced project, such as Apple’s rumored automobile? Will he remain broadly engaged despite his lieutenants’ promotions, or did the watch project exhaust Ive, who reportedly took his first long vacation when it was done? In any case, more than Apple’s financial returns may ride on the success of the watch.

Does direct contact with users have a future?

Steve Jobs thought not.

Substantial human factors/usability engineering went into the Macintosh. Most of it was done at Xerox PARC. An eyewitness account depicted in the film Jobs starts with Steve Jobs accusing Bill Gates of stealing the Mac GUI. “Well, Steve, I think there’s more than one way of looking at it,” Gates replies. “I think it’s more like we both had this rich neighbor named Xerox, and I broke into his house to steal the TV set and found out that you had already stolen it.”

Apple hired Xerox’s brilliant advocate of user testing, Larry Tesler. But only after Jobs was forced out in 1985 was Tesler able to form a Human Interaction Group. Between 1985 and 1997, Apple built two HCI groups, one in the product organization and one in “advanced technology” or research. Joy Mountford arrived in early 1987, recruited outstanding contributors with backgrounds ranging from psychology to architecture and theatre, and led Apple in staging bold demonstration projects at CHI, publishing research papers and the influential book The Art of Human-Computer Interface Design, and forging new links between HCI and the fields of design and film. In 1993, Apple hired Don Norman, who became the first executive with a User Experience title.

The meteoric rise of HCI at Apple was followed by a faster descent when Jobs returned in 1997 in the midst of financial challenges. Jobs laid off Don Norman and the researchers, reportedly telling one manager, “Fire everyone, then fire yourself.” He eliminated the HCI group in the product division. Some graphic designers were retained, but of the usability function, Jobs reportedly said, “Why do we need them? You have me.”

For almost two decades, Apple has been absent from HCI conferences. Ian Parker describes Apple engineers walking about waving prototype watches to see if they behave, but systematic user testing has not been a priority. Consequences include products such as the hockey-puck mouse, which quickly disappeared, and performance problems more generally.

Nevertheless, Apple has succeeded spectacularly. This success calls into question the value of direct interaction with users. HCI conferences such as CHI, HCII, and INTERACT draw thousands of submissions and attendees every year. Apple ignores them, yet outperforms rivals who participate. How do we explain this? Possibilities:

  1. User research is important. Apple got lucky. The string of successes was a run of dice rolls that also came up with the Lisa, the Newton, and Apple TV.

     This is possible, but it would be unfortunate because few will believe it. Management experts are like the drug cartel chiefs in a Ridley Scott movie, about whom one character says ominously, “They don’t really believe in coincidences. They’ve heard of them. They’ve just never seen one.”

  2. User research is no longer cost-effective. Brilliant visual designers, telemetry, and agile methods that enable rapid iteration with a delivered product can fix problems quickly enough.

     This would suggest that much of the HCI field risks being an academic promotion mill that could implode, leaving the job to visual design and data analytics.

  3. User research isn’t needed for consumer products for which everyone has intuitions. Brilliant visual design and branding provide the edge. User research may be needed when developing for vertical markets or enterprise settings where we lack experience.

     This is probably at least part of the explanation. Visual design was long suppressed by the cost of digital storage. When Moore’s law lifted that constraint, a surge of innovation followed. From this perspective, the prominence of design could be transient or could remain decisive. For established consumer products like automobiles and kitchen appliances, design is a major partner. In more specialized domains, equilibrium will take longer to reach.

  4. User research is important unless someone with extraordinary intuitive genius is in control. Apple did just need Jobs.

     Ive and Jobs collaborated intensely for a decade, designer and design exponent. If Jobs had intuitive insight into public appeal and channeled the designers, guiding Apple and Pixar products to success, what happens now that half the team is gone?

Ive personally gravitates toward luxury goods. He drives a Bentley Mulsanne, described by Parker as “a car for a head of state,” flies in a personal jet, buys and builds mansions, hangs out with celebrities, and designs unique objects for charity auctions. Does he have deep insight into the public? The nature of Steve Jobs’ contribution or genius may be revealed by the subsequent trajectory of his partner.

If the smartwatch succeeds, (2) and (3) are supported, (1) and (4) are not. Initial reports are mixed. A class of master’s degree students peered at me over their Macbooks a few weeks ago. “Who plans to buy an Apple watch?” I asked. No hands went up. “That seems to have come and gone,” one said. But some media accounts hail the watch as a runaway success. It is early: The iPod was slow to take off and the Mac itself floundered for 18 months, a factor in Jobs being fired in 1985 [5].

Are design and telemetry enough?

Apple’s success in relying on design has been noticed by rivals. At Microsoft 10 years ago, user research (encompassing usability engineering, ethnography, data mining, etc.) was a peer of visual design. Today, it is subordinate: Most user researchers, re-christened design researchers, report to designers. Fewer in number, most have remits too broad to spend much time with users. In another successful company, a designer described user researchers as “trophy wives,” useful for impressing others if you can afford one.

Management trusts telemetry and rapid iteration to align products and users. Usability professionals discovered long ago that to be most effective they should be involved in a project from the beginning, not asked at the end “to put lipstick on a pig.” Telemetry and iteration propose to fix the pig after it has been sold and returned. It can work with simple apps and easygoing customers, but not when an initial design led to architecture choices that block easy fixes.

Useful telemetry requires careful design and even then reveals what is happening, but not why. Important distinctions can be lost in aggregated data. Telemetry plays into our susceptibility to assume causal explanations for correlational data, which in turn feeds our tendency toward confirmation bias. Expect to see software grow increasingly difficult to use. A leading HCI analyst wrote, “Apple keeps going downhill. Their products are really difficult to use. The text is illegible. The Finder is just plain stupid, although I find it useful to point out the stupidities to students—who cannot believe Apple would do things badly.” As Apple’s approach is emulated, going downhill could become the new normal.
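The loss of distinctions in aggregated data is easy to demonstrate. A minimal sketch with invented event data and cohort names: the overall task-completion rate looks healthy while one cohort fails badly, precisely the what-without-why gap described above.

```python
# Hypothetical telemetry events: (cohort, completed_task). The cohorts and
# numbers are invented to illustrate how aggregation hides subgroups.
events = [
    ("new_user", False), ("new_user", False), ("new_user", False),
    ("new_user", True),
    ("power_user", True), ("power_user", True), ("power_user", True),
    ("power_user", True), ("power_user", True), ("power_user", True),
]

def completion_rate(rows):
    return sum(ok for _, ok in rows) / len(rows)

overall = completion_rate(events)
by_cohort = {
    cohort: completion_rate([r for r in events if r[0] == cohort])
    for cohort in {c for c, _ in events}
}

print(f"overall: {overall:.0%}")    # 70%: looks fine in the aggregate
for cohort, rate in by_cohort.items():
    print(f"{cohort}: {rate:.0%}")  # new users succeed only 25% of the time
```

Even the disaggregated numbers only say what happened; learning why new users fail still requires watching some of them.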

Maybe it is a pendulum swing. The crux of the matter is this: A few hours spent watching users can unquestionably improve many products, but when is it worthwhile? Quickly launching something that is “good enough” could be a winning strategy in a fast-paced world of disposable goods. The next version can add a feature and fix an egregious usability problem. Taking more time to build a more usable product could sacrifice a window of opportunity.

A new leadership style may emerge. Osnos concludes, “In the era of Xi Jinping, the public had proved, again, to be an unpredictable partner…‘The people elevated me to this position so that I’d listen to them and benefit them,’ he said... ‘But, in the face of all these opinions and comments, I had to learn to enjoy having my errors pointed out to me, but not to be swayed too much by that. Just because so-and-so says something, I’m not going to start weighing every cost and benefit. I’m not going to lose my appetite over it.’” 

On a more cheerful note, systematic assessment of user experience may be declining in some large software companies, but it is gaining attention in other companies that are devoting more resources to their online presence. For now, skills in understanding technology use have a strong market.


1. Evan Osnos, Born red. The New Yorker, Apr 6, 2015.

2. Ian Parker, The shape of things to come. The New Yorker, Feb 25, 2015.

3. This was a plot element in the recent drama Wolf Hall.

4. Reagan’s principal cabinet officers were from Harvard, Yale, and Princeton.

5. The first Mac had insufficient power and memory to do much more than display its cool UI. This is discussed here and was briefly alluded to in the film Jobs.

Thanks to Tom Erickson and Don Norman for clarifying the history and terminology, and to Gayna Williams and John King for suggestions.


Critical waves

Authors: Deborah Tatar
Posted: Mon, July 13, 2015 - 11:41:13

I have just been trying to figure out how to explain some of our design ideas inside the various ACM/CHI-ish communities in the upcoming paper-writing season. 

I have found that if I keep my head down and focus on the work of shaping ideas and things, enough comes along career-wise to keep body and soul together, but I’m not really all that good at getting my ideas published. (My second-most-cited paper, “The Three Paradigms of HCI” [1], was never properly published because, for structural reasons, it was routinely evaluated by people who disliked it. Warning: If you write a paper called “The Three Paradigms of X,” you can expect it to be reviewed by three people, each an esteemed proponent of one of the paradigms, and each equally likely to dislike the premise of the paper, since no claim is made about the primacy of any of them.) 

But the fact that I’ve made a career despite problems like this does not particularly help my students. In general, I love my students and would like to tell them Useful Things that lead to their Growth, Well-Being, and Success. 

So, in the course of positioning work, I turned to a slowly unfolding debate to figure out how to present some of the ideas that our group has been working with and to explain to myself why one of my most important papers, reflecting on what important topics design should address in the next 10 years, was recently rejected. This note reflects a kind of selfish approach, less concerned with what people actually have been saying and more concerned with reading how papers will be or are being read. In this post, I call that their shadow, but I might also call it connotation rather than denotation, or, coming from a different philosophical position, perlocutionary meaning.

I went back and re-read some of the Bardzells’ work on critical design (“What’s 'Critical' about Critical Design?” [2] “Analyzing Critical Designs” [3]) and then Pierce et al.’s response [4] in the 2015 CHI proceedings. And then I asked myself what purpose human-computer interaction research fulfills in the world—and who gets to decide. A key idea to me is to answer through research and thought, “what is important to design?”, “how do you know that you have done it?” and, increasingly, “how does/can design shape the world?” Many of these are ideas that I share with Steve Harrison. 

Critical design: reflexivity and beyond

The notion of critical design is very welcome and very familiar to me—my first degree was in English and American Literature and Language, and I love the idea of provoking design ideation through critical inquiry. The Bardzells develop the ideas of critical theory for HCI design in “What is Critical…”, a CHI paper that advocates critical theory as an important lens for both designer and consumer: reflection, perspective shifting, and theory-as-speculation as methods of understanding what is around us and taking design action, and supporting these with dialogical methodology. I or my students can take that paper, use it to propose a design, and say “This is what we were thinking. What do you think?” 

This is in line with Schön’s reflective practitioner, of course. More broadly, this has a strong family resemblance to the design tensions framework [5] that I proposed in 2007, which was also a method of getting people to think more deeply about the meaning of their design choices and what constitutes value in a design situation. The design tensions framework was concerned with how easy it is in HCI to overlook design success if that success is due to an accretion of small factors rather than a single big idea. But it is only one piece of the puzzle, and “What is Critical…” seems to me to provide another piece.

Canonical—and non-canonical—examples

Following on “What is Critical…”, a graduate student, Gabriele Ferri, working with the Bardzells, wrote a DIS 2014 paper that could be interpreted as playing a nice speculative game, taking critical design to the next level. On one hand, this paper addresses absolutely crucial questions for design researchers and for teachers: What constitutes a contribution in the field and how do we recognize one? One component consists of examples, particularly teaching examples, of critical design. This caught my attention, in part because in the “Three Paradigms” paper, we criticize HCI for a lack of key teaching examples, and note that teaching examples were one of the chief elements that Kuhn noted in identifying something as a paradigm. The absence of teaching examples in HCI makes teaching HCI seem like trying to land on Jupiter; we can point to an accretion of stuff that moves around the sun in an orderly way, but, hey, it’s all gas! Does a person really want to go to a planet where there is nothing to land on or grasp? So, I respect the fact that the paper took on the question of examples. Furthermore, I appreciate that part of the paper is saying, “Let’s openly talk about what is in and what is out.” It accepts that people’s careers rise and fall on judgments about their work (that, at least, is graspable!). This is very important for discourse in the field. 

On the other hand, there was something troubling about the Ferri et al. 2014 DIS paper. As I read it, I could hear unpleasant echoes of family holiday meals long ago at which my various relations would defend the scholarly canon of knowledge, scoff at “the so-called social sciences”—“They’re not sciences!” “Physics is the only real science!” “Math is the queen of disciplines”—and otherwise revel in their refinement and insight.

OK, the paper does not, strictly speaking, propose a canon. In fact, the canon that the paper does not propose (that is, a non-canon) is very intimidating. If you, newbie, claim to be doing critical design, are you going to get smacked down because your design isn’t in the non-canon? What if your design does not rise to the level that a person would call, for instance, transgression (transgression being part of the non-canon)? If you say that your design is in the non-canon, then does it sound less like research? 

The notion of canon that is not proposed by Ferri et al. is scary enough that, reading the paper, I do not know what to tell my students. Anxiety is not always a bad thing—maybe my students and I should be anxious, maybe we all should be a lot more anxious about the contributions of our field, maybe this is precisely what I helped call for in a different paper, “Making Epistemological Trouble” [6]—but, indeed, the shadow cast by this paper is quite anxiety provoking!

Trying to illuminate HCI using shadows from design 

So, with these musings, I came to the Pierce et al. paper from this past CHI that, I’m told, is widely interpreted as a response. The authors propose a bunch of problems, some deep, some shallow, with the approach proposed and explored by the Bardzells and their colleagues. Like me, Pierce et al. also appeared to think that the prospect of a canon, even the shadow of a canon, is scary. They point out that critical design is not the only way for design and HCI to interact. Pierce et al. are certainly correct that only a trickle of ideas about design has made it into HCI thinking. True, but somehow unfair; I have to point out that only a trickle of ideas about experimentation and modeling has made it into HCI, yet few people associated with the Third Paradigm trouble themselves to obtain anything more than a stereotypic, fifth-grade view of the Second Paradigm. The authors then go on to propose some possible futures. There are some nice ideas about openness. As with the earlier papers, there is a denotative content that is interesting and the sense of active minds at play.

But, like the Ferri and Bardzell paper it criticizes, the Pierce et al. paper also has shadows, and to my mind they are darker ones. It really does not take on the problem that the Bardzells and company address about how HCI should be open to design. The shadow of the Pierce et al. approach is that, while it promises freedom (“open up HCI to all kinds of ideas from design”), it threatens domination by the few, the arbiters of design. Is design in HCI nothing more than translation from design to HCI? If it is only translation, that does not sound like freedom to me. And then the injunction to “focus on tactics, not ontology” makes a major power move that seems to threaten to shut down discourse. Although the authors disclaim the desire to control speech about design themselves, they write that “the issue of the designer’s intention needs to be handled carefully.” What does that mean? Who gets to say it? 

Pierce et al. call for metadata about the designer’s intention to be taken into account in discussing ideas.

Really? What justifies that position?

Addressing the designer’s intention cannot find its justification in arguments about how design works, because when a designer creates an artifact it goes out into the world with whatever juju marketing and circumstance give it. Michael Graves was recently memorialized in Metropolis by someone who had worked on his low-end consumer product line for Target. His colleague praised Graves’ focus on usability. That’s nice. Nonetheless, I am perfectly free to tell you that I was relieved when my Graves-designed hand-mixer finally died. The balance was wrong and my kitchen walls were frequently spattered. The only reason I know anything about the designer’s thought is because Graves’ achievements lay towards the art side of design, and artists may sometimes be offered a little blurb and an occasional token of respect or, in this case, a tribute. I concede that Graves’ purpose was not to promote egg-based wall decoration in my home, but this is irrelevant to my experience as a user or even a client of the design work.

So this reverence for the designer’s intent is not, I think, a claim about design. Is it a claim about design research? Well, on the one hand, in any kind of research, we make reference to prior work, but, in HCI more than in most other fields of my concern and expertise, we are also highly selective. For goodness sake, we write ten-page papers! Why should the designer’s intention, which is, importantly, also the researcher’s intention, be so prioritized? The direct thought that Pierce et al. present is about respecting authorial voice. That is an important discussion to have, a discussion that I would recast as concerned with developing schools of thought as we move forward in design research in HCI. But let’s go back to the shadow and the question of what I tell my students.

I see something much less discussable and more draconian in the shadow of the Pierce et al. paper than in the Bardzells’ papers. I am sure that Pierce and company themselves would be horrified were I to tell my students, “Do not reconsider the meaning of (for example) the Drift Table because Gaver hath spoken and he hath named it Ludic!” That is a very dark shadow indeed, because unanswerable.

Years ago, Steve Harrison, Maribeth Back, and I wrote a paper called “It’s Just a Method!” [7]. The idea was that design methods are not important in-and-of themselves, but only because of what they enable the designer to perceive about the situation. In parallel, I would like my students to be able to use theory to inspire themselves and to describe their projects to others. But it’s hard to use many of these theories this way. We put our little 10-page barque to sea and theory threatens to fall on our heads as though we floated beneath a calving iceberg.

Meanwhile, none of this gets the field closer to the issues that so trouble me about the status of technology in shaping our views of ourselves as subjects. Students, back to shaping our little research boat! 


1. Harrison, S., Tatar, D., & Sengers, P. (2007, April). The three paradigms of HCI. In Alt. Chi. Session at the SIGCHI Conference on Human Factors in Computing Systems. San Jose, California, USA (pp. 1-18).

2. Bardzell, J. and Bardzell, S. (2013). What is "critical" about critical design? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, New York, NY, USA, 3297-3306.

3. Ferri, G., Bardzell, J., Bardzell, S., & Louraine, S. (2014, June). Analyzing critical designs: categories, distinctions, and canons of exemplars. In Proceedings of the 2014 conference on Designing interactive systems. (pp. 355-364). ACM.

4. Pierce, J., Sengers, P., Hirsch, T., Jenkins, T., Gaver, W., & DiSalvo, C. (2015, April). Expanding and Refining Design and Criticality in HCI. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 2083-2092). ACM.

5. Tatar, D. (2007). The design tensions framework. Human–Computer Interaction, 22(4), 413-451.

6. Harrison, S., Sengers, P., & Tatar, D. (2011). Making epistemological trouble: Third-paradigm HCI as successor science. Interacting with Computers, 23(5), 385-392.

7. Harrison, S., Back, M., & Tatar, D. (2006, June). It's Just a Method!: a pedagogical experiment in interdisciplinary design. In Proceedings of the 6th conference on Designing Interactive systems (pp. 261-270). ACM.

Posted on Mon, July 13, 2015 - 11:41:13

Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.

Design by the yard

Authors: Monica Granfield
Posted: Tue, June 30, 2015 - 10:34:26

Be a yardstick of quality. Some people aren't used to an environment where excellence is expected. — Steve Jobs 

How do you build a common language with which to communicate design goals for a product and measure whether the goals have been met? Organizations are moving faster than ever, leaving little time for long meetings, discussions, and explanations. Ideas, designs, and research findings need to be synthesized down to the salient points to sell an idea or discovery to the organization. So then, as user experience practitioners, how can we set expectations upfront, establishing a common set of design goals for all ideas, findings, and designs to work toward?

Establish a UX yardstick. A UX yardstick comprises about a dozen base words that encompass what you want a design to achieve. The more basic the words, the better. Words like “Clear” or “Procedural” are base words that are less likely to be misinterpreted. However, design buzzwords like “Simple” or “Intuitive” can mean different things to different people. John Maeda wrote an entire book on what simple means, The Laws of Simplicity. It turns out that describing or achieving simplicity is not so simple after all. Designers understand how the concepts behind these buzzwords surface in a design and what it takes to achieve them. However, how we define simple and how a CEO might define simple may be worlds apart, driving the need to use the most raw and basic descriptive words possible to set the design goals and establish a common language.

The yardstick will not only establish a common language across the organization, it will also serve as a tool with which to set and measure design goals. The words appear on the yardstick, left to right, from the easiest to achieve to the most challenging. The order of the words helps to determine the order in which design goals are achieved, and weights the level of design effort. A word that appears at the beginning of the yardstick, such as “Clean,” might be much easier to achieve than a word further down the yardstick, such as “Effortless.” Using the words on the yardstick will clearly set design expectations in a manner that is commonly understood. To build a common understanding, the words should align with the company’s product goals and missions, and their meaning and order of appearance should be understood and approved by stakeholders.

Milestones can be set on the yardstick and used as a mechanism to explain a grouping of words. Here I use the paradigm of metals—bronze, silver, gold, and platinum—to explain the level of design effort and the goals that design effort encompasses. So, for example, the design goal for "bronze" design coverage is that the design is clean and useful, and the design goal for "silver" coverage requires that the design is clean, useful, learnable, straightforward, and effective. Metrics can also be placed on the goals to further explain and measure the success of each design goal. The words build on one another, establishing more goals to reach as you move across the yardstick, grounding and guiding your designs, avoiding churn and disagreement later down the line. Returning to the goal words throughout the product development process will assist in determining if the design approach is meeting expectations based on facts, rather than consensus or opinion. Keeping designs focused and in alignment with company and product goals will result in more successful users and products. 
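One way to see why the milestone groupings compose so cleanly is to treat the yardstick as an ordered list and each milestone as a cumulative prefix of it. The sketch below is purely illustrative: the bronze and silver groupings follow the text above, but the gold and platinum cutoffs, and the words after “effective,” are invented placeholders.

```python
# A hypothetical sketch of a UX yardstick as a data structure: goal words
# ordered from easiest to hardest to achieve.
YARDSTICK = ["clean", "useful", "learnable", "straightforward",
             "effective", "engaging", "effortless"]

# Each milestone is a cutoff index; its goals are every word up to that point.
# Bronze and silver match the groupings in the text; gold and platinum are
# placeholder cutoffs for illustration.
MILESTONES = {"bronze": 2, "silver": 5, "gold": 6, "platinum": 7}

def goals_for(milestone: str) -> list[str]:
    """Return the cumulative design goals a milestone must satisfy."""
    return YARDSTICK[:MILESTONES[milestone]]

print(goals_for("bronze"))  # ['clean', 'useful']
print(goals_for("silver"))  # ['clean', 'useful', 'learnable', 'straightforward', 'effective']
```

Because each milestone is a prefix of the same ordering, the goals build on one another automatically: meeting silver necessarily means meeting bronze first.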

When it comes to project scoping or resourcing, the yardstick can be useful too. Not all features need, or can receive, the same level of design attention in each release. The order of the words on the yardstick assists in weighing the level of design effort needed. Some features may be small, while others may have high impact on the customer base or the market space. A small upgrade feature may only need to encompass the first three goal words on the yardstick, while a new, market-competitive feature may need to reach all the way to the end of the yardstick, encompassing all of the words as the goal of the design—a full yard of design. A feature that needs a full yard of design will need more time and resources than a feature that only needs half a yard. The UX yardstick can set the pace for the time and resources needed to achieve a commonly understood design goal.

Other inspirations, such as the company's business or product goals, can be placed on the yardstick for reference and guidance. I have placed these in the upper right corner of the yardstick. A quote or a motto for a design organization on the yardstick can also serve to inspire and guide the design intentions for the product.  

Creating a UX yardstick can assist in framing and measuring the goals and expectations of a design, helping to determine if a feature needs a half or full yard of design and what it means to achieve that level of design. The design goals on the yardstick should remain fluid and change, keeping pace with any business and product changes, yet always serving as the guide, beacon, and inspiration on design direction. 

Posted on Tue, June 30, 2015 - 10:34:26

Monica Granfield

Monica Granfield is a user experience designer at Go Design LLC.

Customers vs. users

Authors: Jonathan Grudin
Posted: Wed, June 03, 2015 - 10:41:03

Perspectives on handwriting and digital ink in schools.

Going through customers to reach users is a challenge as old as HCI. When computers cost a fortune, acquisition decisions weren’t made by hands-on users. Those responsible believed that they knew what users needed. They were often wrong. Making life worse for designers, the marketers who spoke with customers felt they knew best what users needed and often blocked access to customers and users. Developers were two unreliable jumps from use.

Enterprise settings today haven’t changed much. Marketing and acquisition remain overconfident about their understanding of user needs. However, users now have more options. They can request inexpensive software or customize what is provided. Employees who experience decent consumer software are bolder about communicating their needs. If not listened to, they bring in their own tools for some tasks.

The consumer market has one fewer hurdle—customers are the users. A product organization still deals with distributors who may be overly optimistic about their knowledge of consumers, but this self-corrects—a product sells or it doesn’t. A useful product ignored by distributors may fail, but a poor consumer product isn’t forced on users, as often happens in enterprises. 

Although people, including me, have long praised efforts to get direct feedback from hands-on potential “end-users,” many teams settle for A/B testing or less. In the critical endeavor of educating the world’s billion school-age children, an unusually clear illustration of the challenge has appeared. Resistance to an advantageous change has different sources, including cost, but even where cost is not an issue, a chasm separates customers and users, delaying what seems desirable and probably inevitable.

“The pen is mightier than the keyboard.”

This is the title of an elegant 2014 paper published in Psychological Science by Pamela Mueller of Princeton and Daniel Oppenheimer of UCLA [1]. Three experiments compared the effects of taking lecture notes with a pen or a keyboard. In the first, subsequent memory for factual information was equal, but students taking notes by hand had significantly more recall of conceptual information. Keyboard users had taken more notes, typing large chunks of verbatim lecture text. Those writing by hand couldn’t keep up, so they summarized; this processing appears to aid recollection.

In the second experiment, students were told that verbatim text is not as useful as summaries. Despite this guidance, the results were the same: lower performance and more verbatim text for keyboard users. In the third experiment, students could study their notes prior to testing: Although the typists had more notes, they again did worse.

Sharon Oviatt, author of The Design of Future Educational Interfaces (Routledge, 2013), conducted a wide range of rigorous comparisons in educational settings. She looked at pen and paper, the use of styluses with a tablet, and handwriting with a normal pen and special paper that enables digital recording. The results were dramatic. Students with pen interfaces did significantly better in hypothesis-generation and inferencing tasks. They solved problems better when they had a pen to diagram or to jot down thoughts.

Oviatt and others have observed that digital pens as a keyboard supplement help good students, but somewhat unexpectedly they can dramatically “level the playing field.” Students who think visually, who compensate for shorter memory spans by quickly jotting down notes, or who benefit from rapid trial-and-error, engage more and perform better. How this technology, with others, can save students from “falling through the cracks” could fill another essay.

Mueller and Oviatt discussed their studies in keynote presentations at WIPTTE 2015, well worth 90 minutes. The authors primarily make a case for handwriting over keyboards for a range of tasks. More surprising at first glance, Oviatt found digital pens outperforming pen and paper. In a Venn diagram task, digital pens led to more sketches and more correct diagrams than pen and paper. Digital ink supports rapid trial and error due to the ease of erasing. Page size, page format (blank, lined, grid), ink colors, and line thicknesses are easily varied, engaging students and supporting task activities. Handwriting recognition enables students to search written and typed notes together. Digital notes are readily shared with collaborators and teachers.

Students can’t use keyboards to write complex algebraic equations (or even practice long division). A keyboard and mouse aren’t great for drawing the layers of a leaf or light going through lenses, placing geographic landmarks on a map, creating detailed historical timelines, or drawing illustrations for a story. SBAC annual state assessments, it was announced, will require digital pens in 2017.

Who would resist including a digital pen with computers for students, the key users of education?


Last week, a journalist friend mentioned that although he didn’t use a digital pen, his daughter borrows his tablet and uses its pen all the time. So did my daughter before she got her own tablet, which she insisted have a good pen.

I’m never without a traditional pen. I take notes, mark up printed drafts, make sketches, and compile weekly grocery-shopping lists. The journalist takes interview notes with a pencil, which holds up better when paper gets wet. We rarely use our digital pens. I use a digital signature for letters of recommendation; that’s about it.

Why would children but not their parents use digital pens? Well, few adults write as many equations as the average child, but it may be more relevant that unlike students, we rarely share handwritten work with others. We were taught that it’s unacceptable or unprofessional to turn in handwritten work—essays are to be typed up, illustrations recreated with a graphics package. Meeting minutes that are taken by hand are retyped before distribution. A whiteboard might be photographed after a meeting, but the notes are then typed. Colleagues who comment on my drafts may initially write on a paper printout, but they then typically type it into comment fields—additional effort for them, easier to read but more difficult for me to contextualize than ink-in-place would be. We consider handwriting second-class and let it deteriorate. “I can’t even read my own handwriting half the time anymore,” said a colleague.

The customers—superintendents or administrators making the purchasing decisions—don’t use a digital pen. Digital ink enthusiasts tell them that they would be more efficient if they did, but these customers are successful professionals, happy with how they work and not planning to drop everything to buy a new device and learn to use a digital pen. “Are you saying I’m inefficient?” Such exhortations can be counterproductive.

Instead, remind such customers that their needs differ from their users’ needs. K-12 is different from most professions: Handwriting is part of the final product for both students and teachers. Students don’t retype class notes, which include equations and sketches. They don’t resort to professional graphics programs. Teachers mark papers by hand; it is more personal and more efficient. Lecturing to an adult audience, I can count on them to follow my slides, but teachers guide student attention by underlining, circling, and drawing arrows.

These customers are unaware that they don’t fully appreciate the world of the users: teachers and students. A superintendent thinks, “Digital pens are a frill, an expense, they’ll get lost or broken. Students should improve their keyboard skills, which is better professional training anyway.” But students can’t type electron dot diagrams or feudal hierarchy structures.

The future seems clear, but these customers are not always wrong about the present. When a student uses a computer once a week or in one class a day, digital ink has less value. Most class notes will be on paper in a binder or folder, so digital notes will be dispersed unless printed. Students have no personal responsibility for the pen. This changes when a student carries a tablet everywhere: to classes, home, on field trips, to work on the bus to an athletic competition. Two years ago I described forces that were aligning behind 1:1 device:student deployments, which are now spreading in public schools. Several new low-cost tablets come with good digital pens. Prices will drop further if pens come to be considered essential.

Who will win?

Steve Jobs railed against digital pens, but he also opposed color displays until he embraced them. Apple is now patenting digital ink technology, feeding rumors about a better pen than the capacitive finger-on-a-stick iPad stylus. Google education evangelists described handwriting as obsolete, but recently Google announced enhanced handwriting recognition. Microsoft stopped most digital ink work when it embraced touch, but is now strongly committed to improving it.

An adverse trend: A comfortable digital pen is too wide to garage in ultra-thin tablets. On the other hand, vendors may realize that 80% of the world’s population does not use the Roman alphabet and finds keyboard writing very inconvenient. In China, tablets are available with digital pens of higher resolution than can be purchased elsewhere. Cursive writing and calligraphy may not return to fashion, but digital pens are likely to. 1975–2025 may become known as “the typing era,” a strange interlude forced on us by technology limitations.

Acceptance may be slow. 1:1 deployments will not be the norm for a few more years. Aging customers who speak on behalf of middle-aged teachers and young students rarely sit through classes and may not learn new tricks. In an era of tight budgets, many don’t grasp the implications of the downward trajectory in infrastructure and technology costs and the upward trajectory in pedagogical approaches that can take advantage of technology that, at last, has the capability and versatility to help.

The greatest challenge

Students have absorbed the message: Professionalism requires typing. Long essays must be typed so overworked teachers can read them more quickly. A job résumé should look good. No one points out that for rapid exchanges, handwriting is often faster and more effective—and the world is moving to brief, targeted communication.

Oviatt asked students whether they would prefer keyboarding or writing for an exam. They chose the keyboard, even when they got significantly better outcomes with a pen! The customer has gained mind control over the users. Education needs a reeducation program.

In concluding, let’s pull back to see this education example in the larger framework of the conflict between customers and users. Overall, much is improving. Users have more control over purchases and customization. Users have more access to information to guide their choices. They have more ways to express dissatisfaction. Customers, too, have paths to greater understanding of the users on whose behalf they act. They do not always succeed in finding those paths, as we have seen, so vigilance is to be maintained.


1. This play on "the pen is mightier than the sword” was independently used almost a decade ago by computer graphics pioneer Andy van Dam, for talks lauding the potential of digital pens or styluses.

Thanks to the many teachers, students, and administrators who have shared their experiences, observations, and classrooms.

Posted on Wed, June 03, 2015 - 10:41:03

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

Robots: Can’t live with them, can’t live without them

Authors: Aaron Marcus
Posted: Mon, May 04, 2015 - 10:05:20

Robots, androids/gynoids (female robots), and AI agents are all the rage these days...or all the dread, depending on your views. (In much of the discussion below, I shall refer to them collectively as robots for simplicity, since any disembodied AI agent with sufficient access to the world’s technology could arrange for human or non-human forms to represent itself.)

It seems one can’t avoid seeing a news article, editorial, or popular opinion about them in social media every day, or a new movie about them, now as primary characters, every week.

A recent article reports that China may have the most factory robots in the world by 2017 [1]. Another article reports that humanoid customer-service robots are starting service in Japan [2]. Still another reports that robots will serve next to human “co-workers” in factories [3], perhaps to reassure human workers that they will not be completely replaced. Humanoid robots like the Japanese Honda Asimo have captured people’s imagination worldwide.

Recent movies have focused on robots, androids/gynoids, and non-embodied artificial intelligence agents as well, turning them into lead characters. We’re a long way from Maria the “maschinenmensch” of Metropolis and Robby the Robot in Forbidden Planet to films such as Her, Chappie, and more recently Ex Machina and Avengers: Age of Ultron.

Movies have been “humanizing” and “cutifying” robots for years, ever since George Lucas made us laugh at/with the Laurel and Hardy characters of R2-D2 and C-3PO in the Star Wars movies of the late 1970s. In 2008’s WALL-E, this approach was taken to new heights of “adorableness”...perhaps technology is using Hollywood unconsciously to soften us up, making us think of robots as friendly and non-threatening, allowing us to forget more ominous representatives from the Terminator series, the Total Recall movies, I, Robot, or Avengers: Age of Ultron, perhaps preparing us for the coming wave of robots everywhere.

Although much of moviedom has focused recently on friendly robots, and some news focuses on the convenient use of drones to deliver packages to our homes, other more sinister signs have emerged. A recent National Public Radio (NPR) program focused on ethical issues. These included the rise of “killer robots” being developed by militaries in several countries. Should a killer robot incorporated into our armed forces be programmed to have compassion for a young child who has been given a lethal weapon by a female family member, and to hesitate before firing, as we saw in American Sniper?

That program also discussed the rise and use of sex robots in Japan, currently a “harmless” entertainment for (mostly) men. It seems there is no end to men’s ability to treat women as objects...and objects as women. This seems perhaps a less harmful way to make “comfort women” available to human (male) armed forces. The discussion of sex robots did recognize the potential for encouraging a-social or anti-social behavior in people. Some argue that the possibility of sex/emotion robots for those unable to have “normal” relations with people might be helpful, but the discussants debated the value of offering child-aged sex robots to pedophiles. Ethical review, discussions, and potentially new laws seem in order. Such strategies are discussed in [4].

There seems likely to be growing interest in, and need for, human-centered, sophisticated solutions to human-robot interaction (HRI) in all phases of robot deployment. This represents a new age of HCI, in which the “computer” has assumed human form and/or seems to exhibit human intelligence and personality.

Another public radio discussion recently reminded me of the work of the philosopher Martin Heidegger; it was a discussion about being, consciousness, existence, and thought. As with the earlier radio program, new issues seemed to pop up readily in my mind, and no doubt in others’. Some of these topics have probably been discussed elsewhere in the world, and resources/discussions are no doubt available on the Internet. I have not had time to pursue all of these:

  • Are most robots shown in Hollywood movies a product of Western culture? Does Asimo exhibit characteristics of Japanese culture? Will we see the emergence of cross-cultural similarities and differences of robots, androids, gynoids, and AI agents? What would a “Chinese” robot look like, speak like, and behave like? What would an “Indian” robot be like?

  • Should humans be allowed to marry robots? Should robots be allowed to marry humans? Should robots be allowed to marry each other? 

    Definitions of marriage are being hotly debated these days. If robots are sufficiently “intelligent” to be almost indistinguishable from humans, and humans fall in love with them (as depicted in Her), or vice-versa, ought we not to consider soon what the legal ramifications are for state and federal laws? At the end of Her, Samantha, the AI agent, abandons her human special friend and runs off with other AI agents because they are more intelligent and fun to play with. Is this one of several likely future scenarios?

  • How many spouses should a robot be allowed to have? Although potentates of the past had many, many people today might argue that it is hard enough to manage the relationship with just one. However, AI agents seem much more capable. In Her, the AI agent Samantha admits to having “intimate,” “special” emotional relations (and at least attempting a form of physical relationship using a sex surrogate) with 641 people (male? female? both?) other than the human (male) lead character of the film, Theodore Twombly, and “she” talks to more than 30,000 others. Might one advanced “female” AI marry 642 human beings? As for “mass marriages” of a group of people to another, I think the Catholic Church, in approximately the 16th century, introduced the concept of Christians (or Christian nuns) being married to Jesus, or to the Church, which I believe still survives as a concept within the religion.

  • Can there be such a thing as a Jewish robot? Can a robot convert to some religion? Why or why not?

  • Won’t all the philosophers and thinkers of the past (Plato, Aristotle, Machiavelli, Wittgenstein, and Arendt, to name a few) have to have their concepts, principles, and conclusions reconsidered in light of robots asking the very same questions? Listening to a debate about the meaning of Heidegger’s terms caused me to think that Buber’s “I-Thou” concepts, Sartre’s existentialism, and Kantian moral/behavioral theory based on “categorical imperatives” may all have to be reconsidered in light of non-human intelligences/actors in our society.

  • Can robots inherit our property and other assets? What happens to our human legacies with respect to property and other assets? Can robots be inheritors of trusts and family assets? If corporations in the U.S. are now like persons, will robots be far behind? Can/should they vote? What rights do they have?

  • Where are the senior robots? Most of the human-clad ones seem to look like the young, beautiful ladies on display in Ex Machina, which seems yet another 15- to 35-year-old male techie’s a-social, somewhat misogynist fantasy. Do robot women really need to wear 4-inch heels to be able to make eye-to-eye contact with their male overlords?

  • Are robots our “mind-children” and destined to replace us? Some are beginning to take this approach today, as indicated in numerous position papers, editorials, and feature articles. Perhaps we should, like mature parents, be grateful for our progeny and hope that they will remember and respect their elders. I am reminded of an Egyptian papyrus from millennia ago that complained about the younger generation not giving enough respect to their seniors. The spirit of the late comedian Rodney Dangerfield’s “I don’t get no respect!” may be transferred to the entire human race.

  • What exactly are the basic principles of HRI? Have they already been established in HCI? In general human-human relations? Which should we be aware of and/or worry about?

We are in the middle of experiencing a monumental change in technology and human thought, communication, and interaction, akin in significance to actually encountering alien beings from other planets (which does not yet seem to have occurred in any widespread form, setting aside the few representatives of the Men in Black series), or to the reality of the split-brain experiments first carried out in 1961, which exposed the possibility of more than one “person” residing inside our skulls.

Stay tuned for more challenges, fun, and games, as we enter the Robot Reality.


Portions of this blog are based on a chapter about robots in my forthcoming book HCI and User-Experience Design: Fast Forward to the Past, Present, and Future, Springer Verlag/London, September 2015, which in turn is based on my “Fast Forward” column that appeared in Interactions during 2002-2007.


1. Aeppel, Timothy (2015). “Why China May Have the Most Factory Robots in the World by 2017.” Wall Street Journal, 1 April 2015, p. D1.

2. Hongo, Jun (2015). “Robotic Customer Service? In This Japanese Store, That’s the Point.” Wall Street Journal, 16 April 2015.

3. Hagerty, James R. (2015). “New Robots Designed to Be More Agile and Work Next to Humans: ABB introduces the YuMi robot at a trade fair in Germany.” Wall Street Journal, 13 April 2015.

4. Blum, Gabriella, and Wittes, Benjamin (2015). “New Threats Need New Laws.” Wall Street Journal, 18 April 2015, p. C3.

Posted: Mon, May 04, 2015 - 10:05:20

Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.


A matter of semantics…

Authors: Richard Anderson
Posted: Tue, April 28, 2015 - 12:07:54

In 2005, I wrote a blog post entitled, “Is ‘user’ the best word?” followed a year later by “Words (and definitions) matter; however…” The debate about the words we use in our field and their meaning has continued since that time, with many of the old arguments being resurrected. For example, regarding the beleaguered term user:

  • Jack Dorsey dropped its use at Square, arguing that user is a rather passive word and “a massive abstraction away from real problems people feel on a daily basis. No one wants to be thought of as a ‘user.’”

  • Margaret Gould Stewart revealed that Facebook sort of banished the term, saying it is “kind of arrogant to think the only reason people exist is to use what you built. They actually have lives, like, outside the experience they have using your product.”

  • Natalie Nixon argued “the next time you begin to ask about your users, stop. Reorient and remind yourself that you are solving problems for people. That subtle shift in language will do wonders for your sense making skills and build a different sensitivity to the challenge at hand.”

  • Eric Baumer et al., in Interactions, argued that studying non-users is as important as studying users and stated that “only two professions refer to their clients as users: designers and drug dealers.”

The preferred alternatives, as a decade ago, are usually person and people or human(s). Baumer et al. argued for consideration of “potentially more descriptive terms such as fan, player, client, audience, patient, customer, employee, hacker, prosumer, conscript, administrator, and so on.” But even such alternatives might have shortcomings. For example, regarding the word customer (also preferred by Dorsey):

I still can’t imagine the term user going away anytime soon. Indeed, some have defended it, as reflected in the following tweets:

Nevertheless, there has been an increase in the volume of objections to the term, reflecting, I think, a recognition of the need to think bigger—to consider and design experiences beyond the digital in order to design the best possible digital experience.

I address such issues beginning on the first day of teaching General Assembly’s UX Design Immersive course. Students need to know that the terms we use in our field matter and, though not spoken of much above, are sometimes defined differently by different people. Those people have included two of my instructor colleagues, one of whom called all paper prototype testing “Wizard of Oz” testing and the other of whom called it “walkthroughs.” Say what?!? In my view, neither of them is correct.

Some of the other areas of debate regarding terms we use include what UX design means and how it differs from UI design (see, for example, “The experience lingo”), what an MVP is (see, for example, “The MVP is NOT about the product”), and whether it is even an adequate concept (see, for example, “Minimum Compelling Product”).

Such debates seem destined to never end, which might possibly be a good thing. As Jared Spool recently tweeted:


Richard Anderson

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.


Digital divides considered harmless

Authors: Jonathan Grudin
Posted: Tue, April 21, 2015 - 8:00:16

The problem with early technology is that you get stuck with all this legacy sh*t.
– Director of Technology at a leading private high school

The impermanence of elevation differentials in seismically active terrain

Educational technologists have expressed concern about disadvantaged students falling farther behind: haves versus have-nots. Education faces challenges, but I assert that digital divides are not the big ones. Paradoxical as it may sound, divides can at times offset the advantages enjoyed by wealthier schools. The uninterrupted increase in the capability and decrease in the price of technology can be a challenge for early adopters. This is especially evident in education now, because primary and secondary public schools have reached a tipping point, but let’s first consider other domains in which apparent advantages proved to be short-lived or illusory.

Before Germany was reunified in 1990, wealthy West Germany had a strong technology infrastructure. It then invested two trillion euros in the former East Germany. Not all of the expenditures were wisely planned, but a strong digital infrastructure was. Soon after, a friend in the West complained that the East had better computational capability than he and his colleagues, who were saddled with older infrastructure and systems.

The exponential rise in capability and decline in cost often rewards late adopters. Who among us, contemplating a discretionary hardware upgrade, hasn’t wondered how much more we could get by waiting a few months? Early adopters spend money for the privilege of debugging a new technology and working out best practices through costly trial-and-error, after which the price drops. The pioneers establish roles and develop work habits that are shaped for systems that are soon surpassed in capability by offerings that may benefit from different approaches. The “have-nots” of yesterday who start today can adopt practices tuned to better, less expensive systems. They benefit from knowing what has and hasn’t worked.

In her 1979 book In the Name of Efficiency, Joan Greenbaum revealed that executives marketing early mainframe computers could not document productivity gains from the use of extremely expensive systems. Businesses paid millions of dollars for a computer that had roughly the computational power of your smartphone. Mainframe vendors were selling prestige: Through the mystique around technology, customers who bought computers impressed their customers, an indirect benefit as long as no one realized that the emperor wore no clothes.

This phenomenon was reflected in Nobel prize-winning economist Robert Solow’s comment in the mid-1980s: “You can see the computer age everywhere but in the productivity statistics.” It was labeled “the productivity paradox.” Two decades of computer use had delivered no apparent economic benefit.

Those who followed did benefit: purchasers of systems that arrived in the late 1980s and 1990s. The systems were much less expensive, and software designers had learned from the ordeals of mainframe users. Productivity gains were measured.

Optimistic technologists define the digital high ground. Their colleagues in marketing build dazzling if not always tangible castles, attracting those who can afford the price tag. Consider home automation. Fifteen years ago, wealthy homeowners built houses around broadband or tore up floors and walls to install it. This set them apart, but how great or long-lasting was their advantage? I soon heard groans about maintenance costs. Only after wireless provided the rest of us with equivalent capability at a small fraction of the cost did services materialize to make access valuable. The early explorers had to decide how long to maintain their legacy systems.

This introduces a second challenge: An aging explorer may not realize when the rapid movement of underlying tectonic plates shifts yesterday’s high ground to tomorrow’s low ground. Late entrants can steal a march on early adopters who are set in their ways.

To have and have not

Digital divides melt away. “Have-nots” become “haves,” and by then more stuff is worth having. In the 1970s, Xerox PARC built personal computers with tremendous capability, 10 years ahead of everyone else. The allure of working with them attracted researchers from minicomputer-oriented labs and elsewhere. It was said that for many years, no researcher left PARC voluntarily. In the 1980s, another research opportunity for the wealthy arose: LISP machines built by Symbolics, LMI, and Xerox, expensive computers with hardware optimized for the programming language favored for artificial intelligence.

There was a clear divide in the research community. And then, in the early 1990s, Moore’s law leveled it. High-volume chip producers Intel and Motorola outpaced low-volume hardware shops. I remember the shock when it was announced that Common LISP ran faster on a Mac than on a LISP machine. LISP machines were doomed. Less predictably, interest in LISP (and AI) declined, perhaps due to a loss of the mystique that masked a failure to deliver measurable benefit. PARC researchers had made landmark contributions, albeit not many that Xerox profited from, but as its researchers shifted to commercial platforms, PARC’s edge faded. A digital divide evaporated.

The same phenomenon unfolded more broadly. Through the 90s, leading industry and university research labs could afford more disk storage, networking, and high-end machines. It made a difference. A decade later, good networking was widely accessible, storage costs plummeted, and someone working at home or in a dorm room with a new moderately priced high-end machine had as good an environment as many an elite lab researcher with a three-year-old machine. Money still enables researchers to explore exotic hardware domains, but for many pursuits, someone with modest resources is not disadvantaged.

The big enchilada

Discussions of haves and have-nots often focus on emerging countries: the challenges of getting power, IT support, and networking. I thought harvested solar energy would solve the problem sooner than it has. Mobile phones arrived first. In many emerging regions, phone access is surprisingly close to universal. Soon all phones will be smart. If mobile, cloud-based computing is the future, those in emerging markets who focus now on exploiting mobile technology could outpace us, just as Germans in the East leapfrogged many in the West.

Promoting accessibility to useful technology is undeniably a good thing. But our optimism about our wonderful inventions can exaggerate our estimates of the harm done to those who don’t rush in. Some of the same people who lament digital divides turn around and decry the harmful effects of over-absorption in the technology around them.

Education: And the first one now will later be last…

For over forty years I didn’t think computers were a great investment for primary or secondary schools, even though my destiny changed in high school when I taught myself to program on a nearby college’s computer that went unused on weekends. It sat in a glass-walled air-conditioned room and had far less computational power than a hand-calculator did twenty years later. I first programmed it to discover twin primes and deal random bridge hands. It was fun, but I saw no vocational path or educational value—the college students weren’t using it, my classmates had other concerns, and maintaining one cost more than a teacher’s salary. Pedagogy was the top priority in K-12. I may have sensed even then that ongoing professional development for teachers was second. As the decades passed and costs declined, having a computer or two around for students to explore seemed fine, but a digital divide in education didn’t seem a threat. Some wealthy schools struggled with expensive, low-capability technology, subsidizing the collective effort to figure out how to make good use of it.

But the times they are a-changing. The shift was evident two years ago in new pedagogical approaches, often called 21st-century skills and tied to the Common Core State Standards: critical thinking, problem-solving, communication and collaboration skills, adaptive learning, project-based learning, and adaptive online-only assessment. I’ve seen this fresh, intelligent reorientation in my daughters’ classes. Students and teachers face a learning curve. Parents can’t help much, not having experienced anything like it. But if successful, the results will be impressive.

Software will be useful in supporting this. Sales of high-functionality devices to schools are rising rapidly. The price of a good tablet has halved in two years and will continue to drop. Public K-12 schools are joining the private schools that are 1:1 (“one to one,” each student carries a device everywhere). When not poorly implemented, 1:1 is transformational. I have attended public school classes with beneficial 1:1 Kindle, iPad, Chromebook, and laptop PC deployments.

Students in the past used school computers when and how a teacher dictated. A student issued a device decides when and how it is used, in negotiation with teachers and parents. The difference in engagement can be remarkable. Third and fourth graders are strikingly adept, middle school seems a sweet spot, and high school students are often out in front of their teachers.

Hardware and software value propositions change dramatically with a 1:1 deployment. When a student can take notes in all classes, at home, and on field trips, good note-taking software is invaluable. A good digital pen shifts from being an easily lost curiosity, when used once a day in a computer lab, to a tool used throughout the day to take notes, write equations, sketch diagrams and timelines, adorn essays with artwork, label maps, and unleash creativity.

Dramatic changes in pedagogy are enabled. Time is saved and collaboration opportunities are created when all work is digital. Teachers who formerly saw a student’s work product now see the work process.

Rapidly dropping prices and recognition of real benefits are bringing 1:1 to public schools in many countries, erasing a digital divide that separated them from elite private schools that could previously afford to issue every student a device. Of course, it requires preparation: Professional development for teachers, addressing new pedagogical approaches and technology, and wireless infrastructure for schools, which is expensive although also declining in cost. Most teachers today have experience with technology. The shift to student responsibility and initiative helps—tech-savvy fourth graders need little assistance when they reach middle school. In fact, students often help teachers get going.

As 1:1 spreads, pioneers risk being left behind. Private-school students are often from computer-using homes where parents have strong preferences for Macs, PCs, or Android. As a result, many private schools adopted a bring-your-own-device (BYOD) policy. Teachers who face classes with myriad operating systems, display sizes, and browser choices are limited in assigning apps, giving advice, and troubleshooting. They are driven to lowest-common-denominator web-based approaches. In the past, students nevertheless gained experience with technology. A digital divide existed, though the benefits of technology use were not always evident.

Public schools can’t require parents to buy devices. Fewer public school students arrive with strong preferences and almost all are delighted to be given a device. School districts get discounts for quantity purchases of one device, and teachers can do much more with the resulting uniform environment. When preparing to go 1:1, one of the largest U.S. school districts had classes try over a dozen different devices in structured tasks. In summarizing what they learned, the Director of Innovative Learning began, “One of the things that our teachers said, over and over again, is, ‘Don’t give students a different device than you give us.’ That was an Aha! moment for us” [1].

Many public schools that carefully research their options choose this path. The future may be device diversity, but teachers struggle with non-uniformity. They can’t assume students have a good digital pen, which is far more useful when available for sketching, taking notes, writing equations, annotating maps, and so on in every class, at home, and on field trips. Bring Your Own is rarely the way to start learning anything. Instruction in automobile mechanics began with everyone looking at the same car to learn the basic parts and their functions. Allegory is taught by having a class read Animal Farm together, and later encouraging independent reading. Digital technology is no different.

I have visited many public and private 1:1 schools. Some private schools that are BYOD do not realize how much the affordances have shifted. Back when technology capabilities were limited, their students were on the advantaged side of a digital divide. Today, technology can make a bigger difference, and I have seen public school students who benefit more than nearby private school students.


Serious economic inequalities affect healthcare, housing, and nutrition. Before adding technology to the list, consider its unique nature: a steadily flowing fountain of highly promoted but untested novelty that takes time to mature as prices drop. We nod at the concept of the “technology hype cycle.” We are not surprised by productivity paradoxes. Yet some books that belabor technology for its shortcomings and frivolous distractions also decry digital divides: Questionable technologies are not available to everyone! The news is not so bad. Courtesy of Moore’s law and those who are improving technology, divides are being erased faster than they are being created.


1. Ryan Imbriale. The smart way to roll out 1-to-1 in a large district. Webinar presented March 11, 2015.

Thanks to Steve Sawyer, John King, and John Newsom for observations, comments, and suggestions.


Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Six lessons for service blueprinting

Authors: Lauren Chapman Ruiz
Posted: Mon, April 20, 2015 - 8:38:39

Learning about customer experience, and how to leverage the service blueprint as a research tool, is essential for researchers and designers, as this will help them stay ahead in this rapidly changing world.

This March, I was lucky enough to facilitate a Thinkshop with 25 designers attending the AIGA Y Design Conference.

We left with some interesting conclusions around how to build and use service blueprints as research tools.

Lessons that emerged from the Thinkshop:

  1. Know when you need a journey map versus a service blueprint. A journey map illustrates the customer experience and provides insight into how a customer typically experiences a service in different contexts over time. It focuses on the primary actions the customer is taking, along with what they feel and think as they move through your service. Journey maps are good for revealing a customer's experience for strategic, visionary decisions, while blueprints are good for reworking a specific process once a strategic decision has been made.

  2. Before you start problem solving, you need to have a clear picture of your current state. It's great to come up with all kinds of exciting ideas, but without clearly knowing the pain points and the service strengths, you risk creating something that falls flat, or could worsen your service experience.

  3. Understanding and mapping the experience of a service employee is just as critical as the customer experience. A blueprint falls flat if it doesn’t include a clear picture of the service provider, along with the backstage people and processes. It’s great to research your customer, but just as important is researching the service provider, who has great influence on the experience of your customer.

  4. The blueprint helps us identify opportunities. Large time gaps between customer actions are always opportunities, moving activities up or down lanes can create big opportunities for innovation, and removing extraneous steps or props can simplify the service experience. Every touchpoint—people, places, props, partners, and processes—has the opportunity to provide value to the customer (and service provider!).

  5. Value isn’t always measurable. While it’s important to identify metrics along your service blueprint—places where you can track for success—there is a level of value that is critical but may not be trackable, and that’s okay. For example, some people go to coffee shops solely for the reason that the shop felt like a place they belonged. This warm feeling of belonging can’t be tracked, but it’s why people keep coming back. This value isn’t always measurable, but it’s often at the core of why customers stick with a service. Clearly articulating value exchanges, and the opportunity of new value, helps everyone see why each touchpoint is important, even if it isn’t measurable.

  6. Just because it can be a service doesn’t mean it should. This one is simple—not everything needs to become a service.

For those of you designing for a service-oriented company, what lessons have you learned? What are critical skills and tools you use each day?

Posted in: on Mon, April 20, 2015 - 8:38:39

Lauren Chapman Ruiz

Lauren Chapman Ruiz is a Senior Interaction Designer at Cooper in San Francisco, CA, and is an adjunct faculty member at CCA.


The Facebook “emotion” study: A design perspective would change the conversation

Authors: Deborah Tatar
Posted: Sun, April 19, 2015 - 12:58:05

Jeff Hancock from Cornell gave the opening plenary at the 2015 CSCW and Social Computing conference in Vancouver last month (3/16/15). Jeff was representing and discussing the now infamous “Facebook Emotion Study,” in which a classic social psychology study was conducted on over 600,000 unwitting Facebook members, to investigate the effects of increasing the percentage of positive or negative elements in their news feeds on the use of emotion words in their subsequent posts. He apologized, he explained, and he did so with a pleasing and measured dignity.

But he also made choices that dismissed, or at least downplayed, what to my mind are some of the most important issues and implications. He focused on the research study itself, that is, on the ethics of conducting research on unwitting participants. We can share that focus: we can, for example, ask whether Cornell lacked sufficient oversight, or concentrate on informed consent. I’m glad that he is doing that.

But that view ignores the elephant in the room. The way we interpret the ethics of the research is grounded in the way we evaluate the ethics of the underlying practice as conducted by Facebook. The study aroused so much public heat (The Guardian, The New York Times, Forbes) in part because it exposed how Facebook operates routinely.

We could argue that researchers are not responsible for the systems that they study and therefore that the underlying ethics of Facebook’s practice are irrelevant to the discussion of research. But that point of view depends on the existence of a very clear separation of concerns. In this case, the main author was an employee of Facebook. We must consider that pleasing Facebook, one of the most powerful sources and sinks of information—and of capital—in the world, was and is a factor in the study and its aftermath.

To my mind, the wrong aspects of the research are grounded in the wrong aspects of the system itself, and while Jeff Hancock is a multitalented, multifaceted guy, an excellent experimentalist, and presumably a searcher after truth, I also think that he gave Facebook a pass in his CSCW presentation. That, by itself, is an indicator of the deeper problem. It is really hard to think critically about an organization that has such untrammeled power. Hancock put the blame on himself, which is an honorable thing to do. He does not deserve more opprobrium and it was pretty brave of him to talk about the topic in public at all. But in some sense the authors of the study are secondary to the set of considerations the rest of us should have.

Regardless of what Hancock said or did not say, Facebook and other large corporations—Google, Apple, Amazon, and so forth, the so-called GAFA companies—make decisions about what people see in unaccountable ways. These decisions are implemented in algorithms. As Marshall and Shipman (“Exploring Ownership and Persistent Value in Facebook”) reported the next day, people do not know that the algorithms exist, much less what they contain. Users can imagine that the algorithms are at least impartial, but who actually knows? And even if the algorithms are impartial, we must remember that impartial does not always mean fair or right. It certainly does not mean wise. On one hand, all of these companies take glory in their power; on the other hand, they fail to claim responsibility for their influence, and their power, to a large degree, rests in their influence.

Is what Facebook is doing actually wrong? Not everyone thinks so. Hancock cited Karahalios’s work indicating that when people learn what Facebook does routinely they are initially very upset, but after a couple of weeks they realize they want to read news that is important to them. But this is precisely the place where Hancock’s argument disappointed. Instead of scrutinizing this finding, he moved on.

I have recently, in ACM Interactions, called out the ways that computers, as they are designed and as most people interact with them today, dominate humans through their inability to bend the way people often or even usually do. I hypothesize that computers put users in a habitually submissive role. On this analysis, the real damage inflicted by the influence of the large, unregulated companies on internet interaction is that the systems they create fail to reflect to us the selves we wish we were. Instead, they reflect to us the people they wish we were: primarily, compliant consumers. And I have raised the possibility that this has epidemiological-scale effects.

This is important from a design perspective. As I said, Hancock is a multifaceted, multitalented guy, but he is not a designer. The design question is always “What could we do differently?” and he neither asked it nor pushed us to ask it. Instead of talking about all the ways that Facebook or a competitor could provide some of its services (perhaps a little compromised) in a better way, the analysis tacitly accepted the trade-off that we cannot have both transparency, honesty, and control on the one side, and pared-down, selected information on the other. A person cannot do everything in one talk, but this was an important missing piece.

I am not the only person to talk this way about the importance of reconceptualizing technologies such as Facebook, or about the possible dangers of ignoring the need to do so. In my Interactions article, I cited an intellectual basis for these claims in a wide range of thinkers (Suchman, Turkle, Nass) and would have cited more but for the word limit. Lily Irani’s Turkopticon is an exercise in critical design. Chris Csikszentmihalyi’s work at the Media Lab represented a tremendous push-back. The Bardzells have also been central in designing responses.

And then, some of the points I make here—and more—were brought up beautifully in the closing plenary by Zeynep Tufekci of UNC Chapel Hill. Tufekci did not give Facebook a pass. She was forthright in her criticisms. She analyzed the situation from a different intellectual basis, offering a range of compelling examples of issues and problems. Most plaintive was the example of the New Year’s card, created by Facebook, that read “It’s been a great year” featuring the picture of a 7-year-old girl who had died that year. The heartbreaking picture had received a lot of “likes” and so was impartially chosen by the algorithm. The algorithm was written, as all algorithms are, by people, who were not so prescient as to imagine all the situations in which a large number of people might “like” a picture, much less how their assumptions might play out in actual people’s lives. The algorithm was written to operate on information that was, by the terms of the EULA (end-user licensing agreement), given to Facebook. Are we allowed to give our information to Facebook and other companies in this way? After all, we are not allowed to sell ourselves into slavery, although many early immigrants from Ireland and Scotland came to North America this way.
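The failure mode Tufekci described is easy to see in miniature. Here is a deliberately toy sketch (entirely hypothetical; not Facebook’s actual code) of a cover-photo picker that ranks purely by engagement:

```python
# Hypothetical sketch: a "year in review" cover-photo picker that ranks
# photos purely by like count, with no notion of sentiment or context.

def pick_cover_photo(photos):
    """Return the photo with the most likes, ignoring what it depicts."""
    return max(photos, key=lambda p: p["likes"])

photos = [
    {"caption": "Beach trip", "likes": 48},
    {"caption": "In memoriam", "likes": 512},  # grief drives "likes" too
    {"caption": "Graduation party", "likes": 230},
]

cover = pick_cover_photo(photos)
print(cover["caption"])  # the memorial photo wins on engagement alone
```

The designers’ implicit assumption, that likes signal celebration, lives in that one `max` call. The point is not this toy, but that every such ranking encodes assumptions its authors could not foresee playing out in real people’s lives.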

But the considerable agreement between Tufekci’s criticism, mine, and others’ is tremendously important. It exists in the face of a countervailing tendency to think that the design of technology has no ethical implications, indeed no meaning. My on-going effort is to design technologies that, sometimes in small ways, challenge the user relationship with technology and create questions.

After Hancock’s talk, but before Tufekci’s, one of my friends commented that the real threat to Facebook’s success would be another technology that does not sell data. In fact, Ello is such an organization, constructed as a “public benefit company,” obligated to conform to the terms of its charter. It intends to make money through a “freemium” model. According to Sue Halpern of The New York Review of Books (“The Creepy New Wave of the Internet,” November 2014), Ello received 31,000 requests/hour after merely announcing its intention to construct a social networking site that did not collect or sell user data. At 31,000 requests an hour, the trend would have had to have continued for many, many, many hours to start being able to compete with Facebook’s 1.4 billion users, but this level of response suggests that there is a deep hunger for alternatives.

Perhaps le jour de gloire n’est pas encore arrivé, but, designers, there is a call to arms in this! Thank god for tenure and a commitment to academic free speech.

Aside from the ways that we are, like Esau in the Bible, selling our birthrights for a mess o’ pottage (that is, selling our information and ultimately our freedom to GAFA companies for questionable reward), there is another issue of great concern to me: the almost complete inutility of the ACM Code of Ethics to address the ethical dilemmas of computer scientists in the current moment. I asked Jeff Hancock about this, and he said that “it had been discussed” in a workshop held the previous day about ethics and research. I look forward to hearing more about that, but it seemed clear that because his focus is primarily on the narrower issue of research—the matter brought up repeatedly in the press coverage and public discourse—he is more concerned with new U.S. Institutional Review Board and Health and Human Services regulations than with the position of the ACM. But CSCW and, for that matter, Interactions are ACM products. ACM members should be concerned with the code of ethics.

Posted in: on Sun, April 19, 2015 - 12:58:05

Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.

Designing the cognitive future, part VIII: Creativity

Authors: Juan Pablo Hourcade
Posted: Wed, April 08, 2015 - 10:44:36

In the past decade there has been increasing interest in the HCI community in the topic of creativity. While it is not a process at the same basic level as perception or attention, creativity is often listed as a topic in cognition, and it is the focus of this post.

Creativity is not easy to define. Reading through several definitions, I liked the one by Zeng et al. who defined it as “the goal-oriented individual/team cognitive process that results in a product (idea, solution, service, etc.) that, being judged as novel and appropriate, evokes people's intention to purchase, adopt, use, and appreciate it” [1].

If we want to enhance creativity, it is worth learning a bit about the factors that appear to affect it. The research literature points to two factors: diversifying experiences [2] and fluid intelligence mediated by task switching [3].

In terms of diversifying experiences, there is anecdotal evidence that many highly creative people grew up with diverse experiences, for example, speaking many languages, living in many countries, or having to cross cultures [2]. It makes sense that the ability to take multiple perspectives on a topic or experience would help with creativity. There is also evidence that people can be more creative in the short term right after experiencing a situation that defies expectations [2]. Perhaps throwing our neuronal systems off balance makes it more likely that a new path will be traveled in our brains.

The other factor that seems to make creativity more likely is fluid intelligence, the ability to solve problems in novel situations. More specifically, one factor related to fluid intelligence that appears to make a difference is task switching, the ability to switch attention between tasks (or approaches) as needed [3]. 

So how are technologies affecting creativity, and how might technologies affect creativity in the future?

When it comes to diversifying experiences, interactive technologies can certainly help provide more of those. We can already experience media from all sorts of sources in all sorts of styles. These can provide us with much broader backgrounds full of different perspectives. They can also give us convenient access to inspirational examples. In addition, it is easier than ever to find perspectives and points of view that may challenge ours, modifying our neuronal ensembles. 

Another way technologies may provide us with even richer perspectives is by enabling us to interact remotely with a wide variety of people. Just as online multiplayer games enable gamers from around the world to form ad-hoc teams, similar teams could be formed for other purposes. Having truly interdisciplinary, multicultural teams come together after a few clicks and keystrokes could make it much easier to gain new perspectives on problems. Similar technologies could also make it easier to reach groups of diverse people who could quickly provide feedback on ideas to see which ones are worth pursuing.

Interactive technologies could also help with the task switching necessary to consider several alternative solutions to problems. Technologies could, for example, enable the quick generation of alternatives, or may enable quicker shifting by easily keeping track of ideas. Tools can also make it easier to express ideas that are in our heads. High-quality design tools are an example. They simply give us a much bigger palette and toolbox. These design tools can be complemented by other tools that can make these ideas concrete, such as 3D printers. Holographic displays could also be very helpful in this respect. 

Could interactive technologies get in the way of creativity? It’s possible. If the sources of experiences become standardized, it could affect the ability to gain different perspectives. If we have technologies deliver experiences that keep us in our comfort zones, this is also likely to reduce our ability to be creative. If most people use the exact same tools to pursue creative endeavors, then we are more likely to come up with similar ideas.

So what could the future of creativity hold? The ideal would include readily available diverse experiences, especially those that challenge our views and help us think differently. It could also include powerful, personalized tools that help us make our ideas concrete, discover alternatives, and obtain quick feedback, perhaps doing so with diverse groups of people. 

What would you like the future of creativity to look like?


1. Zeng, L., Proctor, R. W., & Salvendy, G. (2011). Can Traditional Divergent Thinking Tests Be Trusted in Measuring and Predicting Real-World Creativity? Creativity Research Journal, 23(1), 24–37.

2. Ritter, S. M., Damian, R. I., Simonton, D. K., van Baaren, R. B., Strick, M., Derks, J., & Dijksterhuis, A. (2012). Diversifying experiences enhance cognitive flexibility. Journal of Experimental Social Psychology, 48(4), 961–964.

3. Nusbaum, E. C., & Silvia, P. J. (2011). Are intelligence and creativity really so different?: Fluid intelligence, executive processes, and strategy use in divergent thinking. Intelligence, 39(1), 36–45.

Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.

The future of work

Authors: Jonathan Grudin
Posted: Tue, March 24, 2015 - 12:23:12

Some researchers and pundits predict that automation will bring widespread unemployment. This is unlikely. The shift of some labor to technology has been in progress for decades, but in the past 5 years the United States added almost 12 million jobs. Where is the automation effect? What will materialize to shift us from fast forward to permanent reverse gear? What drives this fear?

In an earlier post, I mentioned an invitation to a debate on this topic after being among the optimists in a Pew Research Center survey. Pew’s respondents were divided. 48% believe technology will increase unemployment; 52% believe employment will increase. I was quoted: “When the world population was a few hundred million people there were hundreds of millions of jobs. Although there have always been unemployed people, when we reached a few billion people there were billions of jobs. There is no shortage of things that need to be done and that will not change.”

Four of the principal speakers at the Churchill Club forum on Technology and the Future of Work were eminent economists, including a former chair of the White House Council of Economic Advisers and a former director of the White House National Economic Council. The other four were technologists. A Singularity University representative insisted that within five years, all work would be done by intelligent machines. Jobs in China, he said, would be the first to go. The president of SRI said that it would take 15 years for all of us to be out of work. A third technologist exclaimed that the impact of technology now was like nothing he had ever seen.

“That’s because you didn’t live in the 19th century,” an economist said dryly.

Off-balance, the technologist responded, “Neither did you!”

“No, but I’ve read about it.”

The best guess at tomorrow’s weather: the same as today’s

Technologies eradicated occupations, yet the workforce grew. The share of Americans employed in agriculture fell from 75% to about 2% in a little over a century—and that was after the industrial revolution transformed the Western world. Hundreds of thousands of telephone operators were replaced by technology in the late 20th century. The economists at the forum did not anticipate imminent doom, but some expressed reservations about the nature of the jobs that will be available and concern over growing disparities in income and wealth. Will workers who lose jobs retool as rapidly as in the past? We don’t know. Shifting from farming to manufacturing required major changes in behavior and family organization.

At one time, positively rosy views of automation-induced leisure were common. Buckminster Fuller predicted that one person in ten thousand would be able to produce enough to support ten thousand people. Since one person in ten thousand would want to work, he surmised, no one would have to work unless they wanted to. Arthur Clarke, another writer, inventor, and futurist, had similar views.

Alas, the evidence suggests otherwise. Hunter-gatherer societies were relatively egalitarian as they struggled to survive. Agricultural self-sufficiency and the ability to satisfy basic necessities did not lead to leisure—hierarchies of privilege sprang up and funneled most resources to the top 1%. Growing income inequality in the United States may be a regression to the norm [1]. The 1% profit by finding ways to keep the 99% producing: There are armies to equip and pyramids to build.

Let’s bet against a job-loss pandemic. Despite the current enthusiasm for deep machine learning, the Singularity isn’t imminent. We hear more complaints about systemic malfunctions than breathless reports of technological adroitness. Deep Blue was an impressive one-trick pony focused on a constrained problem: a handful of objects governed by a limited number of rules. It was dismantled. The Watson search engine retrieves facts, which is lovely, but it’s our job to use facts. Four years after the Jeopardy victory, investors are pessimistic about IBM.

In the 1970s, it was widely predicted that programming jobs such as mine at the time would be automated away. This was music to management’s ears—programmers were notoriously difficult to manage and insisted on high wages, which they tended not to use for haircuts, suits, and country club memberships. Tools improved and programmers became software engineers, but employment rose.

The web brought more predictions of job loss—who would need software developers when anyone could create a web page? But markets appeared for myriad new products—developers didn’t go away—as did hundreds of millions of web sites. Anyone can create a site, but it requires an investment of time to learn a skill that will be exercised infrequently and needs to be maintained. It is more cost-effective to hire someone with a strong sense of design who can do a sophisticated job quickly. The new profession of web site designer flourished.

I visited my niece, a prosperous organic farmer. She has solar panels that supply a battery that maintains an electric fence. In the distance, solar panels power a neighbor’s well pump that irrigates a field. Solar already employs twice as many Americans as coal mining. This is employment created by technology, and it is early days—the Internet of Things will bring jobs we can’t imagine.

Work that is not inherently technical benefits from technology. Almost any skill that could be considered as a career—cooking fish, growing orchids, professional shopping, teaching tennis—can be acquired more rapidly through resources on the web. Reaching a level of proficiency for which people will pay takes less time than in the past.

When I was 18, I knew the one “right way” to hit each tennis stroke and was hired to teach in a city park system. Twenty years later, I attended a weekend morale event at a tennis camp. There was no longer one right way to teach. Technology had changed coaching. My strokes were videotaped. The instructor identified all weaknesses. (Today, fifteen years later, machine vision might identify the problems.) The coach’s task shifted: She decides which of the five things I’m doing wrong to focus on first. She gauges how many I’m capable of taking on at once. A coach sizes up your personality and potential. Should she say “good job!” to avoid discouraging you or “try again, hold it more level!” to keep you going? A good instructor knows more about strokes and also understands motivation. Jobs are there. A weekend warrior could go online, have a friend videotape strokes, and perhaps find analytic software. How many will bother?

We all have agents

An employee of the science fiction author Robert Heinlein told me that Heinlein was methodical. He and his wife researched the planet to find the perfect place to live and settled on a plot between Santa Cruz and San Francisco. The only drawback was seismic. The Heinleins designed a house to “ride earthquakes like a ship rides the sea,” anchoring heavy furniture deep in the foundation.

When Heinlein worked on a book, he stayed in his room and gained weight. To avoid obesity he lost weight before starting a book, ending up back at normal. To the dismay of his agent, Heinlein was not that fond of writing and his royalties covered his needs, my friend said. Heinlein’s agent determined that his job was to find and entice the couple with expensive consumer items, to convince Heinlein to undertake the ordeal of writing another book.

We all have agents who benefit from our labor by convincing us to work for things we may not need. One can have reservations about consumerism, but the ingenuity to devise and market objects is a notable human skill.

Good jobs

“Will the new jobs be good jobs?” What does this mean? Were hunting, gathering, and farming good jobs? Working on an assembly line or as a desk clerk? Is an assistant professor’s six-year ordeal a good job? “Good” generally means high-wage; wages are set more by political and economic forces than by automation. The inflation-adjusted minimum wage in the U.S. peaked in 1968, almost 50% higher than today—wage growth will raise the perceived quality of many jobs. Globalization and competition can drive down wages and are enabled by technology, but are not consequences of automation. 

Despite the rapidly falling U.S. unemployment rate, there is uneasiness. Youth unemployment is high, not all of the new jobs are full-time, wages are rising more slowly than anticipated, and income disparity is increasing. And it is reasonable to ask, “Could another shockingly sudden, severe recession appear?”

I see students in high school and college who have none of the optimism my cohort did that good jobs will follow graduation. Anxiety is high, fed by reports they won’t live as well as their parents. Some of this is the reduction in support for education: Student debt was rare in my era. Because my friends and I were confident in our future prospects, we could take a variety of classes and explore things that were interesting, whether or not they had obvious vocational impact. We spent time talking and thinking. Well, and partying. Maybe mostly partying. Anyway, students I encounter today express a greater need to focus narrowly on acquiring job-related skills. There is still some partying. Maybe not as much.


Growth in U.S. employment is the strongest since the Internet bubble years. Productivity growth has tapered off—the opposite of what would be expected were automation kicking in. With little objective evidence of automation-induced joblessness, it is natural to wonder who is served by the discourse around employment insecurity.

Two beneficiaries come to mind. The top of the hierarchy benefits if the rest of us worry about unemployment and accept lower wages. Employers benefit when graduates are insecure. Also, technologists benefit in different ways. The reality may be that productivity gains from computerization have leveled off, but the perception that technology is having a big impact could be to the psychological and economic benefit of technologists. Who would want to lay off those writing code so powerful it will put everyone else out of work?


1. In the conclusion to the formidable 1491, Charles Mann suggests how the young United States resisted this norm.

Thanks to John King for discussions on these issues.

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

Humanity’s dashboard

Authors: Aaron Marcus
Posted: Wed, March 18, 2015 - 11:33:50

The Doomsday Clock.

Speaking of the Apple Watch or iWatch, as it is called informally...Time is running out...

Many decades ago, people around the world learned of the Doomsday Clock (maintained since 1947 by the Bulletin of the Atomic Scientists, and referred to indirectly in the movie Dr. Strangelove, which spoke of the Doomsday Machine). The clock, poised at about three minutes to midnight, summarized estimates of how close humanity was to a global nuclear holocaust.

Thankfully (?) we have other challenges to consider these days in addition to nuclear proliferation and the threat of nuclear war. Alas, people do not think enough about these challenges in a way that might move them to inform themselves more and to lead them to action.

Of course, one can debate forever about which are the worst calamities awaiting human civilization. Don’t forget that in the 2008 remake of the science-fiction classic The Day the Earth Stood Still, Klaatu came to Earth to save the planet from its human population—not to save us—because humankind had demonstrated to another civilization beyond the earth that it was simply incapable of managing the earth in a beneficial way. Let’s hope that we can do better than the miserable scorecard implied by the movie.

Let’s also avoid unproductive name-calling and blame, and simply concentrate on “What do we have here?” and “What can we do to improve things?” If you asked me for a quick list of major challenges/issues, I might list the following, not necessarily in order of urgency or impending doom:

Solar power: To provide sustainable energy, we must find a way to make use of the abundant energy given to us daily by the sun. There is simply no excuse. Yes, big oil companies may experience some technical difficulties, but the long-term benefits are indisputable. A project I worked on in 1978 focused my attention on visualizing global energy interdependence. We have made some positive steps, but we are in need of much more progress.

Desalination of the oceans to provide drinking water: Another project we did for SAP a few years ago focused our attention on the crisis facing all megacities of the earth: they are depleting their underground water tables and will run out of drinking water in a few decades unless something drastically changes. We have abundant oceans that cover about 70% of the earth. There is simply no excuse. We must work out how to create easy, simple, inexpensive, abundant desalination processes that can be undertaken wherever feasible.

Race/religion/gender equality: Recent events in the USA have uncovered racism that was thought to have been minimized in past decades. The USA is not the only place to suffer from persecution and punishment of people because of race, religion, and gender orientation. News from many other countries tells of horrible acts, unfair laws, and lack of justice. There is simply no excuse. We must do better.

Nuclear war: The Doomsday countdown may have changed, but it has not disappeared. Negotiations with Iran and with other countries bring our precarious hold on global safety to our attention. There is simply no excuse. We must do better.

Education for all: One of the best solutions for long-term equality, prosperity, and peace is education, for all. Yet UNICEF estimates that about 100 million children have no access at all to education, more than half of them girls. People cannot get access to even the rudiments of a suitable education. There is no excuse. We must do better.

Perhaps you can think of 1-2 others that belong at the top of the list for multiple Doomsday Clocks. 

Which brings me to the iWatch. Like the iPod, iPad, and iSomethingElse, all of these products focus on the individual consumer—on the hedonistic pleasure of that individual’s life. Apple frequently touts a display for its new watch with many colorful circles/signs/icons/symbols, almost like a toy, or a reference to some Happy Birthday party with colorful balloons. All well and good. Yet there is a different approach possible.

Our work at AM+A for the past five years on persuasion design and our Machine projects (to be documented in mid-2015 in Mobile Persuasion Design from Springer UK) taught us the value of dashboards as a key ingredient in keeping people informed and motivated for behavior change—together with journey maps or roadmaps to clarify where we’ve come from and where we are going, focused social networks of family and key friends, focused just-in-time tips and advice, and incentives.

And what journey am I considering? Humanity’s journey through history and through our very own lives, each day, for all of us. I think we need to be reminded more of our Membership in Humanity.

I propose that we don’t need so much an iWatch as a WeWatch...with a Dashboard of Humanity that would constantly be the default display on our wrist-top device (of course, with the local time discreetly and legibly presented) to remind us: Where are we? How are we doing? How much time does humanity have left?

Perhaps it would be a star chart with five to seven rays, or just five to seven bars of a bar chart. The position, length, direction, and colors would become so familiar to us that we could “read it” easily, and detect immediately if something ominous or propitious had recently occurred, and it would be the gateway to exploring all the underlying details that might motivate us and move us forward on our own personal journeys. 

Perhaps it would be something that could be made available by the manufacturers for all platforms, all price ranges, from the most humble to Apple’s new $17,500 luxury version of its watch for the “0.1 Percent Users.”

Yes, showing the time is important. But so is showing the future of humanity for all of us on earth. 

Yes, the latest stock-market report is important. But so is showing the future of humanity for all of us on earth.

Something to consider in thinking about the next “killer” app as Apple prepares to release its new watch.

Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.

Designing the cognitive future, part VII: Metacognition

Authors: Juan Pablo Hourcade
Posted: Fri, March 06, 2015 - 12:49:00

In this post, I discuss my views on designing the future of metacognition. The definition of metacognition I use in the post refers to the monitoring and control of other cognitive processes. Monitoring helps us reflect on what is happening and what happened, while control enables us to regulate cognitive processes. While monitoring and control can happen automatically, here I focus on explicit metacognition: our ability to reflect on and justify our behavior based on the processes underlying it. 

It turns out that it is very difficult to report on cognitive processes because we have little direct conscious access to them. However, we do have access to their outcomes. In fact, we experience our actions and their consequences as closer together in subjective time than they are in objective time (a phenomenon called intentional binding). But this only happens when our actions are voluntary. This phenomenon helps us experience agency and feel responsibility for our actions.

Psychologists are increasingly arguing that metacognition is most useful to help us better collaborate. One of the hints that this may be the case comes from studies suggesting that we tend to be better at recognizing the causes of behavior in others than in ourselves. In addition, there is evidence that our metacognitive abilities can improve by working with others, and that collaborative decisions (at least among people with similar abilities) tend to be superior to individual decisions, given shared goals.

Think of how reflecting with someone else about our behavior, decisions, and perceptions of the world can help us make better decisions in the future. For example, a friend can help us reflect about the possible outcomes of our decisions and provide different points of view. It is through contact with others that we can learn, for example, about cultural norms for decision-making. 

These discussions can also be useful for collaborative decision-making. Being able to communicate about our goals, abilities, shortcomings, knowledge, and values can help us better work together with others. Understanding the same things about others (a.k.a. theory of mind) can take us one step further. It can get us to understand collective versions of goals, abilities, shortcomings, knowledge, and values.

Metacognitive processes can help us make joint decisions that are better than individual ones, develop more accurate models of the world, and improve our decision-making processes. Through these, we can get better at resolving conflict with others, correcting our mistakes, and regulating our emotions.

So what are some roles that technologies are playing and could play with respect to metacognition? 

One obvious way technology is helping and can continue to help is in enabling us to record our decisions and our rationale for these. Rather than having to recall these, we can review them, analyze them, perhaps even chart them to better understand the areas in which we are doing well and the ones where we could be making better decisions. 

Technologies can also be useful in helping us gain a third-person view of our own behavior. Video modeling, for example, is widely used with special needs populations, such as children diagnosed with autism, to help them better reflect on behavior. Similar tools could be used to reflect on group processes.

Communication technologies can also be helpful. They could help us have richer face-to-face discussions, as well as have quicker access to more people with whom to talk about our behavior and decision-making. Expanding the richness of these communications and the number and diversity of people we are likely to reach could help us improve our metacognitive abilities. 

The most exciting and controversial developments are likely to come from technologies that help us better understand and control our cognitive processes. One approach that is already being used in the human-computer interaction community is neurofeedback. Current solutions use electroencephalogram technology to obtain information on brain activity. Researchers have used these, for example, to help people relax by making them aware of the level of activity in their brains. 

An up-and-coming alternative is near-infrared spectroscopy, which can be used to scan activity in the brain’s cortex. If there are cognitive processes that occur in the cortex, they could be monitored. These monitoring technologies could also be used to share this information with others. They may be useful, for example, in working with a therapist. At the same time there could be serious privacy issues. Could your employer require you to wear a device to know when you are not paying attention?

As for control over cognitive processes, one option currently being explored is transcranial ultrasonic technology, which some researchers think could eventually be used to activate specific regions of the human brain (e.g., see William Tyler’s work at Arizona State). This again poses significant ethical challenges. Would you allow someone else to make decisions about stimulating your brain?

I would largely prefer expanding on what already works (e.g., better communication with others), although I find advanced monitoring and control technologies intriguing, especially if I could keep information private and be in full control of any brain stimulation.

How would you design the future of metacognition? Would you want to have more information about and control over your cognitive processes? Or would you prefer technologies that help you do more of what already works (i.e., communicating with others)? Would you want both?

This post was inspired in part by this article: Frith, C.D. The role of metacognition in human social interactions. Philosophical Transactions of the Royal Society B: Biological Sciences 367,1599 (2012), 2213–2223.

Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.

Ai Wei Wei on Alcatraz

Authors: Deborah Tatar
Posted: Tue, March 03, 2015 - 1:07:35

When I saw that the Chinese dissident artist Ai Wei Wei was to have a show on Alcatraz, I knew that I had to attend. Not only had I written three prior blog posts about his work and its relationship to criticism and freedom through design, but there was also something immediately compelling about the idea of art from an artist who had been imprisoned, situated in a notorious former prison.

When I started these blog entries for ACM Interactions, I called them “the background-foreground playground.” My plan was to talk a lot about the importance of what we treat as background and as foreground in design, and how bringing forward something that has previously been backgrounded can be a crucial contribution, both in design and in design research. And, of course, I thought that I would eventually get around to thoughts about the importance and mystery of background. Yeah, well, it hasn’t worked quite that way. Only if you look at my blog posts from exactly the right perspective can you see these interests. 

So I thought, “Ai Wei Wei on Alcatraz! Of course!”

But then, my thought stopped short. 

Alcatraz is, of course, an island in San Francisco Bay with a prison. But it is no Robben Island or Chateau d’If. Alcatraz’s most famous prisoner was the gangster Al Capone. Al Capone was not a prisoner of conscience, the victim of desperate circumstance, a freedom fighter, or a casualty of the brutal use of power. Al Capone was Not a Nice Man. Mothers felt their children were safer when he was locked up in an impregnable fortress. 

So, the relationship between Alcatraz and Ai Wei Wei is not actually obvious. What could the home of Al Capone have to do with prisoners of conscience? Perhaps the show was just a bad idea.

When I finally got my ticket and boarded the ferry, I was anxious to see what a great, but still fallible, artist did with the setting. How did he make felt meaning?

The first experience, unhappily, consisted of announcements—on the ferry, in brochures, in a video, and in a ranger talk—that over and over told visitors that the Ai Wei Wei exhibit was not about imprisonment but rather about the relationship between freedom and imprisonment. This fit with the National Park Service slogan, “Alcatraz: More than a prison,” but it was not an auspicious start. The announcements struck me as sapping energy from the setting. There is plenty of dramatic natural beauty around the San Francisco Bay. Why visit Alcatraz if not to experience a frisson of horror? I wondered whether Ai Wei Wei even knew that these announcements frame the experience. Although he is not currently in jail, the Chinese government has not restored his passport, and he was not able to visit the site himself.

But maybe the tedious announcements serve some purpose. Perhaps people don’t like surprise. And surprise is what we get. 

With Wind greets visitors head on, just above eye-level.

After hiking the 300+ feet upwards from the ferry, we walk into a large barren concrete room, formerly a prison work room. We are greeted by the huge head of a Chinese dragon kite. Its body follows, consisting of a sequence of round silk panels hanging from the ceiling, snaking perhaps a hundred feet back into the room. Each panel is gorgeous and different. 

Eventually one notices that embedded in some of the panels are quotations, in English, from prisoners of conscience. Many of these quotations are quite hard to read because they are woven into other elaborate patterns. I have included a picture of one of the easier ones, which is, interestingly, from one of the few Americans, North Americans, or even Western-Hemispherians in the set. The cumulative impact of these panels is, for me, almost like a line of people, tied together but each with a unique and uniquely represented life, some glimpses of which are seen through the occasional words and names. We wonder what unknown and unspoken struggles are signified by the panels without words.

With Wind extends backwards into the space. NB: The drab cement of the ceiling and floor looks polished in this snapshot, but it is not.

Around are other Chinese kites, many not unlike those that frequently fly on the greensward of the San Francisco Marina just across the Bay, but far more luxurious and primarily made out of silk, festooned with pictures of birds, mostly owls. I don’t think that the dragon kite could fly, but with its low-hanging head, it also echoes another tradition that has come to San Francisco from China: the dragons that twist and wind along the narrow streets in the parade every Lunar New Year, amidst the popping and smoke of firecrackers. The dominant feeling is celebration, freedom, joy, light. The piece speaks of transcendence.

Ai Wei Wei is concerned with prisoners of conscience. Because he is on Alcatraz, he does not have to say that transcendence is hard, achieved rarely, perhaps just a hope or a dream. The cement of the floor, ceiling and walls and the rusted, decrepit sink in the corner tell us that. The kites would be treacle in another setting, but Alcatraz makes them commentary. He establishes an altogether different set of background thoughts than the specter of Al Capone. 

An embedded message appears every few panels.

Three other pieces are worth mentioning. Just behind the kite room is another huge room, with 16 columns between the cement floor and the cement ceiling. On the cement floor lies a “graveyard” of images of prisoners of conscience, each consisting of a portrait made of Legos. As you walk amongst these Lego portraits, they are very hard to see. You have to actively struggle to see that which is in front of you. But if you photograph them, say, with your iPhone, all ambiguity falls away and only the clarity of the image is preserved. 

Douglas Coupland made a similar point in some pieces in his recent Vancouver show: that which is complex in real life is simple when seen through the lens of mediation. But we could not see Coupland’s pieces except through mediation. The Coupland show caused us to question what we consider real. 

Here the blurring is a different design decision addressing a different aspect of being. It is as if to say that the least that we, the viewers, can do is make the effort to see. Our view of prisoners of conscience is constructed.

Later, we have the opportunity to see the piece from the prison guards’ gallery. Because we are farther away, the faces are clearer, but we see them through bars. They are at once clearer and less personal. The prisoners are accounted for. (Batya Friedman’s “The Watcher and the Watched” [1] addresses this latter experience, as does other work in CHI harking back to Bentham’s Panopticon.)

The other two pieces that I will mention seem to me to move us into deeper engagement. One is explicit. There is an array of postcards, each addressed to a different living prisoner. People pick a postcard and sit in the prison cafeteria at long benches, each trying to imagine what to say to a real prisoner of conscience. (Apart from sentiment, this has the pragmatic function of showing governments that these people are not forgotten.)

The last piece engages more explicitly with the prison location, but by now (if your reaction resembles mine), we are far beyond “othering,” the frisson of fear and interest we might feel with respect to a gangster, his gat, his moll, his rolling walk and big cigar. We are ready to be sad, and real, and from a place of real sadness, respect principled resistance. 

A cell at Alcatraz, playing the words of Iranian poet Ahmad Shamlu.

Twelve to fifteen cells in the oldest, smallest, dampest, most decrepit wing of the main cell block are associated with audio clips. The slight contextual change of entering the cell now puts us into the experience of prison. No one would voluntarily spend time in these dreadful spaces. Yet, when entering a cell, we are able to hear music and poetry associated with a prisoner of conscience, as if in the dream space that they might have created. Toxica’s grating sounds and anarchic hostility bring vivid people and events of creation into the space of its particular cell. So, too, does Pussy Riot. So too does the poetry of Victor Jara or the songs of the Robben Island singers. Each recording makes a place in the way that Steve Harrison and I described in "People, Events, Loci" and that Jonas Fritsch talks about in his dissertation on affect and technology [2]. Each space has meaning, and the meaning is stronger because music, like life, moves quickly from experience to memory.

This is the most expansive piece in the sense that it includes words and music from around the world and across time (although most of the singers are dead or the musical groups defunct). The Czech Republic is, for example, represented both in resistance to the Soviet Union, by Toxica, and in resistance to the Nazis by Pavel Haas’ Study for String Orchestra, written and first played in the Terezín concentration camp. The piece shows us digitized sound, in some sense, in that each experience is cataloged into its own place in the cell block, which can be seen as an array with three rows and about 20 cells in each row. Yet, despite this digitization, the piece is also transcendent as a totality.

So far my blog entries have backgrounded issues of background-and-foreground. @Large: Ai Wei Wei on Alcatraz succeeds in foregrounding them.


1. Friedman, B., Kahn, P. H., Jr., Hagman, J., Severson, R. L., and Gill, B. (2006). The watcher and the watched: Social judgments about privacy in a public place. Human-Computer Interaction, 21(2), 235-272.

2. Fritsch, J. 2011. Affective Experience as a Theoretical Foundation for Interaction Design. Ph.D. thesis submitted to Aarhus University.


Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.

Body/brain data/licenses

Authors: Aaron Marcus
Posted: Fri, February 27, 2015 - 4:06:38

We all have bodies and brains.

Some of us have driver’s licenses, social-security numbers, passports, and email addresses issued by or monitored by one or more governments and their agencies. Our identifiers of ourselves have limited shelf lives. We all arrive on earth stamped with an expiration date or “best used before” date in our genes. Now, with the Internet of Things, not only can our refrigerators and our chairs and our floors communicate with the Internet to let other people and/or things know about ourselves, our current location, condition, mood, and state of cognitive, emotional, and physical health, but our own bodies can be communicating with the universe at large, whether we are paying attention to that fact or not. Most people won’t notice or care...

I do.

Because I have been sort-of dead at one point, as my heart was stopped (I was on an artificial heart during triple-bypass surgery), and because all of my original nuclear family members are dead (may they rest in peace), and my own body is being kept alive with about five stents in the arteries around my heart, I think about my body/brain data.

This awareness/knowledge does not necessarily lead to depression, lethargy, or enervated wandering of the mind. This attentiveness can sharpen and focus attention, helping one decide what one can, must, and should do with (as for me) about 350 million seconds left of life. There is even one wristwatch that offers a death clock to remind one of the countdown. This awareness/knowledge can lead one to jettison many frivolous commitments and objectives (unless one decides to devote oneself to frivolity, of course).

For me, it has led to internal observation and speculation about body/brain data in the age of the Internet. Some comments/observations follow.

Marilyn Monroe’s estate carefully guards the memory and legacy of the original movie star born Norma Jean Baker. Michael Jackson and Elvis Presley continue to generate income from the sale of their music and videos, and their effigies, photos, videos, etc., which are carefully monetized and managed. Imagine the future of body data, including 3D scanned images of iconic stars, which might be captured while they are alive, and be made available after their deaths.

Actually the licensing of personal data could start during life. I have speculated for my Health Machine project (see, for example, Marcus, A. The Health Machine. Information Design Journal, Vol. 19, No. 1, 2011, pp. 69-89, John Benjamins, publisher) that Lady Gaga might distribute/franchise licensed data about what she eats for breakfast, and these data might be distributed not only to her “Little Monsters” (her fans), but be distributed by FitBit or other personal health-management products, such as electronic, “wired” toothbrushes that record one's brushing history and compare that history to others who set good examples of health maintenance.

The birth of death stars 

I speculate that some enterprising company will offer (perhaps some already do) authentic, guaranteed websites or social-media centers that collect your lifetime body data, along with your thoughts and messages, as you voluntarily record them or as they are automatically recorded (note the recent ability to “read” an image of what someone is seeing by monitoring his or her brain waves), in order to send them out to family and friends, loved ones, or hated ones for, say, the next 100 years.

In a way, you will be able to “live on,” including “your” reactions to future events. This phenomenon would give new import to “What would Jesus do?” (or say). In fact, anyone could continue to broadcast messages, somewhat like light from a “dead star,” which takes years, in some cases millions of years, to reach us, arriving long after the faraway sender is gone.

I have not speculated on the cost or the legalities of guaranteeing “permanence.” Note that the cryogenic companies had to guarantee the viability of preserved human heads or bodies in the 1980s or 1990s...and someone had to pay for this service. That matter may be worthy of further speculation.

Brain buddies

Slightly off-topic, but speaking of our brains and possible companions... Apple’s Siri has made quite an impression. Then came Microsoft’s Cortana. Now Microsoft China has introduced XiaoIce, Cortana’s little sister. Each has its own personality traits, a far cry from Microsoft’s Paper Clip of decades ago. The movie Her portrayed a super-intelligent agent voiced by the sultry Scarlett Johansson. That story had a sad ending: the super-agent decided to go off and play with other intelligent agents rather than relate to feeble human beings.

Let’s avoid looking at the pessimistic side and assume that a Super-Siri of the future will be our Brain Buddy, assigned at birth to accompany us through our lives. Like the social security number and email address that all “with it” babies nowadays acquire at birth, the future you will get an AY (Augmented You), or AI (Augmented I). Take that as a starter thought and go with it. Where do we land?

Will these companion cranial entities survive us mortal souls and continue to emit snappy chatter to all who inquire of us and our latest thoughts on Topic X? Only time will tell...


Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.

Designing the cognitive future, part VI: Communication

Authors: Juan Pablo Hourcade
Posted: Tue, February 03, 2015 - 5:33:59

Continuing the series on designing the cognitive future, in this post I discuss communication. This is a topic on which the HCI community has spent a significant amount of energy, with conferences such as CSCW fully dedicated to it.

Two hundred years ago, most personal communications occurred face-to-face, with the most common exception being letter writing, for those who were literate. This meant that most communication was in real time with those in physical proximity, requiring people to process information through the senses (mostly auditory and visual) and respond through speech and gestures in real time. Letter writing involved different kinds of cognitive processes. Besides reading and writing, it also involved thinking about when the recipients might read the letter, their context, and how long it might take to receive a reply. Overall, most people lived in a very localized bubble, with perspectives and points of view that were very local, expanded only through books and other print media for those with the means and education.

The telegraph brought greater efficiency to remote communications, with the telephone significantly extending our ability to communicate remotely, eventually without intermediaries. However, expenses associated with these communication methods meant that most personal communication still occurred with those nearby. 

With the combination of mobile devices and Internet connectivity we have nowadays, we have the ability to communicate anytime, anywhere, with an unprecedented number of people. Not only that, but for many people in high-income regions, it can happen quite conveniently and at affordable costs. This trend is likely to expand into lower-income regions, with remote communication becoming more widely available, more convenient, and less expensive, while providing access to more people with greater fidelity. The cognitive processes involved in communication are still quite similar to those used for face-to-face communication, although the bandwidth is narrower, making these communications easier to process, but often more ambiguous. There are cognitive challenges, though, in navigating the myriad of communication options and their respective etiquettes. 

The trend moving us from communicating primarily with those nearby to communicating with our favorite people from around the world is likely to continue. One possible challenge brought about by this trend is that many people may end up communicating almost exclusively with people they like, who share their lifestyle, viewpoints, ideas, and values. This could be exacerbated by increased automation in service industries, meaning that people could avoid previously necessary interactions with strangers. People are also increasingly accessing personalized mass media. Taken together, this could lead to people operating in a new kind of bubble, one that tends only to reinforce personal beliefs, and that may make people unaware of others’ realities, including those of people physically around them.

At the same time, communication technologies can enable people to stay in contact with others from halfway around the world, perhaps people they met only once. This could have the opposite effect, providing new perspectives, ideas, and realities. As translation technologies improve, there are also possibilities of people engaging in sustained communication with others with whom they do not share a language. Perhaps they could meet through a mutual interest (e.g., music, art, sports) and communicate in ways not previously possible. This could have even greater effects in lowering barriers between cultures and moving past stereotypes.

What about the technology? It’s easy to imagine extensions of what we are currently seeing: anytime, anywhere communication with anyone; higher-fidelity, fully immersive remote communications engaging all our senses with the highest quality audio, high-definition holograms, full-body tracking and haptic feedback, and who knows, maybe even smell and taste. My guess is most people would stick with the audio and video a majority of the time.

But there could be other communication technologies that go beyond. A longstanding wish and feature in science fiction stories is the ability to communicate thoughts. Something that may be more attainable is technology to help understand the feelings of others, during both face-to-face and remote communication. This could involve processing facial expressions, tone inflections, heartbeat, and so forth. The outcomes could make communication easier for some groups, such as autistic people. 

At the same time, personal communication technologies could further enable people to more fully express themselves. I saw an example the first time a girl diagnosed with autism used a tablet-based zoomable drawing tool. What on a piece of paper would have looked like scribbles turned into a person drawn from the details to the whole, with two large eyes looking at it. The tool enabled her to tell us what it felt like to be observed. This ability to express thoughts in ways that were not previously possible is something personal communication technologies can enable, something that could potentially make a big difference in the lives of people who have difficulty expressing their thoughts.

So what are my preferences for the future of personal communications? While I greatly enjoy easily being able to communicate with loved ones and people who share my values, I find the opportunities in enabling communication with those who live in very different contexts crucial to helping us make wise decisions about our world. Similarly, most of us are quite fortunate in being able to express ourselves at least to the degree that we can fulfill our basic needs and interests. As a community, we need to continue our work in extending that ability to all people.

How would you design the future of personal communications? 


Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.

@Arthur (2015 07 02)

Hi Juan Pablo,

I liked that article very much, thanks for posting this. And actually I am now working on an academic project in HCI that does exactly what you are talking about: helping people from different cultures communicate better.
I think that the place with the most room for development is intercultural communication because, as you mentioned, the world getting more and more interconnected has allowed us to communicate with people we would never have met otherwise. And the speed of technological advances in the domain is much higher than the speed of our adaptation.
Especially, I want to augment video-conference tools by automatically analyzing facial expressions, emotions, and gestures, and displaying information to the chat users in order for them to learn culture-general capabilities.
Specifically, I am now looking for tools that can help me achieve this. They need to be accurate and JavaScript-ready.
Have you already worked with e.g. Kinect, Intel RealSense, or VisageSDK? If yes, what is your opinion about them?
Any pointers?


Taking stock

Authors: Jonathan Grudin
Posted: Thu, January 22, 2015 - 5:20:50

Two years of monthly posts. A year ago I weighed the experience and suggested that discussion is becoming a less effective use of time, given the ease of scanning masses of information and perspectives on most topics. A blog contributes to the information pile, but engaged discussion may diminish. I see occasional spontaneous flare-ups or flurries. Does your online or offline experience differ?

Twelve posts later, another time to reflect. When Interactions began online blogs, my goal of one a month seemed unambitious—we were asked to write twice that. I stuck to it and now feel downright gabby. I persevered because the discipline of finishing by the end of the month ensures that something gets written. We may have envisioned less polish and more conversation. Some people read them—I’m not sure how many, but everyone is busy.

In 2010, Don Norman reflected on five years of authoring Interactions columns, writing “My goal has always been to incite thought, debate, and understanding.” He asked, “Have they made a difference? How can one tell? If I am to judge by the paucity of email I receive, the infrequent citations, even in blogs, and the need for me to repeat many of my arguments year after year, I would have to say that the columns have not had any impact. Is this due to the work’s inelegance, the passivity of this audience, or perhaps the nature of the venue itself? I reject the first reason out of self-interest and the second out of my experience that in person, you are all a most vocal group. That leaves the third reason.”

He shifted to the magazine Core77. His essays there drew more comments but were ephemeral. Typing “Don Norman” into its search engine returns 931 hits; in no discernible order one finds links to his articles, passages mentioning him, and other things. In contrast, Don’s 37 Interactions essays are easily found in the ACM Digital Library. Two are among the top 10 most-cited Interactions articles, and one 1999 article was downloaded 221 times in the past six weeks.

If immediate impact is the goal, and sometimes it is, Core77 won. On the other hand, Interactions could be more promising to someone who shares Arthur Koestler’s view: “A writer’s ambition should be to trade a hundred contemporary readers for ten readers in ten years’ time and for one reader in a hundred years.” Don’s Core77 essays are collected on his website, but few authors expect readers to forage on their sites.

I edited and wrote Interactions history columns for eight years. I hoped they would interest some readers, but a major target was scholars embarking on a history of HCI “in ten years’ time.” (In a hundred years, the Singularity may write our history without requiring bread crumbs.) The online blogs are indexed and will be easily retrieved as long as Interactions and its blog interface are maintained, but they aren’t archived by ACM. There is no download count to identify which attracted readers. They may rarely be found. Why post?

I noted in "Finding Protected Places" that blogging is a way to explore, clarify, and sometimes discover ideas, to fix holes in fact or logic that do not become evident until reading a draft. Friends can help improve a short essay. A published post can later be revisited and expanded upon. Whatever its quality, each of my posts reflects thoughts that are carefully organized in the hope of propelling someone who is heading down a path I took, a little faster than I managed.

Publishing is an incentive to finish and to ask friends for feedback, but an essay that fails to clear my (fairly low) bar of “potentially useful to someone” goes unpublished. Don identified three goals in publishing. Unlike Don, I never hope to incite debate. All else equal, I’d rather calm and move past debate. But Don’s other goals, incite thought and understanding, yes. When I’ve agonized and made a connection, however modest, sparing the next person some agony is a contribution. If a more complete treatment exists, undiscovered by me and my well-read friends, it doesn’t matter if others are now wrestling with the same issues. Once in a while I discover a better analysis that was written prior to the publication of mine; I can revisit the issue and point it out. I have a few demons, but NIH isn’t one of them.


Where does this reflection leave us? After two years, continue blogging or rest? One of my demons is a fear of running out of new things to say without noticing it. Links to past posts, such as the two above, can be justified as building on previous thoughts only up to a point. Even with a low bar, the supply of topics I’ve thought about enough to produce a worthwhile essay in a few days is limited. Will a worthwhile thought per month materialize? A few remain, and electrodes could be hooked up to the carcasses of a few unfinished past efforts to see if they can be brought to life.

My greatest concern is that views shaped in the 1970s and 1980s offer little to folks with radically different experiences, opportunities, and challenges. I frequently dream of wandering lost in familiar rooms and streets, thronged with busy people I don’t recognize. They’re friendly, but they can’t help me find my destination, and show no sign of needing help finding theirs.

Since 1970, a guiding image for me has been the final scene of Fellini’s Satyricon. I recall the movie as a grotesque view of ancient Rome that Fellini shaped into a personal vision. A boy we have seen mature into a man attends the funeral of the poet who inspired him. The poet appears to die twice. At the beginning, he is a poor man kept to amuse a corrupt aristocrat. Burned and beaten for speaking the truth, he collapses after bestowing the spirit of poetry on the boy. Near the end, he unexpectedly returns as a wealthy man surrounded by aging admirers. The source of his wealth is not explained—I assumed it was his literary reputation. He is fading and soon dies peacefully. His will is read: His fortune will go to those who eat his body. His followers cite cannibalism precedents in the literature. The protagonist and several other young men and women briefly watch the gruesome feast, then turn to a seagoing sailboat and distant shores.


Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

Lifetime effigies: 3D printing and you

Authors: Aaron Marcus
Posted: Tue, January 20, 2015 - 3:54:02

We all have limited shelf lives. During our own lifetimes, or afterwards, some of us might wish to “publish” many hundreds, thousands, millions, or even billions of lifelike effigies, miniature or full-sized replicas of ourselves. 3D printing now offers a relatively practical, medium-cost way to achieve that objective. What was once the prerogative of wealthy pharaohs, kings/queens, and dictators of Middle-Eastern or Southeast Asian countries, is now available to almost anyone.

Many of the greats of the Internet world (only around, say, since the explosive arrival of the Web in 1994, or maybe a decade or two earlier if you count the technical origins) and many of the greats of the design world (say, for the past 60 years of the post-World War II era) have already died. Every one of them will, eventually. Perhaps we can capture them now, or in the near future, to keep around as replicas.

This awareness/knowledge of impending death does not necessarily lead to depression, lethargy, or enervated wandering of the mind. This consciousness can focus attention, helping one decide what one can, must, and should do with (as for me) about 350 million seconds left. There is even one wristwatch that offers a death clock to remind one of the countdown. This awareness/knowledge can lead one to jettison many frivolous commitments and objectives (unless one decides to devote oneself to frivolity, of course).

For me, it has led to observations and speculations about what to do about death, or life-after-death, in the age of the Internet. Some comments follow.

Jeremy Bentham (February 15, 1748 – June 6, 1832) was a British philosopher and social reformer. As early as 1769, he began planning for the dissection of his body upon death and its preservation as an “auto-icon.” His remains were displayed three days after his death. His skeleton and head were preserved and stored in a wooden cabinet, the skeleton padded with hay and wearing his clothes. University College London acquired his remains in 1850 and displayed them to the public. A 360-degree rotatable, high-resolution “Virtual Auto-Icon” is available at the UCL Bentham Project’s website.

Dr. Gunther von Hagens gained fame in the end of the last century for replacing the water in cells of dead bodies and making plasticized versions of them, which were/are exhibited in the Body Worlds exhibits (see my article about this phenomenon: Marcus, Aaron (2003). “Birth/Death of Information as Art: ‘BodyWorlds.’” Information Design Journal, Vol. 11, No. 2/3, pp. 246, John Benjamins Publishing Company. See also here). 

Some years ago, I had speculated that people would take this plastination process, mass-produce it, and lower the cost, enabling any family to have beloved past family members, pets, and perhaps a few close personal friends preserved and on display in one’s home or business. It was only a matter of time.

After all, and as suggested above, life-sized, and larger, effigies of rulers have graced palaces, monuments, cathedrals, and numerous other governmental, religious, and commercial sites. Also, sports figures: I am reminded of a two- or three-story-sized three-dimensional effigy of the internationally famous Chinese basketball star Yao Ming, which graced a building in Beijing or Shanghai when I drove past it in 2002. I speculated that 3D printing might bring about something like the democratization of effigies.

Now, as before, that evolution in honoring past or present personages has already been anticipated by several companies. The first I encountered using 3D printing technology was TwinKind 3D, which has a franchised shop in Berlin; I discovered it in October 2014. They produce miniature 3D-printed replicas of people based on specialized scans that they make. The cost is “reasonable”: 100 Euros (about $125) for a 10-centimeter figure, and 25,000 Euros (about $32,000) for a two-meter (that is, life-sized) version.

Photos of 3D-printed bodies from the TwinKind 3D store in Berlin.
(Photos by Aaron Marcus, used with permission of TwinKind 3D.)

I found a similar company while reviewing exhibits recently at SIGGRAPH Asia 2014 in Shenzhen, China: Apostrophe’s, based in Hong Kong with locations also in Beijing and Taipei, offers 10-centimeter effigies for 1,900 Hong Kong dollars, or about $245. They produce effigies for games, advertising, 3D posters, and other commercial uses.

Examples of Apostrophe’s 3D images of people.
(Photo by Aaron Marcus, used with permission of Apostrophe’s.)

One immediately speculates about the future of such effigies, whether miniature or life-sized.

Licensed effigies

Imagine the global hubbub caused when Lady Gaga releases her next.... no, not Internet music video, but the latest licensed, authentic, non-reproducible 3D effigy of herself. Would some media stars do this? Of course they would! To better communicate with their “Little Monster” fans and to increase income from purchases of branded items, just as Hello Kitty has added her adorable face to pens and combs. 

Speaking of copies, authorized or not, I am reminded of my initial shock in 2002, in Xian, China, when I encountered an “authentic reproduction” facility located near the archeological site of the thousands of terracotta warriors that had been unearthed nearby. Here, the interested tourist could buy official copies: miniature, half-sized, and full-sized. I chose half-sized, and the official government knock-off factory kindly packaged them and shipped them to my home for an additional cost. Clearly, the government had realized the value of official copies.

Who hasn’t had a pin-up of some favorite music, cinema, or television star ornamenting a teenage (or younger) bedroom? Now Lady Gaga might be featured in your home or business, wherever you would like her, provided she made available official, licensed replicas. 

I have written about this topic of copies in an earlier publication and lecture, in which I cited Alexander Stille’s brilliant essay “The Culture of the Copy and the Disappearance of China’s Past” (in his book: Stille, Alexander (2002). The Future of the Past, Farrar, Straus and Giroux, pp. 40-70). Stille describes the entirely different attitude that traditional Chinese culture has had toward copying. In many works, the original materials of an artifact may have been completely replaced as they decayed over the centuries, but the work is still revered as the “authentic entity,” just as most of the cells of our body are replaced over time, yet we still consider the “person” to be the same entity. Chinese artists have traditionally been expected to copy their masters. There is even one case of a well-known, successful painter who sold his own work as well as copies of previous masters signed with their signatures. (For further reading on copies and their role in culture, see Schwartz, Hillel (1996). The Culture of the Copy. Cambridge: MIT Press, 568 pp.)

While I am on the subject of Lady Gaga and the quantified self: in our case study of the Health Machine (see, for example, Marcus, Aaron (2011). “The Health Machine.” Information Design Journal, 19 (1), pp. 69-89), I also speculated that Lady Gaga might license the data about what she had for breakfast this morning, encouraging her “Little Monster” fans to copy her eating behavior and thus be persuaded to change their own eating and exercise habits.

Now, back to effigies...

Gallery of former selves

One can make one’s own Madame Tussauds exhibit based on laser scans of one’s naked or clothed body. Remember how embarrassed you were as a child when your parents showed others nude photos of you as a baby or toddler? Imagine what is coming! Together with videos, photos, and recordings, an entire Museum of One’s Self could be possible, for the obsessively self-centered and suitably moneyed. Silicon Valley entrepreneurs, please note.

Gallery of your favorite dead relatives

For those who are obsessively nostalgic, imagine the delight in assembling a flotilla of dead relatives at family gatherings, dinners, and other celebrations. Just like the statues at Hearst Castle in California...only yours. An entire family legacy in miniature could take up just a cabinet...or a room, depending on your size preferences.

Recall that Jeff Koons, the noted pop artist, has sold 3D sculptures of famous people and objects for spectacular prices. On November 12, 2013, his Balloon Dog (Orange) sold at Christie's Post-War and Contemporary Art Evening Sale in New York City for $58.4 million. In 2001, one of his Michael Jackson and Bubbles sculptures, from a series of three life-size gold-leaf-plated porcelain statues of the sitting singer cuddling Bubbles, his pet chimpanzee, sold at Sotheby's New York for $5.6 million.

Now, 3D printing brings favorite icons to an affordable level, for those who would savor, save, or swoon. We may all rejoice...yes?

Cemeteries: A new playground for effigies

Let me mention two important experiences I had in cemeteries. 

Many years ago, probably in the 1980s, I was in Japan and visited a cemetery. I witnessed families having picnics near their relatives’ graves. I thought that odd and interesting, because in the USA, cemeteries are usually thought of as dull, dreary, somber, and scary places. In Japan, they seemed to be places of joy, where family members communed with the spirits of their dead ancestors.

In 2008, I visited a Jewish cemetery in Odessa. I was shocked to find life-sized photographic portraits of the deceased sandblasted or acid-etched into the large tombstones.

Odessa Jewish cemetery tombstone. (Photo by Aaron Marcus.)

This type of display was quite unlike what Jewish cemeteries usually do, or, for that matter, most Christian cemeteries in Europe and North America. Of course, we have had life-like portraits on caskets dating back to the Coptic Christians in Egypt thousands of years ago, and Egyptian and other ancient civilizations were not hesitant to make life-sized, even giant-sized, life-like or stylized replicas of their dead rulers, as mentioned above.

Now comes 3D printing! A new business opportunity for recorders and archivists. What vast sums of money will be spent to decorate one’s grave with a chosen life-sized 3D replica of oneself, in whatever pose one wishes (subject to local laws on disreputable behavior or attire)? Imagine the makeover of cemeteries worldwide into a kind of Disneyland of the Dead.

In sum, 3D printing enables anyone to gain the eternal display of one’s image, a form of economical publishing once available only to a few in past centuries and millennia. We have the democratization of death-effigies.

Note, also, that some organizations or institutions already in existence may swoop in quickly to capitalize on producing the first “authentic replicas” or “commemorative replicas,” just as the Franklin Mint in the United States produces authentic commemorative coins of its own minting.

The future of 3D effigies is bristling with possibilities. Of course, we shall have to consider side effects that may be undesirable, but...we’ll leave that for another pondering.

Posted in: on Tue, January 20, 2015 - 3:54:02

Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.

Wavy hair is in

Authors: Monica Granfield
Posted: Fri, January 09, 2015 - 5:17:41

Has anyone else wondered if UI design is becoming trendy and fashionable? Like hair, clothing, interiors, and architecture, are user interfaces succumbing to waves of industry trends, becoming period-based creations? 

Lately, as I move from one application to another, I have noticed how much applications are all beginning to look more and more alike. It feels very much like the look-alike teenage girls at the mall, with their long, bone-straight hair, black North Face jackets, skinny jeans, and Ugg boots. They all look in style, but they all look the same. This is the point for teenagers, but is it the point of a good user experience? 

There is the style movement of flat UI, and then there is the actual presentation. With Windows Metro stepping first into the flat but colorful arena came a welcome new wave of clean, flat UI. It was great to have grids, better use of fonts, and the notion of whitespace all in the game! It’s just that from Google to Apple and beyond, the interfaces and experiences all seem so similar, so much so that at a glance I sometimes can’t tell one from the other. 

A few months ago, I was talking to a colleague about an assortment of applications that had been printed out to get a high-level look at the current state of trending applications in the market. What I immediately noticed was how many apps use the same visual stylings: the same colors, the same rounded images, the same light outlines on the icons. These presentations are nice, except when you can't tell one from the other. Is this because of pressure to get software out the door, since perpetuating a known entity is faster and more predictable? The problem is that an experience so similar to all the others, both visually and experientially, is not only mundane; it also weakens the brand. How do you differentiate your product and promote your brand? 

Are we evolving into periods of UI design? There was the "Battleship Gray" era of the late 80s and early 90s, then the colorful, almost bulbous, three-dimensional interfaces of the new millennium. After years of gray, how fun and refreshing it was to have all that color and those tangible components. And how can we forget the excitement of the period of interfaces without color, the elegance of the transparent era of glass. Then Windows Metro brought us the clean, crisp, and modern flat UI. This has left me wondering: could we be in the "mid-century" era of UI design? If so, it would be great to have some mid-century flair, beyond round user images, to support more distinct visual presentations and overall user experiences. We now have motion graphics that aid in adding distinct dimension to the end-user experience. As we know, similar interaction can be useful and predictable, but if we, as designers, provide the right affordances, interactions should be usable, discoverable, and meet the user’s needs for the intended experience. Will flat UI design, like mid-century design, become a classic look and feel that lasts through the ages? Will UI cycle through styles, revisiting various trends again? Will we embrace a style period but still have the confidence to apply unique and identifiable visuals to our own experience, within the mainstream market? 

If UI is fashionable, what is the next big trend? Will battleship gray cycle back around or will wavy interfaces with subtle curves come into fashion next? After all, flat hair is out—wavy hair and Bean boots are making a comeback. So maybe it’s time to spice things up in the UX world and not only move beyond the flat hair, fleece, and fuzzy boots, but find your own color boots or fleece this time around. 


Monica Granfield

Monica Granfield is a user experience designer at Go Design LLC.

Destroying the box: Experience architecture inspiration from Frank Lloyd Wright

Authors: Joe Sokohl
Posted: Mon, January 05, 2015 - 12:58:06

"When we said we wanted a house at Bear Creek," client Lillian Kaufmann said to Frank Lloyd Wright, "we didn't imagine you would build it ON the creek!" 

To which Wright replied, "In time you'd grow tired of the sight of creek...but you'll never grow tired of the sound."

And he was right. Fallingwater stands as perhaps the most recognized house in modern architecture. Yet it's not just an icon; it was a home. The Kaufmanns loved it.

Similarly, owners of other Wright-designed buildings may have struggled with the architect, the implementations may have had flaws, and the builders and other contractors may have gone behind Wright's back to fix perceived design flaws...but they all loved the buildings. The architect's vision remains an inspiration to this day.

For the past five years or so, I’ve been writing and presenting on how Wright’s work and life provide inspiration for my work in architecting experiences. Here I’m taking a look at three Wright landmarks: Fallingwater in Ohiopyle, Pennsylvania; the Pope-Leighey house in Alexandria, Virginia; and Taliesin West in Scottsdale, Arizona. I believe that, through Wright's examples, we can learn elements that take our approaches to experience architecture to newly useful and inspiring levels for our clients and the users of our work.

Why these houses? Plain and simple: They showcase aspects of Wright’s interaction with clients, technical problems, and different contexts of use. Plus, I’ve visited them, studied them, and photographed them. In addition, they cover the gamut of Wright’s philosophy of architecture.

Fallingwater and “the box”

Undoubtedly Fallingwater is an iconic house. Yet the path Wright took to create it lies not just in the choice to cantilever the house over Bear Run, but in his devotion to the experiences that the Kaufmanns and their guests would have. 

Wright prepared for the designing of the house by immersing himself in the context. He spent hours getting to know E.J. and Lillian Kaufmann and Edgar Kaufmann Jr. (“Junior” spent time at Taliesin in Wright’s famous fellowship). 

When Wright received the commission, he spent time walking the grounds where the Kaufmanns wanted to site a weekend house near Bear Run—and specifically near a large, sloping rock next to a 15-foot-high waterfall.

After he returned to Wisconsin, Wright dispatched a team of draftsmen to make elevation drawings of every tree, boulder, bush, and rivulet in the site’s area...and then did nothing for three months.

Wright drew the entire concept in three hours, while Kaufmann, his client, drove up from Chicago to Spring Green.

The key UX takeaway is that we need time to allow our concepts to gestate. Great design comes from spending time to do it. In addition, Wright’s focus on what clients really want, rather than what they say they want, has parallels to our work. Our research needs to be detailed enough that we can postulate great solutions, not just pedestrian ones.

Experience architecture considerations

In working on the design for Fallingwater, Wright took into account how the family would use the property. Yet he also looked beyond what they had said outright and instead discovered what they really wanted. His casement windows opened without a corner support, and he liberally used glass corners in both Fallingwater and Taliesin West, truly “breaking the box” of the traditional home. Wright’s famous retort to Lillian Kaufmann about Fallingwater’s placement highlights this. Wright could look beyond the Kaufmanns’ desire to look at the waterfall and cut to the core of their desire: to be in nature, not just at nature.

Other details at Fallingwater show his ability to design elements of the environment that would add to the Kaufmanns’ experiences there. 

For example, the built-in kettle that would swing over the fire has an indentation that it fits in when not in use. This approach to available yet integrated design elements abounds throughout Wright’s designs. 

E.J. complained that the desk in his bedroom was too small; he wanted something bigger. Wright disagreed, saying that it was the right design. The client and architect went back and forth, until Kaufmann said, “I need a bigger space on which to write my checks to you.” So Wright came up with the innovative approach—instead of allowing the desk to block the window, or redesigning the window to open outward, he created the scallop in the desk.

In a similar way, when confronted with objections from clients or team members such as technologists, we need to seek innovative solutions to design problems. Rather than taking the easy path, Wright would have us find the right path for the context, the use, and the nature of the experience.

Context is king 

As the Lao Tse quotation in Taliesin West’s music auditorium says, “The reality of the building does not consist in roof and walls but in the space within to be lived in.” You see this quotation as you ascend the staircase, and its prominence and inescapability highlight its place in Wright’s vision.

This commitment to an understanding of the context where experience occurs provides a keen, focused approach we can bring to our designs. We can seek inspiration from Wright by finding a third option when we’re on the horns of a dilemma. The approach to context answers well the question, how does the site selection integrate with user needs and desires?

When it doesn’t fit

Sometimes, a design conflicts with other elements of the experience. At Taliesin West, some of Wright’s favorite students gave him a vase as a present. He placed it on a shelf in the living room, but the base was too big for the shelf.

He could have widened the shelf...but he said, “No, the shelf is correct.” 

He could have asked them to replace it with a smaller vase...but he said, “No, the vase is correct.” Instead, he had a circle cut out of the glass so the vase would fit.

During the Pope-Leighey house’s construction, Loren Pope came up with design-altering solutions that Wright allowed to stand. After Wright left the site, Pope convinced the builder to place the horizontally designed windows vertically in the kids’ bedroom. Though this step conflicted with Wright’s design, he allowed it to remain.

Sometimes, we need to understand when a design idea from the client does not destroy the entire design. As Wright said, “The more simple the conditions become, the more careful you must be in the working out of your combinations in order that comfort and utility may go hand in hand with beauty as they inevitably should.”


Joe Sokohl

For 20 years Joe Sokohl has concentrated on crafting excellent user experiences using content strategy, information architecture, interaction design, and user research. He helps companies effectively integrate user experience into product development. Currently he is the principal of Regular Joe Consulting, LLC. He’s been a soldier, cook, radio DJ, blues road manager, and reporter once upon a time. He tweets at @mojoguzzi.

Life after death in the Age of the Internet

Authors: Aaron Marcus
Posted: Mon, December 29, 2014 - 12:45:47

We all have limited shelf-lives. We all arrive on earth stamped with an expiration date or a “best used before” date in our genes. It’s just that most of the Internet, most of the Web, most mobile products/services, most wearables, and most of the creation/discussion of the Internet of Things, created by and for younger people, just don’t seem to notice the clock ticking...

I do.

Some of my childhood friends have recently died. My parents (may they rest in peace) are dead. My younger brother (may he rest in peace) died 15 years ago. I don’t say “passed” or “passed away.” I say “died.” The real deal. No euphemisms. Just reality.

Many of the greats of the Internet world (around only since, say, the explosive arrival of the Web in 1994, or maybe a decade or two earlier if you count the technical origins) and many of the greats of the design world (say, of the past 60 years of the post-World War II era) have already died. Every one of them will, eventually.

This awareness/knowledge does not necessarily lead to depression, lethargy, or enervated mind-wandering. It can sharpen and focus attention, helping one decide what one can, must, and should do with the (in my case) approximately 350 million seconds left. There is even one wristwatch that offers a death clock to remind one of the countdown. This awareness/knowledge can lead one to jettison many frivolous commitments and objectives (unless one decides to devote oneself to frivolity, of course).

For me, it has led to observation and internal speculation on what to do about death, or life-after-death, in the age of the Internet. Some comments/observations follow.

Life-after-death management systems (LADMs)

I have a website, Twitter account, Facebook account, email account, LinkedIn account, and many other accounts, so numerous that I cannot even keep up, remember, or contribute to them. What will happen to them? What should I do with them? Who will care for them? Should they just all be properly triggered now to expire when I do? How?

Clearly the Internet startup community must create a death management system in which one can set up the proper termination or continuation of all these accounts, including a means to fund them for, say, 100 years maximum, or at least 10 to 20 years, by which time society will have so changed that one cannot predict whether the continuation of any of these will be usable, useful, or appealing at all.

A few other life-after-death on the Internet possibilities offer themselves as likely products/services of the future.

Authentic, guaranteed, “eternal” websites or social-media platforms 

OK, you’ve worked a long time to build up a website and a social-media presence, perhaps managed with Hootsuite. What happens when you die? Not all of us have Marilyn Monroe or Elvis Presley organizations to manage our legacy. Clearly, we shall need a place to gather our thoughts (before death), as well as videos or audio recordings to collect them, and to have them be available in perpetuity, provided we have paid the right maintenance fee, just as one pays a cemetery a fee for maintaining a burial plot. Only the possibilities here are more complex and exciting. Imagine being able to deliver messages, emails, and thoughts on current and future events, composed from documents and data you prepared while still alive, to your friends, your family members, and your enemies, or to the general public. For 100 years or more you will live on, including “your” reactions to future events, based on clever algorithms that intuit what you might have done or said. This gives What Would Jesus Do? Or Say? a new significance.

Digital attics: Authentic, organized (or less expensive, disorganized), eternal

In past millennia and centuries, only very wealthy or famous people could collect, preserve, and guarantee the availability of their belongings, articles, diaries, etc. Now some centers preserve the physical legacy of publications and objects of a select few artists, writers, philosophers, government and political notables, etc. 

In the future, we shall all have the challenge and the possibility of preserving our own collections in digital attics, whether organized or not, presumably authentic. In them, one will be able to purchase “guaranteed” storage of all digital, scanned, or generated media and artifacts: email, photos, videos, drawings, recordings, business cards, scrapbooks, diaries, journals, and other ephemera and memorabilia. Our descendants may (or may not) examine these items. Others (scholars, detectives, or interested professionals) may search among the artifacts (for a fee, no doubt) to look for sociological, anthropological, design-historical, political, or other patterns of information that may be of value. 

If you think this ridiculous, recall the famous Geniza collection in Cairo, Egypt: for a thousand years, Jews deposited there religious books that were no longer being used, along with laundry lists and other seemingly mundane documents, which gave scholars unique insight into the societies of earlier centuries when the collection was rediscovered. Naturally, wealthy and/or powerful people will have the most elaborate and extensive of these archives or attics, but digital media, servers, etc., offer the chance for everyone to preserve almost everything...if they choose to do so, and someone can pay for it forward in time.

These collections of your digital things would carry on after you die and would offer guaranteed storage of all digitized, scanned artifacts. Of course, wealthy people will have the most thorough collections, just as we were able to pore over King Tutankhamen’s belongings some 3,000 years later. Today, such collections are available to a few at the Harry Ransom Center at the University of Texas at Austin, to those who record their stories in NPR’s StoryCorps collection, archived at the Library of Congress, or to photographers like Walker Evans, who retained a collection of images of anonymous people from the southern USA in the 1930s. Tomorrow, anyone who wishes can (for the right price) preserve a digital legacy. 

Growth of professional archivists

With all this preserving going on, it seems likely that a fleet of professional archivists will come into being to help “vacuum” data, scan artifacts, and undertake interviews with selected family members and friends. To some extent these people already exist, but in the future, with legions of people preserving all of their past, this profession seems likely to grow in numbers worldwide. Pre-death data gatherers would guarantee confidentiality and quality results, so you can say what you want, to be released only after your own death and/or the deaths of all whom you mention specifically. There might even be a Pre-Death Data Gatherers Society, with annual conventions at which they discuss their techniques and methods.

Pre-death funerals

Some years ago, I invented what I thought was a unique phenomenon: the pre-death funeral. One could arrange for this in advance, perhaps at a time when one knew there was little time left. Then, one could send out invitations, have speakers, ceremonies, rituals, foods, and publications, as appropriate. Imagine being able to enjoy the accolades (presumably people would be thoughtful, kind, or at least funny, as in celebrity roasts on TV/the Internet), rather than being just a silent, somber, stiff participant at normal funerals. 

Here is what I wrote, but never published, in 2011:

Today, 23 March 2011, television and radio announcers interrupted the regular news (of the Japanese post-earthquake and tsunami nuclear reactor meltdown, plus the announcement of double-dose radiation in Tokyo that endangers babies, and the ongoing Libyan battles to overthrow Qaddafi, plus other protests throughout Middle Eastern countries) to announce that Elizabeth Taylor died last night at the age of 79. 

I am 67. Her death, and the television video clips of her life, especially of its last decades, caused me to reflect. She had accepted a special Academy Awards honor for her fight against AIDS. The audience applauded long and loudly. She seemed thrilled at the acclaim. What a wonderful experience she must have had, receiving respect, honor, love, and acknowledgment for her achievements. How many of us have such achievements? How many of us have been celebrated in such an event? A few members of the human race. 

Some organizations sponsor lifetime achievement awards, like the National Design Museum, the American Institute of Graphic Arts, and the Business Forms Management Association. However, these are usually fairly brief affairs, sometimes with multiple persons being so honored.

Some people have the benefit of retirement ceremonies at their place of employment, but as the movie About Schmidt (featuring Jack Nicholson) showed, these may be somewhat routine, tentative in their seriousness, and mixed in their success, often tinged with the anxiety of the change taking place as the employee leaves the company, with perhaps a watch or a small trophy after decades of work.

Of course, there are also the “roasts” of the Comedy Central satellite/cable network. However, these are something else: some praise and acknowledgment of achievements, but heavily mired in insults and disrespect. Hardly a replacement for the Academy Awards acknowledgment ceremony.

How many of us would like to be respected, honored, loved, and acknowledged for our achievements, for our contributions to bettering the world and the lives around us, even if modest in scope? Probably every one of us. However, society has not developed a system for such lifetime honors and ceremonies, except one: the eulogy at someone’s funeral. Alas, the person eulogized is not “present” (except as a corpse in a casket) to hear the words, to accept the praise, to enjoy the company of family and friends.

I have a modest solution to this problem: the Pre-Passing Ceremony or the Pre-Death Funeral. 

This event would take place late in someone’s life, perhaps at the age of 65, or whatever age was deemed appropriate, unless they were already officially notified of a lethal disease that, unfortunately, predicted an early death. Of course, if they somehow overcame this medical/legal situation and went on to live a long life, they might qualify for two such celebrations. 

Who might “authorize” such celebrations? I am not sure, but a national or international governmental organization or NGO, which we might call the Pre-Passing Ceremony Commission, like the organizations that assign Internet domains, would help to make things orderly, official, authorized, and more significant. 

That organization might also handle post-death maintenance of all Internet-related properties such as Facebook pages, blogs, etc. This organization might also make arrangements for pre-death interviews, post-death email messages and video/phone calls to family and friends, to help keep the dead person’s life and memory ever-present among selected family and friends. But that is another story. 

Back to the Pre-Passing Ceremony…

Mortuaries, churches, synagogues, mosques, social groups, and business organizations might all be interested in sponsoring such gatherings. Why? Because of the possibilities of selling tickets to attendees, and being repaid for the expenses of organizing, publicizing, recording, catering, and managing these events, including Lifetime Books and Lifetime Websites. The event might be simulcast to other locations so that people not able to attend could take part, just as there are ‘round-the-world video connections for the Oscars and New Year’s Eve celebrations. This might expand the audience, participation numbers, publications, PR…and budgets.

The fund-raising, as well as the spirits-raising possibilities are numerous. One might find a use for nearly empty movie theatres that desperately offer their locations for business events before they succumb and close their doors. 

All of these events and publications do not preclude a post-death funeral or memorial service, which might take advantage of previously existing documents, contacts, and events that can be repurposed as appropriate. They might even help generate the structures for ongoing anniversaries, or “yahrzeit” observances as they are called in Judaism.

Perhaps this ceremony would start in California, ground-zero for new ceremonies, new cults, new chips, new technology, and new social media. Are you ready to start celebrating a lifetime…before it is too late?

Alas, I discovered that someone earlier in the 20th century had already thought of this idea and even staged his own funeral, and that the 2010 movie Get Low, featuring the comic actor Bill Murray, had been inspired by the idea. I am not speaking of merely pre-funeral arrangements, nor merely “celebrating the life of X,” but of something done with the awareness of pending mortality.

Well, in any case, there seems to be a lot of life left in the idea of pre-death rituals and after-death Internet/social-media virtual you’s. You’ll get used to it...

Posted in: on Mon, December 29, 2014 - 12:45:47

Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.

The rise of incompetence

Authors: Jonathan Grudin
Posted: Thu, December 11, 2014 - 11:08:26

“To become more than a sergeant? I don't consider it. I am a good sergeant; I might easily make a bad captain, and certainly an even worse general. One knows from experience.”
   — from Minna von Barnhelm, by Gotthold Ephraim Lessing (1729–1781)

“There is nothing more common than to hear of men losing their energy on being raised to a higher position, to which they do not feel themselves equal.”
    — Carl von Clausewitz (1780–1831)

“All public employees should be demoted to their immediately lower level, as they have been promoted until turning incompetent.”
   — José Ortega y Gasset (1883–1955)

“In a hierarchy individuals tend to rise to their levels of incompetence.”
   — Laurence Peter (1919–1990)

We should be enjoying a golden age of competence. We have easy access to so much information. YouTube videos show us how to do almost anything. Typing a question into a search engine very often retrieves helpful answers. We see impressive achievements: Automobiles run more efficiently and last longer, air travel grows steadily safer, and worldwide distribution of a wide variety of products is efficient. Nevertheless, there is a sense that overall, the world isn’t running that smoothly. Governments seem inept. One industrial sector after another exhibits bad service, accidents, inefficiencies, and disastrous decisions. The financiers whose ruinous actions led to worldwide recession and unemployment didn’t even lose their jobs. In HCI, many nod when Don Norman says “UI is getting worse—all over.” How could incompetence be on the rise when knowledge and tools proliferate?

The last of the four opening quotations, known as the Peter Principle, was introduced in the 1969 best-selling book of the same name. The other writers noted that people are promoted to their levels of incompetence; Peter went further, explaining why organizations keep incompetent managers and how they avoid serious harm. I will summarize his points later, but for now join me in a thought exercise:

Assume the Peter Principle was true in 1969. How are technology and societal changes affecting it?

There are several reasons to believe that managerial incompetence is escalating, despite the greater capability of those who are competent—who, in Peter’s words, “have not yet reached their levels of incompetence.”

Strengthening incompetence

1.  Competent people are promoted more rapidly today. Thus, even if well-trained, they can reach their levels of incompetence more quickly. In the rigid hierarchical organizations of the past, promotions were usually internal and often within a group. Few employees had to wait the 62 years and counting that Prince Charles has waited for his promotion, but wait they did. Today, with the visibility that technologies enable, competent employees can easily find suitable openings at the next higher level in the same or a different organization. Organizational loyalty is passé. A software developer joins a competitor, an assistant professor jumps to a university that offers immediate tenure, a full professor is lured away by a center directorship or deanship. The quickest way to advance in an organization can be to take a higher position elsewhere and return later at the higher level. LinkedIn reduces the friction in upward trajectories.

2. Successful organizations grow more rapidly than they once did, creating a managerial vacuum that sucks people upward. Enterprises once started locally and grew slowly. Mass media and the Internet enable explosive growth, with technology companies as prime examples. As a project ramps up and adds team members, experienced workers are incented and pressured to move up a management ladder that can quickly grow to 8 or 10 rungs. A person can plateau at his or her level of incompetence while very young.

3. The end of mandatory retirement extends the time that employees can work at their levels of incompetence. In 1969, Peter’s great teacher who became an incompetent principal probably had to retire at 65. Today he could have a decade of poor performance ahead.

4. The decline of class systems and other forms of discrimination is terrific, but egalitarian systems are less efficient if everyone progresses to their level of incompetence. Competent employees trapped beneath a class boundary or a glass ceiling are ineligible for promotion and thus fail to achieve incompetence. In the 1960s, many women found job opportunities only in teaching, nursing, and secretarial work. Accordingly, there were many extraordinarily capable teachers, nurses, and secretaries. I benefited from this indefensible discrimination in school, and my father benefited from it in his job. (If this argument seems alarming, read to the end!)

5. Increased job complexity is a barrier to achieving and maintaining competence. As the tools, information, and communication skills required for a job increase, someone promoted into the position is less likely to handle it well. The pace of change introduces another problem: A competent worker could once count on remaining competent, but now many skills become obsolete. “Life-long learning” isn’t a cheerful concept to someone who was happy to finish school 30 years ago.

Accept the premise of the Peter Principle and these are grounds for concern. But you may be thinking, “The Peter Principle is oversimplified, competence isn’t binary, lots of us including me haven’t reached our levels of incompetence and don’t plan to.” Peter would disagree and insist that you are on a path to your level of incompetence, if you haven’t reached that destination already. I will summarize Peter’s case, but first let’s consider another possibility: Do other changes wrought by technology and society undermine the Peter Principle? The answer is yes.

Weakening the Peter Principle

1. Technology has so weakened hierarchy in many places that it’s difficult to realize how strong hierarchy once was. Peter christened his work “hierarchiology” because flat organizations are not built on promotions. The ascent at the heart of his principle is almost inevitable in rigid hierarchies where most knowledge of a group’s functioning is restricted to the group. I worked in places where initiating a work-related discussion outside the immediate team without prior managerial approval was unthinkable. Memos were sent up the management chain and down to a distant recipient; the response traveled the same way. The efficiency and especially the ambiguous formality of email broke this. A telephone call or knock on the door requires an immediate response; an email message can be ignored if the recipient considers it inappropriate to circumvent hierarchy. Studies in the 1980s showed that although most email was within-group, a significant amount bypassed hierarchy. Hierarchy is not gone, but it continues to erode within organizations and more broadly: Dress codes disappear, children address adults by first name, merged families have complex structures, executives respond directly to employee email, and everyone tweets.

2. Hierarchy benefits from an aura of mystery around managers and leaders. Increased transparency weakens this. In hierarchical societies, rulers tied themselves to gods. Celebrities and the families of U.S. Presidents once took on a quasi-royalty status. In The Soul of a New Machine, the enigmatic manager West was held in awe by his team. Not so common anymore. Leaders and managers are under a media microscope, their flaws and foibles exposed [1]. When managerial incompetence is visible, tolerating it to preserve stability and confidence in the hierarchy is more challenging. In addition, internal digital communication hampers an important managerial function: reframing information that comes down from upper management so that your unit understands and accepts it. The ease of digital forwarding makes it easier to pass messages on verbatim, and risky to do otherwise, because a manager’s “spin” can be exposed by comparison with other versions.

3. When organizations are rapidly acquired, merged, broken up, or shut down, as happens often these days, employees have less time to reach their levels of incompetence. Unless brought in at too high a level, they may perform competently through much of their employment.

And the winner is…

…hard to judge definitively. We lack competence metrics. People say that good help is harder to find and feel that incompetence is winning, perhaps because we expect more, promote too rapidly, or keep people around too long. But could it be that only perceived incompetence is on the rise? Greater visibility and media scrutiny that reveal flawed decisions could pierce a chimera of excellence that we colluded in maintaining because we wanted to believe that capable hands were at the helm.

Despite these caveats, I believe that managerial incompetence is accelerating, aided by technology and benign social changes that level some parts of the playing field. Two of the three counterforces rely on weakened hierarchy, but hierarchical organization remains omnipresent and strong enough to trigger hierarchy-preserving maneuvers at the expense of competence, as summarized below.

Part II: Hierarchy considered unnatural

Peter’s “new science of hierarchiology” posits dynamics of levels and promotions. Archaeology and history [2] reveal that when hunter-gatherers became food-sufficient, extraordinarily hierarchical societies evolved with remarkable speed: Egypt and Mexico, China and Peru, Rome and Japan, England and France. Patterns of dysfunction often arose, but hierarchy persevered. Our genes were selected for small-group interaction; large groups gravitate to hierarchy for social control and efficient functioning. Hence the universality of hierarchy in armies, religions, governments, and large organizations.

As emphasized by Masanao Toda and others, we evolved to thrive in relatively flat, close-knit social organizations where activity unfolded in front of us. Hierarchical structures are accommodations to organizing over greater spatial and temporal spans. They can be efficient, but because they aren’t natural we should not be surprised by dysfunction. Hierarchy that emerges from our disposition to jockey for status in a small group can play out in less than optimal ways in large dispersed communities.

Those at the top work to preserve the hierarchy, with the cooperation of others interested in stability and future promotion. When employees are promoted but prove not up to the task, removing them has drawbacks. It calls into question the judgment of higher management in approving the promotion. Who knows if another choice will be better? The person’s previous job is now filled. If it is not disastrous, best to leave them in place and hope they grow into the job. In this way, a poor school superintendent who was once a good teacher or athletic coach hangs on; an incompetent officer is not demoted. When high-level incompetence could threaten an organization, other strategies are employed: An inept executive focuses on procedural aspects of the job and is given subordinates “who have not yet risen above their levels of competence” to do the actual work.  An incompetent manager is “kicked upstairs” to a position with an impressive title and few operational duties. Peter labels this practice percussive sublimation and describes organizations that pile up vice presidents “on special assignments.” In a lateral arabesque, a manager is moved sideways to a role in which little damage can be done. Another maneuver is to transfer everyone out from under a high-level non-performer, yielding a free-floating apex. Reading Peter’s amusing examples of these and other such practices can bring to mind a manager one has known. Perhaps more than one [3].

The Peter Principle

Researching a book on the practices of good teachers, Laurence Peter encountered examples of poor teaching and administration. His humorous compilation of “case studies” drawn from education and other fields, fictionalized and padded with newspaper stories, eventually found a publisher and became a best-seller. A 1985 book subtitled “The Peter Principle Revisited” promised “actual cases and scientific evidence” behind “the new science of hierarchiology.” It delivered no such thing. He may have observed and interviewed hundreds as claimed, but he provides a limited set of examples: capable followers promoted to be incompetent leaders, capable teachers who made poor administrators, experts on the shop floor who became bad supervisors, great fundraiser-campaigners who prove to be poor legislators, and so on. Sources of eventual incompetence are intellectual, constitutional, social, and other mismatches of skill set to position requirements.

The phenomenon is also evident in less formal hierarchies. A good paper presenter is promoted to panel invitee. A successful panelist receives a keynote speech invitation. A young researcher is invited to review papers. Promotions to associate editor or associate program chair, editorship or program chair, and more prestigious venues or higher professional service can follow until incompetence is achieved. Percussive sublimation and lateral arabesques are found in professional service as well as in organizations. The visibility of competent performance can undermine it by spurring invitations: A strong, proactive conference committee member may deliver weak, reactive service when subsequently on four committees simultaneously.

At times Peter claims that there are no exceptions to his principle. Pursuit of universality led him to dissect apparent exceptions, yielding insights into how organizations handle high-level poor performers. Elsewhere Peter acknowledges that many people work ably prior to their “final promotion,” suggests a few ways to avoid promotion to your level of incompetence, and presents the nice class-boundary analysis that identifies pools of competence.

The prevalence of class systems explains why the 18th and 19th century quotations above described the possibility of promotion to incompetence whereas the 20th century quotations stated its inevitability in an age with less discrimination. Less discrimination against white males, anyway. Allow me an anecdote: As CSCW 2002 ended, I went to a New Orleans post office to mail home the bulky proceedings and other items. As I started to box them, one of two black women behind the counter laughed and told me to give them to her, whereupon she discussed the science of packaging while rapidly doing the job and entertaining her colleague with side comments. I left with no doubt that the two of them could have managed the entire New Orleans postal service. Courtesy of workplace discrimination perhaps, I had the most competent package wrapper in the country.

Spending a career with a single employer—actors in the studio system, athletes and coaches with one team, faculty staying at one university, reciprocal loyalty of employees and company—was once common. Promotions were internal, waiting for promotions was the rule, and years of competent performance were common, abetted by glass ceilings and early retirements. Those days are gone.

The versatility of programming made it a nomadic profession from the outset. When I worked as a software developer in the mid-1980s, we questioned the talent of anyone who remained in the same job for more than three years. A good developer should be ready for new challenges before an opening appeared in their group, so we were always ready to find a job elsewhere. When after two years I left my first programming job—work I loved and was good at—to travel and take classes, my manager tried to retain me by offering to promote me to my level of incompetence—that is, he offered to hire someone for me to manage.

With job opportunities in all professions visible on the Internet and intranets, a saving grace of the past disappears: When the number of competent employees exceeded the number of higher positions, not all could be promoted. Today, a capable worker aspiring to a higher position can likely find an employer somewhere looking to fill such a position. 

Concluding reflections

This essay on managerial efficacy began with the observation that rapidly accessed online information is a powerful tool for skill-building: In many fields, individual competence and productivity have never been higher. But because someone who does something well is a logical candidate for promotion to manage others doing it, this very success undermines managerial competence: Managing is a complex social skill that is learned less from studying online than through apprenticeship models.

As class barriers and glass ceilings are removed, subtle biases continue to impede promotion, so by the logic of the Peter Principle, past victims of overt discrimination are especially likely to be capable as they more slowly approach their final promotion.

What should we do? Think frequently about what we really want in life, and keep an eye on those hierarchies in which we spend our days, never forgetting that they are modern creations of human beings who grew up on savannahs and in the forests.


1. “It is the responsibility of the media to look at the President with a microscope, but they go too far when they use a proctoscope.”— Richard Nixon

2. Charles Mann’s 1491 provides incisive examples and a thoughtful analysis.

3. The reluctance of monarchs to execute other royalty reflected the importance of preserving public respect for hierarchy. The crime of lèse-majesté, insulting the dignity of royalty, was severely punished and remains on the books in many countries. Today we are reluctant to prosecute or even force out financial executives who made billions driving our economy into ruin. Our genes smile on hierarchy, our brains acquiesce.

Thanks to Steve Sawyer, Don Norman, Craig Will, Clayton Lewis, Audrey Desjardins, and Gayna Williams for comments.


Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

Mobile interaction design in the 2013-2014 academic year

Authors: Aaron Marcus
Posted: Wed, December 03, 2014 - 5:32:12

In May 2014, I was blessed with three invitations to be a guest critic at three end-of-the-semester design courses in three different departments of two educational institutions. Here is a quick summary of my experience, much delayed because of professional course/workshop/lecture presentations and book writing.

New Product Development Course, Mechanical Engineering and Haas School of Business, University of California at Berkeley, Berkeley, California

This course is led by Prof. Alice M. Agogino, Department of Mechanical Engineering. (The course description is here.)

From what I have seen over the past few years, the course provides an experience in preliminary project planning of complex and realistic mechanical engineering systems, but includes the possibility of projects that are, in effect, mobile user-experience design. Design concepts and techniques are introduced, and students do innovative design/feasibility studies and present them at the end of the course. The primary reading material is the textbook Product Design and Development (Second Edition) by Karl Ulrich and Steve Eppinger, a rather basic step-by-step discussion of the processes. The course objectives include innovation and achieving customer-driven products. The topics covered include personas and empathic design; translating the "voice of the customer"; concept generation, selection, development, and testing; decision analysis; design for the environment; prototyping; an ethics case study; universal design and entrepreneurship; and intellectual property. Sounds pretty comprehensive, doesn’t it? Well, I am sure the students do get a good introduction to the design/development process.

The reviews were held in the Innovation Lab at the Haas School of Business. Some projects that seemed, to me, especially interesting to the HCI/UX/CHI/IXD communities were these:

Headphones: A product that improves existing headphones by increasing longevity for daily active users while enhancing durability and ergonomics 

ResidentSynch, The Smart Home: A product to integrate, monitor and optimize the use of the various products in urban households

Samsung-Intel Next Digital 1 (IoT): A platform that connects all of our devices with objects around us 

Samsung-Intel Next Digital 2 (Sensorial Experience): The integration of technology and sensorial experiences, enabling consumers the ability to interact with technology in a way that stimulates multiple human senses simultaneously 

Smart Alarm Clock: Redesign the experience of waking up and starting your day.

One of the more intriguing projects, which I reviewed in detail, was the Samsung Loop project, with a Samsung staff member as a mentor/guide. When people actually meet (not virtually) and come into close contact, for example by shaking hands, controlled information can pass from one ring to another. The device was clever, minimal in form, and similar to other ongoing research for “finger-top” devices. Of course, there is the cultural challenge that not all people shake hands upon meeting.

Example of the Loop finger-device for exchanging information between people who actually meet. (Photo by Aaron Marcus)

Although some projects were quite inventive, most of the projects had mediocre presentations of their end results, perhaps not surprising from a group of engineering and business students. One exception was the work of an outstanding student, Elizabeth Lin, a computer-science major, whose graphic design and explanatory skills were at a professional level.

University of California at Berkeley, Department of Computer Science, Course in Mobile Interaction Design, Berkeley, California

Prof. Bjørn Hartmann invited me to be a critic in his course, in which he offers, together with Prof. Maneesh Agrawala, a semester’s activity in learning about mobile user-experience design, mobile user-interface design, and mobile interaction design. The projects were impressive, as presented in two-minute visual, verbal, and oral summaries. Two other critics and I were asked to judge them. The other two were much younger user-experience/interaction design professionals: Henrietta Cramer and Moxie Wanderlust, both from the San Francisco Bay area (by the way, Moxie Wanderlust made up his name in graduate school, an interesting user-experience design project in itself!).

Amazing to me was the fact that, after judging 24 presentations in under an hour, we were all almost identical in our judgments as to the best overall project, the best visual design, and the most original. I was a little worried that we would differ greatly in our reviews. We didn’t!

Here in this class, students were able to code or script working prototypes within eight weeks. Alas, insufficient time had gone into user-experience research, usability studies, and visual design. Not a surprise from a center for computer science. The course and the projects are well documented online.

California College of Arts, Department of Graphic Design, San Francisco, California

The lead professors of the senior thesis presentations, Prof. Leslie Becker and Prof. Jennifer Morla, invited me to join the group of guest critics for 20 senior thesis projects of varying media and subject matter. One of the most interesting was that of Maya Wiester, whose project focused on 3D printing of food and a mobile app that would manage food ordering. New characteristics for one’s favorite cuisine might include not only ethnic/national genre (Italian, Chinese, Thai, Mexican, etc.) and sustainability/healthfulness (vegan, organic, low-salt, no-MSG, gluten-free, local, etc.) but new attributes such as shape (Platonic solids, free-form, etc.), color (warm, cool, multi-colored, etc.), and surface texture (smooth, rough, patterned, etc.).

Example of 3D-printed food by Maya Wiester.

Many of the projects were quite interesting and often powerful formal explorations, but usually without business/production/implementation considerations (no business plans), little testing/evaluation, and little or no implementation of computer-software-related applications.

What was evident from visiting all three educational sites was that each had its special emphasis, but no one place succeeded in providing, at the undergraduate (or graduate) level, the depth needed to produce the user-experience design professionals so much in demand now. It seems that on-the-job experience is what gives new graduates the necessary depth and breadth of experience and expertise.

Perhaps it has always been this way. Some educational institutions claim to provide it, that is, the “complete package.” I think most do not deliver the complete set of goods, but some are trying harder. Having visited or talked with faculty at 5 to 10 institutions in five or six different countries in the last year or two, I am prepared to say that at least the educational leadership is aware of the challenge.


Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.

Lightning strikes!

Authors: Deborah Tatar
Posted: Tue, December 02, 2014 - 1:15:05

Every once in a while, in the world of high technology, I encounter someone who is doing a perilous, marvelous thing: planting his/her feet on the ground, and, in grounding him/herself, becoming a conduit for far more. 

At the Participatory Design Conference in Namibia last month, Cristóbal Martínez did this both figuratively and literally. Cristóbal is a graduate student of James Paul Gee and Bryan Brayboy’s, in Rhetoric, at Arizona State. He is also a mestizo from el pueblo de Alcalde, located just north of Santa Fe, New Mexico. Martínez is also part of a collective, Radio Healer, that explores, through rhetoric and performance, indigenous community engagement. And some of that engagement involves appropriation of pervasive media. 

At PDC, Martínez committed an act of digital healing for—or perhaps with—us. The performance that I saw integrated traditional and novel elements. He first spoke, then donned a mask to dance and made music with shell ankle-rattles, a flute, his voice, and three Wi-mote-enabled instruments. One, a bottle that he tilts and turns, has drone-like properties, almost like a Theremin, that establish a kind of keening baseline for the performance. The other two—handheld revolving platforms with Wi-motes affixed—are musically more complex. They provided rhythmic form through the period of revolution of the platforms as he held them, and melodic content through the variation in tonality as the Wi-platforms revolved.

Martínez engaged in performance

In some important sense, this was not much different from the buffalo dances my family and I have witnessed and enjoyed on New Year’s Day near to Martínez’ home in New Mexico. I experienced it with similar emotion in Namibia as in New Mexico, and indeed my direct experience in Namibia was overlaid with a memory. We had taken our three-year-old, Galen, to the Buffalo Dances, and he stood there on the arid yellow winter ground, entirely absorbed, for several hours, apparently indifferent to the considerable cold. He was also uninterested in offers of snacks, lunch, or naps. (Our baby was also completely absorbed from the comfort of his much warmer stroller, so that made an entirely absorbed family.) 

Eventually, the huge Abuelo (grandfather) in his cowboy hat came over to Galen. Galen, also in a cowboy hat, and with his great serious, steady dark eyes, and his soft trusting little baby cheeks, looked up at the Abuelo. The Abuelo silently held out his great huge man hand and folded Galen’s little one into a serious man’s handshake. After which he gave him a very tiny, very precious candy cane. There it was, in two silent, economical gestures: acknowledgement of the elements of the man my son would become (and now is), and the child that he was. We had come to witness the dances, but we were, in this way, also seen. 

In some sense, Martínez’ performance was like that for me. It was replete with directly experienced meaning—meaning perceptible to my three-year-old and even the 8-month-old baby. I was a witness; I found my own meaning in it despite my non-native status, and I was to some extent and in some generous way invited to partake.

But there are some differences worth thinking about. Martínez was performing in isolation from his collective; he was performing for us, a small and sympathetic but definitely global audience; and then—the trickiest bit—he was adopting and adapting Wi-motes. 

I am not in much position to talk about what Radio Healer means in its own setting, except that we cannot think about what Martínez was showing us without thinking about the framing he gives it. His purpose is to open up and enliven a self-determining community through the assertion of what he calls “indigenous technological sovereignty.” He wants his people to engage in critical discourse around the appropriation of technology. He wants them to re-imagine it for their own purposes. Possibly “he wants” is too egocentric a phrase. He probably sees himself as a conduit of a larger collective wanting. This quest is given purpose by their own pursuit of their own cultural logic and lives, but it is also given poignancy by pressures that particularly impinge on Indian sovereignty. Not only have Indians been decimated, abused, underserved and neglected in the past, but they also are currently colonized in many ways, not the least of which is a considerable pressure to be a kind of living fossil—to live as stereotypes of themselves, unable to change, and unable to create living community. 

Martínez is not the only person to protest this pressure. Glenn Alteen and Archer Pechawis expressed it at last spring’s DIS conference in Vancouver in explaining the operation of grunt gallery (especially Beat Nation). And, while in Vancouver, I was lucky enough to be able to see Claiming Space, the hip-hop Indian art show at the Museum of Anthropology, created by teenage urban First Nation people. The question that was posed around that curated exhibit was, “Why should it be noteworthy that Indians engage with hip-hop?” 

And the good question that follows on this (“Why should it be noteworthy that Indians use Wi-motes in their ritual?”) brings us back to the meaning of Martínez’ performance for us at PDC. Because Indians are also Americans or Canadians or citizens of the world, and, even if they were space aliens, Martínez’s healing tacitly suggests that the performance has relevance beyond its indigenous origin. 

To my mind, the relevance stems first from the fact that we must all—Indian, not-Indian, American, African—decide how to live, given the palpable options in front of us, and then, secondarily, amid the pressures of computation. Computing is a structural enterprise, but we are not structural creatures. How do any of us create lives, our own rich lives, in the constant presence of the reductionist properties of the computer? In this sense, computation is a colonization that we all face.

Martínez’ performance underscores the ways in which it is terribly, desperately hard to wrest even a device as simple and innocuous-seeming as a Wi-mote from its place in fulfilling and fueling consumption in a consumer society toward a role as an expressive element in a spiritual practice. He cannot even do this simple thing, using it in a performance, without it being noteworthy, even definitional! How much more difficult is it to resist the self-definition we see in the systems embedded into the everyday activities of our lives? Martínez, arguably, provides us with a model for culturally responsive critical engagement with emerging pervasive technologies in the early 21st century. His performance raises appropriation to a design principle: Use that which is noteworthy, but use it for your own purposes. Make sure that they are your purposes. Or as Studs Terkel used to say, “Take it easy, but take it.”

Posted: Tue, December 02, 2014 - 1:15:05

Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.

@Lucy Suchman (2014 12 12)

Deborah, thanks so much for this lucid reflection on a complicated event!  It makes its significance clearer even for those of us who were present.


User experience without the user is not user experience

Authors: Ashley Karr
Posted: Mon, November 24, 2014 - 11:09:43

Takeaway: User experience (UX) is a method of engineering and design that creates systems that work best for the intended user. To design in this way, users must be included in the design process through user research and usability testing. If user research and usability testing are not practiced, then UX is not being practiced.

What is the problem?

I have seen and interacted with countless individuals and organizations claiming that they practice UX, but in reality, they practice what I call personal experience design, stakeholder experience design, or client experience design. In this type of design, there may be a department called UX within an organization, there may be a few individuals with UX in their job descriptions, or there may be a consultant or agency that sells UX services to clients. However, these departments, agencies, and individuals never work directly with representative users. Simply put, what these people and groups are practicing is not user experience because they are not including the user in their design process. 

Why do we exclude the user from user experience?

Excuses for not working with users include: lacking the time and money to conduct research, not knowing that working with users is part of the UX process, not believing working with non-designers or engineers would positively impact a design, actually being afraid of recruiting and working with users, and not knowing how to conduct research and testing. My hope is that the frequency of these excuses will drop considerably over the next few years as more people become aware of how valuable user research and usability testing methods are and how fast, easy, and enjoyable user research and usability testing can be. 

User research and usability testing can speed up the design process for a number of reasons. The most important, in my opinion, is that the data gathered from research and testing is very difficult to argue with. This data aligns teams and stakeholders and cuts back on time spent agonizing and arguing over design decisions. If two departments or two team members can’t agree on a design element, for example, they can test it and let the users choose. Setting up and running a test is often faster and more productive than arguing or philosophizing about the design with teams, clients, and/or stakeholders.
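As a minimal sketch of “letting the users choose,” a disagreement between two design variants often reduces to comparing task-success (or click-through) proportions with a two-proportion z-test. The function and counts below are illustrative assumptions for this example, not anything prescribed in the post:

```python
# Settle a design argument with data: compare two variants' success
# rates using a two-proportion z-test (stdlib only, no SciPy needed).
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for H0: the two rates are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)   # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_value

# Hypothetical test: Variant A, 48 of 120 users completed the task;
# Variant B, 72 of 118 users did.
z, p = two_proportion_z(48, 120, 72, 118)
if p < 0.05:
    print(f"Users chose: significant difference (z={z:.2f}, p={p:.4f}).")
else:
    print(f"No significant difference (p={p:.4f}); keep testing.")
```

With counts like these the difference is clear-cut, which is exactly why a quick test ends an argument faster than a meeting does; the 0.05 threshold is a conventional choice, not a rule.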

Why is it important to work with users?

Working with users allows us to understand users’ mental models regarding a design. This, in turn, allows us to design an interface that matches users’ mental models rather than the engineers’, designers’, or organizations’ implementation models. It makes sense to design something that the user will understand, and the way to learn what makes sense to the user is to talk to, listen to, and observe them. Making sure that users understand how to use your design does not guarantee success, but it does increase your chances. By working with users in this way, we also often gain deep insights that spur innovation for our current and future designs. Needs we may never have spotted reveal themselves and point us toward a new and important function or a completely novel product.

What are user research and usability testing?

User research is a process that allows researchers to understand how a design impacts users. Researchers learn about user knowledge, skills, attitudes, beliefs, motivations, behaviors, and needs through methods such as contextual inquiry, interviewing, and task analysis. Usability testing is a method for evaluating a system or interface by testing it with representative users. The value of usability tests is that they show how users actually interact with a system—and what people say they do is often quite different from what they actually do. The biggest difference between the two is this: A usability test requires a prototype; user research does not necessarily require one.

How can you begin practicing user research and usability testing now?

Depending on your situation, you may be able to begin practicing user research and user interviewing informally right now by simply asking people around you for feedback on your design. For most of us, this is very feasible, and we do not need permission from anyone to get started. If your representative users are quite different from the people around you, if you are working on a project protected by non-disclosure agreements, or if you are working within a large organization with an institutional review board, you will have a few more hoops to jump through: You will have to work a little harder to recruit representative users, require research participants to also sign NDAs, and/or receive approval from the institutional review board before you can start. 

Let’s set the hoops aside now and accept that getting out and away from your own head, assumptions, desk, office, and devices in order to talk to and observe users in their native habitats is one of the most powerful design techniques that exists. Being afraid or uninformed can no longer be an excuse. Once you have conducted a user interview or a round of usability testing, the fear subsides, and there are many amazing, free resources available online to guide practitioners through research and testing. I recommend a hearty Google search to get the information you need to begin incorporating user research and usability testing into your process. And that is the point of this article—to get you to begin.


Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm,


Authors: Jonathan Grudin
Posted: Fri, November 21, 2014 - 4:18:54

Issues that elicit passionate and unpredictable views arrive, fast and furious:

  • The right to be forgotten
  • Facebook emotion manipulation
  • Online bullying and cyber hate
  • Institutional Review Boards
  • Open publishing
  • The impact of technology on jobs

Is consensus eroding?
Has HCI broadened its scope to encompass polarizing topics?
Are social and mainstream media surfacing differences that once were hidden?

After briefly reviewing these topics, I’ll look at controversies that arose in years past—but back then it was two or three per decade, not one every six months. Unless I’ve forgotten some, the field is more contentious in its maturity.

Recent controversies

Expunging digital trails. The European Union ruled that people and organizations can prevent search engines from pointing to some past accounts of their activity. This might favor those with financial resources or high motivation due to particularly unsavory histories, but it could counter an erosion of privacy. That said, it doesn’t expunge records, it just puts a hurt on (non-European) search engine companies. Will it work? The BBC now lists articles that search engines must avoid, which could increase the attention the articles receive.

The Facebook study. Researchers set off a firestorm when they claimed to induce “emotional contagion” by manipulating Facebook news feeds. Whether or not the study showed emotional contagion [1], it revealed that Facebook quietly manipulates its news feed. The use of A/B testing to measure advertising effectiveness is not a secret, but this went further. If happy people prove more likely to click on ads, could Facebook systematically suppress grumpy people’s posts? Many in the HCI community defended the study as advancing our understanding; I could not predict which side a colleague would come down on. I was a bit torn myself. I’m a strong advocate for science, but what if one of the unidentified half million people barraged with selectively negative friends’ posts dropped out and another leapt from a bridge? The “contagion” metaphor isn’t comforting.

Prior approval to conduct research. Did the Facebook terms of agreement constitute ethically adequate “informed consent” by those in the study? The limited involvement of an Institutional Review Board (IRB) was challenged, which brings us to another ongoing debate. IRB approval is increasingly a prerequisite for academic research in health and social science, intended to avoid harm to study subjects and legal risk for research institutions. It seems reasonable, but in practice, researchers can wear down examiners and get questionable studies approved, and worthwhile research can be impeded or halted by excessive documentation requirements.

The Facebook study team included academic researchers. Industry researchers are often exempt from IRB review, which can irritate academics. I have experienced both sides. I’ve seen impressive, constructive reviews in clinical medicine. In behavioral and social science, I’ve seen good research impeded. Could a PhD based on human subject observation or experimentation become a license to practice, as a surgeon obtains a license, with onerous IRB reviews invoked only after cases of “malpractice”?

Hate speech and bullying vs. free expression. A few well-publicized teenage suicides after aggression in online forums led to legislation and pressure on social media platform and search engine developers to eliminate unwanted, disturbing online confrontations. But it’s complicated: What is offensive in one culture or subculture may not be in another. Offensiveness can depend on the speaker—“only Catholics can criticize the Pope,” but on the Internet no one knows you’re a Catholic. Some people find it entertaining to insult and horrify others—“trolls” exhibit poor taste, but is it hate speech?

We want to protect the young, but how? When a child is the target and a parent is known, should abuse be reported? Some kids can handle it and would rather not involve their parents. Some researchers argue that children with abusive parents might be worse off if a parent is brought in. This topic calls for research, but legislation is being enacted and software developers can’t wait. People take different sides with considerable passion.

Open publishing. Publishing becomes technically easier every year. Why not cut publishers out of the loop? Many who have looked closely respond, “Because publishers oversee details that busy professionals would rather avoid.” Nevertheless, ACM is under pressure despite its very permeable firewall: Individual and conference “author-izer” features permit free worldwide access, and thousands of institutions buy relatively inexpensive ACM site licenses without grumbling. Open access was embraced first in mathematics and physics, where peer review plays a smaller role, and in biology and medicine, where leading journals are owned by for-profit publishers. Within HCI and computer science, calls for open access are strongest in regions that rely less on professional societies for publication.

Publishers sometimes spend profits in constructive ways. ACM scans pre-digital content that has no commercial value into its useful digital library archive and supports educational outreach to disadvantaged communities. Some for-profit publishers have trail-blazed interactive multimedia and other features in journals. Publishers do shoulder tasks that over-taxed volunteers won’t.

The impact of technology on jobs. In 1960, J.C.R. Licklider proposed a “symbiosis” between people and computers. Intelligent computers will eventually require no interface to people, he said, but until then it will be exciting. Many of his colleagues in 1960 believed that by 1980 or 1990, ultra-intelligent machines would put all humans out of work—starting with HCI professionals. No one now thinks that will happen by 1980 or 1990, but some believe it will happen by 2020 or 2030.

A recent Pew Internet survey found people evenly split. Digital technology was credited for the low-unemployment economy prior to 2007, so why not blame high tech for our lingering recession? That’s easier than taking action, such as employing people to repair crumbling infrastructures.

In September, a Churchill Club economic forum in San Francisco focused on automation and jobs. The economists, mostly former presidential advisors, noted that in the past, new jobs came along when technology wiped out major vocations. The technologists were less uniformly sanguine. A Singularity University representative forecast epidemic unemployment within five years. The president of SRI more reassuringly predicted that it would take 15 years for machines to put us all out of work. I contributed a passage written by economics Nobel Laureate and computer scientist Herb Simon in 1960, appearing in his book The Shape of Automation for Men and Management: “Technologically, machines will be capable, within 20 years, of doing any work that a man can do.”

Past controversies: Beyond lies the Web

The HCI world was once simple. We advocated for users. Now it’s more complicated. Users hope to complete a transaction quickly and leave; a website designer aims to keep them on the site, just as supermarket designers place frequently purchased items in far corners linked by aisles brimming with temptation.

Some disagreements in CHI’s first 30 years didn’t rise to prominence. For example, many privately berated standards as an obstacle to progress, whereas a minority considered standards integral to progress, claiming that researchers generally favor standards at all system levels other than the one they are researching. Social media weren’t there to surface such discussions and mainstream media rarely touched on technology use. However, controversy was in fact rare. Program committees desperately sought panel discussions that would generate genuine debate. We almost invariably failed: After some playful posturing by panelists, good-natured agreement prevailed.

There were exceptions. Should copyright apply to user interfaces? Pamela Samuelson organized CHI debates on this in 1989 and 1991. Ben Shneiderman advocated legal punishments and testified in a major trial. On the other side, many including Samuelson worried that letting a lawyer’s nose into the HCI tent would lead to tears.

In spirited 1995 and 1997 CHI debates, Shneiderman took strong positions against AI and its natural-language-understanding manifestation [2]. His opponents were not mainstream CHI figures and the audience generally aligned with Ben. AI competed with HCI for the hearts and minds of funders and students. NLU dominated government funding for human interaction but never established a significant foothold in CHI, where most of us knew that NLU was going nowhere anytime soon.

Today, there is some openness to discussing values and action research. Back then, CHI avoided political or value-laden content. We were engineers! One might privately lean left or libertarian, but the tilt was scrupulously excised from written work. Ben Shneiderman was a lonely voice when he advocated that CHI engage on societal issues.

The prevailing view was that HCI must eschew any hint of emotional appeal to get a seat at the table with hardware and software engineers, designers and managers. Ben’s calls to action were balanced by his conservative reductionist methodology: There is no problem with experiments that more experiments won’t solve.

The most heated controversies were over methods. Psychologists trained in formal experimentation, engineers focused on quantitative assessment, and those who saw numbers as essential to getting seats at the table were hostile to both “quick and dirty” (but often effective) usability methods and qualitative field research.

In 1985, in the first volume of the journal Human-Computer Interaction, Stu Card and Alan Newell said that HCI should toughen up because “hard science” (mathematical and technical) always drives out “soft science.” Jack Carroll and Robert Campbell subsequently responded that this was a bankrupt argument, and that CHI should expand its approaches to acquire better understanding of fundamental issues. In 1998, again in the journal HCI, Wayne Gray and Marilyn Salzman strongly criticized five influential studies that had contrasted usability methods. They drew 60 pages of responses from 11 leading researchers.

The methodological arguments subsided. Today’s broader scope may reflect greater tolerance for diverse methods, or maybe the tide turned—advocates of formal methods shifted to other conferences and journals.

Concluding observation

Today contention flares up, but public attention soon moves on. Is there any sustained progress? Some topics are explored in workshops or small conferences. A Dagstuhl workshop made progress on open publishing, but we did not produce a full report and uninformed arguments continue to surface. Some people with strong feelings aren’t motivated to dig deeply; some with a deep understanding don’t have the patience to repeatedly counter emotional positions.

The vast digital social cosmos that surfaces controversies may diminish our sense of empowerment to resolve them. We touch antennae briefly and continue on our paths.


1. The study showed that news feeds that contain negative words are more likely to elicit negative responses. For example, if people respond to “I’m feeling bad about my nasty manager” by saying “I’m sorry you’re feeling bad” and “Tough luck that you got one of the nasty ones,” the authors would conclude that negative emotion spread. Other explanations are plausible. When my friend is unhappy, I may sympathize and avoid mentioning that I’m happy, as in the responses above.

2. A record of the CHI 1997 debate, “Intelligent software agents vs. user-controlled direct manipulation,” is easily found online. A trace of the more elusive CHI 1995 debate can be found by searching on its title, “Interface Styles: Direct Manipulation Versus Social Interactions.”

Thanks to Clayton Lewis for reminding me of a couple controversies and to Audrey Desjardins for suggestions that improved this post and for her steady shepherding of Interactions online blogs.


Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

Disrupting the UX design education space

Authors: Richard Anderson
Posted: Wed, November 19, 2014 - 12:31:55

Room 202

My teaching partner Mandy and I stood in silence, looking one last time around the room in which magic had happened the preceding 10 weeks. We teach the UX Design immersive for General Assembly in San Francisco: 10 weeks, 5 days/week, 8 hours/day of teaching and learning, of intense, hard work, of struggle, of laughter, of transformation, of bonding that will last forever. Educational experiences don’t get any better than this.

The UX Design immersive is intended mostly for people wanting to make a career transition. Students make a huge commitment by signing up for the course, stopping whatever they were doing prior, and in some cases, traveling long distances to do one thing: to become a UX designer.

General Assembly is one of several new educational institutions that are slowly disrupting the higher education space. Jon Kolko has identified the following qualities shared by many of these institutions’ programs:

  1. they are short;
  2. they focus on skill acquisition;
  3. they produce a portfolio as evidence of mastery;
  4. they are taught by practitioners;
  5. they promote employment and career repositioning, rather than emphasizing the benefits of learning as an end in itself;
  6. they typically focus on "Richard Florida" type jobs and careers: the creative disciplines of software engineering, product design, advertising, marketing, and so on.

As described by Jon:

“Students who graduate from these programs have a body of work that they can point to and say ‘I made those things.’ This makes it very easy to understand and judge the quality of the student, particularly from the standpoint of a recruiter or hiring manager.”


“These educators have a deep and intimate understanding of both the material that is being taught and the relevancy of that material to a job.”

Given the increasingly heard argument that academic programs are not producing the kinds of designers needed most by industry (see, for example, "On Design Education"), and given that 90% of UX Design immersive students secure jobs within 90 days of the end of their cohort, these programs deserve the attention they are getting. (I might be moderating a panel contrasting different institutional instructional models at the Interaction 15 Education Summit in February.)

What is it like teaching the UX Design immersive at General Assembly? To get a sense of this, read the Interactions magazine blog post written earlier this year by our fellow UX Design immersive instructor in Los Angeles, Ashley Karr, entitled, “Why Teaching Tech Matters.” Also, Mandy and I might be conducting a mock classroom at the Interaction 15 Education Summit in February to give attendees a mini-experience of the immersive program.

Tears filled the room on the final day of the course. We all had put everything we had into the preceding 10 weeks, and we could not help but be emotional. We hope the magic will happen again when we teach the course again in December. But it all will happen in a different space (a new campus opens tomorrow), and Mandy and I will be paired with other instructors instead of each other. 

I will miss the magic of Room 202 in the crazy, crowded 580 Howard Lofts with only one bathroom and no air conditioning, situated next to a noisy construction site; I will miss the magic of working closely with the amazing Mandy Messer; and I will miss the magic of getting to know 21 special, fabulous people who are now new UX designers. 

But we will do it again, and we will try to do it even better.


Richard Anderson

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.

@Jonathan Grudin (2014 11 19)

Congratulations and very nice, Richard. Almost makes me wish I was still a practitioner so I could join in. I hope you find a way to franchise this.

@Ashley Karr (2014 11 19)

Hello! Thanks for writing this - and thanks for the shout out. This is a very good article to reference when folks asks questions regarding types of programs they should look into re: career changes and continuing education. Good luck with your next round!

The top four things every user wants to know

Authors: Ashley Karr
Posted: Thu, November 13, 2014 - 11:43:16

I have been conducting research with human participants for roughly 14 years. Some of my studies have been formal (i.e., requiring the approval of an ethics or institutional review board), some have been informal (i.e., guerilla usability testing for small start-ups), and some have landed in between those two extremes. The populations and domains I have studied range widely; even so, I have found certain similarities across all my participants. I am sharing with you now the results of my meta-analysis of all the usability testing, user research, field studies, and ethnographies that I have completed so far in my career. I call it “The Top Four Things Every User Wants to Know,” and I use it on a daily basis—it really does come in handy that often:

  1. Users want to see a sample of your design. For example, they want to see pictures, a demo, or a video of how your design functions. Telling your users about your design through speech or text is not as effective as showing them, unless, of course, you are working with a visually impaired population. If users aren’t able to see a representative example of your design functioning, they will lose interest very quickly.

  2. Users want to know what your design will cost in terms of money. Hiding fees or prices associated with your design breaks trust. If users can’t find fees or prices quickly and easily, they lose interest in your design and either look to competitors or simply move on. When conducting research on pricing and fee structures, 100% of users in my studies have told me that they are willing to pay more money (within reason) for a product or service if that means they are able to find out how much money they will be spending upfront. Another design decision that can break trust is using the term free. 100% of users that I have studied regarding this phenomenon state something like, “Nothing in this world is free. What are they hiding from me?” Interestingly, I have done studies on systems that are purely informational and very distant from the world of commerce. Users were still concerned that they would be charged in some way for their use of the system.

  3. Users want to know how long it will take to use and/or complete a particular task with your design. Users do not believe you when you say or write that your design saves time, is easy to learn, and quick to use. If you tell your users these things, you waste their time and break their trust. If you show them how quick and easy your design is, and your design actually performs this well, then you’re getting somewhere.  

  4. Users want to know how your design will help or harm them if they decide to start using it. As one of my students said, “Users want to know how your design will make their life more or less awesome before they decide to truly commit and interact or purchase.” This transcends time and money, and moves into deeper realms, such as adding more meaning to peoples’ lives or more positive engagement with their surroundings.

To recap, users want to know what your design is, what it costs, how long it will take to use, and how it will make their life more awesome (or not)—and in that order. These may seem like overly simplistic design elements, but very basic things get overlooked in the design process all the time. If those basic elements do not make it into the design before launch or production, disaster and failure may strike. This list also helps me when I get stuck due to overthinking and possibly overcomplicating a design. It reminds me to keep my design straightforward and highly functional for my users. 

Straightforward. Highly functional. These may be the two most important characteristics of any design. Add those to the four things that every user wants to know, and we have six quality ingredients for user-centered design.


Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm,

Love it or list it

Authors: Monica Granfield
Posted: Wed, November 12, 2014 - 3:57:45

Have you ever tried to coordinate a project, a group of people, their activities, and their progress? Or organize your thoughts or what needs to get done? What has been your most efficient tool?

For me it's most often some form of a simple list. 

All kinds of systems have been created over time to help individuals and organizations get organized and improve efficiency, from pads of paper with checkboxes printed on them to magnetic boards with pre-canned task components, for kids and adults alike. Paper planners and systems like Franklin Planners have stood the test of time. The digital age has brought a slew of complex products for targeted industries or personal use. All come with the promise of organizing, tracking, planning, and even projecting work; resourcing; generating analytics; optimizing for efficiency; and leaving hordes of free time to have coffee with friends. How many deliver on this promise?

From professional to personal use, there's a productivity product out there for you. I have had the pleasure of evaluating some of these tools, using some of these tools, and yes, even designing a tool to help boost communication, organization, and productivity for users. The dynamics of organizing people and activities is not easy and can quickly become complex.

I have witnessed the initial flood of enthusiasm over the promise of accomplishment these tools bring, and then watched the enthusiasm fizzle and use of the product simply fade over time. Too often, tools meant to boost productivity become a full-time job for at least one person in an organization. I have also witnessed users struggle with these products. Sometimes users succeed with portions of a product; other times products are so complex and hard to grasp that hours of training and use still fail to make users successful. The vast majority of users become confused and use only the top few features that will meet the expectations of management. Management is often quiet about how productivity tools enter an organization, soliciting little feedback from the people who will use them.

Many of these products try to emulate a conceptual model, rather than how people work or what they need from a product. If you are not familiar with the conceptual model, learning the product will prove to be a challenge. One product I used tried to emulate the conceptual model of the Agile process. However, there are many interpretations of what the Agile process is and how to implement it. Also, Agile typically includes software development, but not other related disciplines such as documentation, UX, marketing, or hardware. Roles in disciplines not included as part of the process in the product are retrofitted into the conceptual model. The users in these roles don’t understand the model, get frustrated learning or managing the product, and then start the decline into becoming a non-user.

Rather than struggle with a tool that doesn't meet the needs of the group, organization, or users, it becomes easier to just resort back to a good old-fashioned organizational tool such as a list. Most of these lists end up being created and managed in a spreadsheet or documentation program, which is more familiar to users and therefore makes it easier to manage your people and activities: tracking changes, sorting, filtering, and simply checking something off when completed. No complex processes where you assign tasks and stories, or forward users to a new phase of a project. No logging in, no trying to find who owns what and who did what. No time wasted trying to figure out how this glorified list with its complex system of built-in features works. All you need to do is glance at the list, with its glorious titles, headers, columns, and rows, all there right in front of you. Prioritize the list, reorder, highlight items, cross something off and ta-dah... you are done. Now you can go and have coffee with your friends.

If an application that is meant to organize and increase productivity becomes too complex and hard to use, the abandonment rate will rise. Organizations will abandon one product for another and, if all the while their users don't love the product, the users most likely will slowly and quietly resort to listing it.

The simplicity of a list is all that is needed to keep me organized and boost my productivity. Lists play a key role in tracking what needs to be done, keeping inventory of issues, and tracking and assigning who needs to do what, when.

Sometimes the simple and straightforward solution just works. If you don't love your productivity tool, do you list it? 

Posted in: on Wed, November 12, 2014 - 3:57:45

Monica Granfield

Monica Granfield is a user experience designer at Go Design LLC.

Batman vs. Superman (well, actually, just PDC vs. DIS)

Authors: Deborah Tatar
Posted: Tue, October 28, 2014 - 10:48:54

The Participatory Design Conference (PDC), which just had its 13th meeting in Windhoek, Namibia, is a close cousin to DIS, the Design of Interactive Systems conference. Both are small, exquisite conferences that lead with design and emphasize interaction over bare functionality; however, like all cousins (except on the ancient Patty Duke show in which Patty Duke played herself and her “British” cousin), there are some important differences. Unlike DIS, PDC is explicitly concerned with the distribution of power in projects; furthermore, the direction of distribution is valenced: more power to those below is good. 

In a way, it is odd for designers to think about distributing power downward—how much power do designers actually have?—and yet the PD conference is an extremely satisfying place to be. Even if designers do not, in fact, have much power, we are concerned with it. The very act of designing is an assertion of power. Why design, if not to change behavior? And what is changing behavior if not the exertion of power? And if we are engaged in a power-changing enterprise, how much finer it is, within the limitations of our means, to take steps that move power in the right direction rather than accepting powerlessness ourselves? To contemplate what is right is energizing!

DIS has been held in South Africa, but PDC one-upped it by being held in Namibia and by attracting a very significant cadre of black African attendees. Consequently, a key question after the initial round of papers concerned the fears of the formerly colonized: whether the practices of participatory design are not, in some sense, a way of softening up the participating populace for later, more substantial exploitation. Of course, one hopes not, but how lovely to be at a conference that does not sweep that important issue under the rug.

As this question points out, we do not always know what is right. Ironically, acknowledging this allows the conference to feel celebratory. Perhaps it was the incredible percussive music, the art show, or Lucy Suchman’s “artful integration” awards, received this year by Ineke Buskers, representing GRACE (Gender Research in Africa into ICTs for Empowerment), and by Brent Williams, representing Rlabs, which supports education and innovation in townships in South Africa and impoverished communities around the world. Both organizations are local, bottom-up, and community-driven appropriations of technology. Or perhaps it was the fact that so many papers focused on the discovery of the particulars that people care about in their lives and the creation of technologies that influence their lives. 

I went to the conference because I have been hoping to jump-start more thought about power in the DIS, CHI, and especially the CSCW communities. I know that many people in these communities have become increasingly concerned about power in the last few years. I hear whispering in the corridors, the same way Steve Harrison, Phoebe Sengers, and I heard whispering before we wrote our “Three Paradigms” paper (the one that tried to clarify basic schools of thought within CHI and how they go together as bundles of meaning). Now the whispering is different. It is about how the study of human-computer interaction needs to be more than the happy face on fundamentally exploitative systems. 

In any case, I knew that PDC would be ahead of me and that the people who would have the most sophistication would very likely not be American. As wonderful as Thomas Jefferson is, he—and therefore we in America—are too much about social contract theory and what Amartya Sen calls “transcendental institutionalism” to adjust easily to certain kinds of problems of unfairness, especially unfairness that requires perception of manifest injustice. As Amartya Sen points out, Americans draw very heavily on the idea that if we have perfect institutions, then the actual justice or fairness in particular decisions or matters of policy does not matter. Transcendental institutionalism is a belief that makes it very difficult to effectively protest fundamentally destructive decisions such as treating corporations as people. In HCI and UX, we tend to think that if users seem happy in the moment, or we improve one aspect of user experience, the larger issues of the society that is created by our design decisions are unimportant. Transcendental institutionalism, again. 

These issues are explored more at PDC. I attended a one-day workshop on Politics and Power in Decision Making in Participatory Design, led by Tone Bratteteig and Ina Wagner from the University of Oslo. Bratteteig and Wagner argue, in their new book (Disentangling Participation: Power and Decision-making in Participatory Design, ISBN 978-3-319-06162-7) as well as at the conference, that the key issue in the just distribution of power is choice in decision making. The ideal is to involve the user in all phases of decision making: in creating choices, in selecting between choices, in implementation choices (when possible), and in the many choices that surround the evaluation of results. Furthermore, a participatory project should, they feel, have a participatory result: it should increase the user's “power to.” “Power to” arises from a feminist notion of power in which dominion is not paramount. Instead, it is closer to Amartya Sen's notion of capability. Freedom, from Sen's perspective, consists of the palpable possibilities that people have in their lives. 

Pelle Ehn, who gave the keynote and is making a farewell round of extra-US conferences before retiring (he will be the keynote speaker at OzCHI shortly), put this view in a larger context by reminding us of Bruno Latour’s notions of “parliaments” and “laboratories.” In this way of thinking, the social qualities of facts are paramount. Though Latour appears to have pulled back from this view later in his life, it has the great advantage of helping us perceive issues of power in design. Power is hard to see. The waves of utopian projects (one even called Utopia) that Ehn has shepherded during his long career each reveal more about the shifting and growing power of technology and the institutions that profit from it. The implicit question raised is “what now?” 

A portion of this concern might seem similar to what we say all the time in human-computer interaction. After all, what is the desirable user experience if not the experience of “power to”? However, in fact, HCI and PD as normally conducted resemble one another only at a very high level of abstraction. They differ in the particulars. My t-shirt from the 2006 PDC conference reads, “Question Technology,” but I would say that the persistent theme is not precisely questioning technology as much as questioning how we are constructing technology. PDC does not ask whether technology serves, but whom, precisely, it serves, now and later. PDC asks “what is right?” 

PDC is the still-living child of CPSR (Computer Professionals for Social Responsibility), an organization that, after a long life, sadly went defunct just this year. I have not been involved with it in recent years, and I am not sure what to write on the death certificate, but it seems to me that the rise of untrammeled global corporate capitalism fueled by information technology has created new problems, that something like CPSR is much needed, and that PDC stands for many questions that need better answers than we currently have. 

I did not get to hear Shaowen Bardzell's closing plenary, because I had to journey over 30 hours to get home and be in shape to lecture immediately thereafter, but rumor has it that she energized the community. I am optimistic that her insider-outsider status as a naturalized American, raised in Taipei, gives her the perspective to address issues at the edge between, as it were, power over and power to.

Returning to the question of Batman vs. Superman, DIS is also a very delightful place to be. But the freedom of spirit that has characterized it since Jack Carroll resuscitated it in 2006 exists in uneasy implicit tension with the concerns and measurements of its corporate patrons.


Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.

@Jonathan Grudin (2014 10 29)

Nice essay, thanks Deborah.

@Ineke Buskens (2014 10 31)

Delightful piece Deborah, thank you! Much love from South Africa!

Uses of ink

Authors: Jonathan Grudin
Posted: Fri, October 17, 2014 - 10:06:47

Many species communicate, but we alone write. Drawing, which remains just below the surface of text, is also uniquely ours. Writing and sketching inform and reveal, record, and sometimes conceal. We write to prescribe and proscribe, to inspire and conspire.

My childhood colorblindness—an inability to see shades of gray—was partly overcome when I read Gabriel Garcia Marquez. But for nonfiction I continued to prize transparency. Crystal clarity for all readers is unattainable, but some writers come close. For me, Arthur Koestler’s breadth of knowledge and depth of insight were rivaled by the breathtaking clarity of his writing, no less impressive for often going unnoticed.

Lying on the grass under a pale Portland sun the summer after my sophomore undergraduate year, I took a break from Koestler—The Sleepwalkers, The Act of Creation, The Ghost in the Machine, and his four early autobiographical volumes—to read Being and Nothingness. Sartre had allegedly changed the course of Western philosophy. After a few days I had not progressed far. I had lists of statements with which I disagreed, felt were contestable, or found incomprehensible. “You are reading it the wrong way,” I was informed. Read it more briskly, let it flow over you.

This struck me as ink used to conceal: Jean-Paul Sartre as squid. A cloud of ink seemed to obscure thought, which might be profound or might be muddled.

As years passed I saw clear, deep writers ignored and opaque writers celebrated. In Bertrand Russell’s autobiography, Frege declared Wittgenstein’s magnum opus incomprehensible and Russell, Wittgenstein’s friend, felt that whatever it meant, it was almost certainly wrong. Decades later, my college offered a course on Wittgenstein (and none on Frege or Russell). I had given up on Sartre but took the Wittgenstein course. I liked the bits about lions and chairs, but unlike my classmates who felt they understood Wittgenstein, I sympathized with Frege.

It finally dawned on me that in nonfiction, as in complex fiction, through the artful construction of an inkblot, a verbal Rorschach, a writer invites readers to project conscious or unconscious thoughts onto the text and thereby discover or elaborate their own thoughts. The inkblot creator need not even have a preferred meaning for the image.

It requires skill to create a good verbal projection surface. A great one has no expiration date. “Sixty years after its first publication, [Being and Nothingness] remains as potent as ever,” says Amazon. (It’s now over seventy years.)

Let’s blame it on the Visigoths

George Orwell was a clear writer. His novels 1984 and Animal Farm are unambiguous enough to be assigned to schoolchildren. In a long-defunct magazine he published this beautiful short essay, a book review. (Thanks to Clayton Lewis for bringing it to my attention.)

The Lure of Profundity
George Orwell, New English Weekly, 30 December 1937

There is one way of avoiding thoughts, and that is to think too deeply. Take any reasonably true generalization—that women have no beards, for instance—twist it about, stress the exceptions, raise side-issues, and you can presently disprove it, or at any rate shake it, just as, by pulling a table-cloth into its separate threads, you can plausibly deny that it is a table-cloth. There are many writers who constantly do this, in one way or another. Keyserling is an obvious example. [Hermann Graf Keyserling, German philosopher, 1880–1946.] Who has not read a few pages by Keyserling? And who has read a whole book by Keyserling? He is constantly saying illuminating things—producing whole paragraphs which, taken separately, make you exclaim that this is a very remarkable mind—and yet he gets you no forrader [further ahead]. His mind is moving in too many directions, starting too many hares at once. It is rather the same with Señor Ortega y Gasset, whose book of essays, Invertebrate Spain, has just been translated and reprinted.

Take, for instance, this passage which I select almost at random:

“Each race carries within its own primitive soul an idea of landscape which it tries to realize within its own borders. Castile is terribly arid because the Castilian is arid. Our race has accepted the dryness about it because it was akin to the inner wastes of its own soul.”

It is an interesting idea, and there is something similar on every page. Moreover, one is conscious all through the book of a sort of detachment, an intellectual decency, which is much rarer nowadays than mere cleverness. And yet, after all, what is it about? It is a series of essays, mostly written about 1920, on various aspects of the Spanish character. The blurb on the dust-jacket claims that it will make clear to us “what lies behind the Spanish civil war.” It does not make it any clearer to me. Indeed, I cannot find any general conclusion in the book whatever.

What is Señor Ortega y Gasset's explanation of his country’s troubles? The Spanish soul, tradition, Roman history, the blood of the degenerate Visigoths, the influence of geography on man and (as above) of man on geography, the lack of intellectually eminent Spaniards—and so forth. I am always a little suspicious of writers who explain everything in terms of blood, religion, the solar plexus, national souls and what not, because it is obvious that they are avoiding something. The thing that they are avoiding is the dreary Marxian ‘economic’ interpretation of history. Marx is a difficult author to read, but a crude version of his doctrine is believed in by millions and is in the consciousness of all of us. Socialists of every school can churn it out like a barrel-organ. It is so simple! If you hold such-and-such opinions it is because you have such-and-such an amount of money in your pocket. It is also blatantly untrue in detail, and many writers of distinction have wasted time in attacking it. Señor Ortega y Gasset has a page or two on Marx and makes at least one criticism that starts an interesting train of thought.

But if the ‘economic’ theory of history is merely untrue, as the flat-earth theory is untrue, why do they bother to attack it? Because it is not altogether untrue, in fact, is quite true enough to make every thinking person uncomfortable. Hence the temptation to set up rival theories which often involve ignoring obvious facts. The central trouble in Spain is, and must have been for decades past, plain enough: the frightful contrast of wealth and poverty. The blurb on the dust-jacket of Invertebrate Spain declares that the Spanish war is “not a class struggle,” when it is perfectly obvious that it is very largely that. With a starving peasantry, absentee landlords owning estates the size of English counties, a rising discontented bourgeoisie and a labour movement that had been driven underground by persecution, you had material for all the civil wars you wanted. But that sounds too much like the records on the Socialist gramophone! Don’t let’s talk about the Andalusian peasants starving on two pesetas a day and the children with sore heads begging round the food-shops. If there is something wrong with Spain, let’s blame it on the Visigoths.

The result—I should really say the method—of such an evasion is excess of intellectuality. The over-subtle mind raises too many side-issues. Thought becomes fluid, runs in all directions, forms memorable lakes and puddles, but gets nowhere. I can recommend this book to anybody, just as a book to read. It is undoubtedly the product of a distinguished mind. But it is no use hoping that it will explain the Spanish civil war. You would get a better explanation from the dullest doctrinaire Socialist, Communist, Anarchist, Fascist or Catholic.

Clarity, ink clouds, and ink blots in HCI

In our field, we write mostly to record, inform, and reveal. At times we write to conceal doubt or exaggerate promise. The latter are often acts of self-deception, although spurred by alcohol and perhaps mild remorse, some research managers at places I’ve worked have confessed to routinely deceiving their highly placed managers and funders. They justified it by sincerely imagining that the resulting research investments would eventually pay off. (None ever did.)

Practitioners who attend research conferences seek clarity and eschew ambiguity. They may not get what they want—unambiguous finality is rare in research. But practitioner tolerance for inkblots is low. Over time, as our conferences convinced practitioners to emigrate, openings were created for immigrants from inkblot dominions, such as Critical Theory.

For example, echoing Orwell’s example of beardless women, the rejection letter for a submission on the practical topic of creating gender-neutral products stated, “I struggle to know what a woman is, except by reference to the complex of ideological constructions forced on each gender by a society mired in discrimination.” Impressive, but arguably an excess of intellectuality. It does not demean those struggling to know what a woman is to say that when designing products to appeal to women, “a better explanation” could be to define women as those who circle F without hesitation when presented with an M/F choice. A design that appeals both to F selectors and M selectors might or might not also appeal to those who would prefer a third option or to circle nothing, but in the meantime let’s get on with it.


Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

Designing the cognitive future, part V: Reasoning and problem solving

Authors: Juan Pablo Hourcade
Posted: Tue, October 14, 2014 - 3:38:54

I have been writing about how computers are affecting and are likely to affect cognitive processes. In previous posts I have touched on perception, memory, attention, and learning. In this post, I discuss reasoning and problem solving.

Computers are quite adept at deductive reasoning. If all facts are known (at least those we would use to make a decision), computers can easily use logic to make deductions without mistakes. Because of this, it is likely that we will see more and more involvement of computers in our lives to help us make decisions and guide our lives through deductive reasoning. We can see this already happening, for example, with services that tell us when to leave our home for our flight based on our current location and traffic on the way to the airport. 
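The flight example can be sketched as a small piece of deductive code: given known facts (flight time, current travel estimate, and a check-in buffer), the departure time follows by simple arithmetic. This is a hypothetical illustration, not how any real service is implemented; the function name and the two-hour default buffer are assumptions.

```python
from datetime import datetime, timedelta

def when_to_leave(flight_departure: datetime,
                  travel_minutes: int,
                  airport_buffer_minutes: int = 120) -> datetime:
    """Deduce when to leave home by working backward from known facts:
    the flight time, the current traffic estimate (travel_minutes),
    and a fixed buffer for check-in and security."""
    lead_time = timedelta(minutes=travel_minutes + airport_buffer_minutes)
    return flight_departure - lead_time

# A 6:00 p.m. flight with a 45-minute drive: leave by 3:15 p.m.
leave_by = when_to_leave(datetime(2014, 10, 14, 18, 0), travel_minutes=45)
print(leave_by.strftime("%H:%M"))  # prints 15:15
```

Because every fact is known, the conclusion is certain; the interesting design questions, as discussed below, arise when the inputs (traffic, in this case) must themselves be estimated.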

These trends could go further, with many other activities that involve often-problematic human decision-making moving to the realm of computers. This includes driving cars, selecting what to eat, scheduling our days, and so forth. In all these cases computers, when compared with people, would be able to process larger amounts of information in real time and provide optimal solutions based on our goals.

So what will be left for us to do? One important reasoning skill will involve understanding the rules these systems use to determine optimal outcomes, and how those rules relate to personal goals. People who are better able to do this, or who go further and determine their own sets of rules, are likely to derive greater benefits from these systems. One of the bigger challenges in this space comes from systems that could be thrown off balance by selfish users (e.g., traffic routing). People who are able to game these systems could gain unfair advantages. There are design choices to be made, including whether to make rules and goals transparent or to hide them due to their complexity.

What is clear is that the ability to make the most out of the large amounts of available data relevant to our decision-making will become a critical reasoning skill. Negative consequences could occur if system recommendations are not transparent and rely on user trust, which could facilitate large-scale manipulation of decision-making.

The other role left for people is reasoning when information is incomplete. In these situations, we usually make decisions using heuristics developed from past outcomes. In other words, we notice patterns in our experiences and develop basic “rules of thumb.” Our previous experiences therefore go a long way in determining whether we develop useful and accurate heuristics. The closer these experiences are to a representative sample of all the applicable events we could have experienced, the better our heuristics will be. On the other hand, being exposed to a biased sample of experiences is likely to lead to poorer decision-making and problem solving.

Computers could help or hurt in providing us with experiences from which we can derive useful rules of thumb. One area of concern is that information is increasingly delivered to us based on our personal likes and dislikes. If we are less likely to come across any information that challenges our biases, these are likely to become cemented, even if they are incorrect. Indeed, nowadays it is easier than ever for people with extreme views to find plenty of support and confirmation for their views online, something that would have been difficult if they were only interacting with family, friends, and coworkers. Not only that, but even people with moderate but somewhat biased views could be led into more extreme views by not seeing any information challenging those small biases, and instead seeing only information that confirms them.

There is a better way, and it involves delivering information that may sometimes make people uncomfortable by challenging their biases. This may not be the shortest path toward creating profitable information or media applications, but people using such services could reap significant long-term benefits from having access to a wider variety of information and better understanding other people’s biases.

How would you design the cognitive future for reasoning and problem solving? Do you think we should let people look “under the hood” of systems that help us make decisions? Would you prefer to experience the comfort of only seeing information that confirms your biases, or would you rather access a wider range of information, even if it sometimes makes you feel uncomfortable?


Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.

Conceptual precision in interaction design research

Authors: Mikael Wiberg
Posted: Wed, October 01, 2014 - 11:13:28

Interaction design research is to a large extent design driven. We do research through design. A design can be seen as a particular instantiation of a design idea. Accordingly, interaction design research is also about the development of ideas. However, there is no one-to-one relation between design ideas and design instantiations. A design idea can be expressed through a wide variety of designs. That is partly why we work with prototypes in our field and why we think iterative design is a good approach. We explore an idea through a number of variations in terms of how the design idea is manifested in a particular design.

Of course, precision is always a key ingredient in research. We appreciate precision in definitions, measurements, and descriptions of research methods, data sets, data-collection techniques, and formats. I also think that, as a community, we all agree that research contributions and conclusions should be stated with precision. Precision enables us to position a particular research contribution in relation to an existing body of research.

However, do we work with similar precision when it comes to articulating our design ideas? And do we work with such precision when we articulate how our design ideas are manifested in the designs we produce as important outcomes from our research projects? I certainly hope so! At least I have noticed a growing concern for this matter in our field over the past few years. In a recent paper Erik Stolterman and I discuss the relation between conceptual development and design [1], and in a related paper Kristina Höök and Jonas Löwgren present the notion of “strong concepts” in relation to interaction design research [2].

So, if we can agree that design and conceptual development go hand in hand in interaction design research, and if we can agree that precision is key for this practice as well, then maybe we should also ask how we can advance our field through design. That is, can we make research contributions, i.e., progress, through design? In a forthcoming NordiCHI paper we elaborate on this issue [3]. In short, we suggest that we should focus on formulating classes of interactive systems, and that we should develop ways of analyzing designs both in relation to the elements that constitute a particular design and in relation to how a particular design composition can be said to belong to, extend, challenge, or combine such classes. We discuss this in relation to the importance of a history of designs and a history of design ideas for interaction design research, under the label of “generic design thinking.” Again, if we as a research community take on such a classification project, precision will be key, both for reviewing the past and for moving forward! 


1. Stolterman, E. and Wiberg, M. Concept-driven interaction design research. Human-Computer Interaction 25, 2 (2010), 95–118.

2. Höök, K. and Löwgren, J. Strong concepts: Intermediate-level knowledge in interaction design research. ACM Transactions on Computer-Human Interaction 19, 3 (2012), Article 23.

3. Wiberg, M. and Stolterman, E. What makes a prototype novel? A knowledge contribution concern for interaction design research. Proc. NordiCHI 2014.


Mikael Wiberg

Mikael Wiberg is Professor of Informatics in the Department of Informatics at Umeå University, Sweden.

Lasting impact

Authors: Jonathan Grudin
Posted: Wed, September 24, 2014 - 10:35:29

An enduring contribution can take different forms. It can be a brick, soon covered by others yet a lasting part of a field’s foundation. Alternatively, it can be a feature that remains a visible inspiration.

Eminent scientists and engineers have offered insights into making an impact.

“If you want to predict the future, invent it,” said Alan Kay. In the 1960s, Kay conceived of the Dynabook. His work was widely read and had a lasting impact: 50 years later, tablets are realizing his vision. Vannevar Bush’s 1945 essay “As We May Think” has been aptly termed science fiction—an outline of an impossible optomechanical system called the Memex—but it inspired computer scientists who effectively realized his vision half a century later in a different medium: today’s dynamic Web.

Kay and his colleagues took a significant step toward the Dynabook by developing the Xerox Alto. Bush attempted to build a prototype Memex. However, generations of semiconductor engineers and computer scientists were needed to reach their goals—introducing a second factor.

A century ago, the celebrated inventor Thomas Edison captured the balance, proclaiming that genius was “1% inspiration, 99% perspiration.” He recognized the effort involved in inventing the future, but by attributing output to input this way he overlooked the fact that not all clever, industrious people thrive. Circumstances of birth affect a person’s odds of contributing; even among those of us raised in favorable settings, many creative, hard-working people have a tough time. More is needed.

Decades before Edison, Louis Pasteur observed that “chance favors the prepared mind.” This elegantly acknowledges the significance of perspiration and inspiration while recognizing the role of luck.

Lasting impact awards

On a less exalted level, several computer science conferences have initiated awards for previously published papers that remain influential. My 1988 paper on challenges in developing technology to support groups received the first CSCW Lasting Impact Award at CSCW 2014 in Baltimore. The remainder of this essay examines how that paper came to be written and why it succeeded. Avoiding false modesty, I estimate that the impact was roughly 1% inspiration, 25% perspiration, and 75% a fortunate roll of the dice. (If you prefer 100% totals, reduce any of the estimates by 1%.)

Seeing how one career developed might help a young person, although 30 or 40 years ago I would have avoided considering the role of luck. Like all students in denial about the influence of factors over which they have no control, I would have been anxious thinking that intelligence and hard work would not guarantee success. But those who move past Kay and Edison to Pasteur can use help defining a path toward a prepared mind.

The origin of my paper

“Why CSCW applications fail: Problems in the design and evaluation of organizational interfaces” was an unusual paper. It didn’t build on prior literature. It included no system-building, no usability study, no formal experiment, and no quantitative data. The qualitative data was not coded. The paper didn’t build on theory. Why was it written? What did it say?

The awkward title reflected the novelty, in 1988, of software designed to support groups. Only at CSCW ’88 did I first encounter the term groupware, more accessible than CSCW applications or Tom Malone’s organizational interfaces. Individual productivity tools—spreadsheets and word processors—were commercially successful in 1988. Email was used by some students and computer scientists, but only a relatively small community of researchers and developers worked on group support applications.

I had worked as a computer programmer in the late 1970s, before grad school. In 1983 I left a cognitive psychology postdoc to resume building things. Minicomputer companies were thriving: Digital Equipment Corporation, Data General, Wang Labs (my employer), and others. “Office information systems” were much less expensive, and less powerful, than mainframes. They were designed to support groups and were delivered with spreadsheets and word processing. We envisioned new “killer apps” that would support the millions of small groups out there. We built several such apps and features. One after another, they failed in the marketplace. Why was group support so hard to get right?

Parenthetically, I also worked on enhancements to individual productivity tools. There I encountered another challenge: Existing software development practices were a terrible fit for producing interactive software to be used by non-engineers.

Writing the paper

In 1986 I quit and spent the summer reflecting on our experiences. I wrote a first draft. My colleague Carrie Ehrlich and I also wrote a CHI’87 paper about fitting usability into software development. A cognitive psychologist, Carrie worked in a small research group in the marketing division. She had a perspective I lacked: Her father had been a tech company executive. She explained organizations to me and changed my life. The 1988 paper wouldn’t have been written without her influence. It was chance that I met Carrie, and partly chance that I worked on a string of group support features and applications, but I was open to learning from them.

In the fall, I arrived in Austin, Texas to work for the consortium MCC. The first CSCW conference was being organized there. “What is CSCW?” I asked. “Computer Supported Cooperative Work—it was founded by Irene Greif,” someone said. I attended and knew my work belonged there. The field coalesced around Irene’s book Computer Supported Cooperative Work, a collection of seminal papers that were difficult to find in the pre-digital era. Irene’s lasting impact far exceeds that of any single paper. I may well have had little impact at all without the foundation she was putting into place.

My research at MCC built on the two papers drafted that summer: (i) understanding group support, and (ii) understanding development practices for building interactive software. MCC, like Wang and all the minicomputer companies, is now gone. Wedded to AI platforms that also disappeared, MCC disappointed the consortium owners, but it was a great place for young researchers. I began a productive partnership there with Steve Poltrock, another cognitive psychologist, which continues to this day. We were informally trained in ethnographic methods by Ed Hutchins and in social science by Karen Holtzblatt, then a Digital employee starting to develop Contextual Design. MCC gave me the resources to refine the paper and attend the conference.

The theme: Challenges in design and development

Why weren’t automated meeting scheduling features used? Why weren’t speech and natural language features adopted? Why didn’t distributed expertise location and project management applications thrive? The paper used examples to illustrate three factors contributing to our disappointments:

  1. Political economy of effort. Consider a project management application that requires individual contributors to update their status. The manager is the direct beneficiary. Everyone else must do more work. If individual contributors who see no benefit do not participate, it fails. This pattern appeared repeatedly: An application or feature required more work of people who perceived no benefit. Ironically, most effort often went into the interface for the beneficiary.

    Was this well known? Friends and colleagues knew of nothing published. I found nothing relevant in the Boston Public Library. Later, I concluded that it was a relatively new phenomenon, tied to the declining cost of computing. Mainframe computers were so expensive that use was generally an enterprise mandate. At the group level, mandated use of productivity applications was uncommon.

  2. Managers decide what to build, buy, and even what to research. Managers with good intuition for individual productivity tools often made poor decisions about group support software. For example, audio annotation as a word processor feature appealed to managers who used Dictaphones and couldn’t type. But audio is harder to browse, understand, and reuse. We built it; no one came.

  3. You can bring people into a lab, have them use a new word processor for an hour, and learn something. You can’t bring six people into a lab and ask them to simulate office work for an hour. This may seem obvious, but most HCI people back then, including me, had been trained to do formal controlled lab experiments. We were scientists!

The paper used features and applications on which I had worked to illustrate these points.

Listening to friends

The first draft emphasized the role of managers. I still consider that to be the most pernicious factor, having observed billions of dollars poured into resource black holes over decades. But my friend Don Gentner advised me to emphasize the disparity between those who do the work and those who benefit. Don was right. Academia isn’t strongly hierarchical and doesn’t resonate with management issues. Academics were not my intended audience and few attended CSCW’88, but those who did were influential. Criticizing managers is rarely a winning strategy, anyway.

Limited expectations

Prior to the web and digital libraries, only people who attended a conference had access to proceedings. I wanted to get word out to the small community of groupware developers at Wang Labs, Digital Equipment Corporation, IBM, and elsewhere, so they could avoid beating their heads against the walls we had. Most CSCW 1988 attendees were from industry. I assumed they would tell their colleagues, we would absorb the points, and in a few months everyone would have moved on.

It didn’t matter if I had missed relevant published literature: The community needed the information! Conferences weren’t archival. The point was to avoid more failed applications, not to discover something new under the sun.

The impact

At the CSCW 2014 ceremony for my paper, Tom Finholt and Steve Poltrock analyzed the citation pattern over a quarter century, showing a steady growth and spread of the paper’s influence. I had been wrong—the three points had not been quickly absorbed. They remain applicable. A manager’s desire for a project dashboard can motivate an internal enterprise wiki, but individual contributors might use a wiki for their own purposes, not to update status. Managers still funnel billions of dollars into black holes. Myriad lab studies are published in which groups of three or four students are asked to pretend to be a workgroup.

All my subsequent jobs were due to that work. Danes who heard me present it invited me to spend two productive years in Aarhus. A social informatics group at UC Irvine recruited me, after which a Microsoft team building group support prototypes hired me. Visiting professorships and consulting jobs stemmed from that paper and my consequent reputation as a CSCW researcher.

Why the impact?

The analysis resonated with people’s experience; it seemed obvious once you heard it. But other factors were critical to its strong reception. The paper surfaced at precisely the right moment. In 1984 my colleagues and I were on the bleeding edge, but by 1988 client-server architectures and networking were spreading: The number of developers focused on group support had risen from handfuls in 1984 to hundreds in 1988, with thousands on the way.

I was fortunate that the CSCW’88 program chair was Lucy Suchman. Her interest in introducing more qualitative and participatory work undoubtedly helped my paper get in despite its lack of literature citation, system-building, usability study, formal experiment, quantitative data, and theory. In subsequent years, such papers were not accepted.

The most significant break was that the paper was scheduled early in a single-track conference that attracted a large, curious crowd. Several speakers referred back to it and Don Norman called it out in his closing keynote.

Finally, ACM was at that time starting to make proceedings available after conferences, first by mail order and then in its digital library.

Extending the work involved some perspiration. A journal reprinted the paper with minor changes. A new version was solicited for a popular book. Drawing on contributions by Lucy Suchman, Lynne Markus, and others, I expanded the factors from three to eight for a Communications of the ACM article that has been cited more than the original paper.

No false modesty

I was happy with the paper. I had identified a significant problem and worked to understand it. But as noted above, the positive reception followed a series of lucky breaks. Acknowledging this is not being modest. Perhaps I can convince you that I’m immodest: Other papers I’ve written have involved more work and in my opinion deeper insight, but had less impact.

Fifteen years later I revisited the issue that I felt was the most significant—managerial shortsightedness. The resulting paper, which seemed potentially just as useful, was rejected by CSCW and CHI. It found a home, but attracted little attention. Authors are often not the best judges of their own work, but when I consider the differences in the reception of my papers, factors beyond my control seem to weigh heavily.

Ways to contribute

The Moore’s law cornucopia provides diverse paths forward. Your most promising path might be to invent the future by building novel devices. In the early 1980s, we found that that does not always work out. To avoid inventing solutions that the rest of us don’t have problems for, prepare with careful observation and analysis, and hope the stars align.

A second option, which has been the central role of CHI and CSCW, is to improve tools and processes that are entering widespread use.

Third, you can tackle stubborn problems that aren’t fully understood. My 1988 paper was in this category. It is difficult to publish such work in traditional venues. With CSCW now a traditional venue, you might seek a new one.

In conclusion, Pasteur’s advice seems best—prepare, and chance may favor you. Preparation involves being open and observant, dividing attention between focal tasks, peripheral vision, and the rear-view mirror. For me, it was most important to develop friendships with people who had similar and complementary skills. It takes a village to produce a lasting impact.

Posted in: on Wed, September 24, 2014 - 10:35:29

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

Lessons from leading design at a startup

Authors: Uday Gajendar
Posted: Tue, September 16, 2014 - 10:01:28

Over the past 100+ days I’ve led the re-invigoration of a fledgling design capability at a two-year-old startup into a robust, cohesive practice with unified executive support and the vitality to carry it further. This includes a revitalized visual design language, visionary concepts to provoke innovation, and strategic re-thinking of UX fundamentals core to the product’s functionality. This being my first startup leadership role, it has certainly proved to be a valuable “high learning, high growth” experience filled with lessons small and large. I’d like to share a few here that made the most definitive mark on my mind, shaping my design leadership model going forward. Hopefully this will help other UX/HCI/design professionals in similar small-team leadership situations.

Say “no” to preserve your sanity (and the product focus): It’s critical early on to set boundaries demarcating exactly what you’ll work on and what you’ll defer to others. As a former boss liked to say, “You can’t boil the ocean.” Be selective on where you’ll have design impact with immediate or significant results that you can parlay into your next design activity. “Saying No” also builds respect from others, telling colleagues that you have a direction and a purpose to deliver against.

Remove “like” from the discussion: Everyone has opinions about design—that’s simply natural and expected. One way to mitigate the “I like” (or “I don’t like”) framing is to remove that word and instead focus on “what works” or “what doesn’t work” for a particular persona/context/scenario. This forces the discussion to be about the functional nature of design elements, not subjective personal tastes.

Role model good behavior from day one: It’s only natural for a startup starved for design expertise to immediately ask for icons and buttons after the designer found the bathroom and got the computer working. After all, that’s what most interpret design as—the tactical items. As a designer in that context, it’s your opportunity to demonstrate the right behavior for engaging and creating, such as asking user-oriented questions, drafting a design brief, sketching at whiteboards, discussing with engineers, etc. 

Build relationships with Sales, your best friend: Yes, sales! You gotta sell to customers and your sales leader will point you to the right folks to learn about customers, markets, partners, etc. Understanding the sales channel, which is the primary vehicle for delivering a great customer experience, is vital to your success as a design leader. Build that rapport to actively insert yourself into the customer engagement process, which is a gold mine of learnings to convert into design decisions. 

Don’t get hung up on Agile or Lean: These are just process words and mechanisms for delivering code, each with their particular lexicon. They are not perfect. There is no ideal way to fit UX into either one. Yet the overall dynamic is complementary in spirit and should enable smooth, efficient, learning-based outcomes to help iterate the product-market fit goals. The gritty, mundane details of JIRA, stories, estimations, sprint reviews, etc. are simply part of the process. Keep up your design vision and learn how to co-opt those mechanisms to get design ahead of the game, like filing “UX Stories” based upon your vision. 

Think in terms of “goals, risks, actions” when managing up: Maybe as part of a large corporate design team it was acceptable to vent and rant about issues with close peers. However, in a design leadership role on par with the CEO and VPs of Engineering or Sales, you need to be focused and deliberate in your communications with them, to amplify the respect and build trust/confidence in you. I learned it’s far more effective to discuss things in terms of your goals, the key risks affecting the accomplishment of said goals, and the actions desired (or asks to be made) to help achieve those goals. This is a far more professional and valuable way to drive the dialogue. Don’t just rant!

Finally, get comfortable with “good enough”: As Steve Jobs said, “Real artists ship,” meaning that you can’t sweat all the perfection-oriented details too much at the risk of delaying the release. At some point you must let it go, knowing that there will be subsequent iterations and releases for improving imperfections—which is ongoing. Fillers, stop-gaps, and temporary fixes are all expected. Do your best and accept (if not wholly embrace) the notion of “satisficing” (per Herb Simon): doing what’s necessary yet sufficient.

Design leadership is incredibly hard, perhaps made more difficult because of the glare of the spotlight now that UX is “hot” and finally recognized by execs and boards as a driver of company success. While you may be a “team of one,” the kinds of learnings itemized here will help enable a productive, design-led path forward for the team.


Uday Gajendar

Uday Gajendar is Director of User Experience at CloudPhysics, focused on bringing beauty and soul to Big Data for virtualized datacenters.

Service blueprints: Laying the foundation

Authors: Lauren Chapman Ruiz
Posted: Wed, September 10, 2014 - 1:01:14

This article was co-written by Izac Ross, Lauren Chapman Ruiz, and Shahrzad Samadzadeh.

Recently, we introduced you to the core concepts of service design, a powerful approach that examines complex interactions between people and their service experiences. With this post, we examine one of the primary tools of service design: the service blueprint.

Today’s products and services are delivered through systems of touchpoints that cross channels and blend both digital and human interactions. The service blueprint is a diagram that allows designers to look beyond the product and pixels to examine the systems that bring a customer’s experience to life.

What is a service blueprint?

You may be familiar with customer journey mapping, which is a tool that allows stakeholders to better understand customer interactions with their product or service over time. The service blueprint contains the customer journey as well as all of the interactions that make that journey possible.

Because of this, service blueprints can be used to better deliver a successful customer experience. Think of it this way: You can look at a building, and you can read a description, but to build the building you need more than an image or description. You need the instructions—the blueprint.

Service blueprints expose and involve many of the core concepts we talked about in Service Design 101. To use that vocabulary: Service blueprints clarify the interactions between service users, digital touchpoints, and service employees, including the frontstage activities that impact the customer directly and the backstage activities that the customer does not see.

When should you use a service blueprint?

Service blueprints are useful when:

  • You want to improve your service offering. Knowing how your service gets produced is essential for addressing breakdowns or pain points.
  • You want to design a new service that mixes digital and non-digital touchpoints. Service blueprints shine when examining and implementing the delivery of complex services.
  • You have lost track of how the service gets produced. Services, like products, have manufacturing lines. The longer the service has been around or the larger the organization, the more siloed and opaque the manufacturing process can become.
  • There are many players in the service. Even the most simple-sounding service often involves IT systems, people, props, and partners all working to deliver the customer experience. A blueprint can help coordinate this complexity.
  • You are designing a service or product that is involved in producing other services. Products and services often interact with other services, particularly if they are b2b. Understanding your customer’s interactions with partners throughout the service can support a more seamless—and better—customer experience.
  • You want to formalize a high-touch service into a lower-touch form. New technologies can create opportunities for delivering higher-touch (and thus more expensive) services to broader audiences in new, more cost-effective forms. For example, think about the expanding world of online education. A blueprint uncovers the essential considerations for implementing a new, lower-touch service.

Keep in mind that there are times when a service blueprint is not the right tool! For example, if the goal is to design an all-digital service, journey mapping or process flows might be more appropriate.

Anatomy of a service blueprint

You understand the concept of service blueprints, and you know when to use them. Now, how do you make one? The starting elements are simple: dividing lines and swimlanes of information.

There are three essential requirements for a formal service blueprint:

  • The line of interaction: This is the point at which customers and the service interact.
  • The line of visibility: Beyond this line, the customer can no longer see into the service.
  • The line of internal interaction: This is where the business itself stops, and partners step in.

In between these lines are five main swimlanes that capture the building blocks of the service:

  • Physical evidence: These are the props and places that are encountered along the customer’s service journey. It’s a common misconception that this lane is reserved for only customer-facing physical evidence, but any forms, products, signage, or physical locations used by or seen by the customer or internal employees can and should be represented here.
  • Customer actions: These are the things the customer has to do to access the service. Without the customer’s actions, there is no service at all!
  • Frontstage: All of the activities, people, and physical evidence that the customer can see while going through the service journey.
  • Backstage: This is all of the things required to produce the service that the customer does not see.
  • Support processes: Documented below the line of internal interaction, these are the actions that support the service.
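To make the structure concrete, here is a minimal sketch in Python that models the five swimlanes as data. The lane names come from the list above; the class names, fields, and the pharmacy example steps are invented for illustration only.

```python
from dataclasses import dataclass, field

# Lane names taken from the article; the comments note where each lane
# sits relative to the three dividing lines.
LANES = [
    "physical_evidence",
    "customer_actions",     # above the line of interaction
    "frontstage",           # above the line of visibility
    "backstage",            # below the line of visibility
    "support_processes",    # below the line of internal interaction
]

@dataclass
class Step:
    """One moment in a single persona's journey through the service."""
    description: str
    lane: str

    def __post_init__(self):
        if self.lane not in LANES:
            raise ValueError(f"unknown lane: {self.lane}")

@dataclass
class Blueprint:
    """One persona, one path: a blueprint captures a single use case."""
    persona: str
    steps: list = field(default_factory=list)

    def lane(self, name):
        """Return the descriptions of all steps in a given swimlane."""
        return [s.description for s in self.steps if s.lane == name]

# A hypothetical pharmacy visit, one step per lane.
bp = Blueprint(persona="first-time pharmacy customer")
bp.steps += [
    Step("prescription slip", "physical_evidence"),
    Step("drops off prescription", "customer_actions"),
    Step("pharmacist confirms pick-up time", "frontstage"),
    Step("technician fills prescription", "backstage"),
    Step("insurance system verifies coverage", "support_processes"),
]

print(bp.lane("customer_actions"))
```

Note that the model enforces the one-persona, one-path constraint discussed later: variations of the journey would be separate `Blueprint` instances, not extra branches.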

Additional Lanes

For clarity, here are some additional swimlanes we recommend:

  • Time: Services are delivered over time, and a step in the blueprint may take 5 seconds or 5 minutes. Adding time along the top provides a better understanding of the service.
  • Quality measures: These are the experience factors that measure your success or value, the critical moments when the service succeeds or fails in the mind of the service user. For example, what’s the wait time?
  • Emotional journey: Depending on the service, it can be essential to understand the service user’s emotional state. For example, fear in an emergency room is an important consideration.
  • Splitting up the front stage: With multiple touchpoints that are working together simultaneously to create the service experience, splitting each touchpoint into a separate lane (for example, digital and device interactions vs. service employee interactions) can be very helpful.
  • Splitting up the backstage: The backstage can comprise people, systems, and even equipment. For detailed or low-altitude blueprints, splitting out lanes for employees, apps, data, and infrastructure can clarify the various domains of the service.
  • Phases of the service experience cycle: Services unfold over time, so it can add clarity to call out the phases of the experience cycle. For example: how customers are enticed to use the service, enter or onboard to the service, experience the service, exit, and then potentially re-enter the service and are thus retained as customers.
  • Photos/sketches of major interactions: Adding this lane can help viewers quickly grasp how the service unfolds over time, in a comic-book-like view.

You can build as much complexity as needed into your blueprint, depending on the complexity of the service.

The 5 Ps and how they relate to the blueprint

In Service Design 101, we talked about the 5 Ps: People, Processes, Props, Partners, and Place. Looking at the service blueprint structure, you can see that all of the 5 Ps are captured: Props and Place across the top, then People and Processes through the body of the blueprint, and finally Partners in the support processes. The Customer Actions lane captures the customer’s experience and actions.

Service blueprints can only be useful if they can be interpreted and implemented. We recommend the following common notation standards for making sure a blueprint is clear, focused, and communicates successfully.


Like a customer journey map, a service blueprint is focused on one persona’s experience through a single path. Think of it as a kind of scenario. Your blueprint will quickly become too complex and too difficult to read if you put multiple journeys on a single one, or try to capture service use with many variations. Blueprints show one use case or path over time, so additional blueprints must be used for variations of the service journey.

Common notations


When an arrow crosses a swimlane, value is being exchanged through the touchpoints of the service.

Arrows have a very important meaning beyond the direction of the value exchange. They indicate who or what system is in control at any given moment:

  • A single arrow means that the source of the arrow is in control in the value exchange.
  • A double arrow indicates that an agreement must be reached between the two entities to move the process forward. For example, agreeing on the pick-up time with a pharmacist, or negotiating a price in a non-fixed cost structure.


As you do the research and field observations necessary to build a blueprint, remember that blueprints can be a powerful way to communicate what’s working and what isn’t working for both service users and employees in the existing process. These notable moments can be captured in a number of ways, but we recommend icons with a legend to keep things legible and clear.

Some notable moments to consider capturing:

  • Pain points which should be fixed or improved
  • Opportunities to measure the quality of the service
  • Opportunities for cost savings or increased profits
  • Moments that are loved by the customer and should not be lost

What’s next?

As you begin to incorporate blueprinting into your process, remember that blueprints have altitude! They can capture incredible amounts of detail or summarize high-level understandings. When evaluating or implementing an existing set of service interactions, a low altitude map that details out the processes across touchpoints and systems is an invaluable management tool. To quickly understand the customer experience in order to propose design changes or develop a shared understanding, higher altitude diagrams can be most helpful.

Service blueprints break down the individual steps that are happening to produce the customer experience, and are an essential tool for the practice of service design.

In upcoming weeks we will talk about where this tool fits into the design process, and about gathering information to create service blueprints and journey maps.


Lauren Chapman Ruiz

Lauren Chapman Ruiz is a Senior Interaction Designer at Cooper in San Francisco, CA, and is an adjunct faculty member at CCA.


@Raffaele (2014 09 21)

What do you think of my storygraph?

@Kathryn Gallagher (2015 01 13)

Thank you Lauren for the great explanation.  I would like to create electronic service blueprints myself and wondered if you had any software apps. you could recommend, pref. free?  (I’m a student)  Thank you!

Big, hairy, and wicked

Authors: David Fore
Posted: Fri, September 05, 2014 - 1:00:56

Interaction designers sure can take things personally. When our behavior is driven by ego, this habit can be annoying. All that huffing and puffing during design crits! But when it springs from empathy for those our designs are meant to serve, then this signal attitude can yield dividends for all.

Nowhere are these sensitivities more critical to success—or more knotty—than when we confront systems whose complexity is alternately big, hairy, and wicked, and where making a positive impact with design can feel like pushing a rope.

I learned this again (apparently you can’t learn it too often) while leading design at the Collaborative Chronic Care Network (C3N). Funded by a landmark grant from the National Institutes of Health (NIH), C3N has spent nearly five years cultivating a novel learning health system, one that joins health-seekers and their families, their healthcare providers, and researchers in common cause around improving care.

The past few years have been filled with front-page stories about the difficulties of changing healthcare for the better… or even to determine if what you’ve done is for the better. To hedge such risks from the start, C3N adopted interaction design methods to enhance understanding of and empathy for all system participants: pediatric patients, their caregivers, and researchers alike. Not a panacea, to be sure, but still a step in the right direction.

This initiative has led me to a deep appreciation for the value of three publications—two new ones and a neglected classic—that offer methods, insight, and counter-intuitive wisdom for those whose job it is to design for systems.

Few people know this terrain better than Peter Jones. He wrote Design for Care on the Internet, sharing his steady progress on concepts and chapters with a broad swath of the interaction design community. The result is a densely packed yet accessible book with a strong narrative backbone that demonstrates a wide range of ways multidisciplinary teams and organizations have designed for healthcare environments.

Jones uses theory, story, and principles to demonstrate how designers and their collaborators are trying to change how healthcare is delivered and experienced. Practicing and teaching out of Toronto, Jones has great familiarity with the Canadian healthcare system as well as the US model, which means that readers are permitted to see how similar cultures get very different outcomes. 

He also makes a convincing argument that designers have an opportunity—an obligation even—to listen for and respond to a “call to care.” Otherwise, he observes, our collaborators—the ones with skin in the game—will find it difficult to take our contributions seriously. After all, a cancer patient seeking health or an oncology nurse running a clinical quality-improvement program need to know you care enough to stick around and see things through.

The depth of this book’s research, the clarity of its prose and schematics, and the methods it offers can help designers take advantage of a historic opportunity to improve the healthcare sector. I expect Design for Care to be an evergreen title, used by students, teachers, and practitioners for years to come.

Every designer I’ve worked with, without exception, loves doing field research. It might begin as a desire to get out from behind their pixel machines. But they always come back to the studio ready to blow away old ways of thinking. 

The fruits of qualitative research not only improve designs; they also equip designers with invaluable insight and information when negotiating with product managers, developers, and executives.

And what do they find out there among the masses? 

People are struggling to realize their goals amidst the whirling blades of systems not designed for their benefit. That’s why it’s so valuable (and challenging) to spend time with folks at factories, agricultural facilities, hospitals, or wherever your designs will be encountered.

But what if you can’t get out into the field? How do you make sure that your nicely designed product is going to be useful? In other words, how do you stop yourself from the perfect execution of the wrong thing? A new whitepaper from the National Alliance for Caregiving has some answers.

It is my burden to read a lot of healthcare papers. What helps the work of Richard Adler and Rajiv Mehta rise above the rest is how carefully they inflect their recommendations toward the sensibilities and needs of product designers and software developers. The authors are Silicon Valley veterans who place the problem into context, then make sure to draw a bright line from research to discovery to requirements. 

Equipped with this report, designers are far more likely to create designs that serve the true needs of the people who have much to gain—and lose—during difficult life passages. It is filled with compelling qualitative research results and schematics and tables that shed light on the situations and needs of family caregivers. 

Why is this important? Because these are family and community members—people like you and me—who constitute the unacknowledged backbone of the healthcare system. They are also, perhaps, the most overburdened, deserving, and underserved population in the field of design today.

So now we’re ready to redesign healthcare for good! 

But wait a minute…

That’s not just difficult, it’s probably impossible. After all, we humans are exceptionally good at over-reaching, but less good at acknowledging the limits of what we can foresee. We want to change everything with a single swift stroke, but typically that impulse leaves behind little else but blood on the floor.

John Gall, a physician and medical professor, observed that systems design is, by and large, a fool’s errand. But even fools need errands, which is why he wrote the now-legendary SystemANTICS. Gall’s perspective, stories, and axioms make this book a must-read, while his breezy writing style makes the book feel like beach reading. 

But make no mistake: Gall has lived and worked in the trenches, and he knows of what he speaks. His work has had such a profound influence on systems thinking, in fact, that we now have Gall’s Law: “A complex system that works is invariably found to have evolved from a simple system that worked.”

Others are similarly concise and useful, such as “New systems mean new problems” and “A system design invariably grows to contain the known universe.”

My favorite is this: People will do what they damn well please.

That last one comes from biology, of course, but it’s true for human systems as well. 

Gall implores you to acknowledge that people will use your systems and products in the strangest ways… and he challenges you to do everything you can to anticipate some of those uses, and so build in resilience. 

As design radically alters the rest of the economy with compelling products and services, healthcare remains a holdout. The current system in whose grip we find ourselves appears designed to preserve privileges and perverse incentives that propagate waste at the expense of better outcomes for all. Protecting the prerogatives of incumbent systems and business models too often stifles innovation. But if we put our minds and hearts to it, and we’re aware of the pitfalls, we are more likely to push the rock up the hill at least another few inches.

Posted: Fri, September 05, 2014 - 1:00:56

David Fore

David Fore cut his teeth at Cooper, where he led the interaction design practice for many years. Then he went on to run Lybba, a healthcare nonprofit. Now he leads Catabolic, a product strategy and design consultancy. His aim with this blog is to share tools and ideas designers can use to make a difference in the world.

@Jonathan T Grudin (2014 09 08)

Very nice essay, David. Thanks.

@Lauren Ruiz (2014 09 16)

So true! I often find we designers want to talk all about what needs to change, and that design thinking can tackle these problems, but the how is ever-elusive. How do you affect big systems? These books are good guides in starting to answer that question.

Inside the empathy trap

Authors: Lauren Chapman Ruiz
Posted: Mon, August 11, 2014 - 12:43:17

It’s not uncommon to find yourself closely identifying with the users you are designing for, especially if you work in consumer products. You may even find yourself exposed to the exact experiences you’re tasked with designing, as I recently discovered when I went from researching hematologist-oncologists (HemOncs) and their clinics to receiving care from a HemOnc physician in his clinic. (Thankfully, all is now well with my health.)

This led to some revealing insights. Suddenly I was approaching my experience not just as a personal life event, but as both the designing observer, taking note of every detail, and the subject, or user, receiving the care. Instead of passively observing, I focused on engaging in a walk-a-mile exercise, literally walking in my own shoes, as my own user.

In the past, I’ve written about the importance of empathy in design, but this was an extreme. I was able to identify my personal persona, watch to confirm the validity of workflows, and direct multitudes of questions to the understanding staff members. Such an experience can be extremely positive, but it reminded me of the dangers of bias and of designing solely for one person.

For instance, most of my caregivers enjoyed chatting, and one even stated how fun it was to have a patient who inquired about everything. That was my reminder that most patients are not like me: They have not studied this exact space and are therefore less comfortable asking questions. I had to remember that my experience was something unique.

When we find ourselves in these situations, we need to remember that what happens to us may enhance our knowledge, but it cannot become the only conceivable experience in our minds. Too often we walk dangerously close to designing for ourselves or for “the identifiable victim,” single-mindedly pursuing an individual solution to a particularly negative outcome and losing focus on improving outcomes for “the many.”

A New Yorker article called “The Baby in the Well” builds a case against empathy, pointing out that it can cause us to misplace our efforts, missing the needs of “the many.” The article shows that the key to engaging empathy is the “identifiable victim effect”: the tendency for people to offer greater aid when a specific, identifiable person, or “victim,” is observed under hardship, as compared to a large and vaguely defined group with the same need. The article states:

As the economist Thomas Schelling, writing forty-five years ago, mordantly observed, “Let a six-year-old girl with brown hair need thousands of dollars for an operation that will prolong her life until Christmas, and the post office will be swamped with nickels and dimes to save her. But let it be reported that without a sales tax the hospital facilities of Massachusetts will deteriorate and cause a barely perceptible increase in preventable deaths—not many will drop a tear or reach for their checkbooks.”

When we design, we pursue a broader type of empathy. As a colleague once said to me, designers need to identify with the whole user base. User-centricity is the ability to recognize that there are a number of personas, each with different goals, desires, challenges, behaviors, and needs. We design for all of these personas, recognizing the different goals each is trying to accomplish and the different ways each goes about achieving them.

So what are the key takeaways from my experience?

  1. Situations that help us build empathy for our users are invaluable because they give us deep knowledge, but we should recognize and feel empathy for the many. Looking at such situations through the lenses of multiple personas can help us avoid this trap.
  2. Remember that the empathy we look to build in design is not just about feelings, but rather about understanding goals, the reasons for these goals, and how they are or aren’t currently accomplished.
  3. Have some empathy for yourself—it’s hard to untangle our personal feelings from the work we do on a day-to-day basis. Remember, we’re all human, and we will fall into the trap of focusing on ourselves from time to time. Recognizing this and looking out for the places where it affects our work is the best we can do.

What about you—have you found yourself in similar situations? How have you approached them? Are there tricks you use or pitfalls you work to avoid? Please use the Twitter hashtag #designresearch to share in the conversation. 

Illustration by Cale LeRoy


Lauren Chapman Ruiz

Lauren Chapman Ruiz is a Senior Interaction Designer at Cooper in San Francisco, CA, and is an adjunct faculty member at CCA.

Diversity and survival

Authors: Jonathan Grudin
Posted: Tue, August 05, 2014 - 12:40:21

In a “buddy movie,” two people confront a problem. One is often calm and analytic, the other impulsive and intuitive. Initially distrustful, they eventually bond and succeed by drawing on their different talents.

This captures the core elements of a case for diversity: When people with different approaches overcome a natural distrust, their combined skills can solve difficult problems. They must first learn to communicate and understand one another. In addition to the analyst and the live wire, buddy movies have explored ethnically, racially, and gender-diverse pairs, intellectual differences (Rain Man), ethical opposites (Jodie Foster’s upright agent Clarice Starling teamed with psychopathic Hannibal Lecter), and alliances between humans and animals or extra-terrestrials.

Diverse buddies are not limited to duos (Seven Samurai, Ocean’s Eleven). All initially confront trust issues. Lack of trust can block diversity benefits in real life, too: Robert Putnam demonstrated that social capital is greater when diversity is low and that cultures with high social capital often fare better. However, the United States has prospered with high immigration-fueled diversity, despite the tensions. When is diversity worth the price?

In the movies, combining different perspectives solves a problem that no individual could. The moral case for racial, gender, social, or species diversity is secondary, although these differences may correlate with diverse views and skills. At its core, diversity is about survival, whether the threat is economic failure or the Wicked Witch of the West. “Don’t put all your eggs in one basket,” financial planners advise. A caveat is that diversity is not always good. Noah had to bar Tyrannosaurus rex from the ark; it wouldn’t have worked out. For a given task, some of us will be as useful as the proverbial one-legged man at an ass-kicking party. Exhortations in support of diversity rarely address this.

Diversity in teams receives the most attention. My ultimate focus is on the complex task of managing diversity in large organizations—companies, research granting agencies, and academic fields. But a discussion of diversity and survival has a natural starting point.

Biological diversity

Diversity can enable a species to survive or thrive despite changes in environmental conditions. Galapagos finches differ in beak size. Big-beaked finches can crack tough seeds, small-beaked finches ferret out nutritious fare. Drought or a change in competition can rapidly shift the dominant beak size within a single species. The finches do well to produce some of each.

In two situations, biological diversity disappears: A species with a prolonged absence of environmental challenge adapts fully to its niche, and a species under prolonged high stress jettisons anything nonessential. If circumstances shift, the resulting lack of diversity can result in extinction. In human affairs too, complacency and paranoia are enemies of diversity.

Human diversity

Coming to grips with workplace diversity is difficult because all forms come into play. Our differences span a nature-nurture continuum. Race and gender lie at one end, acquired skills at the other. Shyness or a preference for spatial reasoning may be inherited; cultural perspectives are acquired. In his book The Difference, Scott Page focuses on the benefits of diverse cognitive and social skills in problem-solving. As I was writing this, a friend announced a startup for which a major investor was on board provided that other investors join: He wants diverse concurring reviews.

A team matter-of-factly recruiting core skills doesn’t think of them as diversity—but an unusual skill becomes a diversity play. Whether differences originate in nature or nurture isn’t important. Understanding their range and how they can clash or contribute is.

Drawing on thousands of measurements and interviews, William Herbert Sheldon’s 1954 Atlas of Men [1] yielded three physical types, each focused on an anatomical system accompanied by a psychological disposition: (i) thin, cerebral, ectomorphs (central nervous system), (ii) stocky, energetic mesomorphs (musculature), and (iii) emotional, pleasure-seeking endomorphs (autonomic nervous system). Consider a team comprising a scarecrow, lion, and tin man keen to establish ownership of a brain, courage, and a heart, or Madagascar’s giraffe, lion, and hippopotamus.

Aldous Huxley used Sheldon’s trichotomy in novels. Organizational psychologists favor broader typologies. Companies know that good teams can be diverse and try to get a handle on it. A popular tool is the Myers-Briggs Type Indicator. This 2x2 typology was built on Carl Jung’s dimensions of introversion/extraversion and thinking/feeling. It is consistent with his view that a typology is simply a categorization that serves a purpose. Other typologies are also used. Early in my career, my fellow software developers and I were given a profiling survey that I quickly saw would indicate whether we were primarily motivated by (i) money, (ii) power, (iii) security, (iv) helping others, or (v) interesting tasks. If I filled it out straightforwardly I’d be (v) followed by (iv). Instead, I drew on my childhood Monopoly player persona, and in the years that followed received very good raises. I took the initiative to find interesting tasks, rather than relying on management for that.

Teams and organizations

Organizations generally differ from teams in several relevant respects. Organizations are larger, more complex, and last longer [2]. An organization, a team of teams, requires a greater range of skills than any one team. Organizations strive to minimize the time spent problem-solving, where diversity helps the most, and maximize the time spent in routine execution. Most teams continually solve problems; one change in personnel or external dependency can alter the dynamics and lead to a sudden or gradual shift in roles.

Should an organization group people with the same skill or form heterogeneous teams? Should a company developing a range of products have central UX, software development, and test teams, or should it form product teams in which each type is represented? Homogeneous teams are easier to manage—assessing diverse accomplishments is a challenge for a team manager. Diverse teams must spend time and effort learning to communicate and trust.

Homogeneous teams could be optimal for an organization that is performing like clockwork, heterogeneous teams better positioned to respond in periods of flux. A centralized UX group is fine for occasional consulting, an embedded UX professional better for dynamic readjustment.


Consider a working group with a single manager, such as a program committee for a small conference, an NSF review panel, or a team in a tech company. The scope of work is relatively clear. Diversity may be limited: quant enthusiasts may keep out qualitative approaches or vice versa; a developer-turned-manager may feel that a developer with some UX flair has sufficient UX expertise for the project.

Where does diversity help or hinder? Joseph McGrath identified four modes of team activity: taking on a new task, conflict resolution, problem-solving, and execution. Diversity often slows task initiation. It can create conflicts. Diversity is neutral in execution mode [3], where a routine job has been broken down into component tasks, minimizing complex interdependencies.

Scott Page describes contributions of diversity to the remaining mode of team activity, problem-solving. Diversity helps when clearly recognizable steps toward a solution can be taken by any team member, as in open source projects or when several writers work on dialogue for a drama. Although a team executing in unison like a rowing crew may not benefit from diversity, most teams encounter problems at times. Members are often collocated, enabling informal interaction, learning to communicate, and building trust. When resource limitations force hard decisions on a team, members understand the tradeoffs. Subjective considerations sometimes override objective decision-making on behalf of team cohesion: “We just rejected one of her borderline submissions, let’s accept this one.” “His grant proposal is poor but his lab is productive, let’s accept it at a reduced funding level.” In contrast, responses to organizational decisions are often less nuanced.

Teams have teething pains, conflicts, and managers who can’t evaluate workers who have different skill sets or personalities. But in general, diverse teams succeed. One that fails is replaced or its functions reassigned.

When time is limited, introducing diversity is challenging. I was on a review panel that brought together organizational scientists and mathematicians. The concept of basic research in organizational science mystified the mathematicians, to whom it seemed axiomatic that research on organizations was applied. Another review panel merged social scientists studying collaboration technology and distributed AI researchers; the latter insisted loudly that every grant dollar must go to them because DARPA had cut them off and they had mouths to feed.


Organizations often endorse diversity, perhaps to promote trust in groups that span race, gender, and ethnicity. However, it is rarely a priority. Given that managing diversity is a challenge, why should a successful organization take on more than necessary? An organization’s long stretches of routine execution don’t benefit from a reservoir of diversity that enhances problem-solving. Complacency sets in. Perhaps diversity would yield better problem-solving, outweighing the management costs. Perhaps not.

The biology analogy suggests that a reservoir of skills could enable an organization to survive an unexpected threat. We don’t need big beaks now, but keep a few around lest a drought appear. A successful organization outlives a team, but few surpass the 70-year human lifespan. Perhaps identifying and managing the diverse skills that could address a wide-enough range of threats is impossible; managing the clearly relevant functions is difficult enough.

One approach is to push the social, cognitive, and motivational diversity that aids problem-solving down to individual teams to acquire and manage, using tools such as employee profile surveys. Unfortunately, it doesn’t suffice for organizational purposes to have skills resident in teams. Finding and recruiting a specific skill that exists somewhere in a large organization is a nightmare. I have participated in several expertise-location system-building efforts over 30 years, managing two myself. The systems were built but not used. Incentives to participate are typically insufficient. Similarly, cross-group task forces have been regarded as stop-gap efforts that complicate normal functioning.

Another approach is to assign teams to pursue diverse goals. For example, one group could pursue low-risk short-term activities as another engages in low-probability high-payoff efforts, drawing on different skills or capabilities.

Assessment at large scale

Organizations can’t be infinitely diverse. A company does a market segmentation and narrows its focus. NSF balances its investment across established and unproven research. A conference determines a scope. When unexpected changes present novel problems, will a reserve of accessible skills and flexibility exist? Management can draft aspirational mission statements, but in the end, responsiveness is determined by review processes, such as employee performance evaluations, grant funding, and conference and journal reviewing.

A pattern appears as we scale up: Assessing across a broad range not only requires us to compare apples and oranges (and many other fruits), it requires deciding which apples are better than which oranges. Sometimes all the apples or all the oranges are discarded.

Large organizations. How broadly should assessments and rewards be calibrated laterally across an organization? Giving units autonomy to allocate rewards can lead to the perception or fact that low-performing units are rewarded equivalently to stronger units. A concerted effort to calibrate broadly takes time and can lead to the dominant apples squeezing out other produce. For example, when rewards are calibrated across software engineering, test, and UX, the more numerous software engineers to whom “high-performing UX professional” is an oxymoron can control the outcome.

Organizations also sacrifice diversity to channel resources to combat exaggerated external threats. A hypothetical company with consumer and enterprise sales could respond to a perceived threat to its consumer business by eliminating enterprise development jobs and devoting all resources to consumer for a few years. When the pendulum swings back to enterprise, useful skills are gone.

Granting agencies. An agency that supports many programs has three primary goals: (i) identify and support good work within each program (a team activity), (ii) eliminate outdated programs, which facilitates (iii) initiating new programs, expanding diversity. Secondary diversity goals are geographic, education outreach, underrepresented groups, and industry collaboration.

Let’s generously assume that individual programs surmount team-level challenges and support diversity. The second goal, eliminating established programs that are not delivering, can be close to impossible. Once a program survives a provisional introductory period, it is tasked to promote the good work in its area—there is an implicit assumption that there is good work. Researchers in a sketchy program circle the wagons: They volunteer for review panels and for rotating management positions, submit many proposals (“proposal pressure” is a key success metric), rate one another’s proposals highly, and after internal debates emerge with consensus in review panels.

The inability to eliminate non-productive programs impedes the ability to add useful diversity. For example, NSF has a process for new initiatives that largely depends on Congress increasing its budget. The infrequent choices can be whimsical, such as the short-lived “Science of Design” and “CreativIT” efforts [4]. I participated in three high-level reviews in different agencies where everyone seemed to agree that science suffers from inadequate publication of negative results, yet we could find no path to this significant diversification given current incentive structures.

Large selective conferences. Selective conferences in mature fields form groups to review papers in each specialized area. Antagonism can emerge within a group over methods or toward novel but unpolished work, but the main scourge of diversity is competition for slots, which causes each group to gravitate toward mainstream work in its area. Work that bridges topic areas suffers. Complete novelty finds advocates nowhere. Researchers often wistfully report that their “boring paper” was accepted but their interesting paper was rejected.

Startups: Team and organization

A startup needs a range of skills. It may avoid diversity in personality traits: People motivated by security or power are poor bets. Goals are clear and rewards are shared; there is a loose division of labor with everyone pitching in to solve problems. The short planning horizon and dynamically changing environment resemble a team more than an established enterprise. With no shortage of problems, diversity in problem-solving skills is useful, but every hire is strategic and there is little time to develop trust and overcome communication barriers.

Professional disciplines

Competition for limited resources works against community expansion and diversity. Two remarkably successful interdisciplinary programs, Neuroscience and Cognitive Science, originated in copious sustained funding from the Sloan Foundation. In contrast, I’ve invested more fruitless time and energy than I like to think about trying to form umbrella efforts to converge fields that logically overlapped: CHI and Human Factors (1980s), CHI and COIS (1980s), CSCW and MIS (1980s), CSCW and COOCS/GROUP (1990s), CHI and Information Systems (2000s), and CHI and iConferences (2010s). An analysis of why these failed appears elsewhere.


This is not the short essay I expected, and it doesn’t cover the equity considerations that drive diversity discussions in university admissions and workplace hiring. What can we conclude? Noting that diversity requires up-front planning to possibly address unknown future contingencies, I will consider where the biology analogy does and doesn’t hold.

With moderate uncertainty, diversity is a good survival strategy; with major resource competition, diversity yields to a focus on the essentials. So avoid exaggerating threats. Next, given that choices are necessary, what dimensions of diversity should we favor? Ecological cycles favor the retention of capabilities that were once useful—a drought may return. Economic pendulum swings argue for the same.

However, the march of science and technology creates both obsolescence and novel opportunities and challenges. Some, but not all, can be anticipated by studying trends. It is fairly empty to recommend focusing on efficiency in execution while retaining flexibility, but “avoid overreacting to perceived threats” is again good advice. Businesses narrow when they should diversify. Government funding is poured into defense and intelligence at the expense of health, education, infrastructure, and environment. And finally, my favorite hot button example, large conferences.

HCI researchers have always been terrified of appearing softer than traditional computer science and engineering. So we followed their lead. We drove down conference acceptance rates, kept out Design, and chased out practitioners. But other CS fields evolve more slowly, with greater consensus on key problems. Human interaction with computers explodes in all directions. Novelty is inevitable, yet with acceptance rates of 15% - 25%, each existing subfield accepts research central to today’s status quo, leaving little room for research that spans areas, is out of fashion but likely to return, involves leading edge practitioners, or is otherwise novel [5]. Could our process consign us to be followers, not leaders?


1. His atlas of women was not completed.

2. To be clear about my terminology use, a “football team” is an organization, although the group of players on the field together is a team. Boeing called the thousands of people working on the 777 a team, but here it would be an organization.

3. An exception is an organization tasked with problem-solving. The World War II codebreaking organization at Bletchley Park made extraordinary use of diversity, documented in Sinclair McKay’s The Secret Lives of Codebreakers: The Men and Women Who Cracked the Enigma Code at Bletchley Park (Plume, 2010).

4. DARPA is an agency with top-down management which can and does eliminate programs, sometimes restarting them years later.

5. See Donald Campbell’s provocative 1969 essay “Ethnocentrism of Disciplines and the Fish-scale Model of Omniscience.”

Thanks to Gayna Williams for ideas and perspectives, John King, Tom Erickson, and Clayton Lewis for comments on an earlier draft.


Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

Possibilities, probabilities, and sensibilities

Authors: Uday Gajendar
Posted: Thu, July 31, 2014 - 11:27:43

Design is an iterative activity involving trajectories of exploration and discovery, of the problem space, the target market, and the solutions, towards making good choices. As the primary designer charged with delivery of an optimal solution, I must contend with such problems of choice, and thus trade-offs. Designing is fundamentally about mediating “choices”: what elements to show on-screen, which pathways to reveal, how to de-emphasize some features or prioritize others, and so forth. Some are “good choices” and some are not so good. So, if choice is at the heart of designing, how does a designer effectively handle too many choices and options—a dazzling array propositioned by earnest product managers seeking revenues and tenacious engineers wanting to showcase brilliance? It’s a veritable challenge that I confront in the course of daily design work in my own professional life, too. I offer a potential framework that I have been evolving and applying, which may be useful: iteratively defining the possibilities, probabilities, and sensibilities. Let me explain further...

Possibilities: This involves mapping out to the fullest extent all possible variants of user types, contexts of use, or solutions for a problem. Even if it’s wild or unfeasible, or an “edge case,” just capture it anyway so it is recorded for everyone on the team to discuss. This shows commitment to open-minded understanding of the situation, creating trust with colleagues, which is vital to delving with credibility into the “possibility space.” To make this practical, in my own work lately I’ve done the following to express this wide set of possibilities: 

  • Itemize all possible visual states and signals that apply to a data object, given the impending factors as interpreted by the back-end logic (regardless of whether a user understands or sees it) as a giant matrix.
  • Map out all possible filter and sorting combinations from a multifaceted filter panel control, to force understanding of potential impacts on the UI and user’s workflow.
  • Diagram all possible pathways for accessing the main application, depending on various roles, permissions, states, timeouts, and errors, in exhaustive detail so the team is fully aware.

Probabilities: Next, by virtue of the previous exercise and artifacts, force a critical dialogue on the actual likelihood of these possible events or states to actually happen. This necessarily requires informed stakeholders to contribute and clarify, stake out a position and defend it with data (empirical or anecdotal, as needed). This also surfaces qualifying conditions that were implicit and encourages everyone to understand how or why certain possibilities are not favorable or likely. This awareness on the team increases empathy of the situation, and possibly incites more user studies or other “pre-work” around the business or technical parameters. 

Again, to make this practical, the dialogue requires everyone in the room (or otherwise present) to ascertain the criteria for likelihood and to stack-rank the probabilities. This, of course, is a sneaky way of forcing priorities—essential to designing and focusing down to an optimal solution. Clearly, doing this well requires historical and empirical data trends, as well as observational data of users in the wild. Otherwise you’re making wild guesses, which is a sign of the team’s research needs. (That’s a whole other topic!)
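To make the enumerate-then-rank move concrete, here is a minimal Python sketch. The filter facets, their values, and the likelihood weights are all invented for illustration; in practice they would come from your product’s actual controls and from the data-backed dialogue described above.

```python
from itertools import product

# Hypothetical facets for a multifaceted filter panel
# (invented for illustration; substitute your product's real controls).
facets = {
    "status": ["active", "archived"],
    "owner": ["me", "anyone"],
    "sort": ["name", "date"],
}

# Possibilities: enumerate every combination, even the unlikely ones,
# so the whole team can see and discuss the full space.
possibilities = [dict(zip(facets, combo)) for combo in product(*facets.values())]
print(len(possibilities))  # 2 x 2 x 2 = 8 combinations to review

# Probabilities: stack-rank by estimated likelihood. These weights are
# placeholders; real ones come from usage data or stakeholder judgment.
def estimate(p):
    return (0.9 if p["status"] == "active" else 0.1) * \
           (0.7 if p["owner"] == "me" else 0.3)

ranked = sorted(possibilities, key=estimate, reverse=True)
# The top-ranked entries are the candidates for the "sensibilities"
# discussion; the long tail may only need graceful fallbacks.
```

The point is not the arithmetic but the artifact: a single ranked list the team can argue over, which is exactly what forces the prioritization described above.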

After your probabilities are ranked and focused, what’s next? It’s time to introduce the humanistic, poetic element of desirability, which is the “sensibilities” part.

Sensibilities: Now, having winnowed down to some set of actually probable states, types, situations, or whatever, the question you must raise is “What is most sensible?” for the targeted user. This refers to some articulation that is meaningful, relevant, maybe even delightful—literally engaging with the user’s senses to become something satisfying and productive. This requires a more ambiguous interaction with the team, but one grounded in mock-ups, prototypes, and animations that portray what is sensibly viable for the given personas and their goals and desires. It also requires tapping into a set of design principles and cultural values as espoused by the company—a reflection of the central brand promise.

Designing is about choices and arriving at a balanced solution that strives to meet a variety of demands from many perspectives. It’s easy as a designer to get caught in the mess of too many options and lose sight of what matters most to customers: ordinary people leading busy yet satisfying lives. By thinking through the possibilities, probabilities, and sensibilities (and collaborating with teammates on them), you can shape a structured approach to reaching that optimal solution.


Uday Gajendar

Uday Gajendar is Director of User Experience at CloudPhysics, focused on bringing beauty and soul to Big Data for virtualized datacenters.

Report from DIS 2014 part 1: Moral status of technological artifacts

Authors: Deborah Tatar
Posted: Fri, July 25, 2014 - 11:25:01

Peter-Paul Verbeek gave the opening keynote speech last month at the DIS (Designing Interactive Systems) conference in Vancouver. His topic was the moral status of technological artifacts. Do they have any? 

He argues that “yes, they do.” The argument runs that humans and objects are co-creations or, as he prefers, hybrids. Just as J. J. Gibson long ago argued for an ecology between the human and the environment—the eye is designed to detect precisely those elements of the electromagnetic spectrum that are usefully present in the environment—so too are humans and designed objects culturally co-adapted. This, by itself, is not revolutionary. In fact, it is on the basis of this similarity that Don Norman brought the term affordance into human-computer interaction. It brings together both the early, easier-to-swallow idea in cognitive psychology and human-computer interaction that the ways that we arrange the space around us are extensions of our intelligence, and the post-modern philosophical move. Suchman [1] uses the word re-creations rather than hybrids to describe the intertwining of (high-tech) artifact and person. However, Suchman, who spent many years embedded in design projects, emphasizes our active role in such re-creations, the things that we do, for example, in order to be able to imagine that robots have emotions. 

But Verbeek does not stop there. He moves toward an enhanced framework from which to understand this hybrid relationship. He draws on Don Ihde to identify different kinds of relationships between the designed artifact and the human. The artifact may be part of the human, bearing an embodied relationship, as with glasses. But now we have to think embodied, as in Google glasses? The artifact may have a hermeneutic relationship to the human, bringing information for our consideration into the bright circle of our recognition, or excluding it, as with the thermometer. Now, we ask a FitBit? The artifact may have a contrastive role, called alterity or otherness, as with a robot. Last, the artifact may provide or create background—maybe even the soundtrack of our lives as we jog-trot down the beach to Chariots of Fire. In the context of these distinctions, Verbeek asks what we know, do, or hope for with respect to these technologies. These are excellent questions, and lest we be too hasty in our answers, his examples summarize unintended effects and how new technologies create new dilemmas and possibilities. Courtesy of modern medical testing, for example, much congenital disease is moving from its status as fate to a new status as decision. Fetal gender decisions will soon be playing out in homes near you—and everywhere else. The decisions about whether to have a girl or a boy are local, but society has an interest in anticipating and gauging the effects. (One of the reasons for the oft-depicted plight of women in Jane Austen’s England was the dearth of men caused by England’s imperial struggles.)

What are the consequences of Verbeek’s analysis? Here is where the design dilemmas start to build. Let us suppose that we just accept as normal the idea that we are hybrids of artifacts and biology. Fair ‘nuff. But Verbeek goes beyond this. He rejects the separation of the idea of human and machine in the study of human-machine interaction. The difficult part is that the relationship of the designer to the design components is not the same. The individual designer controls the machine, but only influences the person. The power of the relationship, the components of the relationship that cause us to conceptualize the relationship as so strong as to constitute hybridity, is precisely what leads to the need to study the relationship. 

What makes me impatient about Verbeek’s approach is that, as I understand it, he does not prioritize recent changes in the power of the machine. For many years, some of us (cf. Engelbart’s vision of human augmentation) imagined a future in which people could do precisely what they were already doing, and have something for free via the marvels of computing. We could just keep our own calendar, for example, and have it shared with others.

But we do not hear this rhetoric any more. Now the rhetoric is one of expectation that our most private actions will do precisely those things that can be shared by the system widely. The intransigence of the computer wears us down even where we would prefer to resist and where another human being would give us a break. Verbeek’s position—like Foucault’s—feeds into the corporate, systemic power-grab by weakening our focus on those design and use actions that we can indeed take. 

If I am frustrated with Verbeek, I am more frustrated with myself. Our own “Making Epistemological Trouble” goes no further towards design action than to advance the hope that the third paradigm of HCI research can, by engaging in constant self-recreation, stir the design pot. These are the same rocks against which so much feminist design founders. In Verbeek’s view, we designers can have our choice of evil in influencing hybridity. Influence can be manifested as coercive, persuasive, seductive, or decisive (dominating). 

The designer may think globally but must act locally. The thing is that design action is hard. Moral design action is harder. In the May/June 2014 issue of Interactions, I published a feature that advanced a theory of what we call “human malleability and machine intransigence.” The point here is to draw attention to one class of design actions that often can be taken by individual designers, those that allow users to reassert that which is important to their identity and vision of themselves in interacting with the computer. 

Often when there is a dichotomy (focusing on human-machine interaction vs. rejecting the human-machine dichotomy), there are two ways of being in the middle. One way is to just reject the issue altogether. “It’s too complicated.” “Who can say?” “There are lots of opinions.” But the other is to hold fast to the contradiction. In this case, it means holding onto the complexity of action while we think out cases. And it means something further than this. It means that individual thinkers, like you and me, regardless of our corporate status or indebtedness, should resist the temptation to be silenced by purely monetized notions of success. To end with one small but annoying example, it is a tremendous narrowing of the word helping to say that corporations are helping us by tailoring the advertisements we see to things we are most likely to buy. Yeah, sure. It’s helping in some abstract way, but not as a justification for ignoring the manifest injustices inherent in the associated perversion of shared knowledge about the world. I think that Verbeek may have been saying some of this when he talked about the need to anticipate mediations, assess mediations, and design mediations, but my impatience lies in that I want it said loudly, repeatedly, and in unmistakable terms.


1. Suchman, L. Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge University Press, New York, 2007.

Thanks to Jeffrey Bardzell for comments on an earlier version of this!


Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.

Ethical design

Authors: Ashley Karr
Posted: Thu, July 24, 2014 - 10:29:26

Takeaway: Something as fundamental to the human experience as ethics ought to be a fundamental part of human-centered design.

If only it were all so simple! If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?
—Aleksandr Solzhenitsyn, The Gulag Archipelago

For a long while, I have been angry and frustrated with the design process and design community. It seems that our sole purpose is to make things that maximize profits as quickly as possible. User experience research and design is often used as a means to trick, manipulate, and separate people from their money and/or personal information. 

Finally and thankfully, I came to realize the cause of my anger and frustration.


Ethics are almost entirely absent from UX. I have six HCI, UX, and design textbooks and one seminal Air Force report on user interface design within arm’s reach at this very moment. That is a total of seven well-respected texts in our field. Only two of them even mention ethics. Of these two, one textbook has a paragraph on ethics regarding recruiting participants for research. The other has one-and-a-half pages on ethical interaction design, but it fails to even define ethics.

Define ethics

I have decided to do my part in rectifying this situation. I will begin by defining ethics. The word comes from the Greek ethos, meaning customs. Ethics are right behaviors according to the customs of a particular group. I like to think of ethical things as thoughts, words, behaviors, designs, systems, and customs that are cumulatively more beneficial to life than they are harmful. Ethics are an essential part of civilization. Without ethics, people would not have ideas of right and wrong. They make society more stable and help people choose right actions over wrong ones. A society without ethics will fail sooner rather than later. It is important to state, however, that customs aren’t necessarily ethical. Often unethical customs inspire social change, movements, and revolutions.

Ethics require constant practice and consideration—like good hygiene. We cannot wash our hands once and expect them to be clean for life. We must wash our hands multiple times a day, every day, in order for our hands to remain clean. With ethics, we cannot engage in one ethical act in our lives and assume that we are forever after an ethical person. We must practice and consider ethics at every turn. As Abraham Lincoln said, “There are few things wholly evil or wholly good. Almost everything is an inseparable compound of the two, so that our best judgment of the preponderance between them is continually demanded.”

Why ethics are important in our field

There are three reasons why it is imperative that as makers of interactive computing technology we must embed ethics into our culture, methods, and metrics:

  1. First, what we create and put into the world has actual effects on actual people. Interactive designs do things. We need to make sure that our efforts are going into making things that do good things.
  2. Second, computing technology has the ability to amplify human abilities and spread exponentially in record time.
  3. Third, the ability to design and develop computing technology is to today’s world what literacy was two thousand years ago. We are (tech)literate in a world of people who cannot read. We are the leaders and creators of the sociotechnical system in which we now live. We are powerful—more powerful than we even realize. With great power comes great responsibility.

Allies in the field

Very few professionals within our field are actively incorporating ethics into their work. I have managed to find a few, and I will highlight the main objectives of three researchers here. (Please feel free to share with me other professionals working on this topic. I would love to hear from you.)

  1. Florian Egger addresses deceptive technologies. He states there is a fine line between user experience and user manipulation, and insights into user behavior and psychology can be used for ethical or unethical purposes. If designers understand certain “dirty tricks” that their unethical counterparts devise, users can be warned of these practices before falling victim. He also states that persuasion can be used for the good of the user.
  2. Sarah Deighan is conducting research on ethical issues occurring within UX, including how UX professionals view these issues. She is attempting to make ethical resources available for UX professionals.
  3. Rainer Kuhlen wrote The Information Ethics Matrix: Values and rights in electronic environments. He explores new attitudes toward knowledge and information (sharing and open-access) and defines communication rights as human rights. He states that communication is a fundamental social process, a basic human need, and the foundation of all social organization. Everyone, everywhere, should have the same opportunity to communicate, and no one should be excluded from the benefits of access to information.

Define Ethical Design

In order to foster the adoption of ethics into our design and development processes, I am creating a conceptual framework called Ethical Design. It allows designers and design teams to create products, services, and systems that do no harm and improve human situations. Ethical design extends to all people and other living things that are in any way involved in the product, service, and/or system lifecycle. Borrowing from About Face by Cooper, Reimann, and Cronin, I explain the meaning of doing no harm and improving the human situation below.

Do no harm

  • Interpersonal harm: loss of dignity, insult, humiliation
  • Psychological harm: confusion, discomfort, frustration, coercion, boredom
  • Environmental harm: pollution, elimination of biodiversity
  • Social and societal harm: exploitation, creation or perpetuation of injustice

Improve the human situation

  • Increase understanding: individual, social, cultural
  • Increase efficiency/effectiveness: individuals and groups
  • Improve communication: between individuals and groups
  • Reduce sociocultural tension: between individuals and groups
  • Improve equity: financial, social, legal
  • Balance cultural diversity with social cohesion

What I hope to avoid using Ethical Design

I do not want to make digital junk. I do not want to waste time, money, and energy on things that don’t help anyone in any meaningful way. I don’t want other people to waste their time, money, and energy on those things, either, even if those people are investors with millions, billions, or trillions of dollars to burn. As a specific example, I do not want our transactional system to be based on technology that depends on inconsistent networks, has limited storage, and runs on batteries that die every three hours. Yes, I am talking about mobile payments in general. Smartphones were meant to be auxiliary devices—they were not meant for complete human dependency. We cannot run our lives from our mobile phones, nor can we build ubiquitous, high-priority systems, like transactional systems, on such technology. It just won’t work. In a handful of limited and specific cases, mobile payments are an interesting option, but on a grand scale, no.

What I hope to achieve with Ethical Design

I want a healthy, happy family and a healthy, long life. I want a safe, clean house in a safe, clean neighborhood with enough room for all of us, including the dog. I want clean water, clean air, safe transportation, education for all, and a good school walking distance from our home. I want decent, clean clothes that keep us protected from the elements and allow us to express ourselves. I want healthy, safe food and enough to sustain ourselves. I want the right to communicate and freedom to retrieve information. I want time to spend with family and friends and time to spend alone in self-reflection. I want a satisfying career that allows me to help other people improve their situation in life. I want the same for everyone else.

In order to make sure I achieve what I have listed in the paragraph above, I am beginning with these three short-term objectives for Ethical Design:

  1. Add ethics as a standard usability requirement and heuristic guideline. 
  2. Include a course on ethics and ethical design in every CS/HCI/UX/HF/IxD program.
  3. Include in all CS/HCI/UX/HF/IxD textbooks a chapter on ethics and ethical design.
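As a minimal sketch of the first objective, an ethics heuristic could sit alongside the usual usability heuristics in a review. The harm categories below mirror the post’s “do no harm” list; the review function, its inputs, and its pass/fail structure are hypothetical illustrations, not an established method.

```python
# Hypothetical "ethics heuristic" pass for a design review.
# Categories follow the post's "do no harm" list.
HARM_CATEGORIES = [
    "interpersonal",   # loss of dignity, insult, humiliation
    "psychological",   # confusion, discomfort, frustration, coercion, boredom
    "environmental",   # pollution, elimination of biodiversity
    "societal",        # exploitation, creation or perpetuation of injustice
]

def ethics_review(findings):
    """Summarize a review. `findings` maps a category to observed issues."""
    issues = {c: findings.get(c, []) for c in HARM_CATEGORIES}
    total = sum(len(v) for v in issues.values())
    return {"issues": issues, "total": total, "passes": total == 0}

# Example: one psychological-harm finding fails the heuristic.
report = ethics_review({"psychological": ["dark-pattern unsubscribe flow"]})
print(report["passes"], report["total"])
```

Even a checklist this simple forces the question to be asked on every pass, which is the point of making ethics a standard heuristic rather than a one-time gesture.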

In conclusion

I will continue to discuss Ethical Design and create methods, metrics, resources, and conversation starters to support others interested in the topic. Please contact me if you are one of those people. Thanks very much for reading and for caring. I appreciate it. 


Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm.

@Richard Anderson (2014 07 24)

Add Jon Kolko to your list. He and I were Co-Editors-in-Chief of interactions awhile back. Here is a quote of his from an interview I did of him and Don Norman:

“Not all problems are equally worth solving. It seems like we’ve taken it for granted that every activity within the context of design is worth doing, whether it is a drinking bottle or a microphone or a website for your band. I don’t know if that is true, and I’d like to challenge it and would like more people to challenge it more regularly. That is the focus of the Austin Center for Design: problems that are socially worth doing, and broadly speaking, that means dealing with issues of poverty, nutrition, access to clean drinking water, the quality of education, ... These are big, gnarly problems, sometimes called ‘wicked’ problems, and it seems incredibly idealistic to think that designers can solve them—I agree, I don’t think designers can solve them. In fact, I’m not sure anyone can solve them, but I think designers can play a role in mitigating them—a really important role because of all of the design thinking stuff that we’ve already talked about: the power of that can drive innovations that are making millions of dollars for companies; it seems that that same power can be directed in other ways.”

For more, see

(from a GA instructor to your north)

Service design 101

Authors: Lauren Chapman Ruiz
Posted: Mon, July 21, 2014 - 1:04:06

This article was co-written by Lauren Ruiz and Izac Ross.

We all hear the words service design bandied about, but what exactly do they mean? Clients and designers often struggle to find a common language for the art of coordinating services, and frequent questions arise. The need for service design often emerges in the space of customer experience or complicated journey maps. In response, here is a brief FAQ primer to show the lay of the land.

What are services?

Services are intangible economic goods—they lead to outcomes as opposed to physical things customers own. Outcomes are generated by value exchanges that occur through mediums called touchpoints. For example, when you use Zipcar, you don’t actually own the car; you buy temporary use of it. You use the car, and once you return it, it passes to someone else. Every point at which you engage with Zipcar is a touchpoint.

What creates a service experience?

Services are always co-created by what we call service users and service employees—the direct beneficiaries of the service, and the individuals who see the service through.

This oftentimes means that the outcome will vary for each service user. Your experience of a service may be completely different than another’s. Think of a flight—it can be a pleasant experience, or if you have a screaming baby next to you, not so great. Service employees can do everything to provide a good experience, but there are unknown factors each time that can ruin that experience.

A positive service experience considers and works to account for these situations—they are intentionally planned.

Who else is involved in a service?

A service experience often involves more than just the service user and employee. There are several types of people working together to create a service:

  • Service customers purchase the service; they are sometimes different people from those who actually use it.
  • Service users directly use the service to achieve the outcome.
  • Frontstage service employees deliver the service directly to the user.
  • Backstage service employees make everything happen in the background; the user doesn’t see or interact directly with these people.
  • Partner service employees are other partners involved in delivering the service. For example, UPS is a partner service employee to Amazon. You may order from Amazon, but UPS plays a role in completing your service experience.

What is frontstage and backstage?

In services, there are things the customer does and doesn’t see—we call this frontstage and backstage. Think of it like theater: backstage is everything done behind the curtain to support the actors; frontstage is the actors themselves, the people you see in front of the curtain. Those backstage do just as much to shape the experience as those frontstage. They help to deliver the service, play an active and critical part in shaping the experience, and represent a company’s brand.

Partners help the company deliver the service outcome by doing things like delivering packages, providing supplies for the service, or processing data.

What are touchpoints?

Earlier I mentioned touchpoints as the medium through which value exchanges happen, leading to the outcomes of a service. Touchpoints are these exchange moments in which service users engage with the service.

There are five different types of touchpoints:

  • People, including employees and other customers encountered while the service is produced
  • Place, such as the physical space or the virtual environment through which the service is delivered
  • Props, such as the objects and collateral used to produce the service encounter
  • Partners, including other businesses or entities that help to produce or enhance the service
  • Processes, such as the workflows and rituals that are used to produce the service (these tie together the people, place, props, and partners).
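To make the five types concrete, here is a small sketch of how a team might catalog touchpoints during a service audit. The data structure is one possible way to organize such an audit, and the Zipcar-flavored entries are illustrative inventions, not drawn from the article.

```python
from dataclasses import dataclass
from enum import Enum

class TouchpointType(Enum):
    PEOPLE = "people"
    PLACE = "place"
    PROPS = "props"
    PARTNERS = "partners"
    PROCESSES = "processes"

@dataclass
class Touchpoint:
    name: str
    type: TouchpointType

# Hypothetical audit of a car-sharing service, one entry per type.
audit = [
    Touchpoint("reservation agent", TouchpointType.PEOPLE),
    Touchpoint("mobile booking app", TouchpointType.PLACE),
    Touchpoint("RFID member card", TouchpointType.PROPS),
    Touchpoint("roadside-assistance vendor", TouchpointType.PARTNERS),
    Touchpoint("pickup-and-return ritual", TouchpointType.PROCESSES),
]

# Group the audit by type to spot channels with no coverage.
by_type = {t: [tp.name for tp in audit if tp.type is t] for t in TouchpointType}
print(by_type[TouchpointType.PROPS])
```

Grouping by type makes gaps visible: a service whose audit lists no partner touchpoints, say, may be overlooking a dependency that shapes the user’s experience anyway.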

Unlike most products, a service can be purchased multiple times. If a service is purchased just once, it may be a high-value exchange. Since most services are used frequently, we approach designing a service by considering the service cycle.

The service cycle helps answer the following questions:

  • How do we entice service users?
  • How do they enter into the service?
  • What is their service experience?
  • How do they exit from the service?
  • And how do we extend the service experience to retain them as a repeat service user?

What has 30 years done to services?

A lot has changed in services over the decades. Think about banking. At one point, there were only four channels of access to a bank: checks, phone, mail, and branches. Today, there are many more access channels that need to be coordinated, including debit cards, ATMs, online banking, mobile web access, texting, iPhone and Android apps, mobile check deposit, retail partners, and even Twitter.

And here’s the exciting news—service design (or as some industries might call it, customer experience) is critical to making a cohesive experience across all these channels! There is a desperate need to coordinate these elements using the skills and principles of design.

Like most industries, design disciplines have been changing in response to paradigm shifts in the economy. Graphic design emerged from the printing press. The industrial age gave birth to industrial design. Personal computing and the mobile age gave rise to interaction design. And the convergence of all of these channels has brought service design forward to coordinate service outcomes.

So how important is service design?

We’ve all had bad service experiences across a range of industries. They’re why companies lose customers, and they can bring frustration, pain, and suffering—from poor transit systems to care delivery. When clients neglect backstage or frontstage employees, every pain point will show through to a service user and customer.

Without effective service design, many companies break apart into disconnected channels, with no one overseeing or coordinating. And even if you’re creating a product, understanding the service you’re trying to put your product into will help your product be much more successful—remember, your B2B “product” is also one of your customer’s touchpoints.

In addition, there are many opportunities to leverage technology to create new services. Look at TaskRabbit—it starts as a digital experience, but without the “rabbits” to perform the service, it’s useless.

Finally, well-designed service experiences differentiate companies. Those who pay attention to wisely designing services will be poised to stand out and achieve success in our ever-changing economy.

So how important is service design? I hope this post has convinced you the answer is very. Tune in again as we’ll be continuing this topic with a deep dive into one of the most important tools of service design—the service blueprint.

Top image via Zip Car, all others created by Izac Ross


Lauren Chapman Ruiz

Lauren Chapman Ruiz is a Senior Interaction Designer at Cooper in San Francisco, CA, and is an adjunct faculty member at CCA .


Visual design’s trajectory

Authors: Jonathan Grudin
Posted: Thu, July 17, 2014 - 4:50:11

Some graphic artists and designers who spent years on the edges of software development describe with bemusement their decades of waiting for appreciation and adequate computational resources. Eventually, visual design soared. It has impressed us. Today, design faces complexities that come with maturity. Cherished aesthetic principles deserve reconsideration.

An enthusiastic consumer

People differ in their ability to create mental imagery. I have little. I recognize some places and faces but can’t conjure them up. The only silver lining to this regrettable deficit is that everything appears fresh; the beauty of a vista is not overshadowed by comparison with spectacular past views. I’m not a designer, but design can impress me.

The first HCI paper I presented was inspired by a simple design novelty. I had been a computer programmer, but in 1982 I was working with brain injury patients. A reverse video input field—white characters on a black background—created by Allan MacLean looked so cool that I thought that an interface making strategic use of it would be preferred even if it was less efficient. I devised an experiment that confirmed this: aesthetics can outweigh productivity [1].

Soon afterward, as the GUI era was dawning, I returned to software development. A contractor showed me a softly glowing calendar that he had designed. I loved it. Our interfaces had none of this kind of beauty. He laughed at my reaction and said, “I’m a software engineer, not a designer.” “Where can I find a designer?” I asked.

I found one downstairs in Industrial Design, designing boxes. As I recall, he had attended RISD and had created an award-winning arm that held a heavy CRT above the desktop, freeing surface space and repositioned with a light touch. I interested him in software design. It took about a year for software engineers to value his input. Other designers from that era, including one who worked on early Xerox PARC GUIs, recount working cheerfully for engineers despite having little input into decisions.

Design gets a seat at the table

I was surprised by design’s slow acceptance in HCI and software product development. Technical, organizational, cultural, and disciplinary factors intervened.

Technical. Significant visual design requires digital memory and processing. It is difficult to imagine now how expensive they were for a long time. As noted in my previous post, and in the recent book and movie about Steve Jobs, the Macintosh failed in 1984. It succeeded only after models with more memory and faster processors came out, in late 1985 and in 1986. Resource constraints persisted for another decade. The journalist Fred Moody’s account of spending 1993 with a Microsoft product development team, I Sing the Body Electronic, details an intense effort to minimize memory and processing. The dynamic of exponential growth is not that things happen fast—as in this case, often they don’t—it is that when things finally start to happen, then they happen fast. In the ’00s, constraints of memory, processing, and bandwidth rapidly diminished.

Organizational. The largest markets were government agencies and businesses, where the hands-on users were not executives and officers. Low-paid data entry personnel, secretaries who had shifted from typing to word processing, and other non-managerial workers used the terminals and computers. Managers equipping workers wanted to avoid appearing lavish—drab exteriors and plain functional screens were actually desirable. I recall my surprise around the turn of the century when I saw a flat-panel display in a government office; when I complimented the worker on it, her dour demeanor vanished and she positively glowed with pride. For decades, dull design was good.

Sociocultural. The Model T Ford was only available in black. Timex watches and early transistor radios were indistinguishable. People didn’t care. When you are excited to own a new technology, joining the crowd is a badge of honor. Personalization comes later—different automobile colors and styles, Swatches, distinctive computers and interfaces. The first dramatically sleek computers I saw were in stylish bar-restaurants.

Disciplinary friction. Software engineers were reluctant to let someone else design the visible part of their application. Usability engineers used numbers to try to convince software developers and managers not to design by intuition; designers undermined this. In turn, designers resented lab studies that contested their vision of what would fare well in the world. The groups also had different preferred communication media—prototypes, reports, sketches.

These factors reflected the immaturity of the field. Mature consumer products relied on collaboration among industrial design, human factors, and product development. Brian Shackel, author of the first HCI paper in 1959, also worked on non-digital consumer products and directed an ergonomics program with student teams drawn from industrial design and human factors.

As computer use spread in the 1990s, HCI recognized design, sometimes grudgingly. In 1995, SIGCHI backed the Designing Interactive Systems (DIS) conference series. However, DIS failed to attract significant participation from the visual design community: Papers focused on other aspects of interaction design. In the late 1990s, the CMU Human-Computer Interaction Institute initiated graduate and undergraduate degrees with significant participation of design faculty.

This is a good place to comment on the varied aspects of “design.” This post outlines a challenge for visual or graphic design as a component of interaction design or interface design focused on aesthetics. Practitioners could be trained in graphic art or visual communication design. Industrial design training includes aesthetic design, usually focused on physical objects that may include digital elements. Design programs may include training in interaction design, but many interaction designers have no training in graphic art or visual communication. CHI has always focused on interaction design, but had few visual designers in its midst. “Design” is of course a phase in any development project, even if the product is not interactive and has no interface, which adds to the potential for confusion.

Design runs the table

Before it popped in 2000–2001, the Internet bubble dramatically lowered prices and swelled the ranks of computer users, creating a broad market for software. This set the stage for Timex giving way to Swatch. In the 2000s, people began to express their identity through digital technology choices. In 2001, the iPod demonstrated that design could be decisive. Cellphone buddy lists and instant messaging gave way to Friendster, MySpace, Facebook, and LinkedIn. The iPod was followed by the BlackBerry in 2003, the iPhone in 2007, and other wildly successful consumer devices in which design was central.

The innovative Designing User Experience (DUX) conference series of 2003–2007 drew from diverse communities, succeeding where DIS had failed. It was jointly organized by SIGCHI, SIGGRAPH, and AIGA—originally American Institute of Graphic Arts, founded in 1914, the largest professional organization for design.

The series didn’t continue, but design achieved full acceptance. The most widely-read book in HCI may be Don Norman’s The Psychology of Everyday Things. It was published in 1988 and republished in 2002 as The Design of Everyday Things. Two years later Norman published Emotional Design.

Upon returning to Apple in 1997, Steve Jobs dismissed Apple’s HCI group and vice president Don Norman. Apple’s success with its single-minded focus on design has had a wide impact. For example, the job titles given HCI specialists at Microsoft evolved from “usability engineers” to “user researchers,” reflecting a broadening to include ethnographers, data miners, and others, and then to “design researchers.” Many groups focused on empirical assessment of user behavior, once managed in parallel with Design, are now managed by designers.

Arrow or pendulum?

Empowered by Moore’s law, design has a well-deserved place at the table, sometimes at the decision-maker’s right hand. But design does not grow exponentially. Major shifts going forward will inevitably originate elsewhere, with design being part of the response. An exception is information design—information is subject to such explosive growth that tools to visualize and interact with it will remain very significant. Small advances will have large consequences.

In some areas, design may have overshot the mark. A course correction seems likely, perhaps led by designers but based on data that illuminate the growing complexity of our relationships with technology and information. We need holistic views of people’s varied uses of technology, not “data-driven design” based on undifferentiated results of metrics and A/B testing.

I’d hesitate to critique Apple from Microsoft were it not for the Windows 8 embrace of a design aesthetic. Well-known speakers complain that “Steve Ballmer followed Steve Jobs over to the dark side,” as one put it. They are not contesting the value of appearance; they are observing that sometimes you need to do real work, and designs optimized for casual use can get in the way.

My first HCI experiment [1] showed that sometimes we prefer an interface that is aesthetic even when there is a productivity cost. But we found a limit: When the performance hit was too high, people sacrificed the aesthetics. Certainly in our work lives, and most likely in our personal lives as well, aesthetics sometimes must stand down. Achieving the right balance won’t be easy, because aesthetics demo well and complexity demos poorly. This creates challenges, but it also creates opportunities that have not yet been seized. Someone may be seizing them out of my view; if not, someone will.

Aesthetics and productivity

Nature may abhor a vacuum, but our eyes like uncluttered space. When I first opened a new version of Office on my desktop, the clean, clear lettering and white space around Outlook items were soothing. It felt good. My first thought was, “I need larger monitors.” With so much white space, fewer message subject lines fit on the display. I live in my Inbox. I want to see as much as my aching eyes can make out. I upsized the monitors. I would also reduce the whitespace if I could. I’d rather have the information.

A capable friend said he had no need for a desktop computer—a tablet suffices, perhaps docked to a larger display in his office. Maybe our work differs. When I’m engaged in a focal task, an undemanding activity, or trying out a new app, sparsity and simplicity are great. When I’m scanning familiar information sources, show me as much as possible. As we surround ourselves with sensors, activity monitors, and triggers, as ever more interesting and relevant digital information comes into existence, how will our time be spent?

Airplane pilots do not want information routed through a phone. They want the flight deck control panel, information densely arrayed in familiar locations that enable quick triangulations. If a new tool is developed to display airspeeds recorded by recent planes on the same trajectory, a pilot doesn’t want a text message alert. Tasks incidental to flying—control of the passenger entertainment system perhaps—might be routed through a device.

We’re moving into a world where at work and at home, we’ll be in the role of a pilot or a television news editor, continually synthesizing information drawn from familiar sources. We’ll want control rooms with high-density displays. They could be more appealing or less appealing, but they will probably not be especially soothing.

Design has moved in the opposite direction, toward sparse aesthetics suited to initial or casual encounters and focal activity. Consumer design geared toward first impressions and focal activity is perfect for music players and phones. Enabling people to do the same task in much the same way on different devices is great. However, when touch is not called for, more detailed selection is possible. Creative window management makes much more possible with large displays. A single application expanded to fill an 80-inch display, if it isn’t an immersive game, wastes space and time.

I observed a 24x7 system management center in which an observation team used large displays in a control panel arrangement. The team custom-built it because this information-rich use was not directly supported by the platform.

You might ask, if there is demand for different designs to support productivity, why hasn’t it been addressed? Clever people are looking for ways to profit by filling unmet needs—presumably not all are mesmerized by successes of design purity. My observation is that our demo-or-die culture impedes progress.  A demo is inherently an initial encounter. A dense unfamiliar display looks cluttered and confusing to executives and venture capitalists, who have no sense of how people familiar with the information will see it.

This aggravates another problem: The designers of an application typically imagine it used in isolation. They find ways to use all available screen real estate, one of which is to follow a designer’s recommendation to space out elements. User testing can even support the resulting design on both preference and productivity measures, as long as it is tested on new users trying the application in isolation, which is the default testing scenario. People using the application in concert with other apps or data sources are not given ways to squeeze out white space or to tile the display effectively.

Look carefully at your largest display. Good intentions can lead to a startling waste of space. For example, an application often opens in a window that is the same size as when that application was most recently closed. It seems sensible, but it’s not. Users resize windows to be larger when they need more space but rarely resize them smaller when they need less space, so over time the application window grows to consume most of a monitor. When I open a new window to read or send a two-line message, it opens to the size that fits the longest message I’ve looked at in recent weeks, covering other information I am using.
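The ratchet described above is easy to simulate: if users enlarge a window whenever content overflows but never shrink it, and the application reopens at its last size, the remembered size converges to the largest content ever displayed. A minimal sketch, with invented message lengths:

```python
import random

def remembered_sizes(message_lines, start=10):
    """Track a window that reopens at its previous size, is enlarged
    whenever content overflows, and is never resized smaller."""
    size = start
    history = []
    for lines in message_lines:
        if lines > size:  # content doesn't fit: the user enlarges the window
            size = lines
        # the user never shrinks it, so `size` is a running maximum
        history.append(size)
    return history

random.seed(1)
msgs = [random.randint(2, 60) for _ in range(50)]  # hypothetical message lengths
sizes = remembered_sizes(msgs)
```

Under these assumptions the remembered size never decreases, which is exactly the behavior described: a two-line message opens in a window sized for the longest message of recent weeks.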

The challenge

The success of the design aesthetic was perfectly timed to the rapidly expanding consumer market and surge of inexpensive digital capability in just the right segment of the exponential curve. It is a broad phenomenon; touch, voice, and a single-application focus are terrific for using a phone, but no one wants to gesticulate for 8 hours at their desk or broadcast their activity to officemates. At times we want to step back to see a broader canvas.

The paucity of attention to productivity support was recently noted by Tjeerd Hoek of Frog Design. The broad challenge is to embrace the distinction between designs that support casual and focal use and those that support high-frequency use that draws on multiple sources. Some designers must unlearn a habit of recommending aesthetic uncluttered designs in a world that gets more cluttered every week. Cluttered, of course, with useful and interesting information and activities that promote happier, healthier, productive lives.


1. Grudin, J. and MacLean, A. Adapting a psychophysical method to measure performance and preference tradeoffs in human-computer interaction. Proc. INTERACT '84 (1984), 737–741.

Thanks to Gayna Williams for suggesting and sharpening many of these points. Ron Wakkary and Julie Kientz helped refine my terminology use around design, but any remaining confusion is my fault.

Posted: Thu, July 17, 2014 - 4:50:11

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

Excuse me, your company culture is showing

Authors: Monica Granfield
Posted: Fri, July 11, 2014 - 10:36:16

Find the simple story in the product, and present it in an articulate and intelligent, persuasive way. —Bill Bernbach

As I read this quote by the all-time advertising great Bill Bernbach, it occurred to me that simplifying and distilling a product story, and representing it persuasively and innovatively in a product, depend on a company’s brand and culture, and on how the brand embodies the culture.

This is not new news, but it is news worth revisiting—your company culture and politics surface in the design of your product. 

As a designer, my inclination or habit is to try to understand how a design solution was reached—how and why something was created and designed as it was. In doing this, quite often it becomes apparent how and why certain design trade-offs and decisions were made. One can almost hear the conversations that occurred around the decisions.

A confusing design that provides little guidance and direction or one that does not provide enough flexibility, generating end user frustration, could be traced back to a culture where the end user’s voice is not heard or represented. 

Business trade-offs, technical decisions, design trade-offs, research or lack of it, political posturing—it's all there, reflected in your product. Every meeting, every disagreement, every management decision—all are represented in the end result, the design of your product and the experience your users have with that product. 

Does your company innovate or follow? Is the design of your product driven by clear and thoughtful goals and intentions? Most of these aspects of a design can be traced directly to company culture. Just as the underlying technical architecture surfaces in the product design, so too do the corporate culture, politics, and decision making.

Is the company engineering focused? Sales focused? Does your culture represent your brand? Where do the product goals align with these intentions? Design goals need to align with the business goals, which in turn surface directly in the product’s design. The clearer your company goals and mission, the clearer your design intentions will be. This will drive directed design thinking, resulting in useful, elegant, well-designed, desirable products.

I recently read an article that asked what it's really like inside of Apple. The answer: Everyone there embraces design thinking to support the business goals. That is the culture. Everyone's ideas matter, and are subject to the same rigor as a designer’s solution. Great idea? Let's vet that as we would any design idea or solution. This is what makes a great product. So when someone tells you they want to make products as cool as Apple’s, that they want to innovate, ask them about their culture. 

When rationalizing the design thinking and design direction of your products, consider representing a culture that you are proud of and how that culture and the decisions you make will be represented in your product.


Monica Granfield

Monica Granfield is a user experience designer at Go Design LLC.

The challenges of developing usable and useful government ICTs

Authors: Juan Pablo Hourcade
Posted: Mon, June 30, 2014 - 4:04:52

Governments are increasingly providing services and information to the public through information and communication technologies (ICTs). There are many benefits to providing information and services through ICTs. People who are looking for government-related information can find it much more quickly. Government agencies can update websites more easily than paper documents. Those taking advantage of government services through ICTs can save time and avoid the frustration that often accompanies waiting in line at government agencies. Further, government agencies can save resources when transactions can be handled automatically. In addition, ICTs for internal government use have the potential to help manage large amounts of information and handle processes more efficiently.

In spite of the promises of e-government, there have been several notorious failures in the implementation of e-government systems. The most recent example in the United States was the website for applying for health insurance under the Affordable Care Act (also known as Obamacare). The website was not usable by a significant portion of users when it launched. This is only the latest example of an e-government system that does not work as planned and requires additional resources to be functional (if it is not completely scrapped). These challenges have occurred across different administrations, and with different political parties in power. In the United States, historic examples include the Federal Aviation Administration’s air traffic control software, and the Federal Bureau of Investigation’s Virtual Case File [1]. Dada [2] provides examples of e-government failures in lower-income countries.

These failures tend to stem from the difficulty in following modern software engineering and user-centered design methods when contracting with companies for the development of ICTs. These modern methods call for iterative processes of development with significant stakeholder input and feedback. There is an expectation, for example, that detailed requirements will be developed over time, and that some may change. Typical government contracting for ICTs, on the other hand, often assumes that government employees, oftentimes without any training in software engineering, will be able to deliver an accurate set of requirements to a company that will then build a system with little or no feedback from stakeholders during the development process.

The challenge is that an overwhelming majority of elected officials and political appointees have little or no knowledge of software engineering or user-centered design methods. Even people responsible for ICTs at government agencies may not have any specific training in these methods. It is rare, for example, for government agencies to usability test competing technologies before deciding which one to purchase. 

I saw this first-hand while I worked at the U.S. Census Bureau. At the time, the Census Bureau was planning to use handheld devices to conduct the 2010 Census. In spite of the significant investment to be made, no one in the leadership was familiar with software engineering or user-centered design methods, and they trusted the management of the process to employees with some background in ICTs but no training or experience in handling projects of such magnitude, and little knowledge of appropriate methods. This resulted in the development of a set of requirements that no one understood, and that came largely from long-time employees, with no feedback, and no consideration of the fact that those who would use the system would be temporary employees. While there was some involvement of usability professionals in the process, it was “too little, too late” and did not have an impact on the methods used. The requirements were turned over to a contractor, and a test of the resulting software resulted in the need to change more than 400 requirements. The project had to be scrapped after almost $600 million had been spent on the contractor (not counting the resources spent in-house), which meant that the Census Bureau had to spend an extra $3 billion on processing paper forms that would have been unnecessary had the software been successfully developed.

So how can we help? HCI researchers and professionals can contribute to public policy by informing elected officials and the leadership at government agencies of the methods that are most likely to result in usable and useful government ICTs that can be developed on time and within a given budget. This, in turn, can inform how government contracts for the development of ICTs are structured, such that they require iterative processes with a significant amount of stakeholder feedback. If these methods are followed, government agencies stand to save resources, and deliver better quality ICTs. This is an area where ACM, SIGCHI, and other professional associations could play a role. If we don’t do it, no one else will.


1. Charette, R.N. Why software fails. IEEE Spectrum 42, 9 (2005), 42–49.

2. Dada, D. The failure of e-government in developing countries: A literature review. The Electronic Journal of Information Systems in Developing Countries 26, 7 (2006), 1-10.


Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.

Organizational behavior

Authors: Jonathan Grudin
Posted: Mon, June 23, 2014 - 11:36:18

Two books strongly affected my view of organizations—those I worked in, studied, and developed products for. One I read 35 years ago; the other I just finished, although it came out 17 years ago.

Encountering Henry Mintzberg’s typology of organizational structure

In 1987, an “Organizational Science Discussion Group” was formed by psychologists and computer scientists at MCC. We had no formal training in organizational behavior but realized that software was being designed to support more complex organizational settings. Of the papers we discussed, two made lasting impressions. “A Garbage Can Model of Organizational Choice” humorously described the anarchy found in universities. It may primarily interest academics; it didn’t seem relevant to my experiences in industry. The discussion group’s favorite was a dense one-chapter condensation [1] of Henry Mintzberg’s 1979 The Structuring of Organizations: A Synthesis of the Research.

Mintzberg observed that organizations have five parts, each with its own goals, processes, and methods of evaluation. Three are the top management, middle management, and workers, which he labels strategic apex, middle line, and operating core. A fourth group designs workplace rules and processes, such as distribution channels, forms, and assembly lines. This he calls technostructure, although technology is not necessarily central. Finally there is everyone else: the support staff (including IT staff), attorneys, custodians, cafeteria workers, and so on.

Mintzberg argues that these groups naturally vie for influence, with one or another usually becoming particularly powerful. There are thus five “organizational forms.” Some are controlled by executives, but in divisionalized companies the middle line has strong autonomy, as when several product lines are managed with considerable independence. In an organization of professionals, such as a university, the workers—faculty—have wide latitude in organizing their work. In organizations highly reliant on regulations or manufacturing processes, the technostructure is powerful. And an “adhocracy” such as a film company relies on a collection of people in support roles.

When I left MCC, I was puzzled to find that Mintzberg’s analysis was not universally highly regarded. Where was the supporting evidence, people asked. What could you do with it? Only then did I realize why we had been so impressed: Our conviction arose from the unique origin of MCC.

An act of Congress enabled MCC to open its doors in 1984. In response to a prominent Japanese “Fifth Generation” initiative, anti-trust laws were modified to permit 20 large U.S. companies to collaborate on pre-competitive research. MCC was a civilian effort headed by Bobby Ray Inman, previously the NSA Director and CIA Deputy Director. It employed about 500 people, some from the shareholder companies and many hired directly. Our small discussion group drew from the software and human interface programs; MCC also had programs on artificial intelligence, databases, parallel processing, CAD, and packaging (hardware).

Consider this test of Mintzberg’s hypotheses: Create an organization of several hundred people spread across the five organizational parts, give them an ambiguous charter, let them loose, and see what happens. MCC was that experiment.

To a breathtaking degree, we supported Mintzberg’s thesis. Each group fought for domination. The executives tried to control low-level decisions. Middle managers built small fiefdoms and strove for autonomy. Individual contributors maneuvered for an academic “professional bureaucracy” model. Employees overseeing the work processes burdened us with many restrictive procedural hurdles, noting for example that because different shareholder companies funded different programs, our interactions should be regulated. Even the support staff felt they should run things—and not without reason. Several were smart technicians from shareholder companies; seeing researchers running amok on LISP machines, some thought, “We know what would be useful to the shareholders, these guys sure as hell don’t.”

Mintzberg didn’t write about technology design per se. We have to make the connections. Central to his analysis is that each part of the organization works differently. Executives, middle managers, individual contributors, technostructure, and support staff have different goals, priorities, tasks, and ways to measure and reward work. Their days are organized differently. Time typically spent in meetings, ability to delegate, and the sensitivity of their work differ. Individual contributors spend more time in informal communication, managers rely more on structured information—documents, spreadsheets, slide decks—and executives coordinate the work of groups that rarely communicate directly.

Such distinctions determine which software features will help and which may hinder. Preferences can sharply conflict. When designing a system or application that will be used by people in different organizational parts, it is important to consult or observe representatives of these groups during requirements analysis and design testing.

At MCC we did not pursue implications, but I was prepared when Constance Perin handed me an unpublished paper [2] in 1988. I had previously seen the key roles in email being senders and receivers; she showed that enterprise adoption could hinge on differences between managers, who liked documents and hated interruptions, and individual contributors, who engaged in informal communication and interruption. Over the next 25 years, studying organizational adoption of a range of technologies, I repeatedly found differences among members of Mintzberg’s groups. If it was confirmation bias, it was subtle, because somewhat obtusely I didn’t look for it and was surprised each time. The pattern can also be seen in other reports of enterprise technology adoption. This HICSS paper and this WikiSym paper provide a summary and a recent example.

Clayton Christensen and disruptive technologies

In 1997, Clayton Christensen published The Innovator’s Dilemma. Thinking it was a business professor’s view of issues facing a lone inventor, I put off reading it until now. But it is a nice analysis of organizational behavior based on economics and history, and is a great tool for thinking about the past and the present.

I have spent years looking into HCI history [3], piecing together patterns some of which are more fully and elegantly laid out by Christensen. The Innovator’s Dilemma deepened my interpretations of HCI history and reframed my current work on K-12 education. Before covering recent criticism of this short, easily read book and indicating why it is a weak tool for prediction, I will outline its thesis and discuss how I found Mintzberg and Christensen to be useful.

Christensen describes fields as diverse as steel manufacture, excavation equipment, and diabetes treatment, arguing that products advance through sustaining innovations that improve performance and satisfy existing customers. Eventually a product provides more capability than most customers need, setting the stage for a disruptive innovation that has less capability and a lower price—for example, a 3.5” disk drive when most computers used 5”+ drives, or a small off-road motorbike when motorcycles were designed for highway use. The innovation is dismissed by existing customers, but if new customers happy with less are found, the manufacturer can improve the product over time and then enter the mainstream market. For example, minicomputers were initially positioned for small businesses that could not afford mainframes, then became more capable and undermined the mainframe industry. Later, PCs and workstations, initially too weak to do much, grew more capable and destroyed the once-lucrative minicomputer market.
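Christensen’s picture reduces to two trajectories: the performance a mainstream market demands rises slowly, while the disruptive product starts below it and improves faster, eventually crossing over. A toy model with invented growth rates (not Christensen’s numbers) finds the crossover year:

```python
def crossover_year(start_year, demanded0, demand_growth,
                   supplied0, supply_growth, horizon=50):
    """Return the first year a disruptive product's performance meets
    the mainstream market's demanded performance, assuming simple
    compound annual improvement on both sides."""
    demanded, supplied = demanded0, supplied0
    for year in range(start_year, start_year + horizon):
        if supplied >= demanded:
            return year
        demanded *= 1 + demand_growth
        supplied *= 1 + supply_growth
    return None  # no crossover within the horizon

# Hypothetical: demand grows 5%/yr from 100 units of performance;
# the entrant starts at 40 but improves 20%/yr.
year = crossover_year(2014, 100, 0.05, 40, 0.20)  # crosses in 2021
```

The structure also shows why incumbents’ customer listening is rational right up to the crossover: until that year, the entrant genuinely is inadequate for the mainstream market.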

An interesting insight is that established companies can fail despite being well-managed. Many made rational decisions. They listened to customers and improved their market share of profitable product lines rather than diverting resources into speculative products with no established markets.

Some firms that successfully embraced disruptive innovations learned to survive with few sales and low profit margins. Because dominant companies are structured to handle large volume and high margins, Christensen concludes that a large company can best embrace a disruptive innovation by creating an autonomous entity, as IBM did when it located its PC development team in Florida.

Using the insights of Mintzberg and Christensen for understanding

For decades, Mintzberg’s analysis has helped me understand the results of quantitative and qualitative research, mine and others’, as described in the papers cited above and two handbook chapters [4]. Reading The Innovator’s Dilemma, I reevaluated my experiences at Wang Laboratories, a successful minicomputer company that, like the others, underestimated PCs and Unix-based workstations. It also made sense of more recent experiences at Microsoft, as well as events in HCI history.

For example, a former Xerox PARC engineer recounted his work on the Alto, the first computer sporting a GUI that was intended for commercial sale. A quarter century later he still seemed exasperated with Xerox marketers for pricing the Alto to provide the same high-margin return as photocopiers. With a lower price, the Alto could have found a market and created the personal computer industry. The marketing decision seems clueless in hindsight, but in Christensen’s framework it can be seen as sensible unless handling a disruptive innovation—which the personal computer turned out to be.

A colleague said, “An innovator’s dilemma book could be written about Microsoft.” Indeed. It would describe successes and failures. Not long after The Innovator’s Dilemma was published, Xbox development began. The team was located far from the main Redmond site, reportedly to let them develop their own approach, as Christensen would recommend. Unsuccessful efforts are less easily discussed, but Courier might be a possibility.

Using (or avoiding) the frameworks as a basis for predictions

Mintzberg’s typology has proven relevant so often that I would recommend including members of each of his groups when assessing requirements or testing designs. His detailed analysis could suggest design features, but because of the complex, rapidly evolving interdependencies in how technology is used in organizations, empirical assessment is necessary.

Christensen is more prescriptive, arguing that sustaining innovations require one approach and a timely disruptive innovation requires a different approach. But if disruptiveness is a continuum, rather than either-or, choosing the approach could be difficult. And getting the timing right could be even trickier. Can one accurately assess disruptiveness? My intuition is, rarely.

Christensen courageously concluded the 1997 book by analyzing a possible disruptive innovation, the electric car. His approach impressed me—methodical, logical, building on his lessons from history. He concluded that the electric car was disruptive and provided guidance for its marketing. In my view, this revealed the challenges. He projected that only in 2020 would electric vehicle acceleration intersect mainstream demands (0 to 60 mph in 10 seconds). Reportedly the Nissan Leaf has achieved that and the Tesla has reached five seconds. On cruising range he was also pessimistic. Unfortunately, his recommendations depend on the accuracy of these and other trends. He suggested a new low-end market (typical for the disruptive innovations that he studied) such as high school students, who decades earlier fell in love with the disruptive Honda 50 motorcycle; instead, electric cars focus on appealing to existing high-end drivers. A hybrid approach by established manufacturers, which failed for his mechanical excavator companies, has been a major automobile innovation success story.

Christensen reverse-engineered success cases, a method with weaknesses that I described in an earlier blog post. We are not told how often plausible disruptive innovations failed or were developed too soon. Christensen says that innovators must be willing to fail a couple of times before succeeding. Unfortunately, there is no way to differentiate two failures of an innovation that will succeed from two failures of a bad or premature idea. Is it “the third time is a charm” or “three strikes and you’re out”? If 2/3 of possible disruptive innovations pan out in a reasonable time frame, an organization would be foolish not to plan for one. If only one in 100 succeeds, it could be better to cross your fingers and invest the resources in sustaining innovations.
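The wager in the preceding paragraph can be made explicit with a back-of-envelope expected-value calculation. All numbers here are invented (a hit worth 100, each attempt costing 5, three attempts budgeted); the point is only how sharply the answer flips with the base rate:

```python
def expected_value(p_success, payoff, cost_per_attempt, attempts):
    """Expected net value of betting on a disruptive innovation:
    the chance that at least one attempt succeeds, times the payoff,
    minus the cost of all attempts."""
    p_hit = 1 - (1 - p_success) ** attempts
    return p_hit * payoff - attempts * cost_per_attempt

# Hypothetical numbers, chosen only to illustrate the flip.
good_odds = expected_value(2 / 3, 100, 5, 3)    # "2/3 pan out"
long_odds = expected_value(1 / 100, 100, 5, 3)  # "one in 100"
```

With these numbers the first bet is strongly positive (about +81) and the second is negative (about -12). The same arithmetic, fed honest base rates, is exactly what reverse-engineered success cases cannot supply.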

Our field is uniquely positioned to explore these challenges. Most industries studied by Christensen had about one disruptive innovation per century. Disk drives, which Christensen describes as the fruit flies of the business world, were disrupted every three or four years. He never mentions Moore’s law. He was trying to build a general case, but semiconductor advances do guarantee a flow of disruptive innovation. New markets appear as prices fall and performance rises. A premature effort can be rescued by semiconductor advances: The Apple Macintosh, a disruptive innovation for the PC market, was released in 1984. It failed, but models in late 1985 and early 1986 with more memory and processor power succeeded.

Despite the assistance of Moore’s law, the success rate for innovative software applications has been estimated to be 10%. Many promising, potentially disruptive applications failed to meet expectations for two or three decades: speech recognition and language understanding, desktop videoconferencing, neural nets, workflow management systems, and so on. The odds of correctly timing a breakthrough in a field that has one each century are worse. Someone will nail it, but how many will try too soon and be forgotten?

The weakness of Christensen’s historical analysis as a tool for prediction is emphasized by Harvard historian Jill Lepore in a New Yorker article appearing after this post was drafted. Some of Christensen’s cases are more ambiguous when examined closely, although Christensen did describe exceptions in his chapter notes. Lepore objects to the subsequent use of the disruptive innovation framework by Christensen and others to make predictions in diverse fields, notably education.

These are healthy concerns, but I see a lot of substance in the analysis. No mainframe company succeeded in the minicomputer market. No minicomputer company succeeded in efforts to make PCs. These companies were many and highly profitable, yet save IBM, they disappeared.

I’ll take the plunge by suggesting that a disruptive innovation is unfolding in K-12 education. The background is in posts that I wrote before reading Christensen: “A Perfect Storm” and “True Digital Natives.” In Christensen’s terms, 1:1 device-per-student deployments transform the value network. They enable new pedagogical and administrative approaches, supported by high-resolution digital pens, advanced note-taking tools, and handwriting recognition software (for searching notes). As with many disruptive innovations at the outset, the market for 1:1 deployments is too small to attract mainstream sales and marketing. But appropriate pedagogy has been developed, prices are falling fast, and infrastructure is being built out. Proven benefits make widespread deployment inevitable. The question is, when? The principal obstacle in the U.S. is declining state support for professional development for teachers.

Conclusion: the water we swim in

Many of my cohort have worked in several organizations over our careers. Young people are told to expect greater volatility. It makes sense to invest in learning about organizations. If you start a discussion group, you now have two recommendations.


1. Published in D. Miller & P. H. Friesen (Eds.), Organizations: A Quantum View, Prentice-Hall, 1984 and reprinted in R. Baecker (Ed.), Readings in Computer Supported Cooperative Work and Groupware, Morgan Kaufmann, 1995.

2. A modified version appeared as Electronic social fields in bureaucracies.

3. A moving target: The evolution of HCI. In J. Jacko (Ed.), Human-computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications. (3rd edition). Taylor & Francis, 2012. An updated version is available on my web page.

4. J. Grudin & S. Poltrock, 2012. Taxonomy and theory in Computer Supported Cooperative Work. In S.W. Kozlowski (Ed.), Handbook of Organizational Psychology, 1323-1348. Oxford University Press. Updated version on my web page; 
J. Grudin, 2014. Organizational adoption of new communication technologies. In H. Topi (Ed.), Computer Science Handbook, Vol. II. Chapman & Hall / CRC Press.

Thanks to John King and Gayna Williams for discussions.

Posted: Mon, June 23, 2014 - 11:36:18

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

May I have your attention

Authors: Ashley Karr
Posted: Sun, June 01, 2014 - 6:40:51

Takeaway: Removing ourselves from stimulation, electronic or otherwise, is crucial for our brains to function at their peak, and focusing on one task at a time with as little outside distraction as possible is the best way to increase task performance.

I will begin this article by saying that I love meta. The fact that we build new technologies to study how technology affects us makes me laugh. Anyway, what this article is really about is attention, so that is where I will focus ours.

The modern field of attention research began in the 1980s, when brain-imaging machines became widely available. Researchers found that shifting attention from one task or point of focus to another greatly decreases performance. No exceptions. No excuses. No special cases. Human beings do not perform as well as possible on any task when they multitask.

Studies also show that simple anticipation of another stimulus or task can take up precious resources in our working memory, which means we can’t store and integrate information as well as we should. Additionally, downtime is very important for the brain. During downtime, the brain processes information and turns it into long-term memories. Constant stimulation prevents information processing and solidifying, and our brains become fatigued. 

It appears that removing ourselves from stimulation, electronic or otherwise, is crucial for our brains to function at their peak, and focusing on one task at a time with as little outside distraction as possible is the best way to increase task performance. Some studies have found that people learn better after walking in rural areas as opposed to walking in urban environments. Researchers are also investigating how electronic micro-breaks, like playing a two-minute game on a cell phone, affect the brain. Initial findings do not support electronic micro-breaks as true “brain breaks” that allow for information processing and prevent mental fatigue.

Based on this research, I brainstormed a few ways we can de-stimulate. Here are some of my ideas:

  • Only answer and respond to emails for a window of one to two hours a day.
  • Take a five-minute break by going outside and sitting on a bench or the grass. Just sit there. Don’t even bring your phone.
  • Unplug your TV and wireless router at least one day a week.
  • Go camping.
  • Turn off your cell phone for an hour, a day, or an entire weekend.
  • Only make phone calls at a designated spot and time in a quiet place away from distractions.
  • Stop reading this article, turn off your computer, put down your phone, and go outside.


Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm.

So close but yet so far away

Authors: Monica Granfield
Posted: Thu, May 29, 2014 - 11:12:06

In the last five days I have clocked at least eight hours on three Fortune 100 consumer websites for tasks that should not take more than an hour combined. What made my poor experiences even more ironic is that these are companies in the service industry that pride themselves on innovation around the customer experience!

This has left me scratching my head, wondering how these customer-centric companies could have veered so greatly from their missions and failed at the interaction level. The sites were professionally created consumer sites—well branded, with impeccable visual presentations, thus setting my expectations high for a pleasant user experience.

In two cases the scenarios were not directly generating revenue; instead, I was asking for assistance with a recent purchase. The experiences suffered from poorly thought-out task flows: one provided little guidance and feedback when entering materials, and the other was, as it turns out, operationally incorrect. Both of these interaction experiences failed and forced me to call customer support.

In the first case, with JetBlue, once the voice on the other end of the phone materialized I assumed I could breathe a sigh of relief. The problem would be cleared up and I could get these tasks off of my plate. No such luck. The experience continued to deteriorate over the phone. Two calls mysteriously dropped midway through my help session. Remaining calm, I continued to forge on. I was told twice in no uncertain terms that the website would not allow me to do what I had done, so the service rep would not instruct me on how to rectify the situation and properly register my family for the program offered. I had been allowed to invite minors to my family pool without a frequent flyer number—there were no instructions to tell me otherwise—and the UI allowed me to accomplish this. I was fortunate that in this case an actual confirmation e-mail had been sent to me. This did not help in promoting my cause or rectifying the situation with the service rep, who told me, “No, that can’t happen.” “Hmmm,” I thought, “then what am I looking at?” The steps to sign up for the program had no instructions and were completely undiscoverable, so much so that, in the end, the representative had to put me on hold for 10 minutes to go and somehow delete whatever had magically been made possible via the website and sign me up manually. The steps that followed to get my family “registered” were also not discoverable, and I could not have completed the process (and remember, I am a seasoned computer user/UX designer) without the assistance of the customer service representative. During the two hours that it took to accomplish this 10-minute task, my husband must have asked me 10 times, “Is this really worth it?” while my oldest child intermittently hollered, “Virgin, we should have flown Virgin.”
Well, our tickets were already purchased, but how many other people are abandoning this program or the airline altogether based on these types of poor user experiences? I finally did get us all signed up for the program, although we are still not sure whether my miles are in this pool or not—hard to tell! I am excited for the trip, regardless of the UX fail. However, will I choose this airline next time I travel? That depends on my experience.

In the other case I spent upwards of 90 minutes painstakingly entering information, photos, and receipts into a highly unusable form and process with little or no instruction, only to hear nothing back from Disney until I picked up a phone and called them directly, two weeks later. This was due to a lost activation code for a child’s CD, printed on a loose piece of paper that is easily lost by an excited child waiting to watch the latest release of a Disney flick! Why not print the code on the CD or on the insert in the case? I did pay for it. Why is it so difficult to get this number back, or to more securely adhere it to the packaging? I will admit, once when attempting to use their site to book a trip to Disney World, I became so dismayed that we stayed outside the park. Sorry, Walt. Of course I was not allowed to enter a new case for this issue, as one for this product ID already existed! UX fail.

My last and perhaps most disturbing experience is with a tactic used by many sites to passively, and it seems to me questionably, collect revenue from unknowing users. Funny enough, the audience is busy parents! You check a box that says “Do not auto charge my account after my selected pay period ends” and lo and behold, you are charged anyway. And when you call to contest the charges, assuming that you catch them and call immediately, they will apologetically “refund” your money. Relying on the fact that you can’t recall whether you actually checked off that little option box—and where oh where was that little box anyway?—has left me saying, UX fail.

As an experience designer I have been left wondering what to think about these experiences.

My conclusion is this: Scenarios that do not generate obvious revenue are not given UX priority or the attention needed to craft an elegant and usable recovery experience. Recovery experiences are important scenarios that, if not carefully considered, will eventually result in lost revenue. Evidence of this can be seen in Jared Spool’s blog post “The $300 Million Button.” The business intent of the paradigm was to get users registered; users who did not want to register became frustrated and abandoned their purchases. Once this was identified and rectified, there was a significant increase in revenue. User abandonment of a product or brand can happen as often after a purchase as before or during one. Companies need to embrace UX design in addressing the end-to-end experience and how it impacts the business, from the customer perspective and not just through the revenue-generating channels. This will lead to a better user experience, repeat customers, and increased revenue. That is what a good UX can do!


Monica Granfield

Monica Granfield is a user experience designer at Go Design LLC.

Philosophical robbery

Authors: Jonathan Grudin
Posted: Wed, May 28, 2014 - 10:19:20

In 1868 I read Dr. Holmes's poems, in the Sandwich Islands. A year and a half later I stole his dedication, without knowing it, and used it to dedicate my "Innocents Abroad" with. Ten years afterward I was talking with Dr. Holmes about it. He was not an ignorant ass—no, not he; and so when I said, "I know now where I stole, but who did you steal it from?" he said, "I don't remember; I only know I stole it from somebody, because I have never originated anything altogether myself, nor met anybody who had."

—Samuel Clemens (Mark Twain) in a letter to Anne Macy, reprinted in Anne Sullivan Macy, The Story Behind Helen Keller. Doubleday, Doran, and Co., 1933.

Accounts of plagiarism are epidemic. Charged are book authors, students, journalists, scientists, executives, and politicians. Technology makes it easier to find, cut, and paste another’s words—and easier to detect transgressions. Quotation marks and a citation only sometimes address the issue. Cat and mouse race to build tools for borrowing and for detection, but technology is shifting the underlying context in ways that will matter more.

Plagiarism or synthesis: Plague or progress?

We appreciate novelty in art and technology. We may also nod at the adage, “There is nothing new under the sun.” Twain isn’t alone in questioning the emphasis on originality that emerged in the Enlightenment. Arthur Koestler’s The Act of Creation is a compelling analysis of the borrowing that underlies literary and scientific achievement. Although we encourage writers to cite influences, we know that a full accounting isn’t possible. Further complicating any analysis is the prevalence of cryptomnesia or unconscious borrowing, which fascinated Twain and has been experimentally demonstrated. Writers of undeniable originality, such as Friedrich Nietzsche, borrowed heavily without realizing it.

Believing that an idea is original could motivate one to work harder, perhaps borrowing more and building a stronger synthesis. The aspiration to be original could have this benefit. I’ve seen students, faculty, and product designers lose interest when shown that their work was not entirely “invented here.” They might have been more productive if unaware of the precedent.

An earlier post on creativity, which cited a professor who directs students to submit work in which every sentence is borrowed, proposed that the availability of information and the visibility of precedents will shift our focus from originality to a stronger embrace of synthesis. It seems a cop-out to say that synthesis is a form of originality. The distinction is evident in “NIH syndrome,” the reluctance to build overtly on the work of others. 

Prior to considering when citation is and is not required or perhaps even a good thing, let’s establish that there is no universal agreement on best practices.

Cultural differences

Some professors say, “I learn more from my students than they do from me.” As a professor I learned from students, but I hope they learned more from me, because I was a slow learner. One afternoon at culturally heterogeneous UC Irvine, I realized that a grade-grubber who had all term shown no respect for my time by arguing endlessly for points had in fact been sincerely demonstrating respect for the course and for my regard, which he felt a higher grade would reflect. Raised in a haggling culture abroad, he assumed that I understood that his efforts demonstrated respect, and almost fell on the floor in terror when I said mildly and constructively that he was developing a reputation for being difficult. It had a happy ending.

The faculty shared plagiarism stories. My first lecture in a “technology and society” course included a plagiarism handout. I explained it, asked if they understood it, and sometimes asked everyone who planned to plagiarize to raise a hand “because one of you probably will, and it will be a lot easier if you let me know now.” Gentler than some colleagues, I only failed a plagiarist on the assignment. But that was enough to affect the grades of students, many of whom were Asian Americans whose families counted on them to become engineers. Parents dropped some at the university in the morning and picked them up in the afternoon.

In 1995 I spent a sabbatical in a top lab at a leading Asian university. I discovered that uncited quotation was acceptable. Students plagiarized liberally. Uncited sentences and paragraphs from my publications turned up in term papers for my class. I thought, “OK, we make a big deal of quotation marks and a reference. They don’t.” This didn’t shock me. My first degrees were in math and physics, where proof originators were rarely cited or mentioned. No “Newton, 1687.”

An end-of-term event riveted my attention. Each senior undergraduate in the lab was assigned a paper to present to the faculty and students as their own work, in the first person singular! Organized plagiarism! It was brilliant. The student must understand the work inside out. A student who is asked “Why did you include a certain step?” can’t say “I don’t know why the authors did that.”

I recognized it. I once took a method acting class. Good actors are plagiarists, marshalling their resources by convincing themselves that the words in a script are their own. Plagiarism as an effective teaching device!? Be that as it may, after years of teaching, one of the two grades I regret giving was to a fellow who, before my discovery, may have followed parental guidance: work hard to find and reproduce relevant passages. He just hadn’t absorbed our custom of bracketing them with small curlicues.

Copyright violation

Plagiarism is not a crime, but violating copyright is. U.S. copyright law isn’t fully sorted out, but it represents a weighing of commercial and use issues, and a not yet fully defined concept of “fair use” exceptions that considers the length, percentage, and centrality of the reproduced material, the effect of copying on the market value of the original, and the intent (a parody or critical review that reduces the original’s market value may include excerpts).

My focus is on ethical and originality considerations, so for copyright infringement guidance consult your attorney. I once inquired into how much a copyrighted paper must be changed to republish it. I found a vast gulf in opinion between seasoned authors (“very little”) and publishers (“most of it”). Publishers haven’t seemed to bother about scientific work, but with plagiarism-detection and micropayment-collection software, that could change.

Factors in weighing originality and ethics

1. How exact is the copy, from identical to paraphrase to “idea theft”? What is the transgression—lack of giving credit? An explicit or implied false assertion of originality or effort? A false claim to understand the material?

Students are told, “Put it in your own words, then it isn’t plagiarism.” This is true when the information is general knowledge. Paraphrasing a passage from a textbook, a lecture, or a friend’s work may suffice. Information from a unique source, such as a published paper, generally deserves and is improved by a source citation.

If omitting a citation causes readers to infer that an author originated the work, it crosses the line. For example, a journalist who uses the work of other journalists, even if every sentence is rewritten, creates a false impression of having done the reporting and is considered a plagiarist. Crediting the original journalist solves the problem if copyright isn’t violated. There are grey areas—reports of press conferences may not identify those who asked the questions. When a copyright expires, anyone can publish the work, but to not credit the author would be bad form.

With student work that is intended to develop or demonstrate mastery, copying undermines the basic intent. Especially digital copying—some teachers have students write out work by hand, figuring that even if copied from a friend’s paper, something could stick as it goes from eyes through brain to fingers. For a student who has truly mastered a concept, copying “busy work” is less troubling. (We hope computer-based adaptive learning, like one-to-one tutoring, will reduce busy work.)

Idea theft is an often-expressed concern of students and faculty. We may agree that ideas are cheap and following through is the hard part, but to credit a source of an idea is appropriate even when the borrowing is conceptual.

2. When is attribution insufficient?

As noted above, attribution won’t shield an author from illegal copyright violation. Although the law is unsettled, copying with or without attribution may be allowed for “transformative works” to which the borrower has made substantive additions. Transformative use wouldn’t justify idea theft—finding inspiration in the work of others is routine, but not developing the idea of someone who might intend to develop it further.

3. When is attribution unnecessary? How is technology changing this?

In cases of cryptomnesia or unconscious plagiarism of the sort Mark Twain owned up to, attribution is absent because the author is unaware of the theft. Experiments have shown that unconscious borrowing is easy to induce and undoubtedly widespread. Nevertheless, a few years ago, a young author had a positively reviewed book withdrawn by the publisher after parallels were noticed in a book she acknowledged having read often and loved. The media feeding frenzy was unjustified; it was clearly cryptomnesia, with few or no passages reproduced verbatim.

Homer passed on epic tales without crediting those he learned them from. Oral cultures can’t afford the baggage. Change was slow and is not complete, and today cultures vary in their distance from oral traditions. When printing arrived, “philosophical robbery” was rampant. Early journals reprinted material from other journals without permission. Benjamin Franklin invented some of his maxims and appropriated others without credit. Only recently have we decided to expend paper and ink to credit past and present colleagues for the benefit of present and future readers.

Shakespeare borrowed heavily from an earlier Italian work in writing Romeo and Juliet, on which 1957’s West Side Story was based. The first version of West Side Story was shelved in 1947 when the authors realized how much they’d borrowed from other plays that were also based on Shakespeare. Twain again: “Substantially all ideas are second-hand, consciously and unconsciously drawn from a million outside sources, and daily used by the garnerer with a pride and satisfaction born of the superstition that he originated them; whereas there is not a rag of originality about them anywhere except the little discoloration they get from his mental and moral calibre and his temperament, which is revealed in characteristics of phrasing. . . . It takes a thousand men to invent a telegraph, or a steam engine, or a phonograph, or a photograph, or a telephone, or any other important thing—and the last man gets the credit and we forget the others. He added his little mite—that is all he did.”

“Gladwellesque” books are artful syntheses of others’ work. Some of the contributing scholars may grumble that they should share in royalties, but at least they get credit, which is their due and which gives authority to a synthesis. However, a writer constantly weighs when contributions merit overt credit. In the natural sciences citation is often omitted. It slows the reader and distracts from the elegance of the pure science. It forces a writer to take sides in historical paternity/maternity quarrels and decide whether his or her slight improvement in the elegance of a proof also merits mention.

In less formal writing the custom is to acknowledge less. Magazines may limit or altogether eliminate citations, allowing only occasional mentions in the text.

Consider this essay. I cited a primary source for the Twain quotations, but not the secondary source where I found them. I didn’t note that the first complaint of “philosophical robbery” that I know about was by the chemist Robert Boyle soon after the printing press came into use, or where I learned that. I didn’t credit Wikipedia for the origins of West Side Story.

Technology is rapidly expanding the realm of information that I consider common knowledge that needs no citation. My rule of thumb is that if a reader can find the source in fewer than five seconds with a search engine and obvious keywords, I don’t need to cite it, although sometimes I will. For example, anyone can quickly learn that “there is nothing new under the sun” comes from Ecclesiastes.

Reasons for omission vary. I provided a source for cryptomnesia but not for the author who fell afoul of it, feeling that after the lack of media generosity she has a “right to be forgotten.” It is also a tangled web we weave when we practice to communicate directly (a transformative borrowing I shall leave uncredited).

As we focus on building plagiarism detectors to trip up students, technology will make all of our borrowings more visible, the conscious and the unconscious. There is no turning back, but the emerging emphasis on synthesis may resonate more with the oral tradition of aggregation than with the recent focus on individual analysis. In the swarm as in the tribe, credit is unnecessary.


Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

Bringing together designers, ePatients, and medical personnel

Authors: Richard Anderson
Posted: Fri, May 23, 2014 - 11:06:54

Back in 1989–1991, I served on the committee that founded BayCHI, the San Francisco Bay Area chapter of ACM SIGCHI. I became its first elected chair and served as its first appointed program chair for 12 years. I also served as SIGCHI’s Local Chapters chair for five years, supporting the founding and development of SIGCHI chapters around the world.

Much has happened since then. Perhaps of greatest significance were my horrific experiences with the U.S. healthcare system. My healthcare nightmare changed my life and has prompted me to focus on what can be done to dramatically redesign the healthcare system and the patient experience. Indeed, several of my Interactions blog posts reflect that focus, with a large part of that focus being on changing the roles and relationships of and between patients and medical personnel and designers. You’ll see that in, for example, “Utilizing patients in the experience design process,” “Learning from ePatient (scholar)s,” “Are you trying to solve the right problem?,” “The importance of the social to achieving the personal,” and “No more worshiping at the altar of our cathedrals of business.”

All this has led me to start a new local chapter, but this one is not of SIGCHI. This one is for a combination of ePatients, medical personnel, and designers. This one is for changing the healthcare system. This one is the first local chapter of the Society for Participatory Medicine.

Topics/issues to be addressed by the chapter should be of interest to many Interactions readers. They include the ePatient movement, peer-to-peer healthcare, other uses of social media in healthcare, human-centered healthcare design and innovation, doctors and patients as designers, the quantified self, patient and doctor engagement, empathy, healthcare technology, patient experiences of the healthcare system, and more. When Jon Kolko and I were the editors-in-chief of Interactions, we published many articles addressing these kinds of topics/issues. One of them was a cover story entitled “Reframing health to embrace design of our own well-being.” (Somewhat coincidentally, two of the article’s authors made a presentation about the content of the article at a BayCHI meeting.)

If you reside anywhere in the San Francisco Bay Area and are interested in the topics/issues listed above, I invite you to join this new local chapter. If you know of others in the San Francisco Bay Area who you think might be interested, please let them know about the group as well.

The chapter is just starting. Indeed, our first meeting has not yet been scheduled, as I'm still seeking venue options (and sponsors). If you know of any venue (or sponsor) possibilities, please let me know.

It feels good to be getting back into the local chapter business. I hope you’ll check us out.


Richard Anderson

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.

Why teaching tech matters

Authors: Ashley Karr
Posted: Fri, May 16, 2014 - 10:12:56

Education is one of the most valuable ways that we can improve quality of life for ourselves and others. This improvement applies to teachers as well as students. Having just taught a ten-week user experience design immersive course, I am keenly aware of how my life is better as a result. The following is a list of how my life has improved thanks to my co-instructor, our students, course producer, and supportive staff:

Meaning and engagement 

Before I began this course, I was burnt out. My work had lost meaning and engagement. I had been building technology for organizations and groups that had lost their soul and passion for good design. Instead, they pursued profit and fantasy deadlines. The irony of engaging in a type of engineering called “human-centered design” and “user experience design” in environments like this was not amusing. After the first day of this course, meaning and engagement had found their way back into my Monday through Friday 9 to 5. Why? The people. But I will write more about that later.


I found my way into anthropology, human factors engineering, human-computer interaction (HCI), and user experience (UX) design because I truly care about our world, its people, and other living creatures. I saw how these fields could help me operationalize my instinct to help. Additionally, by leveraging the power of computing technology, I could help a lot of people with minimal effort. (Spoken like a true humanitarian engineer!)

What I now realize is that by teaching others to be empathetic, ethical, human-centered designers and makers of computing technology, I am leveraging the boundless energy and power of my students as well. In the past ten weeks, my co-instructor and I have overseen roughly eighty projects, and a number of these have the potential to become consumer-facing. Our eighteen students will go on to rich careers in UX, and I think it safe to say that each will be involved in at least one small design improvement per month for the next ten years at minimum. That means that my co-instructor and I will be part of at the very least 2,160 design improvements over the next decade, because they will draw from the principles learned in our class to do their jobs properly. If each of those design improvements saves ten million people one minute of their time on a mundane task like bill pay, 21,600,000,000 minutes have been freed to spend on hugging children, taking deep breaths, and other meaningful things.
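
The back-of-the-envelope arithmetic above is easy to check in a few lines; the per-improvement reach and per-person time savings are, of course, the optimistic assumptions stated in the text, not measured figures.

```python
# Check the back-of-envelope numbers from the paragraph above.
students = 18
improvements_per_student_per_month = 1
months = 12 * 10                      # ten years

improvements = students * improvements_per_student_per_month * months
print(improvements)                   # 2160 design improvements in a decade

people_per_improvement = 10_000_000   # assumed reach of each improvement
minutes_saved_per_person = 1          # assumed saving per person

minutes_freed = improvements * people_per_improvement * minutes_saved_per_person
print(f"{minutes_freed:,}")           # 21,600,000,000 minutes freed
```
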

Deepening my KSAOs

KSAOs are the job-related knowledge, skills, attitudes, and other characteristics necessary to perform one’s job successfully. When one teaches a subject, one’s KSAOs deepen, because teaching is not separate from but an integral part of the learning process. My co-instructor and I feel that we’ve learned more than our students by teaching them UX fundamentals. I am not suggesting that the students haven’t paid attention—on the contrary! It is their questioning, challenging, creating, and building upon what we’ve taught them that has enabled us to achieve an even greater mastery of our trade. Interestingly, I can now discern which of my professional peers have taught and which have not. Teaching, like parenting, gives one a sense of humility and compassion that is hard to reach without the challenges that students and children place in front of you. I have to add, in closing this paragraph, that my co-instructor and I are very, very proud of our students for thinking critically, independently, and deeply about design and technology—the good kind of proud.

Community and personal relationships

It is always and forever about the people. When I applied for the position to teach the UX design immersive course, I focused on the students and the relationships I would create with them. I thought about the people they were before applying for the course, the experiences we would have together over ten weeks, and the people they would become after graduation. I was so excited to meet the students on the first day of class, see their faces, hear their voices, and get to know them as people rather than social media profiles. The relationships I have built with my students mean more than I had anticipated, and I have the added joy of finding life-long friends in my co-instructor, Jill; the course producer, Jaime; and the staff that supported us through the course. Beyond that, I have met many wonderful guest speakers, leaders, community organizers, and other professionals who have also enriched my experience and career. I am so thankful that all these wonderful people are now in my life, and that we are all working at least forty hours per week, two thousand hours a year, to help make the world a better place. I get chills when I think about it.


For no reason whatsoever, I was born into a situation where I had opportunities that few people have ever had. I am a literate, educated, financially independent person developing cutting-edge technology. I am keenly aware of and grateful for these opportunities and believe them accidents of history and birth and not something that I earned or deserve. It seems that teaching others to be empathetic, ethical, human-centered designers of computing technology is a good way of making sure this privilege is not wasted.


I am happy to say that I am no longer burnt out, and I have rediscovered meaning and engagement in my work thanks to my students, co-instructor, course producer, and supportive staff. I have found that when I lack inspiration, motivation, and energy, what I am missing are quality relationships and interactions with my peers and colleagues. 

In conclusion

I hope this inspiration carries over to you, the reader, and inspires you to become a teacher or mentor. I will end this essay with a direct quote from my co-instructor and good friend, Jill DaSilva. Before I sat down to write this article, I asked her why she decided to teach this course. Here is her answer:

I teach because I have the opportunity to give back. At a time in my life when I needed to support my son and myself, there were people there to help me, teach me, and give me the chance to do what I loved for a living. I’m paying it forward. Also, what we make is meaningful, and I get to teach our students how to create things that improve other people’s circumstances. If we can remove suffering and increase happiness through what we make, then we are living good lives.

Posted: Fri, May 16, 2014 - 10:12:56

Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm.

Designing the cognitive future, part IV: Learning and child development

Authors: Juan Pablo Hourcade
Posted: Thu, May 15, 2014 - 10:29:54

In this post, I discuss how technology may affect learning and child development in the future, and how the HCI community can play a role in shaping what happens.

Let’s start with a quick primer on some of the latest theories on child development, such as dynamic systems theories and connectionism. These theories attempt to bridge what we know about the biology of the brain with well-established higher-level views on development from Piagetian and socio-cultural traditions. These theories see learning as change, and study how change happens.

One of the main emphases of these theories is on the notion of embodiment. They see learning and development occurring through interactions between the brain, the body, and the environment (including other people). When we learn to complete a task, we learn how to do it with our bodies, using the resources available in the environment. As learning, change, and development occur, the brain, the body, and the environment learn, change, and develop together.

These approaches also bring a “biological systems” view of the brain, with small components working together to accomplish tasks, and knowledge representations, behaviors, and skills emerging over time. Emerging skills, for example, are likely to show a great deal of variability initially, with the best alternatives becoming more likely over time. This also links to the concept of plasticity, where it is much easier to change behavior and learn new skills for younger people (they also show greater variability in behavior) but it is more challenging later in life.

So how does all this link to technology? I think technology brings significant challenges and opportunities. The biggest change, perhaps the most radical in the history of humanity, is in the environments with which children may interact in the future. The richness of these environments, and the ability to modify and develop with them will be unprecedented. In particular, there is a potential to give children access to appealing media to build and learn things that match their interests. Much of the research at the Interaction Design and Children (IDC) conference follows this path.

The biggest challenge is in making sure that technology doesn’t get in the way of the human connections that are paramount to child development. A secure attachment to primary caregivers (usually parents) plays a prominent role in helping children feel secure, regulate their emotions, learn to communicate, connect with others, self-reflect, and explore the world with confidence. We have increasing evidence that interactive devices are not always helping in this respect. For example, a recent study by Radesky and colleagues at Boston Medical Center found that parental use of interactive devices during meals led to negative interactions with children. 

Likewise, when providing children with access to interactive media, we need to make sure that this happens in a positive literacy environment. Typical characteristics of positive literacy environments include shared activities (e.g., reading a book or experiencing educational media together) and quality engagement by primary caregivers (e.g., use of wide, positive vocabulary). Obviously, access to appropriate media is also necessary. What are some characteristics to look for? The better options will provide open-ended possibilities, encourage or involve rich social interactions, and incorporate symbolic play and even physical activity.

So how should we design the future of learning? One path is to replace busy parents and teachers with interactive media that take their place, and may even provide children with emotional bonds (similar to the film Her), making sure they are able to accomplish tasks according to standardized measures. The path I would prefer is for technology to enrich the connections between children, caregivers, teachers, and peers; to expand our ways of communicating; to provide more options for engaging in activities together; and to enable self-expression, creativity, and exploration in unprecedented ways. 

How would you design the future of learning?


Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.

Margaret Atwood: Too big to fail?

Authors: Deborah Tatar
Posted: Mon, May 12, 2014 - 10:27:33

Margaret Atwood gave the opening plenary for the CHI conference in Toronto in late April. When Atwood’s name was announced at the Associate Chair meeting the prior December, the audience was divided into two groups, the “Who?” group and the group that gasped “The Margaret Atwood?” Even though the second group was significantly smaller than the first, Atwood’s presence was a coup for the conference. In retrospect, we were the beneficiaries of her slide into entrepreneurial endeavor. We benefited in two ways: first because her entrepreneurial interests are probably why she accepted the gig and then because she brought her narrative powers to describing design development. More than one person in the audience muttered that she was the deciding factor in conference attendance, some because she is a great writer, and some because she is a kind of futurist. The rest came to her keynote because CHI told them to. Luckily, because she is a great writer and a futurist, she could not fail to please, even with a presentation primarily based on voice and content alone. She used PowerPoint, but only to illustrate, not to structure. She did please.

To my mind, the most interesting part was the description of her childhood in the far north of Quebec, without running water, school, contact with the outside, or friends. She and her brother engaged in the kind of intense creative endeavor that the Brontë children (as in Charlotte, Emily, and Anne; authors of Jane Eyre, Wuthering Heights, and The Tenant of Wildfell Hall, respectively) did in the early 19th century in the Yorkshire Moors, but happily neither Atwood nor her brother died early of tuberculosis. Also, happily, Atwood was influenced by Flash Gordon rather than Pilgrim’s Progress. And her experience with the can-do (actually, the must-do) spirit required for existence in the wild contributed to her intrepid voice.

Her talk was charming and interesting. And she correctly pointed out the importance of self-driven, unstructured exploration in creativity. In fact, her discussion was very similar to a speech Helen Caldicott, the founder of Physicians for Social Responsibility, gave in the mid-1990s, reminiscing on the dangerous chances that were an everyday part of her childhood in Australia and the judgment that skirting such dangers taught. My own paper on playground games and the dissemination of control in computing (DIS 2008) was based on that talk as well as two other factors: my own memory of routine freedom in my own childhood in Ohio (“Just be home in time for dinner!”) and Buck’s Rock Creative and Performing Arts Camp in New Milford, Connecticut. Buck’s Rock was founded in 1942 by German refugees and permitted students to choose their own activities all day long, every day. The brief, blissful, and extremely expensive month I spent at what was then called “Buck’s Rock Work Camp” in the summer of 1973 set my internal compass up for life.

Despite the considerable interest of her story, Ms. Atwood was deeply wrong in one respect, and in some way it is her error rather than her perception that is the important take-away. She repeated at least twice, and perhaps more often, that we cannot build what we cannot imagine. 

Oh, if only she were right! But she is wrong. She ignores the existence of banks too big to fail. We might, more formally, refer to this and related phenomena as emergent effects. Mitch Resnick, Uri Wilensky, and Walter Stroup have been writing for years about teaching children to model the emergent effects of complex systems, using a distributed parallel version of the Logo computer language called StarLogo. Mitch has a lovely small 1997 book from MIT Press called Turtles, Termites, and Traffic Jams. And of course the notion of complexity theory as pursued at the Santa Fe Institute brings formality and rigor to the structure of information.

These are some of the intellectual roots of Big Data. More significant is the practical consequence as Big Data increasingly controls freedom of action. The fear is that, armed with information, the incessant insistence of the computer that I recently wrote about in an Interactions feature will fragment and disperse the unofficial mechanisms that the powerless have always used to get influence. When we let big corporations, dribble-by-dribble, have our information, we do not intend to make a world designed only by monetization. Indeed, I’m sure that at one point the Google founders actually thought that they would “do no evil.” Unintended consequences. Orwell could imagine 1984 but he could not imagine the little steps, quirks, and limitations by which 1984 would be bootstrapped. 

Ironically, I acquired this sensitivity to the power of the computer to obliterate the mechanisms of the already disenfranchised in part by reading The Handmaid’s Tale, a book by Margaret Atwood. 


Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.

Wireframes defined

Authors: Ashley Karr
Posted: Tue, April 29, 2014 - 1:18:59

Takeaway: Wireframing is the phase of the design process where thoughts become tangible. A wireframe is a visual 2D model of a 3D object. Within website design, it is a basic visual guide representing the layout or skeletal framework of a web interface. Page schematic and screen blueprint are frequently used synonyms.

When user experience (UX) professionals create wireframes, they arrange user interface elements to best accomplish a particular, predetermined task or purpose. They focus on function, behavior, content priority and placement, page layout, and navigational systems. Wireframes lack graphics and a fancy look and feel. Wireframing is an effective rapid prototyping technique in part because it saves huge amounts of time and money: It allows designers to measure a design concept's practicality and efficacy without a large investment of these two very important resources. It combines high-level structural work, such as flow charts, site maps, and screen design, and connects the underlying conceptual structure (the information architecture) to the design's surface (the user interface). Designers use wireframes for mobile sites, computer applications, and other screen-based products that involve human-computer interaction (HCI).

The following are a few best practices for wireframing in the digital space:

  • Be a planner. Gather information before you start jumping into wireframes. Make sure you, your team, and your clients are clear on the design's missions, goals, objectives, and functions. Make sure you are also just as clear on your stakeholders and users. (Did you remember that maintenance workers are design users, as well? No? Better drop the wireframe and spend a bit more time thinking...) 

  • Be a philosopher. Wireframing is "a finger pointing away to the moon. Don't concentrate on the finger or you will miss all that heavenly glory." Yes, I just quoted Bruce Lee in Enter the Dragon. How does this apply to wireframing? I will tell you. People tend to get hung up on the medium and not the quality and appropriateness of the wireframe. Wireframing programs are a dime a dozen. In my real-life, actual UX experience, I have discovered that people LOVE paper wireframes. They get excited when they are finally allowed to unplug their fingers from the keyboard, peel their eyes away from the screen, and do batteries-not-included usability tests outside in a courtyard chock-full of pigeons, fountains, babies in strollers, and dogs playing fetch with their owners.

  • Be a child. So many people are afraid of being wrong or seemingly silly in a professional environment that they paralyze their creativity. If you are one of those people who worry so much about what others may think of you during a brainstorming session, go hang out with pre-schoolers for a morning. Dump a big pile of Crayolas on a table, fling around some colored construction paper, and notice what happens.

Wireframing is neither new nor innovative. Humans have wireframed for millennia—sketching out inspirations for new inventions, drafting designs for buildings and civil engineering projects, and developing schematics for massive travel and communication networks. Arguably, our first medium was the cave wall and a lump of charcoal left over from the previous night's fire. As human technology evolved from these prehistoric wares to papyrus to paper to computerized 3D-modeling programs and interactive systems, such as Balsamiq, Axure, and InDesign, what and how we wireframe has evolved in step. However, why we wireframe and wireframing best practices are timeless. In order to develop a viable final product, be it the Parthenon or a mobile phone app to track blood sugar levels for diabetics, humans have depended upon wireframes to organize and synchronize the design team's efforts and test early iterations to avoid disaster and make a good faith attempt at success.


Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm.

True digital natives

Authors: Jonathan Grudin
Posted: Tue, April 22, 2014 - 12:49:32

They’re coming. They may not yet be recognizable, but some are walking—or crawling—among us.

The term digital native was coined in 2001 to describe technology-using youths, some of whom are now approaching middle age. At an early age they used family computers at home. They took computer skills classes in school. They met for other classes in computer labs or had device carts wheeled in. They acquired mobile phones as they approached their teen years.

They are not intimidated by tech. But they aren’t fully digital. Paper and three-ring binders are still alive and well in schools. Last September, Seattle was plagued for weeks by a shortage of bound quadrille-ruled notebooks. Many schools still ban mobile phone use, reinforcing students’ longstanding suspicion that school has little to do with the phone-tethered world outside. Scattered reports of BYOD in the workplace alarm IT professionals, but employers generally assume that new hires will adopt the technology that comes with a job. Enterprises see the disappearance of technophobia as a plus, failing to anticipate the new challenges that will accompany greater technophilia.

Today, a different cohort is starting to emerge. Psychologically different. Not the final stage of digital evolution, but a significant change.


A previous post described forces behind the spread of device-per-student deployments: changes in pedagogy and assessment methods, sharply declining prices resulting from Moore’s law, manufacturing efficiencies, and the economies of scale that accompany growing demand.

“One laptop per child” visions began almost half a century ago with Alan Kay’s Dynabook concept. Kay pursued education initiatives for decades. The nine-year-old OLPC consortium aimed unsuccessfully for a $100 device, encountering technical and organizational challenges. The site’s once-active blog has been quiet for six months. Its Wikipedia page reflects no new developments for two years. Media accounts consist of claims that OLPC has closed its doors; these are disputed, but the debate speaks for itself.

I don’t question the potential of digital technology in education. Yes, pedagogy, compensation and ongoing professional development for teachers, and infrastructure are higher priorities to which OLPC might have paid more attention. But digital technology is so fluid—when there is enough, it will find its way. OLPC was cycles of Moore’s law ahead of itself—but how many cycles? If nine years wasn’t enough, might another two or three suffice? Pedagogy is improving and infrastructure is coming into place. Support for teachers is the one area of uncertainty; let’s hope it picks up.

Insofar as technology is concerned, the light at the end of the long tunnel is getting bright. Capability grows and cost declines. For the price of several laptop carts five years ago, a school can provide all students with tablets that can do more. And 1:1 makes a tremendous difference.

The obvious difference is greater use, which leads to knowledge of where and how to use technology, and when to avoid using it. Students who use a device for a few hours a week can’t acquire the familiarity and skills of those who carry one to every class, on field trips, and home.

Some features make little sense until use is 1:1. Consider a high-resolution digital pen. Most of us sketch and take notes on paper, but for serious work we type and use graphics packages. Education is different: For both students and teachers, handwriting and sketching are part of the final product. Students don’t type up handwritten class notes or algebraic equations. They draw the parts of a cell, light going through lenses, and history timelines. Teachers mark papers by hand. When lecturing, they guide student attention by underlining, circling, and drawing connecting arrows.

Only when students carry a device can they use it to take notes in every class. When everyone has a high-resolution digital pen, a class can completely eliminate the use of paper. It is happening now. It would happen faster were it not for the familiar customer-user distinction. The customers—such as school board members deciding on technology acquisition—think, “I don’t use a digital pen and I’m successful; isn’t it a frill that costs several dollars per device and is easily lost or broken?” They don’t see that reduced use of paper and substantial efficiency gains will yield net savings. They don’t realize that students who are familiar with the technology will use it to become more productive workers than their predecessors, including those who are today making the purchasing decisions.

There are unknowns. We have learned that when everything is digital, anything can appear anywhere at any time, for better or worse. But in the protected world we strive to maintain for children, digital technology can and I believe will be a powerful positive force. The world’s schools have started crossing that line. A flood will follow.

Leveling the field

When prices fall and other features come into alignment, future use will resemble today’s high-functionality tablets with active digital pens. These devices are not over-featured and are already much less expensive than a few years ago.

The flexibility of well-managed digital technology supports a range of learning styles. An elementary school teacher whose class I visited last week said that her greatest surprise was that struggling students benefited as much as or more than very capable students: The technology “helps level the playing field.” This echoed other conversations I have had. A teacher who was initially skeptical about a new math textbook remarked that after a year he was convinced: The adaptive supplementary materials accessible on the Internet “keep any student from falling through the cracks.” He still felt the textbook was weak on collaboration and other “21st century skills,” but concluded “a good teacher will add them.” In a third school, a teacher recorded parts of lectures as he gave them, using software that captured voice, video, and digital pen input. He then put them online for students who missed class, were not paying attention, or needed to view it a second time. On some occasions when he was not recording, students asked him to.

The most dramatic leveling occurs when technology allows students with sensory and other limitations to use computers for the first time. When I first saw a range of accessibility accessories and applications in active use a year ago, it was eye-opening: Children who had been cut off from the world of computing that we take for granted could suddenly participate fully. It took me by surprise. The tears streaming down my face were not of joy—I felt the isolation and helplessness they had lived with.

Only a device that supports keyboard, pen, voice, and video input along with software that supports a range of content creation, communication, and collaboration activities will realize the full potential. However, 1:1 deployment of any device—tablet PCs, Kindles, iPads, Chromebooks—when accompanied by appropriate pedagogy, professional development, and infrastructure not only provides benefit: It is fundamentally transformative, as described in the next section.

From direction to negotiation

When a computer is used in a lab, delivered by a device cart, or engaged with for part of a class period in a station rotation model, the teacher controls when and how it is used. When a student carries a device everywhere, use is negotiated. Students can take notes digitally in a technophobic instructor’s class. Students, teachers, and parents decide, with students often the most knowledgeable party.

The psychological shift with 1:1 goes deep. Students can and often do personalize their devices in various ways. Their sense of responsibility for the tool and its use creates a symbiosis that didn’t exist before. New hires today might use what they are given—that is how they were trained in school! Tomorrow’s students who arrive with years of responsibility for making decisions will bring a knowledge of how they can use digital technology effectively and efficiently. They will expect to participate in decisions. They may or may not use what they grew up with, but they’ll know what they want.

1:1 classroom experiments are underway and succeeding, even in K-5. These kids may not take devices home, but when they are not reading books, playing outdoors, and interacting with family members, they will probably find a device to use there as well.

Born digital

What next? These could be early days. Moore’s law hasn’t yet been revoked. Harvested energy R&D moves forward. Imagine: An expectant mother swallows a cocktail of vitamins, minerals, proteins, and digital microbes that find their way to the fetus. In addition to monitoring fetal health, will the sentinels serenade it with Mozart, drill on SAT questions, introduce basic computing concepts? Born digital—the term is already in use, but we have no idea.

Thanks to Clayton Lewis for comments and discussion.


Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

Three fundamentals for defining a design strategy

Authors: Uday Gajendar
Posted: Tue, April 15, 2014 - 11:04:47

The other day I tweeted this out while grappling with what I’m trying to accomplish in my new role as Director of UX at a Big Data start-up: "Creating strategy (& vision) is about understanding the essence, exploring the potential & defining the expression, in an integrative way.” I’d like to delve a bit deeper into this spontaneously conveyed moment of personal profundity!

First, there are literally tons of books out there postulating on “strategy,” and many articles linking business-oriented concepts to “design.” One can spend weeks or months studying them (I’ve read quite a few, no doubt) but until you’re in the midst of being singularly burdened with the responsibility to define a viable, feasible design strategy (and correlated vision) for a team, company, and product, such readings just aren’t enough. Once you realize what’s around you—the sheer magnitude of the opportunity—then you’re able to peek into the milieu of ambiguity and complexity that comprises strategy. And that’s when you see that it’s fundamentally about three basic things:

  • Understand the essence. I once had a very tough Color Theory professor while an undergrad, who one day declared, “In order to master color, you must understand its essence.” And with some noticeable exasperation reinforced by a stern glare, he added: "Are you interested in… essence?!” Harrumph! Well then. It took a very long time, but I eventually realized he meant that you must deeply intuit—to the level of personal resonance—the purpose, value, and raison d’être for existence of color within a certain context. So what is strategy for, ultimately? I don’t mean some banal, trite “value prop” bullet point for a VC “pitch deck.” While that’s great fodder for Dilbert, as a designer I need to speak and work with authenticity to deliver excellence. So, this requires deeply probing the identity and nature of the company and its product—what is their inner “truth”? This requires “connecting” with the purpose, as a designer, and capturing it in the form of a thematic construct of human values: trust, joy, desire, power, freedom, etc. You’ve got to feel it… and believe it. There’s a bilateral immersive engagement that shapes your perspective of why the product and company exist, and how you can move that forward.

  • Explore the potential. There is a vast array of materials at a designer’s disposal, from the tangible (color, imagery, type, animations) to the intangible (presence, interaction, workflow). Pushing this range of potentialities is necessary to break beyond any conventional thinking, see what’s afforded and available. Potential necessarily involves delving into a fragile, unfamiliar realm of “what if” and “why not,” challenging limits and implied norms that many may hold sacred, for no discernible reason.

  • Define the expression. There’s got to be some well-crafted, artfully balanced manifestation, some embodiment of all that profound exploration of strategy and vision, that you and others can grasp and hold on to as a torch to light the way, signaling a path forward with promise and conviction. Maybe it’s a mockup, a movie, a demo, a marketing campaign, whatever… And there’s admittedly a degree of theatricality and rhetorical flourish to persuade stakeholders, but those expressions become symbols that others inside and outside the company will associate with your strategy. To put it bluntly, make prototypes, not plans! The expression matters; it brings your strategy to life in an engaging manner, where followers become believers and eventually, leaders.

  • In an integrative way. Finally, it’s all got to work together beautifully—the ideas about the product, the customer, the company, the principles, team process, public brand, etc. It takes systemic thinking to connect the dots and interweave the threads of crucial, even difficult, conversations with peers, superiors, and ambassadors to ensure everyone is on board and committed, participating productively in helping your strategy come alive. This requires constant multilateral thinking, with discipline and focus, bringing those elements together effectively. I think Steve Jobs said it best when he described the journey from idea to execution:

    "Designing a product is keeping 5,000 things in your brain, these concepts, and fitting them all together in kind of continuing to push to fit them together in new and different ways to get what you want.”


Uday Gajendar

Uday Gajendar is Director of User Experience at CloudPhysics, focused on bringing beauty and soul to Big Data for virtualized datacenters.

Interaction design for the Internet of Things

Authors: Mikael Wiberg
Posted: Fri, April 11, 2014 - 3:19:02

The Internet of Things (IoT) seems to be the next big thing, no pun intended! Embedded computing in everyday objects brings with it the potential of integrating physical things into acts of computing, in loops of human-computer interactions. IoT makes things networked and accessible over the Internet. And vice versa—these physical objects not only become input modalities to the Internet but also, more fundamentally, manifest parts of the Internet. IoT is not about accessing the Internet as we know it through physical objects. It is about physical objects becoming part of the Internet, establishing an Internet of Things. Accordingly, IoT brings with it a promise to dissolve the gap between our physical and digital worlds and the potential to integrate elements of computing with just about any everyday activity, location, or object. In short, IoT brings with it a whole new playground for interaction design!

We already see good examples of how this is starting to play out in practice. Connected cars, specialized computers, and tagged objects are becoming more and more common, and the repertoire of available networked objects is growing rapidly. Industry and academia share an interest in the Internet of Things.

While the technological development in this area is indeed fascinating, from my perspective it is even more interesting to consider where this will take interaction design over the next few years. From an interaction design perspective, it is always worth exploring what a new digital material can do for us in terms of enabling new user experiences and new digital services. The IoT movement brings with it a potential not only for re-imagining traditional physical materials, making physical objects part of digital services, but also for re-thinking traditional objects as not being bound to their physical forms and current locations, functioning instead as tokens and objects in landscapes of networked digital services, objects, and experiences.

When we, as interaction designers, approach the Internet of Things, I hope we do it through a material-centered approach in which we treat the IoT not only as an application area but also, more fundamentally, as yet another new design material. With a material-centered approach, I hope that we look beyond what services we can imagine around Internet-enabled objects and instead move our focus to the re-imagination of what human-computer interaction can be about, i.e., how IoT might expand the design scope of HCI. By thinking compositionally about IoT and viewing IoT in composition with device ecologies, cloud-based services, smart materials, sensors, and so on, we move our focus from what this latest trend of technology development can do for us to how we might interact in the near future with and through just about any materials—digital or not. This is what I hope for when it comes to interaction design for and via the Internet of Things!

Posted in: on Fri, April 11, 2014 - 3:19:02

Mikael Wiberg

Mikael Wiberg is Professor of Informatics in the Department of Informatics at Umeå University, Sweden.
View All Mikael Wiberg's Posts

Post Comment

No Comments Found

It’s spring and a girl’s thoughts turn to design (and meaning)

Authors: Deborah Tatar
Posted: Fri, April 04, 2014 - 10:37:40

It’s spring. Spring for me is always associated not so much with the bulbs that turn Blacksburg into a really beautiful place, but with serious thoughts about values. Of course, there are a lot of holidays associated with spring, but mine is Passover. And renewal is associated with thoughts about the aspiration to live rightly. In my childhood, in New York, the big fall holidays, Rosh Hashanah and Yom Kippur, were about personal challenges. We turned inwards with the threat of the long, dark, cold winter ahead. But even in a non-religious family like mine, the Seder was about turning outwards.  

One dinner in particular jumps into my mind. My stepfather was on the Bicentennial Commission for New York City. This is the group that put together the celebration of the 200th anniversary of the Declaration of Independence. It was a big deal, with events of many kinds all over the city (evidently it was an effort to imagine That Beyond Manhattan). Some of the activities were vox populi and others were High Art. As I had dutifully learned in 5th grade history, New York was in fact quite central to independence before, during, and after the Revolutionary War. So at our family Seder in the spring of 1976, after the ceremony and the lesson that even we could be slaves were circumstances otherwise, Papa regaled us with stories about the ideas, decisions, and commitments, the struggles between the boroughs, the balance of activities, the political and aesthetic disagreements. After dinner, we moved to the living room of my grandparents’ apartment. My usual perch was an embroidered footstool. Eventually the talk turned to the role of art in modern America.

Ahh! This topic and the shift of venue gave my Great Uncle Harry, our usual primary raconteur, the opening he had been longing for, the chance to top the evening with the seal of profundity. He settled his comfortable paunch back into the brocaded wingback chair and fingered his cigar. Think a small Jewish man with the mannerisms of Teddy Roosevelt. This was 38 years ago, and I have lost some of the details that would make the story jump off the page, even as the lesson was impressed on me.

England, as well as the United States, was infested with virulent anti-communism in the late 1940s and 50s. Uncle Harry’s story concerned two Very Well Known Brits—neither of whom I can remember by name. One was a retired military general in the style of Bernard Shaw’s Horseback Hall. I could hear his bristling bushy moustache in the tone of the story. British, British, British. God and Country. Suspicious. Proud. Nationalistic. The other was a preeminent creative person or academic—a writer perhaps. Slightly ascetic. Sharp but diffident. Clever with words in a way that no American can ever be. (If anyone else recollects this story, please remind me who the protagonists were!) Both were dressed in black tie at some kind of formal dinner—or maybe it was even more formal, white tie. 

In the course of political discussion, the general turned to the writer—as Uncle Harry told this, Teddy Roosevelt appeared in him most clearly; he threw out his chest and looked down his nose—and said in tones of opprobrium, “And what did you do during the War?” (Meaning, as an American in the 1970s would, the Second World War.)

And the writer replied—Uncle Harry’s eyebrows went up slightly; his voice stayed mild and quiet; he looked askance, as he assumed his imitation Oxbridge accent—“I was doing the things that you were fighting to protect.” 

That was it. The writer, the artist, the intellectual “was doing the things that you were fighting to protect.” In that phrase, we had the assertion of the role of art and intellect: intrinsic to quality of life and to freedom, and a force for meaning in a difficult world.

I hope that the layers of this story as I tell it to you—the concept of celebrating the American revolution, the reenactment of the flight of the Jews from slavery, my family’s interpretation in the mid 1970s through an imagined connection to British thought, my own processing and recollection so many years later—give that message about values a kind of deeply lacquered frame. 

The intellectual, the artist, the writer was able to claim that he was doing things worth fighting to protect. We fought to protect art and ideas, to preserve justice. To enact a vision of a more equitable world. 

In that world, design was a half-step behind art, shadowed by it, but intensely tied to meaning, both political and personal. Raymond Loewy was already represented in the collection of the Museum of Modern Art. And indeed so was one of the first things I ever purchased with money I earned myself: an Olivetti Valentine portable typewriter. Of course, designers have clients. Did I say that they have clients? They have clients. They have clients. They have clients. But they also have vision, and that vision is something that can be talked about and even disputed.

My moral for this spring is that designers are, or should be, something more than client-fulfillment centers.

Posted in: on Fri, April 04, 2014 - 10:37:40

Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.
View All Deborah Tatar's Posts

Post Comment

No Comments Found

Preaching to the choir, just when you thought it was safe

Authors: Monica Granfield
Posted: Wed, April 02, 2014 - 10:41:12

UX has made great strides within the mainstream IT and software community. Hard work, education, and demonstrated return on investment have all contributed to the growth of the UX design discipline over the past 10 years. UX (with which I am bundling research) is now gaining traction and opening doors in a wide variety of industries, from healthcare to robotics. New avenues such as customer experience are opening even more doors for our discipline. This is a very exciting time for UX and CX! However, with opportunity comes challenge, and there is still a fair amount of work to do out there.

The industry is changing as software moves off the desktop and out into every type of electronic device imaginable, creating new ecosystems, new experiences, and exciting new challenges. With these developments, UX often finds itself back at the starting gate, most commonly playing defense while vying for a chance on offense. It seems UX needs to proactively broaden its reach, educating and building awareness within these new industries.

The playbook is the same, just with a new team. After a recent move into the uncharted territory of a new industry and attending UX-related events, I realized that I was not the only one facing these challenges. I had thought, as have many others I have spoken with since, that UX was more widely understood by now. This has me thinking: Maybe we are all preaching to the choir. Maybe the UX industry needs to break out of our comfort zone and start spreading the word at other industries’ professional events. Reaching into new industries could open our playbooks and allow these industries to gain awareness and knowledge of UX outside of the political arena of the workplace.

I am curious whether anyone out there has been representing UX at other professional meetings and conferences. Are there any UX talks happening at IEEE or Business Professionals of America? Yes, being in the trenches educating your team and your organization may be the best ground-up approach, but branching out to present to the disciplines we most often collaborate with, within their comfort zone, might gain even greater traction. Internal grassroots efforts can be an uphill climb, as a team or as an individual. Building momentum and awareness of UX as a discipline within other disciplines could be a game changer for us at the professional level. Who knows how other disciplines might receive the UX message? If other disciplines want to participate in creating the best user experience, this might be one route to success.

At the education level, momentum is building. The d.school at Stanford, hatched out of the School of Engineering in 2005, has begun educating students on the value and application of design collaboration, to create “innovators.” Bringing more awareness of UX design to engineering is also important. I have heard of efforts such as Jared Spool teaching UX courses in the Graduate Management Engineering program at Tufts University’s Gordon Institute. Many undergraduate universities now offer UX classes within the software engineering curriculum. These classes are key to setting the stage for the next wave of technical talent coming out of university, who will be able to understand, value, and collaborate around the use of UX design in creation and innovation.

Universities presenting more opportunities for cross-pollination and collaboration between design, engineering, and business may help break down departmental barriers in the future. Today, creating the opportunity for design and research to truly become innovators, especially within new domains, is still a challenge. I have heard the argument that if designers want to participate in design strategy to address the business, they should become business strategists, and that is for MBAs. However, as most UX professionals know, we are not claiming to be business strategists. Yet our insights and offerings do overlap with business strategy, and this is a lesser-known use of design, as opposed to overlapping with product development or engineering. We are mediators of how these disciplines contribute to the fruition of the resulting user experience, and that word needs to reach a world of professionals who are heads down, working off of what they know. Current grads are getting some exposure and cross-pollination; however, it will be some time before they are in the top ranks championing the next generation of technology or customer experiences. Therefore, it is up to the design community today to reach out, reach over, and continue to break down the barriers and open minds outside of traditional software.

Posted in: on Wed, April 02, 2014 - 10:41:12

Monica Granfield

Monica Granfield is a user experience designer at Go Design LLC.
View All Monica Granfield's Posts

Post Comment

@Richard Anderson (2014 04 15)

Speaking of Stanford, Medicine X—“the world’s premier patient-centered conference on emerging technology and medicine”—is largely about human-centered design and the patient experience. This is a fabulous conference that brings together a wide range of healthcare professionals, designers, and patients. In my view, this kind of conference is better than a conference slanted toward one profession at which one or two UX/CX people speak.

Swarms and tribes

Authors: Jonathan Grudin
Posted: Mon, March 31, 2014 - 11:03:23

A crack team led by Deputy Marshal Samuel Gerard (Tommy Lee Jones) races about in hot pursuit of Harrison Ford’s fugitive, Dr. Richard Kimble. Gerard finds one of his men standing motionless.
Gerard: “Newman, what are you doing?!”
Newman: “I’m thinking.”
Gerard stares. “Well, think me up a cup of coffee and a chocolate doughnut with some of those little sprinkles on top, while you’re thinking.” Walks away.
—The Fugitive (Warner Bros., 1993)

Ant colonies

Ants scurry about in a frenetic mix of random and directed activity. They gather construction materials, water, and food, and respond to threats. Ants are also busy underground, extending a complex nest, caring for the queen and her eggs, and handling retrieved materials. Decomposing leaves are artfully placed in underground chambers to heat the structure and circulate air through the passages; leaves can also be a source of food, directly or through fungus farming. Ants captured from other colonies are put to work. Foragers that are unsuccessful, even when only due to bad luck, shift to other tasks. Remarkable navigational capabilities enable ants to find short paths home and avoid fatal dehydration. Arboreal species race to attack anything that brushes their tree. Their relentless activity is genetically programmed: There are no ant academies.

Ant programming isn’t perfectly adapted to this modern world. Fire ants invading my Texas condo marched single file by the thousands into refrigerators or air conditioning units, where they were frozen or fried, sometimes shorting out an AC box. Humans rarely exhibit such unreflective behavior. Doomed military offensives such as the Charge of the Light Brigade prompt us to ask whether soldiers should sometimes question orders.

The health of the ant colony relies on the absence of reflection. It would bode ill if individual ants began questioning their genetic predispositions. “The pheromones signal something tasty that way, maybe a doughnut with sprinkles, but I don’t like the looks of that path,” or “Maybe we could come up with a better air circulation system, let’s have a committee draw up a report.” Ants don’t think, but they’re doing OK. They outnumber us. If, as seems plausible, ants are here when we’re gone, our capability for reflection could be called into question, should any creatures be around that ask questions. The ants won’t [1].

According to my favorite source, a single ant supercolony comprising billions of workers was found in 2002, stretching along the coasts of southern Europe. In 2009 this colony was found to have branches in Japan and California, no doubt enabled by our transportation systems: a global megacolony. Does it have an imperialistic plan to displace rival ant supercolonies? No, each ant follows its genetic blueprint.

We’re globalizing, too. Not long ago Homo sapiens appeared to have two supercolonies, but the bonds holding them together were less enduring than ant colony bonds. Nevertheless, we are forming larger, globally distributed workgroups. We may yet become a global megacolony. If we don’t, ants may inherit the earth sooner rather than later.

The human colony

Looking back a few thousand years, a small tribe couldn’t afford to lose many members through random behavior. If Uncle Og headed down that path and did not return, let’s think twice about going that way alone! When ants stream to their deaths, lured by false pheromone signals triggered by appliances, the colony has more where they came from. In contrast, our ability to analyze and reason enabled us to spread across the planet in small groups. A century ago we were still overwhelmingly rural—isolated and often besieged. Information sharing was limited. Each community worked out most stuff for itself. Reflection was valuable.

How are the benefits and the opportunity costs of cogitation affected as the Web connects us into supercolonies? Given a wealth of accessible information, is my time better spent searching or thinking? Tools make it easier to conduct studies; is it better to ponder the results of one, or use the time to do another study? Cut new leaves or rearrange those brought in yesterday?

Many research papers represent about three months’ work, with students or interns doing much of it. After publishing three related papers, will I contribute more by spending six months writing a deep, extensive article and carefully planning my future research—or by cranking out additional studies and two more papers?

We are shifting to the latter. Journals, handbooks, and monographs are in decline. Conferences and arXiv thrive. Arguably, we know what we are doing or our behavior is being shaped appropriately. The colony may be large and connected enough to thrive if we scurry about, cutting and hauling leaves without long pauses to reflect. Beneficial chance juxtapositions of results will simulate reflection, just as the frenzied instinct-driven construction of an ant nest appears from the outside to be a product of reflective design. The large colony requires food. For us, as for the ants, there are so many leaves, so little time.

Shifting our metaphoric social insect, the largest social networking colony is the IBM Beehive compendium: twenty-something research papers scattered across several conference series. No survey or monograph ties the studies together. I lobbied the authors to write one, but they were heads down collecting more pollen, which was rewarded by their management and the research community.

Working on a handbook chapter, I did it for them, tracking down the studies, reviewing them, and trying to convert the pollen into honey. It was hard to stitch the papers together. For example, the month and year of some work were not stated, and publication date is not definitive in a field marked by rejection and resubmission. In a rapidly evolving domain, knowing the sequence would help.

In retrospect, they were probably right about where to invest time. I found a few higher-level patterns and overarching insights, but few will take note when the handbook chapter is published next month. Social networking behaviors have moved on. The Beehive has been abandoned, the bees have flown elsewhere, leaving behind work that is now mainly of historical value, although bits and pieces will spark connections or confirm biases and be cited. From the perspective of my employer, the field, and intellectual progress, my time could have been better spent on a couple more studies.

It is ultimately a question of the utility of concentrated thought. How might we find objective evidence that scholarship is useful in this century? I’m sentimentally drawn to it, but the effort required to become a scholar might be more usefully channeled into other pursuits. The colony would collapse if ants spent time contemplating whether or not to blindly follow pheromones. Through frenetic activity they build a beautiful structure and the colony thrives. Is life in our emerging megacolony or swarm different? Race around, accept that bad luck will sideline many, and plausibly we will thrive. If an occasional false pheromone lures a stream of researchers to a sorry fate, there will be more where they came from!

The tribe and the swarm

Consciously or unconsciously, we’re choosing. Fifteen years ago, an MIT drama professor told me that with the digital availability of multiple performances, students who analyze a performance in detail do not do well. Better to view and contrast multiple performances, spending less time on each. Other examples:

  • In an earlier era, if one of five people engaged in similar work performed exceptionally well, the tribe benefited by bringing them together so that the other four could learn from the fifth. Today, it may be more efficient for a large organization to let the four flounder, social-insect style. Successes can be shared with people working on other tasks; enough will connect to make progress. In other words, the 80% conference rejection rates that were a bad idea when the community was smaller may now be viable. The community-building niche once served by conferences may be unnecessary.

    Many senior researchers disdain work-in-progress conferences—they want strong 20%-acceptance pheromone trails. If less-skilled colleagues who rely on lower-tier venues perish through lack of guidance, no matter. The as-yet-unproven hypothesis is that the research colony will thrive without the emotional glue that holds together a community.

  • When more effort was required to plan, conduct, and write up a study, it made sense to nurture and iterate on work in progress. With high rejection rates and an inherently capricious review process, researchers today shotgun submissions, buying several lottery tickets to boost the odds of holding one winner. Rejected papers may be resubmitted once, then abandoned if rejected again. Not all ants make it back to the nest, but when those that return carry a big prize, the colony thrives.

  • Any faculty member who mentors a couple of successful students has trained an eventual replacement. In the past, this could mean working closely with one graduate student at a time. Today, many faculty have small armies of students, most of whom anticipate research careers. “Is this sustainable?” I asked one. “It’s a Ponzi,” he replied cheerfully. Not all students will attain their goals. In a tribe this could be a major source of discontent and trouble. Swarms are different: Foragers who fail, even when due to bad luck, take on different tasks.
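
The lottery-ticket arithmetic behind shotgunning submissions is easy to make concrete. A minimal sketch, assuming each submission faces an independent, fixed 20% acceptance rate (a deliberate simplification; real reviews are neither independent nor uniform):

```python
# Probability that at least one of n submissions is accepted,
# assuming each faces an independent, fixed acceptance rate.
def p_at_least_one_accept(n_submissions: int, accept_rate: float = 0.20) -> float:
    return 1 - (1 - accept_rate) ** n_submissions

# With a 20% rate, three lottery tickets bring the odds near a coin flip.
for n in (1, 2, 3, 5):
    print(f"{n} submissions -> {p_at_least_one_accept(n):.0%}")
```

Under these assumptions, one ticket wins 20% of the time, three win 49%, and five win 67%, which is why buying several tickets looks rational even when each individual paper is likely to perish.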

The ghost in the machine

Efficiencies that govern swarm behavior may now apply to us, but there is a complication. Our programming isn’t perfectly adapted to this modern world. Our genetic code is based on the needs of the tribe. Until natural selection eliminates urges to reflect, feelings of concern for individual community members, and unhappiness over random personal misfortune, there will be conflict and inefficiency. In 1967, Ryle’s concept of the ghost in the machine was applied by Arthur Koestler to describe maladaptive aspects of our genetic heritage. The mismatch grows.

If on a quick read this is not fully convincing, you could spend some time reflecting on it, but it may be wiser to return to working on your next design, your next conference submission, and your next reviewing assignment.

1. Not all ant species exhibit all these behaviors. Some ants are programmed for rudimentary “learning,” such as following another ant or shifting from unsuccessful foraging to brood care.

Thanks to Clayton Lewis for discussions and comments.

Posted in: on Mon, March 31, 2014 - 11:03:23

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.
View All Jonathan Grudin's Posts

Post Comment

@Paul Resnick (2014 04 02)

I think you’d like the Collective intelligence conference, Jonathan.  Not only are there presentations reflecting on models of collective intelligence for ant colonies, but there are no archival proceedings, and a very high acceptance rate. Hope to see you there!

Designer’s toolkit: A primer on using video in research

Authors: Lauren Chapman Ruiz
Posted: Wed, March 19, 2014 - 1:00:17

In our last post, we explored a variety of methods for capturing user research. Yet a question lingered: How can you effectively use video in your research without influencing the participants?

Here are some tips and tricks to minimize the impact of using video in research engagements. Keep in mind, these tips are focused on conducting research in North America—the rules of engagement will vary based on where you are around the world.

Be transparent

If you’re using a recruiter, ensure they let participants know that they will be video recorded. Usually you or your recruiter develop a screener—a set of questions that are used to determine whether a potential participant is qualified for a study. As part of the screener introduction, you should include the intention to video record the research visit, which gives your participant time to express any concerns. You don’t want the participant to be surprised when you pull out a video camera, especially if the topic being discussed is very personal.

Minimize equipment 

With the improvements in technology, you shouldn’t be using a large video camera. Keep the recorder as small as possible, and use a tabletop tripod. (I recommend a Joby Gorillapod due to its flexibility.) Placing your equipment on a table allows you to keep the camera discreetly in the background while allowing easy pickup and placement if necessary.

Prep everything beforehand 

Nothing calls more attention to video than technical difficulties. You don’t want to be fiddling with the camera, or checking it during an interview. Make sure all batteries are fully charged, and bring spares. Make sure your memory card has enough space. Either have the small tripod on the camera beforehand, or make sure it’s ready to install. If you run into a problem in the middle of the interview, ignore the issue and just focus on the engagement—this is what’s critical—and always have that pen and paper handy.

Ease participants in while building rapport 

As you are introducing the research, remind participants that it will be video recorded, review what the video will be used for, and assure them of the purpose. If you’re recording only for note-taking purposes, explain that you simply can’t remember everything that’s said, and video allows you to go back and verify information. Remind them that they are in control of what is or isn’t recorded, and that it can be stopped at any time. If the expectation of your video recording has been communicated correctly during recruitment, this information shouldn’t be a surprise, and your interviews should progress smoothly. This is when you can also request signed consent for the recording regarding when, where, and how the footage will be used. How you do this will vary based on what you plan to do with the footage.

Include the participant in the process of setup

Part of building rapport with participants is allowing them to see you set up your equipment—let them see where you place the camera and how you set it up to capture a clear angle. Spend the time necessary for your participant to become comfortable with the equipment, and answer any questions.

Use humor to help dissipate discomfort 

Oftentimes humor can help to ease any nervousness about being recorded. As you build rapport, be lighthearted about the fact that you’re recording. A light chuckle can help to relax the situation.

Consider the audio quality

If you’re making a high-quality video reel for your stakeholders, you may need to ask your participant to wear a microphone. This will do far more to guarantee quality audio than a directional microphone or the camera’s built-in capabilities. In these situations, participants should be made aware of the setup during recruitment so there are no surprises.

Keep the focus away from the camera

Once you’ve finished your setup and you start recording, try to forget the fact that a camera is on. Still write down notes—this distracts attention from the camera, and gives you a backup of what was said and when, which is helpful when you go back to the video later. If your participant gets up and takes you on a tour, pick the camera up and hold it against your body, low enough that it’s not very noticeable. This will keep the footage steady and the camera out of the way. If the camera is treated as a casual aside, attention will be drawn away from the fact that it’s recording.

If unsure, ask for permission

During your interview, if you’re not sure whether you can record something in particular, always ask your participant for permission first. Your participant might be nervous to show you particular information while you’re recording, but by asking for permission first, this reminds them that they have control of the situation. Yes, this draws attention back to the fact that you’re recording, but when your participant feels in control of what is being captured, this will build confidence and trust in allowing you to continue recording.

Be willing to show an example

In some situations, participants have felt reassured when they see the quality of the video that is captured. If video quality isn’t essential to you, showing your participant that the footage captured is low-res can increase trust to record potentially sensitive visuals.

Stop any time 

Don’t be afraid to turn off the video camera if it’s clearly a distraction or is preventing the participant from being open and honest. Make sure they know you can turn it off if they are uncomfortable. Just always be ready to switch to an alternative capturing method.

Wrapping up

If you feel your participant may have been nervous to say something due to the camera, at the end you can always turn the camera off, and ask if they have anything else they’d like to share.

Now that you have hours of video footage, what do you do with it? Based on the type of consent you gathered, there are a variety of outputs you can use the footage for.

I recommend a quick-and-dirty highlights reel of key findings. You can save time if you’re diligent about taking written notes of key moments, or marking down the timestamp if your note-taker is sitting back with the camera. Cut these key moments out with a basic program, such as QuickTime or iMovie, for easier compilation. A good length for a highlights reel is about five minutes—if it’s too long, you will lose your audience’s attention.
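
If you prefer the command line, the same clipping can be scripted. A minimal sketch that turns time-stamped notes into ffmpeg clip-extraction commands; it assumes ffmpeg is installed, and the filenames and timestamps are illustrative only:

```python
# Hypothetical helper: build ffmpeg commands that copy key moments
# out of an interview recording, without re-encoding (-c copy).
def clip_command(source, start, end, out):
    """Return an ffmpeg command extracting [start, end] of source into out."""
    return ["ffmpeg", "-i", source, "-ss", start, "-to", end, "-c", "copy", out]

# Key moments noted during the interview (illustrative timestamps).
moments = [("00:04:10", "00:04:55", "clip01.mp4"),
           ("00:17:30", "00:18:05", "clip02.mp4")]

for start, end, out in moments:
    cmd = clip_command("interview.mp4", start, end, out)
    print(" ".join(cmd))  # inspect, or execute with subprocess.run(cmd)
```

Stream copy keeps the extraction fast and the source quality intact, which is all a quick highlights reel needs.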

You now have footage that you can reference if something wasn’t clear to you, or if you need to verify a vague memory. If a stakeholder challenges an insight you’ve presented, you can reference the video as evidence. The footage can also be provided to your stakeholders as raw data with your synthesis. It’s footage they invested in, and it can be kept for any future needs.

We’ve gone through various methods for capturing research, and focused on how to leverage video without disrupting your participant engagement. As technologies advance, we can limit the appearance of being recorded—imagine recording research with Google Glass—but we always need to ask: What is the risk that we might alter participant behavior?

What do you think?

I’d be thrilled to hear about how you decide to approach capturing research—what tools do you love to use? What tips or tricks do you have to put participants at ease? Have you tried anything new or did something surprise you?

Posted in: on Wed, March 19, 2014 - 1:00:17

Lauren Chapman Ruiz

Lauren Chapman Ruiz is a Senior Interaction Designer at Cooper in San Francisco, CA, and is an adjunct faculty member at CCA.

Theory weary

Authors: Jonathan Grudin
Posted: Fri, March 14, 2014 - 10:09:08

Theory weary, theory leery,
why can't I be theory cheery?
I often try out little bits
wheresoever they might fit.
(Affordances are very pliable,
though what they add is quite deniable.)
The sages call this bricolage,
the promiscuous prefer menage...
A savage, I, my mind's pragmatic
I'll keep what's good, discard dogmatic…

—Thomas Erickson, November 2000
"Theory Theory: A Designer's View" (sixty-line poem)

An attentive reader of my blog posts on bias and reverse engineering might have noticed my skirmishes against the role of theory in human-computer interaction. I’m losing that war.

Appeals to theory are more common in some fields than others: CHI has them, CSCW has more, and UIST has few. I’m writing on the plane back from CSCW 2014, where we saw many hypotheses confirmed and much theory supported. In one session I was even accused of committing theory myself, undermining my self-image of being data-driven and incapable of theorizing on the rare occasions that I might like to.

One author presented a paper informed by Homophily Theory. He reported that it might also, or instead, be informed by Social Identity Theory. After reading up on both, he couldn’t tell them apart. So he settled on Homophily Theory, which he explained meant “birds of a feather flock together.” It was on the slide.

When I was growing up expecting to become a theoretical physicist, “birds of a feather flock together” was not considered a theory; it was a proverb, like “opposites attract.” I collected proverbs that contradicted each other, enabling me to speak knowingly in any situation. Today, “opposites attract” could be called Heterophily Theory, or perhaps Social Identity-Crisis Theory.

In the CSCW 2014 proceedings are venerable entries, such as Actor-Network Theory and Activity Theory. The former was recharacterized as an “ontology” by a founder; the latter evolved and is considered an “approach” by some advocates, but we don’t get into them deeply enough for this to matter. Grounded Theory is popular; it covers a few methodologies, some of which enable a researcher to postpone claiming to have a theory for as long as possible, ideally forever. But some papers now include an “Implications for Theory” section; as with “Implications for Design” in days of old, some reviewers get grumpy when a paper doesn’t have such a section. With CSCW acceptance rates again down to around 25%, despite a revision cycle, authors can’t afford to have grumpy reviewers.

CSCW citations also include broad theories, such as Anthropological Theory, Communication Theory, Critical Theory, Fieldwork for Design Theory, Game Theory, Group Dynamics Theory, Organizational Science Theory, Personality Theory, Rhetorical Theory, Social Theory, Sociology of Education Theory, and Statistical Mechanics Theory. (These are all in the proceedings.) Theory of Craft is likely broad (I didn’t look into it), but Theory of the Avatar sounds specific (didn’t check it out either).

The Homophily/Social Identity team did not get into Common Identity Theory or Common Bond Theory, but other authors did. I could explain the differences, but I don’t have enough proverbs to characterize them succinctly. With enough time one could sort out Labor Theory of Value, Subjective Theory of Value, Induced Value Theory, and (Schwartz’s) Value Theory. All are in CSCW 2014, though not always explained in depth. So are Resource Exchange Theory, Social Exchange Theory, Socialization Theory, Group Socialization Theory, Theory of Normative Social Behavior, and Focus Theory of Normative Conduct.

We also find models—Norm Activation Model, Urban Gravity Model (don’t ask), Model of Personal Computer Utilization, and Technology Acceptance Model. The last has a convenient acronym, TAM, giving it an advantage over the related Adoption Theory, Diffusion of Innovations Theory, and Model of Personal Computer Utilization: an Adoption Theory acronym would risk confusion with Activity Theory and Anthropological Theory, and who wants to be called DIT or MPCU? Actor-Network Theory has a pretty cool acronym, as does Organizational Accident Theory—both acronyms are used.

Although they don’t have theory in their names, Distributed Cognition (DCog) and Situated Action are popular. Alonso Vera and Herb Simon described Situated Action as a “congeries of theoretical views.” Perhaps in our field anything with theory in its name isn’t really a theory.

Remix Theory and Deliberative Democratic Theory sound intriguing. They piqued my interest more than Communication Privacy Management Theory or Uses and Gratifications Theory. The latter two might encompass threads of my work, so perhaps I should be uneasy about overlooking them.

The beat goes on: Document Theory, Equity Theory, Theory of Planned Behavior (TPB). CSCW apparently never met a theory it didn’t cite. There is also citation of the enigmatically named CTheory journal. What does the C stand for? Culture? Code? Confusion?

Graduate students, if your committee insists that you find another theory out there to import and make your own, find an unclaimed proverb, give it an impressive name, and they’ll be happy. Practitioners, what are you waiting for, come to our conferences for clarity and enlightenment!


Postscript: This good-natured tease has a subtext. Researchers who start with hypotheses drawn from authoritative-sounding “theory” can be susceptible to confirmation bias or miss more interesting aspects of the phenomena they study. Researchers who find insightful patterns in solid descriptive observations may suffer when they are pressured to conform to an existing “theory” or invent a new one.

Thanks to Scott Klemmer for initiating this discussion, and to John King and Tom Erickson for comments.

Posted in: on Fri, March 14, 2014 - 10:09:08

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.

What serendipity is providing for me to read

Authors: Richard Anderson
Posted: Thu, March 13, 2014 - 12:48:54

In the spirit of the new What Are You Reading? articles that appear within Interactions magazine…

My use of Twitter and my attending local professional events have had a big impact on what I'm reading. Indeed, both have increased my reading greatly.

Every day I spend at least a few minutes on Twitter—time which often surfaces an abundance of online reading riches. You can get a sense of what comprises this reading by taking a look at my tweet stream, since I often tweet or retweet about compelling readings I learn about via Twitter. A few recent examples:

  • The Unexpected Benefits of Rapid Prototyping. In this Harvard Business Review blog post, Roger Martin (former Dean of the Rotman School of Management at the University of Toronto) describes how the process of rapid prototyping can improve the relationship between designers and their clients. Roger and a colleague wrote about the importance of designing this critical relationship in a piece published in Interactions when I was its Co-Editor-in-Chief. This blog post extends that article.

  • Cleveland Clinic's Patient Satisfaction Strategy: A Millennial-Friendly Experience Overhaul. Here, Micah Solomon describes one of the ways one healthcare organization is improving the patient experience. The Cleveland Clinic was the first major healthcare organization to appoint a Chief Experience Officer, a role for which many experience designers and experience design managers have advocated for years for all sorts of organizations. This blog post reveals the role continues to have an impact in an industry not well known for being patient-centric.

  • Some of the blog posts written for Interactions magazine. Too few people know about these posts, as they are somewhat hidden away and don't all receive (individual) promotion via Twitter. But some are excellent. I've been most impressed by those authored by Jonathan Grudin (e.g., Metablog: The Decline of Discussion) and those authored by Aaron Marcus (e.g., My Apple Was a Lemon). A guy named Richard Anderson occasionally has a couple of worthwhile things to say here as well. ; )

  • The Essential Secret to Successful User Experience Design. Here, Paul Boag echoes something that I've written about for Interactions (see Are You Trying to Solve the Right Problem?)—something Don Norman has been emphasizing of late in several of his speaking engagements: 

    Essential, indeed.

  • Epatients: The hackers of the healthcare world. This excellent post from 2012 shows how Twitter users don't always focus on the new. Here, Fred Trotter describes and provides advice for becoming a type of patient that healthcare designers need to learn from, as I described in another piece I wrote for Interactions (see Learning from ePatient( Scholar)s).

Local events I attend sometimes feature authors of books, and sometimes those books are given away to attendees. I've been fortunate to have attended many recent events at which that has happened.

Lithium hosts a series of presentations by or conversations with noted authors about their books in San Francisco. Free books I received because of this series:

  • What's the Future of Business? Changing the Way Businesses Create Experiences. This book by digital media analyst Brian Solis alerts businesses to the importance of designing experiences. I've found the book a bit challenging to read, but its message and words of guidance to businesses are important to experience designers. 

  • Your Network is Your Net Worth: Unlock the Hidden Power of Connections for Wealth, Success, and Happiness in the Digital Age. I think I'm pretty well-connected as it is, but I'm finding this book by Porter Gale to be of value. You might, as well.

  • Crossing the Chasm (3rd edition). Attending Lithium's conversation with Geoffrey Moore about the updated edition of his classic book was well worth the time, as I suspect will be true of reading the book. I should have read the 1st or 2nd edition; now I can catch up.

I attend numerous events at Stanford University. A recent event there featured Don Norman talking about his new edition of The Design of Everyday Things. I loved the original (when it was titled The Psychology of Everyday Things), and shortly after this event, Don sent a copy of the new edition to me. It included the kind inscription: "To Richard—Friend, colleague, and the best moderator ever." (I've interviewed Don on stage several times, once transcribed for an Interactions article; see also the partial transcript and video of the most recent interview, with Jon Kolko.) I'm looking forward to reading this new edition and to interviewing him on stage again.

Carbon Five hosts public events every so often in San Francisco. Authors of three books were featured recently (two of which were given away):

  • The Lean Entrepreneur: How Visionaries Create Products, Innovate with New Ventures, and Disrupt Markets. Authors Brant Cooper and Patrick Vlaskovits join the many now touting lean in this book about starting or evolving businesses. This is a valuable read, given that designers are increasingly playing key roles in these activities.

  • Loyalty 3.0: How to Revolutionize Customer and Employee Engagement with Big Data and Gamification. Here, Rajat Paharia, founder of Bunchball, offers a book that should be of great interest to experience designers. I've found the book to be too formulaic in structure and presentation, but...

  • Rise of the DEO: Leadership by Design. I enjoyed the on-stage interview of authors Maria Giudice and Christopher Ireland so much that I purchased this book, which proved to also be too formulaic for my tastes. Yet, given the increasing importance of design-oriented leaders in executive offices...