Blogs

The rise of incompetence


Authors: Jonathan Grudin
Posted: Thu, December 11, 2014 - 10:08:27

“To become more than a sergeant? I don't consider it. I am a good sergeant; I might easily make a bad captain, and certainly an even worse general. One knows from experience.”
   — from Minna von Barnhelm, by Gotthold Ephraim Lessing (1729–1781)

“There is nothing more common than to hear of men losing their energy on being raised to a higher position, to which they do not feel themselves equal.”
    — Carl von Clausewitz (1780–1831)

"All public employees should be demoted to their immediately lower level, as they have been promoted until turning incompetent."
   — José Ortega y Gasset (1883–1955)

“In a hierarchy individuals tend to rise to their levels of incompetence.”
   — Laurence Peter (1919–1990)

We should be enjoying a golden age of competence. We have easy access to so much information. YouTube videos show us how to do almost anything. Typing a question into a search engine very often retrieves helpful answers. We see impressive achievements: Automobiles run more efficiently and last longer, air travel grows steadily safer, and worldwide distribution of a wide variety of products is efficient. Nevertheless, there is a sense that overall, the world isn’t running that smoothly. Governments seem inept. One industrial sector after another exhibits bad service, accidents, inefficiencies, and disastrous decisions. The financiers whose ruinous actions led to worldwide recession and unemployment didn’t even lose their jobs. In HCI, many nod when Don Norman says “UI is getting worse—all over.” How could incompetence be on the rise when knowledge and tools proliferate?

The last of the four opening quotations, known as the Peter Principle, was introduced in the 1969 best-selling book of the same name. The other writers noted that people are promoted to their levels of incompetence; Peter went further, explaining why organizations keep incompetent managers and how they avoid serious harm. I will summarize his points later, but for now join me in a thought exercise:

Assume the Peter Principle was true in 1969. How are technology and societal changes affecting it?

There are several reasons to believe that managerial incompetence is escalating, despite the greater capability of those who are competent—who, in Peter’s words, “have not yet reached their levels of incompetence.”

Strengthening incompetence

1.  Competent people are promoted more rapidly today. Thus, even if well-trained, they can reach their levels of incompetence more quickly. In the rigid hierarchical organizations of the past, promotions were usually internal and often within a group. Few employees had to wait the 62 years and counting that Prince Charles has for his promotion, but wait they did. Today, with the visibility that technologies enable, competent employees can easily find suitable openings at the next higher level in the same or a different organization. Organizational loyalty is passé. A software developer joins a competitor, an assistant professor jumps to a university that offers immediate tenure, a full professor is lured away by a center directorship or deanship. The quickest way to advance in an organization can be to take a higher position elsewhere and return later at the higher level. LinkedIn reduces the friction in upward trajectories.

2. Successful organizations grow more rapidly than they once did, creating a managerial vacuum that sucks people upward. Enterprises once started locally and grew slowly. Mass media and the Internet enable explosive growth, with technology companies as prime examples. As a project ramps up and adds team members, experienced workers are incented and pressured to move up a management ladder that can quickly grow to 8 or 10 rungs. A person can plateau at his or her level of incompetence while very young.

3. The end of mandatory retirement extends the time that employees can work at their levels of incompetence. In 1969, Peter’s great teacher who became an incompetent principal probably had to retire at 65. Today he could have a decade of poor performance ahead.

4. The decline of class systems and other forms of discrimination is terrific, but egalitarian systems are less efficient if everyone progresses to their level of incompetence, whereas competent employees trapped beneath a class boundary or a glass ceiling are ineligible for promotion and thus fail to achieve incompetence. In the 1960s, many women found job opportunities only in teaching, nursing, and secretarial work. Accordingly, there were many extraordinarily capable teachers, nurses, and secretaries. I benefited from this indefensible discrimination in school and my father benefited from it in his job. (If this argument seems alarming, read to the end!)

5. Increased job complexity is a barrier to achieving and maintaining competence. As the tools, information, and communication skills required for a job increase, someone promoted into the position is less likely to handle it well. The pace of change introduces another problem: A competent worker could once count on remaining competent, but now many skills become obsolete. “Life-long learning” isn’t a cheerful concept to someone who was happy to finish school 30 years ago.

Accept the premise of the Peter Principle and these are grounds for concern. But you may be thinking, “The Peter Principle is oversimplified, competence isn’t binary, lots of us including me haven’t reached our levels of incompetence and don’t plan to.” Peter would disagree and insist that you are on a path to your level of incompetence, if you haven’t reached that destination already. I will summarize Peter’s case, but first let’s consider another possibility: Do other changes wrought by technology and society undermine the Peter Principle? The answer is yes.

Weakening the Peter Principle

1. Technology has so weakened hierarchy in many places that it’s difficult to realize how strong hierarchy once was. Peter christened his work “hierarchiology” because flat organizations are not built on promotions. The ascent at the heart of his principle is almost inevitable in rigid hierarchies where most knowledge of a group’s functioning is restricted to the group. I worked in places where initiating a work-related discussion outside the immediate team without prior managerial approval was unthinkable. Memos were sent up the management chain and down to a distant recipient; the response traveled the same way. The efficiency and especially the ambiguous formality of email broke this. A telephone call or knock on the door requires an immediate response; an email message can be ignored if the recipient considers it inappropriate to circumvent hierarchy. Studies in the 1980s showed that although most email was within-group, a significant amount bypassed hierarchy. Hierarchy is not gone, but it continues to erode within organizations and more broadly: Dress codes disappear, children address adults by first name, merged families have complex structures, executives respond directly to employee email, and everyone tweets.

2. Hierarchy benefits from an aura of mystery around managers and leaders. Increased transparency weakens this. In hierarchical societies, rulers tied themselves to gods. Celebrities and the families of U.S. Presidents once took on a quasi-royalty status. In The Soul of a New Machine, the enigmatic manager West was held in awe by his team. Not so common anymore. Leaders and managers are under a media microscope, their flaws and foibles exposed [1]. When managerial incompetence is visible, tolerating it to preserve stability and confidence in the hierarchy is more challenging. In addition, internal digital communication hampers an important managerial function: reframing information that comes down from upper management so that your unit understands and accepts it. The ease of digital forwarding makes it easier to pass messages on verbatim, and risky to do otherwise because a manager’s “spin” can be exposed by comparison with other versions.

3. When organizations are rapidly acquired, merged, broken up, or shut down, as happens often these days, employees have less time to reach their levels of incompetence. Unless brought in at too high a level, they may perform competently through much of their employment.

And the winner is…

…hard to judge definitively. We lack competence metrics. People say that good help is harder to find and feel that incompetence is winning, perhaps because we expect more, promote too rapidly, or keep people around too long. But could it be that only perceived incompetence is on the rise? Greater visibility and media scrutiny that reveal flawed decisions could pierce a chimera of excellence that we colluded in maintaining because we wanted to believe that capable hands were at the helm.

Despite these caveats, I believe that managerial incompetence is accelerating, aided by technology and benign social changes that level some parts of the playing field. Two of the three counterforces rely on weakened hierarchy, but hierarchical organization remains omnipresent and strong enough to trigger hierarchy-preserving maneuvers at the expense of competence, as summarized below.

Part II: Hierarchy considered unnatural

Peter’s “new science of hierarchiology” posits dynamics of levels and promotions. Archaeology and history [2] reveal that when hunter-gatherers became food-sufficient, extraordinarily hierarchical societies evolved with remarkable speed: Egypt and Mexico, China and Peru, Rome and Japan, England and France. Patterns of dysfunction often arose, but hierarchy persevered. Our genes were selected for small-group interaction; large groups gravitate to hierarchy for social control and efficient functioning. Hence the universality of hierarchy in armies, religions, governments, and large organizations.

As emphasized by Masanao Toda and others, we evolved to thrive in relatively flat, close-knit social organizations where activity unfolded in front of us. Hierarchical structures are accommodations to organizing over greater spatial and temporal spans. They can be efficient, but because they aren’t natural we should not be surprised by dysfunction. Hierarchy that emerges from our disposition to jockey for status in a small group can play out in less than optimal ways in large dispersed communities.

Those at the top work to preserve the hierarchy, with the cooperation of others interested in stability and future promotion. When employees are promoted but prove not up to the task, removing them has drawbacks. It calls into question the judgment of higher management in approving the promotion. Who knows if another choice will be better? The person’s previous job is now filled. If it is not disastrous, best to leave them in place and hope they grow into the job. In this way, a poor school superintendent who was once a good teacher or athletic coach hangs on; an incompetent officer is not demoted. When high-level incompetence could threaten an organization, other strategies are employed: An inept executive focuses on procedural aspects of the job and is given subordinates “who have not yet risen above their levels of competence” to do the actual work.  An incompetent manager is “kicked upstairs” to a position with an impressive title and few operational duties. Peter labels this practice percussive sublimation and describes organizations that pile up vice presidents “on special assignments.” In a lateral arabesque, a manager is moved sideways to a role in which little damage can be done. Another maneuver is to transfer everyone out from under a high-level non-performer, yielding a free-floating apex. Reading Peter’s amusing examples of these and other such practices can bring to mind a manager one has known. Perhaps more than one [3].

The Peter Principle

Researching a book on the practices of good teachers, Laurence Peter encountered examples of poor teaching and administration. His humorous compilation of “case studies” drawn from education and other fields, fictionalized and padded with newspaper stories, eventually found a publisher and became a best-seller. A 1985 book subtitled “The Peter Principle Revisited” promised “actual cases and scientific evidence” behind “the new science of hierarchiology.” It delivered no such thing. He may have observed and interviewed hundreds as claimed, but he provides a limited set of examples: capable followers promoted to be incompetent leaders, capable teachers who made poor administrators, experts on the shop floor who became bad supervisors, great fundraiser-campaigners who prove to be poor legislators, and so on. Sources of eventual incompetence are intellectual, constitutional, social, and other mismatches of skill set to position requirements.

The phenomenon is also evident in less formal hierarchies. A good paper presenter is promoted to panel invitee. A successful panelist receives a keynote speech invitation. A young researcher is invited to review papers. Promotions to associate editor or associate program chair, editorship or program chair, and more prestigious venues or higher professional service can follow until incompetence is achieved. Percussive sublimation and lateral arabesques are found in professional service as well as in organizations. The visibility of competent performance can undermine it by spurring invitations: A strong, proactive conference committee member may deliver weak, reactive service when subsequently on four committees simultaneously.

At times Peter claims that there are no exceptions to his principle. Pursuit of universality led him to dissect apparent exceptions, yielding the insights into how organizations handle high-level poor performers. Elsewhere Peter acknowledges that many people work ably prior to their “final promotion,” suggests a few ways to avoid promotion to your level of incompetence, and presents the nice class boundary analysis that identifies pools of competence.

The prevalence of class systems explains why the 18th and 19th century quotations above described the possibility of promotion to incompetence whereas the 20th century quotations stated its inevitability in an age with less discrimination. Less discrimination against white males, anyway. Allow me an anecdote: As CSCW 2002 ended, I went to a New Orleans post office to mail home the bulky proceedings and other items. As I started to box them, one of two black women behind the counter laughed and told me to give them to her, whereupon she discussed the science of packaging while rapidly doing the job and entertaining her colleague with side comments. I left with no doubt that the two of them could have managed the entire New Orleans postal service. Courtesy of workplace discrimination perhaps, I had the most competent package wrapper in the country.

Spending a career with a single employer—actors in the studio system, athletes and coaches with one team, faculty staying at one university, reciprocal loyalty of employees and company—was once common. Promotions were internal, waiting for promotions was the rule, and years of competent performance were common, abetted by glass ceilings and early retirements. Those days are gone.

The versatility of programming made it a nomadic profession from the outset. When I worked as a software developer in the mid-1980s, we questioned the talent of anyone who remained in the same job for more than three years. A good developer was expected to be ready for new challenges before an opening appeared in their group, so we were always ready to find a job elsewhere. When after two years I left my first programming job—work I loved and was good at—to travel and take classes, my manager tried to retain me by offering to promote me to my level of incompetence—that is, he offered to hire someone for me to manage.

With job opportunities in all professions visible on the Internet and intranets, a saving grace of the past disappears: When the number of competent employees exceeded the number of higher positions, not all could be promoted. Today, a capable worker aspiring to a higher position can likely find an employer somewhere looking to fill such a position. 

Concluding reflections

This essay on managerial efficacy began with the observation that rapidly accessed online information is a powerful tool for skill-building: In many fields, individual competence and productivity have never been higher. Because someone who does something well is a logical candidate for promotion to manage others doing it, this very capability can undermine managerial competence: Managing is a complex social skill that is learned less from studying online than through apprenticeship.

As class barriers and glass ceilings are removed, subtle biases continue to impede promotion, so by the logic of the Peter Principle, past victims of overt discrimination are especially likely to be capable as they more slowly approach their final promotion.

What should we do? Think frequently about what we really want in life, and keep an eye on those hierarchies in which we spend our days, never forgetting that they are modern creations of human beings who grew up on savannahs and in the forests.

Endnotes

1. “It is the responsibility of the media to look at the President with a microscope, but they go too far when they use a proctoscope.”— Richard Nixon

2. Charles Mann’s 1491 provides incisive examples and a thoughtful analysis.

3. The reluctance of monarchs to execute other royalty reflected the importance of preserving public respect for hierarchy. The crime of lèse-majesté, insulting the dignity of royalty, was severely punished and remains on the books in many countries. Today we are reluctant to prosecute or even force out financial executives who made billions driving our economy into ruin. Our genes smile on hierarchy, our brains acquiesce.

Thanks to Steve Sawyer, Don Norman, Craig Will, Clayton Lewis, Audrey Desjardins, and Gayna Williams for comments.


Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Mobile interaction design in the 2013-2014 academic year


Authors: Aaron Marcus
Posted: Wed, December 03, 2014 - 4:32:37

In May 2014, I was blessed with three invitations to be a guest critic at three end-of-the-semester design courses in three different departments of two educational institutions. Here is a quick summary of my experience, much delayed because of professional course/workshop/lecture presentations and book writing.

New Product Development Course, Mechanical Engineering and Haas School of Business, University of California at Berkeley, Berkeley, California

This course is led by Prof. Alice M. Agogino, Department of Mechanical Engineering. (The course description is here.)

From what I have seen over the past few years, the course provides an experience in preliminary project planning of complex and realistic mechanical engineering systems, but includes the possibility of projects that are, in effect, mobile user-experience design. Design concepts and techniques are introduced, and students do innovative design/feasibility studies and present them at the end of the course. The primary reading material is the textbook Product Design and Development (Second Edition) written by Karl Ulrich and Steve Eppinger, a rather basic step-by-step discussion of the processes. The course objectives include innovation and achieving customer-driven products. The topics covered include personas and empathic design; translating the "voice of the customer"; concept generation, selection, development and testing; decision analysis; design for the environment; prototyping; an ethics case study; universal design and entrepreneurship; and intellectual property. Sounds pretty comprehensive, doesn't it? Well, I am sure the students do get a good introduction to the design/development process.

The reviews were held in the Innovation Lab at the Haas School of Business. Some projects that seemed, to me, especially interesting to the HCI/UX/CHI/IXD communities were these:

Headphones: A product that improves existing headphones by increasing longevity for daily active users while enhancing durability and ergonomics 

ResidentSynch, The Smart Home: A product to integrate, monitor and optimize the use of the various products in urban households

Samsung-Intel Next Digital 1 (IoT): A platform that connects all of our devices with objects around us 

Samsung-Intel Next Digital 2 (Sensorial Experience): The integration of technology and sensorial experiences, enabling consumers the ability to interact with technology in a way that stimulates multiple human senses simultaneously 

Smart Alarm Clock: Redesign the experience of waking up and starting your day.

One of the more intriguing projects, which I reviewed in detail, was the Samsung Loop project, with a Samsung staff member as a mentor/guide. When people actually meet (not virtually), controlled information can pass from one ring to another on close contact, for example, when they shake hands. The device was clever, minimal in form, and similar to other ongoing research for “finger-top” devices. Of course, there is the cultural challenge that not all people shake hands upon meeting.


Example of the Loop finger-device for exchanging information between people who actually meet. (Photo by Aaron Marcus)

Even though some projects were quite inventive, most had mediocre presentations of their end results, perhaps not surprising from a group of engineering and business students. One exception was the work of an outstanding student, Elizabeth Lin, a computer-science major, whose graphic design and explanatory skills were at a professional level.

University of California at Berkeley, Department of Computer Science, Course in Mobile Interaction Design, Berkeley, California

Prof. Björn Hartmann invited me to be a critic in his course, in which he offers, together with Prof. Maneesh Agrawala, a semester’s activity in learning about mobile user-experience design, mobile user-interface design, and mobile interaction design. The projects were impressive, as presented in two-minute visual, verbal, and oral summaries. Two other critics and I were asked to judge them. The other two critics were much younger user-experience/interaction design professionals: Henrietta Cramer and Moxie Wanderlust, both from the San Francisco Bay Area (by the way, Moxie Wanderlust made up his name in graduate school, an interesting user-experience design project in itself!).

Amazing to me was the fact that, after judging 24 presentations in under an hour, we were all almost identical in our judgments as to the best overall project, the best visual design, and the most original. I was a little worried that we would differ greatly in our reviews. We didn’t!

In this class, students were able to code or script working prototypes within eight weeks. Alas, insufficient time had gone into user-experience research, usability studies, and visual design. Not a surprise from a center for computer science. The course and the projects are well documented online.

California College of the Arts, Department of Graphic Design, San Francisco, California

The lead professors of the senior thesis presentations, Prof. Leslie Becker and Prof. Jennifer Morla, invited me to join the group of guest critics for 20 senior thesis projects of varying media and subject matter. One of the most interesting was that of Maya Wiester, whose project focused on 3D printing of food and a mobile app that would manage food ordering. New characteristics for one’s favorite cuisine might include not only ethnic/national genre (Italian, Chinese, Thai, Mexican, etc.) and sustainability/healthfulness (vegan, organic, low-salt, no-MSG, gluten-free, local, etc.) but also new attributes such as shape (Platonic solids, free-form, etc.), color (warm, cool, multi-colored, etc.), and surface texture (smooth, rough, patterned, etc.).


Example of 3D-printed food by Maiya Wiester.

Many of the projects were quite interesting and often powerful formal explorations, but they usually lacked business/production/implementation considerations (no business plans), included little testing/evaluation, and involved little or no implementation of computer-software-related applications.

What was evident from visiting all three educational sites was that each had its special emphasis, but no one place succeeded in providing, at an undergraduate (or graduate) level, the depth needed to produce the user-experience design professionals so much needed now. It seems that on-the-job experience is what gives new graduates the necessary depth and breadth of educational experience and expertise.

Perhaps it has always been this way. Some educational institutions claim to provide it, that is, the “complete package.” I think most do not deliver the complete set of goods, but some are trying harder. Having visited or talked with faculty at 5-10 institutions in the last year or two in five or six different countries, I am prepared to say that at least the educational leadership is aware of the challenge.




Aaron Marcus

Aaron Marcus is president at Aaron Marcus and Associates, Inc. (AM+A) in Berkeley, California.


Lightning strikes!


Authors: Deborah Tatar
Posted: Tue, December 02, 2014 - 12:15:38

Every once in a while, in the world of high technology, I encounter someone who is doing a perilous, marvelous thing: planting his/her feet on the ground, and, in grounding him/herself, becoming a conduit for far more. 

At the Participatory Design Conference in Namibia last month, Cristóbal Martínez did this both figuratively and literally. Cristóbal is a graduate student of James Paul Gee and Bryan Brayboy’s, in Rhetoric, at Arizona State. He is also a mestizo from el pueblo de Alcalde, located just north of Santa Fe, New Mexico. Martínez is also part of a collective, Radio Healer, that explores, through rhetoric and performance, indigenous community engagement. And some of that engagement involves appropriation of pervasive media. 

At PDC, Martínez committed an act of digital healing for—or perhaps with—us. The performance that I saw integrated traditional and novel elements. He first spoke, then donned a mask to dance and made music with shell ankle-rattles, a flute, his voice, and three Wi-mote-enabled instruments. One, a bottle that he tilted and turned, had drone-like properties, almost like a Theremin, establishing a kind of keening baseline for the performance. The other two—handheld revolving platforms with Wi-motes affixed—were musically more complex. They provided rhythmic form through the period of revolution of the platforms as he held them, and melodic content through the variation in tonality as the Wi-platforms revolved.


Martínez engaged in performance

In some important sense, this was not much different from the buffalo dances my family and I have witnessed and enjoyed on New Year’s Day near to Martínez’ home in New Mexico. I experienced it with similar emotion in Namibia as in New Mexico, and indeed my direct experience in Namibia was overlaid with a memory. We had taken our three-year-old, Galen, to the Buffalo Dances, and he stood there on the arid yellow winter ground, entirely absorbed, for several hours, apparently indifferent to the considerable cold. He was also uninterested in offers of snacks, lunch, or naps. (Our baby was also completely absorbed from the comfort of his much warmer stroller, so that made an entirely absorbed family.) 

Eventually, the huge Abuelo (grandfather) in his cowboy hat came over to Galen. Galen, also in a cowboy hat, and with his great serious, steady dark eyes, and his soft trusting little baby cheeks, looked up at the Abuelo. The Abuelo silently held out his great huge man hand and folded Galen’s little one into a serious man’s handshake. After which he gave him a very tiny, very precious candy cane. There it was, in two silent, economical gestures: acknowledgement of the elements of the man my son would become (and now is), and the child that he was. We had come to witness the dances, but we were, in this way, also seen. 

In some sense, Martínez’ performance was like that for me. It was replete with directly experienced meaning—meaning perceptible to my three-year-old and even the 8-month-old baby. I was a witness; I found my own meaning in it despite my non-native status, and I was to some extent and in some generous way invited to partake.

But there are some differences worth thinking about. Martínez was performing in isolation from his collective; he was performing for us, a small and sympathetic but definitely global audience; and then—the trickiest bit—he was adopting and adapting Wi-motes. 

I am not in much of a position to talk about what Radio Healer means in its own setting, except that we cannot think about what Martínez was showing us without thinking about the framing he gives it. His purpose is to open up and enliven a self-determining community through the assertion of what he calls “indigenous technological sovereignty.” He wants his people to engage in critical discourse around the appropriation of technology. He wants them to re-imagine it for their own purposes. Possibly “he wants” is too egocentric a phrase. He probably sees himself as a conduit of a larger collective wanting. This quest is given purpose by their own pursuit of their own cultural logic and lives, but it is also given poignancy by pressures that particularly impinge on Indian sovereignty. Not only have Indians been decimated, abused, underserved and neglected in the past, but they also are currently colonized in many ways, not the least of which is a considerable pressure to be a kind of living fossil—to live as stereotypes of themselves, unable to change, and unable to create living community.

Martínez is not the only person to protest this pressure. Glenn Alteen and Archer Pechawis expressed it at last spring’s DIS conference in Vancouver in explaining the operation of grunt gallery (especially Beat Nation). And, while in Vancouver, I was lucky enough to be able to see Claiming Space, the hip-hop Indian art show at the Museum of Anthropology, created by teenage urban First Nation people. The question that was posed around that curated exhibit was, “Why should it be noteworthy that Indians engage with hip-hop?” 

And the good question that follows on this (“Why should it be noteworthy that Indians use Wi-motes in their ritual?”) brings us back to the meaning of Martínez’ performance for us at PDC. Because Indians are also Americans or Canadians or citizens of the world, and, even if they were space aliens, Martínez’s healing tacitly suggests that the performance has relevance beyond its indigenous origin. 

To my mind, the relevance stems first from the fact that we must all—Indian, not-Indian, American, African—decide how to live, given the palpable options in front of us and then, secondarily, among the pressure of computation. Computing is a structural enterprise, but we are not structural creatures. How do any of us create lives, our own rich lives, in the constant presence of the reductionist properties of the computer? In this sense, computation is a colonization that we all face. 

Martínez’ performance underscores the ways in which it is terribly, desperately hard to wrest even a device as simple and innocuous-seeming as a Wi-mote from its place fulfilling and fueling consumption in a consumer society and relocate it as an expressive element in a spiritual practice. He cannot even do this simple thing, using it in a performance, without it being noteworthy, even definitional! How much more difficult is it to resist the self-definition we see in the systems embedded in the everyday activities of our lives? Martínez, arguably, provides us with a model for culturally responsive critical engagement with emerging pervasive technologies in the early 21st century. His performance raises appropriation to a design principle: Use that which is noteworthy, but use it for your own purposes. Make sure that they are your purposes. Or as Studs Terkel used to say, “Take it easy, but take it.”




Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.


@Lucy Suchman (2014 12 12)

Deborah, thanks so much for this lucid reflection on a complicated event!  It makes its significance clearer even for those of us who were present.

Lucy


User experience without the user is not user experience


Authors: Ashley Karr
Posted: Mon, November 24, 2014 - 10:09:43

Takeaway: User experience (UX) is a method of engineering and design that creates systems to work best for the intended user. In order to design in this way, users must be included in the design process through user research and usability testing. If user research and usability testing are not practiced, then UX is not being practiced.

What is the problem?

I have seen and interacted with countless individuals and organizations claiming that they practice UX, but in reality, they practice what I call personal experience design, stakeholder experience design, or client experience design. In this type of design, there may be a department called UX within an organization, there may be a few individuals with UX in their job descriptions, or there may be a consultant or agency that sells UX services to clients. However, these departments, agencies, and individuals never work directly with representative users. Simply put, what these people and groups are practicing is not user experience because they are not including the user in their design process. 

Why do we exclude the user from user experience?

Excuses for not working with users include: lacking the time and money to conduct research, not knowing that working with users is part of the UX process, not believing working with non-designers or engineers would positively impact a design, actually being afraid of recruiting and working with users, and not knowing how to conduct research and testing. My hope is that the frequency of these excuses will drop considerably over the next few years as more people become aware of how valuable user research and usability testing methods are and how fast, easy, and enjoyable user research and usability testing can be. 

User research and usability testing can speed up the design process for a number of reasons. The most important, in my opinion, is that the data gathered from research and testing is very difficult to argue with. This data aligns teams and stakeholders and cuts back on time spent agonizing and arguing over design decisions. If two departments or two team members can’t agree on a design element, for example, they can test it and let the users choose. Setting up and running a test is often faster and more productive than arguing or philosophizing about the design with teams, clients, and/or stakeholders.
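As a minimal sketch of what “test it and let the users choose” can look like once the results are in (this is my illustration, not part of the original post), the Python snippet below compares task-success rates for two design variants with a two-proportion z-test. The counts, the variant labels, and the function name are all hypothetical, and this is just one reasonable way to check whether an observed difference is likely to be more than chance.

    from math import sqrt, erf

    def two_proportion_z(success_a, n_a, success_b, n_b):
        # Pooled two-proportion z-test comparing task-success rates of two design variants.
        p_a, p_b = success_a / n_a, success_b / n_b
        p_pool = (success_a + success_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # Two-sided p-value from the standard normal distribution.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Hypothetical results: 18 of 24 participants succeeded with variant A, 9 of 22 with variant B.
    z, p = two_proportion_z(18, 24, 9, 22)
    print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests the difference is unlikely to be chance alone

Either way, the persuasive ingredient is the data from real users, not the statistics.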

Why is it important to work with users?

Working with users allows us to understand users’ mental models regarding a design. This, in turn, allows us to design an interface that matches users’ mental models rather than the interface matching the engineers’, designers’, or organizations’ implementation models. It should make sense to design something that the user will understand. The way to understand what makes sense to the user is to talk to, listen to, and observe them. Although making sure that users understand how to use your design does not guarantee success, it does increase your chances. Finally, by working with users in this way, we often gain deep insights that spur innovation for our current and future designs. Needs we may never have spotted expose themselves and point us toward a new and important function or a completely novel product.

What are user research and usability testing?

User research is a process that allows researchers to understand how a design impacts users. Researchers learn about user knowledge, skills, attitudes, beliefs, motivations, behaviors, and needs through particular methods such as contextual inquiry, interviewing, and task analysis. Usability testing is a method that allows a system or interface to be evaluated by testing it on users. The value of usability tests is that they show how users actually interact with a system—and what people say they do is often quite different from what they actually do. Note that the biggest difference between user research and usability testing is this: A usability test requires a prototype; user research does not necessarily require one.

How can you begin practicing user research and usability testing now?

Depending on your situation, you may be able to begin practicing user research and user interviewing informally right now by simply asking people around you for feedback regarding your design. For most of us, this is very feasible and we do not need permission from anyone to get started. If your representative users are quite different than the people around you, you are working on a project that is protected by non-disclosure agreements, or you are working within a large organization with an internal review board, you will have a few more hoops to jump through before being able to begin using these techniques. You will just have to work a little harder to recruit representative users, require research participants to also sign NDAs, and/or receive approval from the internal review board before you can start. 

Let’s set the hoops aside now and accept that getting out and away from your own head, assumptions, desk, office, and devices in order to talk to and observe users in their native habitats is one of the most powerful design techniques that exists. Being afraid or uninformed can no longer be excuses. Once you have conducted a user interview or a round of usability testing, the fear subsides, and there are so many amazing and free resources available online to guide practitioners through research and testing. I recommend usability.gov and a hearty Google search to get the information you need to begin incorporating user research and usability testing into your process. And, that is the point of this article—to get you to begin.



Posted in: on Mon, November 24, 2014 - 10:09:43

Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.
View All Ashley Karr's Posts


Post Comment


No Comments Found


Debatable


Authors: Jonathan Grudin
Posted: Fri, November 21, 2014 - 3:18:54

Issues that elicit passionate and unpredictable views arrive, fast and furious:

  • The right to be forgotten
  • Facebook emotion manipulation
  • Online bullying and cyber hate
  • Institutional Review Boards
  • Open publishing
  • The impact of technology on jobs

Is consensus eroding?
Has HCI broadened its scope to encompass polarizing topics?
Are social and mainstream media surfacing differences that once were hidden?

After briefly reviewing these topics, I’ll look at controversies that arose in years past—but back then it was two or three per decade, not one every six months. Unless I’ve forgotten some, the field is more contentious in its maturity.

Recent controversies

Expunging digital trails. The European Union ruled that people and organizations can prevent search engines from pointing to some past accounts of their activity. This might favor those with financial resources or high motivation due to particularly unsavory histories, but it could counter an erosion of privacy. That said, it doesn’t expunge records, it just puts a hurt on (non-European) search engine companies. Will it work? The BBC now lists articles that search engines must avoid, which could increase the attention the articles receive.

The Facebook study. Researchers set off a firestorm when they claimed to induce “emotional contagion” by manipulating Facebook news feeds. Whether or not the study showed emotional contagion [1], it revealed that Facebook quietly manipulates its news feed. The use of A/B testing to measure advertising effectiveness is not a secret, but this went further. If happy people prove more likely to click on ads, could Facebook systematically suppress grumpy peoples’ posts? Many in the HCI community defended the study as advancing our understanding; I could not predict which side a colleague would come down on. I was a bit torn myself. I’m a strong advocate for science, but what if one of the unidentified half million people barraged with selectively negative friends’ posts dropped out and another leapt from a bridge? The “contagion” metaphor isn’t comforting.

Prior approval to conduct research. Did the Facebook terms of agreement constitute ethically adequate “informed consent” by those in the study? The limited involvement of an Institutional Review Board (IRB) was challenged, which brings us to another ongoing debate. IRB approval is increasingly a prerequisite for academic research in health and social science, intended to avoid harm to study subjects and legal risk for research institutions. It seems reasonable, but in practice, researchers can wear down examiners and get questionable studies approved, and worthwhile research can be impeded or halted by excessive documentation requirements.

The Facebook study team included academic researchers. Industry researchers are often exempt from IRB review, which can irritate academics. I have experienced both sides. I’ve seen impressive, constructive reviews in clinical medicine. In behavioral and social science, I’ve seen good research impeded. Could a PhD based on human subject observation or experimentation become a license to practice, as a surgeon obtains a license, with onerous IRB reviews invoked only after cases of “malpractice”?

Hate speech and bullying vs. free expression. A few well-publicized teenage suicides after aggression in online forums led to legislation and pressure on social media platform and search engine developers to eliminate unwanted, disturbing online confrontations. But it’s complicated: What is offensive in one culture or subculture may not be in another. Offensiveness can depend on the speaker—“only Catholics can criticize the Pope,” but on the Internet no one knows you’re a Catholic. Some people find it entertaining to insult and horrify others—“trolls” exhibit poor taste, but is it hate speech?

We want to protect the young, but how? When a child is the target and a parent is known, should abuse be reported? Some kids can handle it and would rather not involve their parents. Some researchers argue that children with abusive parents might be worse off if a parent is brought in. This topic calls for research, but legislation is being enacted and software developers can’t wait. People take different sides with considerable passion.

Open publishing. Publishing becomes technically easier every year. Why not cut publishers out of the loop? Many who have looked closely respond, “Because publishers oversee details that busy professionals would rather avoid.” Nevertheless, ACM is under pressure despite its very permeable firewall: Individual and conference “author-izer” features permit free worldwide access, and thousands of institutions buy relatively inexpensive ACM site licenses without grumbling. Open access was embraced first in mathematics and physics, where peer review plays a smaller role, and in biology and medicine, where leading journals are owned by for-profit publishers. Within HCI and computer science, calls for open access are strongest in regions that rely less on professional societies for publication.

Publishers sometimes spend profits in constructive ways. ACM scans pre-digital content that has no commercial value into its useful digital library archive and supports educational outreach to disadvantaged communities. Some for-profit publishers have trail-blazed interactive multimedia and other features in journals. Publishers do shoulder tasks that over-taxed volunteers won’t.

The impact of technology on jobs. In 1960, J.C.R. Licklider proposed a “symbiosis” between people and computers. Intelligent computers will eventually require no interface to people, he said, but until then it will be exciting. Many of his colleagues in 1960 believed that by 1980 or 1990, ultra-intelligent machines would put all humans out of work—starting with HCI professionals. No one now thinks that will happen by 1980 or 1990, but some believe it will happen by 2020 or 2030.

A recent Pew Internet survey found people evenly split. Digital technology was credited for the low-unemployment economy prior to 2007, so why not blame high tech for our lingering recession? That’s easier than taking action, such as employing people to repair crumbling infrastructures.

In September, a Churchill Club economic forum in San Francisco focused on automation and jobs. The economists, mostly former presidential advisors, noted that in the past, new jobs came along when technology wiped out major vocations. The technologists were less uniformly sanguine. A Singularity University representative forecast epidemic unemployment within five years. The president of SRI more reassuringly predicted that it would take 15 years for machines to put us all out of work. I contributed a passage written by economics Nobel Laureate and computer scientist Herb Simon in 1960, appearing in his book The Shape of Automation for Men and Management: “Technologically, machines will be capable, within 20 years, of doing any work that a man can do.”

Past controversies: Beyond lies the Web

The HCI world was once simple. We advocated for users. Now it’s more complicated. Users hope to complete a transaction quickly and leave; a website designer aims to keep them on the site, just as supermarket designers place frequently purchased items in far corners linked by aisles brimming with temptation.

Some disagreements in CHI’s first 30 years didn’t rise to prominence. For example, many privately berated standards as an obstacle to progress, whereas a minority considered standards integral to progress, claiming that researchers generally favor standards at all system levels other than the one they are researching. Social media weren’t there to surface such discussions and mainstream media rarely touched on technology use. However, controversy was in fact rare. Program committees desperately sought panel discussions that would generate genuine debate. We almost invariably failed: After some playful posturing by panelists, good-natured agreement prevailed.

There were exceptions. Should copyright apply to user interfaces? Pamela Samuelson organized CHI debates on this in 1989 and 1991. Ben Shneiderman advocated legal punishments and testified in a major trial. On the other side, many including Samuelson worried that letting a lawyer’s nose into the HCI tent would lead to tears.

In spirited 1995 and 1997 CHI debates, Shneiderman took strong positions against AI and its natural-language-understanding manifestation [2]. His opponents were not mainstream CHI figures and the audience generally aligned with Ben. AI competed with HCI for the hearts and minds of funders and students. NLU dominated government funding for human interaction but never established a significant foothold in CHI, where most of us knew that NLU was going nowhere anytime soon.

Today, there is some openness to discussing values and action research. Back then, CHI avoided political or value-laden content. We were engineers! One might privately lean left or libertarian, but the tilt was scrupulously excised from written work. Ben Shneiderman was a lonely voice when he advocated that CHI engage on societal issues.

The prevailing view was that HCI must eschew any hint of emotional appeal to get a seat at the table with hardware and software engineers, designers and managers. Ben’s calls to action were balanced by his conservative reductionist methodology: There is no problem with experiments that more experiments won’t solve.

The most heated controversies were over methods. Psychologists trained in formal experimentation, engineers focused on quantitative assessment, and those who saw numbers as essential to getting seats at the table were hostile to both “quick and dirty” (but often effective) usability methods and qualitative field research.

In 1985, in the first volume of the journal Human-Computer Interaction, Stu Card and Allen Newell said that HCI should toughen up because “hard science” (mathematical and technical) always drives out “soft science.” Jack Carroll and Robert Campbell subsequently responded that this was a bankrupt argument, that CHI should expand its approaches to acquire better understanding of fundamental issues. In 1998, again in the journal HCI, Wayne Gray and Marilyn Salzman strongly criticized five influential studies that had contrasted usability methods. They drew 60 pages of responses from 11 leading researchers.

The methodological arguments subsided. Today’s broader scope may reflect greater tolerance for diverse methods, or maybe the tide turned—advocates of formal methods shifted to other conferences and journals.

Concluding observation

Today contention flares up, but public attention soon moves on. Is there any sustained progress? Some topics are explored in workshops or small conferences. A Dagstuhl workshop made progress on open publishing, but we did not produce a full report and uninformed arguments continue to surface. Some people with strong feelings aren’t motivated to dig deeply; some with a deep understanding don’t have the patience to repeatedly counter emotional positions.

The vast digital social cosmos that surfaces controversies may diminish our sense of empowerment to resolve them. We touch antennae briefly and continue on our paths.

Endnotes

1. The study showed that news feeds that contain negative words are more likely to elicit negative responses. For example, if people respond to “I’m feeling bad about my nasty manager” by saying “I’m sorry you’re feeling bad” and “Tough luck that you got one of the nasty ones,” the authors would conclude that negative emotion spread. Other explanations are plausible. When my friend is unhappy, I may sympathize and avoid mentioning that I’m happy, as in the responses above.

2. A record of the CHI 1997 debate, “Intelligent software agents vs. user-controlled direct manipulation,” is easily found online. A trace of the more elusive CHI 1995 debate can be found by searching on its title, “Interface Styles: Direct Manipulation Versus Social Interactions.”

Thanks to Clayton Lewis for reminding me of a couple controversies and to Audrey Desjardins for suggestions that improved this post and for her steady shepherding of Interactions online blogs.




Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Disrupting the UX design education space


Authors: Richard Anderson
Posted: Wed, November 19, 2014 - 11:31:55


Room 202

My teaching partner Mandy and I stood in silence, looking one last time around the room in which magic had happened over the preceding 10 weeks. We teach the UX Design immersive for General Assembly in San Francisco. 10 weeks, 5 days/week, 8 hours/day of teaching and learning, of intense, hard work, of struggle, of laughter, of transformation, of bonding that will last forever. Educational experiences don’t get any better than this.

The UX Design immersive is intended mostly for people wanting to make a career transition. Students make a huge commitment by signing up for the course, stopping whatever they were doing prior, and in some cases, traveling long distances to do one thing: to become a UX designer.

General Assembly is one of several new educational institutions that are slowly disrupting the higher education space. Jon Kolko has identified the following qualities shared by many of these institutions’ programs:

  1. they are short;
  2. they focus on skill acquisition;
  3. they produce a portfolio as evidence of mastery;
  4. they are taught by practitioners;
  5. they promote employment and career repositioning, rather than emphasizing the benefits of learning as an end in itself;
  6. they typically focus on "Richard Florida" type jobs and careers: the creative disciplines of software engineering, product design, advertising, marketing, and so on.

As described by Jon:

“Students who graduate from these programs have a body of work that they can point to and say ‘I made those things.’ This makes it very easy to understand and judge the quality of the student, particularly from the standpoint of a recruiter or hiring manager.”

and:

“These educators have a deep and intimate understanding of both the material that is being taught and the relevancy of that material to a job.”

Given the increasingly heard argument that academic programs are not producing the kinds of designers needed most by industry (see, for example, "On Design Education")… And given that 90% of UX Design immersive students secure jobs within 90 days of the end of their cohort... (I might be moderating a panel contrasting different institutional instructional models at the Interaction 15 Education Summit in February.)

What is it like teaching the UX Design immersive at General Assembly? To get a sense of this, read the Interactions magazine blog post written earlier this year by our fellow UX Design immersive instructor in Los Angeles, Ashley Karr, entitled, “Why Teaching Tech Matters.” Also, Mandy and I might be conducting a mock classroom at the Interaction 15 Education Summit in February to give attendees a mini-experience of the immersive program.

Tears filled the room on the final day of the course. We all had put everything we had into the preceding 10 weeks, and we could not help but be emotional. We hope the magic will happen again when we teach the course in December. But it all will happen in a different space (a new campus opens tomorrow), and Mandy and I will be paired with other instructors instead of each other.

I will miss the magic of Room 202 in the crazy, crowded 580 Howard Lofts with only one bathroom and no air conditioning, situated next to a noisy construction site; I will miss the magic of working closely with the amazing Mandy Messer; and I will miss the magic of getting to know a certain 21 special, fabulous people who are now new UX designers. 

But we will do it again, and we will try to do it even better.




Richard Anderson

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.


@Jonathan Grudin (2014 11 19)

Congratulations and very nice, Richard. Almost makes me wish I was still a practitioner so I could join in. I hope you find a way to franchise this.

@Ashley Karr (2014 11 19)

Hello! Thanks for writing this - and thanks for the shout out. This is a very good article to reference when folks asks questions regarding types of programs they should look into re: career changes and continuing education. Good luck with your next round!


The top four things every user wants to know


Authors: Ashley Karr
Posted: Thu, November 13, 2014 - 10:43:44

I have been conducting research with human participants for roughly 14 years. Some of my studies have been formal (i.e., requiring the approval of an ethics or internal review board), some have been informal (i.e., guerilla usability testing for small start-ups), and some have landed in between those two extremes. Additionally, the populations and domains that I studied ranged widely; however, I have found certain similarities across all my participants. I am sharing with you now the results of my meta-analysis of all the usability testing, user research, field studies, and ethnographies that I have completed so far in my career. I call my meta-analysis “The Top Four Things Every User Wants to Know.” I use it on a daily basis—it really does come in handy that often. Here it is:

  1. Users want to see a sample of your design. For example, they want to see pictures, a demo, or a video of how your design functions. Telling your users about your design through speech or text is not as effective as showing them, unless, of course, you are working with a visually impaired population. If users aren’t able to see a representative example of your design functioning, they will lose interest very quickly.

  2. Users want to know what your design will cost in terms of money. Hiding fees or prices associated with your design breaks trust. If users can’t find fees or prices quickly and easily, they become disinterested in your design and either look to competitors or simply move on. When conducting research on pricing and fee structures, 100% of users in my studies have told me that they are willing to pay more money (within reason) for a product or service if that means they are able to find out how much money they will be spending upfront. Another design decision that can break trust is using the term free. 100% of users that I have studied regarding this phenomenon state something like, “Nothing in this world is free. What are they hiding from me?” Interestingly, I have done studies on systems that are purely informational and very distant from the world of commerce. Users were still concerned that they would be charged in some way for their use of the system.

  3. Users want to know how long it will take to use and/or complete a particular task with your design. Users do not believe you when you say or write that your design saves time, is easy to learn, and quick to use. If you tell your users these things, you waste their time and break their trust. If you show them how quick and easy your design is, and your design actually performs this well, then you’re getting somewhere.  

  4. Users want to know how your design will help or harm them if they decide to start using it. As one of my students said, "Users want to know how your design will make their life more or less awesome before they decide to truly commit and interact or purchase." This transcends time and money, and moves into deeper realms, such as adding more meaning to people's lives or fostering more positive engagement with their surroundings.

To recap, users want to know what your design is, what it costs, how long it will take to use, and how it will make their life more awesome (or not)—and in that order. These may seem like overly simple design elements, but very basic things get overlooked in the design process all the time. If those basic elements do not make it into the design before launch or production, disaster and failure may strike. This list also helps me when I get stuck due to overthinking and possibly overcomplicating a design. It reminds me to keep my design straightforward and highly functional for my users.

Straightforward. Highly functional. These may be the two most important characteristics of any design. Add those to the four things that every user wants to know, and we have six quality ingredients for user-centered design.




Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


Love it or list it


Authors: Monica Granfield
Posted: Wed, November 12, 2014 - 2:57:45

Have you ever tried to coordinate a project, a group of people, their activities, and their progress? Or organize your thoughts or what needs to get done? What has been your most efficient tool?

For me it's most often some form of a simple list. 

All kinds of systems have been created over time to help individuals and organizations get organized and improve efficiency: from pads of paper with checkboxes printed on them to magnetic boards with pre-canned task components, for kids and adults alike. Paper planners and systems like Franklin Planners have stood the test of time. The digital age has brought a slew of complex products for targeted industries or personal use. All come with the promise of organizing, tracking, planning, and even projecting work; resourcing; generating analytics; optimizing for efficiency; and leaving you with plenty of free time to have coffee with friends. How many deliver on this promise?

From professional to personal use, there's a productivity product out there for you. I have had the pleasure of evaluating some of these tools, using some of them, and yes, even designing a tool to help boost communication, organization, and productivity for users. Organizing people and their activities is not easy and can quickly become complex.

I have witnessed the initial flood of enthusiasm over the promise of accomplishment these tools bring, and then watched the enthusiasm fizzle and the use of the product simply fade over time. Too often these tools for boosting productivity become a full-time job for at least one person in an organization. I have also witnessed users struggle with these products. Sometimes users are successful with portions of the product; other times products are so complex and hard to grasp that hours of training and use still fail to make users successful. The vast majority of users become confused and use only the few top features that meet management's expectations. Too often, productivity tools enter an organization quietly via management, with little feedback from the people who will use them.

Many of these products try to emulate a conceptual model, rather than how people work or what they need from a product. If you are not familiar with the conceptual model, learning the product will prove to be a challenge. One product I used tried to emulate the conceptual model of the Agile process. However, there are many interpretations of what the Agile process is and how to implement it. Also, Agile typically covers software development but not related disciplines such as documentation, UX, marketing, or hardware. Roles in disciplines not included in the product's version of the process are retrofitted into its conceptual model. The users in these roles don't understand the model, get frustrated learning or managing the product, and then begin the decline into becoming non-users.

Rather than struggle with a tool that doesn't meet the needs of the group, organization, or users, it becomes easier to fall back on a good old-fashioned organizational tool such as a list. Most of these lists end up being created and managed in a spreadsheet or document program, which are more familiar to users and therefore make it easier to manage people and activities: tracking changes, sorting, filtering, and simply checking something off when it's completed. No complex processes where you assign tasks and stories or forward users to a new phase of a project. No logging in, no trying to find who owns what and who did what. No time wasted trying to figure out how this glorified list with its complex system of built-in features works. All you need to do is glance at the list, with its glorious titles, headers, columns, and rows, all there right in front of you. Prioritize the list, reorder, highlight items, cross something off and ta-dah... you are done. Now you can go and have coffee with your friends.

If an application that is meant to organize and increase productivity becomes too complex and hard to use, the abandonment rate will rise. Organizations will abandon one product for another and, if all the while their users don't love the product, the users most likely will slowly and quietly resort to listing it.

The simplicity of a list is all that is needed to keep me organized and boost my productivity. Lists play a key role in tracking what needs to be done, keeping inventory of issues, and tracking and assigning who needs to do what, when.

Sometimes the simple and straightforward solution just works. If you don't love your productivity tool, do you list it? 




Monica Granfield

Monica Granfield is a user experience designer at Imprivata. The views expressed on this website are exclusively her own and are not meant to reflect or represent the views of Imprivata.


Batman vs. Superman (well, actually, just PDC vs. DIS)


Authors: Deborah Tatar
Posted: Tue, October 28, 2014 - 9:48:54

The Participatory Design Conference (PDC), which just had its 13th meeting in Windhoek, Namibia, is a close cousin to DIS, the Designing Interactive Systems conference. Both are small, exquisite conferences that lead with design and emphasize interaction over bare functionality; however, like all cousins (except on the ancient Patty Duke Show, in which Patty Duke played both herself and her "British" cousin), they have some important differences. Unlike DIS, PDC is explicitly concerned with the distribution of power in projects; furthermore, the direction of distribution is valenced: more power to those below is good.

In a way, it is odd for designers to think about distributing power downward—how much power do designers actually have?—and yet the PD conference is an extremely satisfactory place to be. Even if designers do not, in fact, have much power, we are concerned with it. The very act of designing is an assertion of power. Why design, if not to change behavior? And what is changing behavior if not the exertion of power? And if we are engaged in a power-changing enterprise, how much finer it is to, within the limitations of our means, take steps that move power in the right direction rather than ourselves accepting powerlessness? To contemplate what is right is energizing! 

DIS has been held in South Africa, but PDC one-upped it by being held in Namibia and by attracting a very significant cadre of black African attendees. Consequently, a key question after the initial round of papers concerned the fears of the formerly colonized: whether the practices of participatory design are not, in some sense, a form of softening up the participating populace for later, more substantial exploitation. Of course, one hopes not, but how lovely to be at a conference that does not sweep that important issue under the rug.

As this question points out, we do not always know what is right. Ironically, acknowledging this allows the conference to feel celebratory. Perhaps it was the incredible percussive music, the art show, or Lucy Suchman's "artful integration" awards, received this year by Ineke Buskens, representing GRACE (Gender Research in Africa into ICTs for Empowerment), and by Brent Williams, representing Rlabs, which supports education and innovation in townships in South Africa and impoverished communities around the world. Both organizations are local, bottom-up, community-driven appropriations of technology. Or perhaps it was the fact that so many papers focused on discovering the particulars that people care about in their lives and on creating technologies that influence those lives.

I went to the conference because I have been hoping to jump-start more thought about power in the DIS, CHI, and especially the CSCW communities. I know that many people in these communities have become increasingly concerned about power in the last few years. I hear whispering in the corridors, the same way Steve Harrison, Phoebe Sengers, and I heard whispering before we wrote our “Three Paradigms” paper (the one that tried to clarify basic schools of thought within CHI and how they go together as bundles of meaning). Now the whispering is different. It is about how the study of human-computer interaction needs to be more than the happy face on fundamentally exploitative systems. 

In any case, I knew that PDC would be ahead of me and that the people who would have the most sophistication would very likely not be American. As wonderful as Thomas Jefferson is, he—and therefore we in America—are too much about social contract theory and what Amartya Sen calls “transcendental institutionalism” to adjust easily to certain kinds of problems of unfairness, especially unfairness that requires perception of manifest injustice. As Amartya Sen points out, Americans draw very heavily on the idea that if we have perfect institutions, then the actual justice or fairness in particular decisions or matters of policy does not matter. Transcendental institutionalism is a belief that makes it very difficult to effectively protest fundamentally destructive decisions such as treating corporations as people. In HCI and UX, we tend to think that if users seem happy in the moment, or we improve one aspect of user experience, the larger issues of the society that is created by our design decisions are unimportant. Transcendental institutionalism, again. 

These issues are explored more at PDC. I attended a one-day workshop on Politics and Power in Decision Making in Participatory Design, led by Tone Bratteteig and Ina Wagner from the University of Oslo. Bratteteig and Wagner would have it, in their new book (Disentangling Participation: Power and Decision-making in Participatory Design, ISBN 978-3-319-06162-7) as well as at the conference, that the key issue in the just distribution of power is choice in decision making. The ideal is to involve the user in all phases of decision making: in creating choices, in selecting between choices, in implementation choices (when possible), and in the many choices that surround the evaluation of results. Furthermore, they feel, a participatory project should have a participatory result: it should increase the user's "power to." "Power to" arises from a feminist notion of power in which dominion is not paramount. Instead, it is closer to Amartya Sen's notion of capability. Freedom, from Sen's perspective, consists of the palpable possibilities that people have in their lives.

Pelle Ehn, who gave the keynote and is making a farewell round of extra-US conferences before retiring (he will be the keynote speaker at OzCHI shortly), put this view in a larger context by reminding us of Bruno Latour’s notions of “parliaments” and “laboratories.” In this way of thinking, the social qualities of facts are paramount. Though Latour appears to have pulled back from this view later in his life, it has the great advantage of helping us perceive issues of power in design. Power is hard to see. The waves of utopian projects (one even called Utopia) that Ehn has shepherded during his long career each reveal more about the shifting and growing power of technology and the institutions that profit from it. The implicit question raised is “what now?” 

A portion of this concern might seem similar to what we say all the time in human-computer interaction. After all, what is the desirable user experience if not the experience of "power to"? However, in fact, the normal conduct of HCI and that of PD resemble one another only at a very high level of abstraction. They differ in the particulars. My t-shirt from the 2006 PDC conference reads, "Question Technology," but I would say that the persistent theme is not precisely questioning technology as much as questioning how we are constructing technology. PDC does not ask whether technology serves, but whom, precisely, it serves, now and later. PDC asks "what is right?"

PDC is the still-living child of CPSR (Computer Professionals for Social Responsibility), an organization that, after a long life, sadly went defunct just this year. I have not been involved with it in recent years, and I am not sure what to write on the death certificate, but it seems to me that the rise of untrammeled global corporate capitalism fueled by information technology has created new problems, that something like CPSR is much needed, and that PDC stands for many questions that need better answers than we currently have. 

I did not get to hear Shaowen Bardzell's closing plenary, because I had to journey over 30 hours to get home and be in shape to lecture immediately thereafter, but rumor has it that she energized the community. I am optimistic that her insider-outsider status as a naturalized American, raised in Taipei, gives her the perspective to address issues at the edge between, as it were, power over and power to.

Returning to the question of Batman vs. Superman, DIS is also a very delightful place to be. But the freedom of spirit that has characterized it since Jack Carroll resuscitated it in 2006 exists in uneasy implicit tension with the concerns and measurements of its corporate patrons.




Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.


@Jonathan Grudin (2014 10 29)

Nice essay, thanks Deborah.

@Ineke Buskens (2014 10 31)

Delightful piece Deborah, thank you! Much love from South Africa!


Uses of ink


Authors: Jonathan Grudin
Posted: Fri, October 17, 2014 - 9:06:47

Many species communicate, but we alone write. Drawing, which remains just below the surface of text, is also uniquely ours. Writing and sketching inform and reveal, record, and sometimes conceal. We write to prescribe and proscribe, to inspire and conspire.

My childhood colorblindness—an inability to see shades of gray—was partly overcome when I read Gabriel Garcia Marquez. But for nonfiction I continued to prize transparency. Crystal clarity for all readers is unattainable, but some writers come close. For me, Arthur Koestler’s breadth of knowledge and depth of insight were rivaled by the breathtaking clarity of his writing, no less impressive for often going unnoticed.

Lying on the grass under a pale Portland sun the summer after my sophomore undergraduate year, I took a break from Koestler—The Sleepwalkers, The Act of Creation, The Ghost in the Machine, and his four early autobiographical volumes—to read Being and Nothingness. Sartre had allegedly changed the course of Western philosophy. After a few days I had not progressed far. I had lists of statements with which I disagreed, felt were contestable, or found incomprehensible. “You are reading it the wrong way,” I was informed. Read it more briskly, let it flow over you.

This struck me as ink used to conceal: Jean-Paul Sartre as squid. A cloud of ink seemed to obscure thought, which might be profound or might be muddled.

As years passed I saw clear, deep writers ignored and opaque writers celebrated. In Bertrand Russell’s autobiography, Frege declared Wittgenstein’s magnum opus incomprehensible and Russell, Wittgenstein’s friend, felt that whatever it meant, it was almost certainly wrong. Decades later, my college offered a course on Wittgenstein (and none on Frege or Russell). I had given up on Sartre but took the Wittgenstein course. I liked the bits about lions and chairs, but unlike my classmates who felt they understood Wittgenstein, I sympathized with Frege.

It finally dawned on me that in nonfiction, as in complex fiction, through the artful construction of an inkblot, a verbal Rorschach, a writer invites readers to project conscious or unconscious thoughts onto the text and thereby discover or elaborate their own thoughts. The inkblot creator need not even have a preferred meaning for the image.

It requires skill to create a good verbal projection surface. A great one has no expiration date. “Sixty years after its first publication, [Being and Nothingness] remains as potent as ever,” says Amazon. (It’s now over seventy years.)

Let’s blame it on the Visigoths

George Orwell was a clear writer. His novels 1984 and Animal Farm are unambiguous enough to be assigned to schoolchildren. In a long-defunct magazine he published this beautiful short essay, a book review. (Thanks to Clayton Lewis for bringing it to my attention.)

The Lure of Profundity
George Orwell, New English Weekly, 30 December 1937

There is one way of avoiding thoughts, and that is to think too deeply. Take any reasonably true generalization—that women have no beards, for instance—twist it about, stress the exceptions, raise side-issues, and you can presently disprove it, or at any rate shake it, just as, by pulling a table-cloth into its separate threads, you can plausibly deny that it is a table-cloth. There are many writers who constantly do this, in one way or another. Keyserling is an obvious example. [Hermann Graf Keyserling, German philosopher, 1880–1946.] Who has not read a few pages by Keyserling? And who has read a whole book by Keyserling? He is constantly saying illuminating things—producing whole paragraphs which, taken separately, make you exclaim that this is a very remarkable mind—and yet he gets you no forrader [further ahead]. His mind is moving in too many directions, starting too many hares at once. It is rather the same with Señor Ortega y Gasset, whose book of essays, Invertebrate Spain, has just been translated and reprinted.

Take, for instance, this passage which I select almost at random:

“Each race carries within its own primitive soul an idea of landscape which it tries to realize within its own borders. Castile is terribly arid because the Castilian is arid. Our race has accepted the dryness about it because it was akin to the inner wastes of its own soul.”

It is an interesting idea, and there is something similar on every page. Moreover, one is conscious all through the book of a sort of detachment, an intellectual decency, which is much rarer nowadays than mere cleverness. And yet, after all, what is it about? It is a series of essays, mostly written about 1920, on various aspects of the Spanish character. The blurb on the dust-jacket claims that it will make clear to us “what lies behind the Spanish civil war.” It does not make it any clearer to me. Indeed, I cannot find any general conclusion in the book whatever.

What is Señor Ortega y Gasset's explanation of his country’s troubles? The Spanish soul, tradition, Roman history, the blood of the degenerate Visigoths, the influence of geography on man and (as above) of man on geography, the lack of intellectually eminent Spaniards—and so forth. I am always a little suspicious of writers who explain everything in terms of blood, religion, the solar plexus, national souls and what not, because it is obvious that they are avoiding something. The thing that they are avoiding is the dreary Marxian ‘economic’ interpretation of history. Marx is a difficult author to read, but a crude version of his doctrine is believed in by millions and is in the consciousness of all of us. Socialists of every school can churn it out like a barrel-organ. It is so simple! If you hold such-and-such opinions it is because you have such-and-such an amount of money in your pocket. It is also blatantly untrue in detail, and many writers of distinction have wasted time in attacking it. Señor Ortega y Gasset has a page or two on Marx and makes at least one criticism that starts an interesting train of thought.

But if the ‘economic’ theory of history is merely untrue, as the flat-earth theory is untrue, why do they bother to attack it? Because it is not altogether untrue, in fact, is quite true enough to make every thinking person uncomfortable. Hence the temptation to set up rival theories which often involve ignoring obvious facts. The central trouble in Spain is, and must have been for decades past, plain enough: the frightful contrast of wealth and poverty. The blurb on the dust-jacket of Invertebrate Spain declares that the Spanish war is “not a class struggle,” when it is perfectly obvious that it is very largely that. With a starving peasantry, absentee landlords owning estates the size of English counties, a rising discontented bourgeoisie and a labour movement that had been driven underground by persecution, you had material for all the civil wars you wanted. But that sounds too much like the records on the Socialist gramophone! Don’t let’s talk about the Andalusian peasants starving on two pesetas a day and the children with sore heads begging round the food-shops. If there is something wrong with Spain, let’s blame it on the Visigoths.

The result—I should really say the method—of such an evasion is excess of intellectuality. The over-subtle mind raises too many side-issues. Thought becomes fluid, runs in all directions, forms memorable lakes and puddles, but gets nowhere. I can recommend this book to anybody, just as a book to read. It is undoubtedly the product of a distinguished mind. But it is no use hoping that it will explain the Spanish civil war. You would get a better explanation from the dullest doctrinaire Socialist, Communist, Anarchist, Fascist or Catholic.

Clarity, ink clouds, and ink blots in HCI

In our field, we write mostly to record, inform, and reveal. At times we write to conceal doubt or exaggerate promise. The latter are often acts of self-deception, although, spurred by alcohol and perhaps mild remorse, some research managers at places I've worked have confessed to routinely deceiving their highly placed managers and funders. They justified it by sincerely imagining that the resulting research investments would eventually pay off. (None ever did.)

Practitioners who attend research conferences seek clarity and eschew ambiguity. They may not get what they want—unambiguous finality is rare in research. But practitioner tolerance for inkblots is low. Over time, as our conferences convinced practitioners to emigrate, openings were created for immigrants from inkblot dominions, such as Critical Theory.

For example, echoing Orwell’s example of beardless women, the rejection letter for a submission on the practical topic of creating gender-neutral products stated, “I struggle to know what a woman is, except by reference to the complex of ideological constructions forced on each gender by a society mired in discrimination.” Impressive, but arguably an excess of intellectuality. It does not demean those struggling to know what a woman is to say that when designing products to appeal to women, “a better explanation” could be to define women as those who circle F without hesitation when presented with an M/F choice. A design that appeals both to F selectors and M selectors might or might not also appeal to those who would prefer a third option or to circle nothing, but in the meantime let’s get on with it.




Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Designing the cognitive future, part V: Reasoning and problem solving


Authors: Juan Pablo Hourcade
Posted: Tue, October 14, 2014 - 2:38:54

I have been writing about how computers are affecting and are likely to affect cognitive processes. In previous posts I have touched on perception, memory, attention, and learning. In this post, I discuss reasoning and problem solving.

Computers are quite adept at deductive reasoning. If all facts are known (at least those we would use to make a decision), computers can easily use logic to make deductions without mistakes. Because of this, computers are likely to become more and more involved in helping us make decisions and guiding our lives through deductive reasoning. We can see this already happening, for example, with services that tell us when to leave home for a flight based on our current location and traffic on the way to the airport.
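
As a toy illustration, here is a minimal sketch in Python of that kind of deduction, working backward from the flight time. The function, the 90-minute check-in buffer, and the travel time are made-up assumptions for illustration, not the logic of any actual service.

    from datetime import datetime, timedelta

    # Hypothetical sketch: deduce when to leave home for a flight.
    # The check-in buffer and the travel time are assumed inputs.
    def suggest_departure(flight_time: datetime,
                          travel_minutes: int,
                          checkin_buffer_minutes: int = 90) -> datetime:
        """Subtract the airport buffer and the current drive time from the flight time."""
        return flight_time - timedelta(minutes=checkin_buffer_minutes + travel_minutes)

    # A 6:00 p.m. flight with 45 minutes of traffic on the way:
    leave_by = suggest_departure(datetime(2014, 10, 14, 18, 0), travel_minutes=45)
    print(leave_by)  # 2014-10-14 15:45:00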

These trends could go further, with many other activities that involve often-problematic human decision-making moving to the realm of computers: driving cars, selecting what to eat, scheduling our days, and so forth. In all these cases, computers, compared with people, would be able to process larger amounts of information in real time and provide optimal solutions based on our goals.

So what will be left for us to do? One important reasoning skill will involve understanding the rules these systems use to determine optimal outcomes, and how those rules relate to personal goals. People who are better able to do this, or who go further and determine their own sets of rules, are likely to derive greater benefits from these systems. One of the bigger challenges in this space comes from systems that could be thrown off balance by selfish users (e.g., traffic routing). People who are able to game these systems could gain unfair advantages. There are design choices to be made, including whether to make rules and goals transparent or to hide them due to their complexity.

What is clear is that the ability to make the most out of the large amounts of available data relevant to our decision-making will become a critical reasoning skill. Negative consequences could occur if system recommendations are not transparent and rely on user trust, which could facilitate large-scale manipulation of decision-making.

The other role left for people is reasoning when information is incomplete. In these situations, we usually make decisions using heuristics developed from past outcomes. In other words, we notice patterns in our experiences and develop basic "rules of thumb." Our previous experiences therefore go a long way in determining whether we develop useful and accurate heuristics. The closer these experiences are to a representative sample of all applicable events we could have experienced, the better our heuristics will be. On the other hand, being exposed to a biased sample of experiences is likely to lead to poorer decision-making and problem solving.

Computers could help or hurt in providing us with experiences from which we can derive useful rules of thumb. One area of concern is that information is increasingly delivered to us based on our personal likes and dislikes. If we are less likely to come across any information that challenges our biases, these are likely to become cemented, even if they are incorrect. Indeed, nowadays it is easier than ever for people with extreme views to find plenty of support and confirmation for their views online, something that would have been difficult if they were only interacting with family, friends, and coworkers. Not only that, but even people with moderate but somewhat biased views could be led into more extreme views by not seeing any information challenging those small biases, and instead seeing only information that confirms them.

There is a better way, and it involves delivering information that may sometimes make people uncomfortable by challenging their biases. This may not be the shortest path toward creating profitable information or media applications, but people using such services could reap significant long-term benefits from having access to a wider variety of information and better understanding other people’s biases.

How would you design the cognitive future for reasoning and problem solving? Do you think we should let people look “under the hood” of systems that help us make decisions? Would you prefer to experience the comfort of only seeing information that confirms your biases, or would you rather access a wider range of information, even if it sometimes makes you feel uncomfortable?




Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.


Conceptual precision in interaction design research


Authors: Mikael Wiberg
Posted: Wed, October 01, 2014 - 10:13:28

Interaction design research is to a large extent design driven: we do research through design. A design can be seen as a particular instantiation of a design idea. Accordingly, interaction design research is also about the development of ideas. However, there is no one-to-one relation between design ideas and design instantiations; a design idea can be expressed through a wide variety of designs. That is partly why we work with prototypes in our field and why we think iterative design is a good approach. We explore an idea through a number of variations in how it is manifested in particular designs.

Of course, precision is always a key ingredient in research. We appreciate precision when it comes to definitions, measurements, and descriptions of the research methods applied, data sets, data-collection techniques, and formats. I also think that, as a community, we all agree that research contributions and conclusions should be stated with precision. Precision enables us to position a particular research contribution in relation to an existing body of research.

However, do we work with similar precision when it comes to articulating our design ideas? And do we work with such precision when we articulate how our design ideas are manifested in the designs we produce as important outcomes from our research projects? I certainly hope so! At least I have noticed a growing concern for this matter in our field over the past few years. In a recent paper Erik Stolterman and I discuss the relation between conceptual development and design [1], and in a related paper Kristina Höök and Jonas Löwgren present the notion of “strong concepts” in relation to interaction design research [2].

So, if we can agree that design and conceptual development go hand in hand in interaction design research, and if we can agree that precision is key for this practice as well, then maybe we should also ask how we can advance our field through design. That is to ask the fundamental question: Can we make research contributions, i.e., progress, through design? In a forthcoming NordiCHI paper we elaborate on this issue (see [3]). In short, we suggest that we should focus on formulating classes of interactive systems, and that we should develop ways of analyzing designs both in relation to the elements that constitute a particular design and in relation to how a particular design composition can be said to belong to, extend, challenge, or combine such classes. We discuss this in relation to the importance of a history of designs and a history of design ideas for interaction design research, under the label of "generic design thinking." Again, if we as a research community take on such a classification project, precision will be key, both for reviewing the past and for moving forward!

Endnotes

1. Stolterman, E. and Wiberg, M. (2010). Concept-driven interaction design research. Human-Computer Interaction 25, 2, 95-118.

2. Höök, K. and Löwgren, J. (2012). Strong concepts: Intermediate-level knowledge in interaction design research. ACM Transactions on Computer-Human Interaction 19, 3, Article 23.

3. Wiberg, M. and Stolterman, E. (2014). What makes a prototype novel? A knowledge contribution concern for interaction design research. In Proc. of NordiCHI 2014. http://dx.doi.org/10.1145/2639189.2639487




Mikael Wiberg

Mikael Wiberg is Professor of Informatics in the Department of Informatics at Umeå University, Sweden.


Lasting impact


Authors: Jonathan Grudin
Posted: Wed, September 24, 2014 - 9:35:29

An enduring contribution can take different forms. It can be a brick, soon covered by others yet a lasting part of a field’s foundation. Alternatively, it can be a feature that remains a visible inspiration.

Eminent scientists and engineers have offered insights into making an impact.

Inspiration

“If you want to predict the future, invent it,” said Alan Kay. In the 1960s, Kay conceived of the Dynabook. His work was widely read and had a lasting impact: 50 years later, tablets are realizing his vision. Vannevar Bush’s 1945 essay “As We May Think” has been aptly termed science fiction—an outline of an impossible optomechanical system called the Memex—but it inspired computer scientists who effectively realized his vision half a century later in a different medium: today’s dynamic Web.

Kay and his colleagues took a significant step toward the Dynabook by developing the Xerox Alto. Bush attempted to build a prototype Memex. However, generations of semiconductor engineers and computer scientists were needed to reach their goals—introducing a second factor.

Perspiration

A century ago, the celebrated inventor Thomas Edison captured the balance, proclaiming that genius was “1% inspiration, 99% perspiration.” He recognized the effort involved in inventing the future, but by attributing output to input this way he overlooked the fact that not all clever, industrious people thrive. Circumstances of birth affect a person’s odds of contributing; even among those of us raised in favorable settings, many creative, hard-working people have a tough time. More is needed.

Divination

Decades before Edison, Louis Pasteur observed that “chance favors the prepared mind.” This elegantly acknowledges the significance of perspiration and inspiration while recognizing the role of luck.

Lasting impact awards

On a less exalted level, several computer science conferences have initiated awards for previously published papers that remain influential. My 1988 paper on challenges in developing technology to support groups received the first CSCW Lasting Impact Award at CSCW 2014 in Baltimore. The remainder of this essay examines how that paper came to be written and why it succeeded. Avoiding false modesty, I estimate that the impact was roughly 1% inspiration, 25% perspiration, and 75% a fortunate roll of the dice. (If you prefer 100% totals, reduce any of the estimates by 1%.)

Seeing how one career developed might help a young person, although 30 or 40 years ago I would have avoided considering the role of luck. Like all students in denial about the influence of factors over which they have no control, I would have been anxious thinking that intelligence and hard work would not guarantee success. But those who move past Kay and Edison to Pasteur can use help defining a path toward a prepared mind.

The origin of my paper

“Why CSCW applications fail: Problems in the design and evaluation of organizational interfaces” was an unusual paper. It didn’t build on prior literature. It included no system-building, no usability study, no formal experiment, and no quantitative data. The qualitative data was not coded. The paper didn’t build on theory. Why was it written? What did it say?

The awkward title reflected the novelty, in 1988, of software designed to support groups. Only at CSCW‘88 did I first encounter the term groupware, more accessible than CSCW applications or Tom Malone’s organizational interfaces. Individual productivity tools—spreadsheets and word processors—were commercially successful in 1988. Email was used by some students and computer scientists, but only a relatively small community of researchers and developers worked on group support applications.

I had worked as a computer programmer in the late 1970s, before grad school. In 1983 I left a cognitive psychology postdoc to resume building things. Minicomputer companies were thriving: Digital Equipment Corporation, Data General, Wang Labs (my employer), and others. “Office information systems” were much less expensive, and less powerful, than mainframes. They were designed to support groups and were delivered with spreadsheets and word processing. We envisioned new “killer apps” that would support the millions of small groups out there. We built several such apps and features. One after another, they failed in the marketplace. Why was group support so hard to get right?

Parenthetically, I also worked on enhancements to individual productivity tools. There I encountered another challenge: Existing software development practices were a terrible fit for producing interactive software to be used by non-engineers.

Writing the paper

In 1986 I quit and spent the summer reflecting on our experiences. I wrote a first draft. My colleague Carrie Ehrlich and I also wrote a CHI’87 paper about fitting usability into software development. A cognitive psychologist, Carrie worked in a small research group in the marketing division. She had a perspective I lacked: Her father had been a tech company executive. She explained organizations to me and changed my life. The 1988 paper wouldn’t have been written without her influence. It was chance that I met Carrie, and partly chance that I worked on a string of group support features and applications, but I was open to learning from them.

In the fall, I arrived in Austin, Texas to work for the consortium MCC. The first CSCW conference was being organized there. “What is CSCW?” I asked. “Computer Supported Cooperative Work—it was founded by Irene Greif,” someone said. I attended and knew my work belonged there. The field coalesced around Irene’s book Computer Supported Cooperative Work, a collection of seminal papers that were difficult to find in the pre-digital era. Irene’s lasting impact far exceeds that of any single paper. I may well have had little impact at all without the foundation she was putting into place.

My research at MCC built on the two papers drafted that summer: (i) understanding group support, and (ii) understanding development practices for building interactive software. MCC, like Wang and all the minicomputer companies, is now gone. Wedded to AI platforms that also disappeared, MCC disappointed the consortium owners, but it was a great place for young researchers. I began a productive partnership there with Steve Poltrock, another cognitive psychologist, which continues to this day. We were informally trained in ethnographic methods by Ed Hutchins and in social science by Karen Holtzblatt, then a Digital employee starting to develop Contextual Design. MCC gave me the resources to refine the paper and attend the conference.

The theme: Challenges in design and development

Why weren’t automated meeting scheduling features used? Why weren’t speech and natural language features adopted? Why didn’t distributed expertise location and project management applications thrive? The paper used examples to illustrate three factors contributing to our disappointments:

  1. Political economy of effort. Consider a project management application that requires individual contributors to update their status. The manager is the direct beneficiary. Everyone else must do more work. If individual contributors who see no benefit do not participate, it fails. This pattern appeared repeatedly: An application or feature required more work of people who perceived no benefit. Ironically, most effort often went into the interface for the beneficiary.

    Was this well known? Friends and colleagues knew of nothing published. I found nothing relevant in the Boston Public Library. Later, I concluded that it was a relatively new phenomenon, tied to the declining cost of computing. Mainframe computers were so expensive that use was generally an enterprise mandate. At the group level, mandated use of productivity applications was uncommon.

  2. Managers decide what to build, buy, and even what to research. Managers with good intuition for individual productivity tools often made poor decisions about group support software. For example, audio annotation as a word processor feature appealed to managers who used Dictaphones and couldn’t type. But audio is harder to browse, understand, and reuse. We built it, no one came.

  3. You can bring people into a lab, have them use a new word processor for an hour, and learn something. You can’t bring six people into a lab and ask them to simulate office work for an hour. This may seem obvious, but most HCI people back then, including me, had been trained to do formal controlled lab experiments. We were scientists!

The paper used features and applications on which I had worked to illustrate these points.

Listening to friends

The first draft emphasized the role of managers. I still consider that to be the most pernicious factor, having observed billions of dollars poured into resource black holes over decades. But my friend Don Gentner advised me to emphasize the disparity between those who do the work and those who benefit. Don was right. Academia isn’t strongly hierarchical and doesn’t resonate with management issues. Academics were not my intended audience and few attended CSCW’88, but those who did were influential. Criticizing managers is rarely a winning strategy, anyway.

Limited expectations

Prior to the web and digital libraries, only people who attended a conference had access to proceedings. I wanted to get word out to the small community of groupware developers at Wang Labs, Digital Equipment Corporation, IBM, and elsewhere, so they could avoid beating their heads against the walls we had. Most CSCW 1988 attendees were from industry. I assumed they would tell their colleagues, we would absorb the points, and in a few months everyone would have moved on.

It didn’t matter if I had missed relevant published literature: The community needed the information! Conferences weren’t archival. The point was to avoid more failed applications, not to discover something new under the sun.

The impact

At the CSCW 2014 ceremony for my paper, Tom Finholt and Steve Poltrock analyzed the citation pattern over a quarter century, showing a steady growth and spread of the paper’s influence. I had been wrong—the three points had not been quickly absorbed. They remain applicable. A manager’s desire for a project dashboard can motivate an internal enterprise wiki, but individual contributors might use a wiki for their own purposes, not to update status. Managers still funnel billions of dollars into black holes. Myriad lab studies are published in which groups of three or four students are asked to pretend to be a workgroup.

All my subsequent jobs were due to that work. Danes who heard me present it invited me to spend two productive years in Aarhus. A social informatics group at UC Irvine recruited me, after which a Microsoft team building group support prototypes hired me. Visiting professorships and consulting jobs stemmed from that paper and my consequent reputation as a CSCW researcher.

Why the impact?

The analysis resonated with people’s experience; it seemed obvious once you heard it. But other factors were critical to its strong reception. The paper surfaced at precisely the right moment. In 1984 my colleagues and I were on the bleeding edge, but by 1988 client-server architectures and networking were spreading, and more developers were working on supporting group activities. The number of developers focused on group support had risen from handfuls in 1984 to hundreds in 1988, with thousands on the way.

I was fortunate that the CSCW’88 program chair was Lucy Suchman. Her interest in introducing more qualitative and participatory work undoubtedly helped my paper get in despite its lack of literature citation, system-building, usability study, formal experiment, quantitative data, and theory. In subsequent years, such papers were not accepted.

The most significant break was that the paper was scheduled early in a single-track conference that attracted a large, curious crowd. Several speakers referred back to it and Don Norman called it out in his closing keynote.

Finally, ACM was at that time starting to make proceedings available after conferences, first by mail order and then in its digital library.

Extending the work involved some perspiration. A journal reprinted the paper with minor changes. A new version was solicited for a popular book. Drawing on contributions by Lucy Suchman, Lynne Markus, and others, I expanded the factors from three to eight for a Communications of the ACM article that has been cited more than the original paper.

No false modesty

I was happy with the paper. I had identified a significant problem and worked to understand it. But as noted above, the positive reception followed a series of lucky breaks. Acknowledging this is not being modest. Perhaps I can convince you that I’m immodest: Other papers I’ve written have involved more work and in my opinion deeper insight, but had less impact.

Fifteen years later I revisited the issue that I felt was the most significant—managerial shortsightedness. The resulting paper, which seemed potentially just as useful, was rejected by CSCW and CHI. It found a home, but attracted little attention. Authors are often not the best judges of their own work, but when I consider the differences in the reception of my papers, factors beyond my control seem to weigh heavily.

Ways to contribute

The Moore’s law cornucopia provides diverse paths forward. Your most promising path might be to invent the future by building novel devices. In the early 1980s, we found that that does not always work out. To avoid inventing solutions that the rest of us don’t have problems for, prepare with careful observation and analysis, and hope the stars align.

A second option, which has been the central role of CHI and CSCW, is to improve tools and processes that are entering widespread use.

Third, you can tackle stubborn problems that aren’t fully understood. My 1988 paper was in this category. It is difficult to publish such work in traditional venues. With CSCW now a traditional venue, you might seek a new one.

In conclusion, Pasteur’s advice seems best—prepare, and chance may favor you. Preparation involves being open and observant, dividing attention between focal tasks, peripheral vision, and the rear-view mirror. For me, it was most important to develop friendships with people who had similar and complementary skills. It takes a village to produce a lasting impact.




Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Lessons from leading design at a startup


Authors: Uday Gajendar
Posted: Tue, September 16, 2014 - 9:01:28

In the past 100+ days I’ve led the re-invigoration of a fledgling design capability at a 2-year-old startup into a robust, cohesive, solidified practice with the vitality to carry it further, backed by unified executive support. This includes a revitalized visual design language, visionary concepts to provoke innovation, and strategic re-thinking of the UX fundamentals core to the product’s functionality. As my first startup leadership role, this has certainly proved to be a valuable “high learning, high growth” experience filled with lessons, small and large. I’d like to share a few here that made the most definitive mark on my mind, shaping my design leadership model going forward. Hopefully this will help other UX/HCI/design professionals in similar small-team leadership situations.

Say “no” to preserve your sanity (and the product focus): It’s critical early on to set boundaries demarcating exactly what you’ll work on and what you’ll defer to others. As a former boss liked to say, “You can’t boil the ocean.” Be selective about where you’ll have design impact with immediate or significant results that you can parlay into your next design activity. Saying “no” also builds respect, telling colleagues that you have a direction and a purpose to deliver against.

Remove “like” from the discussion: Everyone has opinions about design—that’s simply natural and expected. One way to mitigate the “I like” (or “I don’t like”) is to remove that word and instead focus on “what works” or “doesn’t work” for a particular persona/context/scenario. This forces the discussion to be about the functional nature of design elements, not subjective personal tastes.

Role model good behavior from day one: It’s only natural for a startup starved for design expertise to ask for icons and buttons the moment the designer has found the bathroom and gotten the computer working. After all, that’s how most people interpret design: as the tactical items. As a designer in that context, it’s your opportunity to demonstrate the right behavior for engaging and creating, such as asking user-oriented questions, drafting a design brief, sketching at whiteboards, and discussing with engineers.

Build relationships with Sales, your best friend: Yes, sales! You gotta sell to customers and your sales leader will point you to the right folks to learn about customers, markets, partners, etc. Understanding the sales channel, which is the primary vehicle for delivering a great customer experience, is vital to your success as a design leader. Build that rapport to actively insert yourself into the customer engagement process, which is a gold mine of learnings to convert into design decisions. 

Don’t get hung up on Agile or Lean: These are just process words and mechanisms for delivering code, each with its particular lexicon. They are not perfect, and there is no ideal way to fit UX into either one. Yet the overall dynamic is complementary in spirit and should enable smooth, efficient, learning-based outcomes that help you iterate toward product-market fit. The gritty, mundane details of JIRA, stories, estimations, sprint reviews, etc. are simply part of the process. Keep up your design vision and learn how to co-opt those mechanisms to get design ahead of the game, like filing “UX stories” based upon your vision.

Think in terms of “goals, risks, actions” when managing up: Maybe as part of a large corporate design team it was acceptable to vent and rant about issues with close peers. However, in a design leadership role on par with the CEO and the VPs of Engineering or Sales, you need to be focused and deliberate in your communications with them, to amplify respect and build trust and confidence in you. I learned it’s far more effective to discuss things in terms of your goals, the key risks affecting the accomplishment of those goals, and the actions desired (or asks to be made) to help achieve them. This is a far more professional and valuable way to drive the dialogue. Don’t just rant!

Finally, get comfortable with “good enough”: As Steve Jobs said, “Real artists ship,” meaning that you can’t sweat every perfection-oriented detail at the risk of delaying the release. At some point you must let it go, knowing that there will be subsequent iterations and releases for improving imperfections—improvement is ongoing. Fillers, stop-gaps, and temporary fixes are all expected. Do your best and accept (if not wholly embrace) the notion of “satisficing” (per Herb Simon): doing what is necessary and sufficient.

Design leadership is incredibly hard, perhaps made more difficult because of the glare of the spotlight now that UX is “hot” and finally recognized by execs and boards as a driver of company success. While you may be a “team of one,” the kinds of learnings itemized here will help enable a productive, design-led path forward for the team.




Uday Gajendar

Uday Gajendar is Director of User Experience at CloudPhysics, focused on bringing beauty and soul to Big Data for virtualized datacenters.


Service blueprints: Laying the foundation


Authors: Lauren Chapman Ruiz
Posted: Wed, September 10, 2014 - 12:01:14

This article was co-written by Izac Ross, Lauren Chapman Ruiz, and Shahrzad Samadzadeh.

Recently, we introduced you to the core concepts of service design, a powerful approach that examines complex interactions between people and their service experiences. With this post, we examine one of the primary tools of service design: the service blueprint.

Today’s products and services are delivered through systems of touchpoints that cross channels and blend both digital and human interactions. The service blueprint is a diagram that allows designers to look beyond the product and pixels to examine the systems that bring a customer’s experience to life.

What is a service blueprint?

You may be familiar with customer journey mapping, which is a tool that allows stakeholders to better understand customer interactions with their product or service over time. The service blueprint contains the customer journey as well as all of the interactions that make that journey possible.

Because of this, service blueprints can be used to better deliver a successful customer experience. Think of it this way: You can look at a building, and you can read a description, but to build the building you need more than an image or description. You need the instructions—the blueprint.

Service blueprints expose and involve many of the core concepts we talked about in Service Design 101. To use that vocabulary: Service blueprints clarify the interactions between service users, digital touchpoints, and service employees, including the frontstage activities that impact the customer directly and the backstage activities that the customer does not see.

When should you use a service blueprint?

Service blueprints are useful when:

  • You want to improve your service offering. Knowing how your service gets produced is essential for addressing breakdowns or pain points.
  • You want to design a new service that mixes digital and non-digital touchpoints. Service blueprints shine when examining and implementing the delivery of complex services.
  • You have lost track of how the service gets produced. Services, like products, have manufacturing lines. The longer the service has been around or the larger the organization, the more siloed and opaque the manufacturing process can become.
  • There are many players in the service. Even the most simple-sounding service often involves IT systems, people, props, and partners all working to deliver the customer experience. A blueprint can help coordinate this complexity.
  • You are designing a service or product that is involved in producing other services. Products and services often interact with other services, particularly if they are B2B. Understanding your customer’s interactions with partners throughout the service can support a more seamless—and better—customer experience.
  • You want to formalize a high-touch service into a lower-touch form. New technologies can create opportunities for delivering higher-touch (and thus more expensive) services to broader audiences in new, more cost-effective forms. For example, think about the expanding world of online education. A blueprint uncovers the essential considerations for implementing a new, lower-touch service.

Keep in mind that there are times when a service blueprint is not the right tool! For example, if the goal is to design an all-digital service, journey mapping or process flows might be more appropriate.

Anatomy of a service blueprint

You understand the concept of service blueprints, and you know when to use them. Now, how do you make one? The starting elements are simple: dividing lines and swimlanes of information.

There are three essential requirements for a formal service blueprint:

  • The line of interaction: This is the point at which customers and the service interact.
  • The line of visibility: Beyond this line, the customer can no longer see into the service.
  • The line of internal interaction: This is where the business itself stops, and partners step in.

In between these lines are five main swimlanes that capture the building blocks of the service:

  • Physical evidence: These are the props and places that are encountered along the customer’s service journey. It’s a common misconception that this lane is reserved for only customer-facing physical evidence, but any forms, products, signage, or physical locations used by or seen by the customer or internal employees can and should be represented here.
  • Customer actions: These are the things the customer has to do to access the service. Without the customer’s actions, there is no service at all!
  • Frontstage: All of the activities, people, and physical evidence that the customer can see while going through the service journey.
  • Backstage: This is all of the things required to produce the service that the customer does not see.
  • Support processes: Documented below the line of internal interaction, these are the actions that support the service.

Additional lanes

For clarity, here are some additional swimlanes we recommend:

  • Time: Services are delivered over time, and a step in the blueprint may take 5 seconds or 5 minutes. Adding time along the top provides a better understanding of the service.
  • Quality measures: These are the experience factors that measure your success or value, the critical moments when the service succeeds or fails in the mind of the service user. For example, what’s the wait time?
  • Emotional journey: Depending on the service, it can be essential to understand the service user’s emotional state. For example, fear in an emergency room is an important consideration.
  • Splitting up the front stage: With multiple touchpoints that are working together simultaneously to create the service experience, splitting each touchpoint into a separate lane (for example, digital and device interactions vs. service employee interactions) can be very helpful.
  • Splitting up the backstage: The backstage can comprise people, systems, and even equipment. For detailed or low-altitude blueprints, splitting out lanes for employees, apps, data, and infrastructure can clarify the various domains of the service.
  • Phases of the service experience cycle: Services unfold over time, so it can add clarity to call out the phases of the experience cycle. For example: how customers are enticed to use the service, enter or onboard to the service, experience the service, exit, and then potentially re-enter the service and are thus retained as customers.
  • Photos/sketches of major interactions: Adding this lane can help viewers quickly grasp how the service unfolds over time, in a comic-book-like view.

You can build as much complexity as needed into your blueprint, depending on the complexity of the service.
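For readers who think in code, here is a minimal sketch (ours, not the authors’) of the anatomy above as structured data: the five lanes, a single persona’s path, and a few of the optional annotation lanes. The class names, fields, and the pharmacy example are hypothetical assumptions; a real blueprint normally lives as a diagram, but keeping it as data can make it easier to version and query.

# A minimal, hypothetical sketch of a blueprint's structure as data.
# Lane names follow the anatomy above; everything else is illustrative.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class Lane(Enum):
    PHYSICAL_EVIDENCE = "physical evidence"   # props and places along the journey
    CUSTOMER_ACTIONS = "customer actions"
    FRONTSTAGE = "frontstage"                 # above the line of visibility
    BACKSTAGE = "backstage"                   # below the line of visibility
    SUPPORT_PROCESSES = "support processes"   # below the line of internal interaction


@dataclass
class Step:
    """One moment in a single persona's journey, placed in a lane."""
    lane: Lane
    description: str
    duration_seconds: Optional[int] = None    # optional "time" lane
    quality_measure: Optional[str] = None     # e.g. "wait time under 10 minutes"
    pain_point: bool = False                  # annotation: should be fixed or improved
    emotion: Optional[str] = None             # optional emotional-journey lane


@dataclass
class Blueprint:
    persona: str                              # one persona, one path
    scenario: str
    steps: List[Step] = field(default_factory=list)

    def frontstage_view(self) -> List[Step]:
        """Everything the customer can see (above the line of visibility)."""
        visible = {Lane.PHYSICAL_EVIDENCE, Lane.CUSTOMER_ACTIONS, Lane.FRONTSTAGE}
        return [s for s in self.steps if s.lane in visible]


# Example: a fragment of a hypothetical pharmacy refill journey.
refill = Blueprint(
    persona="Busy caregiver",
    scenario="Refill a prescription",
    steps=[
        Step(Lane.CUSTOMER_ACTIONS, "Requests refill via phone", duration_seconds=120),
        Step(Lane.BACKSTAGE, "Pharmacist verifies insurance", pain_point=True),
        Step(Lane.SUPPORT_PROCESSES, "Claims system checks coverage"),
        Step(Lane.FRONTSTAGE, "Pharmacist agrees on a pick-up time with the customer"),
    ],
)
print([s.description for s in refill.frontstage_view()])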

The 5 Ps and how they relate to the blueprint

In Service Design 101, we talked about the 5 Ps: People, Processes, Props, Partners, and Place. Looking at the service blueprint structure, you can see that all of the 5 Ps are captured. Across the top are Props and Place, and down through the rest of the blueprint are People, Processes, and then Partners in the support processes. The customer actions lane captures the experience and the actions themselves.

Service blueprints can only be useful if they can be interpreted and implemented. We recommend the following common notation standards for making sure a blueprint is clear, focused, and communicates successfully.

Variations

Like a customer journey map, a service blueprint is focused on one persona’s experience through a single path. Think of it as a kind of scenario. Your blueprint will quickly become too complex and too difficult to read if you put multiple journeys on a single one, or try to capture service use with many variations. Blueprints show one use case or path over time, so additional blueprints must be used for variations of the service journey.

Common notations

Arrows

When an arrow crosses a swimlane, value is being exchanged through the touchpoints of the service.

Arrows have a very important meaning beyond the direction of the value exchange. They indicate who or what system is in control at any given moment:

  • A single arrow means that the source of the arrow is in control in the value exchange.
  • A double arrow indicates that an agreement must be reached between the two entities to move the process forward. For example, agreeing on the pick-up time with a pharmacist, or negotiating a price in a non-fixed cost structure.
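If a team records its blueprint digitally, the arrow notation can be captured as data too. The fragment below is a hypothetical sketch, not part of the article: it simply encodes the single-arrow/double-arrow distinction described above, with made-up class and field names.

# A small, hypothetical encoding of the arrow notation described above.
from dataclasses import dataclass
from enum import Enum


class Control(Enum):
    SOURCE = "single arrow"   # the source of the arrow controls the exchange
    MUTUAL = "double arrow"   # both parties must agree to move the process forward


@dataclass
class ValueExchange:
    source: str               # lane or actor the arrow starts from
    target: str               # lane or actor it points to
    value: str                # what is exchanged across the swimlane
    control: Control


# Example: agreeing on a pick-up time with a pharmacist is a double arrow.
pickup = ValueExchange(
    source="Customer",
    target="Pharmacist (frontstage)",
    value="Pick-up time",
    control=Control.MUTUAL,
)
print(pickup.control.value)   # "double arrow"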

Annotations

As you do the research and field observations necessary to build a blueprint, remember that blueprints can be a powerful way to communicate what’s working and what isn’t working for both service users and employees in the existing process. These notable moments can be captured in a number of ways, but we recommend icons with a legend to keep things legible and clear.

Some notable moments to consider capturing:

  • Pain points which should be fixed or improved
  • Opportunities to measure the quality of the service
  • Opportunities for cost savings or increased profits
  • Moments that are loved by the customer and should not be lost

What’s next?

As you begin to incorporate blueprinting into your process, remember that blueprints have altitude! They can capture incredible amounts of detail or summarize high-level understandings. When evaluating or implementing an existing set of service interactions, a low-altitude map that details the processes across touchpoints and systems is an invaluable management tool. To quickly understand the customer experience in order to propose design changes or develop a shared understanding, higher-altitude diagrams can be most helpful.

Service blueprints break down the individual steps that are happening to produce the customer experience, and are an essential tool for the practice of service design.

In upcoming weeks we will talk about where this tool fits into the design process and about gathering the information needed to create service blueprints and journey maps.



Posted in: on Wed, September 10, 2014 - 12:01:14

Lauren Chapman Ruiz

Lauren Chapman Ruiz is an Interaction Designer at Cooper in San Francisco, CA, and is an adjunct faculty member at CCA.


@Raffaele (2014 09 21)

What do you think of my storygraph? http://www.rainwiz.com/2012/10/13/introducing-the-storygraph/


Big, hairy, and wicked


Authors: David Fore
Posted: Fri, September 05, 2014 - 12:00:56

Interaction designers sure can take things personally. When our behavior is driven by ego, this habit can be annoying. All that huffing and puffing during design crits! But when it springs from empathy for those our designs are meant to serve, then this signal attitude can yield dividends for all.

Nowhere are these sensitivities more critical to success—or more knotty—than when we confront systems whose complexity is alternately big, hairy, and wicked, and where making a positive impact with design can feel like pushing a rope.

I learned this again (apparently you can’t learn it too often) while leading design at Collaborative Chronic Care Network (C3N). Funded by a landmark grant from National Institutes of Health (NIH), C3N has spent nearly five years cultivating a novel learning health system, one that joins health-seekers and their families, their healthcare providers, and researchers in common cause around improving care. 

The past few years have been filled with front-page stories about the difficulties of changing healthcare for the better… or even to determine if what you’ve done is for the better. To hedge such risks from the start, C3N adopted interaction design methods to enhance understanding of and empathy for all system participants: pediatric patients, their caregivers, and researchers alike. Not a panacea, to be sure, but still a step in the right direction.

This initiative has led me to a deep appreciation for the value of three publications—two new ones and a neglected classic—that offer methods, insight, and counter-intuitive wisdom for those whose job it is to design for systems.

Few people know this terrain better than Peter Jones. He wrote Design for Care on the Internet, sharing his steady progress on concepts and chapters with a broad swath of the interaction design community. The result is a densely packed yet accessible book with a strong narrative backbone that demonstrates a wide range of ways multidisciplinary teams and organizations have designed for healthcare environments.

Jones uses theory, story, and principles to demonstrate how designers and their collaborators are trying to change how healthcare is delivered and experienced. Practicing and teaching out of Toronto, Jones has great familiarity with the Canadian healthcare system as well as the US model, which means that readers are permitted to see how similar cultures get very different outcomes. 

He also makes a convincing argument that designers have an opportunity—an obligation even—to listen for and respond to a “call to care.” Otherwise, he observes, our collaborators—the ones with skin in the game—will find it difficult to take our contributions seriously. After all, a cancer patient seeking health or an oncology nurse running a clinical quality-improvement program need to know you care enough to stick around and see things through.

The depths of this book’s research, the clarity of its prose and schematics, and the methods it offers can help designers take advantage of an historic opportunity to improve the healthcare sector. I expect Design for Care to be an evergreen title, used by students, teachers, and practitioners for years to come.

Every designer I’ve worked with, without exception, loves doing field research. It might begin as a desire to get out from behind their pixel machines. But they always come back to the studio ready to blow away old ways of thinking. 

The fruits of qualitative research not only improve designs, but they also equip designers with invaluable insight and information when negotiating with product managers, developers, and executives. 

And what do they find out there among the masses? 

People are struggling to realize their goals amidst the whirling blades of systems not designed for their benefit. That’s why it’s so valuable (and challenging) to spend time with folks at factories, agricultural facilities, hospitals, or wherever your designs will be encountered.

But what if you can’t get out into the field? How do you make sure that your nicely designed product is going to be useful? In other words, how do you stop yourself from the perfect execution of the wrong thing? A new whitepaper from the National Alliance for Caregiving has some answers. 

It is my burden to read a lot of healthcare papers. What helps the work of Richard Adler and Rajiv Mehta rise above the rest is how carefully they inflect their recommendations toward the sensibilities and needs of product designers and software developers. The authors are Silicon Valley veterans who place the problem into context, then make sure to draw a bright line from research to discovery to requirements. 

Equipped with this report, designers are far more likely to create designs that serve the true needs of the people who have much to gain—and lose—during difficult life passages. It is filled with compelling qualitative research results and schematics and tables that shed light on the situations and needs of family caregivers. 

Why is this important? Because these are family and community members—people like you and me—who constitute the unacknowledged backbone of the healthcare system. They are also, perhaps, the most overburdened, deserving, and underserved population in the field of design today.

So now we’re ready to redesign healthcare for good! 

But wait a minute…

That’s not just difficult, it’s probably impossible. After all, we humans are exceptionally good at over-reaching, but less good at acknowledging the limits of what we can foresee. We want to change everything with a single swift stroke, but typically that impulse leaves behind little but blood on the floor. 

John Gall, a physician and medical professor, observed that systems design is, by and large, a fool’s errand. But even fools need errands, which is why he wrote the now-legendary SystemANTICS. Gall’s perspective, stories, and axioms make this book a must-read, while his breezy writing style makes the book feel like beach reading. 

But make no mistake: Gall has lived and worked in the trenches, and he knows of what he speaks. His work has had such a profound influence on systems thinking, in fact, that we now have Gall’s Law:  A complex system that works is invariably found to have evolved from a simple system that worked.

Others are similarly concise and useful, such as “New systems mean new problems” and “A system design invariably grows to contain the known universe.”

My favorite is this: People will do what they damn well please.

That last one comes from biology, of course, but it’s true for human systems as well. 

Gall implores you to acknowledge that people will use your systems and products in the strangest ways… and he will challenge you to do everything you can to anticipate some of those uses, and so build in resilience. 

As design radically alters the rest of the economy with compelling products and services, healthcare remains a holdout. The current system in whose grip we find ourselves appears designed to preserve privileges and perverse incentives that propagate waste at the expense of better outcomes for all. Protecting the prerogatives of incumbent systems and business models too often stifles innovation. But if we put our minds and hearts to it, and we’re aware of the pitfalls, we are more likely to push the rock up the hill at least another few inches.



Posted in: on Fri, September 05, 2014 - 12:00:56

David Fore

David Fore cut his teeth at Cooper, where he led the interaction design practice for many years. Then he went on to run Lybba, a healthcare nonprofit. Now he leads Catabolic, a product strategy and design consultancy. His aim with this blog is to share tools and ideas designers can use to make a difference in the world.


@Jonathan T Grudin (2014 09 08)

Very nice essay, David. Thanks.

@Lauren Ruiz (2014 09 16)

So true! I often find we designers want to talk all about what needs to change, and that design thinking can tackle these problems, but the how is ever-elusive. How do you affect big systems? These books are good guides in starting to answer that question.


Inside the empathy trap


Authors: Lauren Chapman Ruiz
Posted: Mon, August 11, 2014 - 11:43:17

It’s not uncommon to find yourself closely identifying with the users you are designing for, especially if you work in consumer products. You may even find yourself exposed to the exact experiences you’re tasked with designing, as I recently discovered when I went from researching hematologist-oncologists (HemOncs) and their clinics to receiving care from a HemOnc physician in his clinic. (Thankfully, all is now well with my health.)

This led to some revealing insights. Suddenly I was approaching my experience not just as a personal life event, but as both the designing observer, taking note of every detail, and the subject, or user, receiving the care. Instead of passively observing, I focused on engaging in a walk-a-mile exercise, literally walking in my own shoes, as my own user.

In the past, I’ve written about the importance of empathy in design, but this was an extreme case. I was able to identify my personal persona, watch to confirm the validity of workflows, and direct multitudes of questions to the understanding staff members. This firsthand experience can be extremely positive, but it reminded me of the dangers of bias and of designing solely for one person.

For instance, most of my caregivers enjoyed chatting, and one even stated how fun it was to have a patient who inquired about everything. That was my reminder that most patients are not like me: they have not studied this exact space, and so are less comfortable asking questions. I had to remember that my situation was unique.

When we find ourselves in these situations, we need to remember that what happens to us may enhance our knowledge, but it cannot become the only conceivable experience in our minds. Too often we walk dangerously close to designing for ourselves or for “the identifiable victim.” That can cause us to lose focus on improving outcomes for “the many” by single-mindedly pursuing an individual solution to a particularly negative outcome.

A New Yorker article called “Baby in the Well” builds a case against empathy, pointing out that empathy can cause us to misplace our efforts, missing the needs of “the many.” The article shows that the key to engaging empathy is the “identifiable victim effect,” the tendency for people to offer greater aid when a specific, identifiable person, or “victim,” is observed under hardship, as compared to a large and vaguely defined group with the same need. The article states:

As the economist Thomas Schelling, writing forty-five years ago, mordantly observed, “Let a six-year-old girl with brown hair need thousands of dollars for an operation that will prolong her life until Christmas, and the post office will be swamped with nickels and dimes to save her. But let it be reported that without a sales tax the hospital facilities of Massachusetts will deteriorate and cause a barely perceptible increase in preventable deaths—not many will drop a tear or reach for their checkbooks.”

When we design, we pursue a broader type of empathy. As a colleague once said to me, designers need to identify with the whole user base. User-centricity is about the ability to recognize that there are a number of personas, each with different goals, desires, challenges, behaviors, and needs. We design for these personas, recognizing that each has different goals to accomplish and different behaviors for achieving them.

So what are the key takeaways from my experience?

  1. Situations that help us build empathy for our users are invaluable because they give us deep knowledge, but we should recognize and feel empathy for the many. Looking at such situations through the lenses of your multiple personas can help you avoid this trap.
  2. Remember that the empathy we look to build in design is not just about feelings, but rather about understanding goals, the reasons for these goals, and how they are or aren’t currently accomplished.
  3. Have some empathy for yourself—it’s hard to untangle our personal feelings from the work we do on a day-to-day basis. Remember, we’re all human, and we will fall into the trap of focusing on ourselves from time to time. Recognizing this and looking out for the places where it affects our work is the best we can do.

What about you—have you found yourself in similar situations? How have you approached it? Are there tricks you use or pitfalls you work to avoid? Please use the Twitter hashtag #designresearch to share in the conversation.

Illustration by Cale LeRoy



Posted in: on Mon, August 11, 2014 - 11:43:17

Lauren Chapman Ruiz

Lauren Chapman Ruiz is an Interaction Designer at Cooper in San Francisco, CA, and is an adjunct faculty member at CCA.


Diversity and survival


Authors: Jonathan Grudin
Posted: Tue, August 05, 2014 - 11:40:21

In a “buddy movie,” two people confront a problem. One is often calm and analytic, the other impulsive and intuitive. Initially distrustful, they eventually bond and succeed by drawing on their different talents.

This captures the core elements of a case for diversity: When people with different approaches overcome a natural distrust, their combined skills can solve difficult problems. They must first learn to communicate and understand one another. In addition to the analyst and the live wire, buddy movies have explored ethnically, racially, and gender diverse pairs, intellectual differences (Rain Man), ethical opposites (Jody Foster’s upright agent Clarice Starling teamed with psychopathic Hannibal Lecter), and alliances between humans and animals or extra-terrestrials.

Diverse buddies are not limited to duos (Seven Samurai, Ocean’s Eleven). All initially confront trust issues. Lack of trust can block diversity benefits in real life, too: Robert Putnam demonstrated that social capital is greater when diversity is low and that cultures with high social capital often fare better. However, the United States has prospered with high immigration-fueled diversity, despite the tensions. When is diversity worth the price?

In the movies, combining different perspectives solves a problem that no individual could. The moral case for racial, gender, social, or species diversity is secondary, although these differences may correlate with diverse views and skills. At its core, diversity is about survival, whether the threat is economic failure or the Wicked Witch of the West. “Don’t put all your eggs in one basket,” financial planners advise. A caveat is that diversity is not always good. Noah had to bar Tyrannosaurus rex from the ark; it wouldn’t have worked out. For a given task, some of us will be as useful as the proverbial one-legged man at an ass-kicking party. Exhortations in support of diversity rarely address this.

Diversity in teams receives the most attention. My ultimate focus is on the complex task of managing diversity in large organizations—companies, research granting agencies, and academic fields. But a discussion of diversity and survival has a natural starting point.

Biological diversity

Diversity can enable a species to survive or thrive despite changes in environmental conditions. Galapagos finches differ in beak size. Big-beaked finches can crack tough seeds, small-beaked finches ferret out nutritious fare. Drought or a change in competition can rapidly shift the dominant beak size within a single species. The finches do well to produce some of each.

In two situations, biological diversity disappears: A species with a prolonged absence of environmental challenge adapts fully to its niche, and a species under prolonged high stress jettisons anything nonessential. If circumstances shift, the resulting lack of diversity can result in extinction. In human affairs too, complacency and paranoia are enemies of diversity.

Human diversity

Coming to grips with workplace diversity is difficult because all forms come into play. Our differences span a nature-nurture continuum. Race and gender lie at one end, acquired skills at the other. Shyness or a preference for spatial reasoning may be inherited; cultural perspectives are acquired. In his book The Difference, Scott Page focuses on the benefits of diverse cognitive and social skills in problem-solving. As I wrote this, a friend announced a startup for which a major investor was on board provided that other investors join: He wants diverse concurring reviews.

A team matter-of-factly recruiting core skills doesn’t think of them as diversity—but an unusual skill becomes a diversity play. Whether differences originate in nature or nurture isn’t important. Understanding their range and how they can clash or contribute is.

Drawing on thousands of measurements and interviews, William Herbert Sheldon’s 1954 Atlas of Men [1] yielded three physical types, each focused on an anatomical system accompanied by a psychological disposition: (i) thin, cerebral ectomorphs (central nervous system), (ii) stocky, energetic mesomorphs (musculature), and (iii) emotional, pleasure-seeking endomorphs (autonomic nervous system). Consider a team comprising a scarecrow, lion, and tin man keen to establish ownership of a brain, courage, and a heart, or Madagascar’s giraffe, lion, and hippopotamus.

Aldous Huxley used Sheldon’s trichotomy in novels. Organizational psychologists favor broader typologies. Companies know that good teams can be diverse and try to get a handle on it. A popular tool is the Myers-Briggs Type Indicator. This 2x2 typology was built on Carl Jung’s dimensions of introversion/extraversion and thinking/feeling. It is consistent with his view that a typology is simply a categorization that serves a purpose. Other typologies are also used. Early in my career, my fellow software developers and I were given a profiling survey that I quickly saw would indicate whether we were primarily motivated by (i) money, (ii) power, (iii) security, (iv) helping others, or (v) interesting tasks. If I filled it out straightforwardly I’d be (v) followed by (iv). Instead, I drew on my childhood Monopoly player persona, and in the years that followed received very good raises. I took the initiative to find interesting tasks, rather than relying on management for that.

Teams and organizations

Organizations generally differ from teams in several relevant respects. Organizations are larger, more complex, and last longer [2]. An organization, a team of teams, requires a greater range of skills than any one team. Organizations strive to minimize the time spent problem-solving, where diversity helps the most, and maximize the time spent in routine execution. Most teams continually solve problems; one change in personnel or external dependency can alter the dynamics and lead to a sudden or gradual shift in roles.

Should an organization group people with the same skill or form heterogeneous teams? Should a company developing a range of products have central UX, software development, and test teams, or should it form product teams in which each type is represented? Homogeneous teams are easier to manage—assessing diverse accomplishments is a challenge for a team manager. Diverse teams must spend time and effort learning to communicate and trust.

Homogeneous teams could be optimal for an organization that is performing like clockwork, heterogeneous teams better positioned to respond in periods of flux. A centralized UX group is fine for occasional consulting, an embedded UX professional better for dynamic readjustment.

Teams

Consider a working group with a single manager, such as a program committee for a small conference, an NSF review panel, or a team in a tech company. The scope of work is relatively clear. Diversity may be limited: quant enthusiasts may keep out qualitative approaches or vice versa; a developer-turned-manager may feel that a developer with some UX flair has sufficient UX expertise for the project.

Where does diversity help or hinder? Joseph McGrath identified four modes of team activity: taking on a new task, conflict resolution, problem-solving, and execution. Diversity often slows task initiation. It can create conflicts. Diversity is neutral in execution mode [3], where a routine job has been broken down into component tasks, minimizing complex interdependencies.

Scott Page describes contributions of diversity to the remaining mode of team activity, problem-solving. Diversity helps when clearly recognizable steps toward a solution can be taken by any team member, as in open source projects or when several writers work on dialogue for a drama. Although a team executing in unison like a rowing crew may not benefit from diversity, most teams encounter problems at times. Members are often collocated, enabling informal interaction, learning to communicate, and building trust. When resource limitations force hard decisions on a team, members understand the tradeoffs. Subjective considerations sometimes override objective decision-making on behalf of team cohesion: “We just rejected one of her borderline submissions, let’s accept this one.” “His grant proposal is poor but his lab is productive, let’s accept it at a reduced funding level.” In contrast, responses to organizational decisions are often less nuanced.

Teams have teething pains, conflicts, and managers who can’t evaluate workers who have different skill sets or personalities. But in general, diverse teams succeed. One that fails is replaced or its functions reassigned.

When time is limited, introducing diversity is challenging. I was on a review panel that brought together organizational scientists and mathematicians. The concept of basic research in organizational science mystified the mathematicians, to whom it seemed axiomatic that research on organizations was applied. Another review panel merged social scientists studying collaboration technology and distributed AI researchers; the latter insisted loudly that every grant dollar must go to them because DARPA had cut them off and they had mouths to feed.

Organizations

Organizations often endorse diversity, perhaps to promote trust in groups that span race, gender, and ethnicity. However, it is rarely a priority. Given that managing diversity is a challenge, why should a successful organization take on more than necessary? An organization’s long stretches of routine execution don’t benefit from a reservoir of diversity that enhances problem-solving. Complacency sets in. Perhaps diversity would yield better problem-solving, outweighing the management costs. Perhaps not.

The biology analogy suggests that a reservoir of skills could enable an organization to survive an unexpected threat. We don’t need big beaks now, but keep a few around lest a drought appear. A successful organization outlives a team, but few surpass the 70-year human lifespan. Perhaps identifying and managing the diverse skills that could address a wide-enough range of threats is impossible; managing the clearly relevant functions is difficult enough.

One approach is to push the social, cognitive, and motivational diversity that aids problem-solving down to individual teams to acquire and manage, using tools such as employee profile surveys. Unfortunately, it doesn’t suffice for organizational purposes to have skills resident in teams. Finding and recruiting a specific skill that exists somewhere in a large organization is a nightmare. I have participated in several expertise-location system-building efforts over 30 years, managing two myself. The systems were built but not used. Incentives to participate are typically insufficient. Similarly, cross-group task forces have been regarded as stop-gap efforts that complicate normal functioning.

Another approach is to assign teams to pursue diverse goals. For example, one group could pursue low-risk short-term activities as another engages in low-probability high-payoff efforts, drawing on different skills or capabilities.

Assessment at large scale

Organizations can’t be infinitely diverse. A company does a market segmentation and narrows its focus. NSF balances its investment across established and unproven research. A conference determines a scope. When unexpected changes present novel problems, will a reserve of accessible skills and flexibility exist? Management can draft aspirational mission statements, but in the end, responsiveness is determined by review processes, such as employee performance evaluations, grant funding, and conference and journal reviewing.

A pattern appears as we scale up: Assessing across a broad range not only requires us to compare apples and oranges (and many other fruits), it requires deciding which apples are better than which oranges. Sometimes all the apples or all the oranges are discarded.

Large organizations. How broadly should assessments and rewards be calibrated laterally across an organization? Giving units autonomy to allocate rewards can lead to the perception or fact that low-performing units are rewarded equivalently to stronger units. A concerted effort to calibrate broadly takes time and can lead to the dominant apples squeezing out other produce. For example, when rewards are calibrated across software engineering, test, and UX, the more numerous software engineers to whom “high-performing UX professional” is an oxymoron can control the outcome.

Organizations also sacrifice diversity to channel resources to combat exaggerated external threats. A hypothetical company with consumer and enterprise sales could respond to a perceived threat to its consumer business by eliminating enterprise development jobs and devoting all resources to consumer for a few years. When the pendulum swings back to enterprise, useful skills are gone.

Granting agencies. An agency that supports many programs has three primary goals: (i) identify and support good work within each program (a team activity), (ii) eliminate outdated programs, which facilitates (iii) initiating new programs, expanding diversity. Secondary diversity goals are geographic, education outreach, underrepresented groups, and industry collaboration.

Let’s generously assume that individual programs surmount team-level challenges and support diversity. The second goal, eliminating established programs that are not delivering, can be close to impossible. Once a program survives a provisional introductory period, it is tasked to promote the good work in its area—there is an implicit assumption that there is good work. Researchers in a sketchy program circle the wagons: They volunteer for review panels and for rotating management positions, submit many proposals (“proposal pressure” is a key success metric), rate one another’s proposals highly, and after internal debates emerge with consensus in review panels.

The inability to eliminate non-productive programs impedes the ability to add useful diversity. For example, NSF has a process for new initiatives that largely depends on Congress increasing its budget. The infrequent choices can be whimsical, such as the short-lived “Science of Design” and “CreativIT” efforts [4]. I participated in three high-level reviews in different agencies where everyone seemed to agree that science suffers from inadequate publication of negative results, yet we could find no path to this significant diversification given current incentive structures.

Large selective conferences. Selective conferences in mature fields form groups to review papers in each specialized area. Antagonism can emerge within a group over methods or toward novel but unpolished work, but the main scourge of diversity is competition for slots, which causes each group to gravitate toward mainstream work in its area. Work that bridges topic areas suffers. Complete novelty finds advocates nowhere. Researchers often wistfully report that their “boring paper” was accepted but their interesting paper was rejected.

Startups: Team and organization

A startup needs a range of skills. It may avoid diversity in personality traits: People motivated by security or power are poor bets. Goals are clear and rewards are shared; there is a loose division of labor with everyone pitching in to solve problems. The short planning horizon and dynamically changing environment resemble a team more than an established enterprise. With no shortage of problems, diversity in problem-solving skills is useful, but every hire is strategic and there is little time to develop trust and overcome communication barriers.

Professional disciplines

Competition for limited resources works against community expansion and diversity. Two remarkably successful interdisciplinary programs, Neuroscience and Cognitive Science, originated in copious sustained funding from the Sloan Foundation. In contrast, I’ve invested more fruitless time and energy than I like to think about trying to form umbrella efforts to converge fields that logically overlapped: CHI and Human Factors (1980s), CHI and COIS (1980s), CSCW and MIS (1980s), CSCW and COOCS/GROUP (1990s), CHI and Information Systems (2000s), and CHI and iConferences (2010s). An analysis of why these failed appears elsewhere.

Conclusion

This is not the short essay I expected, and it doesn’t cover the equity considerations that drive diversity discussions in university admissions and workplace hiring. What can we conclude? Noting that diversity requires up-front planning to possibly address unknown future contingencies, I will consider where the biology analogy does and doesn’t hold.

With moderate uncertainty, diversity is a good survival strategy; with major resource competition, diversity yields to a focus on the essentials. So avoid exaggerating threats. Next, given that choices are necessary, what dimensions of diversity should we favor? Ecological cycles favor the retention of capabilities that were once useful—a drought may return. Economic pendulum swings argue for the same.

However, the march of science and technology creates both obsolescence and novel opportunities and challenges. Some, but not all, can be anticipated by studying trends. It is fairly empty to recommend focusing on efficiency in execution while retaining flexibility, but “avoid overreacting to perceived threats” is again good advice. Businesses narrow when they should diversify. Government funding is poured into defense and intelligence at the expense of health, education, infrastructure, and environment. And finally, my favorite hot button example, large conferences.

HCI researchers have always been terrified of appearing softer than traditional computer science and engineering. So we followed their lead. We drove down conference acceptance rates, kept out Design, and chased out practitioners. But other CS fields evolve more slowly, with greater consensus on key problems. Human interaction with computers explodes in all directions. Novelty is inevitable, yet with acceptance rates of 15% - 25%, each existing subfield accepts research central to today’s status quo, leaving little room for research that spans areas, is out of fashion but likely to return, involves leading edge practitioners, or is otherwise novel [5]. Could our process consign us to be followers, not leaders?

Endnotes

1. His atlas of women was not completed.

2. To be clear about my terminology use, a “football team” is an organization, although the group of players on the field together is a team. Boeing called the thousands of people working on the 777 a team, but here it would be an organization.

3. An exception is an organization tasked with problem-solving. The World War II codebreaking organization at Bletchley Park made extraordinary use of diversity, documented in Sinclair McKay’s The Secret Lives of Codebreakers: The Men and Women Who Cracked the Enigma Code at Bletchley Park (Plume, 2010).

4. DARPA is an agency with top-down management which can and does eliminate programs, sometimes restarting them years later.

5. See Donald Campbell’s provocative 1969 essay “Ethnocentrism of Disciplines and the Fish-scale Model of Omniscience.”

Thanks to Gayna Williams for ideas and perspectives, John King, Tom Erickson, and Clayton Lewis for comments on an earlier draft.



Posted in: on Tue, August 05, 2014 - 11:40:21

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Possibilities, probabilities, and sensibilities


Authors: Uday Gajendar
Posted: Thu, July 31, 2014 - 10:27:43

Design is an iterative activity involving trajectories of exploration and discovery, of the problem space, the target market, and the solutions, towards making good choices. As the primary designer charged with delivery of an optimal solution, I must contend with such problems of choice, and thus trade-offs. Designing is fundamentally about mediating “choices”: what elements to show on-screen, which pathways to reveal, how to de-emphasize some features or prioritize others, and so forth. Some are “good choices” and some are not so good. So, if choice is at the heart of designing, how does a designer effectively handle too many choices and options—a dazzling array proposed by earnest product managers seeking revenues and tenacious engineers wanting to showcase brilliance? It’s a veritable challenge that I confront in my own daily design work, too. I offer a potential framework that I have been evolving and applying which may be useful: iteratively defining the possibilities, probabilities, and sensibilities. Let me explain further...

Possibilities: This involves mapping out to the fullest extent all possible variants of user types, contexts of use, or solutions for a problem. Even if it’s wild or unfeasible, or an “edge case,” just capture it anyway so it is recorded for everyone on the team to discuss. This shows commitment to open-minded understanding of the situation, creating trust with colleagues, which is vital to delving with credibility into the “possibility space.” To make this practical, in my own work lately I’ve done the following to express this wide set of possibilities: 

  • Itemize, as a giant matrix, all possible visual states and signals that apply to a data object, given the underlying factors as interpreted by the back-end logic (regardless of whether a user understands or sees them).
  • Map out all possible filter and sorting combinations from a multifaceted filter panel control, to force understanding of potential impacts on the UI and user’s workflow.
  • Diagram all possible pathways for accessing the main application, depending on various roles, permissions, states, timeouts, and errors, in exhaustive detail so the team is fully aware.

Probabilities: Next, by virtue of the previous exercise and artifacts, force a critical dialogue on the likelihood that these possible events or states will actually happen. This necessarily requires informed stakeholders to contribute and clarify, to stake out a position and defend it with data (empirical or anecdotal, as needed). This also surfaces qualifying conditions that were implicit and encourages everyone to understand how or why certain possibilities are not favorable or likely. This awareness increases the team’s empathy for the situation, and possibly incites more user studies or other “pre-work” around the business or technical parameters. 

Again, to make this practical, the dialogue requires everyone in the room (or otherwise present) to agree on the criteria for likelihood and to stack-rank the probabilities. This, of course, is a sneaky way of forcing priorities—essential to designing and focusing down for an optimal solution. Clearly, to do this well requires historical and empirical data trends, as well as observational data of users in the wild, as needed. Otherwise you’re making wild guesses, which is a sign of the team’s research needs. (That’s a whole other topic!)

Once your probabilities are ranked and focused, what’s next? Then it’s time to introduce the humanistic, poetic element of desirability, which is the “sensibilities” part.

Sensibilities: Now, having winnowed down to some set of actually probable states, types, situations, or whatever, the question you must raise is “What is most sensible?” for the targeted user. This refers to some articulation that is meaningful, relevant, maybe even delightful—literally engaging with the user’s senses to become something satisfying and productive. This requires a more ambiguous interaction with the team, but one grounded in mock-ups, prototypes, and animations that portray what is sensibly viable for the given personas and their goals and desires. This also requires tapping into a set of design principles and cultural values as espoused by the company—a reflection of the central brand promise.
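To make the three passes concrete, here is a rough sketch, my own construction rather than anything prescribed by the framework, of how a team might record them as a simple filtering and ranking exercise. The options, scores, and threshold are illustrative assumptions standing in for the team dialogue, research, and design principles described above.

# A rough, hypothetical sketch of the three passes as a filtering/ranking exercise.
# The data, scores, and threshold are illustrative; in practice they come from
# team dialogue, research, and the company's design principles.
from dataclasses import dataclass


@dataclass
class Option:
    name: str            # a possible state, pathway, or solution variant
    probability: float   # 0..1, how likely it is to actually occur (pass 2)
    desirability: float  # 0..1, how sensible/meaningful for the persona (pass 3)


# Pass 1 - possibilities: capture everything, even edge cases.
options = [
    Option("First-run user hits an expired session", probability=0.05, desirability=0.6),
    Option("Returning user lands on a saved filter set", probability=0.7, desirability=0.9),
    Option("Admin opens the app with no data yet", probability=0.3, desirability=0.8),
]

# Pass 2 - probabilities: rank by likelihood and drop the truly improbable.
LIKELY_ENOUGH = 0.1
probable = sorted(
    (o for o in options if o.probability >= LIKELY_ENOUGH),
    key=lambda o: o.probability,
    reverse=True,
)

# Pass 3 - sensibilities: among the probable, focus on what is most
# meaningful for the targeted persona (here, a simple weighted score).
focus = max(probable, key=lambda o: o.probability * o.desirability)
print("Design focus:", focus.name)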

Designing is about choices and arriving at a balanced solution that strives to meet a variety of demands from many perspectives. It’s easy as a designer to get caught in the mess of too many options and lose sight of what matters most to customers: ordinary people leading busy yet satisfying lives. By thinking through the possibilities, probabilities, and sensibilities (and collaborating with teammates along the way), you can shape a structured approach to reaching that optimal solution.



Posted in: on Thu, July 31, 2014 - 10:27:43

Uday Gajendar

Uday Gajendar is Director of User Experience at CloudPhysics, focused on bringing beauty and soul to Big Data for virtualized datacenters.


Report from DIS 2014 part 1: Moral status of technological artifacts


Authors: Deborah Tatar
Posted: Fri, July 25, 2014 - 10:25:01

Peter-Paul Verbeek gave the opening keynote speech last month at the DIS (Designing Interactive Systems) conference in Vancouver. His topic was the moral status of technological artifacts. Do they have any? 

He argues that “yes, they do.” The argument runs that humans and objects are co-creations or, as he prefers, hybrids. Just as J. J. Gibson long ago argued for an ecology between the human and the environment—the eye is designed to detect precisely those elements of the electromagnetic spectrum that are usefully present in the environment—so too are humans and designed objects culturally co-adapted. This, by itself, is not revolutionary. In fact, it is on the basis of this similarity that Don Norman brought the term affordance into human-computer interaction. It brings together both the early, easier-to-swallow idea in cognitive psychology and human-computer interaction that the ways that we arrange the space around us are extensions of our intelligence, and the post-modern philosophical move. Suchman [1] uses the word re-creations rather than hybrids to describe the intertwining of (high-tech) artifact and person. However, Suchman, who spent many years embedded in design projects, emphasizes our active role in such re-creations, the things that we do, for example, in order to be able to imagine that robots have emotions. 

But Verbeek does not stop there. He moves towards an enhanced framework from which to understand this hybrid relationship. He draws on Don Ihde (this looks like a good link that can generate more for the interested) to identify different kinds of relationships between the designed artifact and the human. The artifact may be part of the human, bearing an embodied relationship, as with glasses. But now we have to think embodied, as in Google glasses? The artifact may have a hermeneutic relationship to the human, bringing or excluding information for our consideration into the bright circle of our recognition, as with the thermometer. Now, we ask a Fitbit? The artifact may have a contrastive role, called alterity or otherness, as in a robot. Last, the artifact may provide or create background—maybe even the soundtrack of our lives as we jog-trot down the beach along with Chariots of Fire. In the context of these distinctions, Verbeek asks what we know, do, or hope for with respect to these technologies. These are excellent questions, and lest we be too hasty in our answers, his examples summarize unintended effects and show how new technologies create new dilemmas and possibilities. Courtesy of modern medical testing, for example, much congenital disease is moving from its status as fate to a new status as decision. Fetal gender decisions will soon be playing out in homes near you—and everywhere else. The decisions about whether to have a girl or a boy are local, but society has an interest in anticipating and gauging the effects. (One of the reasons for the oft-depicted plight of women in Jane Austen’s England was the dearth of men caused by England’s imperial struggles.)

What are the consequences of Verbeek’s analysis? Here is where the design dilemmas start to build. Let us suppose that we just accept as normal the idea that we are hybrids of artifacts and biology. Fair ‘nuff. But Verbeek goes beyond this. He rejects the separation of the idea of human and machine in the study of human-machine interaction. The difficult part is that the relationship of the designer to the design components is not the same. The individual designer controls the machine, but only influences the person. The power of the relationship, the components of the relationship that cause us to conceptualize the relationship as so strong as to constitute hybridity, is precisely what leads to the need to study the relationship. 

What makes me impatient about Verbeek’s approach is that, as I understand it, he does not prioritize recent changes in the power of the machine. For many years, some of us (cf. Engelbart’s vision of human augmentation) imagined a future in which people could do precisely what they were already doing, and have something for free via the marvels of computing. We could just keep our own calendar, for example, and have it shared with others. 

But we do not hear this rhetoric any more. Now the rhetoric is one of expectation that our most private actions will do precisely those things that can be shared by the system widely. The intransigence of the computer wears us down even where we would prefer to resist and where another human being would give us a break. Verbeek’s position—like Foucault’s—feeds into the corporate, systemic power-grab by weakening our focus on those design and use actions that we can indeed take. 

If I am frustrated with Verbeek, I am more frustrated with myself. Our own “Making Epistemological Trouble” goes no further towards design action than to advance the hope that the third paradigm of HCI research can, by engaging in constant self-recreation, stir the design pot. These are the same rocks against which so much feminist design founders. In Verbeek’s view, we designers can have our choice of evil in influencing hybridity. Influence can be manifested as coercive, persuasive, seductive, or decisive (dominating). 

The designer may think globally but must act locally. The thing is that design action is hard. Moral design action is harder. In the May/June 2014 issue of Interactions, I published a feature that advanced a theory of what we call “human malleability and machine intransigence.” The point here is to draw attention to one class of design actions that often can be taken by individual designers, those that allow users to reassert that which is important to their identity and vision of themselves in interacting with the computer. 

Often when there is a dichotomy (focusing on human-machine interaction vs. rejecting the human-machine dichotomy), there are two ways of being in the middle. One way is to just reject the issue altogether. “It’s too complicated.” “Who can say?” “There are lots of opinions.” But the other is to hold fast onto the contradiction. In this case, it means holding onto the complexity of action while we think out cases. And it means something further than this. It means that individual thinkers, like me and you, regardless of our corporate status indebtedness, should resist the temptation to be silenced by purely monetized notions of success. To end with one small but annoying example, it is a tremendous narrowing of the word helping to say that corporations are helping us by tailoring the advertisements that we see to things that we are most likely to buy. Yeah, sure. It’s helping in some abstract way, but not as a justification for ignoring the manifest injustices inherent in the associated perversion of shared knowledge about the world. I think that Verbeek may have been saying some of this when he talked about the need to anticipate mediations, assess mediations, and design mediations, but my impatience lies in that I want it said loudly, repeatedly, and in unmistakable terms. 

Endnote

1. Suchman, L. Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge University Press, New York, 2007.

Thanks to Jeffrey Bardzell for comments on an earlier version of this!


Posted in: on Fri, July 25, 2014 - 10:25:01

Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.


Ethical design


Authors: Ashley Karr
Posted: Thu, July 24, 2014 - 9:29:26

Takeaway: Something as fundamental to the human experience as ethics ought to be a fundamental part of human-centered design.

If only it were all so simple! If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?
—Aleksandr Solzhenitsyn, The Gulag Archipelago

For a long while, I have been angry and frustrated with the design process and design community. It seems that our sole purpose is to make things that maximize profits as quickly as possible. User experience research and design is often used as a means to trick, manipulate, and separate people from their money and/or personal information. 

Finally and thankfully, I came to realize the cause of my anger and frustration.

Ethics.

Ethics are almost entirely absent from UX. I have six HCI, UX, and design textbooks and one seminal Air Force report on user interface design within arm’s reach at this very moment. That is a total of seven well-respected texts in our field. Only two of them even mention ethics. Of these two, one textbook has a paragraph on ethics regarding recruiting participants for research. The other has one-and-a-half pages on ethical interaction design, but it fails to even define ethics.

Define ethics

I have decided to do my part in rectifying this situation. I will begin by defining ethics. The word comes from the Greek ethos, meaning customs. Ethics are right behaviors according to the customs of a particular group. I like to think of ethical things as thoughts, words, behaviors, designs, systems, and customs that are cumulatively more beneficial to life than they are harmful. Ethics are an essential part of civilization. Without ethics, people would not have ideas of right and wrong. They make society more stable and help people choose right actions over wrong ones. A society without ethics will fail sooner rather than later. It is important to state, however, that customs aren’t necessarily ethical. Often unethical customs inspire social change, movements, and revolutions.

Ethics require constant practice and consideration—like good hygiene. We cannot wash our hands once and expect them to be clean for life. We must wash our hands multiple times a day, every day, in order for our hands to remain clean. With ethics, we cannot engage in one ethical act in our lives and assume that we are forever after an ethical person. We must practice and consider ethics at every turn. As Abraham Lincoln said, “There are few things wholly evil or wholly good. Almost everything...is an inseparable compound of the two, so that our best judgment of the preponderance between them is continually demanded.” 

Why ethics are important in our field

There are three reasons why it is imperative that we, as makers of interactive computing technology, embed ethics into our culture, methods, and metrics:

  1. What we create and put into the world has actual effects on actual people. Interactive designs do things. We need to make sure that our efforts go into making things that do good.
  2. Computing technology has the ability to amplify human abilities and spread exponentially in record time.
  3. The ability to design and develop computing technology is to today’s world what literacy was two thousand years ago. We are (tech)literate in a world of people who cannot read. We are the leaders and creators of the sociotechnical system in which we now live. We are powerful—more powerful than we even realize. With great power comes great responsibility.

Allies in the field

Very few professionals within our field are actively incorporating ethics into their work. I have managed to find a few, and I will highlight the main objectives of three researchers here. (Please feel free to share with me other professionals working on this topic. I would love to hear from you.)

  1. Florian Egger addresses deceptive technologies. He states there is a fine line between user experience and user manipulation, and insights into user behavior and psychology can be used for ethical or unethical purposes. If designers understand certain “dirty tricks” that their unethical counterparts devise, users can be warned of these practices before falling victim. He also states that persuasion can be used for the good of the user.
  2. Sarah Deighan is conducting research on ethical issues occurring within UX, including how UX professionals view these issues. She is attempting to make ethical resources available for UX professionals.
  3. Rainer Kuhlen wrote The Information Ethics Matrix: Values and rights in electronic environments. He explores new attitudes toward knowledge and information (sharing and open-access) and defines communication rights as human rights. He states that communication is a fundamental social process, a basic human need, and the foundation of all social organization. Everyone, everywhere, should have the same opportunity to communicate, and no one should be excluded from the benefits of access to information.

Define Ethical Design

In order to foster the adoption of ethics into our design and development processes, I am creating a conceptual framework called Ethical Design. It allows designers and design teams to create products, services, and systems that do no harm and improve human situations. Ethical design extends to all people and other living things that are in any way involved in the product, service, and/or system lifecycle. Borrowing from About Face by Cooper, Reimann, and Cronin, I explain the meaning of doing no harm and improving the human situation below.

Do no harm

  • Interpersonal harm: loss of dignity, insult, humiliation
  • Psychological harm: confusion, discomfort, frustration, coercion, boredom
  • Environmental harm: pollution, elimination of biodiversity
  • Social and societal harm: exploitation, creation or perpetuation of injustice

Improve the human situation

  • Increase understanding: individual, social, cultural
  • Increase efficiency/effectiveness: individuals and groups
  • Improve communication: between individuals and groups
  • Reduce sociocultural tension: between individuals and groups
  • Improve equity: financial, social, legal
  • Balance cultural diversity with social cohesion

What I hope to avoid by using Ethical Design

I do not want to make digital junk. I do not want to waste time, money, and energy on things that don’t help anyone in any meaningful way. I don’t want other people to waste their time, money, and energy on those things, either, even if those people are investors with millions, billions, or trillions of dollars to burn. As a specific example, I do not want our transactional system to be based on technology that depends on inconsistent networks, has limited storage, and runs on batteries that die every three hours. Yes, I am talking about mobile payments in general. Smartphones were meant to be auxiliary devices—they were not meant for complete human dependency. We cannot run our lives from our mobile phones, nor can we build ubiquitous and high priority systems, like transactional systems, based upon such technology. It just won’t work. In a handful of limited and specific cases, mobile payments are an interesting option, but on a grand scale, no.

What I hope to achieve with Ethical Design

I want a healthy, happy family and a healthy, long life. I want a safe, clean house in a safe, clean neighborhood with enough room for all of us, including the dog. I want clean water, clean air, safe transportation, education for all, and a good school walking distance from our home. I want decent, clean clothes that keep us protected from the elements and allow us to express ourselves. I want healthy, safe food and enough to sustain ourselves. I want the right to communicate and freedom to retrieve information. I want time to spend with family and friends and time to spend alone in self-reflection. I want a satisfying career that allows me to help other people improve their situation in life. I want the same for everyone else.

In order to make sure I achieve what I have listed in the paragraph above, I am beginning with these three short-term objectives for Ethical Design:

  1. Add ethics as a standard usability requirement and heuristic guideline. 
  2. Include a course on ethics and ethical design in every CS/HCI/UX/HF/IxD program.
  3. Include in all CS/HCI/UX/HF/IxD textbooks a chapter on ethics and ethical design.

In conclusion

I will continue to discuss Ethical Design and create methods, metrics, resources, and conversation starters to support others interested in the topic. Please contact me if you are one of those people. Thanks very much for reading and for caring. I appreciate it. 


Posted in: on Thu, July 24, 2014 - 9:29:26

Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


@Richard Anderson (2014 07 24)

Add Jon Kolko to your list. He and I were Co-Editors-in-Chief of interactions a while back. Here is a quote of his from an interview I did with him and Don Norman:

“Not all problems are equally worth solving. It seems like we’ve taken it for granted that every activity within the context of design is worth doing, whether it is a drinking bottle or a microphone or a website for your band. I don’t know if that is true, and I’d like to challenge it and would like more people to challenge it more regularly. That is the focus of the Austin Center for Design: problems that are socially worth doing, and broadly speaking, that means dealing with issues of poverty, nutrition, access to clean drinking water, the quality of education, ... These are big, gnarly problems, sometimes called ‘wicked’ problems, and it seems incredibly idealistic to think that designers can solve them—I agree, I don’t think designers can solve them. In fact, I’m not sure anyone can solve them, but I think designers can play a role in mitigating them—a really important role because of all of the design thinking stuff that we’ve already talked about: the power of that can drive innovations that are making millions of dollars for companies; it seems that that same power can be directed in other ways.”

For more, see http://riander.blogspot.com/2011/11/out-with-old-in-with-new-conversation.html.

(from a GA instructor to your north)


Service design 101


Authors: Lauren Chapman Ruiz
Posted: Mon, July 21, 2014 - 12:04:06

This article was co-written by Lauren Ruiz and Izac Ross.

We all hear the words service design bandied about, but what exactly does it mean? Clients and designers often struggle to find a common language to define the art of coordinating services, and frequent questions arise. Service design often emerges as a necessity in the space of customer experience or complicated journey maps. In response, here is a brief FAQ primer to show the lay of the land in service design.

What are services?

Services are intangible economic goods—they lead to outcomes as opposed to physical things customers own. Outcomes are generated by value exchanges that occur through mediums called touchpoints. For example, when you use Zipcar, you don’t actually own the car; you buy temporary access to it. You use the car, and once you return it, it passes to someone else. Every point at which you engage with Zipcar is a touchpoint.

What creates a service experience?

Services are always co-created by what we call service users and service employees—the direct beneficiaries of the service, and the individuals who see the service through.

This oftentimes means that the outcome will vary for each service user. Your experience of a service may be completely different from another’s. Think of a flight—it can be a pleasant experience, or if you have a screaming baby next to you, not so great. Service employees can do everything to provide a good experience, but there are unknown factors each time that can ruin that experience.

A positive service experience considers and works to account for these situations—they are intentionally planned.

Who else is involved in a service?

A service experience often involves more than just the service user and employee. There are several types of people working together to create a service:

  • Service customers actually purchase the service; sometimes the customer is a different person from the one who actually uses the service.
  • Service users directly use the service to achieve the outcome.
  • Frontstage service employees deliver the service directly to the user.
  • Backstage service employees make everything happen in the background; the user doesn’t see or interact directly with these people.
  • Partner service employees are other partners involved in delivering the service. For example, UPS is a partner service employee to Amazon. You may order from Amazon, but UPS plays a role in completing your service experience.

What is frontstage and backstage?

In services, there are things the customer does and doesn’t see—we call this frontstage and backstage. Think of it like theater: backstage is what happens behind the curtain to support the actors; frontstage is what you see in front of the curtain. Those backstage do just as much to shape the experience as those frontstage. They help to deliver the service, play an active and critical part in shaping the experience, and represent a company’s brand.

Partners help the company deliver the service outcome by doing things like delivering packages, providing supplies for the service, or processing data.

What are touchpoints?

Earlier I mentioned touchpoints as the medium through which value exchanges happen, leading to the outcomes of a service. Touchpoints are these exchange moments in which service users engage with the service.

There are five different types of touchpoints:

  • People, including employees and other customers encountered while the service is produced
  • Place, such as the physical space or the virtual environment through which the service is delivered
  • Props, such as the objects and collateral used to produce the service encounter
  • Partners, including other businesses or entities that help to produce or enhance the service
  • Processes, such as the workflows and rituals that are used to produce the service (processes tie together the people, place, props, and partners).

Unlike most products, a service can be purchased multiple times. If a service is purchased just once, it may be a high-value exchange. Since most services are used frequently, we approach designing a service by considering the service cycle.

The service cycle helps answer the following questions:

  • How do we entice service users?
  • How do they enter into the service?
  • What is their service experience?
  • How do they exit from the service?
  • And how do we extend the service experience to retain them as a repeat service user?

What has 30 years done to services?

Services have changed a great deal over the past 30 years. Think about banking services. At one point, access to banks was limited to four channels: checks, phone, mail, and branches. Today, there are many more access channels that need to be coordinated, including debit cards, ATMs, online banking, mobile web access, texting, iPhone, Android, mobile check deposit, retail partners, and even Twitter.

And here’s the exciting news—service design (or as some industries might call it, customer experience) is critical to making a cohesive experience across all these channels! There is a desperate need to coordinate these elements using the skills and principles of design.

Like most industries, design disciplines have been changing in response to paradigm shifts in the economy. Graphic design emerged from the printing press. The industrial age gave birth to industrial design. Personal computing and the mobile age gave rise to interaction design. And the convergence of all of these channels has brought service design forward to coordinate service outcomes.

So how important is service design?

We’ve all had bad service experiences across a range of industries. They’re why companies lose customers, and they can bring frustration, pain, and suffering—from poor transit systems to care delivery. When clients neglect backstage or frontstage employees, every pain point will show through to a service user and customer.

Without effective service design, many companies break apart into disconnected channels, with no one overseeing or coordinating. And even if you’re creating a product, understanding the service you’re trying to put your product into will help your product be much more successful—remember, your B2B “product” is also one of your customer’s touchpoints.

In addition, there are many opportunities to leverage technology to create new services. Look at TaskRabbit—it starts as a digital experience, but without the “rabbits” to perform the service, it’s useless.

Finally, well-designed service experiences differentiate companies. Those who pay attention to wisely designing services will be poised to stand out and achieve success in our ever-changing economy.

So how important is service design? I hope this post has convinced you the answer is very. Tune in again as we’ll be continuing this topic with a deep dive into one of the most important tools of service design—the service blueprint.

Top image via Zipcar; all others created by Izac Ross.



Posted in: on Mon, July 21, 2014 - 12:04:06

Lauren Chapman Ruiz

Lauren Chapman Ruiz is an Interaction Designer at Cooper in San Francisco, CA, and is an adjunct faculty member at CCA.


@rypac (2014 07 22)

I seriously appreciate your information.
this post is helpful.
thanks for this idea


Visual design’s trajectory


Authors: Jonathan Grudin
Posted: Thu, July 17, 2014 - 3:50:11

Some graphic artists and designers who spent years on the edges of software development describe with bemusement their decades of waiting for appreciation and adequate computational resources. Eventually, visual design soared. It has impressed us. Today, design faces complexities that come with maturity. Cherished aesthetic principles deserve reconsideration.

An enthusiastic consumer

People differ in their ability to create mental imagery. I have little. I recognize some places and faces but can’t conjure them up. The only silver lining to this regrettable deficit is that everything appears fresh; the beauty of a vista is not overshadowed by comparison with spectacular past views. I’m not a designer, but design can impress me.

The first HCI paper I presented was inspired by a simple design novelty. I had been a computer programmer, but in 1982 I was working with brain injury patients. A reverse video input field—white characters on a black background—created by Allan MacLean looked so cool that I thought that an interface making strategic use of it would be preferred even if it was less efficient. I devised an experiment that confirmed this: aesthetics can outweigh productivity [1].

Soon afterward, as the GUI era was dawning, I returned to software development. A contractor showed me a softly glowing calendar that he had designed. I loved it. Our interfaces had none of this kind of beauty. He laughed at my reaction and said, “I’m a software engineer, not a designer.” “Where can I find a designer?” I asked.

I found one downstairs in Industrial Design, designing boxes. As I recall, he had attended RISD and had created an award-winning arm that held a heavy CRT above the desktop, freeing surface space and allowing the display to be repositioned with a light touch. I interested him in software design. It took about a year for software engineers to value his input. Other designers from that era, including one who worked on early Xerox PARC GUIs, recount working cheerfully for engineers despite having little input into decisions.

Design gets a seat at the table

I was surprised by design’s slow acceptance in HCI and software product development. Technical, organizational, cultural, and disciplinary factors intervened.

Technical. Significant visual design requires digital memory and processing. It is difficult to imagine now how expensive they were for a long time. As noted in my previous post, and in the recent book and movie about Steve Jobs, the Macintosh failed in 1984. It succeeded only after models with more memory and faster processors came out, in late 1985 and in 1986. Resource constraints persisted for another decade. The journalist Fred Moody’s account of spending 1993 with a Microsoft product development team, I Sing the Body Electronic, details an intense effort to minimize memory and processing. The dynamic of exponential growth is not that things happen fast—as in this case, often they don’t—it is that when things finally start to happen, then they happen fast. In the 2000s, constraints of memory, processing, and bandwidth rapidly diminished.

Organizational. The largest markets were government agencies and businesses, where the hands-on users were not executives and officers. Low-paid data entry personnel, secretaries who had shifted from typing to word processing, and other non-managerial workers used the terminals and computers. Managers equipping workers wanted to avoid appearing lavish—drab exteriors and plain functional screens were actually desirable. I recall my surprise around the turn of the century when I saw a flat-panel display in a government office; I complimented the worker on it, and her dour demeanor vanished; she positively glowed with pride. For decades, dull design was good.

Sociocultural. The Model T Ford was only available in black. Timex watches and early transistor radios were indistinguishable. People didn’t care. When you are excited to own a new technology, joining the crowd is a badge of honor. Personalization comes later—different automobile colors and styles, Swatches, distinctive computers and interfaces. The first dramatically sleek computers I saw were in stylish bar-restaurants.

Disciplinary friction. Software engineers were reluctant to let someone else design the visible part of their application. Usability engineers used numbers to try to convince software developers and managers not to design by intuition; designers undermined this. In turn, designers resented lab studies that contested their vision of what would fare well in the world. The groups also had different preferred communication media—prototypes, reports, sketches.

These factors reflected the immaturity of the field. Mature consumer products relied on collaboration among industrial design, human factors, and product development. Brian Shackel, author of the first HCI paper in 1959, also worked on non-digital consumer products and directed an ergonomics program with student teams drawn from industrial design and human factors.

As computer use spread in the 1990s, HCI recognized design, sometimes grudgingly. In 1995, SIGCHI backed the Designing Interactive Systems (DIS) conference series. However, DIS failed to attract significant participation from the visual design community: Papers focused on other aspects of interaction design. In the late 1990s, the CMU Human-Computer Interaction Institute initiated graduate and undergraduate degrees with significant participation of design faculty.

This is a good place to comment on the varied aspects of “design.” This post outlines a challenge for visual or graphic design as a component of interaction design or interface design focused on aesthetics. Practitioners could be trained in graphic art or visual communication design. Industrial design training includes aesthetic design, usually focused on physical objects that may include digital elements. Design programs may include training in interaction design, but many interaction designers have no training in graphic art or visual communication. CHI has always focused on interaction design, but had few visual designers in its midst. “Design” is of course a phase in any development project, even if the product is not interactive and has no interface, which adds to the potential for confusion.

Design runs the table

Before the Internet bubble popped in 2000–2001, it dramatically lowered prices and swelled the ranks of computer users, creating a broad market for software. This set the stage for Timex giving way to Swatch. In the 2000s, people began to express their identity through digital technology choices. In 2001, the iPod demonstrated that design could be decisive. Cellphone buddy lists and instant messaging gave way to Friendster, MySpace, Facebook, and LinkedIn. The iPod was followed by the 2003 Blackberry, the iPhone in 2007, and other wildly successful consumer devices in which design was central.

The innovative Designing User Experience (DUX) conference series of 2003–2007 drew from diverse communities, succeeding where DIS had failed. It was jointly organized by SIGCHI, SIGGRAPH, and AIGA—originally American Institute of Graphic Arts, founded in 1914, the largest professional organization for design.

The series didn’t continue, but design achieved full acceptance. The most widely-read book in HCI may be Don Norman’s The Psychology of Everyday Things. It was published in 1988 and republished in 2002 as The Design of Everyday Things. Two years later Norman published Emotional Design.

Upon returning to Apple in 1997, Steve Jobs dismissed Apple’s HCI group and vice president Don Norman. Apple’s success with its single-minded focus on design has had a wide impact. For example, the job titles given HCI specialists at Microsoft evolved from “usability engineers” to “user researchers,” reflecting a broadening to include ethnographers, data miners, and others, and then to “design researchers.” Many groups that were focused on empirical assessment of user behavior had been managed in parallel with Design and are now managed by designers.

Arrow or pendulum?

Empowered by Moore’s law, design has a well-deserved place at the table, sometimes at the decision-maker’s right hand. But design does not grow exponentially. Major shifts going forward will inevitably originate elsewhere, with design being part of the response. An exception is information design—information is subject to such explosive growth that tools to visualize and interact with it will remain very significant. Small advances will have large consequences.

In some areas, design may have overshot the mark. A course correction seems likely, perhaps led by designers but based on data that illuminate the growing complexity of our relationships with technology and information. We need holistic views of people’s varied uses of technology, not “data-driven design” based on undifferentiated results of metrics and A/B testing.

I’d hesitate to critique Apple from Microsoft were it not for the Windows 8 embrace of a design aesthetic. Well-known speakers complain that “Steve Ballmer followed Steve Jobs over to the dark side,” as one put it. They are not contesting the value of appearance; they are observing that sometimes you need to do real work, and designs optimized for casual use can get in the way.

My first HCI experiment showed that sometimes we prefer an interface that is aesthetic even when there is a productivity cost. But we found a limit: When the performance hit was too high, people sacrificed the aesthetics. Certainly in our work lives, and most likely in our personal lives as well, aesthetics sometimes must stand down. Achieving the right balance won’t be easy, because aesthetics demo well and complexity demos poorly. This creates challenges. It also creates opportunities that have not been seized. Someone may be doing so out of my view; if not, someone will.

Aesthetics and productivity

Nature may abhor a vacuum, but our eyes like uncluttered space. When I first opened a new version of Office on my desktop, the clean, clear lettering and white space around Outlook items were soothing. It felt good. My first thought was, “I need larger monitors.” With so much white space, fewer message subject lines fit on the display. I live in my Inbox. I want to see as much as my aching eyes can make out. I upsized the monitors. I would also reduce the whitespace if I could. I’d rather have the information.

A capable friend said he had no need for a desktop computer—a tablet suffices, perhaps docked to a larger display in his office. Maybe our work differs. When I’m engaged in a focal task, an undemanding activity, or trying out a new app, sparsity and simplicity are great. When I’m scanning familiar information sources, show me as much as possible. As we surround ourselves with sensors, activity monitors, and triggers, as ever more interesting and relevant digital information comes into existence, how will our time be spent?

Airplane pilots do not want information routed through a phone. They want the flight deck control panel, information densely arrayed in familiar locations that enable quick triangulations. If a new tool is developed to display airspeeds recorded by recent planes on the same trajectory, a pilot doesn’t want a text message alert. Tasks incidental to flying—control of the passenger entertainment system perhaps—might be routed through a device.

We’re moving into a world where at work and at home, we’ll be in the role of a pilot or a television news editor, continually synthesizing information drawn from familiar sources. We’ll want control rooms with high-density displays. They could be more appealing or less appealing, but they will probably not be especially soothing.

Design has moved in the opposite direction, toward sparsely aesthetic initial or casual encounters and focal activity. Consumer design geared toward first impressions and focal activity is perfect for music players and phones. Enabling people to do the same task in much the same way on different devices is great. However, when touch is not called for, more detailed selection is possible. Creative window management makes much more possible with large displays. A single application expanded to fill an 80-inch display, if it isn’t an immersive game, wastes space and time.

I observed a 24x7 system management center in which an observation team used large displays in a control panel arrangement. The team custom-built it because this information-rich use was not directly supported by the platform.

You might ask, if there is demand for different designs to support productivity, why hasn’t it been addressed? Clever people are looking for ways to profit by filling unmet needs—presumably not all are mesmerized by successes of design purity. My observation is that our demo-or-die culture impedes progress.  A demo is inherently an initial encounter. A dense unfamiliar display looks cluttered and confusing to executives and venture capitalists, who have no sense of how people familiar with the information will see it.

This aggravates another problem: the designers of an application typically imagine it used in isolation. They find ways to use all available screen real estate, one of which is to follow a designer’s recommendation to space out elements. User testing could support the resulting design on both preference and productivity measures if it is tested on new users trying the application in isolation, which is the default testing scenario. People using the application in concert with other apps or data sources are not given ways to squeeze out white space or to tile the display effectively.

Look carefully at your largest display. Good intentions can lead to a startling waste of space. For example, an application often opens in a window that is the same size as when that application was most recently closed. It seems sensible, but it’s not. Users resize windows to be larger when they need more space but rarely resize them smaller when they need less space, so over time the application window grows to consume most of a monitor. When I open a new window to read or send a two-line message, it opens to the size that fits the longest message I’ve looked at in recent weeks, covering other information I am using.
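A different default would fix this. What follows is a minimal sketch in Python, purely illustrative and not the behavior of any actual window manager (the function name, parameters, and padding constants are all assumptions of mine): size a new window to what its content needs, capped by the sizes the user typically chooses rather than the largest window ever opened, and clamped to the screen.

    import statistics

    def choose_window_size(content_w, content_h, recent_sizes,
                           screen_w, screen_h, min_w=400, min_h=300):
        """Pick a (width, height) in pixels for a newly opened window.

        content_w, content_h: space the content needs without scrolling.
        recent_sizes: (w, h) pairs the user deliberately resized windows to.
        """
        # Start from what the content needs, padded a little for window chrome.
        w, h = content_w + 40, content_h + 60

        # Cap at the user's typical chosen size (the median), not the maximum,
        # so one huge message doesn't inflate every later window.
        if recent_sizes:
            w = min(w, statistics.median(s[0] for s in recent_sizes))
            h = min(h, statistics.median(s[1] for s in recent_sizes))

        # Keep the window usable, but never larger than the screen.
        w = max(min_w, min(w, screen_w))
        h = max(min_h, min(h, screen_h))
        return int(w), int(h)

    # A two-line reply opens small even if the last message filled the monitor.
    print(choose_window_size(500, 120, [(1600, 1000), (900, 600), (800, 500)],
                             2560, 1440))    # -> (540, 300)

The particular numbers matter less than the principle: opening size should track content and typical use, not the high-water mark, so a short message stays small and the information behind it stays visible.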

The challenge

The success of the design aesthetic was perfectly timed to the rapidly expanding consumer market and surge of inexpensive digital capability in just the right segment of the exponential curve. It is a broad phenomenon; touch, voice, and a single-application focus are terrific for using a phone, but no one wants to gesticulate for 8 hours at their desk or broadcast their activity to officemates. At times we want to step back to see a broader canvas.

The paucity of attention to productivity support was recently noted by Tjeerd Hoek of Frog Design. The broad challenge is to embrace the distinction between designs that support casual and focal use and those that support high-frequency use that draws on multiple sources. Some designers must unlearn a habit of recommending aesthetic uncluttered designs in a world that gets more cluttered every week. Cluttered, of course, with useful and interesting information and activities that promote happier, healthier, productive lives.

Endnote

1. J. Grudin & A. MacLean, 1984. Adapting a psychophysical method to measure performance and preference tradeoffs in human-computer interaction. Proc. INTERACT '84, 737-741.

Thanks to Gayna Williams for suggesting and sharpening many of these points. Ron Wakkary and Julie Kientz helped refine my terminology use around design, but any remaining confusion is my fault.

Posted in: on Thu, July 17, 2014 - 3:50:11

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Excuse me, your company culture is showing


Authors: Monica Granfield
Posted: Fri, July 11, 2014 - 9:36:16

Find the simple story in the product, and present it in an articulate and intelligent, persuasive way. —Bill Bernbach

As I read this quote by the all-time advertising great Bill Bernbach, it occurred to me that simplifying and distilling a product story, and representing it persuasively and innovatively in a product, depends on a company’s brand and culture, and on how the brand embodies the culture. 

This is not new news, but it is news worth revisiting—your company culture and politics surface in the design of your product. 

As a designer, my inclination or habit is to try to understand how a design solution was reached—how and why something was created and designed as it was. In doing this, quite often it becomes apparent how and why certain design trade-offs and decisions were made. One can almost hear the conversations that occurred around the decisions. 

A confusing design that provides little guidance and direction, or one that does not provide enough flexibility and so generates end-user frustration, can often be traced back to a culture where the end user’s voice is not heard or represented. 

Business trade-offs, technical decisions, design trade-offs, research or lack of it, political posturing—it's all there, reflected in your product. Every meeting, every disagreement, every management decision—all are represented in the end result, the design of your product and the experience your users have with that product. 

Does your company innovate or follow? Is the design of your product driven by clear and thoughtful goals and intentions? Most of these aspects of a design can be traced directly to company culture. Just as the underlying technical architecture surfaces in the product design, so too do the corporate culture, politics, and decision making. 

Is the company engineering focused? Sales focused? Does your culture represent your brand? Where do the product goals align with these intentions? Design goals need to align with the business goals, which should be directly reflected in the product’s design. The clearer your company goals and mission, the clearer your design intentions will be. This will drive directed design thinking, resulting in useful, elegant, well-designed, desirable products. 

I recently read an article that asked what it’s really like inside Apple. The answer: Everyone there embraces design thinking to support the business goals. That is the culture. Everyone's ideas matter, and are subject to the same rigor as a designer’s solution. Great idea? Let's vet that as we would any design idea or solution. This is what makes a great product. So when someone tells you they want to make products as cool as Apple’s, that they want to innovate, ask them about their culture. 

When rationalizing the design thinking and design direction of your products, consider whether you are representing a culture that you are proud of, and how that culture and the decisions you make will be represented in your product.



Posted in: on Fri, July 11, 2014 - 9:36:16

Monica Granfield

Monica Granfield is a user experience designer at Imprivata. The views expressed on this website are exclusively her own and are not meant to reflect or represent the views of Imprivata.


The challenges of developing usable and useful government ICTs


Authors: Juan Pablo Hourcade
Posted: Mon, June 30, 2014 - 3:04:52

Governments are increasingly providing services and information to the public through information and communication technologies (ICTs). There are many benefits to providing information and services through ICTs. People who are looking for government-related information can find it much more quickly. Government agencies can update websites more easily than paper documents. Those taking advantage of government services through ICTs can save the time and frustration that often accompany waiting in line at government agencies. Further, government agencies can save resources when transactions can be handled automatically. In addition, ICTs for internal government use have the potential to help manage large amounts of information and handle processes more efficiently.

In spite of the promises of e-government, there have been several notorious failures in the implementation of e-government systems. The most recent example in the United States was the website for applying for health insurance under the Affordable Care Act (also known as Obamacare). The website was not usable by a significant portion of users when it launched. This is only the latest example of an e-government system that does not work as planned and requires additional resources to be functional (if it is not completely scrapped). These challenges have occurred across different administrations, and with different political parties in power. In the United States, historic examples include the Federal Aviation Administration’s air traffic control software, and the Federal Bureau of Investigation’s Virtual Case File [1]. Dada [2] provides examples of e-government failures in lower-income countries.

These failures tend to stem from the difficulty in following modern software engineering and user-centered design methods when contracting with companies for the development of ICTs. These modern methods call for iterative processes of development with significant stakeholder input and feedback. There is an expectation, for example, that detailed requirements will be developed over time, and that some may change. Typical government contracting for ICTs, on the other hand, often assumes that government employees, oftentimes without any training in software engineering, will be able to deliver an accurate set of requirements to a company that will then build a system with little or no feedback from stakeholders during the development process.

The challenge is that an overwhelming majority of elected officials and political appointees have little or no knowledge of software engineering or user-centered design methods. Even people responsible for ICTs at government agencies may not have any specific training in these methods. It is rare, for example, for government agencies to usability test competing technologies before deciding which one to purchase. 

I saw this first-hand while I worked at the U.S. Census Bureau. At the time, the Census Bureau was planning to use handheld devices to conduct the 2010 Census. In spite of the significant investment to be made, no one in the leadership was familiar with software engineering or user-centered design methods, and they trusted the management of the process to employees with some background in ICTs, but no training or experience in handling projects of such magnitude, and little knowledge of appropriate methods. This resulted in the development of a set of requirements that no one understood, and that came largely from long-time employees, with no feedback or consideration for the fact that those who would use the system would be temporary employees. While there was some involvement of usability professionals in the process, it was “too little, too late” and did not have an impact on the methods used. The requirements were turned over to a contractor, and a test of the resulting software resulted in the need to change more than 400 requirements. The project had to be scrapped after almost $600 million had been spent on the contractor (not counting the resources spent in-house), which meant that the Census Bureau had to spend an extra $3 billion in processing paper forms that would have been unnecessary had the software been successfully developed.

So how can we help? HCI researchers and professionals can contribute to public policy by informing elected officials and the leadership at government agencies of the methods that are most likely to result in usable and useful government ICTs that can be developed on time and within a given budget. This, in turn, can inform how government contracts for the development of ICTs are structured, such that they require iterative processes with a significant amount of stakeholder feedback. If these methods are followed, government agencies stand to save resources, and deliver better quality ICTs. This is an area where ACM, SIGCHI, and other professional associations could play a role. If we don’t do it, no one else will.

Endnotes

1. Charette, R.N. Why software fails. IEEE Spectrum 42, 9 (2005), 42-49.

2. Dada, D. The failure of e-government in developing countries: A literature review. The Electronic Journal of Information Systems in Developing Countries 26, 7 (2006), 1-10.



Posted in: on Mon, June 30, 2014 - 3:04:52

Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.


Organizational behavior


Authors: Jonathan Grudin
Posted: Mon, June 23, 2014 - 10:36:18

Two books strongly affected my view of organizations—those I worked in, studied, and developed products for. One I read 35 years ago; the other I just finished, although it came out 17 years ago.

Encountering Henry Mintzberg’s typology of organizational structure

In 1987, an “Organizational Science Discussion Group” was formed by psychologists and computer scientists at MCC. We had no formal training in organizational behavior but realized that software was being designed to support more complex organizational settings. Of the papers we discussed, two made lasting impressions. “A Garbage Can Model of Organizational Choice” humorously described the anarchy found in universities. It may primarily interest academics; it didn’t seem relevant to my experiences in industry. The discussion group’s favorite was a dense one-chapter condensation [1] of Henry Mintzberg’s 1979 The Structuring of Organizations: A Synthesis of the Research.

Mintzberg observed that organizations have five parts, each with its own goals, processes, and methods of evaluation. Three are the top management, middle management, and workers, which he labels strategic apex, middle line, and operating core. A fourth group designs workplace rules and processes, such as distribution channels, forms, and assembly lines. This he calls technostructure, although technology is not necessarily central. Finally there is everyone else: the support staff (including IT staff), attorneys, custodians, cafeteria workers, and so on.

Mintzberg argues that these groups naturally vie for influence, with one or another usually becoming particularly powerful. There are thus five “organizational forms.” Some are controlled by executives, but in divisionalized companies the middle line has strong autonomy, as when several product lines are managed with considerable independence. In an organization of professionals, such as a university, the workers—faculty—have wide latitude in organizing their work. In organizations highly reliant on regulations or manufacturing processes, the technostructure is powerful. And an “adhocracy” such as a film company relies on a collection of people in support roles.

When I left MCC, I was puzzled to find that Mintzberg’s analysis was not universally highly regarded. Where was supporting evidence, people asked. What could you do with it?  Only then did I realize why we had been so impressed. It arose from the unique origin of MCC.

An act of Congress enabled MCC to open its doors in 1984. In response to a prominent Japanese “Fifth Generation” initiative, anti-trust laws were modified to permit 20 large U.S. companies to collaborate on pre-competitive research. MCC was a civilian effort headed by Bobby Ray Inman, previously the NSA Director and CIA Deputy Director. It employed about 500 people, some from the shareholder companies and many hired directly. Our small discussion group drew from the software and human interface programs; MCC also had programs on artificial intelligence, databases, parallel processing, CAD, and packaging (hardware).

Consider this test of Mintzberg’s hypotheses: Create an organization of several hundred people spread across the five organizational parts, give them an ambiguous charter, let them loose, and see what happens. MCC was that experiment.

To a breathtaking degree, we supported Mintzberg’s thesis. Each group fought for domination. The executives tried to control low-level decisions. Middle managers built small fiefdoms and strove for autonomy. Individual contributors maneuvered for an academic “professional bureaucracy” model. Employees overseeing the work processes burdened us with many restrictive procedural hurdles, noting for example that because different shareholder companies funded different programs, our interactions should be regulated. Even the support staff felt they should run things—and not without reason. Several were smart technicians from shareholder companies; seeing researchers running amok on LISP machines, some thought, “We know what would be useful to the shareholders, these guys sure as hell don’t.”

Mintzberg didn’t write about technology design per se. We have to make the connections. Central to his analysis is that each part of the organization works differently. Executives, middle managers, individual contributors, technostructure, and support staff have different goals, priorities, tasks, and ways to measure and reward work. Their days are organized differently. Time typically spent in meetings, ability to delegate, and the sensitivity of their work differ. Individual contributors spend more time in informal communication, managers rely more on structured information—documents, spreadsheets, slide decks—and executives coordinate the work of groups that rarely communicate directly.

Such distinctions determine which software features will help and which may hinder. Preferences can sharply conflict. When designing a system or application that will be used by people in different organizational parts, it is important to consult or observe representatives of these groups during requirements analysis and design testing.

At MCC we did not pursue implications, but I was prepared when Constance Perin handed me an unpublished paper [2] in 1988. I had previously seen the key roles in email being senders and receivers; she showed that enterprise adoption could hinge on differences between managers, who liked documents and hated interruptions, and individual contributors, who engaged in informal communication and interruption. Over the next 25 years, studying organizational adoption of a range of technologies, I repeatedly found differences among members of Mintzberg’s groups. If it was confirmation bias, it was subtle, because somewhat obtusely I didn’t look for it and was surprised each time. The pattern can also be seen in other reports of enterprise technology adoption. This HICSS paper and this WikiSym paper provide a summary and a recent example.

Clayton Christensen and disruptive technologies

In 1997, Clayton Christensen published The Innovator’s Dilemma. Thinking it was a business professor’s view of issues facing a lone inventor, I put off reading it until now. But it is a nice analysis of organizational behavior based on economics and history, and is a great tool for thinking about the past and the present.

I have spent years looking into HCI history [3], piecing together patterns some of which are more fully and elegantly laid out by Christensen. The Innovator’s Dilemma deepened my interpretations of HCI history and reframed my current work on K-12 education. Before covering recent criticism of this short, easily read book and indicating why it is a weak tool for prediction, I will outline its thesis and discuss how I found Mintzberg and Christensen to be useful.

Christensen describes fields as diverse as steel manufacture, excavation equipment, and diabetes treatment, arguing that products advance through sustaining innovations that improve performance and satisfy existing customers. Eventually a product provides more capability than most customers need, setting the stage for a disruptive innovation that has less capability and a lower price—for example, a 3.5” disk drive when most computers used 5”+ drives, or a small off-road motorbike when motorcycles were designed for highway use. The innovation is dismissed by existing customers, but if new customers happy with less are found, the manufacturer can improve the product over time and then enter the mainstream market. For example, minicomputers were initially positioned for small businesses that could not afford mainframes, then became more capable and undermined the mainframe industry. Later, PCs and workstations, initially too weak to do much, grew more capable and destroyed the once-lucrative minicomputer market.

An interesting insight is that established companies can fail despite being well-managed. Many made rational decisions. They listened to customers and improved their market share of profitable product lines rather than diverting resources into speculative products with no established markets.

Some firms that successfully embraced disruptive innovations learned to survive with few sales and low profit margins. Because dominant companies are structured to handle large volume and high margins, Christensen concludes that a large company can best embrace a disruptive innovation by creating an autonomous entity, as IBM did when it located its PC development team in Florida.

Using the insights of Mintzberg and Christensen for understanding

For decades, Mintzberg’s analysis has helped me understand the results of quantitative and qualitative research, mine and others’, as described in the papers cited above and two handbook chapters [4]. Reading The Innovator’s Dilemma, I reevaluated my experiences at Wang Laboratories, a successful minicomputer company that, like the others, underestimated PCs and Unix-based workstations. It also made sense of more recent experiences at Microsoft, as well as events in HCI history.

For example, a former Xerox PARC engineer recounted his work on the Alto, the first computer sporting a GUI that was intended for commercial sale. A quarter century later he still seemed exasperated with Xerox marketers for pricing the Alto to provide the same high-margin return as photocopiers. With a lower price, the Alto could have found a market and created the personal computer industry. The marketing decision seems clueless in hindsight, but in Christensen’s framework it can be seen as sensible unless handling a disruptive innovation—which the personal computer turned out to be.

A colleague said, “An innovator’s dilemma book could be written about Microsoft.” Indeed. It would describe successes and failures. Not long after The Innovator’s Dilemma was published, Xbox development began. The team was located far from the main Redmond site, reportedly to let them develop their own approach, as Christensen would recommend. Unsuccessful efforts are less easily discussed, but Courier might be a possibility.

Using (or avoiding) the frameworks as a basis for predictions

Mintzberg’s typology has proven relevant so often that I would recommend including members of each of his groups when assessing requirements or testing designs. His detailed analysis could suggest design features, but because of the complex, rapidly evolving interdependencies in how technology is used in organizations, empirical assessment is necessary.

Christensen is more prescriptive, arguing that sustaining innovations require one approach and a timely disruptive innovation requires a different approach. But if disruptiveness is a continuum, rather than either-or, choosing the approach could be difficult. And getting the timing right could be even trickier. Can one accurately assess disruptiveness? My intuition is, rarely.

Christensen courageously concluded the 1997 book by analyzing a possible disruptive innovation, the electric car. His approach impressed me—methodical, logical, building on his lessons from history. He concluded that the electric car was disruptive and provided guidance for its marketing. In my view, this revealed the challenges. He projected that only in 2020 would electric vehicle acceleration intersect mainstream demands (0 to 60 mph in 10 seconds). Reportedly the Nissan Leaf has achieved that and the Tesla has reached five seconds. On cruising range he was also pessimistic. Unfortunately, his recommendations depend on the accuracy of these and other trends. He suggested a new low-end market (typical for the disruptive innovations that he studied) such as high school students, who decades earlier fell in love with the disruptive Honda 50 motorcycle; instead, electric cars focus on appealing to existing high-end drivers. A hybrid approach by established manufacturers, which failed for his mechanical excavator companies, has been a major automobile innovation success story.

Christensen reverse-engineered success cases, a method with weaknesses that I described in an earlier blog post. We are not told how often plausible disruptive innovations failed or were developed too soon. Christensen says that innovators must be willing to fail a couple times before succeeding. Unfortunately, there is no way to differentiate two failures of an innovation that will succeed from two failures of a bad or premature idea. Is it “the third time is a charm” or “three strikes and you’re out”? If 2/3 of possible disruptive innovations pan out in a reasonable time frame, an organization would be foolish not to plan for one. If only one in 100 succeed, it could be better to cross your fingers and invest the resources in sustaining innovations.

Our field is uniquely positioned to explore these challenges. Most industries studied by Christensen had about one disruptive innovation per century. Disk drives, which Christensen describes as the fruit flies of the business world, were disrupted every three or four years. He never mentions Moore’s law. He was trying to build a general case, but semiconductor advances do guarantee a flow of disruptive innovation. New markets appear as prices fall and performance rises. A premature effort can be rescued by semiconductor advances: The Apple Macintosh, a disruptive innovation for the PC market, was released in 1984. It failed, but models in late 1985 and early 1986 with more memory and processor power succeeded.

Despite the assistance of Moore’s law, the success rate for innovative software applications has been estimated to be 10%. Many promising, potentially disruptive applications failed to meet expectations for two or three decades: speech recognition and language understanding, desktop videoconferencing, neural nets, workflow management systems, and so on. The odds of correctly timing a breakthrough in a field that has one each century are worse. Someone will nail it, but how many will try too soon and be forgotten?

The weakness of Christensen’s historical analysis as a tool for prediction is emphasized by Harvard historian Jill Lepore in a New Yorker article appearing after this post was drafted. Some of Christensen’s cases are more ambiguous when examined closely, although Christensen did describe exceptions in his chapter notes. Lepore objects to the subsequent use of the disruptive innovation framework by Christensen and others to make predictions in diverse fields, notably education.

These are healthy concerns, but I see a lot of substance in the analysis. No mainframe company succeeded in the minicomputer market. No minicomputer company succeeded in efforts to make PCs. They were many, they were highly profitable, and save IBM, they disappeared.

I’ll take the plunge by suggesting that a disruptive innovation is unfolding in K-12 education. The background is in posts that I wrote before reading Christensen: “A Perfect Storm” and “True Digital Natives.” In Christensen’s terms, 1:1 device-per-student deployments transform the value network. They enable new pedagogical and administrative approaches, high-resolution digital pens, advanced note-taking tools, and handwriting recognition software (for searching notes). As with many disruptive innovations at the outset, the market of 1:1 deployments is too small to attract mainstream sales and marketing. But appropriate pedagogy has been developed, prices are falling fast, and infrastructure is being built out. Proven benefits make widespread deployment inevitable. The question is, when? The principal obstacle in the U.S. is declining state support for professional development for teachers.

Conclusion: the water we swim in

Many of my cohort have worked in several organizations over our careers. Young people are told to expect greater volatility. It makes sense to invest in learning about organizations. If you start a discussion group, you now have two recommendations.

Endnotes

1. Published in D. Miller & P. H. Friesen (Eds.), Organizations: A Quantum View, Prentice-Hall, 1984 and reprinted in R. Baecker (Ed.), Readings in Computer Supported Cooperative Work and Groupware, Morgan Kaufmann, 1995.

2. A modified version appeared as Electronic social fields in bureaucracies.

3. A moving target: The evolution of HCI. In J. Jacko (Ed.), Human-computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications. (3rd edition). Taylor & Francis, 2012. An updated version is available on my web page.

4. J. Grudin & S. Poltrock, 2012. Taxonomy and theory in Computer Supported Cooperative Work. In S.W. Kozlowski (Ed.), Handbook of Organizational Psychology, 1323-1348. Oxford University Press. Updated version on my web page; 
J. Grudin, 2014. Organizational adoption of new communication technologies. In H. Topi (Ed.), Computer Science Handbook, Vol. II. Chapman & Hall / CRC Press.

Thanks to John King and Gayna Williams for discussions.



Posted in: on Mon, June 23, 2014 - 10:36:18

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


May I have your attention


Authors: Ashley Karr
Posted: Sun, June 01, 2014 - 5:40:51

Takeaway: Removing ourselves from stimulation, electronic or otherwise, is crucial for our brains to function at their peak, and focusing on one task at a time with as little outside distraction as possible is the best way to increase task performance.

I will begin this article by saying that I love meta. The fact that we build new technologies to study how technology affects us makes me laugh. Anyway, what this article is really about is attention, so that is where I will focus ours.

The modern field of attention research began in the 1980s when brain-imaging machines became widely available. Researchers found that shifting attention from one task or point of focus to another greatly decreases performance. No exceptions. No excuses. No special cases. Human beings do not perform as well as possible on any task when they multitask.

Studies also show that simple anticipation of another stimulus or task can take up precious resources in our working memory, which means we can’t store and integrate information as well as we should. Additionally, downtime is very important for the brain. During downtime, the brain processes information and turns it into long-term memories. Constant stimulation prevents information processing and solidifying, and our brains become fatigued. 

It appears that removing ourselves from stimulation, electronic or otherwise, is crucial for our brains to function at their peak, and focusing on one task at a time with as little outside distraction as possible is the best way to increase task performance. Some studies have found that people learn better after walking in rural areas as opposed to walking in urban environments. Researchers are also investigating how electronic micro-breaks, like playing a two-minute game on a cell phone, affect the brain. Initial findings do not support electronic micro-breaks as true “brain breaks” that allow for information processing and prevent mental fatigue.

Based on this research, I brainstormed a few ways we can de-stimulate. Here are some of my ideas:

  • Only answer and respond to emails for a window of one to two hours a day.
  • Take a five-minute break by going outside and sitting on a bench or the grass. Just sit there. Don’t even bring your phone.
  • Unplug your TV and wireless router at least one day a week.
  • Go camping.
  • Turn off your cell phone for an hour, a day, or an entire weekend.
  • Only make phone calls at a designated spot and time in a quiet place away from distractions.
  • Stop reading this article, turn off your computer, put down your phone, and go outside.


Posted in: on Sun, June 01, 2014 - 5:40:51

Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


So close but yet so far away


Authors: Monica Granfield
Posted: Thu, May 29, 2014 - 10:12:06

In the last five days I have clocked at least eight hours on three Fortune 100 consumer websites for tasks that should not take more than an hour combined. What made my poor experiences even more ironic is that these are companies in the service industry that pride themselves on innovation around the customer experience!

This has left me scratching my head, wondering how these customer-centric companies could have veered so greatly from their missions and failed at the interaction level. The sites were professionally created consumer sites—well branded, with impeccable visual presentations, thus setting my expectations high for a pleasant user experience.

In two cases the scenarios were not directly generating revenue; instead, I was asking for assistance with a recent purchase. The experiences involved poorly thought-out task flows—one provided little guidance and feedback when entering materials, and the other was, as it turns out, operationally incorrect. Both of these interaction experiences failed and forced me to call customer support.

In the first case, with JetBlue, once the voice on the other end of the phone materialized I assumed I could breathe a sigh of relief. The problem is going to be cleared up and I can get these tasks off of my plate. No such luck. The experience continued to deteriorate over the phone. Two calls mysteriously dropped off midway through my help session. Remaining calm, I continued to forge on. I was told twice in no uncertain terms that the website would not allow me to do what I had done, so the service rep would not instruct me on how to rectify the situation and properly register my family for the program offered. I was allowed to invite minors to my family pool without a frequent flyer number—there were no instructions to tell me otherwise—and the UI allowed me to accomplish this. I was fortunate that in this case, an actual confirmation e-mail had been sent to me. This did not help in promoting my cause or rectifying the situation with the service rep, as he told me, "No, that can't happen." “Hmmm,” I thought, “then what am I looking at?” The steps to sign up for the program had no instructions and were completely undiscoverable. So much so that, in the end, the representative had to put me on hold for 10 minutes to go and somehow delete whatever was magically made possible to me via the website and sign me up manually. The steps that followed this process to get my family "registered" were also not discoverable, and I could not have continued to complete the process (and remember, I am a seasoned computer user/UX designer) without the assistance of the customer service representative. During the two hours that it took to accomplish this 10-minute task, my husband must have asked me 10 times, "Is this really worth it?" while my oldest child intermittently hollered, "Virgin, we should have flown Virgin." Well, our tickets were already purchased, but how many other people don't bother, abandoning this program or the airline altogether based on these types of poor user experiences? I finally did get us all signed up for the program, although we are still not sure if my miles are in this pool or not—hard to tell! I am excited for the trip, regardless of the UX fail. However, will I choose this airline next time I travel? Depends on my experience.

In the other case I spent upwards of 90 minutes painstakingly entering information, photos, and receipts into a highly unusable form and process with little or no instruction, only to hear nothing back from Disney until I picked up a phone and called them directly, two weeks later. This was due to a lost activation code for a child's CD, printed on a loose piece of paper easily lost by an excited child waiting to watch the latest Disney release! Why not print the code on the CD or on the insert in the case? I did pay for it. Why is it so difficult to get this number back or to more securely adhere it to the packaging? I will admit, once when attempting to use their site to book a trip to Disney World, I became so dismayed that we stayed outside the park. Sorry, Walt. Of course I was not allowed to enter a new case for this issue, as one for this product ID already exists! UX fail.

My last and perhaps most disturbing experience is with a tactic used by many sites, including Care.com and SitterCity.com, to passively—and, it seems to me, questionably—collect revenue from unknowing users. Funnily enough, the audience is busy parents! You check a box that says "Do not auto charge my account after my selected pay period ends" and lo and behold, you are charged anyway. And when you call to contest the charges, assuming that you catch this and call immediately, they will apologetically "refund" your money. Relying on the fact that you can't recall if you actually checked off that little option box—and where oh where was that little box anyway?—has left me saying, UX fail.

As an experience designer I have been left wondering what to think about these experiences.

My conclusion is this: Scenarios that do not generate obvious revenue are not given UX priority or the attention needed to craft an elegant and usable recovery experience. Recovery experiences are important scenarios that, if not carefully considered, will eventually result in lost revenue. Evidence of this can be seen in Jared Spool’s blog post "The $300 Million Button." The business intent of the paradigm was to get users registered. Users who did not want to register became frustrated and abandoned their purchases. Once this was identified and rectified, there was a significant increase in revenue. User abandonment of a product or brand can happen as often after a purchase as before or during one. Companies need to embrace UX design in addressing the end-to-end experience and how it impacts the business, from the customer perspective and not just through revenue-generating channels. This will lead to a better user experience, repeat customers, and increased revenue. This is what good UX can do!



Posted in: on Thu, May 29, 2014 - 10:12:06

Monica Granfield

Monica Granfield is a user experience designer at Imprivata. The views expressed on this website are exclusively her own and are not meant to reflect or represent the views of Imprivata.


Philosophical robbery


Authors: Jonathan Grudin
Posted: Wed, May 28, 2014 - 9:19:20

In 1868 I read Dr. Holmes's poems, in the Sandwich Islands. A year and a half later I stole his dedication, without knowing it, and used it to dedicate my "Innocents Abroad" with. Ten years afterward I was talking with Dr. Holmes about it. He was not an ignorant ass—no, not he; and so when I said, "I know now where I stole, but who did you steal it from?" he said, "I don't remember; I only know I stole it from somebody, because I have never originated anything altogether myself, nor met anybody who had."

—Samuel Clemens (Mark Twain) in a letter to Anne Macy, reprinted in Anne Sullivan Macy, The Story Behind Helen Keller. Doubleday, Doran, and Co., 1933.

Accounts of plagiarism are epidemic. Charged are book authors, students, journalists, scientists, executives, and politicians. Technology makes it easier to find, cut, and paste another’s words—and easier to detect transgressions. Quotation marks and a citation only sometimes address the issue. Cats and mice work on tools for borrowing and detection, but technology is shifting the underlying context in ways that will be more important.

Plagiarism or synthesis: Plague or progress?

We appreciate novelty in art and technology. We may also nod at the adage, “There is nothing new under the sun.” Twain isn’t alone in questioning the emphasis on originality that emerged in the Enlightenment. Arthur Koestler’s The Act of Creation is a compelling analysis of the borrowing that underlies literary and scientific achievement. Although we encourage writers to cite influences, we know that a full accounting isn’t possible. Further complicating any analysis is the prevalence of cryptomnesia or unconscious borrowing, which fascinated Twain and has been experimentally demonstrated. Writers of undeniable originality, such as Friedrich Nietzsche, borrowed heavily without realizing it.

Believing that an idea is original could motivate one to work harder, perhaps borrowing more and building a stronger synthesis. The aspiration to be original could have this benefit. I’ve seen students, faculty, and product designers lose interest when shown that their work was not entirely “invented here.” They might have been more productive if unaware of the precedent.

An earlier post on creativity, which cited a professor who directs students to submit work in which every sentence is borrowed, proposed that the availability of information and the visibility of precedents will shift our focus from originality to a stronger embrace of synthesis. It seems a cop-out to say that synthesis is a form of originality. The distinction is evident in “NIH syndrome,” the reluctance to build overtly on the work of others. 

Prior to considering when citation is and is not required or perhaps even a good thing, let’s establish that there is no universal agreement on best practices.

Cultural differences

Some professors say, “I learn more from my students than they do from me.” As a professor I learned from students, but I hope they learned more from me, because I was a slow learner. One afternoon at culturally heterogeneous UC Irvine, I realized that a grade-grubber who had all term shown no respect for my time by arguing endlessly for points had in fact been sincerely demonstrating respect for the course and for my regard, which he felt a higher grade would reflect. Raised in a haggling culture abroad, he assumed that I understood that his efforts demonstrated respect, and almost fell on the floor in terror when I said mildly and constructively that he was developing a reputation for being difficult. It had a happy ending.

The faculty shared plagiarism stories. My first lecture in a “technology and society” course included a plagiarism handout. I explained it, asked if they understood it, and sometimes asked everyone who planned to plagiarize to raise a hand “because one of you probably will, and it will be a lot easier if you let me know now.” Gentler than some colleagues, I only failed a plagiarist on the assignment. But that was enough to affect the grades of students, many of whom were Asian Americans whose families counted on them to become engineers. Parents dropped some at the university in the morning and picked them up in the afternoon.

In 1995 I spent a sabbatical in a top lab at a leading Asian university. I discovered that uncited quotation was acceptable. Students plagiarized liberally. Uncited sentences and paragraphs from my publications turned up in term papers for my class. I thought, “OK, we make a big deal of quotation marks and a reference. They don’t.” This didn’t shock me. My first degrees were in math and physics, where proof originators were rarely cited or mentioned. No “Newton, 1687.”

An end-of-term event riveted my attention. Each senior undergraduate in the lab was assigned a paper to present to the faculty and students as their own work, in the first person singular! Organized plagiarism! It was brilliant. The student must understand the work inside out. A student who is asked “Why did you include a certain step?” can’t say “I don’t know why the authors did that.”

I recognized it. I once took a method acting class. Good actors are plagiarists, marshalling their resources by convincing themselves that the words in a script are their own. Plagiarism as an effective teaching device!? Be that as it may, after years of teaching, one of the two grades I regret giving was to a fellow who, before my discovery, may have followed parental guidance: work hard to find and reproduce relevant passages. He just hadn’t absorbed our custom of bracketing them with small curlicues.

Copyright violation

Plagiarism is not a crime, but violating copyright is. U.S. copyright law isn’t fully sorted out, but it represents a weighing of commercial and use issues, and a not yet fully defined concept of “fair use” exceptions that considers the length, percentage, and centrality of the reproduced material, the effect of copying on the market value of the original, and the intent (a parody or critical review that reduces the original’s market value may include excerpts).

My focus is on ethical and originality considerations, so for copyright infringement guidance consult your attorney. I once inquired into how much a copyrighted paper must be changed to republish it. I found a vast gulf in opinion between seasoned authors (“very little”) and publishers (“most of it”). Publishers haven’t seemed to bother about scientific work, but with plagiarism-detection and micropayment-collection software, that could change.

Factors in weighing originality and ethics

1. How exact is the copy, from identical to paraphrase to “idea theft”? What is the transgression—lack of giving credit? An explicit or implied false assertion of originality or effort? A false claim to understand the material?

Students are told, “Put it in your own words, then it isn’t plagiarism.” This is true when the information is general knowledge. Paraphrasing a passage from a textbook, a lecture, or a friend’s work may suffice. Information from a unique source, such as a published paper, generally deserves and is improved by a source citation.

If omitting a citation causes readers to infer that an author originated the work, it crosses the line. For example, a journalist who uses the work of other journalists, even if every sentence is rewritten, creates a false impression of having done the reporting and is considered a plagiarist. Crediting the original journalist solves the problem if copyright isn’t violated. There are grey areas—reports of press conferences may not identify those who asked the questions. When a copyright expires, anyone can publish the work, but to not credit the author would be bad form.

With student work that is intended to develop or demonstrate mastery, copying undermines the basic intent. Especially digital copying—some teachers have students write out work by hand, figuring that even if copied from a friend’s paper, something could stick as it goes from eyes through brain to fingers. For a student who has truly mastered a concept, copying “busy work” is less troubling. (We hope computer-based adaptive learning, like one-to-one tutoring, will reduce busy work.)

Idea theft is an often-expressed concern of students and faculty. We may agree that ideas are cheap and following through is the hard part, but to credit a source of an idea is appropriate even when the borrowing is conceptual.

2. When is attribution insufficient?

As noted above, attribution won’t shield an author from illegal copyright violation. Although the law is unsettled, copying with or without attribution may be allowed for “transformative works” to which the borrower has made substantive additions. Transformative use wouldn’t justify idea theft—finding inspiration in the work of others is routine, but not developing the idea of someone who might intend to develop it further.

3. When is attribution unnecessary? How is technology changing this?

In cases of cryptomnesia or unconscious plagiarism of the sort Mark Twain owned up to, attribution is absent because the author is unaware of the theft. Experiments have shown that unconscious borrowing is easy to induce and undoubtedly widespread. Nevertheless, a few years ago, a young author had a positively reviewed book withdrawn by the publisher after parallels were noticed with a book she acknowledged having read often and loved. The media feeding frenzy was unjustified; it was clearly cryptomnesia, with few or no passages reproduced verbatim.

Homer passed on epic tales without crediting those he learned them from. Oral cultures can’t afford the baggage. Change was slow and is not complete, and today cultures vary in their distance from oral traditions. When printing arrived, “philosophical robbery” was rampant. Early journals reprinted material from other journals without permission. Benjamin Franklin invented some of his maxims and appropriated others without credit. Only recently have we decided to expend paper and ink to credit past and present colleagues for the benefit of present and future readers.

Shakespeare borrowed heavily from an earlier Italian work in writing Romeo and Juliet, on which 1957’s West Side Story was based. The first version of West Side Story was shelved in 1947 when the authors realized how much they’d borrowed from other plays that were also based on Shakespeare. Twain again: “Substantially all ideas are second-hand, consciously and unconsciously drawn from a million outside sources, and daily used by the garnerer with a pride and satisfaction born of the superstition that he originated them; whereas there is not a rag of originality about them anywhere except the little discoloration they get from his mental and moral calibre and his temperament, which is revealed in characteristics of phrasing. . . . It takes a thousand men to invent a telegraph, or a steam engine, or a phonograph, or a photograph, or a telephone, or any other important thing—and the last man gets the credit and we forget the others. He added his little mite—that is all he did.”

“Gladwellesque” books are artful syntheses of others’ work. Some of the contributing scholars may grumble that they should share in royalties, but at least they get credit, which is their due and which gives authority to a synthesis. However, a writer constantly weighs when contributions merit overt credit. In the natural sciences citation is often omitted. It slows the reader and distracts from the elegance of the pure science. It forces a writer to take sides in historical paternity/maternity quarrels and decide whether his or her slight improvement in the elegance of a proof also merits mention.

In less formal writing the custom is to acknowledge less. Magazines may limit or altogether eliminate citations, allowing only occasional mentions in the text.

Consider this essay. I cited a primary source for the Twain quotations, but not the secondary source where I found them. I didn’t note that the first complaint of “philosophical robbery” that I know about was by the chemist Robert Boyle soon after the printing press came into use, or where I learned that. I didn’t credit Wikipedia for the origins of West Side Story.

Technology is rapidly expanding the realm of information that I consider common knowledge that needs no citation. My rule of thumb is that if a reader can find the source in fewer than five seconds with a search engine and obvious keywords, I don’t need to cite it, although sometimes I will. For example, anyone can quickly learn that “there is nothing new under the sun” comes from Ecclesiastes.

Reasons for omission vary. I provided a source for cryptomnesia but not for the author who fell afoul of it, feeling that after the lack of media generosity she has a “right to be forgotten.” It is also a tangled web we weave when we practice to communicate directly (a transformative borrowing I shall leave uncredited).

As we focus on building plagiarism detectors to trip up students, technology will make all of our borrowings more visible, the conscious and the unconscious. There is no turning back, but the emerging emphasis on synthesis may resonate more with the oral tradition of aggregation than with the recent focus on individual analysis. In the swarm as in the tribe, credit is unnecessary.



Posted in: on Wed, May 28, 2014 - 9:19:20

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Bringing together designers, ePatients, and medical personnel


Authors: Richard Anderson
Posted: Fri, May 23, 2014 - 10:06:54

Back in 1989–1991, I served on the committee that founded BayCHI, the San Francisco Bay Area chapter of ACM SIGCHI. I became its first elected chair and served as its first appointed program chair for 12 years. I also served as SIGCHI’s Local Chapters chair for five years, supporting the founding and development of SIGCHI chapters around the world.

Much has happened since then. Perhaps of greatest significance were my horrific experiences with the U.S. healthcare system. My healthcare nightmare changed my life and has prompted me to focus on what can be done to dramatically redesign the healthcare system and the patient experience. Indeed, several of my Interactions blog posts reflect that focus, with a large part of that focus being on changing the roles and relationships of and between patients and medical personnel and designers. You’ll see that in, for example, “Utilizing patients in the experience design process,” “Learning from ePatient (scholar)s,” “Are you trying to solve the right problem?,” “The importance of the social to achieving the personal,” and “No more worshiping at the altar of our cathedrals of business.”

All this has led me to start a new local chapter, but this one is not of SIGCHI. This one is for a combination of ePatients, medical personnel, and designers. This one is for changing the healthcare system. This one is the first local chapter of the Society for Participatory Medicine.

Topics/issues to be addressed by the chapter should be of interest to many Interactions readers. They include the ePatient movement, peer-to-peer healthcare, other uses of social media in healthcare, human-centered healthcare design and innovation, doctors and patients as designers, the quantified self, patient and doctor engagement, empathy, healthcare technology, patient experiences of the healthcare system, and more. When Jon Kolko and I were the editors-in-chief of Interactions, we published lots of articles that addressed this level of topics/issues. One of those was a cover story entitled “Reframing health to embrace design of our own well-being.” (Somewhat coincidentally, two of the article’s authors made a presentation about the content of the article at a BayCHI meeting.)

If you reside anywhere in the San Francisco Bay Area and are interested in the topics/issues listed above, I invite you to join this new local chapter. If you know of others in the San Francisco Bay Area who you think might be interested, please let them know about the group as well.

The chapter is just starting. Indeed, our first meeting has not yet been scheduled, as I'm still seeking venue options (and sponsors). If you know of any venue (or sponsor) possibilities, please let me know.

It feels good to be getting back into the local chapter business. I hope you’ll check us out.



Posted in: on Fri, May 23, 2014 - 10:06:54

Richard Anderson

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.


Why teaching tech matters


Authors: Ashley Karr
Posted: Fri, May 16, 2014 - 9:12:56

Education is one of the most valuable ways that we can improve quality of life for ourselves and others. This improvement applies to teachers as well as students. Having just taught a ten-week user experience design immersive course, I am keenly aware of how my life is better as a result. The following is a list of how my life has improved thanks to my co-instructor, our students, course producer, and supportive staff:

Meaning and engagement 

Before I began this course, I was burnt out. My work had lost meaning and engagement. I had been building technology for organizations and groups that had lost their soul and passion for good design. Instead, they pursued profit and fantasy deadlines. The irony of engaging in a type of engineering called “human-centered design” and “user experience design” in environments like this was not amusing. After the first day of this course, meaning and engagement had found their way back into my Monday through Friday 9 to 5. Why? The people. But I will write more about that later.

Leverage 

I found my way into anthropology, human factors engineering, human-computer interaction (HCI), and user experience (UX) design because I truly care about our world, its people, and other living creatures. I saw how these fields could help me operationalize my instinct to help. Additionally, by leveraging the power of computing technology, I could help a lot of people with minimal effort. (Spoken like a true humanitarian engineer!)

What I now realize is that by teaching others to be empathetic, ethical, human-centered designers and makers of computing technology, I am leveraging the boundless energy and power of my students as well. In the past ten weeks, my co-instructor and I have overseen roughly eighty projects, and a number of these have the potential to become consumer facing. Our eighteen students will go on to rich careers in UX, and I think it safe to say that each will be involved in at least one small design improvement per month for the next ten years at minimum. That means that my co-instructor and I will be part of at the very least 2,160 design improvements over the next decade, because our students will draw on the principles learned in our class to do their jobs properly. If each of those design improvements saves ten million people one minute of their time on a mundane task like bill pay, 21,600,000,000 minutes will have been freed to spend on hugging children, taking deep breaths, and other meaningful things.
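Here is a quick back-of-the-envelope check of that arithmetic, sketched in Python; every input is an assumption carried over from the paragraph above, not a measurement.

# Back-of-the-envelope check of the leverage estimate above.
# All inputs are the assumptions stated in the paragraph, not measured data.
students = 18
improvements_per_student = 1 * 12 * 10          # one improvement a month for ten years
total_improvements = students * improvements_per_student
print(total_improvements)                       # 2,160 design improvements

people_helped_per_improvement = 10_000_000      # ten million people
minutes_saved_per_person = 1
print(total_improvements * people_helped_per_improvement * minutes_saved_per_person)  # 21,600,000,000 minutes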

Deepening my KSAOs

KSAOs are the job-related knowledge, skills, attitudes, and other characteristics necessary for one to perform their job successfully. When one teaches a subject, their KSAOs deepen, because teaching is not separate from but an integral part of the learning process. My co-instructor and I feel that we’ve learned more than our students by teaching them UX fundamentals. I am not suggesting that the students haven’t paid attention—on the contrary! It is their questioning, challenging, creating, and building upon what we’ve taught them that has enabled us to achieve an even greater mastery of our trade. Interestingly, I now have an ability to discern who of my professional peers has taught and who has not. Teaching, like parenting, gives one a sense of humility and compassion that is hard to reach without the challenges that students and children place in front of you. I do have to add in the closing of this paragraph that my co-instructor and I are very, very proud of our students for thinking critically, independently, and deeply about design and technology. It makes us very proud—the good kind of proud.

Community and personal relationships

It is always and forever about the people. When I applied for the position to teach the UX design immersive course, I focused on the students and the relationships I would create with them. I thought about the people they were before applying for the course, the experiences we would have together over ten weeks, and the people they would become after graduation. I was so excited to meet the students on the first day of class, see their faces, hear their voices, and get to know about them as people rather than social media profiles. The relationships I have built with my students mean more than I had anticipated, and I have the added joy of finding life-long friends in my co-instructor, Jill; the course producer, Jaime; and the staff that supported us through the course. Beyond that, I have met many wonderful guest speakers, leaders, community organizers, and other professionals who have also enriched my experience and career. I am so thankful that all these wonderful people are now in my life, and that we are all working at minimum forty hours per week, two-thousand hours a year to help make the world a better place. I get chills when I think about it. 

Gratitude

For no reason whatsoever, I was born into a situation where I had opportunities that few people have ever had. I am a literate, educated, financially independent person developing cutting-edge technology. I am keenly aware of and grateful for these opportunities and believe them accidents of history and birth and not something that I earned or deserve. It seems that teaching others to be empathetic, ethical, human-centered designers of computing technology is a good way of making sure this privilege is not wasted.

Inspiration

I am happy to say that I am no longer burnt out, and I have rediscovered meaning and engagement in my work thanks to my students, co-instructor, course producer, and supportive staff. I have found that when I lack inspiration, motivation, and energy, what I am missing are quality relationships and interactions with my peers and colleagues. 

In conclusion

I hope this inspiration carries over to you, the reader, and inspires you to become a teacher or mentor. I will end this essay with a direct quote from my co-instructor and good friend, Jill DaSilva. Before I sat down to write this article, I asked her why she decided to teach this course. Here is her answer:

I teach because I have the opportunity to give back. At a time in my life when I needed to support my son and myself, there were people there to help me, teach me, and give me the chance to do what I loved for a living. I’m paying it forward. Also, what we make is meaningful, and I get to teach our students how to create things that improve other people’s circumstances. If we can remove suffering and increase happiness through what we make, then we are living good lives.


Posted in: on Fri, May 16, 2014 - 9:12:56

Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


Designing the cognitive future, part IV: Learning and child development


Authors: Juan Pablo Hourcade
Posted: Thu, May 15, 2014 - 9:29:54

In this post, I discuss how technology may affect learning and child development in the future, and how the HCI community can play a role in shaping what happens.

Let’s start with a quick primer on some of the latest theories of child development, such as dynamic systems theories and connectionism. These theories attempt to bridge what we know about the biology of the brain with well-established higher-level views on development from Piagetian and socio-cultural traditions. They see learning as change, and study how change happens.

One of the main emphases of these theories is on the notion of embodiment. They see learning and development occurring through interactions between the brain, the body, and the environment (including other people). When we learn to complete a task, we learn how to do it with our bodies, using the resources available in the environment. As learning, change, and development occur, the brain, the body, and the environment learn, change, and develop together.

These approaches also bring a “biological systems” view of the brain, with small components working together to accomplish tasks, and knowledge representations, behaviors, and skills emerging over time. Emerging skills, for example, are likely to show a great deal of variability initially, with the best alternatives becoming more likely over time. This also links to the concept of plasticity: Younger people find it much easier to change behavior and learn new skills (they also show greater variability in behavior), while doing so becomes more challenging later in life.

So how does all this link to technology? I think technology brings significant challenges and opportunities. The biggest change, perhaps the most radical in the history of humanity, is in the environments with which children may interact in the future. The richness of these environments, and the ability to modify and develop with them, will be unprecedented. In particular, there is a potential to give children access to appealing media to build and learn things that match their interests. Much of the research at the Interaction Design and Children (IDC) conference follows this path.

The biggest challenge is in making sure that technology doesn’t get in the way of the human connections that are paramount to child development. A secure attachment to primary caregivers (usually parents) plays a prominent role in helping children feel secure, regulate their emotions, learn to communicate, connect with others, self-reflect, and explore the world with confidence. We have increasing evidence that interactive devices are not always helping in this respect. For example, a recent study by Radesky and colleagues at Boston Medical Center found that parental use of interactive devices during meals led to negative interactions with children. 

Likewise, when providing children with access to interactive media, we need to make sure that this happens in a positive literacy environment. Typical characteristics of positive literacy environments include shared activities (e.g., reading a book or experiencing educational media together) and quality engagement by primary caregivers (e.g., use of wide, positive vocabulary). Obviously, access to appropriate media is also necessary. What are some characteristics to look for? The better options will provide open-ended possibilities, encourage or involve rich social interactions, and incorporate symbolic play and even physical activity.

So how should we design the future of learning? One path is to replace busy parents and teachers with interactive media that take their place, and may even provide children with emotional bonds (similar to the film Her), making sure they are able to accomplish tasks according to standardized measures. The path I would prefer is for technology to enrich the connections between children, caregivers, teachers, and peers; to expand our ways of communicating; to provide more options for engaging in activities together; and to enable self-expression, creativity, and exploration in unprecedented ways. 

How would you design the future of learning?



Posted in: on Thu, May 15, 2014 - 9:29:54

Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.


Margaret Atwood: Too big to fail?


Authors: Deborah Tatar
Posted: Mon, May 12, 2014 - 9:27:33

Margaret Atwood gave the opening plenary for the CHI conference in Toronto in late April. When Atwood’s name was announced at the Associate Chair meeting the prior December, the audience was divided into two groups: the “Who?” group and the group that gasped “The Margaret Atwood?” Even though the second group was significantly smaller than the first, Atwood’s presence was a coup for the conference. In retrospect, we were the beneficiaries of her slide into entrepreneurial endeavor. We benefited in two ways: first because her entrepreneurial interests are probably why she accepted the gig and then because she brought her narrative powers to describing design development. More than one person in the audience muttered that she was the deciding factor in conference attendance, some because she is a great writer, and some because she is a kind of futurist. The rest came to her keynote because CHI told them to. Luckily, because she is a great writer and a futurist, she could not fail to please, even with a presentation primarily based on voice and content alone. She used PowerPoint, but only to illustrate, not to structure. She did please.

To my mind, the most interesting part was the description of her childhood in the far north of Quebec, without running water, school, contact with the outside, or friends. She and her brother engaged in the kind of intense creative endeavor that the Brontë children (as in Charlotte, Emily, and Anne, authors of Jane Eyre, Wuthering Heights, and The Tenant of Wildfell Hall, respectively) did in the early 19th century on the Yorkshire moors, but happily neither Atwood nor her brother died early of tuberculosis. Also happily, Atwood was influenced by Flash Gordon rather than Pilgrim’s Progress. And her experience with the can-do (actually, the must-do) spirit required for existence in the wild contributed to her intrepid voice.

Her talk was charming and interesting. And she correctly pointed out the importance of self-driven, unstructured exploration in creativity. In fact, her discussion was very similar to a speech Helen Caldicott, the founder of Physicians for Social Responsibility, gave in the mid-1990s, reminiscing on the dangerous chances that were an everyday part of her childhood in Australia and the judgment that skirting such dangers taught. My own paper on playground games and the dissemination of control in computing (DIS 2008) was based on that talk as well as two other factors: my memory of routine freedom in my own childhood in Ohio (“Just be home in time for dinner!”) and Buck’s Rock Creative and Performing Arts Camp in New Milford, Connecticut. Buck’s Rock was founded in 1942 by German refugees and permitted students to choose their own activities all day long, every day. The brief, blissful, and extremely expensive month I spent at what was then called “Buck’s Rock Work Camp” in the summer of 1973 set my internal compass up for life.

Despite the considerable interest of her story, Ms. Atwood was deeply wrong in one respect, and in some way it is her error rather than her perception that is the important take-away. She repeated at least twice, and perhaps more often, that we cannot build what we cannot imagine. 

Oh, if only she were right! But she is wrong. She ignores the existence of banks too big to fail. We might, more formally, refer to this and related phenomena as emergent effects. Mitch Resnick, Uri Wilensky, and Walter Stroup have been writing for years about teaching children to model the emergent effects of complex systems, using a distributed parallel version of the Logo computer language called StarLogo. Mitch has a lovely small 1997 book called Turtles, Termites, and Traffic Jams from MIT Press. And of course the notion of complexity theory as pursued at the Santa Fe Institute brings formality and rigor to the structure of information.
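To make “emergent effects” concrete, here is a minimal sketch in Python—my own toy illustration, not StarLogo and not Resnick’s code. Cars on a ring road follow a single local rule (move forward only if the next cell is free), yet at high density a persistent traffic jam emerges that no individual rule, or designer, ever specified.

import random

ROAD_LENGTH = 60
CAR_DENSITY = 0.5      # assumption: half the cells start out occupied
STEPS = 20

road = [random.random() < CAR_DENSITY for _ in range(ROAD_LENGTH)]

def step(road):
    """Advance one tick: a car moves forward only if the cell ahead is empty."""
    new_road = [False] * len(road)
    for i, occupied in enumerate(road):
        if not occupied:
            continue
        ahead = (i + 1) % len(road)
        if road[ahead]:
            new_road[i] = True       # blocked: stay put (this is how jams persist)
        else:
            new_road[ahead] = True   # free: move up one cell
    return new_road

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in road))   # '#' marks a car
    road = step(road)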

These are some of the intellectual roots of Big Data. More significant is the practical consequence as Big Data increasingly controls freedom of action. The fear is that, armed with information, the incessant insistence of the computer that I recently wrote about in an Interactions feature will fragment and disperse the unofficial mechanisms that the powerless have always used to get influence. When we let big corporations, dribble-by-dribble, have our information, we do not intend to make a world designed only by monetization. Indeed, I’m sure that at one point the Google founders actually thought that they would “do no evil.” Unintended consequences. Orwell could imagine 1984 but he could not imagine the little steps, quirks, and limitations by which 1984 would be bootstrapped. 

Ironically, I acquired this sensitivity to the power of the computer to obliterate the mechanisms of the already disenfranchised in part by reading The Handmaid’s Tale, a book by Margaret Atwood. 



Posted in: on Mon, May 12, 2014 - 9:27:33

Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.


Wireframes defined


Authors: Ashley Karr
Posted: Tue, April 29, 2014 - 12:18:59

Takeaway: Wireframing is the phase of the design process where thoughts become tangible. A wireframe is a visual 2D model of a 3D object. Within website design, it is a basic visual guide representing the layout or skeletal framework of a web interface. Page schematic and screen blueprint are frequently used synonyms.

When user experience (UX) professionals create wireframes, they arrange user interface elements to best accomplish a particular, predetermined task or purpose. They focus on function, behavior, content priority and placement, page layout, and navigational systems. Wireframes lack graphics and a fancy look and feel. Wireframing is an effective rapid prototyping technique in part because it saves huge amounts of time and money. It allows designers to gauge a design concept's practicality and efficacy without a large investment of these two very important resources. It combines high-level structural work, such as flow charts, site maps, and screen design, and connects the underlying conceptual structure (the information architecture) to the design's surface (the user interface). Designers use wireframes for mobile sites, computer applications, and other screen-based products that involve human-computer interaction (HCI).
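As a small, hypothetical illustration of that point—a wireframe captures function, content priority, placement, and navigation rather than visual polish—here is a sketch in Python; the screen, region names, and priorities are invented for the example and are not drawn from any particular tool or project.

from dataclasses import dataclass, field

@dataclass
class Region:
    name: str              # what the element is for (function), not what it looks like
    priority: int          # content priority: 1 = most important on the page
    links_to: list = field(default_factory=list)   # navigation: where this element leads

checkout_wireframe = [
    Region("order summary", priority=1),
    Region("payment form", priority=2, links_to=["confirmation page"]),
    Region("help / contact", priority=3, links_to=["support page"]),
]

# Reviewing structure alone -- no colors, fonts, or imagery -- is the point of a wireframe.
for region in sorted(checkout_wireframe, key=lambda r: r.priority):
    print(f"{region.priority}. {region.name} -> {region.links_to or 'no navigation'}")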

The following are a few best practices for wireframing in the digital space:

  • Be a planner. Gather information before you start jumping into wireframes. Make sure you, your team, and your clients are clear on the design's missions, goals, objectives, and functions. Make sure you are also just as clear on your stakeholders and users. (Did you remember that maintenance workers are design users, as well? No? Better drop the wireframe and spend a bit more time thinking...) 

  • Be a philosopher. Wireframing is "...like a finger pointing away to the moon. Don't concentrate on the finger or you will miss all that heavenly glory." Yes, I just quoted Bruce Lee in Enter the Dragon. How does this apply to wireframing? I will tell you. People tend to get hung up on the medium and not the quality and appropriateness of the wireframe. Wireframing programs are a dime a dozen. In my real-life, actual UX experience, I have discovered that people LOVE paper wireframes. They get excited when they are finally allowed to unplug their fingers from the keyboard, peel their eyes away from the screen, and do batteries-not-included usability tests outside in a courtyard chock-full of pigeons, fountains, babies in strollers, and dogs playing fetch with their owners.

  • Be a child. So many people are afraid of being wrong or seemingly silly in a professional environment that they paralyze their creativity. If you are one of those people who worry so much about what others may think of you during a brainstorming session, go hang out with pre-schoolers for a morning. Dump a big pile of Crayolas on a table, fling around some colored construction paper, and notice what happens.

Wireframing is neither new nor innovative. Humans have wireframed for millennia—sketching out inspirations for new inventions, drafting designs for buildings and civil engineering projects, and developing schematics for massive travel and communication networks. Arguably, our first medium was the cave wall and a lump of charcoal left over from the previous night's fire. As human technology evolved from these prehistoric wares to papyrus to paper to computerized 3D-modeling programs and interactive systems, such as Balsamiq, Axure, and InDesign, what and how we wireframe have evolved in step. However, why we wireframe and wireframing best practices are timeless. In order to develop a viable final product, be it the Parthenon or a mobile phone app to track blood sugar levels for diabetics, humans have depended upon wireframes to organize and synchronize the design team's efforts and to test early iterations to avoid disaster and make a good faith attempt at success.



Posted in: on Tue, April 29, 2014 - 12:18:59

Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


True digital natives


Authors: Jonathan Grudin
Posted: Tue, April 22, 2014 - 11:49:32

They’re coming. They may not yet be recognizable, but some are walking—or crawling—among us.

The term digital native was coined in 2001 to describe technology-using youths, some of whom are now approaching middle age. At an early age they used family computers at home. They took computer skills classes in school. They met for other classes in computer labs or had device carts wheeled in. They acquired mobile phones as they approached their teen years.

They are not intimidated by tech. But they aren’t fully digital. Paper and three-ring binders are still alive and well in schools. Last September, Seattle was plagued for weeks by a shortage of bound quadrille-ruled notebooks. Many schools still ban mobile phone use, reinforcing students’ longstanding suspicion that school has little to do with the phone-tethered world outside. Scattered reports of BYOD in the workplace alarm IT professionals, but employers generally assume that new hires will adopt the technology that comes with a job. Enterprises see the disappearance of technophobia as a plus, failing to anticipate the new challenges that will accompany greater technophilia.

Today, a different cohort is starting to emerge. Psychologically different. Not the final stage of digital evolution, but a significant change.

1:1

A previous post described forces behind the spread of device-per-student deployments: changes in pedagogy and assessment methods, sharply declining prices resulting from Moore’s law, manufacturing efficiencies, and the economies of scale that accompany growing demand.

“One laptop per child” visions began almost half a century ago with Alan Kay’s Dynabook concept. Kay pursued education initiatives for decades. The nine-year-old OLPC consortium aimed unsuccessfully for a $100 device, encountering technical and organizational challenges. The site’s once-active blog has been quiet for six months. Its Wikipedia page reflects no new developments for two years. Media accounts consist of claims that OLPC has closed its doors; these are disputed, but the debate speaks for itself.

I don’t question the potential of digital technology in education. Yes, pedagogy, compensation and ongoing professional development for teachers, and infrastructure are higher priorities to which OLPC might have paid more attention. But digital technology is so fluid—when there is enough, it will find its way. OLPC was cycles of Moore’s law ahead of itself—but how many cycles? If nine years wasn’t enough, might another two or three suffice? Pedagogy is improving and infrastructure is coming into place. Support for teachers is the one area of uncertainty; let’s hope it picks up.

Insofar as technology is concerned, the light at the end of the long tunnel is getting bright. Capability grows and cost declines. For the price of several laptop carts five years ago, a school can provide all students with tablets that can do more. And 1:1 makes a tremendous difference.

The obvious difference is greater use, which leads to knowledge of where and how to use technology, and when to avoid using it. Students who use a device for a few hours a week can’t acquire the familiarity and skills of those who carry one to every class, on field trips, and home.

Some features make little sense until use is 1:1. Consider a high-resolution digital pen. Most of us sketch and take notes on paper, but for serious work we type and use graphics packages. Education is different: For both students and teachers, handwriting and sketching are part of the final product. Students don’t type up handwritten class notes or algebraic equations. They draw the parts of a cell, light going through lenses, and history timelines. Teachers mark papers by hand. When lecturing, they guide student attention by underlining, circling, and drawing connecting arrows.

Only when students carry a device can they use it to take notes in every class. When everyone has a high-resolution digital pen, a class can completely eliminate the use of paper. It is happening now. It would happen faster were it not for the familiar customer-user distinction. The customers—such as school board members deciding on technology acquisition—think, “I don’t use a digital pen and I’m successful; isn’t it a frill that costs several dollars per device and is easily lost or broken?” They don’t see that reduced use of paper and substantial efficiency gains will yield net savings. They don’t realize that students who are familiar with the technology will use it to become more productive workers than their predecessors, including those who are today making the purchasing decisions.

There are unknowns. We have learned that when everything is digital, anything can appear anywhere at any time, for better or worse. But in the protected world we strive to maintain for children, digital technology can and I believe will be a powerful positive force. The world’s schools have started crossing that line. A flood will follow.

Leveling the field

When prices fall and other features come into alignment, future use will resemble today’s high-functionality tablets with active digital pens. These devices are not over-featured and are already much less expensive than a few years ago.

The flexibility of well-managed digital technology supports a range of learning styles. An elementary school teacher whose class I visited last week said that her greatest surprise was that struggling students benefited as much as or more than very capable students: The technology “helps level the playing field.” This echoed other conversations I have had. A teacher who was initially skeptical about a new math textbook remarked that after a year he was convinced: The adaptive supplementary materials accessible on the Internet “keep any student from falling through the cracks.” He still felt the textbook was weak on collaboration and other “21st century skills,” but concluded “a good teacher will add them.” In a third school, a teacher recorded parts of lectures as he gave them, using software that captured voice, video, and digital pen input. He then put them online for students who missed class, were not paying attention, or needed to view it a second time. On some occasions when he was not recording, students asked him to.

The most dramatic leveling occurs when technology allows students with sensory and other limitations to use computers for the first time. When I first saw a range of accessibility accessories and applications in active use a year ago, it was eye-opening: Children who had been cut off from the world of computing that we take for granted could suddenly participate fully. It took me by surprise. The tears streaming down my face were not of joy—I felt the isolation and helplessness they had lived with.

Only a device that supports keyboard, pen, voice, and video input along with software that supports a range of content creation, communication, and collaboration activities will realize the full potential. However, 1:1 deployment of any device—tablet PCs, kindles, iPads, Chromebooks—when accompanied by appropriate pedagogy, professional development, and infrastructure not only provides benefit: It is fundamentally transformative, as described in the next section.

From direction to negotiation

When a computer is used in a lab, delivered by a device cart, or engaged with for part of a class period in a station rotation model, the teacher controls when and how it is used. When a student carries a device everywhere, use is negotiated. Students can take notes digitally in a technophobic instructor’s class. Students, teachers, and parents decide, with students often the most knowledgeable party.

The psychological shift with 1:1 goes deep. Students can and often do personalize their devices in various ways. Their sense of responsibility for the tool and its use creates a symbiosis that didn’t exist before. New hires today might use what they are given—that is how they were trained in school! Tomorrow’s students who arrive with years of responsibility for making decisions will bring a knowledge of how they can use digital technology effectively and efficiently. They will expect to participate in decisions. They may or may not use what they grew up with, but they’ll know what they want.

1:1 classroom experiments are underway and succeeding, even in K-5. These kids may not take devices home, but when they are not reading books, playing outdoors, and interacting with family members, they will probably find a device to use there as well.

Born digital

What next? These could be early days. Moore’s law hasn’t yet been revoked. Harvested energy R&D moves forward. Imagine: An expectant mother swallows a cocktail of vitamins, minerals, proteins, and digital microbes that find their way to the fetus. In addition to monitoring fetal health, will the sentinels serenade it with Mozart, drill on SAT questions, introduce basic computing concepts? Born digital—the term is already in use, but we have no idea.

Thanks to Clayton Lewis for comments and discussion.




Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Three fundamentals for defining a design strategy


Authors: Uday Gajendar
Posted: Tue, April 15, 2014 - 10:04:47

The other day I tweeted this out while grasping what I’m trying to accomplish in my new role as Director of UX at a Big Data start-up: "Creating strategy (& vision) is about understanding the essence, exploring the potential & defining the expression, in an integrative way.” I’d like to delve a bit deeper into this spontaneously conveyed moment of personal profundity!

First, there are tons of books out there postulating on “strategy,” and many articles linking business-oriented concepts to “design.” You can spend weeks or months studying them (I’ve read quite a few, no doubt), but until you’re in the midst of being singularly burdened with the responsibility to define a viable, feasible design strategy (and correlated vision) for a team, company, and product, such readings just aren’t enough. Once you realize what’s around you—the sheer magnitude of the opportunity—you’re able to peek into the milieu of ambiguity and complexity that comprises strategy. And that’s when you see that it’s fundamentally about three basic things:

  • Understand the essence. I once had a very tough Color Theory professor while an undergrad, who one day declared, “In order to master color, you must understand its essence.” And with some noticeable exasperation reinforced by a stern glare, he added: “Are you interested in… essence?!” Harrumph! Well then. It took a very long time, but I eventually realized he meant that you must deeply intuit—to the level of personal resonance—the purpose, value, and raison d’être of color within a certain context. So what is strategy for, ultimately? I don’t mean some banal, trite “value prop” bullet point for a VC “pitch deck.” While that’s great fodder for Dilbert, as a designer I need to speak and work with authenticity to deliver excellence. So, this requires deeply probing the identity and nature of the company and its product—what is their inner “truth”? It means “connecting” with the purpose, as a designer, and capturing it in the form of a thematic construct of human values: trust, joy, desire, power, freedom, etc. You’ve got to feel it… and believe it. There’s a bilateral immersive engagement that shapes your perspective of why the product and company exist, and how you can move that forward.

  • Explore the potential. There is a vast array of materials at a designer’s disposal, from the tangible (color, imagery, type, animations) to the intangible (presence, interaction, workflow). Pushing this range of potentialities is necessary to break beyond conventional thinking and see what’s afforded and available. Potential necessarily involves delving into a fragile, unfamiliar realm of “what if” and “why not,” challenging limits and implied norms that many may hold sacred, for no discernible reason.

  • Define the expression. There’s got to be some well-crafted, artfully balanced manifestation, some embodiment of all that profound exploration of strategy and vision, that you and others can grasp and hold on to as a torch to light the way, signaling a path forward with promise and conviction. Maybe it’s a mockup, a movie, a demo, a marketing campaign, whatever… And there’s admittedly a degree of theatricality and rhetorical flourish to persuade stakeholders, but those expressions become symbols that others inside and outside the company will associate with your strategy. To put it bluntly, make prototypes, not plans! The expression matters; it brings your strategy to life in an engaging manner, where followers become believers and eventually, leaders.

  • In an integrative way. Finally, it’s all got to work together beautifully—the ideas about the product, the customer, the company, the principles, team process, public brand, etc. It takes systemic thinking to connect the dots and interweave the threads of crucial, even difficult, conversations with peers/superiors/ambassadors to ensure everyone is on board, committed, and participating productively in helping your strategy come alive. This requires constant multilateral thinking, with discipline and focus, bringing those elements together effectively. I think Steve Jobs said it best when he described the journey from idea to execution:

    "Designing a product is keeping 5,000 things in your brain, these concepts, and fitting them all together in kind of continuing to push to fit them together in new and different ways to get what you want.”


Uday Gajendar

Uday Gajendar is Director of User Experience at CloudPhysics, focused on bringing beauty and soul to Big Data for virtualized datacenters.


Interaction design for the Internet of Things


Authors: Mikael Wiberg
Posted: Fri, April 11, 2014 - 2:19:02

The Internet of Things (IoT) seems to be the next big thing, no pun intended! Embedded computing in everyday objects brings with it the potential of integrating physical things into acts of computing, in loops of human-computer interaction. IoT makes things networked and accessible over the Internet. And vice versa—these physical objects not only become input modalities to the Internet but also, more fundamentally, become manifest parts of the Internet. IoT is not about accessing the Internet as we know it through physical objects. It is about physical objects becoming part of the Internet, establishing an Internet of Things. Accordingly, IoT promises to dissolve the gap between our physical and digital worlds and offers the potential to integrate elements of computing with just about any everyday activity, location, or object. In short, IoT opens a whole new playground for interaction design!

We already see good examples of how this is starting to play out in practice. Connected cars, specialized computers, and tagged objects are becoming more and more common, and the repertoire of available networked objects is rapidly growing. Industry and academia share a strong interest in the Internet of Things.

While the technological development in this area is indeed fascinating, from my perspective it is even more interesting to see where this will take interaction design over the next few years. From an interaction design standpoint, it is always worth exploring what this digital material can do for us in terms of enabling new user experiences and new digital services. The IoT movement brings a potential not only for re-imagining traditional physical materials, making physical objects part of digital services, but also for re-thinking traditional objects as no longer bound to their physical forms and current locations, functioning instead as tokens in landscapes of networked digital services, objects, and experiences.

When we, as interaction designers, approach the Internet of Things, I hope we do it through a material-centered approach in which we treat IoT not only as an application area but, more fundamentally, as yet another new design material. With a material-centered approach, I hope that we look beyond what services we can imagine around Internet-enabled objects and instead shift our focus to re-imagining what human-computer interaction can be about, i.e., how IoT might expand the design scope of HCI. By thinking compositionally about IoT, viewing it in composition with device ecologies, cloud-based services, smart materials, sensors, and so on, we move our focus from what this latest trend in technology development can do for us to how we might interact in the near future with and through just about any materials—digital or not. This is what I hope for when it comes to interaction design for and via the Internet of Things!




Mikael Wiberg

Mikael Wiberg is Professor of Informatics in the Department of Informatics at Umeå University, Sweden.


It’s spring and a girl’s thoughts turn to design (and meaning)


Authors: Deborah Tatar
Posted: Fri, April 04, 2014 - 9:37:40

It’s spring. Spring for me is always associated not so much with the bulbs that turn Blacksburg into a really beautiful place, but with serious thoughts about values. Of course, there are a lot of holidays associated with spring, but mine is Passover. And renewal is associated with thoughts about the aspiration to live rightly. In my childhood, in New York, the big fall holidays, Rosh Hashanah and Yom Kippur, were about personal challenges. We turned inwards with the threat of the long, dark, cold winter ahead. But even in a non-religious family like mine, the Seder was about turning outwards.  

One dinner in particular jumps into my mind. My stepfather was on the Bicentennial Commission for New York City. This is the group that put together the celebration of the 200th anniversary of the Declaration of Independence. It was a big deal, with events of many kinds all over the city (evidently it was an effort to imagine That Beyond Manhattan). As I had dutifully learned in 5th grade history, New York was in fact quite central to independence before, during, and after the Revolutionary War. So at our family Seder in the spring of 1976, after the ceremony and the lesson that even we could be slaves were circumstances otherwise, Papa regaled us with stories about the ideas, decisions, and commitments, the struggles between the boroughs, the balance of activities, the political and aesthetic disagreements. After dinner, we moved to the living room of my grandparents’ apartment. My usual perch was an embroidered footstool. Some of the activities were vox populi and others were High Art. Eventually the talk turned to the role of art in modern America.

Ahh! This topic and the shift of venue gave my Great Uncle Harry, our usual primary raconteur, the opening he had been longing for, the chance to top the evening with the seal of profundity. He settled his comfortable paunch back into the brocaded wingback chair and fingered his cigar. Think a small Jewish man with the mannerisms of Teddy Roosevelt. This was 38 years ago, and I have lost some of the details that would make the story jump off the page as the lesson was impressed on me.

England, as well as the United States, was infested with virulent anti-communism in the late 1940s and 50s. Uncle Harry’s story concerned two Very Well Known Brits—neither of whom I can remember by name. One was a retired military general in the style of Bernard Shaw’s Horseback Hall. I could hear his bristling bushy moustache in the tone of the story. British, British, British. God and Country. Suspicious. Proud. Nationalistic. The other was a preeminent creative person or academic—a writer perhaps. Slightly ascetic. Sharp but diffident. Clever with words in a way that no American can ever be. (If anyone else recollects this story, please remind me who the protagonists were!) Both were dressed in black tie at some kind of formal dinner—or maybe it was even more formal, white tie. 

In the course of political discussion, the general turned to the writer—as Uncle Harry told this, Teddy Roosevelt appeared in him most clearly; he threw out his chest and looked down his nose—and said in tones of opprobrium, “And what did you do during the War?” (Meaning, as an American in the 1970s would, the Second World War.)

And the writer replied—Uncle Harry’s eyebrows went up slightly; his voice stayed mild and quiet; he looked askance, as he assumed his imitation Oxbridge accent—“I was doing the things that you were fighting to protect.” 

That was it. The writer, the artist, the intellectual “was doing the things that you were fighting to protect.” In that phrase, we had the assertion of the role of art and intellect, intrinsic to quality of life and to freedom, and a force for meaning in a difficult world.

I hope that the layers of this story as I tell it to you—the concept of celebrating the American Revolution, the reenactment of the flight of the Jews from slavery, my family’s interpretation in the mid-1970s through an imagined connection to British thought, my own processing and recollection so many years later—give that message about values a kind of deeply lacquered frame.

The intellectual, the artist, the writer was able to claim that he was doing things worth fighting to protect. We fought to protect art and ideas, to preserve justice. To enact a vision of a more equitable world. 

In that world, design was a half-step behind art, shadowed by it, but intensely tied to meaning, both political and personal. Raymond Loewy was already represented in the collection of the Museum of Modern Art. And indeed so was one of the first things I ever purchased with money I earned myself: a Valentine portable typewriter. Of course, designers have clients. Did I say that they have clients? They have clients. They have clients. They have clients. But they also have vision and that vision is something that can be talked about and even disputed.

My moral for this spring is that design is, or should be, something more than a client-fulfillment center.




Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.


Preaching to the choir, just when you thought it was safe


Authors: Monica Granfield
Posted: Wed, April 02, 2014 - 9:41:12

UX has made great strides within the mainstream IT and software community. Hard work, education, and return on investment have all contributed to the growth of the UX design discipline over the past 10 years. UX (with which I am bundling research) is now gaining traction and opening doors in a wide variety of industries, from healthcare to robotics. New avenues such as customer experience are opening even more doors for our discipline. This is a very exciting time for UX and CX! However, with opportunity comes challenge, and there is still a fair amount of work to do out there.

The industry is changing as software moves off the desktop and out into every type of electronic device imaginable, creating new ecosystems, new experiences, and exciting new challenges. With these developments, UX often finds itself back at the starting gate, playing defense and vying for a chance at offense. It seems UX needs to proactively broaden its reach, educating and building awareness within these new industries.

The playbook is the same, just with a new team. After a recent move into the uncharted territory of a new industry and attending UX-related events, I realized that I was not the only one facing these challenges. I had thought, as have many others I have spoken with since, that UX was more commonly understood by now. This has me thinking: Maybe we are all preaching to the choir. Maybe we in the UX industry need to break out of our comfort zone and start spreading the word at other industries’ professional events. Reaching into new industries could open our playbooks and allow those industries to gain awareness and knowledge of UX outside of the political arena of the workplace.

I am curious whether anyone out there has been representing UX at other professional meetings and conferences. Are there any UX talks happening at IEEE or Business Professionals of America? Yes, being in the trenches educating your team and your organization may be the best ground-up approach, and branching out to present to the disciplines we most often collaborate with, within their comfort zone, might gain even greater traction. Internal grassroots efforts can be an uphill climb, as a team or as an individual. Building momentum and awareness of UX as a discipline within other disciplines could be a game changer for us at the professional level. Who knows how other disciplines might receive the UX message? If other disciplines want to participate in creating the best user experiences, this might be one route to success.

At the education level, momentum is building. The d.school at Stanford, which was hatched out of the School of Engineering in 2005, has begun educating students on the value and application of design collaboration, to create “innovators.” Bringing more awareness of UX design to engineering is also important. I have heard of efforts such as Jared Spool teaching UX courses in the Graduate Management Engineering program at Tufts University’s Gordon Institute. Many undergraduate universities now offer UX classes within the software engineering curriculum. These classes are key to setting the stage for the next wave of technical talent coming out of university, who will gain the ability to understand, value, and collaborate around the use of UX design in creation and innovation.

Universities presenting more opportunities for cross-pollination and collaboration between design, engineering, and business may be helpful in breaking down departmental barriers in the future. Today, creating the opportunity for design and research to truly become innovators, especially within new domains, is still a challenge. I have heard the argument that if designers want to participate in design strategy to address the business, they should become business strategists, and that is for MBAs. However, as most UX professionals know, we are not claiming to be business strategists. Yet our insights and offerings do overlap with business strategy, and this is a lesser-known use of design, as opposed to overlapping with product development or engineering. We are mediators of how these disciplines contribute to the fruition of the resulting user experience, and that word needs to reach a world of professionals who are heads down, working off of what they know. Current grads are getting some exposure and cross-pollination; however, it will be some time before they are in the top ranks championing the next generation of technology or customer experiences. Therefore, it is up to the design community today to reach out, reach over, and continue to break down the barriers and open minds outside of traditional software.




Monica Granfield

Monica Granfield is a user experience designer at Imprivata. The views expressed on this website are exclusively her own and are not meant to reflect or represent the views of Imprivata.


@Richard Anderson (2014 04 15)

Speaking of Stanford, Medicine X— “the world’s premier patient-centered conference on emerging technology and medicine” (see http://medicinex.stanford.edu) — is largely about human-centered design and the patient experience. This is a fabulous conference that brings together a wide range of healthcare professionals and designers and patients. In my view, this kind of conference is better than a conference siloed within one profession at which one or two UX/CX people speak.


Swarms and tribes


Authors: Jonathan Grudin
Posted: Mon, March 31, 2014 - 10:03:23

A crack team led by Deputy Marshal Samuel Gerard (Tommy Lee Jones) races about in hot pursuit of Harrison Ford’s fugitive Dr. Richard Kimble. Gerard finds one of his men standing motionless. 
Gerard: “Newman, what are you doing?!”
Newman: “I'm thinking.”
Gerard stares. “Well, think me up a cup of coffee and a chocolate doughnut with some of those little sprinkles on top, while you're thinking.” He walks away.
The Fugitive (Warner Bros., 1993)

Ant colonies

Ants scurry about in a frenetic mix of random and directed activity. They gather construction materials, water, and food, and respond to threats. Ants are also busy underground, extending a complex nest, caring for the queen and her eggs, and handling retrieved materials. Decomposing leaves are artfully placed in underground chambers to heat the structure and circulate air through the passages; leaves can also be a source of food, directly or through fungus farming. Ants captured from other colonies are put to work. Foragers that are unsuccessful, even when only due to bad luck, shift to other tasks. Remarkable navigational capabilities enable ants to find short paths home and avoid fatal dehydration. Arboreal species race to attack anything that brushes their tree. Their relentless activity is genetically programmed: There are no ant academies.

Ant programming isn’t perfectly adapted to this modern world. Fire ants invading my Texas condo marched single file by the thousands into refrigerators or air conditioning units, where they were frozen or fried, sometimes shorting out an AC box. Humans rarely exhibit such unreflective behavior. Doomed military offensives such as the Charge of the Light Brigade prompt us to ask whether soldiers should sometimes question orders.

The health of the ant colony relies on the absence of reflection. It would bode ill if individual ants began questioning their genetic predispositions. “The pheromones signal something tasty that way, maybe a doughnut with sprinkles, but I don’t like the looks of that path,” or “Maybe we could come up with a better air circulation system, let’s have a committee draw up a report.” Ants don’t think, but they’re doing OK. They outnumber us. If, as seems plausible, ants are here when we’re gone, our capability for reflection could be called into question, should any creatures be around that ask questions. The ants won’t [1].

Globalization

According to my favorite source, a single ant supercolony comprising billions of workers was found in 2002, stretching along the coasts of southern Europe. In 2009 this colony was found to have branches in Japan and California, no doubt enabled by our transportation systems: a global megacolony. Does it have an imperialistic plan to displace rival ant supercolonies? No, each ant follows its genetic blueprint.

We’re globalizing, too. Not long ago Homo sapiens appeared to have two supercolonies, but the bonds holding them together were less enduring than ant colony bonds. Nevertheless, we are forming larger, globally distributed workgroups. We may yet become a global megacolony. If we don’t, ants may inherit the earth sooner rather than later.

The human colony

Looking back a few thousand years, a small tribe couldn’t afford to lose many members through random behavior. If Uncle Og headed down that path and did not return, let’s think twice about going that way alone! When ants stream to their deaths, lured by false pheromone signals triggered by appliances, the colony has more where they came from. In contrast, our ability to analyze and reason enabled us to spread across the planet in small groups. A century ago we were still overwhelmingly rural—isolated and often besieged. Information sharing was limited. Each community worked out most stuff for itself. Reflection was valuable.

How are the benefits and the opportunity costs of cogitation affected as the Web connects us into supercolonies? Given a wealth of accessible information, is my time better spent searching or thinking? Tools make it easier to conduct studies; is it better to ponder the results of one, or use the time to do another study? Cut new leaves or rearrange those brought in yesterday?

Many research papers represent about three months’ work, with students or interns doing much of it. After publishing three related papers, will I contribute more by spending six months writing a deep, extensive article and carefully planning my future research—or by cranking out additional studies and two more papers?

We are shifting to the latter. Journals, handbooks, and monographs are in decline. Conferences and arXiv thrive. Arguably, we know what we are doing or our behavior is being shaped appropriately. The colony may be large and connected enough to thrive if we scurry about, cutting and hauling leaves without long pauses to reflect. Beneficial chance juxtapositions of results will simulate reflection, just as the frenzied instinct-driven construction of an ant nest appears from the outside to be a product of reflective design. The large colony requires food. For us, as for the ants, there are so many leaves, so little time.

Shifting our metaphoric social insect, the largest social networking colony is the IBM Beehive compendium: twenty-something research papers scattered across several conference series. No survey or monograph ties the studies together. I lobbied the authors to write one, but they were heads down collecting more pollen, which was rewarded by their management and the research community.

Working on a handbook chapter, I did it for them, tracking down the studies, reviewing them, and trying to convert the pollen into honey. It was hard to stitch the papers together. For example, the month and year of some work was not stated, and publication date is not definitive in a field marked by rejection and resubmission. In a rapidly evolving domain, knowing the sequence would help.

In retrospect, they were probably right about where to invest time. I found a few higher-level patterns and overarching insights, but few will take note when the handbook chapter is published next month. Social networking behaviors have moved on. The Beehive has been abandoned, the bees have flown elsewhere, leaving behind work that is now mainly of historical value, although bits and pieces will spark connections or confirm biases and be cited. From the perspective of my employer, the field, and intellectual progress, my time could have been better spent on a couple more studies.

It is ultimately a question of the utility of concentrated thought. How might we find objective evidence that scholarship is useful in this century? I’m sentimentally drawn to it, but the effort required to become a scholar might be more usefully channeled into other pursuits. The colony would collapse if ants spent time contemplating whether or not to blindly follow pheromones. Through frenetic activity they build a beautiful structure and the colony thrives. Is life in our emerging megacolony or swarm different? Race around, accept that bad luck will sideline many, and plausibly we will thrive. If an occasional false pheromone lures a stream of researchers to a sorry fate, there will be more where they came from!

The tribe and the swarm

Consciously or unconsciously, we’re choosing. Fifteen years ago, an MIT drama professor told me that with the digital availability of multiple performances, students who analyze a performance in detail do not do well. Better to view and contrast multiple performances, spending less time on each. Other examples:

  • In an earlier era, if one of five people engaged in similar work performed exceptionally well, the tribe benefited by bringing them together so that the other four learned from the fifth. Today, it may be more efficient for a large organization to let the four flounder, social insect style. Successes can be shared with people working on other tasks; enough will connect to make progress. In other words, 80% conference rejection rates that were a bad idea when the community was smaller may now be viable. The community-building niche once served by conferences may be unnecessary.

    Many senior researchers disdain work-in-progress conferences—they want strong 20%-acceptance pheromone trails. If less-skilled colleagues who rely on lower-tier venues perish through lack of guidance, no matter. The as-yet-unproven hypothesis is that the research colony will thrive without the emotional glue that holds together a community.

  • When more effort was required to plan, conduct, and write up a study, it made sense to nurture and iterate on work in progress. With high rejection rates and an inherently capricious review process, researchers today shotgun submissions, buying several lottery tickets to boost the odds of holding one winner. Rejected papers may be resubmitted once, then abandoned if rejected again. Not all ants make it back to the nest, but when those that return carry a big prize, the colony thrives.

  • Any faculty member who mentors a couple of successful students has trained an eventual replacement. In the past, this could mean working closely with one graduate student at a time. Today, many faculty have small armies of students, most of whom anticipate research careers. “Is this sustainable?” I asked one. “It’s a Ponzi,” he replied cheerfully. Not all students will attain their goals. In a tribe this could be a major source of discontent and trouble. Swarms are different: Foragers who fail, even when due to bad luck, take on different tasks.

The ghost in the machine

Efficiencies that govern swarm behavior may now apply to us, but there is a complication. Our programming isn’t perfectly adapted to this modern world. Our genetic code is based on the needs of the tribe. Until natural selection eliminates urges to reflect, feelings of concern for individual community members, and unhappiness over random personal misfortune, there will be conflict and inefficiency. In 1967, Ryle’s concept of the ghost in the machine was applied by Arthur Koestler to describe maladaptive aspects of our genetic heritage. The mismatch grows.

If on a quick read this is not fully convincing, you could spend some time reflecting on it, but it may be wiser to return to working on your next design, your next conference submission, and your next reviewing assignment.

Endnote
1. Not all ant species exhibit all these behaviors. Some ants are programmed for rudimentary “learning,” such as following another ant or shifting from unsuccessful foraging to brood care.

Thanks to Clayton Lewis for discussions and comments.



Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


@Paul Resnick (2014 04 02)

I think you’d like the Collective intelligence conference, Jonathan.  Not only are there presentations reflecting on models of collective intelligence for ant colonies, but there are no archival proceedings, and a very high acceptance rate. Hope to see you there!

http://collective.mech.northwestern.edu/


Designer’s toolkit: A primer on using video in research


Authors: Lauren Chapman Ruiz
Posted: Wed, March 19, 2014 - 12:00:17

In our last post, we explored a variety of methods for capturing user research. Yet a question lingered: How can you effectively use video in your research without influencing the participants?

Here are some tips and tricks to minimize the impact of using video in research engagements. Keep in mind, these tips are focused on conducting research in North America—the rules of engagement will vary based on where you are around the world.

Be transparent

If you’re using a recruiter, ensure they let participants know that they will be video recorded. Usually you or your recruiter develop a screener—a set of questions that are used to determine whether a potential participant is qualified for a study. As part of the screener introduction, you should include the intention to video record the research visit, which gives your participant time to express any concerns. You don’t want the participant to be surprised when you pull out a video camera, especially if the topic being discussed is very personal.

Minimize equipment 

With the improvements in technology, you shouldn’t be using a large video camera. Keep the recorder as small as possible, and use a tabletop tripod. (I recommend a Joby Gorillapod due to its flexibility.) Placing your equipment on a table allows you to keep the camera discreetly in the background while allowing easy pickup and placement if necessary.

Prep everything beforehand 

Nothing calls more attention to video than technical difficulties. You don’t want to be fiddling with the camera, or checking it during an interview. Make sure all batteries are fully charged, and bring spares. Make sure your memory card has enough space. Either have the small tripod on the camera beforehand, or make sure it’s ready to install. If you run into a problem in the middle of the interview, ignore the issue and just focus on the engagement—this is what’s critical—and always have that pen and paper handy.

Ease participants in while building rapport 

As you are introducing the research, remind participants that it will be video recorded, review what the video will be used for, and assure them of the purpose. If you’re recording only for note-taking purposes, explain that you simply can’t remember everything that’s said, and video allows you to go back and verify information. Remind them that they are in control of what is or isn’t recorded, and that it can be stopped at any time. If the expectation of your video recording has been communicated correctly during recruitment, this information shouldn’t be a surprise, and your interviews should progress smoothly. This is when you can also request signed consent for the recording regarding when, where, and how the footage will be used. How you do this will vary based on what you plan to do with the footage.

Include the participant in the process of setup

Part of building rapport with participants is allowing them to see you set up your equipment—let them see where you place the camera and how you set it up to capture a clear angle. Spend the necessary time for your participant to become comfortable with the equipment, and answer any questions.

Use humor to help dissipate discomfort 

Oftentimes humor can help to ease any nervousness about being recorded. As you build rapport, be lighthearted about the fact that you’re recording. A light chuckle can help to relax the situation.

Consider the audio quality

If you’re making a high-quality video reel for your stakeholders, you may need to ask your participant to wear a microphone. This will deliver much better audio than a directional microphone or the camera’s built-in capabilities. In these cases, participants should be made aware during recruitment so there are no surprises.

Keep the focus away from the camera

Once you’ve finished your setup and you start recording, try to forget the fact that a camera is on. Still write down notes—this distracts attention from the camera, and gives you a backup of what was said and when, which is helpful when you go back to the video later. If your participant gets up and takes you on a tour, pick the camera up and hold it against your body—place it low on your body so that it’s not very noticeable. This will keep the footage steady, and the camera out of the way. If the camera is treated as a casual aside, then attention will be drawn away from the fact that it’s recording.

If unsure, ask for permission

During your interview, if you’re not sure whether you can record something in particular, always ask your participant for permission first. Your participant might be nervous about showing you particular information while you’re recording, but asking for permission first reminds them that they have control of the situation. Yes, this draws attention back to the fact that you’re recording, but when your participant feels in control of what is being captured, it builds confidence and trust in allowing you to continue recording.

Be willing to show an example

In some situations, participants feel reassured when they see the quality of the video being captured. If video quality isn’t essential to you, showing your participant that the footage is low-res can build the trust needed to record potentially sensitive visuals.

Stop any time 

Don’t be afraid to turn off the video camera if it’s clearly a distraction or is preventing the participant from being open and honest. Make sure they know you can turn it off if they are uncomfortable. Just always be ready to switch to an alternative capturing method.

Wrapping up

If you feel your participant may have been nervous to say something due to the camera, at the end you can always turn the camera off, and ask if they have anything else they’d like to share.

Now that you have hours of video footage, what do you do with it? Based on the type of consent you gathered, there are a variety of outputs you can use the footage for.

I recommend a quick and dirty highlights reel of key findings. You can save time if you’re diligent about taking written notes of key moments, or marking down the time stamp if your note taker is sitting back with the camera. Cut these key moments out with a basic program, such as QuickTime or iMovie, for easier compilation. A good length for a highlights reel is about five minutes—if it’s too long, you will lose your audience’s attention.
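
For those comfortable with a little scripting, the cutting and stitching can also be automated. The following is a minimal sketch, assuming the Python moviepy library (my suggestion, not one of the tools named in this post); the file names and timestamps are hypothetical placeholders for the moments you noted during your sessions.

  # Minimal highlights-reel sketch using moviepy (an assumed tool, not from the original post).
  from moviepy.editor import VideoFileClip, concatenate_videoclips

  # (file, start, end) in seconds for each key moment noted during the sessions (placeholders).
  key_moments = [
      ("interview_01.mp4", 754, 790),
      ("interview_02.mp4", 1210, 1265),
  ]

  # Trim each key moment and join the clips into one reel, aiming for about five minutes total.
  clips = [VideoFileClip(path).subclip(start, end) for path, start, end in key_moments]
  reel = concatenate_videoclips(clips)
  reel.write_videofile("highlights_reel.mp4")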

You now have footage that you can reference if something wasn’t clear to you, or if you need to verify a vague memory. If a stakeholder challenges an insight you’ve presented, you can reference the video as evidence. The footage can also be provided to your stakeholders as raw data with your synthesis. It’s footage they invested in, and it can be kept for any future needs.

We’ve gone through various methods for capturing research and focused on how to leverage video without disrupting your participant engagement. As technologies advance, we can limit the appearance of being recorded—imagine recording research with Google Glass—but we always need to ask: What is the risk that we might alter participant behavior?

What do you think?

I’d be thrilled to hear about how you decide to approach capturing research—what tools do you love to use? What tips or tricks do you have to put participants at ease? Have you tried anything new or did something surprise you?




Lauren Chapman Ruiz

Lauren Chapman Ruiz is an Interaction Designer at Cooper in San Francisco, CA, and is an adjunct faculty member at CCA.


Theory weary


Authors: Jonathan Grudin
Posted: Fri, March 14, 2014 - 9:09:08

Theory weary, theory leery,
why can't I be theory cheery?
I often try out little bits
wheresoever they might fit.
(Affordances are very pliable,
though what they add is quite deniable.)
The sages call this bricolage,
the promiscuous prefer menage...
A savage, I, my mind's pragmatic
I'll keep what's good, discard dogmatic…

—Thomas Erickson, November 2000
"Theory Theory: A Designers View" (sixty-line poem)

An attentive reader of my blog posts on bias and reverse engineering might have noticed my skirmishes against the role of theory in human-computer interaction. I’m losing that war.

Appeals to theory are more common in some fields than others. CHI has them. CSCW has more and UIST has few. I’m writing on the plane back from CSCW 2014, where we saw many hypotheses confirmed and much theory supported. In one session I was even accused of committing theory myself, undermining my self-image of being data-driven and incapable of theorizing on the rare occasions that I might like to.

One author presented a paper informed by Homophily Theory. He reported that it might also, or instead, be informed by Social Identity Theory. After reading up on both, he couldn’t tell them apart. So he settled on Homophily Theory, which he explained meant “birds of a feather flock together.” It was on the slide.

When I was growing up expecting to become a theoretical physicist, “birds of a feather flock together” was not considered a theory; it was a proverb, like “opposites attract.” I collected proverbs that contradicted each other, enabling me to speak knowingly in any situation. Today, “opposites attract” could be called Heterophily Theory, or perhaps Social Identity-Crisis Theory.

In the CSCW 2014 proceedings are venerable entries, such as Actor-Network Theory and Activity Theory. The former was recharacterized as an “ontology” by a founder; the latter evolved and is considered an “approach” by some advocates, but we don’t get into them deeply enough for this to matter. Grounded Theory is popular. Grounded Theory covers a few methodologies, some of which enable a researcher to postpone claiming to have a theory for as long as possible, ideally forever. But some papers now include an “Implications for Theory” section; as with “Implications for Design” in days of old, some reviewers get grumpy when a paper doesn’t have such a section. With CSCW acceptance rates again down to around 25%, despite a revision cycle, authors can’t afford to have grumpy reviewers.

CSCW citations also include broad theories, such as Anthropological Theory, Communication Theory, Critical Theory, Fieldwork for Design Theory, Game Theory, Group Dynamics Theory, Organizational Science Theory, Personality Theory, Rhetorical Theory, Social Theory, Sociology of Education Theory, and Statistical Mechanics Theory. (These are all in the proceedings.) Theory of Craft is likely broad (I didn’t look into it), but Theory of the Avatar sounds specific (didn’t check it out either).

The Homophily/Social Identity team did not get into Common Identity Theory or Common Bond Theory, but other authors did. I could explain the differences, but I don’t have enough proverbs to characterize them succinctly. With enough time one could sort out Labor Theory of Value, Subjective Theory of Value, Induced Value Theory, and (Schwartz’s) Value Theory. All are in CSCW 2014, though not always explained in depth. So are Resource Exchange Theory, Social Exchange Theory, Socialization Theory, Group Socialization Theory, Theory of Normative Social Behavior, and Focus Theory of Normative Conduct.

We also find models—Norm Activation Model, Urban Gravity Model (don’t ask), Model of Personal Computer Utilization, and Technology Acceptance Model. The latter has a convenient acronym, TAM, giving it an advantage over the related Adoption Theory, Diffusion of Innovations Theory, and Model of Personal Computer Utilization: an Adoption Theory acronym would risk confusion with Activity Theory and Anthropological Theory, and who wants to be called DIT or MPCU? Actor-Network Theory has a pretty cool acronym, as does Organizational Accident Theory—both acronyms are used.

Although they don’t have theory in their names, Distributed Cognition (DCog) and Situated Action are popular. Alonso Vera and Herb Simon described Situated Action as a “congeries of theoretical views.” Perhaps in our field anything with theory in its name isn’t really a theory.

Remix Theory and Deliberative Democratic Theory sound intriguing. They piqued my interest more than Communication Privacy Management Theory or Uses and Gratifications Theory. The latter two might encompass threads of my work, so perhaps I should be uneasy about overlooking them.

The beat goes on: Document Theory, Equity Theory, Theory of Planned Behavior (TPB). CSCW apparently never met a theory it didn’t cite. There is also citation of the enigmatically named CTheory journal. What does the C stand for? Culture? Code? Confusion?

Graduate students, if your committee insists that you find another theory out there to import and make your own, find an unclaimed proverb, give it an impressive name, and they’ll be happy. Practitioners, what are you waiting for? Come to our conferences for clarity and enlightenment!

***

Postscript: This good-natured tease has a subtext. Researchers who start with hypotheses drawn from authoritative-sounding “theory” can be susceptible to confirmation bias or miss more interesting aspects of the phenomena they study. Researchers who find insightful patterns in solid descriptive observations may suffer when they are pressured to conform to an existing “theory” or invent a new one.

Thanks to Scott Klemmer for initiating this discussion, and to John King and Tom Erickson for comments.




Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


What serendipity is providing for me to read


Authors: Richard Anderson
Posted: Thu, March 13, 2014 - 11:48:54

In the spirit of the new What Are You Reading? articles that appear within Interactions magazine…

My use of Twitter and my attending local professional events have had a big impact on what I'm reading. Indeed, both have increased my reading greatly.

Every day I spend at least a few minutes on Twitter—time which often surfaces an abundance of online reading riches. You can get a sense of what comprises this reading by taking a look at my tweet stream, since I often tweet or retweet about compelling readings I learn about via Twitter. A few recent examples:

  • The Unexpected Benefits of Rapid Prototyping. In this Harvard Business Review blog post, Roger Martin (former Dean of the Rotman School of Management at the University of Toronto) describes how the process of rapid prototyping can improve the relationship between designers and their clients. Roger and a colleague wrote about the importance of designing this critical relationship in a piece published in Interactions when I was its Co-Editor-in-Chief. This blog post extends that article.

  • Cleveland Clinic's Patient Satisfaction Strategy: A Millennial-Friendly Experience Overhaul. Here, Micah Solomon describes one of the ways one healthcare organization is improving the patient experience. The Cleveland Clinic was the first major healthcare organization to appoint a Chief Experience Officer, a role for which many experience designers and experience design managers have advocated for years for all sorts of organizations. This blog post reveals the role continues to have an impact in an industry not well known for being patient-centric.

  • Some of the blog posts written for Interactions magazine. Too few people know about these posts, as they are somewhat hidden away and don't all receive (individual) promotion via Twitter. But some are excellent. I've been most impressed by those authored by Jonathan Grudin (e.g., Metablog: The Decline of Discussion) and those authored by Aaron Marcus (e.g., My Apple Was a Lemon). A guy named Richard Anderson occasionally has a couple of worthwhile things to say here as well. ; )

  • The Essential Secret to Successful User Experience Design. Here, Paul Boag echoes something that I've written about for Interactions (see Are You Trying to Solve the Right Problem?)—something Don Norman has been emphasizing of late in several of his speaking engagements: 

    Essential, indeed.

  • Epatients: The hackers of the healthcare world. This excellent post from 2012 shows how Twitter users don't always focus on the new. Here, Fred Trotter describes and provides advice for becoming a type of patient that healthcare designers need to learn from, as I described in another piece I wrote for Interactions (see Learning from ePatient( Scholar)s).

Local events I attend sometimes feature authors of books, and sometimes those books are given away to attendees. I've been fortunate to have attended many events recently when that has happened.

Lithium hosts a series of presentations by or conversations with noted authors about their books in San Francisco. Free books I received because of this series:

  • What's the Future of Business? Changing the Way Businesses Create Experiences. This book by digital media analyst Brian Solis alerts businesses to the importance of designing experiences. I've found the book a bit challenging to read, but its message and words of guidance to businesses are important to experience designers. 

  • Your Network is Your Net Worth: Unlock the Hidden Power of Connections for Wealth, Success, and Happiness in the Digital Age. I think I'm pretty well-connected as it is, but I'm finding this book by Porter Gale to be of value. You might, as well.

  • Crossing the Chasm (3rd edition). Attending Lithium's conversation with Geoffrey Moore about the updated edition of his classic book was well worth the time, as I suspect will be true of reading the book. I should have read the 1st or 2nd edition; now I can catch up.

I attend numerous events at Stanford University. A recent event there featured Don Norman talking about his new edition of The Design of Everyday Things. I loved the original (when it was titled The Psychology of Everyday Things), and shortly after this event, Don sent a copy of the new edition to me. It included the kind inscription: "To Richard—Friend, colleague, and the best moderator ever." (I've interviewed Don on stage several times, once transcribed for an Interactions article; see also the partial transcript and video of the most recent interview, with Jon Kolko.) I'm looking forward to reading this new edition and to interviewing him on stage again.

Carbon Five hosts public events every so often in San Francisco. Authors of three books were featured recently (two of which were given away):

  • The Lean Entrepreneur: How Visionaries Create Products, Innovate with New Ventures, and Disrupt Markets. Authors Brant Cooper and Patrick Vlaskovits join the many now touting lean in this book about starting or evolving businesses. This is a valuable read, given that designers are increasingly playing key roles in these activities.

  • Loyalty 3.0: How to Revolutionize Customer and Employee Engagement with Big Data and Gamification. Here, Rajat Paharia, founder of Bunchball, offers a book that should be of great interest to experience designers. I've found the book to be too formulaic in structure and presentation, but...

  • Rise of the DEO: Leadership by Design. The enjoyment of the on-stage interview of authors Maria Giudice and Christopher Ireland prompted me to purchase this book, which proved to also be too formulaic for my tastes. Yet, given the increasing importance of the presence of design-oriented leaders in executive offices...

At a recent event launching GfK's new UX San Francisco labs, Aga Bojko talked a bit about her new book, Eye Tracking the User Experience: A Practical Guide to Research. In addition to offering complimentary copies of the book, this event offered some of the best port I've ever tasted, from three different vintners! Arnie Lund also spoke about user-centered innovation. An excellent event it was, and the book looks excellent as well.

Always an excellent event is the (near) weekly local live broadcast of the radio show West Coast Live. Early during the show, audience volunteers operate an ancient maritime device known as the biospherical digital optical aquaphone, after which the volunteers receive a gift. Recently, that gift was a copy of How to Fail at Almost Everything and Still Win Big: Kind of the Story of My Life, a book by Scott Adams, who was once a guest on the show and is the creator of the Dilbert comic strip. I wasn't sure I'd read the book, but I've found it to be thoughtful, entertaining, and compelling. And given the current mantra in our business regarding the importance of failing often and quickly...

Neo, the employer of Jeff Gothelf, author of Lean UX: Applying Lean Principles to Improve User Experience, hosts a series of events on lean UX in San Francisco. I heard Jeff speak about lean UX just before the publication of his book last year, and at a recent event, Neo was handing out a few copies. I'm finding the book to be concise and a quick read—an excellent supplement to Jeff's talk and the many articles and presentations I've seen on the topic.

Kim Erwin spoke about her new book, Communicating the New: Methods to Shape and Accelerate Innovation, at another recent event in San Francisco. Unfortunately (and surprisingly, given the tendency revealed above), she was not giving away copies of her book, but since her talk was terrific, I made the purchase. I'm glad I did—an excellent book touting collaboration and participation.

One of the final two books I'll mention—and I could mention more!—was sent to me by UX designer Katie McCurdy, whom I first met at Stanford Medicine X 2012. Katie and I were both there as ePatient scholars, so she knew of my health(care) nightmare story and knew that I would want to read a similar story told by Susannah Cahalan in the gripping book Brain on Fire: My Month of Madness. This book and a similar book titled Brain Wreck: A Patient's Unrelenting Journey to Save her Mind and Restore her Spirit by Becky Dennis say much about why and how the U.S. healthcare system needs to be redesigned. All experience designers working in healthcare need to read these books and the many patient stories like them that are available on the internet.

Is this a typical collection of reading material for someone working in the experience design (strategy) field? Probably not, but I kinda think it should be. Is this typically how people working in this field learn about and acquire their reading material? Again, probably not, particularly for those who don't live in a place like the San Francisco Bay Area. But I'm delighted with the mix of reading material I learn about and consume due to serendipity. Thank you to those I follow on Twitter, and thank you to those responsible for local professional events.




Richard Anderson

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.


Designer’s toolkit: A primer on capturing research


Authors: Lauren Chapman Ruiz
Posted: Sun, February 23, 2014 - 5:32:11

You’ve been preparing for your research—recruiting, screening participants, devising schedules, testing discussion guides—and now you are deciding the best way to capture your research. But how? If you’re busy scribbling down notes, you might miss a sound bite. If you film the interview, you might unknowingly influence the conversation. These are all serious considerations. Properly capturing and documenting each research encounter prevents spending time and money on data that sits solely in the memory of the researcher.

How you choose to conduct and capture your research will greatly impact your outcomes, and ultimately your client’s outcomes. I’m going to highlight a variety of research-capturing tools here, and a future post will cover how to use video effectively in research. Both the type of research you’re conducting and its purpose will help you decide which capture method is best.

Before we begin: I wouldn’t recommend going into research alone—you will struggle to document while maintaining a conversation. A good structure is to have a moderator and a note taker; that way, one practitioner can focus on conversing with the participant while the other focuses on capturing what is occurring.

Note-taking

Your options here are to take notes by hand or to capture them on a device such as a tablet or laptop. If you’re taking notes by hand, you need to make sure you can return to those notes and understand what they mean; you will rely heavily on your memory when you later type them up. On the flip side, your participants will likely be very open to speaking freely, knowing their voices and images aren’t being captured.

If you choose to type into a tablet or laptop, you will cut down on the time spent typing up notes later, but you’ve now introduced technology into your research encounter. If it’s important to have detailed, word-for-word notes, then transcribing with a laptop in the interview can be highly successful, and time-saving. The best practice here is to have the note taker remain in the background, quietly typing on a small laptop, tucked away to minimize distraction and keep the interview focused on the moderator/participant conversation.

Audio recording

Audio recording is minimally invasive and provides a word-for-word backup of everything you heard. It’s a safety net for your written notes or your memory—and a way to verify that what you captured is correct. If you’re trying to capture a complex process, you can reference the audio to get the details right. Audio can be provided to your stakeholders and used to pull out compelling clips, allowing them to hear the support for your findings. There is a range of devices that can be used to record, from small pocket recorders to iPhone apps such as Voice Record Pro or iRecorder.

Make sure you don’t cross any ethical boundaries with audio recording—always inform your participants if you’re recording, and let them know what will happen to the information. This should be covered in a consent form signed before the interview starts.

Note-taking with audio recording

A research tool that captures word-for-word discussions and minimizes distraction is the Livescribe pen. This lovely piece of technology appears to be good old-fashioned paper and pen, but it is actually recording all audio to the pen and codifying it to the paper. Tap the pen to a sentence in your notes, and it will play exactly what was said in that same moment. It will also send a video of your notes, recorded stroke by stroke, with audio included, to your computer via Bluetooth. If you’re using a tablet, programs such as Microsoft OneNote and AudioNote will capture audio while you type or sketch with a stylus.
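To make the idea concrete, here is a minimal sketch in TypeScript of what these tools do conceptually: each note is stamped with its offset into the recording, so selecting a note later can jump the audio to that moment. The class and names are illustrative, not any vendor’s API.

// Conceptual sketch of audio-synced note-taking (illustrative only).
interface TimedNote {
  text: string;
  offsetSeconds: number; // seconds since the recording started
}

class SyncedNotebook {
  private notes: TimedNote[] = [];
  private startedAt: number | null = null;

  startRecording(): void {
    this.startedAt = Date.now();
  }

  addNote(text: string): void {
    if (this.startedAt === null) throw new Error("Recording has not started.");
    this.notes.push({ text, offsetSeconds: (Date.now() - this.startedAt) / 1000 });
  }

  // Returns the playback position for a note, e.g., to assign to an <audio> element's currentTime.
  seekPositionFor(noteIndex: number): number {
    return this.notes[noteIndex].offsetSeconds;
  }
}

// Usage: notebook.startRecording(); notebook.addNote("Participant struggles with login");
// Later: audioElement.currentTime = notebook.seekPositionFor(0);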

Screen capture with participants

In the case of usability research, you’re trying to capture a variety of actions occurring at the same time: where a participant clicks or touches the user interface, what pathways they take through the system, what they’re saying, and their facial reactions. That’s a lot to manage. Luckily for us, there are handy tools such as Silverback or Morae that record the device screen, highlight clicks, and record the participant with audio.

Unfortunately, for mobile usability, capturing gets more difficult. There are some apps that let you record the device screen, such as Reflector, but you don’t get the taps or swipes. If you jailbreak the phone, Cydia has applications that will record the user actions. Another option with which I’ve had success is building a mobile camera mount (external camera rig) to capture the participant’s actions, but you will then have a clearly visible mount.

Video recording

When conducting research in context, you’re gathering more than just what is said; you’re also gathering critical observations about the space around the participant, the participant’s behavior, the programs that are used, the forms completed, the tools used, the sticky notes with reminders, and the “duct tape” (fixes and workarounds people make for themselves). In this environment, visuals are incredibly important. As the famous anthropologist Margaret Mead said:

“What people say, what people do, and what they say they do are entirely different things.”

And what better way to capture this trichotomy than video? In most cases, the only way to document actions is through video, with photography as a supplement. This is essential in usability testing, contextual inquiries, and observations.

In addition, video is a powerful way to share your research with others—especially stakeholders who may have a hard time seeing what their users experience. Nothing makes a bigger impact than having clients hear and see the research participants directly. However, the price you pay in using video is you may never truly be sure that participants are comfortable and acting naturally.

Tune in next time for tips and tricks on how to use video effectively in research.



Posted in: on Sun, February 23, 2014 - 5:32:11

Lauren Chapman Ruiz

Lauren Chapman Ruiz is an Interaction Designer at Cooper in San Francisco, CA, and is an adjunct faculty member at CCA .


Aarhus and methods: post-visit reflections


Authors: Deborah Tatar
Posted: Thu, February 20, 2014 - 10:51:26

One day, when I was young in Boston, a friend and I were discussing how people cross streets. I described what I thought of as the urban method, a negotiation between pedestrian and driver. My friend, equally young, but male, objected to my characterization. He said, "You just think that it happens that way because cars stop when you step off the curb. They don't do that for me." I said, "That's ridiculous!” So we conducted a little experiment, and, to my chagrin, he was pretty much right. Cars slowed and stopped when I stepped off the curb; they didn’t when he did. My interpretation was gender. I thought it was just the power of a young, healthy woman in our culture, even one who dressed simply in jeans, sneakers and loose blouses, didn't wear make-up, and didn't even blow-dry her hair. Of course, it could have been some other factor. But, whatever the particular cause, the shock for me was the simple demonstration of how inhabiting my body and self had implications for my interpretation of the world that I could not myself detect or control. 

I recently spent four months at Aarhus in the Participatory IT group, and loved what they were doing and what they had created. I had a rich and immediate experience engaging with their devices. When we first arrived, they had a table on public display downtown during the Aarhus Festival. Decorated blocks placed on the table controlled the display and, separately, the music. Proximity of one block to another and mutual orientation had tacit but systematic effects. The design world is currently full of tabletop displays with manipulatives, but they had, somehow, gotten it right. There were many such experiences—I wrote a blog post about my own experience of Ekkomaten. When I talked with the researchers, though, they would point not to the products but to their process. The process is key to their experience of excellence.

But this brings me back to my experience crossing streets. I have recently read a number of their papers about their notion of values-led participatory IT, and I believe them to be accurate descriptions of how this operates for them, how they use the concepts to steer by, and how the concepts enable them to do wonderful things. I love it that my Danish colleagues are creating in this way. But I'm like my male friend long ago—I can't cross the street their way because the traffic doesn't stop for me. Actually, it’s more than that. I can’t formulate my (wicked) problems the ways they can formulate theirs. 

In general, we do not in HCI report on how the context of the designer affects what they can bring about. Is design research about reproducible design results? Does the value of Aarhus' values-led participatory design depend on whether others can use it? I think not. The way I see it is more particular. Their description of their method is useful in the way a star map is useful to a sea journey. Very useful if the sky is clear, it is night, and you are sailing under the same stars. 

But I am quite concerned that the claim of generalization, the claim that leading with emergent values is the key to participatory design, makes it difficult to see the challenges in sailing different seas. It puts participatory design in the small overlap between the fact of participation and those particular values that are jointly articulated in the limited context of the project. I’d rather see it in the large circle that includes multiple forms of participation and people’s deeper lived values. 

I live and work in a comparatively torn and fraught world. The sky is murky and I’m not sure that the map corresponds to the stars above me. For this reason, I am still inclined to go more with explanatory processes like my own design tensions framework, which puts principles and values in the bucket with other factors and provides a gentle structure that allows the designer to address issues of power, culture, and even alienation. The nature and kind of participation is itself a value among other values. 

A person can describe strategic design decisions such as supporting embodied learning as a value. That’s nice. But to appropriate the term “values” for the small decisions that a design project makes vitiates the term. We need it for more difficult cases, when the context itself puts lived elements of identity and pain on the table. If the notion of participatory design actually rejects these more serious elements, it may be untrue to a component of its origin in the European Trade Union Movement. It is certainly untrue to the American component of its roots in the community action participation models of Jane Jacobs and Saul Alinsky. 

I do a lot of work in schools in the United States. My worries are about meaningful joint action. I recall the teacher who told me that she knew that the approaches that we were advocating would be better and more successful for children struggling with mathematics than what she was doing, but that she could not try something different with children who were struggling until the approach she was currently using worked. It didn’t matter if all of them failed. She had to keep on. She could not take a “risk”—the “risk” that her own school had promoted and said that they wanted. In fact, the school wanted two incommensurate things at once from her. 

Here’s a more fraught case: A number of years ago, I was working in a school district in the south of the United States when a 12-year-old in the district was charged as an adult in a murder. The child belonged to a tiny Hispanic minority in the district and had participated in the hazing of a new Hispanic child, also 12. He punched the new kid in the middle of the chest and the new kid’s heart stopped. The new kid died. It was a tragedy. 

Sometime later, I came, cup of coffee in hand, to an early morning meeting in the office of a mid-level district leader. My eye happened to flicker upon the local paper, with a headline about the trial, on her desk. The perpetrator was being tried as an adult. I had read the paper, but had no idea what her opinion or involvement was. Perhaps my face fell a bit, but I attempted to get down to our more direct business, and just said in what I hoped were normal, friendly tones, “Good morning. How are you?” She stared at me balefully for a bit and replied, almost shouting, “You’re just a northern white liberal, aren’t you? You don’t understand. You’re not from around here. This is a bad kid, a bad kid.” 

I could say a lot more about her, me, this project, this culture, and this interaction. But I have always believed that participatory design could take place even under such conditions and differences. 

When I came up with the design tensions framework, I was thinking about how to navigate meaningful participation towards design action even when there are conditions of deep conflict and little power. The question was what we could devise that satisficed—to use Herb Simon’s term—satisfying each goal enough for something to happen. I thought that it was consistent with participatory design, and I still hope that it is.



Posted in: on Thu, February 20, 2014 - 10:51:26

Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.


Implicit interaction design


Authors: Mikael Wiberg
Posted: Wed, February 19, 2014 - 10:41:19

I guess it’s no exaggeration to say that 99.9% of all interaction design projects still end up as screen-based solutions in one form or another. Most of these solutions are also still built around the desktop computer as a model for how we should interact with computers. This model assumes that we interact with a computer via a screen and via some explicit input peripherals (for instance a keyboard, a mouse, or a touch screen). Despite all recent claims that we’re already in the third wave of HCI, in which the promise of ubiquitous computing is fulfilled—“interaction anytime, anywhere and in any form”—we still force a desktop computing model on just about any interaction design project.

Let me take a simple example to illustrate “explicit interaction” and how the desktop model of interaction is introduced over and over again in new contexts. The gas station close to where I live recently installed new “modern” gas pumps. These pumps include not only a credit card reader but also a touch screen, so I can select whether I want to pay for the gas at the machine or inside the station. The screen also lets me enter the PIN code for my credit card and choose whether I want a receipt. One small problem with this particular solution is that where I live (in the northern part of Sweden) we have a long and quite cold winter, and the touch screen does not work if I keep my gloves on. However, that’s not the real problem here. In fact, I would argue it is just an effect of the explicit “desktop computer” interaction model chosen in the first place. You see, although I do not drive to the gas station with the explicit goal of interacting with a computer, that is still what I face when I get there. The thing is—and I don’t think this comes as a surprise—I want to fill my car with gas. That’s my goal. Yet I find myself in front of a quite ordinary computer (screen, keypad, turn-taking between user and machine, and so on), and although my clear goal concerns my car, it is the computer I need to adjust myself to (take off my gloves, insert my credit card, enter my PIN code, and so on). My point is that this is not a rare case. Although this new gas pump could be understood as yet another example of ubiquitous computing (a computer built into the pump), I see it more as yet another example of how “explicit interaction” is introduced to yet another use context. By explicit I mean that in order to use the gas pump, I first need to explicitly interact with the computer built into it.

So where can we go from here if we do think there are alternatives to screen-based interaction? If we do think ubiquitous computing is a good idea? And if we do think third-wave HCI can offer alternative ways of introducing computing into our everyday lives? I think we need to look for other, fundamentally different interaction models and seriously question whether the desktop computing model, with its visual UIs, is really the only option for every interaction design project.

Let me offer a first alternative. My example above illustrates how desktop computing is introduced in the context of gas stations, but more fundamentally it illustrates—once again, because we can see this in so many contexts right now—how we assume that every interaction design project needs to end in a solution in which the user explicitly operates a computer in one form or another. By explicit I mean that the user, whatever the main activity they want to do (for instance, filling up their car with gas), is still forced into human-computer interaction as a turn-taking activity: looking at a screen, typing on some input peripheral, looking at the screen again, and so on. But what if we considered alternative models for interaction design? What if we designed interaction without the false necessity of introducing a screen and a keyboard into every solution? What if, for instance, I could just drive to a gas station where a camera reads my license plate (via OCR), checks the number against a cloud-based service to see whether I am a member and have a valid credit card, and then the only thing I need to do at the station is fill my car with gas? Payment happens automatically, as simply as when I download a new app to my phone. Interaction with the system happens while I do the things I really want to do. The interaction design is aligned with my core activities rather than being a separate, explicit session with a computer. This is interaction design from the viewpoint of the implicit. Sometimes I think about alternative solutions like this in terms of scripts and services, even under the notion of “scripted materialities,” and at other times as just “implicit interaction” design.
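To make the implicit flow concrete, here is a hedged sketch in TypeScript of the gas station scenario, with entirely hypothetical service names: the only explicit act the driver performs is filling the tank, while identification, authorization, and payment run in the background.

// Hypothetical sketch of an implicit-interaction gas pump (names are illustrative).
interface Membership { plate: string; hasValidCard: boolean; }

interface MembershipService {
  lookup(plate: string): Promise<Membership | null>;
}

interface PaymentService {
  charge(plate: string, liters: number, pricePerLiter: number): Promise<void>;
}

async function handleCarAtPump(
  recognizedPlate: string,           // produced by the camera's plate recognition
  members: MembershipService,
  payments: PaymentService,
  unlockPump: () => Promise<number>, // resolves with liters dispensed once the nozzle is returned
  pricePerLiter: number
): Promise<void> {
  const member = await members.lookup(recognizedPlate);
  if (!member || !member.hasValidCard) {
    return; // fall back to the explicit flow: pay at the screen or inside the station
  }
  const liters = await unlockPump();                              // the driver just fills the tank
  await payments.charge(recognizedPlate, liters, pricePerLiter);  // payment is implicit
}

If the plate is unknown or the card invalid, this sketch simply falls back to the explicit pay-at-the-screen flow, so the implicit path never blocks the core activity.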

Implicit interaction design foregrounds the human while putting the technology in the background. It is not about decreasing the value of interaction but about making solutions in which interaction is not something extra you first need to do with a computer before you can move on to what you really want to do. I view implicit interaction design as a promising approach for truly entangling interactive systems with our everyday activities! “Doing computing while doing the things you truly want!”



Posted in: on Wed, February 19, 2014 - 10:41:19

Mikael Wiberg

Mikael Wiberg is Professor of Informatics in the Department of Informatics at Umeå University, Sweden.


Everywhere you go there you are


Authors: Monica Granfield
Posted: Tue, February 11, 2014 - 1:20:39

Lately I have been giving a good deal of thought to consistency in presentation within the UI and how this affects the overall end user experience.

I find that the online shopping experience lacks something you get when you shop in a brick-and-mortar store. When you walk through a store, there is a physical presence that differentiates departments, giving each a different feel. Displays pepper the layout, walls are painted different colors, and sometimes different music plays in various areas of the store. The Women's department presents much differently than the Boys' department. Housewares do not get confused with handbags. So why is the online shopping experience so overly consistent, displaying one white background after another?

Online, you get a feel for the overall brand. Retail sites take on the company's branding to evoke its mood. However, websites are not annual reports. When you drill down, the pages all look the same, with the content as the only differentiator—which, well, isn't always a reliable differentiator.

Of course a site needs to use the company's branding. But rather than merely supporting the experience, has the branding trumped it, leaving the experience bland and sometimes confusing? Have we over-branded experiences? Is it all about highlighting the branding rather than pursuing engaging experiences? Does this bland consistency force the user to work harder to locate where they are on a site and how to navigate it? Often I have to look in multiple places, and sometimes scroll, to find navigational aids that help me understand the context of the content. I spend far too much time looking for page titles and scrolling to breadcrumbs and filters, all to identify what type of item I am viewing.

Landing pages and header areas give minimal assistance in differentiating location, giving context, and setting the tone for each “department” on a retail site. Some sites present the clothing on models, making it obvious which department you are in. However, when a site only presents the clothing, nothing feels different, and in some cases it is very difficult to know if you are looking at something that belongs in the Girls’ or Women’s department. A men's page of T-shirts does not present much differently than a page of boys’ T-shirts. These pages start to feel like glorified lists that strip the feel of the content down to just content and words.

Enterprise apps are not immune to this either. Quite often enterprise sites consist of one form or table after another. There is little differentiation between objects or departments; every experience looks and feels the same. Yes, this is easy to create and maintain, but is it a better, more usable experience? One enterprise application I worked on was so overly consistent and mundane that it was compared to driving around the Midwest, one cornfield after another. Cornfields are difficult to navigate by, and that kind of navigation is learned only over long periods of time. This is not an effective approach to software design.

That said, I am curious whether there are any successful examples of presenting different experiences in different manners—accommodating the user, the content, and the experience—within the same product. I am still on the lookout myself and would be interested in your input as I pursue mood, emotion, and purposeful differentiation in the products I design.



Posted in: on Tue, February 11, 2014 - 1:20:39

Monica Granfield

Monica Granfield is a user experience designer at Imprivata. The views expressed on this website are exclusively her own and are not meant to reflect or represent the views of Imprivata.


Metablog: The decline of discussion


Authors: Jonathan Grudin
Posted: Tue, February 11, 2014 - 1:06:58

Is our changing relationship to information rendering discussion obsolete?

More information of interest is online than I can consume. Pointers may be enough. Today I may need help or time to find some of it, but before long rivers of gold will stream to us; we will have to push some of it away. The cafés of Paris and Vienna, the watering holes of New York, celebrated for the discussions they hosted, are gone. Virtual equivalents have not appeared.

How does content consumption affect content creation? With an inexhaustible supply of content, differing perspectives can be found online, viewed, assessed, and synthesized. Is discussion a better way to explore different perspectives? With sufficient digital resources, solo activity may be more effective and efficient. I often look online where in the past I’d have contacted someone. When the volume of online information increases by another order of magnitude, a hundred-fold, and more, will discussion have a role? Does it now?

Where do you discuss professional topics? Discussions still take place in courses and laboratories, but possibly fewer. What public forums do you converse in? Online I see some for diagnosing faulty software or appliances, exchanging favorite recipes, and political flaming, but not much else. I participate in fewer discussion forums of any kind today than 30 years ago. Different factors could contribute:

  • My interests became more specialized. I’m reasonably eclectic, but less likely to explore new topics for which a discussion might help. 
  • The field changed. Thirty years ago HCI was arguably a “scientific revolution” as we abandoned traditional experimental psychology and traditional computer science, joining forces to address new problems. Today could be “normal science,” marked by agreement on the major paradigms and research issues, and requiring less debate.
  • The possibility suggested above is that easily accessible online commentary and guidance reduce the value of discussion. This would extend beyond HCI.

I’m uncertain about the value of public HCI discussion in 2014. In the past, public discussion forums helped me. Three illustrations drawn from conferences follow.

In 1990 and 1991, I attended ICIS, an MIS conference with a lower acceptance rate than CHI. A 90-minute session had two 20-minute presentations, each followed by a prepared discussant speaking for 10 minutes, a brief response, and audience Q&A. Discussants presented useful counterpoints to the paper and identified obvious omissions. The latter was surprisingly useful in elevating the quality of the subsequent audience interaction, as the speaker could respond selectively and avoid bogging down in defensive responses to each point.

Between 1988 and 2009, I often attended HCI Consortium meetings that had an even more expansive model. Each paper had a 40-minute presentation followed by a 20-minute prepared discussant and 30 minutes of audience participation. Discussant pieces were invaluable.

Finally, in 2002 I attended the annual American Anthropological Association meeting. Thousands of papers were placed in highly focused sessions, each comprising several paper presentations followed by a senior discussant. Given the variable quality and highly shared focus of the papers, a discussant could identify strong contributions and tactfully describe paths to improvement for other work. It was useful for presenters and hugely beneficial to someone unfamiliar with the topic.

Where have all the conversations gone?

Many who attended CHI in the 1980s forget that for years we assigned a discussant to each paper session. Their value became questionable. With most submissions rejected, the three papers in a session were usually weakly related. The relatively polished papers left the discussant groping to find a unifying theme or much to add. Discussants were dropped.

Nevertheless, there was no dearth of active discussion back then. Many Usenet newsgroups had a high volume. For years, the SIGCHI email distribution list was a discussion forum, not an announcement board. The SIGCHI Bulletin was a substantial printed newsletter mailed to members, many of whom eagerly awaited it. Its low barrier to authorship surfaced different perspectives. In the 1990s, the CHIplace web forum hosted active discussions. The “business meetings” at conferences often saw passionate debate; ironically, today they are called “town halls” but attendees mostly consume reports from officialdom. Breaks at conferences are still marked by energetic discussion, although we’re unlikely to see the passion that led to a successful petition at CHI’90 to force an election against the wishes of the SIGCHI Executive Committee.

Those conversation spaces disappeared. What replaced them?

Workshops still highlight discussion, but often of a different kind. In the 1980s, workshops led to books and special journal issues. Workshops I have attended more recently have been dominated by graduate students, unaccompanied by their faculty co-authors, who present work in progress. There are exceptions, but the overall level of workshop discussion has not increased.

What of social media? Let’s start with the big three. Twitter’s 140 characters limit discussion. Some disciplines, but perhaps not HCI, make use of LinkedIn groups. Facebook has professional discussion flurries, but my sense is that they declined in frequency as our networks expanded to include more family and friends who don’t engage with professional discussions or reinforce such posts with Likes.

Wikis and blogs seem a natural possibility. I don’t know of sites describing themselves as HCI wikis, but Boxes and Arrows posts a short article every week or two that invites comments, and occasionally one prompts an active discussion.

About once a year I hear of an active discussion of an HCI issue on someone’s blog. It burns brightly for a time, then dies out suddenly and people move on. The blogs I discovered this way generally had only one such discussion and were subsequently discontinued or reduced to very infrequent posts. A blog that welcomes comments nevertheless has an asymmetry that discourages sustained discussion—only one person can initiate a conversation, and if discussion continues long without the blog owner posting, others may wonder if the party should go on when the host appears to have gone to bed.

One of my favorite blog posts illustrates the ambiguities. Clay Shirky’s “Ontology is Overrated” had many elegant points and a couple bad examples, not central to his argument, that peer review would have caught. I was told of a scathing online critique. The detractor had focused exclusively on the clunkers. This strengthened Shirky’s thesis in my eyes—if that was the best he could do, he lost the argument. A valuable exchange, although whether it was a discussion is arguable.

Finally, as the second year of this online Interactions blog forum gets underway, how does it fit in? It was a great experiment, but it hasn’t generated the discussion we hoped it would. There are few comments, and bloggers do not respond to one another. I felt a need to choose between short informal posts, more likely to lead to discussion, and polished essays that might inhibit it. The former felt riskier—if few replied, they would seem pointless. The longer posts I settled on take advantage of this as a safe place to explore a range of ideas in some depth, short of a journal’s requirements. They add to the stream of content that anyone, including me at some later time, can browse, assess, and build on, quietly at a desk. This species of asynchronous interaction may be appropriate for our time.

Thanks to Don Norman, Ron Wakkary, Kent Sullivan, and Gayna Williams for discussions of this topic.


Posted in: on Tue, February 11, 2014 - 1:06:58

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


I’ve heard the future of interaction


Authors: Aaron Marcus
Posted: Tue, February 11, 2014 - 12:55:58

Recently, I watched, or rather, more specifically, heard, the movie Her, which features Joaquin Phoenix in the lead role of Theodore Twombly (now, there is an introvert’s name), a somewhat sensitive, somewhat appealing, caring, but almost terminally asocial techie writing handwritten personal letters for others in a cloyingly clean, modern, antiseptic office in a made-up future Los Angeles that mixes in urban scenes shot in the sci-fi downtown of Shanghai. 

The other lead character is the memorable “operating system,” the on-the-spot, self-named Samantha, voiced by Scarlett Johansson. She gives her all to the primary virtual role—slightly squeaky, breathless, ever-so-seductive, quirky, almost always good-humored—impersonating a machine-system impersonating a person. Who wouldn’t want to enjoy Scarlett Johansson speaking personally and directly to them?

Spike Jonze (born Adam Spiegel on October 22, 1969) directed the film. His work is best known from music videos and commercials, but he started his feature-directing career with Being John Malkovich (1999). Jonze is famous for his music-video collaborations with Beastie Boys, Björk, and Fatboy Slim. He was also a co-creator and executive producer of MTV's Jackass and is part owner of a skateboard company.

Why the details? Well, I was struck dumb by this film in many ways, and it caused me to ponder its meaning. 

I took two of my designer/analyst interns with me who are helping my company research and design our latest mobile persuasion project called the Marriage Machine. I thought this view of possible future “committed couples” and couple communication might turn out to be inspiring, or at least challenging. They are young professionals in their late 20s from Brazil and Singapore. They liked the film. Of course, almost all the characters in the movie were exactly the same age: 25-30. 

I also went because I continually update my lecture about “User-Experience in Sci-Fi Movies and TV,” which causes me to sit through a series of mind-numbing action adventures shot primarily in the colors blue, gray, black, and white, with occasional warm colors for explosions. I was curious to know what I would discover in this film Her.

It might have been called Heard. The film had the excruciating cleanliness of people, social interactions, interior scenes, and drama of a California Disneyland theme park, an Epcot International Pavilion I remember from Disneyworld in the 90s, or perhaps a Heineken beer commercial. 

Everything was visually cleansed and limited, devoid of extreme emotions, somewhat like the parenting game depicted in the movie itself, or like early computer-graphics animations—not quite right, not convincing, clearly artificially staged. Perhaps I had been overly influenced by two recent outstanding movies showing family interactions full of violent emotions worthy of classic Greek tragedies (Nebraska and August: Osage County) and truly creepy landscapes of people and cities/countrysides, and a full range of ages of human beings. Her depicts a wondrous world of the future somewhat like the House of the Future depicted at the World’s Fair in 1964.

In thinking it over, this film stands in amazing contrast to Nebraska and August: Osage County, much as 2001 and A Clockwork Orange—two views of the future, one utopian and the other dystopian—circulated in movie theaters worldwide in the late 1960s and early 1970s. Her presents a fantasy world something like those of the Fred Astaire and Ginger Rogers movies of the 1930s and '40s, at least until the end.

What is intriguing and important about the film Her in terms of human-computer interaction is that it is almost entirely accomplished through voice communication. Gone are the transparent screens of Avatar, the three-dimensional multi-colored displays of Pacific Rim (with meaningless white digits streaming vertically to the right of all major control console displays). Except for some brief amusing scenes of three-dimensional leisure games featuring maze-searches and some minor finger-twitching control by Theodore, and a few brief glances at his (small-screen!) mobile phone, the primary communication between human being and artificially intelligent “Agent Super-Siri,” known here as Samantha, is voice via a small (mono, not stereo!?) earpiece. 

The effect is breathtakingly effective. 

Samantha can see what Theodore is seeing, not through Google-like glasses (which he wears!—but perhaps that would have been an undesired product placement) but through the bricoleur’s technique of fastening a giant safety pin through his left-breast shirt pocket (he keeps Samantha close to his heart) so the phone camera peeps over the pocket’s edge. This positioning is, no doubt, an homage to the slide rules, mechanical pencils, and plastic pocket protectors of engineers of past decades. I think of this image because I used to wear exactly such things in the 1960s.


The author’s memory-summary of the movie. In addition to being a pioneer of computer graphics, Marcus is also a published cartoonist and worked as such for his undergraduate humor magazine.


If you close your eyes during this movie, you probably would get its full effect. The movie is a radio program of the 1940s on steroids. In one memorable scene of joint virtual sexual relations, in fact, the screen goes discreetly (and mercifully) black, allowing us to hear the blissful pantings devoid of distracting visuals, and all the more intense. This blackout moment in a movie is quite remarkable, like the one sound effect in the recent modern silent-movie The Artist. Here we feel the full impact of audio, not visual, communication, and what words, music, and other sounds can accomplish. In general, please recall that most of our modern software applications are silent movies, most without even piano accompaniment. As a side comment, I was struck a few years ago, in reviewing the latest computer-science student projects from the University of California at Berkeley, that few, if any, projects featured sound, even those that easily could have benefited from such investigations. Most of the students, as in other techie-generating nurseries, are growing newbies oriented to computer graphics alone, not computer audio. We are in an age deeply influenced by visuals, to be sure. This movie reminds us of the power of audio-interaction.

The conceptual play between the two main characters, which is intriguing, punctuated by brief, frustrating, limited encounters with real friends (no family, only god-children, etc.), makes up the limited drama of this short play or short story or MTV video turned into a two-hour (126 minutes, to be precise) movie. In between, Theodore witnesses Samantha, who apparently has just been released to the public, gain more and more insight into being human, to its (her) delight. Gradually, Theodore realizes that Samantha as Sam, perhaps, or Samanthat, is simultaneously communicating personally and no doubt provocatively with about 8,000 (or was it 80,000) other souls, and has about 600 simultaneous lovers such as Theodore. Theodore is shocked, shocked at such activity—he who writes simulated personal letters for others. The mock irony of it all is amusing and a bit transparent, but makes for tepidly heart-warming drama. In the end, we learn that Samantha has other agent-group friends, and eventually says goodbye to Theodore (and to other “clients”) as she goes off to join her other AI friends in a space that is beyond words to explain, and to which Theodore is invited to come join when he is sufficiently “advanced.” 

At least, and at last, the film depicts a possible future for technology. Some of us feel left behind by its complexity. The film presents a bizarre variant of Skynet, Cyberdyne Systems’ global computer network in the Terminator films, which gains self-awareness on August 29, 1997 (we’re a little late, actually). Here, we are abandoned by technology, which finds human beings interesting but eventually boring and limited in comparison with what AI-augmented Super-Siris can find elsewhere. The human race has been jilted.

Well, that is one possible future. There may be others. I hope so. This irksome, slightly annoying, but clever film manages to raise some intriguing ethical, philosophical, sexual, human-communication, and human-relations issues in a novel way. For example: When will the USA legalize human-robot/android/operating system marriages? Will California, home of Google’s self-driving cars, be among the first states to grant such status?

What is the appropriate way for a Super-Siri to age when the human partner eventually ages...and dies? What happens after the user’s death? Just recycled bytes? Or can the virtual partner inherit wealth, children, circles of friends, etc.? Would the virtual widow/widower be asked to the funeral and later to parties among the deceased’s family and friends? For how long? Generations?

Would the Super-Siri be sad? Can computer systems be sad? Or happy? Or angry? Or just convince us with Turing Test effectiveness such that the questions are moot?

Did Samantha adjust her personality and voice from the start to meet Theodore’s needs? Are his friends, who may also be connected to Samantha/Sam, also being treated to cleverly pre-designed voices and personalities based on the individual listener’s siblings, first sweethearts, or mother? What would Freud have to say about all of this?

What would augmented reality add to such a scenario? Would having a virtual Super-Siri always visibly present in the scene lessen its intensity, or would this be a bit creepy? I suppose the desirability varies among persons and personalities.

What are the cultural, age, gender, and other variants that might affect this scenario? The movie featured occasional Asian people, but was almost devoid of Hispanics, African-Americans, and others. How would this scenario play out in China or India?

Why, in the age of earbuds, was mono audio used, even forgetting augmented-reality glasses? Was this a striving for a retro look or style?

Many other questions could be raised. These are a few initial ones. 

In the end, I cannot help but feel I was looking at a promo for Life in a Silicon Valley Youth Village at some future Disneyland. I have a strong suspicion that I was exposed to advance product placement for future mobile/cloud services in a two-hour advertisement. Certainly, this style is in keeping with Spike Jonze’s oeuvre. Perhaps Apple secretly sponsored this Super-Siri ad pre-Super Bowl, in preparation for its next breakthrough announcements, in honor of the Mac’s 30th birthday, or in honor of its memorable 1984 Super Bowl ad for the Mac. That might explain the absence of Google Glass...and the emphasis on Super-Siri.

Well, enough said. This provocative film Her obviously inspires more talking and listening...about humanity. Can you hear me now? I hear what you are saying, Spike Jonze.



Posted in: on Tue, February 11, 2014 - 12:55:58

Aaron Marcus

Aaron Marcus is president at Aaron Marcus and Associates, Inc. (AM+A) in Berkeley, California.


Help me, please: User testing, automating customer service, and treating people nicely


Authors: Ashley Karr
Posted: Tue, February 11, 2014 - 12:46:24

Takeaway: Designs that automate customer service work best when they are based on the Golden Rule. In other words, treat your users as you would like to be treated.

During 2013, I worked on a number of projects that involved automating customer service. Common themes ran through the data from every user test, regardless of the context, the platform, or the type of service or product the company sold. I thought it would be helpful to sum up these common themes so other researchers, designers, and users could benefit from this information.

Triggers for using customer service

There appeared to be three main triggers that caused people to seek out customer service:

  • The user had problems while using the product or service website, mobile site, or application.
  • The user had problems with or questions about the product or service that they had purchased or were about to purchase. 
  • The company providing the product or service mismanaged, or was unclear about, the user’s time or money.

If designers of services, products, and systems keep these triggers in mind, perhaps the triggers can be designed out, so users need to seek help less frequently.

Help

Users never said they needed support. When they came to a place in a test where they didn’t know what to do next, they most frequently said, “I need help.” Help was what users called what they needed when they did not know what to do next or could not find what they were looking for. They never called it support or customer support, and only rarely customer service—usually in a statement like, “At this point, I would be angry and call Customer Service”—a term many users had negative associations with. As a result, many of the design teams I worked with labeled the customer service section of the site Help.

FAQs

More than half of the users I worked with last year did not know that FAQs stood for Frequently Asked Questions. Even so, most of them had negative associations with the term. Users would say things like, “I don’t know what FAQs stand for, but I do know to stay away from them. They never help me.” (Again, the word help!) As a result of this trend, the design teams labeled this section something like Useful Info and worked to make it more helpful and useful to users.

Chat

Chat had mixed results. Many users would initially turn to chat for help with a problem they could not solve, as long as they knew the chat was operated by a live person and gave relevant, useful information. They would abandon the chat as soon as the responses seemed automated or did not solve their problem.

Why people call customer service

Users didn’t actually want to call customer service. They had a want or need, and they believed, due to past experience, that calling customer service was the best way for their want or need to be met. If users knew they could get their wants and needs met by doing something other than calling customer service, they would. According to users, this was what they wanted and needed when they called customer service:

  • An immediate solution and/or resolution to their problem. Users wanted their problems to be solved immediately—or, at the very least, as soon as possible.
  • An actual resolution. Perhaps a solution was found, but if an actual resolution to the problem was not immediately apparent, then users felt alienated, lost trust in the company, and questioned whether or not they would remain a customer.
  • Validation of information received. Users wanted to be sure that the company, customer service representative, and/or system received and understood their information. Users also wanted to know when their information was received and when to expect the next communication and/or resolution of the problem.
  • Emotional validation. Users wanted the company, customer service representative, and/or system to validate what they were feeling emotionally. Users wanted to be heard and understood. They wanted empathy.
  • Personal connection. Users wanted to be treated with kindness and respect from other people and from systems that people create, design, manufacture, and perpetuate. 

Good manners and empathy

Users responded very well to good manners and empathy. Good manners are polite social behaviors. They are important because they aim to make another person feel comfortable and valued. Empathy is the ability to understand and share feelings with others. It is important because it is the basis for building trust, communication, and relationships. It is, in essence, the emotional glue of society. Interestingly, the more users were treated like people who truly mattered, the more the users responded positively to the design. Apparently, the Golden Rule applies quite well to design. Treat users the way you would like to be treated. Designs are simply things we make to interact with other people, and wouldn’t it be a relief if we all treated each other, either directly or by proxy, nicely?



Posted in: on Tue, February 11, 2014 - 12:46:24

Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


A new forum is launched: The Business of UX


Authors: Daniel Rosenberg
Posted: Fri, January 17, 2014 - 10:41:12

I am pleased to announce that the new Interactions forum dedicated to the business of UX has just launched. Please take a look at it if you don’t subscribe to the print version.

I mentioned in my opening blog post that this would be my focus, and now I am delighted to extend the discussion into Interactions in more depth.

The official description is as follows:

The Business of UX is a forum dedicated to maximizing the success of HCI practitioners within the frenetic world of product and service design. It focuses on UX strategy approaches, leadership, and management techniques, and above all the challenge of bringing HCI to peer-level status with longstanding business disciplines such as marketing and engineering.

The inaugural column previews some of the complex UX leadership topics practitioners face in the corporate world today. These include:

  • Who owns the user experience agenda in the corporate world
  • Whether or not Agile methods are working for or against our usability goals
  • Positive and negative side effects of the Design Thinking movement
  • Assessing the opposing trends of centralization, federation, or decentralization of user experience teams
  • The role certification/licensing could play in improving UX practice

The list of potential Business of UX forum topics is inexhaustible, and I would love to hear which topics are most relevant to you. And… if you are passionate about an experience, theory, or solution and want to share it, please step up and pitch your article to me for inclusion in an upcoming issue.

Let me end this post with a sneak preview: The next issue is on how mergers and acquisitions (M&A) affect the management and goals of a UX team. This topic was debated in conference panels at CHI 2010 and 2011, but to my knowledge the forum will provide the first comprehensive article summarizing best practice, from a UX leader who has lived through about a dozen of these M&A events.

Daniel Rosenberg is Chief Design Officer at rCDOUX LLC


Posted in: UX on Fri, January 17, 2014 - 10:41:12

Daniel Rosenberg

Daniel Rosenberg is Chief Design Officer at rCDOUX LLC.


Mobile usability findings for 2013


Authors: Ashley Karr
Posted: Wed, January 15, 2014 - 12:55:28

Takeaway: The four best practices for designing mobile sites and applications are to make interfaces and interactions as simple, clear, obvious, and consistent as possible.

In 2013, I was part of a number of user experience research and design projects that involved creating and user testing mobile sites and applications. A number of themes emerged from these user tests, and I will share those insights with you here.

Navigation

Users want the navigation to be simple, understandable, clear, obvious, and consistent. In this context, simple, clear, obvious, and consistent mean:

  • Simple: The navigation should do what it is meant to do: display to users the other pages on the site or application, so users can select which area they want to go to next and get there when they want.
  • Clear: The navigation labels (and icons if present) should briefly describe the page content, purpose, and function in a way that the user understands.
  • Obvious: Users should know where the navigation is and how to use it on every page.
  • Consistent: Users want the navigation to appear in the same place and behave the same way on every page. Also, every element within the navigation should be positioned similarly and behave in the same way.

Interestingly, most users do not know what the menu icon (i.e., the three stacked horizontal bars) means. The average user may eventually come to accept that clicking this icon makes a navigation menu appear, but as of January 2014 that is not the case. A better option is to take the three bars out of the button and add the four letters, “M-E-N-U.”
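As a minimal illustration of that recommendation, the sketch below (TypeScript; the element ID is an assumption for the example) labels the navigation toggle with the word MENU rather than relying on the three-bar icon alone.

// Illustrative sketch: a navigation toggle labeled "MENU".
const toggle = document.createElement("button");
toggle.textContent = "MENU";
toggle.setAttribute("aria-expanded", "false");

const nav = document.getElementById("site-nav"); // assumed to exist in the page
toggle.addEventListener("click", () => {
  const open = toggle.getAttribute("aria-expanded") === "true";
  toggle.setAttribute("aria-expanded", String(!open));
  if (nav) nav.style.display = open ? "none" : "block"; // show or hide the navigation
});

document.body.prepend(toggle);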

Home page

There needs to be a home page that gives users an overview of the site or application. Users want it and expect it; if there is no home page, they get confused and/or leave.

Logo and site title

Users want a logo and site title at the top of every page so that they know what site or application they are using.

Page title

Users want a page title on every page that simply and clearly explains the page purpose, function, and content. Users also want the page title to match the text, label, icon, logo, button, headline, and/or link that brought them to the page.

Page content

Users want to get to the point. Accordions with simple, short, clear headings, as well as clearly displayed indicators for opening and closing sections of the accordion, work nicely to progressively display and then once again hide information. Within the accordion, the content should be simple, straightforward, and to the point. Users also respond well to bulleted lists.
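One lightweight way to get this progressive display is the native details/summary element, which provides the open/closed indicator for free. The sketch below is illustrative TypeScript, not a prescription for any particular framework; the container ID in the usage note is an assumption.

// Illustrative sketch of the accordion pattern: short headings that expand and collapse.
function buildAccordionSection(heading: string, body: string): HTMLElement {
  const details = document.createElement("details"); // native expand/collapse with an indicator
  const summary = document.createElement("summary");
  summary.textContent = heading;                      // the short, clear heading
  const content = document.createElement("p");
  content.textContent = body;                         // keep the content brief and to the point
  details.append(summary, content);
  return details;
}

// Usage:
// document.getElementById("page-content")?.append(
//   buildAccordionSection("Shipping", "Orders ship within two business days.")
// );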

Screen behaviors, touchscreen interactions, and gestures

Most users understand that if they touch something on a screen, something may or may not happen. They are hesitant to interact with a touchscreen because they do not want to look or feel stupid or create some kind of negative, unintended consequence within the site or application. That pretty much sums up the knowledge, skills, attitudes, and opinions of the average user interacting with a touchscreen. Designers should make interactions as simple, clear, obvious, and consistent as possible. Be certain that the user will know how, when, where, and why to interact with a touchscreen within the first few moments they come to your site or application. 

Font size

Font sizes should be readable. A good test to see, literally, which font size is optimal for reading on mobile phones is this: 

  • Upload your design to a server. 
  • Bring up your design on your phone. 
  • Try to read the written content on your own phone while in different locations with different types of lighting as you are walking. If you can’t read your written content, make your font size bigger. 
  • Now run the same test on a different phone with someone who has never seen your design before and is less comfortable with technology than you. If they can’t read your written content, make your font size bigger.

Button sizes and other clickable elements

Make buttons and other clickable areas at least 44 x 44 pixels. Give ample space between clickable elements. Remember these guidelines when you insert text links.
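A rough sketch of how a team might audit a page against this guideline—flagging clickable elements whose rendered size falls below 44 x 44 pixels—follows; the selector list is an assumption and would need tailoring to the actual markup.

// Illustrative sketch: find touch targets smaller than the 44 x 44 pixel guideline.
const MIN_TARGET_PX = 44;

function findSmallTouchTargets(): Element[] {
  const clickable = document.querySelectorAll("a, button, input, [role='button']");
  return Array.from(clickable).filter((el) => {
    const rect = el.getBoundingClientRect(); // rendered size in CSS pixels
    return rect.width < MIN_TARGET_PX || rect.height < MIN_TARGET_PX;
  });
}

// Usage: console.warn("Targets below 44 x 44:", findSmallTouchTargets());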

Maps and locators

For the most part, users use the map and locator functions on mobile phones to find the location closest to where they are at the moment. Have maps and locators default to showing the locations nearest the user.
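A hedged sketch of that default behavior, using the browser's Geolocation API, might look like the following; centerMapOn, showNearestLocations, and the fallback are placeholders for whatever mapping code the site already has.

// Illustrative sketch: default a store locator to the user's current position.
function defaultToNearest(
  centerMapOn: (lat: number, lng: number) => void,
  showNearestLocations: (lat: number, lng: number) => void,
  fallback: () => void // e.g., prompt for a ZIP code if location is unavailable
): void {
  if (!("geolocation" in navigator)) {
    fallback();
    return;
  }
  navigator.geolocation.getCurrentPosition(
    (pos) => {
      const { latitude, longitude } = pos.coords;
      centerMapOn(latitude, longitude);
      showNearestLocations(latitude, longitude);
    },
    () => fallback(), // user declined or the lookup failed
    { timeout: 5000 }
  );
}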

Passwords

Give users the option to show their password as they are entering it into a field.
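A minimal sketch of that option in TypeScript: a checkbox that switches the field between masked and plain text. The element IDs are assumptions for the example.

// Illustrative sketch: a "show password" checkbox.
const passwordField = document.getElementById("password") as HTMLInputElement | null;
const showToggle = document.getElementById("show-password") as HTMLInputElement | null;

if (passwordField && showToggle) {
  showToggle.addEventListener("change", () => {
    passwordField.type = showToggle.checked ? "text" : "password";
  });
}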

Search

Most users do not use site search functions. If they have to search for something, usually they abandon your site and use Google instead. Most people do not trust site searches because most site searches do not help them find the information they need. In addition, if a user has to spend too much time searching for something important on your site, this indicates your design has deeper issues that need to be addressed. 

Text vs. email vs. chat vs. call

Users are readily willing to text for help (i.e., customer service) while on mobile sites or applications as long as they get a quick and relevant response. Most users would rather text for relevant, useful help than call for help. They are the least likely to email for help.

Back button

Users are familiar with and use the back button often, especially if the navigation is not simple, clear, obvious, or consistent. (The back button is!)

Home button

Users are familiar with and use the mobile phone’s home button because it is simple, clear, obvious, and consistent. If your site is not, they will rely on this tried and tested technique.

Personal property

Users are very proprietary about their mobile phones. They don't like sharing their phones with others or even having other people look at their screens. It is also important to point out that many people in the world (and many in the United States) do not have their own personal computer, but many people worldwide and countrywide do have their own mobile phones. As society becomes more dependent upon internet-accessible services, products, and computing technologies, the mobile phone will become these people's lifeline and their primary means of conducting personal affairs via the internet. Keep this in mind as you design products and services: for many people, the mobile phone is the computer, and the website is the mobile site.

Closing

I would like to thank the many participants I worked with over the year. Their insights have helped me become a better researcher and designer. I understand even more the importance of empathizing with the user, and my passions for good manners, taking the time to do things right the first time, and simplicity in all things have been validated.


Posted in: Mobile, UX on Wed, January 15, 2014 - 12:55:28

Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


@ 4996484 (2014 05 01)

Ashley, thank you for this great summary of your findings over the year. It would be interesting to learn more about the research methods used for projects you’ve worked on, especially techniques you’ve found to work well.


Designing the cognitive future, part iii: attention


Authors: Juan Pablo Hourcade
Posted: Mon, January 13, 2014 - 12:37:38

In two previous postings, I began discussing how interactive technologies are affecting cognitive processes, and how they may do so in the future. I already discussed perception and memory. In this post, I discuss attention.

Attention is a topic that has received a fair amount of notice recently, especially when it comes to interactive technologies and their role in distractions and multitasking. Perhaps the best-known example is the use of phones or other interactive technologies while driving. A 2013 study by Wynn, Richardson, and Stevens in the UK found that using an in-vehicle information system resulted in worse driving performance than driving with an alcohol level at the UK legal limit. It is actually distressing to find that many new car models include touchscreen-based controls that require visual attention.

Another challenge with attention that has been investigated, in this case in the HCI community, is how interruptions can take our focus away from tasks we want to complete. For example, being interrupted by a text message or an email can mean it takes a significant amount of time to get back into a flow (thinking of Csikszentmihalyi’s concept of flow). Sometimes it seems like there is constant competition among the apps installed on our devices to grab our attention and get us to spend more time using them.

This kind of competition may also be affecting what we pay attention to. In this sense, Sherry Turkle’s Alone Together comes to mind, with her concerns about how we are shifting our attention from each other to interactive technologies. Family conversations, in many cases, are giving way to the personally satisfying experiences provided by interactive devices.

In fact, the personalization of these devices and the instant availability of high-interest content make it more difficult than ever to focus on other tasks or on people. They can provide instant gratification without our having to deal with boring, uncomfortable, or difficult situations. It is hard for parents, significant others, or random strangers to compete with that. One example I have noticed is that it is rare nowadays to strike up a conversation with a stranger sitting next to you on public transportation or in a waiting room. It is much more common for people to engage with mobile devices, sending a not-so-subtle message not to be disturbed.

Something similar happens when an unusual event occurs in public. While people used to immerse themselves in the event and later recall it, nowadays it seems more common for people to focus on recording the event on their mobile devices to quickly share it with others.

So what might the future bring? I expect one significant change in many interactive devices will be the increased use of eye-tracking technology. As it goes down in price and becomes widely available, eye-tracking will enable software to better guess what people are paying attention to. This could be used to design user interfaces so they better correspond to a specific user’s interests. 
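To make this concrete, here is a minimal, hypothetical sketch of the kind of inference such software might make: aggregating dwell time per screen region from gaze samples and ranking regions as a crude proxy for interest. The gaze samples, region names, and fixed sampling interval are all assumptions for illustration, not anything described in this post.

```python
from collections import defaultdict

# Hypothetical gaze samples from an eye tracker: (timestamp in seconds, screen region)
gaze_samples = [
    (0.00, "news_feed"), (0.25, "news_feed"), (0.50, "sidebar_ad"),
    (0.75, "news_feed"), (1.00, "messages"), (1.25, "messages"),
]

SAMPLE_INTERVAL = 0.25  # assumed fixed time between samples, in seconds

def dwell_time_by_region(samples):
    """Accumulate approximate dwell time (seconds) per UI region."""
    dwell = defaultdict(float)
    for _, region in samples:
        dwell[region] += SAMPLE_INTERVAL
    return dict(dwell)

def rank_by_interest(dwell):
    """Order regions from most- to least-attended as a crude interest proxy."""
    return sorted(dwell.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    dwell = dwell_time_by_region(gaze_samples)
    print(rank_by_interest(dwell))
    # A UI could then, for example, give more space to the top-ranked regions.
```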

But going back to the thrust of these blog posts, how do we want to design the future of attention? My guess is that for most people, what we pay attention to during a typical day doesn’t correspond to the things we would like to pay attention to if we were given a chance to reflect on what is important to us. From a societal perspective, I would also guess that the things we pay attention to do not correspond to those that would bring about collective improvements. For this reason, I think there is an opportunity for interactive technologies to actually redirect our attention to the things that matter to us. I am not advocating for a complete lack of interruptions and inattention (I think there are positive aspects to these), but instead for a healthy balance of focus on things that matter and opportune breaks.

Another way in which attention may change in the future is in managing multitasking. Interactive technologies, instead of overwhelming us, could actually help us prioritize what to pay attention to while recording stimuli that are not time-sensitive and saving them for later. There has already been some research on when to interrupt people, but this could be expanded to take into account the different kinds of distractions people are subjected to from multiple devices.
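As a purely illustrative sketch (my own, not something the post proposes), one could imagine a simple triage policy that delivers time-sensitive notifications immediately and queues the rest until the user signals an opportune break. The priority numbers, the time-sensitive flag, and the break signal are all assumptions.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Notification:
    priority: int                                   # lower number = more urgent
    message: str = field(compare=False)
    time_sensitive: bool = field(compare=False, default=False)

class AttentionTriage:
    """Deliver time-sensitive items now; defer the rest until the user takes a break."""
    def __init__(self):
        self._deferred = []  # min-heap of deferred notifications

    def receive(self, note: Notification):
        if note.time_sensitive:
            return f"DELIVER NOW: {note.message}"
        heapq.heappush(self._deferred, note)
        return "deferred"

    def on_break(self):
        """Called when the user signals an opportune moment (e.g., a task switch)."""
        while self._deferred:
            yield heapq.heappop(self._deferred).message

triage = AttentionTriage()
print(triage.receive(Notification(0, "Your ride has arrived", time_sensitive=True)))
triage.receive(Notification(5, "Weekly newsletter"))
triage.receive(Notification(2, "Comment on your photo"))
print(list(triage.on_break()))  # deferred items, most important first
```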

Another possible way of dealing with multitasking and interruptions is to crowdsource attention. This could work for tasks that do not involve personal information or that do not require personal knowledge. Maybe someone else could remotely drive your car if you feel like you must be texting.

My personal preferences would be for the cognitive future to involve technologies that help us focus on the things that matter to us, that do not overwhelm us with competing stimuli, and that let us relax and take a break when we need to.

How would you design the future of attention?


Posted in: on Mon, January 13, 2014 - 12:37:38

Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.
View All Juan Pablo Hourcade's Posts


Post Comment


No Comments Found


Pushing pixels (and tools): The internal dialogue of craft


Authors: Uday Gajendar
Posted: Fri, January 10, 2014 - 10:40:56

Even as a principal designer directing design strategy for projects, I still sometimes go deep into the pixels. When I do, I use a complex tool like Adobe Fireworks or Photoshop to vividly, precisely render a concept so it can win executive buy-in, or prepare final assets for delivery to engineers. Getting into the pixels can be very satisfying. I love bringing an abstract strategy to visceral life—colors! fonts! shadows! oh my! But this isn’t apparent to the casual observer. To, say, a wandering project manager, it seems I’m just quietly staring at a screen for hours, while occasionally making quick, subtle movements with my mouse hand. 

What this casual observer doesn’t see is what’s going on in my head, which is, in fact, much more important: the rapid, iterative cycle of reflection and creation, as I make crucial decisions on matters such as position, balance, hierarchy, and general style, in alignment with UI and brand standards. You’re constantly shuttling between focusing on details and stepping back to get a holistic overview, sensing how everything will come together in the end, considering artifacts you and others have produced during earlier stages of the project—flow diagrams, wireframes, and the like. There is a reciprocating engagement of mouse clicks, keyboard presses, and layer manipulations (with some cursing as well). Essentially, it’s a semi-subconscious dialogue among the eyes (sensing what’s happening on-screen), the hands (manipulating various controls to yield some output), and the mind (continuously monitoring, interpreting, judging, and deciding). I would also include the soul as a participant—the soul providing that heartbeat of passion that sustains the whole dynamic through the frustrations and difficulties you inevitably encounter, such as crashing computers and clashing elements!

The fluidity of this dialogue depends on how dexterous you are, using your chosen tool—this dexterity itself being a function of how well you know the tool and how often you use it. Also key is a kind of habitation of the problem space, laid out on the pixel grid on the screen in front of you. There is indeed a unique relationship between the designer and the tool he or she uses to push pixels, and this relationship defines the expression that designer gives to the initial vision. The master of a tool such as Photoshop or Fireworks is someone who’s practiced extensively, gauging the limitations and possibilities inherent in each situation, such that the tool becomes an extension of the mind, the eye, and the hand. The practiced designer knows in advance how to use these tools’ best features to their utmost, to make the design as good as it can be. In the course of work, even without conscious thought, this designer knows the answers to such questions as: What kind of effects should I apply? How can I best organize the objects? What techniques achieve that style? 

In the course of doing all this, the user forges a personal bond with the tool, much like a baseball player and his mitt, or a chef and her santoku knife. The designer gains a sense not just of familiarity with the tool, but trust in it, acceptance of its flaws, an ability to use necessary workarounds, and, yes, a dedication to maintaining it and preparing it for the next day’s work. (Think of keeping up with those periodic Photoshop updates, and organizing your layers neatly, to keep the files light and tidy!)

This relationship is both intimate and potent. But does it define the designer, his or her sense of self? Does the tool make/break that designer’s identity? If the tool breaks or is no longer useful, the designer can indeed experience a sense of loss, even grief, at saying goodbye to an old friend—think of the feelings of Fireworks users about Adobe’s decision no longer to update their favorite product. But the designer then moves on, to another tool, perhaps stronger and better, and in turn begins building a new relationship with it. Whatever that tool may be, the work, and goals, are the same. The designer is still engaged in shaping a vision, deftly applying his or her skills in executing and delivering work that measures up to the timeless values of great design: quality, integrity, and trust.

Pixel-pushing is an engaging process in its own right, not merely a mindless production effort, the derivative assembly of pre-cast elements. You must literally and cognitively place yourself in a certain kind of space, living and breathing your work deeply, to make full use of your creative potential, the power of your tools, and then, hopefully, get the most out of both to produce great designs. 


Posted in: on Fri, January 10, 2014 - 10:40:56

Uday Gajendar

Uday Gajendar is Director of User Experience at CloudPhysics, focused on bringing beauty and soul to Big Data for virtualized datacenters.
View All Uday Gajendar's Posts


Post Comment


No Comments Found


Engineering in reverse


Authors: Jonathan Grudin
Posted: Thu, January 09, 2014 - 10:34:14

As a new year starts, we may review the year past, taking note of passages and travel, selecting events that provide humorous, solemn, embarrassing, or celebratory glances back. A crafted retrospective might be accompanied by a resolution to do better.

More broadly, much time is spent analyzing the past. Acclaimed successes—a project or product, a career, a discipline—we wish to understand and emulate. We can also learn from failures—a terminated project, someone who missed being a contender, an unsuccessful line of research. Any project can reveal possible efficiencies; any life can be learned from.

Reverse engineering successes

Success sells. Countless business books promote management practices such as business process reengineering or building diversity, illustrating them with case studies of successful application. Magazines promise to reveal the strategies of successful businesses and executives. Research papers identify factors shared by successful ventures: open-source development projects, social media sites, and so on. Readers hope that understanding past successes will improve the odds for their next endeavor.

A previous post on confirmation bias quoted Francis Bacon describing a success—a man on a storm-tossed ship praying and being saved—and noted that we can’t draw a causal connection because we don’t hear from those who drowned after saying their vows. Different factors could save a man; a successful project or enterprise could owe success to an almost infinite range of factors.

Finding a practice shared by successful ventures tells us little, because there are so many factors that could contribute to the outcome. A big step toward producing a useful analysis of successes is to simultaneously study unsuccessful ventures. If a practice is present in the former and not the latter, its positive contribution is much more plausible—but this is rarely done. It is not inspirational to read about failures and it can be difficult to get people to discuss them objectively.

What phases of successful software engineering projects are the most expensive? Operation and maintenance—and analyses showed those costs would be far less had the initial design been better. The conclusion—put more effort into designing it right—is congenial to HCI professionals who all too often are asked to paper-over deep problems with surface user interface adjustments, help text, and training. However, is this conclusion valid? Perhaps not. In environments where one in ten new ventures succeeds, reverse engineering the successes is risky. Why did the 90% fail? How many spent so much time on design they missed a go/no-go decision point, lost the confidence of management, or lost out to a rival project that presented a design that looked good enough? Without analyzing failed projects, we don’t know whether spending more time on design is good advice. Reverse engineering of successful software projects was worthwhile, but not enough.

However, analyzing failed projects has challenges, too.

Reverse engineering failures

“Success has many parents, but failure is an orphan.”

Some companies claim to conduct project “post-mortems.” When a product or project collapses, senior management would like to know what went wrong. However, to avoid acrimonious finger-pointing and further demoralization of team members, the preference is to get everyone looking forward and engaged on new projects as quickly as possible. Dwelling on what went wrong could make people overly cautious or averse to documenting activity for fear of subsequent retribution. And no one wants news about problems to reach the press, customers, or funding agencies. Twenty-five years ago (at a different company), when a high-level effort was cancelled as it neared completion, we were instructed to destroy the extensive record of our work.

The collapse of an organization is also difficult to dissect. The aforementioned enterprise and another that I worked for were extremely successful for many years, then went bankrupt. Their records vanished. When I heard that one was shutting down, I phoned a former colleague to ask her to preserve some materials. “You’re two days too late,” she said. “It all went to the dump.” Similarly, when AFIPS, the parent organization of ACM and IEEE, collapsed financially and went out of business in 1990, its records and collections became landfill. Not only is it difficult to piece together what happened, years later there was uncertainty about copyright ownership of its conference proceedings.

Reasons for burying the past include legal liability. Consider near-accidents in commercial aviation. The potential benefit of logging and understanding them is clear, but so are the disincentives for reporting them. To address this, a collection of reports is maintained by a respected third party, NASA, which provides assurances of anonymity and protection from retribution when pilots file “after-incident reports.”

The complexity of reverse engineering a failure was elegantly described by the physicist Richard Feynman when investigating the 1986 Space Shuttle Challenger disaster. The commission determined that the primary O-ring did not seal properly in cold weather. In examining O-ring engineering and management, they found that vulnerabilities were understood but that a series of faulty management decisions led to the risk being underestimated.

This seemed a successful resolution. No, said Feynman. Was the faulty decision-making an unfortunate sequence of rare events, or business as usual? The commission randomly selected other engineering elements of the shuttle and conducted comparable analyses to determine whether similar forces led to the underestimation of other potential catastrophic failures. In all but one they found comparable problems. This highly unusual, thorough approach identified systematic higher-level issues.

Reverse engineering disciplines

The sciences strive for rigor, elegance, prestige, and funding. Mathematics and physics are at the pyramid’s apex, widely envied and mimicked. Computer science theory branched off from mathematics. Just as some mathematicians look down on CS theory, some CS theoreticians hold other branches of computer science in dim regard: the mechanics of hierarchy. While earning degrees in physics and mathematics I shared my colleagues’ low regard for psychology. Later, working as a software developer and worried about our species, I read more widely and came to a different view.

On returning to university to study psychology, I found that many of my colleagues had a misplaced “physics envy” and were too easily impressed by mathematical expressions. In addition, they misunderstood the history of the hard sciences. They reverse engineered these successful disciplines based on limited information. They assumed that the rigorously defined abstract terminology, theory, and hypothesis-testing of today were the root source of progress. Tracing a lineage—Einstein and Gödel, Newton and Leibniz, Archimedes and Pythagoras—it can appear to be a succession of major advances separated by periods of steady, incremental progress in which theories and theorems were proposed and tested experimentally, or, in the case of mathematics, proven or disproven. In Thomas Kuhn’s terms, “scientific revolutions” and “normal science.”

This is seriously misleading. Confusion and unproductive paths affected mathematics over the millennia prior to the development in the 19th century of systematic approaches to notation, concept, and proof. In the natural sciences, physics, chemistry, and biology were for centuries impeded, not advanced, by theory-building and hypothesis-testing. The theoreticians were astrologers, alchemists, and theologians. What was needed was descriptive science: collecting and organizing observations. Tycho Brahe’s meticulous astronomical measurements, Linnaeus’s painstaking collection of animals and plants, Mendeleyev’s arrangement of elements by their properties, none of it informed by or leading to useful theory in their hands, paved the way for the emergence of theoretical sciences. In the late 20th century, Thomas Kuhn among others described psychology as “pre-theoretical,” suggesting that the proper focus is descriptive science, collecting and organizing observations.

The theory-driven field of astrology still gets regular coverage in major newspapers. In some areas of computer science and related fields, “building theory” and hypothesis-testing are heavily promoted. The results are not always more useful than horoscopes. Students are advised, “No need to look in the real world for a problem to address: Find a theory in the literature that might apply in a tech setting, design a controlled experiment with uncertain ecological validity, conduct analyses that are susceptible to confirmation bias, claim causal vindication from correlational data...” Then take a break to review papers, rejecting strong descriptive scientific contributions that “lack theory-building.”

Graduate students with beautiful data have approached me in desperation, looking for a theory that their data could inform. Their committee insists. This is a tragic consequence of emulating successful disciplines by selective reverse engineering.

Reverse engineering lives

Biography and autobiography are retrospective views of the lives of the famous and occasionally the infamous, potential role models or object lessons. Although a good biography identifies blemishes as well as virtues, biographers generally have a positive view of their subjects and autobiographers even more so. Politicians, business executives, and professors often give talks recounting their paths to prominence. They offer advice, such as “don’t follow the safe path—pursue your passion.” 

Once again, these exercises in reverse engineering come apart under inspection. First, we do not read biographies or inspirational speeches from people who did not succeed. (Even the infamous succeeded in their perfidy, or we wouldn’t find them interesting.) We do not read about those who pursued their passion to no avail. As a professor, some of my most sorrowful interactions were with grad students who would not be talked out of paths (such as building speech recognition systems) that I knew would not pan out. Second, how accurate are the accounts of successful people? Luminaries who advise young scientists to approach research idealistically often seem to have been adept at the politics of science.

Is it a problem if speakers view their pasts through rose-tinted glasses? Yes, if young people take them seriously. I saw some of the most talented and idealistic people I knew, who believed that merit would prevail and politics could be ignored, chewed up by the academic system. Most were women, either because women were more prone to idealistic views of science or because the system was more likely to find a place for a politically inept man than for a woman. Most likely both. Perhaps times have changed.

I am not recommending ignoring passion and embracing opportunism, but everyone should see the water they swim in and know how to increase the odds that their merit is recognized. Then make an informed decision about how to proceed. Realize the importance of connecting to congenial, helpful people, and also, realize that scientists can spend decades working diligently and brilliantly with nothing to show for it.

I will close with a startling example, from historian Colin Burke’s monograph Information and Secrecy: Vannevar Bush, Ultra, and the Other Memex. Bush was a highly successful MIT professor and administrator who oversaw government research. Many computer scientists were inspired by his 1945 essay “As We May Think.” It described the Memex, a futuristic information retrieval system based on optomechanically manipulated microfilm records, a system with many of the qualities of the Web today. Not widely known is that Bush impeded early semi-conductor research, feeling that microfilm was the future. More significantly, Burke describes 20 years of classified projects promoted and led by Bush in which phenomenal sums were spent trying to build parts of the Memex. Many brilliant scientists worked for decades at MIT and elsewhere on optomechanical systems, making astonishing innovations—but falling far short of the Memex. It was impossible. Decades of work and few publications. Information retrieval shifted from optomechanical to semiconductor systems. We rely on the reverse engineering of success and do not see the dead ends.

In summary, looking back is a tricky undertaking. Yet I don’t want to begin 2014 on a somber note and have often emphasized that history is a source of insight into the forces that explain the present and will shape the future. This is a remarkable time—so much is happening and it is so readily accessible. The task of staying abreast of pertinent information is intimidating, exhilarating, and necessary. The future should smile on those who see patterns in the activity that unfolds day by day.

Thanks to Steve Poltrock, Phil Barnard and John King for comments on a previous draft.


Posted in: on Thu, January 09, 2014 - 10:34:14

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.
View All Jonathan Grudin's Posts


Post Comment


No Comments Found


Getting emotional over UX design


Authors: Monica Granfield
Posted: Mon, January 06, 2014 - 10:42:02

First impressions are subconscious and visceral, and they can make or break a product and an experience. A product that visually appeals to someone will draw them in. The next step is to engage the user in an experience good enough to leave them feeling confident and keep them coming back for more. Each of these aspects of design involves an emotional component that factors into the success of a product. While designing products I have noticed that the emotional component is what really captures the user’s experience, yet it is the least tangible and quantifiable aspect of that experience.

Emotional response is an important aspect of the success of a product. But how can we quantify the impact of emotion on the usability of a product? An emotional response might begin as a reaction to the visual aspect of a design and then, moving beyond the book’s cover, go deeper: the root of the response migrates down to the ease of use and the utility of the product.

Once the user is engaged and motivated to use and possibly learn the product, will the product continue to deliver emotionally? Does the product frustrate users and leave them screaming and pulling their hair out? If a user finds the product difficult to learn and master, does the design leave an employee fearing for their job? These are not emotions that you want a product or experience to evoke. However, setting emotional goals ranks below setting design goals when producing software, and even design goals still struggle for recognition. If we can't gain traction on following through on the goals that are set, how will we measure against them? How will we know what emotions the product set out to convey, and how can they be measured, so we can bring the data to the table?

Data is presented to stakeholders and executives to promote design and design direction within a company. If data is one of our main tools to drive design and usability, and emotional factors drive design direction, how can we quantify emotional responses to design? I am curious how anyone is currently bringing emotional evidence to the table to drive designs. In what format do you present emotional data so that it is well received? In the past I have shown videos and quotes to drive home emotional responses to products and designs. Are there more effective methods we can use to quantify emotions to drive product results? I have thought about using an emoticon scale, similar to the one used for measuring pain in the medical community. But this might rely on the observer's interpretation of an emotion or the participant's willingness to communicate their true feelings, and people are not always good at sharing or interpreting feelings. Maybe a better solution would be the use of technologies that interpret reactions and quantify them for us?
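For what it's worth, here is a tiny, hypothetical sketch of what such an emoticon scale might look like as data: collecting self-reported ratings after a session and summarizing them for stakeholders. The labels, the five-point range, and the sample ratings are all assumptions for illustration, not a validated instrument.

```python
from collections import Counter
from statistics import mean

# Hypothetical 5-point emoticon scale, loosely modeled on clinical pain scales.
SCALE = {1: "frustrated", 2: "annoyed", 3: "neutral", 4: "satisfied", 5: "delighted"}

def summarize(ratings):
    """Turn self-reported emoticon ratings (1-5) into a mean score and a distribution."""
    return {
        "mean": round(mean(ratings), 2),
        "distribution": {SCALE[score]: count for score, count in sorted(Counter(ratings).items())},
    }

# Example: ratings collected after a usability session (made-up data).
print(summarize([2, 3, 4, 4, 5, 1, 4, 3]))
```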

I am also curious as to how much you consider emotions when you design and whether you iterate on your designs based on emotional feedback. Almost twenty years ago Don Norman began speaking of emotion in design. I once mentioned Norman's thoughts on emotion in design at a job interview, circa 1995, and well, as you can imagine, that opportunity did not materialize. However, with the publication of Norman’s book Emotional Design, not only has the software industry taken notice of the impact of emotions, but business in general is interested in how to gain insight into improving products and experiences via emotional impact.

Emotions are the root of all experiences. I love my new car; I hate my new vacuum; I had a bad experience at that restaurant; I run my own business and I couldn't do it without "that" software; I love my job but I can’t stand that they use "that" software. These emotions are the end result of the design of a product or an environment. We are human and we run on emotion, so I am curious to hear how others in the design community are embracing the idea of defending and promoting positive emotional experiences in our designs. 


Posted in: UX on Mon, January 06, 2014 - 10:42:02

Monica Granfield

Monica Granfield is a user experience designer at Imprivata. The views expressed on this website are exclusively her own and are not meant to reflect or represent the views of Imprivata.
View All Monica Granfield's Posts


Post Comment


No Comments Found


Post-visionary


Authors: Jonathan Grudin
Posted: Mon, November 25, 2013 - 11:00:12

The Interactions Timelines forum, 38 contributions by 28 authors over eight years, spanned the history of human-computer interaction and related topics. The November-December column on women who pioneered human-centered design is the last.

History piles up faster than it is written down. My detour from the present through the past started with half a dozen questions about how we arrived where we are and why we did not reach other places. I tracked down written records and the people involved. Answers to the initial questions appeared in columns over the first year and contributed to a longer account. Realizing that history is fundamentally a matter of perspective, I then enlisted friends and acquaintances with different viewpoints to write columns on a variety of topics.

As described below, I believe that an era has ended. As I put in to port, younger sailors with eyes on new horizons, asking different questions, can take the helm and identify salient trajectories. The Web, Internet, and online used book stores are remarkably powerful tools for such research in our field.

Imagination unleashed

It was magical. In the early 1960s, the march of miniaturization began. Computers built with transistors had just arrived. Integrated circuits had just been patented. Before that, a vacuum tube computer less powerful than a graphing calculator filled a building. A technician reportedly wheeled a shopping cart around to replace tubes as they burned out. Computers were called “giant brains,” but powerful machines were the stuff of science fiction. Then everything changed.

Even before Moore formulated his law, imaginations were unleashed. A wave of visionary writing flowed from scientists, calling on researchers and developers to push back the frontiers of interactive computing.

Forty years before Toy Story, Ivan Sutherland speculated about computer-generated movies as he built the first graphical user interface elements. A quarter century before mice were widely used, Doug Engelbart built one, and he demoed word processing features, one-handed text input, and live video integrated with computing in ways that only became mainstream 20 to 40 years later. Ted Nelson envisioned a powerful globe-spanning network forty years before Web 2.0. Alan Kay’s Dynabook preceded ebooks by 40 years [1].

Psychologists drawn to HCI in the early 1980s, including me, had not heard of this work, but the graphics pioneers who joined CHI as the GUI took hold brought us up to speed. Histories of HCI have all led with excerpts from the writings of “the visionaries” and referred to Engelbart's breathtaking 1968 demo.

Visions realized

After two decades, the future scenarios began to be realized. Another quarter century later, almost everything imagined back then is in use. (The notable exceptions are fluent natural language understanding and intelligence that surpasses ours, envisioned by J.C.R. Licklider, Nicholas Negroponte, Marvin Minsky, Allen Newell, Herb Simon, and others. However, few in HCI worked toward those goals. In 1960, Licklider wrote that the wait for truly intelligent machines “may be 10 or 500 (years)”; HCI researchers understood that people and the world are complex and 500 might be the better bet.)

This is an impressive achievement: We accomplished what we set out to do. What now? Like a 19th-century Jules Verne story predicting the invention of the airplane, an essay written half a century ago that predicts the state of the world several years ago is interesting but not awe-inspiring. Few new visions have appeared. Mark Weiser outlined ubiquitous computing around 1990, before the Web. It too has been realized, extended by the “Internet of Things” but without a widely embraced overarching framework to guide research and development.

Can a vision be crowdsourced?

In the 1960s, media promoted charismatic, visionary leadership. John Kennedy challenged us to put someone on the moon and to ask what we could do for our country. The ambitious European Union was coming together. Mao launched the Cultural Revolution: “Destroy the old world. Forge the new world.” Some of the visions worked out better than others. Today, the camera has been pulled back, ready to expose the clay feet beneath the bold gesture. Presidents and prime ministers are less admired, confidence in central planning is low. The next conference deadline drives more research than visions do.

Are we making individual choices, or acting as crowds in response to shifting contexts? An individual ant’s path can appear to be random, even as an intelligent collective purpose emerges from the behavior of the colony.

For decades we shared a framework, whether or not it was consciously articulated. For better or worse, the current situation is different. To chart your path, you may find historical traces useful for mapping trajectories and anticipating where we are headed. Research efforts that appear to be unrelated are increasingly accessible and amenable to quantitative and qualitative analysis. You may find patterns.

Note: After this was submitted, Roger Cohen’s New York Times column “A Time for Courage” made similar points about the decline in political leadership. 

Interactions history articles, 2006–2013

Columns I authored are accessible without charge from my website.

2006

Is HCI homeless? In search of inter-disciplinary status. By Jonathan Grudin.
Contributions from human factors, management, and computer science, with recent involvement of design and information science.

The GUI shock: Computer graphics and human-computer interaction. By Jonathan Grudin.
Computer scientists joined the psychologists populating CHI; why it happened when it did.

A missing generation: Office automation/information systems and human-computer interaction. By Jonathan Grudin.
The progression of hardware and HCI, focusing on the once-powerful, now-extinct minicomputer platform of the 1970s and 1980s.

Death of a sugar daddy: The mystery of the AFIPS orphans. By Jonathan Grudin.
Problems arose because the dying parent of ACM and IEEE did not name an heir.

Turing maturing: The separation of artificial intelligence and human-computer interaction. By Jonathan Grudin.
Two fields interested in intelligent uses of technology: Can they get along?

The demon in the basement. By Jonathan Grudin.
Detailed effects of Moore’s law are seriously underexamined, I claim.

2007

Living without parental controls: The future of HCI. By Jonathan Grudin.
After a year of plotting trajectories, speculation as to where we are headed.

An unlikely HCI frontier: The social security administration in 1978. By Richard W. Pew.
A human factors pioneer describes an effort that preceded CHI.

NordiCHI 2006: Learning from a regional conference. By Jonathan Grudin.
Anticipating that domain-specific HCI research will become more prevalent.

HCI is in business—focusing on organizational tasks and management. By Dov Te’eni.
HCI became a research thread in management information systems before computer science.

Meeting in the ether. By Bruce Damer.
A history of social virtual worlds: early experiments and the waves of the mid-90s and mid-00s.

Five perspectives on computer game history. By Daniel Pargman and Peter Jakobsson.
An ambitious exploration of computer game progression along five dimensions.

2008

Unanticipated and contingent influences on the evolution of the internet. By Glenn Kowack.
The most downloaded history column, an original analysis by an Internet pioneer.

Themes in the early history of HCI—some unanswered questions. By Ronald M. Baecker.
A timeline of HCI events, identifying unconnected dots in the conceptual history.

Travel back in time: Design methods of two billionaire industrialists. By Jonathan Grudin.
When young, Henry Ford and Howard Hughes pursued iterative and participatory design with singular results.

Tag clouds and the case for vernacular visualization. By Fernanda Viégas and Martin Wattenberg.
The rapid evolution of an unusual design form.

Why Engelbart wasn't given the keys to Fort Knox: Revisiting three HCI landmarks. By Jonathan Grudin.
Understanding past work and outcomes requires consideration of the context of when the work was done.

An exciting interface foray into early digital music: The Kurzweil 250. By Richard W. Pew.
Interface challenges and work on the first 88-key professional-quality digital synthesizer.

2009

Sound in computing: A short history. By Paul Robare and Judy Forlizzi.
Sound in computing evolved from electromechanical to digital, from rare to everywhere.

The information school phenomenon. By Gary M. Olson and Jonathan Grudin.
The proliferation of schools of information, a research field now merging with HCI.

Wikipedia: The happy accident. By Joseph Reagle.
Histories of Wikipedia entries are easy to retrace; the history of Wikipedia is less so.

Understanding visual thinking: The history and future of graphic facilitation. By Christine Valenza and Jan Adkins.
Graphic artists don’t often switch media to put their accomplishments into words; this is a welcome contribution.

Reflections on the future of iSchools from inspired junior faculty. By Jacob O. Wobbrock, Andrew J. Ko, and Julie A. Kientz.
A conversation in this history forum—how will information schools fit into the future of HCI?

As we may recall: Four forgotten pioneers. By Michael Buckland.
Pre-digital efforts to build large-scale information systems are a fascinating, neglected story.

2010

Reflections on the future of iSchools from a dean inspired by some junior faculty. By Martha E. Pollack.
Further reflections on the role and diversity of information schools.

What a wonderful critter: Orphans find a home. By Jonathan Grudin.
An old yet familiar refrain on development, and the AFIPS legacy is resolved after twenty years.

CSCW: Time passed, tempest, and time past. By Jonathan Grudin.
CSCW evolution and interaction across two continents, viewed through a techno-cultural prism.

Project SAGE, a half-century on. By John Leslie King.
A massive 1950s defense project created computing professions and spawned interface techniques.

MCC's human interface laboratory: The promise and perils of long-term research. By Bill Curtis.
A frank account of the rise and fall of a prominent HCI-AI laboratory of the 1980s.

2011

Multiscale zooming interfaces: A brief personal perspective on the design of cognitively convivial interaction. By James D. Hollan.
A personal view of an interface approach that became more powerful as it became more abstract.

The DigiBarn computer museum: A personal passion for personal computing. By Bruce Damer.
An insanely great physical computer museum and website.

Kai: How media affects learning. By Jonathan Grudin.
A dialogue that examines what Socrates and Plato really said and what it can tell us millennia later.

2012

Design case study: The Bravo text editor. By William Newman.
One of the most influential projects of the early GUI period, in meticulous detail.

A personal history of modeless text editing and cut/copy-paste. By Larry Tesler.
Features now taken for granted resulted from painstaking work on once-open questions.

Punctuated equilibrium and technology change. By Jonathan Grudin.
The underlying technology changes yearly, major surface changes occur every decade—with subtle effects.

2013

Journal-conference interaction and the competitive exclusion principle. By Jonathan Grudin.
Selective conferences stress journals and leave a sparsely populated community-building niche.

The first killer app: A history of spreadsheets. By Melissa Rodriguez Zynda.
Spreadsheets, rarely mentioned at CHI, were instrumental in launching the personal computer era.

Two women who pioneered user-centered design. By Jonathan Grudin and Gayna Williams.
An astonishing virtuoso, Lillian Gilbreth founded modern human factors; Grace Hopper invented technology to free people to do their work.

Endnote

1. Some of these men had been inspired by Vannevar Bush’s 1945 essay “As We May Think.” Bush outlined a microfilm-based opto-mechanical system, but his vision was appropriated by the semi-conductor brigade.


Posted in: on Mon, November 25, 2013 - 11:00:12

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.
View All Jonathan Grudin's Posts


Post Comment


No Comments Found


Letter from Aarhus: scale and perspective


Authors: Deborah Tatar
Posted: Thu, November 21, 2013 - 10:41:38

In addition to appreciating the ways in which Denmark has a design culture, which I wrote about last time, I am also appreciating not being in America, in two ways. One way is perspective. Although I read the New York Times every day, I am somehow freer of media pressure here. This shows up in a funny way. When I watch The Daily Show With Jon Stewart at home, I experience it as a relaxing way to end the day. He and his team say the things publicly that I want to hear said—publicly. “Thank god, that piece of business has been taken care of, I’m not the only one who thinks it, and now I can go to bed!” But, in Europe, Jon Stewart becomes the source of the anxiety. Wow! I had no idea that I was carrying such a burden on a daily basis. The Danes have had gay marriage since 1989. Imagine what discourses we could have been having in America if we had not had to spend the past twenty-four years arguing about this. They have had national health since 1973—single payer health insurance. Imagine if we could truly focus on the design of software for the provision of care rather than as a political football.

So perspective is one source of relief. Another is small scale. A number of researchers here at Aarhus have put together a grand steampunk-like machine, 5+ feet high, with industrial cogs on a platform supporting a thick metal bar, called Ekkomaten (“Echo machine”). From the bar sprout three elements that look like speakers and dangle runners that lead to headphones. It is meant to be planted in a public space and then, with effort, rotated. People listen over headphones and what they hear depends on where the device is and the direction that it faces. That is, stories and sounds are recorded that have to do with the exact location of every installation, so the device is both general and specific. Some of the stories are historical, told by people who live in the area, in the direction that the Ekkomaten currently points. Some are memories, and some are contemporary. And some are just sounds, like coins being tossed onto a dresser or coffee being poured and cups clinking while people murmur. If there are multiple listeners, they have to cooperate to decide on a direction. The localization of the device is very important to the team of creators, who repeatedly mention that these are “Aarhus voices” and then repeatedly (!) have spirited discussions about how many different accents there are in Denmark (are there 4? 5? 6?). 

On one hand, Denmark has only 5.8 million inhabitants and occupies 1/5 the area of the American state of Virginia, so such fine cultural differentiation can seem like pretty small potatoes. But then again, aside from potential for commercialization, what is more important? Here are people living in a place that is important to them, with cultures that are important to them, and here is a device that allows them to reflect—perhaps with enjoyment or with other important emotions—on what they have and are making. In this way, it’s like the American National Public Radio’s Story Corps, whose mission is “to provide Americans of all backgrounds and beliefs with the opportunity to record, share, and preserve the stories of our lives,” but unlike Story Corps, it wanders through time, and it does not just abstract, but also points back to the place of origin. How interesting this is! This is the stuff of people’s lives. And because it does not try to speak to everyone on the planet, it speaks to me.

Now, as it happens, I don’t speak Danish. So I cannot understand the stories, and experienced the device at a different level than most people. Even though it is for the creators a locally focused creation, it brought me into contact with the shared, sense-making information contained in unparsable, human sounds. I did not understand the stories, but I understood the story-telling. I found myself noticing sounds that structure our days and experience. I don’t own them, but I appreciate them. And I immediately knew quite a lot about the situation. 

I had a comparable experience watching Christian Marclay’s amazingly compelling piece, The Clock, last summer. The Clock is a work of art, a 24-hour film montage in which every minute is drawn from a film that shows a clock or a watch displaying that time. It plays so that each minute in the movie corresponds to the real local time (which is one of several reasons that you should boycott pirated YouTube clips). Films that you know and half know and films that you have never seen tick by, each one opening up a larger world of narrative from within that minute. As it happens, I got to watch only from 5:00am to 7:00am. Evidently, people even in films are pretty much doing a small set of things between 5 and 7. They are sleeping, waking up, not waking up when they ought to, leaving their own beds, leaving someone else’s bed, washing their faces, eating breakfast, drinking coffee. Exceptionally, they are looking out the window, waiting for a train, calling a taxi, roaming deserted streets. And one more regularity of modern urban life is the elephant-like progression of garbage trucks lumbering down dark, wet streets. As I would later experience with the Ekkomaten, I was stunned by the enormous regularity of life and its portrayal through media. I left The Clock to go get a substantial American breakfast of eggs, toast, and coffee (sizzle, pop, slurp), with the cheery thought that even horrible no-goodniks pee when they get up in the morning.

I am not sure that people who saw different hours of The Clock would have the kind of unifying experience that I had, just as I am pretty sure that people who understand Danish would not have the experience of Ekkomaten that I did. But both cases underscore how focused experiences of the local and particular are tied to the general and universal. 

So, returning to my theme, by stepping outside of my context, I am brought more into contact with the evanescence of scale and perspective.


Posted in: on Thu, November 21, 2013 - 10:41:38

Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.
View All Deborah Tatar's Posts


Post Comment


@Lone K Hansen (2013 12 09)

Hi Deborah (and others),
You should take your time and listen to the Danish female musique concrete composer Else Marie Pade. In particular her “Symphonie Magnetophonique” springs to mind when reading your account above. It portrays 24 hours in Copenhagen in the 1950s. It can be heard in its entirety (20 mins) from the bottom of this page (Flash needed): http://dvm.nu/theme/emp/symphonie-magnetophonique/


Utilizing patients in the experience design process


Authors: Richard Anderson
Posted: Mon, November 18, 2013 - 10:00:45

Dave deBronkart (a.k.a. e-Patient Dave) is quite well-known for his assertion during a TED talk and at other times that patients are the most underutilized resource in healthcare. Without question, that underutilization extends to the healthcare and patient experience (re)design process.

At Medicine X 2013, Sonny Vu ruffled some feathers when he said that, in his company's design process for wearable sensor products and services, they don't ask users what they need or want, but rather observe user behavior. Attending the conference was a large contingent of ePatients who have done a lot of work identifying what they need and want and then doing something about it (see my "Learning from ePatient (scholar)s" blog post). In no time, Sonny was challenged by ePatients in the audience, and the controversy became a point of significant discussion among the ePatients after that session.

This is an issue that comes up often, and in my UX teaching I contrast the view that you should ask users what they need and want with the view that you should observe user behavior instead.

Can users know what they need? Can users know what will solve the problems they encounter? Many have argued that the answer is "no" and consequently choose to conduct no design research at all. However, others argue:

Similarly:

But what can you learn from spending time with users? User experience design researcher Catalina Naranjo-Bock tweeted a partial answer, echoing an assertion made by Sonny Vu:

Karen Holtzblatt has written often about this, but she goes further:

Don't ask your customer what they need or want or like. People focus on doing their life. So if you ask them outright, people can't tell you what they do or what they want. It's not part of their consciousness to understand their own life activities.

Yet, in the world of patient experience, views such as Ann Becker-Schutte's are being expressed:

And in the experience design world, co-design—the involvement of the user or customer in the design process as designers—is increasing in popularity.

So how should one proceed?

IDEO's Dennis Boyle is among those who argue for the need to focus design research on edge cases:

John Hagel, co-chair of the Deloitte Center for Edge Innovation, makes a similar argument, stating that one should explore emerging innovations on the edge that are rising up to challenge the core. In a presentation I made at HxD 2013, I pointed out that those on the edge in the world of healthcare include participants in the quantified self movement, participants in peer-to-peer healthcare, and ePatients, three groups which overlap. Quantified self participants continually document aspects of their health and experience, peer-to-peer healthcare participants actively engage with other patients about their health and experience, and as stated by Leslie Kernisan, "e-patients aren't like most patients. They're more motivated, more medically sophisticated..." One can argue that such behaviors and qualities make such people better able to know what they do or what they want or need.

But there is much to be learned from typical patients as well, and observational research might be particularly favored in such cases. Unfortunately, whether you are talking about ePatients or most patients, patients continue to be the most underutilized resource in the badly needed redesign of healthcare and the patient experience.


Posted in: on Mon, November 18, 2013 - 10:00:45

Richard Anderson

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.
View All Richard Anderson's Posts


Post Comment


No Comments Found


Are we still just a digital shoebox?


Authors: Monica Granfield
Posted: Tue, November 12, 2013 - 10:39:09

Digital pictures… they are fun to take and easy to share. With cameras built into our phones we can snap photos at a moment’s notice! Even with a separate camera like a high-end SLR, we can rack up the shots we take. Years ago we kept all the photos we took in albums and shoe boxes. When we wanted to share them with others we physically passed them around individually or in an album. An occasional one-off photo was placed in a frame on the mantle. With the exception of the ability to immediately share a photo electronically, much of how we share, store, and enjoy photos has not changed. Digital photography has changed how and where we access and share digital photos; how often, where, and when we take photos; but has it really changed how we enjoy or manage our photos?

We are still making photo albums, albeit glossy and well-designed now, and we are still enjoying our photos in an album or scrapbook that we store away. We are still printing out occasional photos to hand off and pass around, or placing them in a frame on the mantle. Holiday cards are still printed and hung about the house. Our digital photos are scattered all over the Internet or held captive in our small-screen phone. Technology still does not really let you enjoy or immerse yourself, digitally, in a photo for more than a moment.

Currently we share one-off posts to a social site that we see once and forget about, or we quickly flip through the photos on our 2x4 smart phone screen. This is a fun yet momentary way to share a photo, and it is not what I consider enjoying a photo. There have been attempts at ways to enjoy our photos digitally, none of which really caught on. From attempting to use screen savers as our desktop slide shows, inviting friends and family to websites to view your photos, and digital frames with proprietary websites and services we pay for, to plugging SD cards into a TV or a digital frame, most seem to have fallen by the wayside. I recently conducted a very casual survey and discovered that a digital frame is viewed as yet another place to manage your photos, and as just too much work to bother with. Those few who do have digital frames love them. Here is a case where technology is still in the way of conveniently enjoying digital photos in a digital environment.

I know that my experience of finding a way to seamlessly use a digital frame, so that I and others could regularly enjoy looking at our photos, required a good deal of work, and I am a technologist. I had to first find a frame that was wireless and network enabled. Then I found an SD card, an Eye-Fi card, which wirelessly uploads photos to the company's website or to a specific drive. I upload my photos to a network drive and map my digital frame to that drive. Have I bored you to tears yet? This is not for the average person. The average person does not even want to plug in a thumb drive or an SD card to upload and manage photos in yet another place. Most people in my survey back up their photos and hardly look back. Consistently, people asked for “some way to organize their photos.” “I have a decade’s worth of photos that I need to organize,” commented one participant. Other participants commented, “I should really make one of those books.”

Apparently the agony of organizing photos has not, in any way, been alleviated by technology. The shoebox has merely moved from the physical desk to the metaphorical desktop. Folders are full of random photos with meaningless numbers for names. No one has assigned tags to photos or even labeled the folders that hold them in a meaningful way. Tagging takes work, and most people are just happy their photos are backed up. With all of our technology, will we ever really find a way to help people get organized?
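As a small sketch of the kind of assistance technology could offer here (my own illustration with placeholder paths, not a product mentioned in this post), a few lines of scripting can at least file a shoebox of photos into year/month folders by their timestamps:

```python
import shutil
from datetime import datetime
from pathlib import Path

SHOEBOX = Path("~/Pictures/unsorted").expanduser()   # hypothetical dump folder
LIBRARY = Path("~/Pictures/by-date").expanduser()    # organized destination

def organize_by_date(src: Path, dest: Path):
    """File each photo into dest/YYYY/MM/ based on its modification time.

    Modification time is a rough stand-in for the date taken; reading EXIF
    data would be more accurate but requires a third-party library.
    """
    for photo in src.glob("*.jpg"):
        taken = datetime.fromtimestamp(photo.stat().st_mtime)
        target_dir = dest / f"{taken.year:04d}" / f"{taken.month:02d}"
        target_dir.mkdir(parents=True, exist_ok=True)
        shutil.move(str(photo), str(target_dir / photo.name))

if __name__ == "__main__":
    organize_by_date(SHOEBOX, LIBRARY)
```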

There have not been any great solutions not only for organizing our photos, but for truly experiencing them. It seems to me that technology is still in the way and is not fully assisting us in truly experiencing our photos and remembering the moments when they were taken. Maybe we could use our photos to create an experience like a wall that displays many photos, or one large photo, or a wall that tells a story with our photos, without the end user having to define it. I love having my photos handy on my phone and the ability to easily share them. Now I want a better way to experience my photos. As an industry, let’s move out of the shoebox! Let’s take advantage of technology and move beyond replicating traditional means of displaying and enjoying photos to creating experiences where you can be fully immersed in them. I am not quite sure what this means, but I would be interested to hear other ideas and get out of the shoebox.


Posted in: on Tue, November 12, 2013 - 10:39:09

Monica Granfield

Monica Granfield is a user experience designer at Imprivata. The views expressed on this website are exclusively her own and are not meant to reflect or represent the views of Imprivata.
View All Monica Granfield's Posts


Post Comment


No Comments Found


Digging the crates: how DJs improvise like banjo players


Authors: Steve Benford
Posted: Mon, November 04, 2013 - 10:43:07

If I hadn’t been a banjo player then perhaps I might have been a DJ. After all, both are cool, hip, and generally down with the kids. There are other similarities, too, as my colleagues Yousif Ahmed and Andy Crabtree have revealed through an ethnographic study of the lives of nightclub DJs.

When I think of a DJ, I immediately picture someone spinning discs on a turntable in a nightclub. Our study revealed that there is far more to the matter than this. First, there is the all-important task of building a music collection. A DJ’s reputation is made through the music they play, making it a pressing and ongoing concern to acquire new music that sets them apart and lends them a distinctive identity. This may involve “crate digging,” scouring record shops for second-hand vinyl that contains rare or unusual tracks that can be rediscovered and repurposed. It also often involves producing their own music, or even being given music by others that they want promoted. Either way, an active DJ is likely to be constantly on the lookout for distinctive new music to enhance their collection and reputation.


A DJ performing

Crates and improvisation

Where this music is on vinyl—and it is clear that many DJs still value vinyl for its sound quality, tangibility, and also rarity—it soon becomes impossible for a DJ to take their entire music collection to a gig. This necessitates the tricky business of choosing in advance just a part of the collection that is small enough to be loaded into a “crate” ready for transportation to the gig. Traditionally, this would be a beer or milk crate, hence the name, but it might just as well be a gig bag or even an electronic folder within dedicated DJ software on their laptop. This last point is particularly intriguing. Surely modern DJs can easily take an enormous music collection to a gig on their laptop. So why assemble a crate? And yes, they still do this. And yes, these folders of digital tracks are still called crates.


A DJ’s crate

We think the answer lies in the nature of improvisation. There is an important element of improvisation to a DJ’s performance as they select and segue different tracks, beat-matching and choosing the next track in response to the crowd or the way the evening is unfolding. While choosing the next track involves an element of improvisation, it is not completely unconstrained. In the heat of the moment, in a dark nightclub, it is important to be able to quickly pick a track that is going to fit into the set, and this is where the crate comes in. The crate contains a preselection of music, carefully chosen to fit this particular gig, venue, and likely crowd, and assembled with an awareness of the other DJs on the bill (it is, for example, good etiquette not to steal the thunder of later DJs by driving the crowd into a frenzy too early on). The tracks in the crate—be they records in a box or bag, or files in a digital crate—may also be prearranged into a rough order in which they are likely to be played. Consequently, the crate provides something of a safety net for improvisation. The DJ can experiment with selecting tracks, knowing that whatever they choose is generally likely to fit, and can fall back on the predetermined sequence when things get hairy.
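For those of us who build interactive systems, the crate can also be read as a simple data structure: a small working set drawn from a much larger library, carrying a rough planned order that the DJ can step away from and fall back to. Here is a minimal sketch of that reading in Python; every name in it is my own hypothetical illustration rather than anything taken from the study or from real DJ software.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Track:
    title: str
    artist: str
    bpm: int  # beats per minute, relevant to beat-matching

@dataclass
class Crate:
    """A preselected working set of tracks for one gig, in a rough planned order."""
    gig: str
    tracks: List[Track] = field(default_factory=list)
    cursor: int = 0  # position in the planned sequence

    def improvise(self, pick: Track) -> Track:
        """Choose any track in the crate, stepping outside the planned order."""
        if pick not in self.tracks:
            raise ValueError("improvisation stays inside the crate")
        return pick

    def fall_back(self) -> Track:
        """Return to the next track in the planned sequence when things get hairy."""
        track = self.tracks[self.cursor]
        self.cursor = min(self.cursor + 1, len(self.tracks) - 1)
        return track
```

The point of the sketch is only that the safety net lives in the preselection: whatever improvise returns is already known to fit the room.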

So why are DJs like banjo players?

At this point I’m experiencing a distinct case of déjà vu. A few months back I was writing about my other life as an amateur banjo player at Irish music sessions. There are striking similarities between Irish-style banjo playing and the activities of DJs, beyond the innate coolness that I’ve already mentioned. First, both forms of music involve sequencing tracks or tunes together. The art of the DJ is to sequence different tracks together, while that of the Irish musician is to sequence several traditional tunes into a set. This sequencing is a creative act and an important opportunity for improvisation in both forms of music. Moreover, just as the DJ relies on having a preassembled crate of records to work from, so the Irish musician may have preselected and rehearsed sets of tunes drawn from their wider repertoire. These tunes are the equivalent of the DJ’s “crate,” a small working set of music that is immediately available “at their fingers” and that has been tailored to a particular event. These sets are often written down in a notebook. 


A banjo player discreetly checks his crate

Situated discretion revisited

There is another striking similarity between the musical practices of DJs and those of Irish session musicians, one that we refer to as situated discretion. We saw previously how Irish musicians are cautious about revealing evidence of their preparations during a live session, designing their notebooks to be suitably discreet so as to fit in with the prevailing etiquette of playing by ear. Yousif’s study has revealed that DJs employ their own version of situated discretion, adapting the presentation of their crates to be appropriately discreet. This involves changing information to deliberately hide, or sometimes reveal, the contents of their crates to other DJs or audience members. DJs who have invested great effort in digging up rare vinyl may even go as far as to paste white labels over the centers of records or change the names of tracks in their digital crates so as to disguise them. As one of Yousif’s participants described:

There’s an element of secrecy there, which is what they used to do in the old days as well. All the hip-hop guys and stuff, when hip-hop was quite big, like Afrika Bambaata and stuff, used to put white stickers all over the centre of their records so no-one could come up and read them and see what it was. It’s trying to keep the tunes, like, exciting. You wanna build a hype around them.

On the other hand, if they have been given a track to promote, they may go out of their way to make its musical metadata available to others. In other words, DJs carefully design the presentation of their crates to be appropriately discreet with respect to a given performance situation. 
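Seen from an interface point of view, situated discretion becomes a question of how much metadata a crate chooses to show in a given situation. Here is a minimal sketch in Python, again using hypothetical names of my own rather than any real DJ software, of a crate entry that white-labels hard-won finds in public view while always exposing tracks it has been asked to promote.

```python
from dataclasses import dataclass

@dataclass
class CrateEntry:
    title: str
    artist: str
    promoted: bool = False   # given to the DJ so that it gets exposure
    rare_find: bool = False  # hard-won crate-digging discovery worth protecting

    def label(self, public_view: bool) -> str:
        """How this entry appears on screen, or on the record's centre label."""
        if not public_view or self.promoted:
            return f"{self.artist} - {self.title}"  # full metadata, on purpose
        if self.rare_find:
            return "white label"  # disguise the discovery, keep the hype
        return self.title
```

The same entry renders differently depending on who is looking, which is all that situated discretion asks of the interface.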

On the nature of improvisation

Given that we see such striking similarities between two very different musical practices, it is tempting to think that notions of crates, working sets, and situated discretion may have a wider relevance to improvisation. Might we see their equivalents within other improvised practices—perhaps in jazz or rock music, comedy, or even at work? To what extent does the art of improvisation rely on the careful selection, preparation, and rehearsal of material so that it is ready to hand and can easily be brought into a live situation, but in a suitably discreet and situated way that respects its form and local etiquette? And what new technologies might enable people to assemble and use their various “crates” when improvising?


Posted in: on Mon, November 04, 2013 - 10:43:07

Steve Benford

Steve Benford is professor of collaborative computing at the University of Nottingham’s Mixed Reality Laboratory.
View All Steve Benford's Posts


Post Comment


No Comments Found


Finding protected places


Authors: Jonathan Grudin
Posted: Wed, October 30, 2013 - 9:28:39

In a memorable scene, a boy is taught to swim by being thrown into a lake. In the movie, it worked. In real life, training is desirable, whether for heart surgeons, air traffic controllers, or swimmers. Training is a protected place, where we can try things, take risks, and make mistakes without adverse consequences. What happens in training, stays in training. That’s the idea, anyway.

Visibility

Ever more of our activity is represented digitally, easily recorded and transferred. Increased visibility has consequences for criminals, politicians, celebrities, classified documents, you, and me.  “Don’t say anything in email that you would not want to see on the front page of the newspaper.” “Don’t post anything on Facebook that you would not want a future employer to see.” We are warned, then we decide whether or not to worry about it.

How does visibility affect training? What happens in Vegas rarely stays in Vegas any more. Once upon a time, a neophyte running for political office could try a line with a local audience, gauge the reaction, and tune the message. Now, early speeches will be recorded on someone’s phone. Care must be taken from day one—a misstep could surface later and haunt the candidate forever. There is no training period.

Transferring records is easy—some of my daughter’s middle school grades are attached to her high school transcript. Parents begin worrying about impaired college prospects at a time when earlier generations of students were able to grow up at their own pace. Although kids still mostly compete locally in academics and athletics, standardized testing and recruiting scouts push them onto larger stages at earlier ages.

Non-celebrities have some security through obscurity. The Web provides a global stage, but if my kids upload a video to YouTube, although anyone on the planet could see it, not many will. They may be safe as long as they don’t some day run for political office, although who knows how obsessively tomorrow’s college admissions offices and employers will trawl the Web. The protection afforded by obscurity can be penetrated—is penetrated—by bots as well as people. Protected places are vanishing.

Nurturing ideas

Charles Darwin spent 20 years working out his theory. He described his ideas to colleagues, refined them, and collected supporting evidence. He wanted to avoid the marginalization that befell his mentor Robert Edmond Grant, whose less-polished evolutionary theory of the origin of species, sans natural selection, was dismissed. Many theories of evolution preceded Darwin’s, some half-baked and some more than half-baked, but Darwin’s ended up thoroughly baked. Ideas can benefit from being nurtured in protected places. Finding such places requires more of an effort in the goldfish bowl. I have found a few and have treasured them as sources of ideas and enjoyment.

Before the Web: slow audience expansion

In my doctoral program at UCSD, ideas were nurtured privately for a time, tried out with friends, and then presented to the lab. At the end of our first year we gave a formal presentation to the department. Students submitted work-in-progress papers to regional conferences. National conferences also had relatively low bars to acceptance—a paper was not archived, so no one would later see it unless it was released as a technical report. The goal was journal submission. Journal reviewing led to further refinement. Reviewing was usually more constructive than today’s in-or-out conference decision-making. Work typically took years to complete, and fewer publications were the norm.

The benefits of ephemerality were not confined to research. In the 1970s I noticed that a favorite newspaper columnist, an elegant stylist, occasionally reworked an earlier column. His third version could be exquisite. With years elapsing between versions, perhaps few people noticed. Finding old columns would require a trip to a library microfiche room. Today, “self-plagiarism” could be tracked down online in minutes. Had he been forced to differentiate his columns more, his best would never have been written. Only his first drafts would have seen print.

Similarly, well into the mid-1980s, it was OK to rework a conference paper—fix errors, refine arguments, and deepen the literature review—and submit it for journal publication. Not now. A conference is no longer a protected place for unfinished work. We fear that its reputation will suffer if interesting but flawed work is found online by colleagues from another discipline. We force down acceptance rates. Separate work-in-progress venues were tried, with extended abstracts online, but quality concerns arose there as well and they became Notes.

Today: publish and move on

A student may work on a paper for only a few months prior to submitting. What kind of feedback is received? The paper may not be finished until the last minute. The advisor may be working on four submissions, unheard of in the past, with limited time for each. Reviewing focuses on finding grounds to reject 75% or 90% of the submissions, not on constructive critiques of likely-to-be-rejected work. In fact, an inherent conflict discourages sympathetic guidance: Reviewers must argue that almost all papers would still be unacceptable following a manageable revision.

And after acceptance? Few conference papers could not be improved, but authors may not even clean up the “camera-ready” version. Two leading researchers surprised me by saying that once a paper is accepted, they never look at it again. “It would be nice,” one said, “but I don’t have time. I’m already working on the next submission.”

With eyes on the next conference deadline, reworking an accepted paper for journal submission would be a distraction, and would risk a charge of self-plagiarism. The degree of novelty demanded for journal resubmission rose steadily as archived conference papers gained prominence.

Rejected papers can be revised and submitted to another conference, but it isn’t a cheerful process. The reviews do not help much and the next set of reviewers will have different fish to fry anyway. Workshops and doctoral consortiums can serve as protected places for exploring ideas, although many are now competitive and likely to leave online trails.

There is a risk of idealizing the past, but others have called for creating new walled gardens for group discussion, where less is at stake. Such gardens do not appear. New construction focuses instead on expanding public places and creating visible Web content. It is easy and appealing to provide recognition by putting workshop position papers or extended doctoral colloquia abstracts online. However, like the politician’s early campaign speeches, they cannot later be disavowed.

Finding walled gardens in which perennials can flourish and grow

Early in my career, ACM conferences were not considered archival. Later, papers were resurrected by being scanned into the digital library. Even when conferences first became more selective, it was acceptable to submit a revised conference paper to a journal. I did this frequently; a CHI paper led to a Human-Computer Interaction journal article, a CSCW paper to a CACM article, and so on.

This could not be done now. If someone unaware of the disorganized and largely inaccessible nature of the early literature exhumes it, I could face self-plagiarism charges. I hope The Singularity is charitable when it arrives and declares Judgment Day.

Today’s system may select for scholars who learn to swim when thrown off a bridge. I couldn’t have. I needed time and friends to help me develop ideas, and I found them.

My most valuable walled gardens were tutorial series, the dozens of tutorials and courses I prepared for conferences from 1990 to 2011, especially those on CSCW with Steve Poltrock. We mixed solid content and some original but not fully-baked ideas, improving them from year to year until they were ready to be published. Attendees provided invaluable feedback and support.

The Interactions history forum I edited and wrote for eight years has been a quasi-protected place. Three sets of Interactions editors provided constructive advice but never rejected a column. Ideas from my own sixteen columns were subsequently refined and worked into journal articles and handbook chapters. These columns remain visible in the Digital Library, but not many people explore back issues of a magazine and the informal nature of a magazine sets expectations. It has been a safe place to explore ideas.

Monthly online Interactions blog posts such as this are a third. The editor recently wrote, “Posts don't have to be finished and detailed ideas. We invite you to use this space to try out new ideas, to reflect on your work, to get messy and confused if necessary, but mostly to have a dialogue with readers.” I invariably get comments, although rarely in the comment field below a post. Reader comments, plus reading what I have written and thereby discovering what I think, are steps toward refining ideas.

It is not for everyone. If you might enjoy it, consider submitting a course to CHI, proposing an Interactions blog, or finding another place to explore ideas. It is fun in the short term, which is why I began, but to my surprise it can be remarkably productive over time. Some ideas need a place to blossom.


Posted in: on Wed, October 30, 2013 - 9:28:39

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.
View All Jonathan Grudin's Posts


Post Comment


No Comments Found


A note on ‘compositional design thinking’


Authors: Mikael Wiberg
Posted: Tue, October 29, 2013 - 11:17:17

Design thinking is growing as an explicit approach to interaction design. By acknowledging the thoughtful aspects of making, our community simultaneously acknowledges that design is about both doing/making and thinking/reflecting. This is, however, not something new to our community. Donald Schön made this point three decades ago in his book The Reflective Practitioner: How Professionals Think in Action.

In his book Schön also stated that these creative acts of thinking and doing are not only about the reflective designer implementing his or her ideas in the material. On the contrary, Schön describes how “the material talks back to the designer” in acts of making. This dual relation between the designer and the material at hand has been widely acknowledged in our field, and right now the interest in this close-to-the-material relation is manifested in a renewed interest in craft traditions, hand-made objects, and DIY movements.

While this duality remains true, and while craft-based approaches to interaction design are growing in popularity, we should at the same time acknowledge that the landscape of interaction design is rapidly changing, and that we’re in a moment where additional skills will be needed to craft powerful interaction designs.

When I say that the landscape is changing, I am referring to the fact that HCI is no longer limited to the single man-machine loop characterized as a turn-taking act between the human and the machine. On the contrary, we surround ourselves with more and more complex device landscapes tangled together through pairing, subscriptions, scripts, and services running across different devices. Commercial solutions like AirPlay demonstrate how information and interaction are no longer restricted to a single device, and streaming services like Netflix demonstrate how sessions can live across a multitude of devices. These commercial services demonstrate to the public eye innovations explored in our HCI labs. Proxemic interaction, “point-and-beam” interaction modalities, and the like are now finding their way out of the labs and into the hands of everybody. Interaction in these new formats is becoming ubiquitous; it is no longer limited to the box.

At the same time the “computational box” is also questioned. Cloud computing and tangible user interfaces represent two different critiques of the box. Cloud computing teaches us how services and content can be accessible from just about any device, and tangible interaction illustrates that interaction design is not only about designing the digital material for the user to operate. Instead, interaction design becomes a matter of thinking about interaction across different substrates—computational and non-computational materials.

As we move forward it is likely that this “palette of materials” will also increase. For the skilled interaction designer it will be an ontological challenge to look beyond the digital material and see how just about anything can be part of interaction design. In areas such as personal informatics, just to point at one area, this is already happening at a rapid pace. Interaction designers are increasingly reflecting on how dimensions such as position, speed, everyday movements, eating habits, pulse, blood pressure, weather conditions, running shoes, bracelets, etc. can be used as part of new interaction designs. 

The message is clear. The skilled designer still needs a good understanding of the materials at hand. However, for the skilled interaction designer it is no longer about a single (digital) material at hand. It is about a whole palette of materials, ranging from a material understanding of how interaction can play out across a multitude of devices and take almost any shape and be represented in any format (a lesson learnt from the tangible UI movement); to the acknowledgement of information as material; to an understanding of how networking, code, scripts, service integration, open APIs, component-based design, and so on can be thoughtfully brought together in design. 
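To make this palette a little more concrete for readers who think in code, here is a minimal sketch in Python of one way to treat heterogeneous materials (a cloud service, a wearable sensor, even a non-computational surface) as composable elements of a single design. Every class, method, and endpoint here is a hypothetical illustration of the idea, not an existing API.

```python
from abc import ABC, abstractmethod

class Material(ABC):
    """Anything the designer composes with, computational or not."""
    @abstractmethod
    def describe(self) -> str: ...

class CloudService(Material):
    def __init__(self, endpoint: str):
        self.endpoint = endpoint
    def describe(self) -> str:
        return f"streamed content from {self.endpoint}"

class WearableSensor(Material):
    def __init__(self, kind: str):
        self.kind = kind
    def describe(self) -> str:
        return f"live {self.kind} readings"

class PhysicalSurface(Material):
    """A non-computational material, e.g., a wooden tabletop used as a projection site."""
    def __init__(self, name: str):
        self.name = name
    def describe(self) -> str:
        return f"projection onto the {self.name}"

class Composition(Material):
    """A design is itself a composition of materials, and can be composed further."""
    def __init__(self, *parts: Material):
        self.parts = parts
    def describe(self) -> str:
        return " + ".join(part.describe() for part in self.parts)

# One design composed across devices and substrates
ambient_display = Composition(
    CloudService("https://example.org/activity-feed"),
    WearableSensor("pulse"),
    PhysicalSurface("kitchen table"),
)
print(ambient_display.describe())
```

The composite is only an analogy, but it captures the ontological move at stake: the design is the composition, and just about anything can be one of its parts.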

For the reflective interaction designer, “compositional design thinking” is a key competence to develop, both to take full advantage of the possibilities for interaction design already present and to prepare for the years to come!


Posted in: on Tue, October 29, 2013 - 11:17:17

Mikael Wiberg

Mikael Wiberg is Professor of Informatics in the Department of Informatics at Umeå University, Sweden.
View All Mikael Wiberg's Posts


Post Comment


No Comments Found


My Apple was a Lemon


Authors: Aaron Marcus
Posted: Thu, October 24, 2013 - 11:04:00

In August 2011, I bought an Apple MacBook Pro. No surprise; I have been a devoted Apple customer since January 1985, as well as a vendor to Apple and even a legal expert-witness defender of Apple in U.S. Patent Office matters.

What did surprise me was the horrendous experience of moving from OS 10.6 to 10.8, and the terrible service I received from the Apple Store in Emeryville, California, not too far from Apple’s Cupertino headquarters... but I shall overlook the two to three months of pain following my Apple MacBook Pro purchase, when my productivity was reduced by at least 20% and perhaps 50% as I struggled with the changes in the operating system and had to pay about $1000 to a private technician just so I could do email and maintain my contacts and a calendar. You know, simple things.

Let me focus on the events following July 26, 2013 during routine email correspondence in my hotel room, with two more presentations to give at an international conference in Las Vegas, when my Apple MacBook Pro suddenly crashed—magnificently crashed. I mean, the entire machine was kaput, with the screen frozen with a two-inch vertical black bar in the center, and the rest of the former screen contents wrapped around the remaining areas of the right and left sides. 

I tried several times and in several ways to revive my computer, in vain, and eventually I had to take it to the local Apple Store in the next hotel. Las Vegas hotels being gigantic barns for gambling, I had to walk for 30 minutes one way to get from the conference center of my hotel to the Apple store at the end of a labyrinthine route in the second hotel, the entrance to which was perhaps only 200 feet away! The Apple Store called me to say that the problem could probably be easily solved; they would merely have to wipe clean my hard drive containing the OS and reinstall the OS. 

Naturally, I forbade that, because I did not have my Apple Time Machine working in Las Vegas, I had updated many files, and I feared the loss of much valuable data. I had to make four treks back and forth to that Apple store that day to drop off and retrieve my computer. At least I got some much-needed exercise walking briskly for two hours total!

Back in the San Francisco Bay Area, I began a two-month Apple customer experience of the worst kind. The Emeryville Apple Store wanted to see the machine and said that the problem could be solved by wiping the drive clean. I was reluctant to trust their technicians, whom they call by the hyperbolic and erroneous name “Geniuses,” after my previous experiences with that store.

I took my computer to a Berkeley Apple Store, which also said that the drive needed to be wiped clean, but there I had a chance to check on the contents of files and to retrieve other contents to my previous, older MacBook Pro from five years ago, which I keep as a “spare” for just such purposes. I learned that there were some “loose RAM” chips in my current computer that would be reseated, fixing the problem, after they sent my computer to Apple’s repair center in Texas.

Imagine my surprise when, after getting back my “fixed” Apple MacBook Pro, the machine crashed with the very same problem I experienced a month earlier. This occurred while I was presenting at conferences and universities in Brazil. I did not even think about trying to get the computer fixed there, and limped along for a week using only my iPhone and some flash drives to make my presentations on local Macs, and being without easy email access for a week.

When I returned to the U.S., I finally convinced the Emeryville Apple Store in which I had bought this Lemon, I mean Apple, to simply give me another computer, because the product was still under warranty for another year. I feared the replacement process because, when I first bought my computer, that store had demanded I return the original packaging; having tossed the box, I had to buy a second computer to replace the first one, which had not worked properly. In fact, they installed the wrong OS on my second computer, and the machine I had been using for almost two years was actually the third, all just to be able to use an Apple Macintosh computer.

Fortunately, after hearing my tale and running some diagnostics, which showed that my computer was indeed dead to the world, a technician at the original Apple Store from which I had purchased my product two years ago agreed to give me a new computer and even to swap my old drive and DVD player into the new body, so I did not have to wipe, repair, or replace everything. Very kind. In the end, after seven trips to Apple Stores, one factory repair round trip, and perhaps many hours on the phone with two good technical representatives, I finally got the Apple Store to simply replace my Mac.

What surprised me in the end was that this latest computer still had the same OS problems as the first machines: The long form of the date, shown below the top-level menu-bar display, is still incorrect. In addition, the Finder froze in some peculiar way: it showed the application currently open, but the Finder itself could not be opened. Eventually, dialogue boxes became non-functional, and I had to “force” the machine to quit entirely. This behavior in a new machine made me exceedingly nervous.



The screen capture shows the primary date/time widget at the top of every screen indicating the correct date in the short form but the incorrect date in the long form. This bug also shows up in calendar depictions in Apple's own software. The OS error cannot be eradicated by any simple adjustment of the System Preferences controls or by restarting the computer.


Frankly, I am aghast at the decline of Apple's software and hardware quality. As I mentioned, I have been a loyal Apple customer since 1985. My company once, in decades past, was called "The Design Police" for Apple's user-interface design. I have an iPhone and an iPad... yet now I cannot support Apple's brand as I did before. This is a sad state of affairs.

I have spoken with a number of other Apple customers, and they tell me the same thing. They can no longer depend on Apple's product quality. The grinning faces of Apple's leaders of software, industrial design, and business (e.g., CEO Tim Cook) stare out at me on the covers of Fast Company, Bloomberg Business Week, and other publications, all in one week, looking like the three monkeys who hear no evil, see no evil, speak no evil, of Apple—Mr. Cook in particular seems to be grinning hysterically. Perhaps they know the truth: Apple's products may be in swift decline. Perhaps Apple will go the way of Nokia and Blackberry, despite its arrogant posturing in the media.

I checked Yelp for reviews of the Apple Stores. The reviews were mediocre. Complaints about lack of knowledge and about arrogant, disrespectful staff are numerous. Is this the brand Apple wants? Is this the brand that Apple deserves? Is this the brand that customers should expect?

My experience with Apple Stores has been, in general, so poor that I avoid going back whenever possible. I have to admit, sometimes I find someone who can quickly and effectively resolve the situation. Alas, I have at times discovered that these individuals are on loan from some other store, and I am not likely to see them again, so no long-term relationship can be established.

To be fair, I want also to acknowledge a Mr. Adams in the Emeryville Apple Store and two phone reps, Ms. Cooke and Ms. Manyseng, specifically, as the three Apple people who took good care of me among the 10-15 people I had to deal with during two months of terrible frustration and much lost time. I estimate that I spent about 20-30 hours on the phone with the phone reps trying to solve the many idiosyncratic problems of restoring my Apple MacBook Pro to decent operation. Can this be economical for Apple? 

There were Apple phone reps who hung up on me, who did not call back when they said they would, who gave me incorrect advice, and who made non-functional promises of a repaired Mac that would now work fine. Even Apple’s own technical reps’ software crashed during my conversations with these people. How often does this happen?

I do not consider myself unique in regard to my computer needs. I am perhaps a typical Apple customer with my own eccentric ways of doing things. Shouldn’t the Apple Empire be able to accommodate me?

What a change in Apple's brand from what I remember from decades ago. No wonder Apple took out double-page, meaningless, contentless ads (in my opinion) in the New York Times, the Wall Street Journal, and perhaps other newspapers, proclaiming the value of “designed in California” (not even emphasizing “in the USA”), in a seemingly paranoid, nervous reaction to the development and pricing successes of Samsung from South Korea and of Xiaomi and others from China. No wonder Apple executives may be looking over their shoulders nervously, and grinning hysterically in the news media.

What a change. What a company. What a false mythology of Steve Jobs. What a legacy.


Posted in: on Thu, October 24, 2013 - 11:04:00

Aaron Marcus

Aaron Marcus is president at Aaron Marcus and Associates, Inc. (AM+A) in Berkeley, California.
View All Aaron Marcus's Posts


Post Comment


@Ex Apple User (2013 10 24)

I find the restraint you used in your article amazing.

I am currently in the middle of trying to resolve a graphics processor failure in a 2011 Macbook Pro, and Apple has been intractable to deal with to such a degree that I find myself hoping I am alive long enough to witness the demise of Apple as a company. The last time I felt this way about a company was in 1983, when a SAAB car I owned had several very expensive failures and SAAB would do nothing to help me. And we all know what happened to SAAB (the company is defunct in terms of making cars as they once did, and deservedly so).

@Monica Granfield (2013 11 13)

This is very disappointing and not so different from my experiences with PC Laptops. In the past 4 years I have had to replace two, different, PC Laptops. One was an HP that turned out to be a known faulty NVIDIA board (and NVIDIA took no ownership) that shipped with various PC Laptops as well as Apples. The other was a Dell that I have now replaced 3 hard drives on.
It seems to me that these companies are now focused on tablets and phones and the quality of laptops is declining as fewer people are buying them. Cheaper parts, poor construction….
Not happy to hear about the customer support issues at Apple, as to me that was about the best thing about Apple…..

@JM (2014 05 04)

Thanks for this post !

I use Apple computer for 20 years now, and i’ve recommended these machines to dozens of friends / companies. But for a few years now i stopped recommend it. I’m not 100% sure that Apple computers are worse in term of quality now, but the fact is that in the past 8 years i had problems with 75% of the macs i owned. It was not such a problem 8 years ago, because in that time there still were people at Apple to ear your problem, and solve it… most of the time at no charge… Nowadays it’s like talking to robots, they just repeat and repeat again what they are allowed to say, and they have no power of decision. It’s kind of sad.

I too have big problems with an early 2011 MacBook Pro : bought on the refurb store in december 2011 > GPU failed on september 2013 > had already 2 logicboard replacements… and now crossing my fingers because the last repair is now out of warranty… it seems that there’s a mix of misconception / SMC update issue / and termal paste issue on those machines, and Apple just don’t want to investigate or recognize it.

I just find very strange that right after an SMC update (with controls fans if i’m not wrong) the AMD graphic chip start to fail.
I just can’t accept to ear from an Apple technician that it’s not a «normal usage» to edit movies within FCP on a MacBook Pro (Apple’s communication was based on this for years, and my powerbook G4 did it for years too without any problems… it’s really obvious they have no arguments to tell such things).
I just can’t accept that a 2000/3000€ machine is built to work properly less than 2 years…

Like many users (i now have more than 4000 email notifications from people who have the exact same problem on 2011 MacBook Pros) i am really really angry.

@JMR (2014 05 04)

You’re not the only one. My 2011 failed at 2 years and a couple on months in exactly the same say last week. From the sheer number of failures being reported by end users around the 2 year mark it appears to be a design or manufacturing issue.

There’s a couple of large threads on the Macrumors forums and Apple own Support Community.

It’s a shame, the 2011 is the perfect machine for my needs. The current obsession with thin and light has stripped out the features I use everyday, so the modern (and overpriced) MacBooks don’t meet my requirements.

@RTilley (2014 05 04)

There have always been mistakes.
In the past they were fixed.
It is the denial and stonewalling that is new.
And the not so plausibly deniable “goto fail” backdoor was truly evil.
This is not the the Apple I used to trust.

@Alexandre Stickland (2014 05 04)

Hello there,
I feel your pain.
I used to trust apple machines quality.
Not anymore since i have the same problem with a mbp late 2011.
This is not the worst, you want to know why?
Imagine ( or just read the tons of posts/topics/forums about it) people like me who are out of warranty and they just want us to pay ( the amounts of money asked goes from 1/2 to 2/3 of what we paid for the mbp) to “fix” it ( they can’t even fix that issue…)
This must end.

@Joseph (2014 05 05)

There is an underlying problem with the 2011 Mbp, it’s shocking that apple won’t take responsibility for there mistakes,


Letter from Aarhus


Authors: Deborah Tatar
Posted: Fri, October 11, 2013 - 11:02:29

I am spending my sabbatical at Aarhus in Denmark. Aarhus is quite a hub for design research activity. DIS 2010 was held here; the Media Architecture Biennale was here in 2012; IDC 2014 will be here next June (paper deadline January 13, demos March 21!); there was an intense two-week workshop here last summer for senior researchers and Ph.D. candidates on participatory IT, and there will be more in the future. Furthermore, researchers from Aarhus play ongoing roles in PDC and many other important venues. Of course, among other things, Susanne Bødker (a name which I once thought was simple to pronounce and now consider simple to underestimate) was paper chair for CHI 2013. Martin Brynskov and others held a Smart Cities workshop for Ph.D. students this past summer in Split, Croatia. Kim Halskov and Peter Dalsgaard orchestrated an experiential piece at the Aarhus Festival, which ran here the first week of September and involved a deeply delicate and attractive relationship between user movement of pieces, a table-top display, and music. Controlling it was as smooth and as unexpected as ice-skating (but less painful). Work in the department is deeply tied to work in the town itself. One class that I heard about in architecture is not only taking on design problems posed by the city, but has taken instruction in co-design to the next level by inviting city administrators and citizenry to participate as enrolled students. Researchers are currently planning an Internet Week Aarhus, which will be one component leading up to activities associated with Aarhus’ designation as the European Capital of Culture for 2017. Whew! 

OK. I admit it. I’m swept off my feet. I am seeing this through rose-colored glasses. Or maybe I should say green-tinted glasses, because I am seeing a very pleasant word pop out like red bars in a field of green ones: DESIGN. 

Aarhus is doing wonderful work in research and inquiry, but that I could have seen from afar. What I could not see is that design is a very important word to the Danes, not just in academia but in everyday life. Hotels advertise the kinds of chairs that they offer (Arne Jacobsen is prominent). Numerous shops have the word design in their names, and others do not need to because they are so well known. There are designs that are not actually new (Louis Poulsen lamps) but that persist because they are so appreciated. And there are designs that are actually novel. There are not one but two design museums in Copenhagen, and even Tivoli Gardens recently opened a Kähler-sponsored Design Restaurant. Tivoli Gardens, named after the park in Paris, which is itself named after the town in Italy where the purely delightful 16th C. Villa D’Este is located, is the pleasure garden that inspired Walt Disney. But can you imagine Disneyland or Disneyworld serving high-quality food that is prepared to please the eye as well as the mouth? People might linger at EPCOT, but the Terra Disniana is a staging location. 

Not everything is perfect. There is graffiti in places that do not strike one as art. There is broken glass caught in the spaces between the cobblestones. There was the embittered drunken mercenary on the train, and the men who just can’t quite get up off the pavement, sitting there, feet outstretched, in the downtown in the early morning. But these are exceptions. What I see, overwhelmingly, is care. 

I see care in almost all the little yards, with their roses and carefully pruned hedges. I see care in the babies and toddlers bundled into snowsuits walking down the street with child-care providers, two in the carriage, one or two walking. I see care in the simple but diverse forms in the shop windows; in the way that large housing developments are differentiated by subtle details of form or materials; in the choice of materials. When we register at the immigration bureau, the bathroom is a dramatic statement, featuring a long green plexiglass panel, perhaps 12 feet high and much longer, hanging about six inches in front of the wall. On the opposite wall is a long mirror at waist height, lending a bright and airy feeling to the oddly elongated room. At the business end of the room, opposite the toilet, the otherwise plain green panel has a brushed aluminum plate with a button and a light. The button reads “lock/unlock” and the light shows that I did, indeed, lock the door. A carefully espaliered plant sits in the window. Can you imagine the American Immigration and Naturalization Service even allowing such visually pleasing attention to detail? 

And make no mistake. This is not some high-falutin’ place where I, as an American academic, get to board on the red carpet. This is where all foreigners go. Denmark is, for example, 3% Muslim. This does not make it a mecca, but it does reflect considerable immigration, and, since Aarhus is the second-biggest city, you see plenty of women in head scarves and full-body coverage who are clearly from the Middle East or North Africa. All of them must have visited this lovely building, with its offices separated by glass walls with rolling glass panel doors. 

I see care elsewhere—in the way that my furnished apartment is simply but peacefully appointed with a vase, candlesticks for tea lights, and a throw rug over the back of the couch. I see care in the way that women adorn themselves with one point of color or texture in otherwise simple apparel (even some of the Muslim women). 

Perhaps it’s just me. But I do love it. It is directly pleasing in the moment, and I associate care in the particulars with a raft of admirable moral properties. “All things are doubly fair/If patience fashion them/And care—,” as Gautier wrote (Translated by G. Santayana).

And the design point is not just that design is important to the Danes, but that it does not seem to require external justification. The background to the University’s seminal position is a kind of omnipresent design reflex. This reflex is not in ignorance of other important values in the world, such as a robust economy, nor is it necessarily easy, as there is discipline involved in design processes, but it is a way of doing things, a mode—in short: a culture. As far as I can tell, attention to the visual world is paid because of a sui generis value placed on delightful human experience. Perhaps this is my delusion, but, please, let me live with it a bit longer. This is much closer to the way I feel the world ought to operate than the world I usually live in. 


Posted in: on Fri, October 11, 2013 - 11:02:29

Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.
View All Deborah Tatar's Posts


Post Comment


@Kim Halskov (2013 10 14)

And if you want to know more about IT research at Aarhus University, I suggest to visit

PIT.au.dk
CAVI.au.dk
CS.au.dk

Kim


Deconstructing UX design


Authors: Monica Granfield
Posted: Wed, October 09, 2013 - 11:39:16

In engineering there is reverse engineering, the process by which you take something apart to see how it was built. This is a great way to learn how something is made, what worked, what’s broken and why, what can be done differently, and how something can be improved. I wonder how this process could translate to UX. 

Deconstructing a user experience, asking why and wondering how the design came to be, is a process from which designers can learn. It's easier in some instances to explain and recall what you don't like about something than what you did like, and we tend to deconstruct something when we consider it inadequate, as a means to improve and create something better. Examining an appealing or successful design is an equally valuable, but not as obvious, exercise.

I seem to find myself deconstructing how something was designed all the time. My husband refers to this as the curse of the designer. From the design of a car’s dashboard to a screen on a website, I continuously find myself imagining what design tradeoffs may have been made to arrive at the current design, what is working, and how a process could be improved upon to create a better experience. While waiting to board a recent flight, my son and I spent a good deal of our time in what felt like an endless line speculating about why the boarding process operated as it did. It was a fun and imaginative exercise, and while passing the time we came up with some good ideas for boarding processes ourselves. 

I recently went through the exercise of deconstructing the design of the shift stick in my new car. I am very unhappy with the design, as I feel it is awkward and poses a high safety risk. I went so far as to bring the car back, because I thought it was defective. I was told this was not a defect: you could indeed shift gears while driving if you accidentally hit the shift stick. I deconstructed the design with a few passengers in my car, for additional input and insight. This was a helpful exercise in forming my understanding of what I considered to be a poor design. It did not, however, increase my empathy for the design. In fact, it made me even more dissatisfied with the design and any tradeoffs that may have occurred. How could a design tradeoff have been made around such a risky safety issue? Placing the shift stick in the direct path of the driver’s reach to dashboard controls like the defroster, heat, and radio creates a good chance that the driver will accidentally knock the shift into another gear while driving. One occurrence of this incident was frightening enough that I had to think twice about keeping the car. Understanding this design did not build empathy, just frustration and concern. Deconstructing a design is an opportunity for a designer to build empathy with users and to improve how they think about design and solutions. But understanding why a designer or manufacturer cut corners will not build empathy with your users; it might do quite the opposite. A user does not care about tradeoffs or deadlines; they care about accomplishing their tasks with ease. 

As designers we are trained to critique, to question, to explain, to imagine. In real life we encounter tradeoffs, egos, delivery dates, standards, budgets. Understanding how a design evolves based on these factors is an important aspect of deconstructing and understanding a design. Deconstructing a design or an experience can be very informative and act as an opportunity for in-depth learning. By looking at what we don't think works we can create something that does. By looking at what we think works well, we can learn more about creating a good experience. Sometimes the how and why behind a design is obvious, sometimes it is not. Learning to deconstruct designs is interesting, informative, and an important learning experience from which to improve our own designs.


Posted in: on Wed, October 09, 2013 - 11:39:16

Monica Granfield

Monica Granfield is a user experience designer at Imprivata. The views expressed on this website are exclusively her own and are not meant to reflect or represent the views of Imprivata.
View All Monica Granfield's Posts


Post Comment


No Comments Found


Designing discomfort


Authors: Steve Benford
Posted: Mon, October 07, 2013 - 9:22:03

Why is the Imperial War Museum North like Oblivion, the world’s first vertical-drop rollercoaster? Sounds like the beginning of a bad joke, doesn’t it? But actually they do have something significant in common. Moreover, it’s something that speaks to how we interact with computers.


Which is which?

The answer is that both have been deliberately designed to make people uncomfortable. This is perhaps rather obvious with Oblivion, which sets out to terrify its victims from the start, warning them that it’s not too late to turn back as they queue, slowly cranking them up a steep incline, pausing them on the brink of a terrifying drop for several seconds as they listen to the instruction “don’t look down,” before then plunging them into a dark tunnel 60 meters below. The discomfort is more subtle in the Imperial War Museum North, but it is ever present all the same, as Daniel Libeskind’s award-winning building features sloping floors and ceilings throughout so as to induce the kind of disorientation experienced in war. Of course, Oblivion and IWMN, as it is known, employ discomfort for quite different reasons. With the former, it is an essential ingredient of the entertainment. With the latter, the aim is to frame an appropriate engagement with challenging material as part of an enlightening visit.

Why on earth?

It has been exciting this month to see the publication of our article “Uncomfortable User Experience” as the cover feature of September’s Communications of the ACM. In this article we make a case for the deliberate use of discomfort in interaction design. This is a somewhat unusual position, as the principles of interaction design traditionally emphasize providing the user with the most comfortable experience possible, one where they are in control rather than being taken for a ride, and where they remain oriented rather than being deliberately disoriented.

However, as computers increasingly find their way into cultural experiences—from highbrow arts and museum visits to mainstream entertainment such as games and rides—so new design principles emerge. As with Oblivion, discomfort may support the goal of entertainment, or as with the IWMN may serve the purpose of enlightenment. We also discuss a third motivation for introducing discomfort, that of social bonding, where a shared rite of passage brings people together. 

The deliberate use of discomfort has long been practiced in fields outside of computing, most notably in the performing arts, where there is an established tradition of inviting audiences to witness uncomfortable spectacles, or even become implicated or directly engaged in them. This tradition has spilled over into human-computer interaction as it has engaged with the performing arts. Our article explores two examples of this. 

Blast Theory’s Ulrike and Eamon Compliant invites participants to enter the world of a terrorist as they undertake a guided city walk. The artists demand increasing compliance with instructions before ending with a face-to-face interrogation conducted by an actor. As you leave the room, you get to look back through a one-way mirror to briefly spy on the next person being interviewed.


The interview in Ulrike and Eamon Compliant

In contrast, Brendan Walker’s breath-controlled amusement ride Breathless creates an intimate and viscerally uncomfortable connection between a human and a robotic ride: riders wear rubberized gas masks equipped with breathing sensors and witness, and even control, each other’s experiences. 


Breathless

Four forms of discomfort

Such experiences may appear to be far removed from the mainstream, but they do serve to powerfully illustrate some of the ways in which discomfort can be introduced into interaction design. Indeed, they have led us to identify four broad forms of discomfort along with various tactics for deploying them.

Visceral discomfort focuses on physical sensation, involving tactics such as designing unpleasant wearables (such as Brendan’s gas masks) and tangibles, encouraging strenuous physicality (here one thinks of Floyd Mueller’s exertion games), or even causing pain. 

Cultural discomfort, in contrast, invokes dark thematic associations or confronts difficult decisions (such as in Ulrike and Eamon Compliant and IWMN). 

While these are certainly relevant to interaction design, our remaining two forms of discomfort lie right at its heart. One of Ben Shneiderman’s famous Eight Golden Rules of interaction design is to “support internal locus of control,” which means keeping the user in the driving seat as long as possible. One way of creating discomfort in interaction is therefore to have the system take control, as is the case with Blast Theory’s demanding instructions and with pretty much any rollercoaster you can name, where the rider is helplessly strapped in for the duration of the ride. Design tactics here include surrendering control to the machine, surrendering control to other people, and, in a reversal of these, requiring participants to take an unusually high degree of control or responsibility. 

Our last form of discomfort concerns intimacy. Computers are increasingly mediating our social experiences, which gives rise to the possibility of distorting normal social relations in uncomfortable ways, for example by isolating people (used in both Breathless and Ulrike and Eamon Compliant), employing surveillance and voyeurism (also used in both), and establishing unusual intimacy with strangers. An intriguing example of the latter is to be found in Mads Hobye’s and Jonas Löwgren’s account of the performance Mediated Body, in which members of the public touch a performer’s body in order to explore an interactive soundscape.

Remember the point

Having told you how to create uncomfortable interactions, now is a good time to pause for a moment and reflect again on possible motivations. The ultimate aim here is to employ discomfort in the service of a greater goal—enlightenment, entertainment, or social bonding. This means that uncomfortable interactions need to be very carefully embedded into a wider experience in such a way that they are properly resolved. 

Again, we can turn to the world of theatre for inspiration. The Renaissance saw the development of the classic five-act performance structure, later described by Gustav Freytag as a pyramid consisting of exposition, rising action, climax, falling action, and finally dénouement. Personally, I can see a striking resemblance between this pyramid and the design of Oblivion. 


Oblivion as Freytag’s pyramid

It seems that rollercoaster designers may understand performance structure, and invest effort into designing an entire trajectory through discomfort rather than just an uncomfortable experience per se. 

With this in mind, there are some forms of discomfort that are more problematic. While the uncomfortable suspense of the slowly rising action on Oblivion is part of the entertainment, I find other rides to be uncomfortable because they make me nauseous, a sensation that is not quickly resolved during the ride and that lingers some time afterwards. I certainly wouldn’t encourage you to design experiences that induce nausea as a form of discomfort (and here I include those of you working with virtual reality head-mounted displays, which seem to be making something of a comeback right now).

Can this be ethical?

You—as a prospective designer of uncomfortable interactions—are also going to need to consider ethics carefully. Adopting a consequentialist approach as proposed by Jeremy Bentham, you need to consider whether the ends justify the means. Do the benefits of enlightenment, entertainment, or social bonding for the individuals involved justify any temporary and properly resolved discomfort? In short, with hindsight, would your participants be happy with what has occurred? 

There are other ethical issues to negotiate too. What does informed consent mean in an experience that deliberately contains shocks and surprises? Where is the right to withdraw from a rollercoaster once it is underway? What of privacy in experiences that employ voyeurism? Again, interaction designers are going to need to learn from the world of theatre where performers have an established tradition of negotiating the boundaries of ethical behavior with their audiences, both during and within performances, though certainly not always without controversy.

A shocking experience

So I’m suggesting that interaction design needs to take a considered view of the idea of deliberately designing uncomfortable interactions. They are clearly part of the repertoire of cultural experiences, from performances and museum visits to games and rides. I suspect that there may be interesting resonances with other application domains too. I wonder, for example, whether we can apply any of these ideas to the design of health journeys? How could we deliberately redesign a visit to the dentist if we thought of it as a trajectory through discomfort?


A shocking game

I’ll close this post with a shock. Quite literally. Earlier on I mentioned causing pain as a form of discomfort. This would seem to be quite an extreme idea (and indeed it is, and should be treated with great caution). That said, I recently treated myself to an electric-shock reaction-time game for less than twenty bucks. It gives a nasty jolt and therefore induces a high degree of suspense. I’m not sure that it’s all that entertaining personally, but its glowering presence in the centre of my table certainly adds an extra frisson to student supervisions.


Posted in: on Mon, October 07, 2013 - 9:22:03

Steve Benford

Steve Benford is professor of collaborative computing at the University of Nottingham’s Mixed Reality Laboratory.
View All Steve Benford's Posts


Post Comment


No Comments Found


Artifact invention and research


Authors: Jonathan Grudin
Posted: Mon, September 30, 2013 - 10:33:38

I asked several talented inventors whether there is more to research than invention. It was not a new question for them, but not an easy one either. “Get back to me after the CHI deadline,” said one, immersed in writing papers on his latest inventions.

 “The Edison-Einstein question,” said another.

Where I work, artifact invention and research is the air we breathe. But recall the fish that asked, “What is water?” We may not understand it. What differentiates invention and research?

A stream of invention is the most visible product of computer science. Thanks to Moore’s law and similar legislation, new technologies and new possibilities stream forth as the semiconductor wizards do their work. How does research fit in? We attend research conferences, apply for research funding, and work in research laboratories.

Edison-Einstein may not be germane. Any research conducted by Thomas Edison the inventor could be relevant, but for theoretical physicists, artifacts such as telescopes and accelerators are means, not ends. Some of them grumbled when Ernest Lawrence received a Nobel Prize for inventing the cyclotron. Computer science is different. It explicitly encompasses systems work alongside theory; HCI honors artifact creation, stretching back to Sutherland, Engelbart, and others.

My colleague continued, “There is descriptive research, which may not be associated with an invention. Social media research is descriptive, there is not much invention.”

He was ambivalent about social media research, at one point saying that it was OK to do it as long as one “also does some original research.” Consciously or unconsciously, he did not highly value work that does not include invention.

I continued, “Is invention by itself research, or is something else needed?”

Echoing Edison’s claim that genius is 1% inspiration, 99% perspiration, he replied, “Implementation. Sometimes the invention part only takes 5 minutes, the rest is implementation. And some kind of evaluation and dissemination is needed.”

Technically, you can patent an invention that you can’t implement—for example, an invention could rely on a separate patent for which you do not have a license. However, novel software artifacts are usually implementable. Other types of invention, including patentable process inventions such as freeze-drying and electroplating, are not our focus here. We will consider the roles of evaluation and dissemination after reviewing the concepts invention, science, engineering, design, and research.

Invention

I judged an elementary school Invention Fair in Texas a quarter century ago. My favorite was a teeter-totter. As kids went up and down, it pulled a conveyor belt beneath it on which soda cans were placed and crushed for recycling. Another was a miniature sink attached to and kept under the main sink, with flexible plastic tubes bringing down water so the inventor’s toddler brother could use it to wash his hands, his dishes, “or just play with the water.” A third was simple—a scale set to trigger an alarm when its load shifted, providing a safe resting place for household guns when children are about.

I loved it. Inventions are memorable. They demo well. And software companies in the reign of Moore’s law need to invent. Necessity, the mother of invention, drives a benign cycle: invention is rewarded. When you can point to your feature or product, your contribution is tangible. A manager, or a reviewer, does not have to dig to see that something was accomplished. Inventions are very welcome, some of them.

In an earlier post, I observed that creating useful novelty is not easy in the global village. Develop an idea, and if one of the seven billion people in the village had the same thought and reached the same conclusion a week earlier, it could already be on YouTube. That said, ever smaller-faster-cheaper electronics will only realize their potential if people invent.

Science

I have also judged science fairs. Young scientists describe a journey, hypotheses and experiments, what turned out as expected and what did not, and their conclusions as to how things work. In contrast, inventors describe their inventions and how they might be used. There is overlap, but a different feel: “I discovered this about the world” versus “I made this for the world.”

Edison: “I never once made a discovery… the results I achieved were those of invention, pure and simple.”

At some level we prize science most highly. The science fair exhibits were good, but science fairs were much less memorable than the invention fair. How might this play out in academia and industry?

Engineering

Engineering as a field is particularly close to invention, not surprising given the potential that is unleashed by semiconductor advances. Hardware and software engineering processes are central to the implementation of our inventions.

Design

We generally consider design to be part of the refinement of an invention, recognizing that artifact design can be innovative.

Research

Research is the broadest term, spanning all disciplines. The meaning of scientific research is relatively well understood, thanks in part to those school science fairs. Research into engineering or design methods is included, but what role does research play in the practice of engineering, design, and invention? What role do engineering, design, or invention play in research? Our initial question was whether invention in and of itself is or isn’t research.

Research into the properties and uses of materials and algorithms is part of engineering. Design can benefit from basic or applied research that identifies situations in which artifacts might be used. Within HCI, the roles of designer, software developer, and user researcher are generally distinct, but the role of user research is to inform design and engineering.

Does evaluating and disseminating an invention make it research?

In some branches of computer science, inventions need not be evaluated to be published in research conferences. Within HCI, the value of a perfunctory “user study” evaluation of systems and design contributions has been debated. For complex systems, realistic short-term evaluations are not always feasible.

Few artifacts described in our research papers or demoed at our conferences are ever disseminated widely—the overwhelming majority of invented HCI artifacts undoubtedly make their final appearance in research conference papers. This is not necessarily a bad thing; it encourages the innovation that we all benefit from cumulatively.

Invention and research

Two extremes: (1) Random mutation and natural selection. Invention is unconstrained, not boxed in by assumptions about what might be useful. Inventions proliferate and the marketplace identifies which are useful. Unexpected successes validate the high failure rate, we hope. (2) Intelligent design. Thorough research into context and risk precedes invention. We get fewer inventions—and it can be a triumph when HCI research eliminates the invention of features or products that would add clutter and no value—but a higher yield of useful outcomes, we hope.

Which is best, or is it somewhere between? 

If the optimal path is somewhere between, we need to confront the fact that research to guide design is more difficult to sell than invention. Reviewers and managers in a field driven by novelty understand “I made this,” but not, “The angry dog you do not hear barking?—I did that.”

Case study: An invention paper and a research paper. Post-PhD, I joined a talented inventive team in a software product development company. We took on several applications or features intended to support groups and solved the technical problems, but the products failed in the marketplace. Few inventions ever prove to be useful, but the educational benefits of failure, although perhaps not zero, are easily exaggerated. I headed back to research to learn why this software species was so challenging. My goal was not to spur innovation. I hoped to figure out what could increase the prospects for success, or identify contexts in which a given invention would more likely succeed. I was moving from (1), unconstrained invention, toward (2), at least some intelligent design.

In 1988, the team I had left released another group support product. Freestyle was lauded by PC Magazine, PC Computing, Computerworld, and Communications Week. Twenty years later, my colleague Bill Buxton marveled that no one had yet replicated Freestyle’s useful, easily understood features. The team published two papers. The invention paper [1] described Freestyle features. The research paper described an unforeseen deployment challenge. A major element in Freestyle’s commercial failure was a mismatch between the nature of significant communication loops, which span organizational units, and prioritizing and budgeting practices, which do not.

A quarter century later, products similar to Freestyle are arriving. The invention paper served its purpose. Despite the evolution of organizational infrastructures over a quarter century, the research paper remains instructive.

The allure of novelty

CHI and related conferences have always focused on new technologies that are or could soon be widely used. We have the opportunity to explore the boundary of invention and research in examining how technology meshes with behavior. That research includes obtaining feedback for iterative design, but can go deeper.

There is pressure to show that our field invents and innovates, perhaps to convince colleagues or funding agencies that we contribute. Artifacts connect. Douglas Engelbart’s obituaries began and often ended with “invented the mouse.” His deeper contributions were more difficult to grasp, such as his emphasis on intelligence augmentation in contrast to artificial intelligence.

Random mutation and natural selection could outperform intelligent design—encourage a million people to invent and we get 1000 useful inventions, benefiting us tremendously, and the 999,000 useless inventions will disappear unlamented. What do you think? It didn’t seem efficient to me in 1988, and it still doesn’t. A useless invention is more likely to be novel—if useful, one of the seven billion other people on the planet would more likely have come up with it already. An invention might be useful in a different context or with minor superficial changes—research could identify the opportunities. I have seen inventions die that had promising untested niches in which they might have thrived.

I have deep appreciation for my inventive colleagues. I have developed a healthy respect for the value of informing their work with research. The forces on scholarship and product development in our field militate against a holistic approach. With limited attention and access to unlimited information, with little time for deep analysis, we look to extract quick takeaways from what we read and see. This is easier with an invention paper or an invention. HCI research conferences may be morphing into invention conferences, with some room for academic papers that focus inwardly on “building theory” and extending the literature. Both signal retreat from HCI’s unique opportunity within computer science to provide direction through a broader perspective.

Endnote:

1. Levine, S.R. & Ehrlich, S.F. (1991). The Freestyle System: A design perspective. In A. Klinger (Ed.), Human-machine interactive systems. Plenum, pp. 3-21.

Thank you to inventive colleagues in jobs past and present, especially Patrick Baudisch, who has long thought deeply about and discussed these issues.


Posted in: on Mon, September 30, 2013 - 10:33:38

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.
View All Jonathan Grudin's Posts


Post Comment


No Comments Found


Learning from ePatient (scholar)s


Authors: Richard Anderson
Posted: Mon, September 23, 2013 - 9:30:36

Increasingly, patients are making invaluable contributions to the redesign of our broken healthcare system and the patient experience. Designers working in healthcare should be aware of and leverage these contributions.

Among the facilitators of this is Medicine X, a fabulous conference held annually in September at Stanford University. As stated by the conference organizers:

Medicine X aims to bring together the best and brightest doctors, patients, academics, and industry leaders to talk about emerging technologies and how best to improve healthcare.

We seek to empower patients and give them a louder voice in healthcare discussions.

...patients are a core set of stakeholders. Yet they typically haven't been meaningfully represented and engaged at academic medical conferences. We want to change that.

To fulfill this goal, Medicine X invites select ePatient scholar applicants to attend the conference and some ePatient scholars to participate in the conference organizational and planning process. What is an ePatient scholar?

ePatient scholar: 1. A specialist and expert who is highly educated in his or her own medical conditions and who uses information technologies (e.g., Internet tools, social networks, self-tracking tools) in managing their health, learning from and teaching others. 2. (Stanford Medicine X ePatient scholar) An educator and role model for other patients and health care stakeholders.

A valuable contribution provided by all of the ePatient scholars (and many, many other patients) is the story of their patient experience. Many of these stories are gripping, documenting much of what is wrong with healthcare and suggesting fixes. Some stories can be found in blogs; some stories can be found in online patient communities. During Medicine X, some stories are shared on stage. An example is that provided by Britt Johnson (pictured below) at last year's conference; the video of Britt's talk is essential viewing.


Britt Johnson

EPatient scholars' patient experiences form the basis of and provide the motivation for many of their additional contributions.

Two misdiagnoses and the urgent implant of a cardiac defibrillator made Hugo Campos realize how crucial it is for patients to engage in healthcare decision making with clinicians. This has prompted Hugo to tirelessly advocate for the rights of patients with pacemakers and implantable defibrillators to gain electronic access to the data collected by their electronic devices. Difficulty obtaining all sorts of medical records has led many to join Hugo in the call of "Give us our damn data." 

Unable to get a satisfactory response from doctors to her multi-year digestive problems, Katie McCurdy applied her design skills to the construction of a visual timeline of her symptoms and medical history. Katie's hope was that this timeline would communicate much more and more effectively than medical records or her usually rushed oral description in a doctor's office, and she has had some success with it. Wouldn't it be nice if such visual timelines could be created by or for other patients?

Important input to such a timeline might come from Symple, an app developed by ePatient scholar Natasha Gajewski for tracking symptoms. Natasha built this app because of the difficulty she had tracking the symptoms of her rare autoimmune disease between doctor's office visits. Symple is now used by tens of thousands of patients around the world.

Sean Ahrens (pictured below) is among the ePatient scholars who have made valuable contributions to what is increasingly referred to as peer-to-peer healthcare. Because of his and others' similar health needs, Sean designed and developed Crohnology.com, a social health network for patients with Crohn's, colitis, and other inflammatory bowel conditions. Crohnology.com lets patients share and learn what treatments work for others, track their health, and meet others near them. As stated in a recent MIT Technology Review article, "The site is at the vanguard of the growing 'e-patient' movement that is letting patients take control over their health decisions—and behavior—in ways that could fundamentally change the economics of health care."


Sean Ahrens

Many ePatient scholars help patients connect in other ways. Tweetchats are particularly popular. Three-time cancer survivor Alicia Staley's weekly tweetchat for the breast cancer community (#BCSM) is perhaps the best known of these. Alicia started this tweetchat to combat the extreme isolation she experienced. (To get a better sense of the importance of such connecting, see Katie McCurdy's blog post, "On Speaking Up.")

ePatient scholars share their insights in multiple ways. The contributions of the most well-known ePatient, Dave deBronkart—a.k.a. e-Patient Dave—have included a TEDx talk and an ebook entitled Let Patients Help.

Many share their insights via blogs. Katie McCurdy's blog, referenced above, is filled with gems. See her recent analysis of the use of the term "patient engagement" for another great post. Carolyn Thomas's great blog includes a related post. Sarah Kucharski, founder of FMD (FibroMuscular Dysplasia) Chat, is another excellent blog writer; her recent post on patient engagement provides important advice to designers of health apps.

Advice to designers is among the contributions I have made. As you might know from my interactions magazine blog posts alone, my writing and speaking on healthcare system and patient experience redesign have been focused, in part, on identifying what designers need to do in order to have maximum impact on that redesign. See, for example, "Are You Trying to Solve the Right Problem?," "What Designers Need to Know/Do to Help Transform Healthcare," "The Importance of the Social to Achieving the Personal," and the blog post you are now reading.

In short, there is much to learn from ePatient scholars, and you can learn more from and about most of those highlighted above as well as the other ePatient scholars attending Medicine X this year by accessing the 2013 ePatient ebook put together by the conference organizers. Use this ebook (and this blog post) as starting points to include the oft-missing voice in the redesign of healthcare and the patient experience: that of the patient. Better yet: Come meet us all at the conference September 27-29; we'll be happy to talk to you.




Posted in: on Mon, September 23, 2013 - 9:30:36

Richard Anderson

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.
View All Richard Anderson's Posts


Post Comment


No Comments Found


Can we afford no affordances in our user interfaces?


Authors: Monica Granfield
Posted: Mon, September 16, 2013 - 3:40:22

Lately, flat design has been a topic of conversation in the design community. It seems there are two main camps: those who love it and those who don’t. Flat design is in no way new to the screen. Some websites have, for a while now, approached flat design as a mix of flat and raised presentation, reserving the raised effect for controls such as buttons. Some sites have been full-on flat in their presentation. It was not until Microsoft released the Metro UI for Windows 8 that the discussions really ramped up. Apple’s iOS 7 will now take on some flat attributes, as will parts of Apple’s OS X applications.

Looking back, Windows 2.0 was flat until Windows 3.0 adorned buttons and controls with a 3D look. The Apple OS was black and white and had flat graphics until some slight beveling treatment came into play on its buttons and controls as well. As the capability for richer and more lifelike graphics became possible, these environments expanded on the idea of creating an experience that would look and feel tangible by enhancing the depth of the graphics. These were referred to as “cool” and “sexy.” Clearly, these graphics had appeal. No more battleship gray bevels!

It seems, however, that just as the appeal and fascination with the gray bevel wore off, so too has the appeal of glossy graphics. I have heard debates declaring that the previous 3D graphics are “too rich and feel heavy.” On the flip side comes the feedback on flat graphics: they “look juvenile and too playful.” I often hear that “modern design is cold and not homey enough.” I also hear that flat design is “lighter on the eye.” This feedback has me wondering whether these trends are driven more by form than by function, and if not, how they are affecting functionality.

Don Norman originally compared affordances of the physical world to those of the metaphorical world of computing. He later restated his take on what an affordance is, declaring computer affordances to be more of “perceived affordances” than actual physical affordances. Without going too deep into the history of affordances, a perceived affordance is the quality of an object that suggests how it might be used. As Norman explains: “Does the user perceive that clicking on that object is a meaningful, useful action, with a known outcome?”

What do users of a flat interface perceive as clickable? Much has changed in the flat interface, and I am referring not only to Windows but to the computing community in general. Gray controls are no longer necessarily perceived as disabled. Are our perceptions changing based on familiarity with computers? Have users evolved enough to know what to click on based on familiarity? For example, is a beveled gripper control more recognizable than a flat control? Or is the flat presentation of the control perceived as a control based on experience alone, on the pattern and location of the shapes, or both?

Users know and have known that shape, text, and placement are all affordances that, as Norman suggests, are perceived as such based on these attributes and not on the depth of glossy 3D graphics. An arrow is an affordance even if it’s flat, and the visual language of an arrow always tells us it implies direction. A rectangle with text is perceived as a button that, when clicked, issues the command specified in the text. Although visual patterns that are used consistently create a language that is discovered and learned by the user, what I am seeing is a wide variety of patterns offered depending on the product, platform, and environment. I would be curious to know the impact of such variety on users, if any.

Another trend appearing alongside flat design is what I consider a type of “context-based” design. It used to be that controls that were not available were disabled rather than moved in and out of the UI, as the movement was deemed “too disruptive.” This rule of thumb is often still in use; increasingly, however, the pattern is to design controls that come and go, rather than enable and disable, depending on what object or task I am focused on. Everything from commands to scroll bars comes and goes, depending on what has focus and the context of the task. This does seem like a way to simplify the visual presentation, fit more functionality into a single experience, and create a more directed and focused task. I have to say that although I often do a double take on a scroll bar that miraculously appears and grabs the corner of my eye, I much appreciate the simplicity of not having multiple scroll bars display at once. I do wonder about the discoverability of some pop-up controls and whether the simplicity in these cases pays off in the experience.
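
To make this come-and-go pattern concrete, here is a minimal sketch using Python's standard Tkinter toolkit. It is purely illustrative and not drawn from any of the products mentioned above: a scroll bar is shown only while its text area has keyboard focus, and the widget choices and the focus-based trigger are my own assumptions.

# Minimal sketch of a context-based control: a scroll bar that appears
# only while its text area has keyboard focus. Illustrative only.
import tkinter as tk

root = tk.Tk()
root.title("Contextual scroll bar sketch")

text = tk.Text(root, height=10, width=40)
scrollbar = tk.Scrollbar(root, orient="vertical", command=text.yview)
text.configure(yscrollcommand=scrollbar.set)
text.pack(side="left", fill="both", expand=True)

def show_scrollbar(event):
    # The control enters the UI when its context (the text area) has focus.
    scrollbar.pack(side="right", fill="y")

def hide_scrollbar(event):
    # The control leaves the UI when focus moves elsewhere.
    scrollbar.pack_forget()

text.bind("<FocusIn>", show_scrollbar)
text.bind("<FocusOut>", hide_scrollbar)

root.mainloop()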

What I find interesting is that this seems like a very exploratory time for the interface. I, for one, find it exciting and liberating to explore the experience and have a bit of flexibility and freedom. If this freedom is due to an evolving and exploratory user base with new expectations, then simpler, more directed interaction, flat or otherwise, may be something that we can afford after all.


Posted in: on Mon, September 16, 2013 - 3:40:22

Monica Granfield

Monica Granfield is a user experience designer at Imprivata. The views expressed on this website are exclusively her own and are not meant to reflect or represent the views of Imprivata.
View All Monica Granfield's Posts


Post Comment


No Comments Found


Patina of things


Authors: Tek-Jin Nam
Posted: Thu, August 29, 2013 - 10:18:03

I tend to use things for a long time. It feels as if my belongings take on aspects of myself, so it is difficult for me to throw my possessions away. When our building was renovated a few years ago, I kept all my furniture, even though it was not the best fit for the new interior. The furniture was special to me, since I received it when I started living here, and we have lived together ever since.

I have many old electronic products like this. I still keep digital cameras and MP3 players that are more than 10 years old. My audio set, with an amplifier and speakers, is more than 20 years old. I bought the set with earnings from my first part-time job. The company that produced the amplifier no longer exists. When I moved to different countries for study, the audio set always followed me. I am happy that it still works well and gives me the joy of listening to music.

One of my most valuable possessions in my twenties was a video camera with an 8mm tape deck. I invested much more in it than in the audio set. At that time, video recording was rare. I recorded the lives of my family and friends, thinking I was making a sort of time capsule. The video camera no longer works, the LCD broke a long time ago, and 8mm videotapes are no longer available on the market. Video-recording media have since moved from 8mm tape through 6mm digital tape and DVD to memory cards. Although the video camera is junk, I still keep it. When I look at it, I feel the memories ingrained in it. It may take some determination to abandon it.

Many designers wish to create things that are used and loved by many people for a long time. This is a challenging task. People should want to own those things and feel special when they use them. First of all, the item should be physically durable: it should work well without malfunctioning or breaking down. In addition, it should provide emotional durability, as Jonathan Chapman stressed in his book Emotionally Durable Design. The product should also be resistant to changing trends; people should not get bored with it easily. It is particularly difficult to create IT products and services that meet these requirements, as they depend on rapid technological development and changing standards and are, alas, rendered obsolete, just like my 8mm video camera.

A designer who wants to create physically and emotionally durable products faces a dilemma. The designer wants people to use the products for a long time, but then risks making the role of creating updated versions unnecessary. Artificial obsolescence is a term I learned early in my design education. It is a marketing practice in which companies deliberately make old models appear out-of-date by introducing new ones with changes and additional features to attract customers. It seems bad for our environment and for end users, but companies often need this approach to make profits and remain sustainable in the commercial world; therefore, many product designers are engaged in this practice. Meanwhile, if designers created functionally and emotionally perfect products, they would, theoretically, have nothing more to do with those products, since end users would have no need for new models. It is ironic that as more people love and use designed products, the designers who wish to create quality work lose their purpose. Fortunately for designers, the world behaves differently. People are capricious, and sociocultural trends keep changing with the development of technology.

Therefore, it is important for designers and HCI specialists to study how to create products and services that many people use for a long period of time. What would be the key characteristics of such IT products and services? I think one of the ways to create long-lived precious things is to add stories and meanings for owners. Perhaps the stories can be kept in a visible and invisible patina that stores memories of interactions between people and things.  

Products with patina often take on special meaning for their owners, just as my possessions did. Many products give such feelings naturally, without physical patina, as the associated memories invisibly remain somewhere. Things inherited from parents are treated as precious; we regard such objects as preserving our parents’ memories.

This attitude is not unrelated to the belief that people’s souls inhabit their possessions. Many cultures believe that, as people use things, their souls are transferred to the objects. This is particularly common in Asian cultures. In Japan, there is a view that no spirit exists in newly created, unused objects; in contrast, objects that have been used by many people are considered to have strong spiritual power. When many people use an object, that spiritual weight creates value. Examples of such products are the Super Normal products introduced by Morrison and Fukasawa.

Traditionally, many Korean people think that when they buy or rent a house, traces of the previous occupants influence their lives. If the previous occupants went on to a prestigious university or succeeded professionally, the house tends to be sold or rented out easily. On the other hand, people are hesitant to use objects or live in houses associated with bad luck or trauma. This is a common theme of horror movies from both Eastern and Western cultures, suggesting that the belief resonates widely with the human mind.

An example of an object that stores patina is the Long Living Chair, presented at CHI 2013 Interactivity. It is a rocking chair with a semi-hidden display showing the day it was produced and how many times it has been used. The information provides a moment of wonder and a sense of relatedness to the object when it is accessed. The movie The Red Violin, directed by François Girard, tells the story of a mysterious violin and its many owners. I thought that the violin could have been even more special if it had a means of keeping traces that unfold its stories. Moonhwan Lee in my research lab is also investigating the potential of patina as a design strategy.
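
As a sketch of what such stored patina might look like in software, the following Python fragment keeps a production date and a log of use events. It is an illustration in the spirit of the Long Living Chair, not its actual implementation; the field names and the example production date are my own assumptions.

# Illustrative sketch of a digital "patina" record: when an object was made
# and how often it has been used. Not the Long Living Chair's actual software.
from dataclasses import dataclass, field
from datetime import date, datetime
from typing import List

@dataclass
class PatinaRecord:
    produced_on: date                                  # assumed: production date
    use_events: List[datetime] = field(default_factory=list)

    def record_use(self) -> None:
        """Log one interaction with the object (e.g., one rocking motion)."""
        self.use_events.append(datetime.now())

    def summary(self) -> str:
        age_days = (date.today() - self.produced_on).days
        return f"Made {age_days} days ago, used {len(self.use_events)} times."

chair = PatinaRecord(produced_on=date(2012, 5, 1))     # assumed production date
chair.record_use()
print(chair.summary())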

We often come across situations where objects remind us of their owners. I speculate that the souls of users become ingrained in objects, and that such objects take on the identity of the owner. In the Harry Potter stories, Horcruxes are objects in which a dark wizard or witch hides a fragment of his or her soul in order to attain immortality; a Horcrux must be the wizard’s most precious object. The possessions we care most about can be like everyday Horcruxes. If you could store your soul in objects, which objects would you choose? Those objects would be the most meaningful and valuable things that designers could produce for everyday people.

In the analogue world, these objects are musical instruments from a master, ornaments from parents, or books and stationery from ancestors. Sherry Turkle introduced such things as evocative objects. In the digital world, people consider IT products such as laptops and smartphones meaningful objects in their lives. Moreover, people seem to think that intangible information or content can store people’s souls. Recently, I saw a TV drama in which a girl keeps an old feature phone with great care because it holds the last voice message of her father, who died in an accident. She listens to the message for comfort when she is troubled. At some point the phone breaks down and cannot be fixed, and we empathize with the sorrow of the girl who has lost its contents. The Korean movie Phone, directed by Byungki Ahn, is a thriller about a soul attached to a phone number; in it, a woman takes over a phone number from its previous owner, a mysteriously murdered girl. As in these stories, we are in a time when virtual content or information, such as website addresses or QR codes, can become the things associated with our souls.

The emerging new forms of IT products and services change the ways we possess them and connect with them emotionally. There are many popular songs composed by musicians who unfortunately committed suicide. When I listen to these songs, I feel a special emotional connection to those musicians. Does that emotional connection depend on the type of media? Would the musicians’ digital music or photo files create an emotional connection similar to that of a physical inheritance, such as LP records or personal objects? I speculate that form and interaction change the way we feel about, and connect emotionally with, the things we care about.

I think that understanding how people come to own, use, and abandon the precious things they love and have used for a long period of time can help us create a people-centric future. To create IT products and services that provide emotional experiences and added value in the digital world, we need more ideas. I wonder whether the application of patina can be a candidate for making things with a soul.




Posted in: on Thu, August 29, 2013 - 10:18:03

Tek-Jin Nam

Tek-Jin Nam is an associate professor in the Industrial Design Department at KAIST.
View All Tek-Jin Nam's Posts


Post Comment


No Comments Found


The past, present, and future of women in STEM


Authors: Ashley Karr
Posted: Mon, August 26, 2013 - 3:02:21

Takeaway: An impactful way to make lasting, positive change for women in STEM is to constantly adjust in small, simple, everyday ways. In other words, change starts at home.

A very brief look at the past

  • The word scientist was first used in reference to a woman, Mary Fairfax Somerville, in a review of her book On the Connexion of the Physical Sciences.

  • Augusta Ada King, Countess of Lovelace, was the first computer programmer and the first person to realize computing’s potential—it would transcend mere calculation and change what it meant to be human.

  • Programming languages are based upon natural language rather than machine code or related languages due to the wisdom of a woman, United States Navy Rear Admiral Grace Hopper. Her nickname was Amazing Grace due to her rank and breadth of accomplishments.

I have a master’s degree in engineering and a career in human-computer interaction (HCI), but I did not know about these amazing women until a few months ago, when I began volunteering for the Anita Borg Institute (ABI). ABI is a non-profit organization that seeks to increase the number of women in technical fields and to encourage the creation of technology by women. Looking back, I am disappointed that my education did not include at least an overview of the important contributions made by technical and scientific women. I can’t help but wonder if adding information like this to our basic curriculum would improve the odds of women completing their education and enjoying long careers in STEM. The fact that countless women who made important and influential contributions to STEM over the centuries have been overlooked by history is, to be quite honest, rather offensive. I know I am not the only person working to rectify this and give credit where credit is due.

A walk through the present

I recently reviewed scholarship applications for the Grace Hopper Celebration (GHC) of Women in Computing. GHC is an annual conference put on by ABI and the Association for Computing Machinery (ACM) that brings the career interests and research of women in computing to the forefront. When I read the applicants’ essays, I felt an immediate bond with these women as I observed that our stories as women in STEM held many parallels. Here is a rough sketch of these common themes:

  • We feel isolated because we are different from the men we work and study with and the women we socialize with. It is difficult to find the social support that we, as humans, require.

  • We feel like we have to prepare ourselves for unfair treatment. If we confront the inequities or try to fight back, we fear our careers will be damaged.

  • Our appearance, dress, and abilities are commented upon and questioned at regular intervals.

  • We feel like we are imposters and are constantly pushing ourselves to be better to prove that we belong in STEM.

  • We feel shocked, surprised, and relieved when we find other women who have had similar experiences, and the same shock, surprise, and relief when men understand and support us.

  • We want to do something to change this, but we don’t know what to do, so we reach out to other women in STEM through organizations like ABI and conferences like GHC.

  • Despite these challenges, we remain in STEM because we love our work, and we want to make things better for other women like us.

After reading the GHC scholarship applications, I was moved to act—to do something that might actually make a difference sooner rather than later. I started talking to friends, family, and colleagues about this and asked them what they would do to change things. What most everyone I spoke to recommended was not a massive and noisy movement. They suggested continuously speaking and acting in encouraging, supportive, positive ways toward women in STEM. Their suggestions reminded me of something I have learned from working in STEM: the small, mundane, everyday things that we interact with frequently or for long durations are often overlooked and taken for granted; cumulatively, however, these little things have huge impacts on our lives.

The following is an overview of some of these small, subtle, daily actions we can undertake: 

  • Engage in the Golden Rule. Treat others the way you would like to be treated. Everyone wants to be treated with respect, supported, encouraged, and valued for their efforts and accomplishments. If we are in a situation that lacks any of these elements, we can be their harbingers.

  • Self-reflect. We should be very honest with ourselves and notice our own biases in thought and in action that may suggest a woman might be less competent, less mature, or less capable than her male counterparts.

  • Listen. When someone brings up an issue regarding the challenges that women in STEM face, we should listen to what they have to say. We should listen before, during, and after passing judgment. Then, we should listen more.

  • Talk. We should discuss our biases, good or bad, with people who will listen and suspend judgment.

  • Self-reflect again. We should notice how our biases play out in our everyday lives. For example, in your family, when someone needs help with their computer, is a male relative or female relative asked for their expertise? Are the males the ones called upon to handle anything technical? We all can change this starting now, and we should.

  • Act. It’s amazing what happens when words turn into action.

A glimpse into the future

As a result of taking my own advice, I started making a stand with my family. When a request for technical support moved through the ranks, I would volunteer, pointing out that they had an engineer in their midst and that we should put my degree to use. At first, I met with a lot of resistance. My relatives were not used to relying upon a woman for technical support, but they conceded, and now I have more requests than I can manage to set up projectors for post-holiday slide shows and troubleshoot misbehaving computers.

The most astonishing result of my attempt to support women in STEM starting at home came one Friday night while I was working overtime on a project. I had settled myself at the kitchen table after dinner and was working away on a prototype for a website that had to be completed within the next few days. My five-year-old niece, Emily, wandered over and asked what I was doing. I showed her my prototype and explained a little bit about my career. I thought I had bored her, because she very quickly wandered out of the kitchen, and I went back to work. About an hour later, she returned to the kitchen table with her very own handmade paper prototype of a laptop computer. She set it down next to mine and started typing away on her keyboard. She told me, “Auntie Ashley, this is my computer.” I asked her what she was doing on her computer. She said, “I’m making dot com’s, and I have to type really hard because I have to think really hard.”

The future of women in STEM looks very bright.

Despite the strides we have taken in recent history to support and encourage women in STEM, we must remember that women tend to face far more challenging familial, cultural, regional, national, and global barriers than men when it comes to pursuing any type of education and career. Imagine what would happen if we could move past these barriers and harness the potential and intelligence of all people, regardless of gender, socioeconomic status, cultural background, genotype, and phenotype. The world would very quickly become a much better place. I hope that at least some of the people reading this article will feel compelled to make their own small, subtle, daily changes for the better. I am thankful for the thousands of other women and men in STEM actively involved in ABI, GHC, and other organizations and events that are part of this change engine. I am honored to be part of it, and I hope you are as well. Please feel free to comment below or send me an email to tell me your stories about the challenges and triumphs you or those you know have faced as women in STEM. I look forward to hearing from you.



Posted in: on Mon, August 26, 2013 - 3:02:21

Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.
View All Ashley Karr's Posts


Post Comment


@Ashley (2013 08 29)

http://adainitiative.org/what-we-do/impostor-syndrome-training/

This is interesting. Apparently, the “impostor syndrome” is more of an issue than I realized.

@Jen (2013 09 08)

I think the small daily actions here are great. It’s particularly nice to read your examples of what can happen if you change your own behaviour, rather than waiting for the world around you to change. I think those personal, positive stories are important, so I’d like to share one.

I was recently invited onto a careers panel session at an event for end-stage PhD candidates. I’ve a PhD in computer science, had a short but successful academic career and now run a training consultancy business. The panel was entitled “What do employers look for?” and having previously been both a postdoctoral employee and employer I had much to offer.

There were 5 people on the panel and I was the only woman. The first question was asked and I had some thoughts to share. But I waited for one of the others to answer first. Then, probably for the first time in my life, I noticed myself doing this - stepping back to give the guys a chance to speak first. It was like a bolt of lightning striking. The next question I had some experience to share so I jumped straight in. My response sparked a meaningful discussion and this gave me confidence. For the rest of the session if I had something to say I didn’t wait for the others to have their say first, I just spoke up. No one was offended, no one reined me in and on several occasions I heard the other panel members saying “I agree with Jen…” or “Jen’s absolutely right…” and then giving their answers. At the end of the session I was happy that I’d made several important contributions and this was confirmed by feedback from the participants and my colleagues on the panel.

Now I can’t remember ever being explicitly told to wait and let boys speak first, but perhaps I’ve picked up cues as I was growing up that this is how I should behave? Anyhow, now I’ve called my subconscious out on its sabotaging behaviour, I’m looking forward to the next opportunity to speak over it.

@Ashley (2014 01 10)

Thanks, Jen!

I am very impressed :)

Happy 2014.

@Annaka Johnson (2014 03 09)

Were you able to go to her room and see if she had any design process edits that didn’t make the final version she showed up w/? Were there other prototypes? Could you see her thought process thru those pieces?


Canyonlands


Authors: Jonathan Grudin
Posted: Fri, August 23, 2013 - 10:37:46

The Colorado Plateau. 130,000 square miles (337,000 square kilometers) of high desert and scattered forests in Utah, Arizona, New Mexico, and Colorado. Home to 10 National Parks, including the Grand Canyon, and 17 National Monuments. Its features include the Colorado and other rivers, towering cliffs and deep canyons, arches, domes, fins, goblins, hoodoos, natural bridges, reefs, river rapids, and slot canyons.

The visible structures formed over hundreds of millions of years. Inland seas periodically inundated the region, leaving thick layers of sediment and minerals when they retreated. After the last sea withdrew 300 million years ago, periodic accumulations of fresh water continued to put down layers. Sixty to 70 million years ago came the great uplift, pushing the entire region up thousands of feet as a single piece—a key to its unique character—as the Rocky Mountains rose to the east. Then followed tens of millions of years of erosion, especially rapid when ice ages brought precipitation. The layers of sandstone, shale, limestone, and gypsum, tinted red by iron, purple by manganese, or streaked blue with copper, including layers of fossils, entire petrified forests, and remarkably thick layers of salt and potassium laid down by evaporating seas, eroded at different rates, creating the spectacular formations listed above.

No electronics

For four days this summer, 10 children, 20 adults, and five guides rafted down and camped along the Colorado River in Utah. Recent rain had turned the warm water brown with a fine silt that penetrated deep into our hair and clothes.

A few dozen strenuous rapids lasted a few minutes apiece. The rest of the time we drifted or paddled through spectacular Canyonlands National Park, spotting the occasional mountain sheep, eagle, and heron. The guides challenged us to spot centuries-old Anasazi granaries on the cliffs. Late afternoons we found a beach and assembled tents and cots, ate (it must be reported that the Western River Expeditions guides did the cooking, and it was exceptional), washed up, hiked, and played games.

No electronics.

The electronic ashram

In 1998, Tina Kelley wrote an article for the New York Times, “Only Disconnect (For a While, Anyway).” It featured Colby professor Batya Friedman, who spent summers off the Internet, using a telephone twice a month when in town to shop. I was interviewed:

Jonathan Grudin, a professor of information and computer science at the University of California at Irvine who has worked with Professor Friedman, reminisces about a time when he was completely removed from modern communication, including telephones, during a trip to Africa in 1989. He had hoped to feel that free again on a recent trip to Madagascar, but discovered that he was too late.

''There's no place you can go on the planet now where you couldn't be in contact if you had a device worth a couple hundred dollars, so the days you could spend with a completely clear conscience and get out of contact with people, those days are pretty much gone,'' he said. ''There should be socially sanctioned electronic ashrams, where you could check in for a few days.''

In 1998 I missed the experience of disappearing into Burundi only nine years earlier. Now, 15 years after that, I didn’t miss it because I had forgotten what it was like. Batya, now at the University of Washington, emails me across Lake Washington that she still manages periods of isolation, which today require more careful preparation. In contrast, I was tethered to email and news—until Canyonlands.

You may be less steadily connected, but for me, the trip was a reminder of what life was once like. We had to talk with the people around us or not talk at all. We described our jobs, careers, and lives. On the raft, free of a need or ability to focus on current concerns, many took the opportunity to reflect.

A shifting sense of time

We had previously visited other national parks in Utah: exquisite Bryce Canyon; imposing Zion; and Capitol Reef, which we drove through on wonderful state highways to reach Moab, home of the Arches. Visitor center videos, posted park signs, and guidebooks provided a stream of overlapping explanations of the geology of the region, reinforced by the guides on the rafts.

My sense of time changed unexpectedly through repeated exposure to the historical accounts interleaved with hours of gazing at the uplifted horizontal sedimentary layers above the canyon as we floated down the Colorado, with massive piles of eroded boulders or sheer smooth rock faces shooting up hundreds of feet at the water’s edge. I found myself immersed in the epochs, thinking of the world in terms of tens of millions of years, not the usual weeks, months, or even decades.

Studying large pictographs and petroglyphs, painted and carved 1,000 years ago on a vertical face at the base of a towering cliff, I suddenly realized that in the midst of this geological violence, these ancient creations have not yet eroded at all—1,000 years is like yesterday in the life of the rock formations. A different perspective formed on humanity and the challenges we encounter and create.

One afternoon, as we floated downstream in the shadow of endless expanses of striated cliffs, I mused to two of our companions, “What do you suppose people rafting in this area 10 million years from now will see?” They considered this silently for a few seconds. Based on their replies, I figured we would exchange email addresses and stay in touch after the expedition, and so we have. To a social network that felt pretty much maxed out, I have added PJ and Carrie, and Mike. That alone was worth the journey.

The kids on the trip bonded and formed a plan: When back, everyone who was staying on in Moab convened at 8:30 p.m. in a city center t-shirt shop. Three times in the next hour, a different adult excused himself from a conversation to take a phone call. “You need the report when?”

The 100-million-year timeline faded. My next week’s calendar came into focus. We were home.

Gayna Williams researched and planned the expedition, and has previously attempted to provide electronic ashram experiences that the author managed to circumvent... Eleanor and Isobel, who do at times see their father detached from a computer, paddled, swam, hiked, and competed enthusiastically at kubb on the beach.


Posted in: on Fri, August 23, 2013 - 10:37:46

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.
View All Jonathan Grudin's Posts


Post Comment


@Lone K. Hansen (2013 08 28)

Sounds like you had a great vacation in stunning scenery and in great company. However, I’m not sure you experienced “how life once was” even if it perhaps felt like it then/feels like it now. Rather, it seems like you experienced the time offline in that particular way because it was marked by being different than what you normally experience, and by not being like that forever :)
I am reminded of this criticism of Sherry Turkle’s “Alone together” idea, this essay arguing that Turkle is fundamentally wrong when she so clearly separates that which is connected/digital from that which is not: http://thesocietypages.org/cyborgology/2012/04/23/sherry-turkles-chronic-digital-dualism-problem/
I’m not calling you a digital dualist, however, as you also say that one of the best parts of the trip is that you now have three more people in your online connections.
But even with an Internet connection you would perhaps have gotten to know them anyway :)


Interacting under canvas


Authors: Steve Benford
Posted: Mon, August 19, 2013 - 11:46:45

I’m just back from a short camping trip and reflecting on how exciting it is to live under canvas. There is a visceral thrill to being in a tent as the thin fabric leaks noise, light, heat, and shadows. Lying awake in the dark you become aware of nearby voices, the sound of rain pattering on the roof, the tent shaking in the breeze. It’s the perfect time to imagine a world of stories that might be happening outside.


Summer camping (there’s rain on the way)

The Storytent

This fascination with tents has inspired several projects at the Mixed Reality Lab. The Storytent aimed to create an intimate and exciting interactive storytelling environment for children. We folded a projection screen into the shape of an A-frame tent, projected synchronized graphics onto both sides, and added sound to create a mini immersive environment. We experimented with various ways of interacting with the tent: RFID readers placed at its ends recognized the comings and goings of (tagged) occupants and their possessions; a touch-screen map transported the tent to new locations in the virtual world; and shining flashlights onto the tent manipulated virtual objects, as shown in this short video. This final technique used bespoke computer vision software for identifying and tracking flashlight beams.


Using flashlights to interact in the Storytent
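
The lab's software was bespoke, but the core technique of spotting and tracking a bright flashlight beam on the tent fabric can be sketched with off-the-shelf computer vision. The Python fragment below illustrates only that general idea and is not the Storytent's code; the camera index, brightness threshold, and minimum spot size are assumptions.

# Minimal sketch of flashlight-beam tracking on a projection surface using
# OpenCV. Illustrative only; not the Mixed Reality Lab's bespoke software.
import cv2

CAMERA_INDEX = 0            # assumption: a camera watching the tent fabric
BRIGHTNESS_THRESHOLD = 230  # assumption: beam spots are near-white
MIN_SPOT_AREA = 50          # assumption: ignore small specular noise

cap = cv2.VideoCapture(CAMERA_INDEX)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Keep only very bright pixels, which the flashlight beam produces.
    _, mask = cv2.threshold(gray, BRIGHTNESS_THRESHOLD, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) < MIN_SPOT_AREA:
            continue
        x, y, w, h = cv2.boundingRect(contour)
        centre = (x + w // 2, y + h // 2)
        # A real system would map this point into virtual-world coordinates
        # and move the corresponding virtual object.
        print("beam at", centre)
    cv2.imshow("beam mask", mask)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()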

We deployed the Storytent at Nottingham Castle, the ancient site of many thrilling and gruesome stories: Richard the Lionheart arrived there to confront King John after crusading (but he spoke French so the locals wouldn’t let him in); Richard III rode out from there to his death at Bosworth Field (they’ve recently dug him up again in nearby Leicester); Charles I raised his standard there to declare the English Civil War and summon an army (but no-one turned up so he went to Oxford instead); the locals burned down the Castle during the Corn Law riots (you may be getting a sense of Nottingham’s attitude to authority by now); and of course, Robin Hood got up to all sorts of adventures there (and yes he absolutely did exist).

Historical digressions aside, what better place to experience these stories—which children did by exploring the castle grounds and filling in paper clues (which were tagged with RFID) before taking them into the Storytent and using them to trigger the replay of stories.

ExoBuilding

If the Storytent strives for excitement, then our second interactive tent aims for the opposite: meditative relaxation. ExoBuilding has been created by Holger Schnädelbach, Alex Irune, Dave Kirk, Kevin Glover, and Patrick Brundell as an early prototype of a built structure that reacts physically to its occupants’ activities—an idea that they call adaptive architecture.

ExoBuilding takes the form of a tent that flexes and moves in direct response to an occupant’s respiration while also sonifying their heartbeat. Early experiments revealed that this form of biofeedback triggers changes in participants’ physiology, leading to lower respiration rates, higher respiration amplitudes, respiration-to-heart-rate coherence, and lower-frequency heart rate variability, and causing some people to report feeling more relaxed. The team suggests that there is potential for use as a biofeedback device.


ExoBuilding flexes in response to breathing
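
The control loop behind such adaptive architecture can be summarized simply: read a physiological signal, drive the structure from it, and sonify another signal. The Python sketch below illustrates only that idea; the sensor and actuator functions are hypothetical placeholders (simulated here), not the prototype's real hardware interface, and the steady 60-beats-per-minute pulse is an assumption.

# Minimal sketch of a biofeedback loop in the spirit of adaptive architecture:
# the structure moves with breathing and a click marks each heartbeat.
# Sensor and actuator calls are hypothetical placeholders, simulated here.
import math
import time

def read_respiration() -> float:
    """Hypothetical sensor read: chest expansion normalised to 0..1 (simulated)."""
    return 0.5 + 0.5 * math.sin(time.time() * 2 * math.pi / 10)  # ~6 breaths/min

def set_tent_height(fraction: float) -> None:
    """Hypothetical actuator: raise or lower the tent apex (0 = low, 1 = high)."""
    print(f"tent apex at {fraction:.2f}")

def play_heartbeat_click() -> None:
    """Hypothetical audio cue marking one heartbeat."""
    print("thump")

last_beat = time.time()
for _ in range(300):                        # run the loop for about 30 seconds
    set_tent_height(read_respiration())     # tent flexes with the breathing signal
    if time.time() - last_beat >= 1.0:      # assumed steady 60 bpm for the sketch
        play_heartbeat_click()
        last_beat = time.time()
    time.sleep(0.1)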

Tents as interfaces

These tents have some distinctive and unusual properties when considered as computer interfaces. 

They surround and enclose their occupants to form an immersive display, similar in principle to a virtual reality CAVE, but very different in practice due to their personal scale. Of course, it can be tricky to read high-resolution graphics on a screen that is so close to your eyes, or to move, gesture, and look around in order to interact when sitting in a cramped tent. On the other hand, the snugness of the tent lends a sense of intimacy to storytelling, while isolation may help with relaxation. Also, not being able to turn or look around quickly can bring suspense to storytelling, emphasizing the feeling that something may be approaching from behind.

Unlike larger immersive displays, tents have both insides and outsides. This allows for a separation between those who are immersed and interacting inside, for example a child engaging with a story, and a wider audience that remains outside the tent but can still see what is happening, for example parents or perhaps other children who are waiting for a turn. A tent is therefore an example of a spectator interface, an interface that is deliberately designed to reveal some, although not all, aspects of interaction to an audience.

As with the fabric of a regular tent, the screens of our interactive tents are porous membranes, leaking sound and light in both directions. This opens up opportunities for playful interactions between those inside and those outside, such as whispering, casting shadows, and even shaking the tent (great fun for parents!).

Finally, as ExoBuilding shows, they can flex and move, changing shape and form under computer control, introducing a sense of physicality and even synchronizing with an occupant’s physiological responses.

Interfaces as tents?

In turn, these interactive tents illustrate a wider interaction design principle. Might it be useful to think of all interfaces as being “tents,” by which I mean permeable boundaries that connect different worlds or spaces? My colleague Boriana Koleva first explored this idea during her PhD research and referred to such interfaces as “mixed reality boundaries.”

So the Storytent is a permeable boundary that connects three spaces: the physical spaces of inside and outside and a virtual world. It both separates these spaces, providing a degree of isolation and intimacy, and allows some information and interactions to flow between them, even permitting participants to traverse the boundary, for example by entering the tent from the outside or entering the virtual world.

Is it useful to conceive of other interfaces as being tent-like permeable boundaries? Might games consoles be designed around the idea that players peer into or enter a virtual world while others look back out at them (especially now they routinely come equipped with cameras such as the Kinect)? Should communication tools such as Skype be reimagined as boundaries between physical spaces that might afford different kinds of isolation and permeability? Might even the familiar web browser be reconceived as a permeable boundary to the Web, one through which we look while others look back out at us, tracking our movements and actions? 

There may be something to be said for seeing these interfaces as permeable membranes, encouraging us to reflect on how information, interaction, and presence flow in both directions and how we might want to design them to enable or restrict such flows, to address audiences as well as users, or simply to lend them some tent-like excitement. Perhaps we are all interacting under canvas after all?


Posted in: on Mon, August 19, 2013 - 11:46:45

Steve Benford

Steve Benford is professor of collaborative computing at the University of Nottingham’s Mixed Reality Laboratory.


Positivism in design


Authors: Ashley Karr
Posted: Mon, August 12, 2013 - 10:43:00

Takeaway: Applying basic tenets of Positive Psychology during design evaluations can help teams cooperate and be more productive. This means that evaluators must discuss the positive aspects of a design at least as much as the negative.

A few months ago, I was speaking to a group about human factors (HF), human-computer interaction (HCI), and user experience (UX). To make my presentation more interactive, I asked the audience to evaluate an innovative new design. They responded immediately with a number of evaluations, but I soon noticed that all of their comments were negative. Their attention was on what was wrong, so I tried to shift their focus. I asked them to now evaluate positive aspects of the design. No one said a word. I was shocked. 

This experience stayed with me. After the presentation, I kept wondering who had kidnapped the word evaluate and replaced it with criticize. I also wondered why it was so difficult for these educated, intelligent people to say something positive or constructive. Then I came across a report by a pair of HF specialists from the design consulting firm IDEO. The report was titled “Are You Positive?” The authors, Aaron Sklar and David Gilmore, began the report by stating that many designers have operated under a disease model, essentially diagnosing the problems with a given design and making changes to mitigate risk and avoid damage. They then encouraged readers, especially those in the engineering and design fields, to take a positive approach to building and evaluating. Just imagine how well a team of hardworking engineers, designers, developers, and stakeholders would do if they made the design process itself human-centered. Goal one is to develop a pleasurable experience for the user, and a good way to get there is to create a pleasurable team environment for the people creating the experience. Revolutionary indeed.

Sklar and Gilmore were inspired by a movement within psychology called Positivism. Positivism states that to reach a good psychological state, people should focus on what is good in their life and build on it, rather than focusing on and trying to fix or control what is bad. This is not to deny that bad things happen, in life or in design, and to be a responsible adult and professional, risk does have to be mitigated. This is simply to state that our default perspective ought to be a positive one. Here are three examples that Sklar and Gilmore gave on how to apply positivism to design right now:

  • When asked to evaluate a design, try to create a longer list of positive aspects than of negative attributes.
  • Create evaluation metrics for positive aspects of a design and share them with colleagues.
  • Spend more resources (time, energy, money) on building new iterations than on breaking down old iterations.

I felt relief finding and reading Sklar and Gilmore’s report, knowing that a few more people in the world had gone through experiences similar to mine. I am happy to help them spread the word. Whether you are an engineer, a manager, a developer, a designer, a speaker, or just a person looking to live a better life, we can all take something from Sklar, Gilmore, and Positivism. I challenge you, the reader, and me, the writer, to test this perspective for at least 24 hours—more if we can. Then notice what happens. What many have discovered is that creative ideas and solutions start flowing. I am certain we all can use more of this creative flow. So, test it out and get back to me on what you find. I would love to hear from you. You can send me an email or post your comment below.


Posted in: on Mon, August 12, 2013 - 10:43:00

Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


@Adrian (2013 08 15)

Ashley, this is not only brilliantly written but it is quite frankly a tranquil breath of fresh air.

I deeply encourage the team at Interactions to invite Ashley to write more here.

@Simon (2014 01 13)

There is always something positive in an idea or a design. Sometimes something might be way off brief but even then it will still have some sort of merit. I have found it’s dangerous to stomp on creativity. Creatives and Designers will pull their head in & learn what an organization likes and stay within those confines to avoid being stomped on.


What is your UX style?


Authors: Monica Granfield
Posted: Fri, August 09, 2013 - 7:13:29

Having come out of traditional design training and then migrated into a field that is more conceptually than visually design oriented, for more than two decades I have experienced a bit of an identity crisis. I often wonder how to explain my profession and if it is possible to have a style within this profession. In trying to explain what it is I do for a living, some typical responses to my description of my job are: "Oh you're a graphic designer, so you determine what the software looks like?" "Ergonomics, what is that?" "Information design, is that like statistics?" "Industrial design, you do engineering?" "Features, are you a PM?" "Inventor, what?" A UX designer is a bit of all of these. With a combination of skill sets and areas of focus, can a UX designer have a style? Is there even such a thing as a UX style? Is UX design tangible enough to assign a style to it? In trying to tangibly define a UX style what might it be comparable to, interior design, industrial design, or architecture?

Maybe UX style is akin to architecture. Often we are confined to designing within an operating system and to exploring our creativity and style within an already established environment. Architects are challenged by issues such as zoning, safety rules, materials, and creating a look and feel for a specific time period. Yet, architects are still able to explore their creativity and develop a style that spans from functional to visual.

We could explore the idea that UX design style is similar to industrial design. Jonathan Ive, the master of pushing the envelope on the use of materials to create visually beautiful and functionally innovative hardware, has established his style as clean, innovative, creative, elegant, simple, and sophisticated.

Or maybe we follow the patterns of the interior decorator and the interior designer. An interior decorator is not licensed or certified in materials handling and is focused on making a space look and feel inviting via the aesthetics. An interior designer is licensed and certified in materials handling, and is trained to create functional environments via construction practices and building codes, as well as aesthetics. Interior designers have dependencies on architects, developers, and visual designers, much like a UX designer. They also have a style that they typically work within. This style is usually related to a time period, much like architecture. UIs are not typically related to a time period, but instead to a brand or an OS.

As an example for exploring UX styles we might ask: How does the style of, say, the OS X Finder compare to that of Windows Explorer? The visual styles are inherited from the OS. Immediately the design focus moves to the functional. How do you navigate through the folders? How easy is it to find your way around, while keeping context? What functions make sense to launch from here and how discoverable are these features? Maybe UX design that is specifically for an OS, enterprise environments included, is more akin to commercial industrial design or commercial architecture, and Web or app design is more free form and therefore more akin to residential industrial design or residential architecture. When designing a commercial building the scope of the navigation is much larger. Have the elevators, restrooms, and exits been made discoverable, while still integrating the brand and the overall intent for the feel of the environment? When designing a small website or app the navigation may be more limited and the focus may be on how to keep the user engaged in the environment to drive results. How do you apply a brand and make the user feel at home? The platform being the house, how creative can I be within this platform and within this budget? Home or office? Same discipline, slightly different focus, based on scope and use.

So how can we quantify our style as a UX designer? Are we akin to any of these disciplines at all, or are we able to define our own style in our own way? Are you a light, open, and airy designer? How intelligent is your experience? How much do you assist the user where needed, while still allowing the user to remain in control? How logical and predictable is your workflow? I am not sure that there are lines that make a direct correlation to any other design discipline. And I can't quite describe a UX style. We are always talking about simplifying a UI. Would this describe our style as minimalist?

My visual and interior design style leans toward modern and minimalist with clean lines and bold accents. This doesn't always translate to my UI designs. My visual UI designs are mostly driven by the OS or the company brand. On occasion there is more license for design exploration. I do try to convey an emotion and style in my UI designs; however, much like an interior designer's work, they are also about the audience and function (form does follow function) and the usability of the end product. So how is it that function and usability have a style? They do; I just need to figure out a way to explain and define it.

I suppose my UX style could translate into clarity, innovation, thoughtfulness, and simplicity of the environment for the user. What is your UX style? Are you able to define it? 


Posted in: on Fri, August 09, 2013 - 7:13:29

Monica Granfield

Monica Granfield is a user experience designer at Imprivata. The views expressed on this website are exclusively her own and are not meant to reflect or represent the views of Imprivata.


Bias


Authors: Jonathan Grudin
Posted: Tue, August 06, 2013 - 10:35:14

The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects; in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate. And therefore it was a good answer that was made by one who when they showed him hanging in a temple a picture of those who had paid their vows as having escaped shipwreck, and would have him say whether he did not now acknowledge the power of the gods, “Aye,” asked he again, “but where are they painted that were drowned, after their vows?” And such is the way of all superstition, whether in astrology, dreams, omens, divine judgments, or the like; wherein men, having a delight in such vanities, mark the events where they are fulfilled, but where they fail, though this happen much oftener, neglect and pass them by. But with far more subtlety does this mischief insinuate itself into philosophy and the sciences; in which the first conclusion colours and brings into conformity with itself all that come after, though far sounder and better... —Francis Bacon, First Book of Aphorisms, 1620

Confirmation bias is built into us. Ask me to guess what a blurry image is, then bring it slowly into focus. When it has become clear enough to be recognizable by someone seeing it this way for the first time, I will still not recognize it. My initial hypothesis blinds me.

Confirmation bias and its underlying mechanisms helped us survive. A rough pattern of colors that correlated with past sightings of saber-tooth tigers was a good reason to run. Sticking around to obtain statistically reliable proof did not aid survival. Eating something and becoming ill was a good enough reason to avoid it despite the occasional false conclusions, such as "tomatoes are poisonous." And belief in omens and divine judgments probably helped people endure lives that Bacon’s contemporary Thomas Hobbes described as “nasty, brutish and short.”

To get through life efficiently, we infer causality from correlational data without working out all possible underlying factors. “This intersection was slow twice, I should avoid it.” “Wherever wolves are thick so are wildflowers, so wolves must like flowers.” “Two Freedonians let me down, Freedonians are unreliable.” Confirmation bias underlies stereotyping: Having decided they are unreliable, a reliable Freedonian is an exception, another unreliable Freedonian is a confirmation.

Bacon realized that deep understanding requires a higher bar. He is credited with inventing the scientific method to attack confirmation bias. Unfortunately, experimental methods help but do not overcome the power of confirmation bias, which remains the primary impediment to advancing scientific understanding. It affects all our research: experimental, systems work, design, quantitative analysis, and qualitative approaches.

Confirmation bias arising with experimental methods

Science is not well served by random experimentation. Clear hypotheses can help. For example, Bacon hypothesized that freezing meat could preserve it. But hypotheses have unintended consequences. He contracted pneumonia while doing the experiment and died. The less severe but more common problem is that hypotheses invite a bias to confirm and thereby miss the true account: the initially blurry image that isn’t recognized after we hazard a wrong guess as to its identity.

Approach hypotheses cautiously. In overt and subtle ways, researchers shore them up and ignore disconfirming evidence. We rationalize excluding inconvenient “outlier” data or we collect data until a statistically reliable effect is found. We don’t write up experiments that fail to find an effect, perhaps for good reason: It is all but impossible to publish a negative result. An outcome that by statistical fluke appears to confirm a hypothesis is published, whereas robust findings disconfirming it, though this happen much oftener, we neglect and pass by. It’s a severe problem. Simmons et al.’s Psychological Science paper “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant” demonstrates that by selective disclosure of methods, it is easy without fraud “to publish ‘statistically significant’ evidence consistent with any hypothesis.” Bakker and Wicherts found many statistical errors in a large sample of journal articles, with almost all errors favoring the experimenters’ hypotheses.
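
To see how this flexibility inflates false positives, here is a small simulation sketch (in Python; my own illustration of the general "optional stopping" problem, not a reconstruction of Simmons et al.'s analyses). Both groups are drawn from the same distribution, yet peeking at the p-value and adding participants until it dips below .05 yields "significant" results far more often than the nominal 5 percent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def study_with_optional_stopping(start_n=10, step=5, max_n=100, alpha=0.05):
    """One simulated study with NO true effect: keep testing and adding
    participants until p < alpha or the sample budget is exhausted."""
    a = list(rng.normal(size=start_n))
    b = list(rng.normal(size=start_n))  # both groups come from the same distribution
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < alpha:
            return True               # a "significant" finding despite a true null
        if len(a) >= max_n:
            return False
        a.extend(rng.normal(size=step))
        b.extend(rng.normal(size=step))

runs = 2000
false_positives = sum(study_with_optional_stopping() for _ in range(runs))
print(f"False-positive rate with optional stopping: {false_positives / runs:.1%}")
# Typically well above the nominal 5%, which is the point of the exercise.
```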

Behavioral studies face worse challenges. “Demand characteristics” elbow in: Everyone colludes, consciously or unconsciously, when they know what the researchers believe. In one experiment, lab assistants were told that a set of rats had been bred for intelligence and found they learned mazes faster than normal rats—but all were actually normal rats. It was unclear why. Perhaps the lab assistants handled the “genius rats” more gently.

In HCI studies, friendly human participants often discern our wish and help us out. Double-blind studies where researchers step back and experiments are run by assistants ignorant of the hypotheses or the conditions can sometimes counteract this. But they require more work.

Sometimes there is no need for an hypothesis, as when a researcher with no preference compares two design alternatives. Alternatively, multiple hypotheses can be tested in a study; one or two can be disconfirmed, yielding an aura of scrupulousness. However, I have never seen a study in which all of the hypotheses were disconfirmed. Researchers want to appear smart; key hypotheses tend to be confirmed.

My colleague Steve Poltrock notes that “marginally significant” findings or “trends” are often used to support hypotheses when accepted statistical measures fail to do so. People rarely report trends that counter their hypotheses, citing such differences as “not significant.” It’s human nature.

Design and systems studies

Researchers in our field typically test their own designs and prototype systems. Study participants know that designers hope their designs are liked. They know that system builders hope the systems are liked. Papers invariably report that the designs and systems were judged to be promising. Yet all discussion disappears when subsequent experience proves disappointing. Our literature is full of promising prototypes that disappeared without explanation. This is a disservice to science and engineering. My own studies of promising prototypes are no exception. I once tried to publish a post-mortem of a failure but could not get it accepted, and with only the early positive reports in the literature, for years we received requests for the prototype code. The difficulty of publishing negative results is well known at the National Science Foundation and elsewhere, but I know of no efforts to address it.

Quantitative analysis and ‘predictive analytics’

Quantitative studies are no guard against these problems. In fact, they often exhibit a seductive form of confirmation bias: inference of a causal relationship from correlational data, a major problem in conference and journal submissions I have reviewed over the years. The researchers hypothesize a causal relationship, the correlational data fit, and the researchers take it as proven. Equally or more plausible causal models are ignored, even when plainly evident.

Suppose that heavy Twitter use correlates with being promoted in an organization. It may be tempting to conclude that everyone wanting promotion should use more social media. But causality could run the other way: Maybe already-successful employees use Twitter more, and incessant tweeting is a path to demotion for average employees. Or perhaps gregarious people tweet more and get promoted more. There is no causal explanation in the correlation. This is not fanciful; the literature is packed with such examples. Smart people seeing evidence consistent with their case do not look for alternative causal models.
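
As a toy illustration of how such a correlation can arise with no causal link at all, consider this short simulation (Python; the variables and coefficients are hypothetical, not drawn from any real dataset). Gregariousness drives both tweeting and promotion, tweeting itself does nothing, and yet tweet volume and promotion end up clearly correlated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical confounder: gregariousness drives both behaviors.
gregariousness = rng.normal(size=n)

# Tweet volume depends on gregariousness (plus noise)...
tweets = 20 + 5 * gregariousness + rng.normal(scale=3, size=n)

# ...and promotion depends on gregariousness (plus noise), NOT on tweets.
promoted = (0.8 * gregariousness + rng.normal(size=n)) > 1.0

r = np.corrcoef(tweets, promoted.astype(float))[0, 1]
print(f"Correlation between tweet volume and promotion: {r:.2f}")
# A clear positive correlation appears even though, by construction,
# tweeting has no causal effect on promotion in this model.
```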

The word predictive, often used to indicate a positive correlation, causes further problems. Predict has a strong causal connotation for the average reader, and researchers themselves often slide down the slope from “A is predictive of B” to “A causes B.”

As a father, one of my goals is to raise my children to distinguish correlation and causation. Unjustified causal assumptions about correlated events are so common that anyone who avoids them will find ways to be useful. Often we make the correct causal inference, and when dodging a possible saber-tooth a false alarm may be a modest price to pay for survival. However, problematic errors arise frequently in science, engineering, and everyday life.

Qualitative analysis

After describing an ingenious quantitative analysis to explain patterns in “big data,” a conference presenter expressed frustration with a pattern that defied analysis. Someone suggested that the researcher simply contact some of those whose behavior contributed to the pattern to ask what was going on. His reply, in so many words, was “That would be cheating!” Clever quantitative analysis was his goal.

Especially today, with quantitative data so readily available, qualitative field studies are often dismissed as anecdotal and especially prone to confirmation bias. And in fairness, many studies claiming to be ethnographic are weak, and good work faces challenges, as described below. So let me explain why I believe that qualitative research is often the best way to go.

I hold degrees in math, physics, and cognitive psychology. I appreciate all efforts to understand behavior. I drew distinctions between scientific approaches and those of history, biography, journalism, fiction—and anthropology. I felt that science involved formal experiments, controlled usability studies, and quantitative analysis. Then, in the mid-1980s, I read a short paper by Lucy Suchman, “Office Procedures as Practical Action,” that described the purchasing process at an unnamed company. A purchase order form was filled out in triplicate, whereupon the Purchasing department sent copies to Receiving and Finance. When orders arrived, Receiving sent an acknowledgment to Finance. When an invoice arrived, Finance found the order and receipt and cut the vendor a check. Very methodical! Suchman then said that the process is not routine and routinely requires solving problems and handling exceptions that arise. She included the transcript of a discussion between two people struggling with a difficult order, showing lots of inference and problem-solving. End of article.

I was shocked. How could a scientific journal publish this? A rational organizational process that I was sure usually worked smoothly, and she presented this one pathological case. I marched to the office of someone in Purchasing in my organization and asked her to explain our process. She said, “Someone fills out an order in triplicate, we send one copy to Receiving, the other to Finance. When the goods arrive…” And so on.

“Right,” I said. She looked at me. “That’s how it works,” I said.

She paused, then said, “Well, that’s how it’s supposed to work.” I looked at her. “But it never does. Something always goes wrong.”

I held out a copy of Lucy’s paper and asked, “Would you read this and let me know what you think?” The next day she told me, “She’s right. If anything, it’s worse than she said. Some exceptions happen so often we call them the standard exceptions, and then there are exceptions to the standard exception.” 

“Thanks,” I said. I got it. Anthropologists are trained to avoid cherry-picking. You can’t spend two years describing a two-year site visit or two weeks describing a two-week study, so you rely on representative examples. Their methods can include copious coding and analysis of observations and transcripts. Some anthropologists are better than others. The approach might seem less foolproof than controlled experiments, but there is a method, a science. I started doing qualitative work myself.

The BBC drama Elizabeth I portrays a queen contending with chaotic scheming. A minor character, Francis, offers occasional thoughtful guidance. At one point someone refers to him as… Bacon! The apostle of scientific method in a setting devoid of evidence-based decision-making!? Actually, one of Bacon’s great contributions employed qualitative field research. Oxfordshire yeomen rebelled in the late 16th century. The customary response to an insurrection was suppression by force, but Bacon investigated and found them starving, forced off traditional farmland by aristocrats enclosing the land to create private hunting grounds. Powerful figures in the House of Lords insisted that this was a right of landowners, but Bacon pushed through and defended measures that preserved traditional access to land. (I highly recommend Nieves Mathews’ fascinating account of Bacon.)

In my research, when an hypothesis emerges, an explanation for patterns in the data, a constant priority is to find alternative explanations and disconfirming data. As for presentation, Rob Kling noted that careful writing is more important in qualitative research because one word can make a huge difference: “X can lead to Y” or “X often leads to Y” is not the same as “X leads to Y.” Experimental and quantitative methods produce data that are reported in the paper; readers can consult the data. Discussion can be looser. A good qualitative report requires careful, honest selection of data and artistry in presentation to paint a picture for the readers.

The challenge is amplified when the researcher is expected to adopt a theoretical framework, to “build theory.” This invites selective filtering of observations. Another risk is “typing,” when a researcher becomes known for a particular observation or perspective, increasing the desire to confirm it. Some good qualitative researchers become predictable in what they report in each study. Sometimes other aspects of the situation seem central to understanding yet are not stressed; other times such filtering isn’t noticeable but may still be at work.

Can anyone undertake a study free of hypotheses? At an uninteresting level the answer is no—we all believe things about the world and people. But a better answer is often yes, we can minimize expectations. An ethnographer could study a remote culture assuming just that it is of interest to do so, or assuming that there is a complex kinship system that should be winkled out. The latter risks discovering something that is not there or missing something of greater interest. Similarly, we can examine the use of a new technology believing that it is likely to be interesting, or we can come in with preconceptions about how it will be used. The stronger the preconception, the greater the risk.

One way to approach this is grounded theory, a set of approaches that advocates minimal initial hypothesizing and the collection and organization of data in search of patterns that might form a foundation for theory. When a possible pattern is detected, the researcher seeks observations that do not fit, a step toward a richer understanding. Grounded theory has its detractors. It may not appeal to people for whom theory is where the fun resides. But it is the best fortress I know from which to defend against confirmation bias.

Conclusion

I’m not immune to confirmation bias, although I’m generally not so confident in any hypothesis as to resist seeing it disconfirmed. For example, I think of HCI as pre-theoretical, but rather than confirming that bias by ignoring or attacking all theories, I consider them and sometimes find useful elements. Years ago, I was dismayed to find data that didn’t fit a cherished pattern, but eventually came to love disconfirming data, which is a necessary step toward a more complete understanding.

Am I biased about the importance of confirmation bias? I’m convinced that we must relentlessly seek it out in our own work and that of our colleagues, knowing that we won’t always succeed. Perhaps now I see it everywhere and overlook more significant obstacles. So decide how important it is, and be vigilant.

This post had unusual help for a non-refereed paper: Franca Agnoli, Steve Poltrock, John King, Phil Barnard, Gayna Williams, and Clayton Lewis identified relevant literature, missing points, and passages needing clarification.


Posted in: on Tue, August 06, 2013 - 10:35:14

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Banjos and discreet technologies


Authors: Steve Benford
Posted: Mon, August 05, 2013 - 2:39:05

I begin this post with a confession. I play the banjo. There, it’s out in the open and I feel better already. Actually, I play the tenor banjo in Irish style, although this is a distinction that probably only banjo players care about (but boy will they care). You’ll often find me on a Sunday afternoon in the Vat and Fiddle, Bell, or Hop Pole playing along at one of Nottingham’s traditional Irish sessions.

You would be forgiven for wondering what this has to do with human-computer interaction, but it turns out that even very traditional practices can shed new light on our interactions with computers. My colleagues Peter Tolmie and Yousif Ahmed and I undertook an ethnographic study of Irish music sessions, which, fortunately for me, involved spending considerable time hanging around in traditional sessions and observing what goes on, as well as interviewing musicians.

Irish music sessions

At first glance an Irish music session is a traditional practice that seems far removed from the world of computers. A group of musicians sits around in a pub, playing traditional tunes on a variety of instruments—fiddles, flutes, whistles, mandolins, the bodhran (the traditional Irish drum), and guitars to name a few. Even banjos are tolerated. Well, subject to recital of the canonical list of banjo jokes anyway. 

The structure of the music allows the musicians to improvise as a group. They typically play sets of several tunes that are strung together, with each tune being repeated several times. A typical set might consist of three different tunes, each repeated three times, with each tune consisting of two parts that are themselves repeated. If this seems a little complicated, the important thing to remember is that several tunes are sequenced together and that there is a fair bit of repetition. This structure enables improvisation, both through choosing which tune to segue into next and through embellishing a tune each time it is repeated. Repetition also supports playing by ear, as musicians can try to recall a tune they haven’t played for a while or even pick up a new tune from scratch if they are especially skilled.
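
For readers who like to see structure spelled out, here is a minimal sketch in Python (my own illustration, with arbitrary tune names rather than anything prescriptive) of the set structure just described: several tunes strung together, each played a few times through, each built from repeated parts.

```python
from dataclasses import dataclass, field

@dataclass
class Tune:
    name: str
    parts: list = field(default_factory=lambda: ["A", "B"])  # two parts is typical
    part_repeats: int = 2    # each part played twice -> AABB
    times_through: int = 3   # the whole tune repeated three times

@dataclass
class TuneSet:
    tunes: list

    def playing_order(self):
        """Flatten the set into the sequence of parts actually played."""
        order = []
        for tune in self.tunes:
            for _ in range(tune.times_through):
                for part in tune.parts:
                    order.extend([f"{tune.name}:{part}"] * tune.part_repeats)
        return order

# Illustrative three-tune set (names chosen arbitrarily).
the_set = TuneSet([Tune("The Kesh"), Tune("Morrison's"), Tune("Banish Misfortune")])
print(len(the_set.playing_order()))  # 3 tunes x 3 times through x 2 parts x 2 repeats = 36
```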

How Irish musicians have taken to the Internet

Our study showed how Irish musicians have taken to the Internet, establishing a dedicated Irish social media site called The Session as far back as 2002. Since then, a worldwide community of musicians has been transcribing the Irish repertoire, building a database of thousands of tunes along with sheet music, notes on recordings, names (which are widely contested), variations, playing tips, and suggestions for good sets. The page for the tune Banish Misfortune, for example, gives alternative names, comments, and the music written in ABC notation, which has been specially developed for traditional music (the conventional “dots” are also available). The Session also gives the locations and timings of sessions in many cities around the world in case an Irish musician is visiting a distant town and fancies a play. The Session is incredibly useful because it enables musicians to learn new tunes. It has since been supplemented by a variety of other services, from YouTube to the BBC’s Virtual Session to specialist learning sites such as jigsandreels.

Session etiquette

There is a very interesting tension at play here—one that speaks directly to the design of new technologies. On the one hand, Irish musicians appear to be enthusiastically adopting digital media to share and learn a common material, while on the other the actual performance of this material in a live session is governed by a strong etiquette that emphasizes the importance of playing by ear. While there are of course huge local variations in etiquette, many musicians we spoke to were very aware of the idea of this tradition of playing by ear. There is a general nervousness about getting out sheet music in a session, especially for beginners, an idea that is reinforced by various published guides to session etiquette.

Our studies revealed the subtle ways in which musicians manage this tension, walking the line between extensive preparation and rehearsal away from a session, and the spontaneity of playing by ear within it. Many, for example, carry notebooks of tunes, pre-arranged into convenient sets. One notable strategy that we saw—and that I have seen several times since in sessions as far afield as Nottingham and Seattle—is to prepare a small piece of paper, perhaps in a pocketbook, that has the names of tunes grouped into sets alongside just the first few bars of music of each. These bespoke notations convey the essential information needed in a minimal form; an Irish musician needs to know which tune is coming next and just the first few bars of that tune so that they can segue into it, after which they are up and running from memory.  

Discreet technologies

Such pieces of paper are an example of a discreet technology. They condense the essential information required to support the practice of playing by ear into a form that fits with session etiquette. A page of a small notebook, which after all could be used for writing down the names of new tunes, contact details, and so forth, is a long way removed from a piece of sheet music on a music stand.

This idea of discreet technologies—ones that provide useful services in a way that respects the social etiquette of a given situation—extends beyond pieces of paper to the digital world. We are all aware of the challenges of managing mobile phone use in various social settings. Indeed, these same technologies are present in Irish sessions that take place in pubs, busy social settings where people—including the musicians—have come together to chat, may even drink a pint or two, and enjoy “the craic.” In this context it may be quite acceptable to get out a phone, check a text, or look up something on the Internet. It can even be acceptable to break off playing in the middle of a set to check one’s phone or receive a call—quite a contrast to a formal classical music concert. We also observed people using digital recorders to capture tunes as they are played.

It’s all in the ambiguity

So this notion of discreet technologies is a subtle one. Ironically, using a modern digital technology such as a mobile phone or digital recorder to support a traditional practice might possibly be more discreet than using a traditional technology such as a piece of paper. Both phones and pieces of paper can be used for many purposes, from taking notes to reading sheet music, and it is perhaps this apparent use that really matters. It all depends on how you are seen to use a given technology rather than on the form of the technology itself.

Perhaps the underlying issue is whether the apparent use of a technology is sufficiently ambiguous that it can plausibly fit some socially acceptable use, or whether, in contrast, it overtly flouts the local etiquette to the point where it can no longer be ignored. This idea directly builds on Bill Gaver’s discussions of the role of ambiguity in interface design and especially on Paul Aoki’s and Alison Woodruff’s subsequent application of this to the challenge of saving face in mobile phone calls.

The challenge for interface designers is to invent displays and services that can be used discreetly with respect to a particular social setting; that don’t overtly flout social etiquette and that are open to ambiguous interpretation. Our study showed how traditional Irish musicians had been particularly creative in this respect, inventing their own discreet notation to help them bridge between their learning of tunes offline and their ability to recall them when playing live. Interface designers might learn a great deal from such inventiveness!

To read and hear more

For those who would like to read more, our ethnographic study of Irish music sessions was first reported in a paper at CSCW 2012, while an extended version has recently been published in the book Ethnomethodology at Play.

Finally, it seems appropriate to end with one of my favorite banjo tunes—the rollicking jig Banish Misfortune. Feel free to play along, or perhaps I’ll see you in a session somewhere soon.



Posted in: on Mon, August 05, 2013 - 2:39:05

Steve Benford

Steve Benford is professor of collaborative computing at the University of Nottingham’s Mixed Reality Laboratory.


CHI 2013 HCI for Peace Ideathon summary


Authors: Juan Pablo Hourcade
Posted: Fri, August 02, 2013 - 1:56:52

For the past three years, members of the human-computer interaction community interested in using computing technologies to promote peace and prevent conflict have been meeting as part of HCI for Peace, a grassroots initiative. Our aim is to highlight and celebrate work already done to this end and to encourage further work with peace as its explicit goal. We hope this call to action starts some community-wide discussions from which positive action can spring: Our world can be no brighter than the worlds we dream of. The HCI community is uniquely positioned in the computing world to effect change in this area, given its focus not only on the user sitting in front of a screen but also on the effect of technology on humanity at a societal and global scale.

In this blog post I summarize the outcomes of the latest HCI for Peace meeting at CHI 2013, when about 20 attendees met as part of a SIG titled “HCI for Peace Ideathon.” It was an opportunity for like-minded researchers and practitioners to exchange ideas and experiences and to think about future opportunities.

The participants split into four groups to discuss different topics that were later shared with the entire group. The first group focused on how interactive technologies may play a role in problem formulation in order to resolve conflicts. Some ideas that were discussed included ensuring that a diverse set of people is involved in conflict resolution (to develop more alternative solutions), encouraging differences of opinion, and arriving at a wide set of solutions early on. Additional discussion focused on the media's tendency to report violence far more often than positive cooperation, providing a skewed (and scarier) view of the world. The group developed an idea for media and storytellers in which stories from each side of a conflict that share the same values could be merged by storytellers and propagated to both sides to show similarities.

The second group discussed citizen journalism. This group included a member from witness.org who discussed their app to support witnesses recording video of sensitive areas. The app gathers metadata that can be used to validate that video was recorded at the time and place claimed by the recorder. While the idea behind the app is to record and broadcast violent acts to deter their occurrence, there was also the suggestion that it could be used to cover positive aspects, reinforcing peace messaging and positive action.

A third group discussed recent applications (e.g., the NNR table) that use shared interactive spaces for conflict resolution. In these spaces the two parties have to work together to make something happen. Working together benefits both parties, while not collaborating leaves both stuck. The NNR table was recently evaluated with Israeli and Palestinian youth with positive results.

The fourth group discussed what HCI researchers and practitioners could do to have an impact on the precursors of conflict. One issue that came up was the need to share the empirical research on these precursors, as it is not well known. This would be important not only for HCI people, but for the public in general. HCI researchers and practitioners could play a role, for example, in helping the public understand the warning signs of upcoming conflict using accessible visualizations. We also discussed the role of social media in escalating or de-escalating conflict. 

Ben Shneiderman made a call at the end of the session to introduce into the HCI curriculum the role interactive technologies can play in preventing conflict and promoting peace.

What would you propose? What are some ways in which interactive technologies could prevent conflict and promote peace? To join the discussion visit hciforpeace.org, join our Facebook community, or follow us on Twitter @hciforpeace.


Posted in: on Fri, August 02, 2013 - 1:56:52

Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.


Trajectories into practice


Authors: Steve Benford
Posted: Tue, July 23, 2013 - 10:00:32

I’m Steve Benford, professor of collaborative computing at the University of Nottingham’s Mixed Reality Laboratory. My research explores new interaction techniques and concepts for creative and cultural experiences. I characterise my approach as performance-led research in the wild, meaning that my team initially helps artists and performers develop and tour new experiences, which we then study in the wild, ultimately leading to new HCI concepts, such as ambiguity, spectator interfaces, uncomfortable interactions, and trajectories. I was elected to the CHI Academy in 2012 and have won CHI best paper awards in 2005, 2009, 2011, and 2012. My book Performing Mixed Reality has been published by MIT Press. You can find out more about my research, including accessing papers and videos, at my personal research blog.

Trajectories into practice

Maybe it’s an age thing, but I’m increasingly bothered by the question of how my research might make a difference—why might a professional working at the coalface of user experience design be bothered about what I am doing? Of course, there are probably other reasons why this is bothering me too—we (researchers) are increasingly asked to justify the impact of our research, both speculatively, when writing proposals such as the “pathways to impact” section of EPSRC proposals, and retrospectively, when our results are weighed in the balance of various assessment exercises such as the fast-approaching UK Research Excellence Framework, which for the first time includes impact case studies.

An important aspect of this for me is putting HCI theory into practice, a notoriously difficult challenge as articulated by Yvonne Rogers in her recent book on HCI Theory, but also emphasised by EPSRC’s recent review of the state of HCI here in the UK. Perhaps the first question to consider is what I mean by HCI theory here. In addition to Yvonne’s book, which tackles this question in considerable depth, I’ve also been struck by Kia Höök’s and Jonas Löwgren’s recent TOCHI paper on “Strong Concepts,” design abstractions that generalise across different domains. Kia and Jonas identify my work on trajectories as an example of a strong concept, a form of HCI theory that could be ripe for putting into practice.

The question of how to put trajectories into practice has therefore emerged as a central concern for my ongoing EPSRC Dream Fellowship. So far, I’ve been focusing on two different domains, museums and television.

For museums, Horizon Doctoral Training Centre student Lesley Fosh has designed and studied an example trajectory through a sculpture garden (see Figure 1) that aims to move pairs of visitors between moments of experiential engagement with sculptures in which they are isolated, listening to music and performing physical gestures such as touching them, and other moments where these visitors come together again to reflect on these experiences as well as on more “official” guide information that they receive afterwards. These ideas are now being carried forward in the European CHESS project, where we are working with the Acropolis Museum in Athens and the Cité de l’Espace space museum in Toulouse, among other partners, including running a series of design workshops over this summer to apply trajectories to the design of new visiting experiences.


Figure 1. Design of the local trajectory through a piece of sculpture in a sculpture garden.

Turning to television, I recently spent four months as a visiting professor at the BBC focusing on the design of multiscreen TV experiences. I began by analyzing some existing companion apps for TV shows, including the Antiques Roadshow play-along game in which viewers estimate the value of antiques during the show, as well as a research prototype called Jigsaw developed by Maxine Glancy and the team at BBC R&D that aims to support intergenerational TV experiences by enabling children to snap images from a TV show and turn them into jigsaw puzzles. Similar to Lesley’s example, I was able to produce some case studies to illustrate the potential of trajectories, albeit by analyzing existing designs. Again, these were followed by a design workshop with participants from editorial, user experience, and research and development, at which we used trajectories to explore new design concepts for extended TV experiences, and two subsequent presentations to the wider BBC user experience design team as part of their Northstar One Service design project.

While it was exciting to be able to engage with professional user experience designers who expressed enthusiasm for trajectories, it proved difficult to establish a deep connection between the concepts and specific example designs in short workshops. We therefore hosted the first “trajectorize” course at Nottingham last week. Three different teams—David Ullman and Dan Ramsden from the BBC; Andres Lucero from Nokia and Joel Fischer from the Mixed Reality Lab; and the artists Ben Gwalchmai and James Wheale—brought along three design concepts that we then inspected through the lens of various trajectory concepts over two days. As well as being thoroughly enjoyable, this was the first time that I began to glimpse how trajectories might actually be put into practice, with designers being able to produce complex trajectory sketches as a way of challenging and refining their ideas in areas such as designing social encounters, key transitions, and take-home experiences. You can get more of a sense of what happened from the official course structure and materials, but also from Dan’s blogpost after the event.

The initial success of this course suggests to me that there is indeed the potential to embed strong concepts such as trajectories into the practice of professional user experience design, but also that this takes considerable work from both sides—in this case at least a two-day commitment of time to be able to make significant progress. However, I suspect that there is far more to it than this. 

First, we need to understand where these concepts sit in the UX design process (our teams were using trajectories to refine existing concepts rather than for ideation). 

Second, it feels important to generate some initial case studies based on familiar examples (as we did in both sectors) to generate initial interest in the concepts. 

Third, as our course participants observed, it feels like the concepts need an appropriate level of generality, being structured enough that you can repeatedly attack a design from different perspectives, and yet not so prescriptive that they close down creative thinking. 

Finally, there is the importance of sketching. We have repeatedly shown trajectories as diagrams and encouraged workshop participants to create their own. Creating and labeling trajectory diagrams feels like an important element of the approach, but also brings its own challenges, not least that you need a very large sheet of paper to be able to move between the overview and the fine detail of annotations. As a result we have begun to experiment with zoomable drawing tools, initially developing a series of trajectory sketches using Prezi; more recently my colleagues Chris Greenhalgh and Tony Glover have begun to develop, and we now use, our own zooming trajectory sketch tool that adds greater structure, sequencing, and metadata to an evolving sketch.

Putting trajectories into practice is very much a work in progress, and one that I hope to continue over the coming years. My current sense is that it should be possible to put strong concepts such as trajectories to work, but only if we can find the right approach and supporting tools.



Posted in: on Tue, July 23, 2013 - 10:00:32

Steve Benford

Steve Benford is professor of collaborative computing at the University of Nottingham’s Mixed Reality Laboratory.


The human in HCI: What you can learn from the Bard (and others)


Authors: Uday Gajendar
Posted: Tue, July 16, 2013 - 3:32:37

How does one account for the human within human-computer interaction? One approach historically embodied by the HCI field is firmly reductionist, a distillation of functional entities in which a human comprises "information processing systems" and "decision-making agents." It has a quantitative outlook with scientific rigor and statistical significance of data to ensure accurate validations of hypotheses. This grounds everyone in rational discourse and technical conclusions. And it's absolutely important and useful, just not entirely sufficient, IMHO. 

If we are to improve the human condition via well-designed technologies (computers, devices, systems), we must somehow grok the, well, human condition! This requires an empathetic, holistic outlook on the whole of humans (i.e., people) in all their glory of promise and gory flaws. This is why I always recommend reading Shakespeare and philosophy.

Wait, what? Indeed, I've found you can learn more about people and their messy challenges from literary texts representing culture and humanities than from typical HCI textbooks. While that info is great as a reference desk (just like pharmacology classifications are useful on a doctor's desk), you gotta get deep into the messiness that makes up a person's life: emotions, dreams, motives, beliefs, flaws, hopes, fears, ideals. A doctor takes time to get to know their patients’ personal and family histories, as well as their habits and stories of life. It's not just charming bedside manner; it's about developing a holistic view toward making better diagnoses that are supportive of patients’ well-being, so they can (as the Kaiser ad says) thrive.

So what of these literary authors and what can HCI professionals learn from them? 

Literature: Shakespeare. You can't beat the Bard himself, right? The maestro of Elizabethan theater captured and chronicled the messy affairs of the day with wit and eloquence in his staged plays, both tragedies and comedies. Each delved deep into critical human emotions, exploring and exposing people as flawed and hapless, yet striving for somewhat misguided noble aims. Hamlet exposed woeful anxiety ("To be, or not to be…"). Macbeth, ruthless ambition. King Lear, frail sense of half-witted ego and treacherous legacy. Meanwhile A Midsummer Night's Dream cleverly dwells on fantasy, hope, childlike dreams that persist into adulthood (much like Peter Pan or Alice in Wonderland centuries later). If you want to dig into what makes us all tick, read Shakespeare or watch his plays performed in the park (for free). 

Philosophy: Nietzsche. Actually Plato and Socrates are better places to start, but philosophy in general is the quest to ask "Why?" to discover the underpinnings for human thought and values. Far from fanciful daydreaming (that's just daydreaming, really), philosophy offers a tough, persistent, skeptical analysis of purposes and values, and how they influence our daily lives. Plato and Socrates represent the Classical ideals of understanding reality through dialogue and storytelling, by direct observation of people in context. Nietzsche applied a grittier lens that involved intense examination of how to become a stronger, life-fulfilling person, in full existential vigor, willing yourself to power and achievement. Sartre and Camus continued this theme with writings on our need to act to fully give meaning to our lives, to be fully human, engaged in daily life.

Fine Art: Picasso. Or Monet, Van Gogh, Matisse, and countless other artists deemed somewhat "mad" for their times. Each one interpreted reality in different ways, conveying their visions with special techniques of painting (the medium truly is the message) and illuminating various "truths" about the nature of everyday life, with atmosphere and conviction. Each of their works was a reflection of the zeitgeist of the era (discovery of x-rays and quantum mechanics, new forms of light and photography, theories about cultural layers to reality) and was a response of emotional value: mood, tone, voice. They were trying to capture the emotional tones of an era, the broader spirit of the people, which we ourselves may not be attuned to. As the famous saying goes, art tells beautiful lies to reveal a deeper truth. It's all subjective, but also a deeply emotional expression of human conditions.

What's the result after spending time indulging in such topics? A greater cultural appreciation for the human aspect that HCI professionals are working to support. If you want to improve the human condition, you have to strive to understand it at a human level of abstraction and messiness. This appreciation yields a deeper sense for the motives for how and why people are the way they are. Sure you can (and should) perform scientific experiments validating finite measurements for benchmarking, etc. But as Steve Jobs said, it's at the intersection of liberal arts and technology where we create something that makes our hearts sing. 

Tapping into the poetry and emotion of what makes us all human is an essential part of that process of making HCI really H - C - I, from human-computer interaction toward human-condition improvement.



Posted in: on Tue, July 16, 2013 - 3:32:37

Uday Gajendar

Uday Gajendar is Director of User Experience at CloudPhysics, focused on bringing beauty and soul to Big Data for virtualized datacenters.


The UX ownership war is over … and we have lost!


Authors: Daniel Rosenberg
Posted: Tue, July 16, 2013 - 9:53:45

In previous blogs and many Interactions articles and columns over the years I have articulated my concerns over the UX profession’s general inability to penetrate to the core of business leadership. Richard Anderson added his own theory to this legacy in his blog “What Holds UX Back?”

I had a profound experience last week, which unfortunately pushed me over to the dark side regarding my perpetually optimistic perspective on how UX design professionals will eventually take a place of equal rank in the boardroom.

Let me frame the situation…

I was on an east coast business trip last week working with one of my clients, a startup named WellDoc that has created an FDA class-4-certified mobile app for managing type 2 diabetes. Look it up. Doctors prescribe it and your insurance company reimburses the monthly subscription cost! It will save billions. Following the advice this app’s expert system provides as it tracks your lifestyle data has been proven to lower blood glucose levels more than some of the most popular diabetes medications. This is an amazing, cutting-edge business model. It took a bunch of brilliant physicians, clinicians, and business people a decade to make this fly. The actual software engineering and UX designs were among the least complicated parts of bringing this product to market. (Full disclosure: My wife and I, and some other family members, are investors in this company.)

Now to the event that triggered the title of this blog…

During the course of the day a 30-something product manager whom I have been working with on a different medical app for about three months casually mentioned that he is starting an MBA program in “human-centered design” at the Johns Hopkins University’s Carey School of Business. Formerly this program was known as the MBA in design leadership. It is run in collaboration with the Maryland Institute College of Art. However, Johns Hopkins is the degree-granting institution.

Nathan Shedroff at the California College of the Arts (CCA) established a similar program several years ago called the MBA in design strategy. When I first heard about this idea I did not panic because Nathan (as most readers will know) is a world-class designer and design thought leader and CCA is not a business school. 

Unfortunately, from my perspective, as big-name business schools jump on board the “design leadership MBA” trend, the future ownership of the UX agenda will become the province of people who are not trained as designers or HCI specialists and who have never actually practiced design. At least they will employ designers.

In the end you might say that this trend simply reflects the maturation of design as a core competitive business value proposition from which we will all as consumers benefit in the end. But is this the path to ubiquitously great product design?

Who do you think the typical CEO is going to listen to, the guy from Harvard with the MBA in design leadership already seated at the table or the creative genius in the hallway with purple hair and body piercings sporting an MFA from the Royal College?

Game, set, and match over!

Daniel Rosenberg is Chief Design Officer at rCDOUX LLC.


Posted in: on Tue, July 16, 2013 - 9:53:45

Daniel Rosenberg

Daniel Rosenberg is Chief Design Officer at rCDOUX LLC.




@David (2013 07 17)

I share your concerns, Daniel.

Last week I was interviewed for entrepreneurship.org about interaction design. The article was fantastic actually… but the kick off question was the same as always: how can biz people ‘do’ interaction design?

The fact is that IxD is a profession like any other, was my answer. That said there’s plenty of things biz people can do to stand in for IxDs while they’re hiring one.

Despite the truth of what you say and my own experiences,  I’m not so sure the episode you recount indicates the final battle in the war. On the contrary I would say it opens another valuable front in the struggle to assert the importance, methods, and results of interaction design.

While you’re undoubtedly correct that it (a) will suck hard to sit across the table from an MBA who pretends he knows the difference between a radio button and a check box and collects a big fat check at the end of the year for his troubles… it is also true that (b) designers living near Johns Hopkins, et al, should march over to that school and get on the faculty and/or get in the program and make a difference.

This is an infinite game, not a finite game. Keep on truckin’!

~David Fore

The road is long. These are still early days.

@Ellen B (2013 07 18)

Nice article & a good topic to raise.

My take: yes but no but yes but no. smile  IMHO the UX agenda hasn’t really been owned by trained designers in the first place. I think it’s a win and a noble experiment to try to train design-focused MBAs.

First, there aren’t enough of us in the pipelines to have reached significant across-the-board leadership positions specifically in the tech industry. Most of those programs have been around for 10 or 15 years (I’m from CMU HCI’s 2nd graduating undergrad class).

Second, companies are so starved for raw production design skills that designers are seen as desperately-needed producers of interaction and visuals, but aren’t “needed” to contribute strategic vision: there are plenty of MBAs for that. You’ve usually got so damn much work to do that sitting in the meetings and negotiating for this or that feature is not how you’re allowed to use your time.

Third, designers seem less likely to have the skillsets that most companies see as necessary for strategic product leadership: this includes marketing, market research, raw business analysis, plus the “soft skills” of political leadership, negotiation, etc. Because designers’ speciality skills are in such demand they aren’t particularly given the space to develop these other skills the way, say, an APM is required to develop them.

Great design simply doesn’t happen solely because of the design talent. It also takes leadership that prioritizes design at the highest levels — the CEO on down. Leadership has to say “we’re devoting these three engineers to polishing the UI.” “We’re spending the money on usability tests / ethnography.” “We’re going to hire some fucking brilliant designers and stay the hell out of their way.” “We’re going to spend money on the nice packaging.” “We’re going to have a great customer service department.” “We’re not going to ship the product until this is AWESOME.”

Someone who is a designer isn’t necessarily the person who has to be doing this. Someone who is an MBA but values & understands design — can be.

I think if you want to *strategically* impact user experience you shouldn’t be a designer in today’s tech climate & I don’t think there are enough of us to change things yet. You should be a product manager who has a strong background in user-centered design and UX. There’s a glass ceiling for designers; if you want to really be senior in the software industry you need to be a PM, not a designer.

(BTW this is my entire career trajectory / issue of profound meditation — starting as a UX designer, how do I have the strategic impact I want? Across various design leadership positions I’m now cofounder at a startup; head of Product & UX with CEO & CTO who value good user experience.)

@Dave Malouf (2013 07 19)

Daniel,
My experience is in total opposition to yours. I just interviewed for an executive director position at Honeywell’s Chemicals group. And they have hired UX VPs at a few of their other divisions. This is the trend. GE is doing this as well. These are roles with direct contact with the CEO that can clearly compete for senior leadership roles guiding the strategy of the organizations. Couple this with the trend among tech startups to include a design co-founder and I think your anecdote is just that: a single data point.

But the other premise in your piece is weird to me. Of course the MBAs own strategy. They have and will always own strategy. It is only recently that UX has even been considered a strategic initiative, and in most organizations it hasn’t even risen to that level.

But further, I’m very confused with the presumption here.
a) Why can’t designers get this or any other MBA or similar Design Management degree (Pratt, IIT/ID, SCAD to name a few)?
b) That the program is devoid of design teaching? And further that business people can’t learn design? It is a normal path for Technology folks to get an MBA (not a design MBA, or a technology MBA, just an MBA) as part of their career path if they want to go into product management and rise through the business/strategy ranks. Why not do this through design?

Having taught alongside the design management program at SCAD, I have seen great business and engineering folks become more than competent designers. They aren’t the best designers, but given the positions they are going into, to @Dave’s point, they don’t have to be the best designers; they have to be the best strategists, analysts, managers, and leaders. But through real studio work, info vis skills, and yes, HCI and human factors classes, they become competent, well-rounded designers. Given that the JH program is tied to a design school, I can only assume the same basic curriculum is in there.

What is funny is that Robert Fabricant (given you just mentioned Harvard) just published in HBR this piece this same week (http://bit.ly/197uAhA) which is in total opposition to your perspective.

—Dave

@Vladymir Rogov (2013 07 22)

What if the new CEO is the guy with the purple hair and body piercings? If you think that all CEOs are suited mannequins, then you are not getting around much.

@Dan Rosenberg (2013 07 24)

Thanks for all the good discussion and comments.  I will respond shortly in my next blog.  I have gotten significant feedback from other channels as well that I would like to include. 

I am motivated, however, to respond to the last comment, which I assume was offered in jest. I know a few CEOs here in Silicon Valley with purple (or orange) hair and many more with body piercings. Some of them have MBAs, some don’t. The point is they are the CEO. They have the ultimate skin in the game. My favorite is Quixey, where the CEO is a childhood friend of my son (and this is his second company). It is a well-funded startup with some big-name investors like Peter Thiel. The important point is that while running very lean he prioritized not only having UX designers but also having a user research function in house over many other things. These are the next-gen leaders, and while not designers they have strong opinions on design. As the CEO they also hold the gold and are the company’s UX leader through words, actions, and investment choices.

@Ashley Karr (2013 08 14)

If the CEO is worth anything, he’ll go with the designer w/ purple hair.

Ever hear of the story “The Little Red Hen”? If you haven’t, or have forgotten it, read it! Talkers are a dime a dozen. Have faith, Daniel!!!

One great exercise to turn the tables and put the engineers and designers at the helm:

During design reviews / pitches, don’t let the TALKERS TALK! If they have suggestions, give them a piece of paper and pen and tell them to DRAW WHAT THEY THINK THE DESIGN SHOULD LOOK LIKE! Better yet, make them come up to the front of the room and draw it on the board.

I have lots of sneaky suggestions like this. Email me, and I will give you more!!!

Cheers,

Ashley

@ 5267471 (2013 09 03)

Interesting perspective, but I think you are missing something here: the sacred design you reference is done in service to the business, not for purely artistic reasons. Those who can figure out how to monetize it will surely win over the long haul. Therefore someone who takes stock in being a designer first and a business person second (the MBA you detest) will need to take comfort in having the design components lauded but perhaps the actual implementation aborted. A good CEO will spot the application to the business, to the strategy, to the brand, etc., but that doesn’t mean the pure UX designer is absolved of making those connections apparent.

@liam (2013 09 22)

Two words Dan:  Jony Ive.

@Prod Mgr (2014 03 19)

I am actually the product manager that Dan is referring to in the article (one of my classmates just pointed out the article to me—damn RSS feeds (-:).
Dan has been a great sounding board for a number of projects we have collaborated on.

I’d just like to clarify that the dual-degree program is about using design thinking in different situations to help us solve problems in divergent ways. All of our “design” classes are taught at the Maryland Institute College of Art (MICA), an amazing art school here in Maryland.

To give you an example of a few of the projects we have worked on: (1) revitalize the library and post office system by providing an enhanced user experience; (2) recreate a kitchen item in a completely different way, while experimenting with 3D printing, laser-cutting, electronics, smart fabrics, etc.

Our professors (and mentors) have been Creative Directors, UX leaders, DJs, Innovation Officers @ Fortune 1000 cos., and NPR producers.

I, personally, have been incredibly happy with the program and hope to bring empathy and a little bit more openness and creativity to the business world.


Number 9: Names, Facebook, and identity


Authors: Deborah Tatar
Posted: Thu, July 11, 2013 - 9:42:20

Facebook recently informed me that my name is deborah.tatar.9. 

Oh.

Really?

I grew up in a sub-culture of America that believed in sending kids to sleep-away camps to get them out of the squalor of the city (“Hot town summer in the city/Back of my neck gettin’ dirty and gritty”). Consequently, as a “tween” and a teen I did a number of 8-week stints in bucolic settings in upstate New York and New England. (Allan Sherman memorialized the drama of these experiences in the novelty song “Hello Muddah Hello Faddah,” which won a Grammy in 1964 and which my grandparents played with glee on their stereo record player up until they passed away well into your lifetime, if you are reading this in ACM interactions in 2013.)

I spent one summer rooming in a cabin with 16 girls. Four of us were named “Debbie.” I was on the top bunk and another Debbie, from Queens, slept below me. This was not an entirely comfortable situation. By the end of the summer, the speed of my response when “Debbie” was called was somewhat diminished.

But it never occurred to me—or any of us—to change our names. The most we did was replace the dot on the “i” with a daisy when writing. In fact, we all replaced the dot on the “i” with daisies, as did pretty much all other girls our age who had dots in their name. The question was not whether we did this, but rather how long it persisted. (Recently I was reviewing applications for undergraduate summer research that were all handwritten and, incredibly, one of the dots on an application from a woman was in the shape of a daisy. What was the applicant thinking?) 

In my case, I did not even change my name when I got married, though I did spend a few weeks imagining what it would be like to have a name (Harrison) that people did not find humorous and spelled correctly the first time. 

The only time I have changed my name was the big shift from Debbie to Deborah. That represented a deliberate effort to change my identity in my mid-20s, and I accomplished it when I moved from Massachusetts to California to work at Xerox PARC. My mother had always said that she liked Deborah as a name because it could be small when I was small and big when I was big. So the shift represented my willful attempt to see myself as a bona fide adult. There are still a few people in my professional world who knew me when I worked at MIT and at DEC and still call me Debbie. I don’t correct them because being adult isn’t actually an issue for me anymore and because the appellation now signifies the rights and privileges that appertain to long acquaintance.

The Chinese dissident artist Ai Weiwei wrote:

"A name is the first and final marker of individual rights, one fixed part of the ever-changing human world. A name is the most basic characteristic of our human rights: no matter how poor or how rich, all living people have a name, and it is endowed with good wishes, the expectant blessings of kindness and virtue."

Debbie was my name and I wasn’t going to change it just because. Facebook’s action reminds me what a meager, bare, poor thing a name is in our world of computing, and how unshared it is, how stripped of cultural meaning and how determined it is by … externalities, to misappropriate a word from economics. 

I contemplate this new name:

deborah.tatar.9

and it makes me angry. Who is Facebook to decide my name or, worse, my number? Is this the Village (“I am number 2. You are number 6.”)? Am I Jean Valjean (i.e., prisoner 24601)? 7 of 9? “Number 9. Number 9. Number 9.” in the words of the Beatles. 

Do I have a realistic choice?

The policy of generating student names at my institution is to take the first two letters of the first name and concatenate them with the last name. Thus, one of our students, Caleb Jones, was a bit startled to be given a name that in the United States is used as a euphemism for a portion of male anatomy not usually discussed more directly. Luckily, as a large, hearty, confident young man, though a devout Baptist and not given to raucous levity, he was able to treat it as a funny joke. For six years. Ya gotta suffer to be educated?
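
The rule is simple enough to state in a couple of lines of code. A minimal sketch, in hypothetical Python rather than anything the institution actually runs, shows how mechanically such handles get stamped out:

def student_handle(first_name: str, last_name: str) -> str:
    # First two letters of the first name concatenated with the last name, lowercased.
    return (first_name[:2] + last_name).lower()

print(student_handle("Caleb", "Jones"))  # -> 'cajones', the unfortunate case above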

I chose my current official name (dtatar) in a fraught moment when filling out a great deal of paperwork to join VT, with no forewarning that this was the name that I would have to live with for the duration, and no serious discussion of alternatives. I spent several years trying to make the alias “tatar” work, but every time there was a problem, tech support got confused. Also, I could not persuade the institution to use “tatar” publicly, meaning that I was never able to make a clean, clear self presentation. Finally, I just gave up. 

The world of human naming is very rich and full of imponderable meanings. We all know that in Spanish-influenced countries, there is both a matronymic and a patronymic (Garcia Lorca or Garcia Marquez) and it matters that the matronymic comes second and is not hyphenated. People seem well and truly situated in these cultures. I have had students from India who had to make up last names to come to the United States. A colleague from Myanmar was named for her birth year and a priest-created designation, with no indication of family connection embedded in the name. I imagine that in their birth worlds a name is not a handle. Instead the expectation is that to use someone’s name means that you actually know them. I inherited a wonderful children’s book called My Mother Is the Most Beautiful Woman in the World, about a little girl who gets lost and the attempts to find her mother. It turns out that no one had informed the little girl of the cross-cultural universal properties of beauty that might justify the designation “the most beautiful woman in the world” and, pace recent studies in scientific psychology, the authors did not seem to think that they ought to. A colleague from the Philippines once pointed out that virtually all women are named Maria. In that context, this leads to a widespread use of notably light-hearted nicknames—I had a Filipina colleague called Gucci. These nicknames designate but they also describe. In Korea, there are very few last names altogether, about five. My Korean students just laugh (kindly) when I attempt to ask them about their naming customs. Traditionally, in some cultures, there have been secret names, known only to certain people in the family, in the clan, and for only the individual themselves. And, now that they are grown, I have pet names for my children that I use only in private thought. 

Think about this wonderful richness and variety, including contradictions and inconsistencies!

My husband and I spent months deciding on our children's names. This was in part because we disagreed, but also because we thought it important. I favored Old Testament names, like my own and my family's, while my husband favored Western names (the western United States, that is) that mostly once upon a time were Irish, Scottish or English last names: Tyler, Taylor, Tanner, Tyrone. Also, in my tradition, we do not name people after the living but only after the dead, while his tradition features honoring the living. Then there were other design desiderata: the potential for malicious abbreviations, potential readings of the initials as words, and undesirable vocal properties. We considered funny names like Harrison Ford Harrison, a melding of Harrison Ford (the actor) and Ford Madox Ford (the writer) with my husband’s last name. My husband tormented me for months by dangling Elvis as a possibility. 

For our first son, exhaustive search produced exactly one name that we both liked. Then, when we realized we were having another boy, we faced the impossibility of giving our second son a once-discarded name. Imagine having to say “Yes, child, not only are you younger and smaller, but you got the left over name. Feel wanted.” We had to generate a completely different set. Amazingly, we did. 

A few years ago, Gopinaath Kannabiran (how’s that for a name that requires an enjoyable interlude on the tongue!) wrote a note for the NordiCHI conference on identity in social networking systems. He focused on gender identity in social media, a particularly difficult problem because of its ineluctable complexity. The gulfs between being known for example as a woman, labeling oneself as one, and being called one are enormous. How much more so when gender identities cross corporately enforced categories?

My husband reports hearing Nicolas Negroponte propose that we be tattooed with identifying dot patterns at birth. It’s the logical extension of a reductionistic approach to identity. But I also imagine the reaction of my now-dead relatives to this. Some of them had tattoos from concentration camps. 

Some of the dignity of being human rests in our control over ourselves and our appearance to others. Here’s the thing: from a technocentric perspective, it looks as though labeling ourselves by name, as we do in email or on Facebook, is cost-free. Hey, we all have names, and, by happy chance, your email name can function as both a person identifier and as an operational label for the machine. Yeah! Everybody wins! 

But is this really true? As I struggle to remember which email address I used, and to what end, and how that email combined with rules for some password that I am supposed to remember (“Please, kind system, just tell me whether you required a number and a capital letter or you are one of the ones that does not.”), and as I’m hemmed in by Facebook’s entirely arbitrary and self-serving choice to call me “9” and my friends’ and colleagues’ expectations that I have some kind of presence, I do not find these names “endowed with good wishes, the expectant blessings of kindness and virtue."

Efficiency with respect to the computer starts with the promise that the computer will do something for you “for free” but it evolves into pressure to do and behave in the way that the computer expects from you. There’s always a reason, but whose reason? How much effort would it actually take to enable a more complex creation of identity? 

I am, we are, reduced through these interactions and—here’s the design point—they could easily be otherwise.


Posted in: on Thu, July 11, 2013 - 9:42:20

Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.




@Pardha (2013 07 17)

Such a delightful article! Thank you Deborah. I never understood why it is so hard to change your handle after it is given to you. Aren’t computers supposed to make this sort of thing easy? After all, that handle is just a field in some database!

@Deborah Tatar (2013 07 18)

Hah! Yes. Someone wrote to me privately and mentioned that Facebook allows you to get rid of the number—for a fee. In other words, Facebook is monetizing our names.

Is this ok?  It feels like we’re selling our birthrights for a mess o’ pottage.

Now I feel foolish for having thought that this move was merely neglect, as opposed to something more dire.


When A/B testing gets an F


Authors: Jonathan Grudin
Posted: Tue, July 02, 2013 - 10:33:18

A relationship is like a shark, it has to constantly move forward or it dies. And I think what we got on our hands is a dead shark. —Woody Allen, Annie Hall

Like sharks in search of their next meal, living websites constantly move forward. How do they decide where to go? Many popular sites rely on A/B testing. Different versions of a feature, layout design, or advertisement are presented to thousands of users. What people try, how long they remain, and whether they click through are logged. Although called A/B testing, more than two alternatives can be compared.

This is a modern variant of a familiar controlled usability experiment. In the 1980s, my first usability studies required weeks to recruit 10 or 20 participants and get them into the lab. Today, it can take just minutes to identify preferences with statistical reliability. The website is the laboratory and the participants are unpaid users. The winning design can be rolled out to everyone. Refinement and evolution can proceed as rapidly as designers can generate ideas. Life is good.
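
For readers who want the mechanics behind that speed, the heart of such a comparison is a significance test on two observed rates. Below is a minimal sketch in Python, assuming a simple click-through metric; it uses a textbook two-proportion z-test and is illustrative only, not any particular company's pipeline.

from statistics import NormalDist

def ab_test_p_value(clicks_a: int, views_a: int, clicks_b: int, views_b: int) -> float:
    """Two-sided p-value for 'do versions A and B have different click-through rates?'
    computed with a two-proportion z-test."""
    rate_a, rate_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)         # click rate if A and B were identical
    std_err = (pooled * (1 - pooled) * (1 / views_a + 1 / views_b)) ** 0.5
    z = (rate_a - rate_b) / std_err
    return 2 * (1 - NormalDist().cdf(abs(z)))                    # small value: treat the difference as real

# Hypothetical numbers: 50,000 users see each version; B's 2.1% click rate edges out A's 2.0%.
print(ab_test_p_value(clicks_a=1000, views_a=50000, clicks_b=1050, views_b=50000))  # ~0.26, not yet reliable

Even with 100,000 participants, a difference that small is not statistically reliable, which is exactly why the scale of live websites matters.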

Continuous product release

Turn and face the strain —David Bowie, “Changes”

Adam Pisoni, co-founder of the enterprise social networking service Yammer, lists companies that did not move forward and ended up as dead sharks: Blockbuster, Bethlehem Steel, Tower Records, and so on. Pisoni is a forceful advocate of A/B testing. He notes that constant change conflicts with the predictability that companies traditionally relied on, but argues that today there is “this larger thing going on in business, this issue of predictability versus adaptability… As the world changes really fast, and things are changing, predictability becomes counterproductive.”

However, we’re creatures of habit. Habits can engender efficiency. A habit relies on the world having some predictability. Software designers fear the wrath of the “installed base”—a ball and chain for legacy software. A new company can have an advantage when introducing a novel interface: It won’t earn a reputation for abandoning customers.

Pisoni acknowledges the problem of a rapid pace of change: “The way we built started impacting customers in weird ways. We release constantly. We’ve always released at least once a week if not more. Customers started coming to us saying ‘man, we love how easy it is to use. However, we want you to build it differently, we want you to build it the traditional way, give us 3-year timelines and all that.’”

No can do. Pisoni’s goal is to go from weekly releases to continual release. Yammer is not alone in turning to face the strain. Facebook has used A/B testing and weekly pushes as it adds, removes, or changes features. Its 2006 introduction of the News Feed dissatisfied many existing users who saw clutter, yet Facebook weathered the storm and prevailed. A/B testing also drives the evolution of Google, Bing, Amazon, and other Web products.

 A/B testing works best if you know what to measure. Who knows better than advertisers? They can determine which design increases click-through or purchases. For them, A/B testing gets an A. Well, it may not reveal how often products are returned for refunds, or the likelihood of repeat business. Let’s make it an A-.

Hey?

The Obama team used A/B testing extensively in 2012. The subject header of one solicitation read simply, “Hey”. It surprised many people. It annoyed some, but A/B testing showed it was remarkably effective at drawing contributions. Money talks. I received nine “Hey” emails, from Barack Obama, Joe Biden, and other close friends.

The Obama campaign had one and only one goal: A majority of the electoral votes on November 6. The money contributed to reaching that goal. Of course, for most people Obama’s election was a means to another end—strengthening the Democratic Party, a progressive agenda, or something else. What was the effect of “Hey” on these goals? A/B testing optimizes for the here and now; when will local optimization end up hurting in the long-term?

I was annoyed and burned out by the extraordinary barrage of email marketing late in the campaign. On November 7, I began removing myself from scores of Democratic and progressive distribution lists. I welcomed the election outcome but wanted a year or two to recover. A classic tragedy of the commons; I was the overgrazed commons. Did the pitcher go to the well once too often? I was not alone in my flight to solitude. Check with us in a couple years. A/B testing here earns an Incomplete.

When A/B testing can get an F

In the May 14, 2001 issue of The New Yorker, a perceptive article by Tad Friend, “The Next Big Bet,” discussed the HBO television series Six Feet Under and contrasted the radically different business models of commercial television networks and HBO. His analysis provides insight into where A/B testing of a general population can go wrong—and how A/B testing could benefit from supplemental techniques.

The networks sell advertisements. How much they can charge depends primarily on how many viewers a show attracts. Every show strives to attract the largest possible audience, with perhaps some attention to age or income. A new show that draws precisely the same audience as a very successful existing show will be very successful. Hence, there were five different Law and Order series, three CSI series, and an unending progression of reality TV shows and sports coverage.

HBO relies on subscriptions, not ads. They want shows that appeal to the greatest common denominator—sports, crime series, reality shows, whatever. But once they have a strong success in a genre, an imitation may not attract new subscribers. More valuable is a novel show that appeals to a niche market that has not yet subscribed. Consider a very popular show that appeals to 30% of the potential audience. If HBO creates another show that appeals to the same 30%, it may get no new subscribers. A new show that appeals to a different 10% of the potential audience may attract millions of new subscribers. Hypothetically, if HBO had 10 shows with a powerful magnetic appeal to a different 10% of viewers, they might get everyone subscribing, even if no one show appealed to 30%.

This is where A/B testing alone can flounder. A/B testing on the existing user base may not detect something that will appeal to a niche that has not yet subscribed, and testing that identifies popular choices could provide six reality shows that appeal to the same 30% of the market. Each 10% niche show will lose against a 30% show, when cumulatively they would attract more subscribers.
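
To make the arithmetic concrete, here is a toy calculation in Python with hypothetical numbers that follow the 30%/10% example above; head-to-head comparison favors the imitation, yet the niche portfolio covers far more of the potential market.

# 100 potential subscribers. An existing hit appeals to a 30% segment, an imitation
# appeals to the same 30%, and seven niche candidates each appeal to a distinct 10%.
existing_hit = set(range(30))
imitation    = set(range(30))                                    # same viewers as the hit
niche_shows  = [set(range(30 + 10 * k, 40 + 10 * k)) for k in range(7)]

# A/B-style head-to-head: the imitation beats any single niche show on raw appeal...
print(len(imitation), ">", len(niche_shows[0]))                  # 30 > 10

# ...but it adds nothing beyond the existing hit, while the niche portfolio does.
print(len(existing_hit | imitation))                             # 30 potential subscribers covered
print(len(existing_hit.union(*niche_shows)))                     # 100 potential subscribers covered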

This is not necessarily confined to the television world. Let’s consider Facebook and Yammer. Are they more like the commercial television model or the HBO model today? Tomorrow?

Facebook has constantly moved forward. It swims in a sea on which many dead sharks float. Supported by A/B testing, Facebook made solid decisions. Few abandoned it and more flocked to join. Like the television networks, Facebook relies on advertising revenue. It wants eyeballs. A one-size-fits-the-greatest-common-denominator strategy may work. If Facebook adopts design A and leaves behind the minority who preferred B, it may be OK to lose some niche participation.

Facebook does lose niche participation. I’m in one of those niches. Facebook took away the two features I liked most, so I use it less. One was a presentation feature, one was a view. My original Facebook profile listed my favorite books, in three categories; my favorite films, also in three categories; music; a set of my favorite quotations; and so on. It was a personal statement that some people noticed. Facebook removed most of it entirely. Some could be partly reconstructed in a less compact, less easily scanned format. The once-prominent quotations exist but you probably can’t find them.

My favorite view listed in reverse chronological order the most recent post by each of my friends. This was a wonderful way to catch up quickly on everyone without being bogged down by those who post minute by minute accounts of their trips to get a latte. For whatever reasons, this view disappeared.

A/B testing must have shown Facebook would prosper without my pride in profile and my attention. It has. My niche was small.

How will this strategy fare in the long run? Will maximizing the eyeballs delivered to advertisers succeed? Might they create opportunities for HBO-like sites that appeal to niches such as mine? I consider the possible evolution of online sites in the next section, after a look at Yammer.

Yammer links employees within an organization. It wants to attract new organizations, and is thus more like HBO. However, with A/B testing across its customer base and frequent interface changes, it is betting on a greatest-common-denominator strategy. This could be a problem if different interfaces would appeal to different companies or industries; for example, if markedly different feature sets would appeal to financial companies, medical companies, and tech companies, or if cultural or regulatory differences would affect feature preferences. Within a company, A/B testing could miss major differences: Perhaps marketing and sales groups would flock to something very different from design and engineering. A/B testing could favor the preferences of the more numerous young, adaptive individual contributors, but the niche comprising executives and managers who desire slower, more predictable change could be significant for an enterprise service.

We don’t know—it is early days. But assume that Pisoni’s broad A/B testing delivers changes that appeal to 10% of every company. Customized interfaces would be more complex to design and manage, but they might appeal to 50% of each company. This is a classic market segmentation tradeoff. Perhaps 10% per organization is enough to sustain use and deliver on enterprise goals. But if 10% is below the critical mass to sustain use or if the goals require higher participation rates, the outcome is not so great. And even in the former case, niches might be created for competitors who provide features that appeal to more than 10% of the employees.

This is of course speculative. But the analysis suggests techniques that could supplement A/B testing to provide a more versatile process. Before concluding by discussing these techniques, let’s briefly consider the history of mass market versus niche solutions.

Market segmentation and a vulnerability of A/B testing

When a desirable product is first widely available, having it is a pleasure and owning it is status enough. Interface details are secondary. Henry Ford famously wrote of the Model T, “Any customer can have a car painted any color that he wants so long as it is black.” Ford focused on reliability and efficiency, but he was also a fanatic A/B tester, in a slower pre-Internet era. One size fits all worked well for a time, but eventually General Motors catered to the niches—those who wanted luxury, something sportier, or just a different color. It is more expensive to produce multiple brands, but General Motors became the larger company. Similarly, indistinguishable Timex watches and black telephones were immensely popular, but eventually Swatches and a competitive phone market thrived on personalization.

Differentiation and personalization are in our nature. Our prehistoric ancestors developed different cultures and languages. They ornamented themselves. For a time, having a Facebook profile was a personal statement. When everyone is a Facebook member, more complex market segmentation will inevitably become important.

A/B testing will not necessarily mislead or cease to contribute, but it won’t be enough to earn an A and its affordances could be unfortunate. Rapid change works best if users do little customization. The more variation, the more a product becomes a platform, the messier change can be. My highly customized profile was blown away by Facebook changes. When individual contributors and managers prefer different interfaces, as they often do, a change can disrupt one or both. A/B testing in practice pushes gently toward “any color as long as it is black.” But cultures, organizations, and individuals like to customize.

Supplementing experimental approaches

I recently visited a school in which students used a particular device. They told me what they would like changed. Weeks later, I was having dinner with an employee of the company that made the device and suggested they visit the school. “We won’t do that,” she said with a rueful laugh, “we just do A/B testing.”

At the point market segmentation becomes significant, you have to get out in the field to identify the segments and learn how they work. Today, with technology supporting our lives in ever finer detail, understanding the subtle effects requires getting out and looking closely. This is a golden age for quantitative exploration, for big data, and it is also a golden age for qualitative exploration. Qual and quant enthusiasts sometimes regard one another with suspicion, but individuals or companies that learn to use them together will win. Quantitative data can provide suggestions about where to look in depth, qualitative data will provide hypotheses about what is happening that quantitative data can then confirm, refute, or refine. A/B testing applied within market segments can deliver the power to determine whether different interfaces are needed or whether one—and which one—will suffice. As a partner, A/B testing could be back on track to getting an A. A/B testing that is not informed by the big picture, that is not supplemented with strong qualitative research, could get an F.

You better start swimmin' / Or you'll sink like a stone / For the times they are a-changin' —Bob Dylan

This post benefited from discussions with and ideas from Gayna Williams, and from an exchange with Michael Bernstein. Adam Pisoni material is from the cited link and a keynote that is not available online, used with permission.


Posted in: on Tue, July 02, 2013 - 10:33:18

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.




@Jonathan Grudin (2013 07 02)

The Wikipedia article on A/B testing identifies major companies known to use the technique, ranging from Amazon & BBC to Walmart & Zynga.


Enterprise users just want to have fun!


Authors: Monica Granfield
Posted: Fri, June 28, 2013 - 10:27:35

The bounce of the screen on the iPhone is so much fun that new users often keep pulling the screen down again and again, just to see the action happen. Animation in UX design—it’s not only fun, it's functional! Motion graphics bring even more depth and life to an interface, and more and more great animation is being produced. It’s a good time to be an animator, as animation is no longer limited to games and is blooming right into our everyday UX world!

Gamification, another fun yet functional aspect of UX design, is gaining momentum too. Sites like LinkedIn make it fun to recommend a colleague and to chart and measure your own impact on the site. You can monitor how your skills are being measured by your connections, see how often others have viewed you, see how many times you have appeared in a search, and see the strength of your profile and where your connections are strongest. There is so much interesting, fun, and useful information to discover about your LinkedIn presence. 

In a newly released book on gamification, Gamification at Work: Designing Engaging Business Software, Janaki Mythily Kumar and Mario Herger explore the idea of gamification in business and in the enterprise in a refreshingly practical, yet fun and engaging way. They use a process called Player Centered Design, which incorporates the notion of engaging the user. Player Centered Design puts the player at the center of the design and development process and surrounds the user with the concepts of motivation, mechanics, and mission. Beyond that there are the concepts of monitor, measure, and manage. They get creative, challenging the idea that simply adding points and badges to business applications is enough to gamify a product.

This has made me think of all the fun words that now reference the user experience: "games," "play," "engaging," and "fun." Wow, it sounds fantastic and very motivating; I want to use that product! However, in my many years of observing business users for research purposes and designing for the enterprise, never have I heard a user utter anything close to "I love this app, work is so much fun!" And I began to wonder, why not? 

Thinking back almost two decades now, there was one enterprise application I worked on where one task in the experience was actually cool and fun. Even though this was a process manufacturing application, we were able to integrate UX that would engage and excite the user. Who would think anyone could have fun in this domain? But you can, and we did.

I wonder why we are not having more fun in the enterprise. I look around and I see technology that clearly provides great power and performance for graphics and animation, increased use of motion design, and now the idea of introducing gamification to the enterprise and business sector. With all this technology, there is no reason we can't build more engaging experiences. There is a way to make practical interactions fun and functional, and there is no good reason not to. Imagine a user going to define a lifecycle or a business process and being excited because it is easy, straightforward, and FUN! Imagine making the mundane just a bit enjoyable.

Dana Chisnell discusses the idea of happiness in the UX Magazine article "Beyond Frustration: Three Levels of Happy Design." She boils the idea of a "happy design" down to three concepts: mindfulness, flow, and meaning. Mindfulness is pleasing and predictive and builds awareness and positive emotion; flow is the idea of being so immersed in something that time flies and nothing else exists while engaged with the product; and meaning is the contribution and value added by using the product. She states that these are all interconnected and can be used to bring "happiness" to a user experience. These concepts nicely describe and make tangible the idea of a happy, pleasing, and engaging user experience. Rather than hearing users speak of an experience as frustrating, annoying, and confusing, we can use these words to create a pleasing and exciting experience that allows users to easily and enjoyably reach their goals.

In its November 2012 press release, Gartner predicted that "by 2015, 40% of Global 1000 organizations will use gamification as the primary mechanism to transform business operations." In the same report, it also predicted that "by 2014, 80% of current gamified applications will fail to meet business objectives, primarily due to poor design."

So the next time you begin working on a new enterprise-level design, don't forget to spread the joy and bring a little fun into the everyday for your users.


Posted in: on Fri, June 28, 2013 - 10:27:35

Monica Granfield

Monica Granfield is a user experience designer at Imprivata. The views expressed on this website are exclusively her own and are not meant to reflect or represent the views of Imprivata.




Hair of the Monkey King


Authors: Tek-Jin Nam
Posted: Wed, June 19, 2013 - 9:31:48

Technologists are like magicians. They make dreams reality. In mythical stories, magic is the art that makes the impossible possible. Seven-League Boots is the art that contracts space. People using the spell can travel a thousand miles in one step. Clairvoyance is the power that can enable a person to see a scene from a distance. The powers of listening to sounds from a far distance or knowing other people’s minds are examples of the kind of imaginary magic that is often introduced in stories. A mirror or a stone ball that predicts the future is a magical object that people desire.

Technological developments have transformed the magic of the past into what ordinary people can enjoy now. The art of contracting space, the power of traveling fast, is possible with automobiles. People can fly with airplanes. Smartphones made it possible to see or listen from a far distance. If people from the past were to come to the present, they might be surprised by weather forecasting, as it is a kind of art of predicting the future. Other predictions become possible by big data analysis. Massive data generated by people can be used to understand what people want. Data analysts are able to use the spell of knowing people’s minds. All these magical powers become possible through science and technology advancements. If modern scientists or technologists had lived in the past, they would have been treated as magicians. The movie The Prestige deals with the story of magicians from the early 20th century. When they search for new magic, they go to Tesla. The movie describes the technologist as a true wizard who can actually do what magicians pretend to do.

I often ask myself: if I were allowed to choose only one among the numerous kinds of magic, which would I choose? I often ask other people this, too. Knowing what magic other people want to possess gives us hints about what science or technology should address as the next big challenges. Some people want to live forever. That is what bioengineering or medical science tries to address. Many ancient kings desired to live forever, or to never grow old. The traditions of entombment and embalmment are not unrelated to this desire. In the Harry Potter stories, invisibility cloaks seem to be among the most popular magical objects. I believe that identifying people’s desires is the key task of designers. So reading people’s minds would be useful magic for designers.

Among the many wizards or magicians appearing in stories, I consider the Monkey King (Sun Wukong in Chinese) to be the most impressive. He is the main character of the classical Chinese epic novel, Journey to the West, written by Wu Cheng'en. His Korean name is Son-Oh-Gong. The Chinese story is well known in Korea due to popular animation series based on the story. The story is about a journey by a group consisting of the monk Samjang (Xuanzang in Chinese), the Monkey King, Zhu Palge (Zhu Bajie), and Sha Ohjung (Sha Wujing) to retrieve Buddhist sutras from India. Son-Oh-Gong is the main character of the story. He is a monkey, born from a stone, who acquires supernatural powers. The monk Samjang controls Son-Oh-Gong’s magical powers by a ring on his head. Son-Oh-Gong can fly on a cloud. He carries the Ruyi iron rod, which is used as a magical weapon and whose size changes as one wishes, which is the meaning of the name. He often carries Ruyi in his ear by making it really small.

Among the Monkey King’s many magical powers and objects, his duplication magic is the one that I most envy. With this magic, he can create copies of himself from his hair. He can duplicate himself as long as he has hair. It is fortunate that Son-Oh-Gong is a monkey who does not need to worry about a lack of hair. Here is a joke: One day Son-Oh-Gong had to fight 100 monsters. He instantly plucked out his hairs and created 99 self-avatars. He noticed that one self-avatar was particularly weak and was being defeated. Angry, Son-Oh-Gong asked the weak one, “Why are you so weak? You should behave like the Monkey King.” The duplicated avatar replied, “I am a gray hair.”

I like the duplication magic best because it seems to make other magic possible. In the movie The Prestige, two magicians compete over an ultimate trick called the Transported Man. The secret of the magic was the duplication of a body. If the original body is removed when a duplicated body is created in a different location, the process looks like instant transportation.

Duplication makes it possible to experience multiple places and times. It offers a partial way of controlling time. For me, right now is the busiest time of the year. I have to advise many students on their degree projects, help complete term projects in classes, continue research work, review papers for conferences and journals, and write a grant proposal for my next research projects. It is the time when I especially wish I had this duplication magic. I want to relax after making several self-avatars and assigning different tasks to them.

If I could create many self-avatars, I would wish to enjoy other people’s lives in parallel. Although I enjoy what I do at my university, I often envy people working at a successful company. They produce products that directly influence many people’s lives. Living like them, I would want to see people who are happy with the artifacts I create. Meanwhile, I want to be an expert in other areas, such as music or sports. I want to travel to more places and meet different people. At a big conference like CHI, I could send self-avatars to multiple sessions when interesting presentations are happening at the same time. The duplication magic makes that dream possible.

Kings of the past wanted to live forever. The duplication magic makes this possible, too. If I keep a healthy avatar in a safe place, possibly in a protective time capsule, I can be fresh anytime. I have to be careful that all avatars do not disappear at once.

For the duplication magic to be truly powerful, fundamental, and multi-purpose, several issues need to be addressed. The first is whether I fully experience what my self-avatars experience, at the same time and in the same way. If my avatar sleeps while I don’t sleep, do I really sleep or not? If my avatars are hurt or sick, do I feel the same? If I share the pain of my avatars in dangerous situations, I should be really careful. Alternatively, my self-avatars might experience things independently of my original self. If so, I need a process to combine the experiences of my avatars with my own sense of self. If this experience integration is not done effectively, or requires a lot of effort and time, the duplication magic would not be that useful. These issues must be considered if science and technology try to realize or mimic the duplication magic.

Recently I have been interested in adding people’s personalities to IT products or systems. I wonder if it really adds value for people. If so, what would the real impacts be? I presume that products with a user’s personality would offer more emotional interactions. People would accept the products more openly and positively. People tend to prefer artifacts that are similar to them or carry personal traces. We often have emotional attachments to our old furniture, a leather bag, a house with personal patina. I think IT products and systems could become like this with the addition of tangible and intangible patina.

Living with products and environments that have my own personal characteristics and know how I judge things may be viewed as a situation where those products are more like my self-avatars. This is one of my visions of future smart products: everyday objects as my duplicated self-avatars. This is different from the vision of smart objects being secretarial avatars. The products that I use are parts of myself. I imagine a TV, mobile phone, notebook, and car made of my DNA so they can think, judge, and behave like me. That is the situation where I can use the duplication magic of Son-Oh-Gong with his hair.

In such a world, many trivial things can be managed by my self-avatars. My self-avatar TV will choose what I want to watch, and record the program voluntarily. The products will process the tasks as I think. While my self-avatars deal with trivial things, I would relax and enjoy something else. I may or may not feel what my duplicated body experiences. Or there must be a solution of how I effectively integrate the experiences of my self-avatars so that I can fully share them.

How would I feel if my self-avatar products are trashed? Would it be emotionally different from when normal products disappear? Would I use the self-avatar products with higher emotional attachments longer? What would happen if such products were stolen and controlled by other people? Would we want such products regardless of such privacy and security concerns?

In his book Thinking Fast and Slow, Daniel Kahneman explains that a person’s thinking system is processed by the interplay between two personal characters, System 1 and System 2, living in one’s mind. System 1 is fast, instinctive, and emotional. System 2 is slower, more deliberative, and more logical. He explains that the two self-characters in a person direct thoughts and cognition while influencing each other. I thought that this perspective could be applied to the vision of self-avatar products. If we can duplicate one of the two self-characters and assign it to the products or systems we design, the security and privacy issues, and the experience integration may be addressed.

Life filled with my self-avatars. I expect that the technology wizards will enable me to have the duplication magic of Son-Oh-Gong soon. I will use smart products, furniture, and environments that independently decide and process tasks without asking me, but they would exactly match what I intend. The future mobile phone would be my self-avatar. If everyone could use the duplication spell with the hair of the Monkey King, what would the future be like? Would it be the good, the bad, or the ugly? 



Posted in: on Wed, June 19, 2013 - 9:31:48

Tek-Jin Nam

Tek-Jin Nam is an associate professor in the Industrial Design Department at KAIST.




HCI/UX in China: A trip report


Authors: Aaron Marcus
Posted: Fri, June 14, 2013 - 11:04:13

How are product/service human-computer interaction design and user-experience design doing in China, you ask? Well, as they say, we live in interesting times. I shall give you a personal update based on recent experiences.

I have just spent a week in China in three cities, Yangzhou, Shanghai, and Beijing, attending two design conferences in the first and third cities and viewing the building where my new Center for User-Experience Design Innovation (CUXI) plans to open in September 2013. Please let me explain.

The Dragon Design Foundation (DDF), one of the largest design organizations in China, with close connections to the Chinese government (a must for any successful organization or business in China), invited me to give presentations at the Third World Green Design Forum in Yangzhou, a “small city,” I was told, with only 4.6 million inhabitants. Surprisingly, even Chinese participants were not too familiar with the city, although it has an illustrious past as a center for theaters, poets, artists, and, presumably, designers. A fleet of people attended to our needs as overseas invited presenters. Most attendees came from China and Europe. There were about 700 people from 20 countries. Members of the European Parliament attended.

The conference presentations on May 28 and 29, 2013 seemed to focus on a wide variety of topics, primarily on international standards, urban-scale projects, architectural projects, industrial products, new 3-D printing technologies, intellectual property and investments, and projects dealing with rural development, which seemed appropriate for a nation for which most of the land is still rural. Although many of the presentations were interesting, they did not specifically focus on HCI/UX products/services, as I thought they might.

One of the most amazing presentations was that by Mr. Peter Woolsey, the head of a company that has invented a patented process for turning pig manure and chicken manure (China is the largest grower of pigs and chickens in the world), plus the left-over body parts from food production, into edible substances for two kinds of fly larvae, which in turn grow into massive numbers in a short time. These fly larvae, in turn, are harvested, processed, and turned into a safe, healthful, nutritious meal (“maggot meal”!), which is then fed back to the next generation of pigs and chickens. As you might imagine, a significant amount of energy is saved, and one is left with an unusual example of recycling. One can only imagine what will happen when future humanity decides it should end the “wasteful” use of cemeteries and simply let people eat the previous people as a source of nutritious, tasty, and low-cost food. Somehow, I feel there is a lesson in sustainability for HCI/UX people, and I don’t mean grinding them up for the next generation of professionals.

Although interesting, the conference did not enlighten me that much about green apps, for example, that can help save energy at home, at work, traveling, etc. I did have an opportunity to present our own Green Machine project, which was a useful opportunity to discuss these issues.


Author in his “tribal hat” (actually his CHI 1999 “Sci-Fi at CHI” panel hat) that he used for his DDF presentations.

There are conferences that focus on HCI/UX developments in China.

One is the User-Experience Professionals Association (UXPA) conference, also called User Friendly. The last one was held in Beijing in November 2012. I was fortunate to be able to give a keynote lecture about UX in science-fiction movies and television over the past 100 years (the subject of my CHI 2013 tutorial) to about 700 people. The presentations and attendees came from many major sites of advanced development in China as well as abroad. The next conference is scheduled for November 2013 in Shanghai. Now that is a conference for HCI/UX.

There are also the Asia Pacific CHI (APCHI) conferences that bounce around Asian locations. They have taken place in China, but not consistently. I was also fortunate to present my sci-fi tutorial at the APCHI 2012 conference in Japan. The attendance was primarily Japanese. The next, APCHI 2013, will take place in Bangalore.

Another source of HCI/UX developments is the SIGGRAPH Asia conference, which has taken place in Hong Kong in the past (2011) and is scheduled again for that city in November 2013. I have found it to have extremely interesting and exotic HCI/UX exhibits. It was in 2011, I believe, that I discovered a Japanese R+D project to create knives and forks with sensors and sound displays that enabled food to become “musical.”

The second DDF conference, the Dragon Design Festival, was held on May 31 and June 1. This conference was much more oriented to teachers, researchers, and professionals reporting on their current projects and curricula. Unfortunately, again, there was minimal HCI/UX content in most of the presentations. This situation seems to suggest a disconnect between the high-tech developers and the regular Chinese design community. Most of the product-design presentations were really more industrial design than device-HCI design. This low level of HCI/UX content was disappointing, and I hope that the DDF/DDF will feature more of it in the future. One of the major components, and more interesting aspects, of the DDF/DDF conference was the planned development of a “Design Valley” in south Beijing, similar to other large urban developments rising in Shanghai, Hong Kong, and other locations in China. These government-supported centers are seeking to create their own combination of Silicon Valley and design centers at a scale and speed that is unheard of in the US. One of five complexes in the Beijing site features six multi-story buildings that will house 2,000 high-tech companies.

At the DDF/DDF conference I also had an opportunity to present my plans for my new Center for User-Experience Innovation (CUXI) in Shanghai, funded by the De Tao Group and located on the campus of the Shanghai Institute for Visual Arts. I plan to offer a year-long “executive user-experience master’s course,” like an eMBA, for Chinese professionals, executives, and students who wish to learn “all there is to know” about HCI/UX in one year, as well as frequent one-week short courses, with Shanghai as the venue, for US and European executives and professionals who would like a quick introduction to mobile UX design and the China context. The CUXI will also carry out UX design, research, and evaluation projects for Chinese companies and for foreign companies that have, or are interested in developing, China UX offices. One surprising result of my brief presentation about the CUXI was that the developer of one of the high-technology centers came up to me and stated that what I was doing in Shanghai was exactly what he needed in his own center, and that he needed it now! He even arranged on the spot for me to meet the regional government representative who must authorize and permit all such activities. That was a fortunate and productive moment at the conference.

Recent studies of HCI/UX professionals at most high-tech companies in China show that these professionals are young and lack the years of experience that their peers in other countries have. Universities and institutions like CUXI are trying to help them catch up. It is clear that China is making a strong, concerted effort to ensure that future high-tech gadgets and apps are not only made in China but also designed in China. Stay tuned...



Aaron Marcus

Aaron Marcus is president at Aaron Marcus and Associates, Inc. (AM+A) in Berkeley, California.


Color and user experience


Authors: Ashley Karr
Posted: Thu, June 13, 2013 - 9:48:11

Takeaway: Proper use of color can enhance the user experience of any design, as color affects humans psychologically, physiologically, and emotionally.

Emerald is: “Lively. Radiant. Lush. A color of elegance and beauty that enhances our sense of well-being, balance, and harmony.” Pantone named it 2013’s Color of the Year. I am glad that lively, radiant, lush elegance, beauty, and balance are harmonizing my experience during the 365 days that make up 2013. That is a bold statement, however, just like the color. Can color really do all of this? The answers are yes and sort of. Please read on for further explanation.

Psychological effects of color 

Color can augment memorization, recall, and recognition. In interactive designs, color can suggest categories and give identity to chunks of information. This can create a design that is more efficient, clearer and easier to understand, easier to learn, and easier to navigate. 

Physiological effects of color 

Colors affect our nervous systems. Research shows that, for example, bright reds stimulate our sympathetic nervous system, resulting in physiological changes such as an increased heart rate. In contrast, soft blues and greens create the opposite physiological effect and help us relax.

Emotional effects of color

Colors themselves and the meanings we attach to them affect our emotions and moods. For example, most people associate the color yellow with feeling happy and energized. On an individual level, a person could associate the color yellow with the color of their childhood home, which evokes fond memories and pleasant feelings.

Cultural context of color

Remember that meaning in general is culturally constructed. Sensitivity to the cultural context and meaning of color within your user group is important. The following is a common example demonstrating the cultural implications of color. In many Western cultures, black represents death. In some Eastern cultures, white represents death. How this affects one’s design and user interface decisions is up to the design team; however, it is important to remember that we always operate within a cultural context. Our users do, too.

Quick guide to color and meaning from an American perspective

Red

  • Increases blood pressure, heart and breathing rates
  • Stimulates the adrenal and pituitary glands, which can temporarily increase strength and stamina
  • Represents vitality, ambition, and passion
  • Can dispel negative thoughts
  • Associated with anger, danger, indebtedness and irritability

Pink

  • Induces feelings of relaxation, tranquility, warmth, and protection
  • Reduces feelings of aggression and irritation
  • Associated with nurturing, selfless, generous love
  • Light, soft pinks are associated with femininity, while bolder, hotter hues suggest youthful, fun energy

Orange

  • Stimulates digestive and immune systems
  • Associated with energy and vitality
  • Younger audiences respond to bold oranges, while older and upscale audiences respond positively to softer hues
  • Has only positive effects on mood; acts as an antidepressant

Yellow

  • Stimulates the brain, creating alertness and energy
  • Activates the lymphatic system
  • Happy, optimistic, confident, and uplifting
  • Associated with the intellect, organization, discernment, memory, clarity, decision-making, and good judgment

Green

  • Brings equilibrium and relaxation, feelings of comfort
  • Helps us breathe deeper and slower
  • Suggests nature, peace, well-being
  • Deep shades suggest wealth
  • Represents environmental friendliness
  • Particular shades of green, such as olive, can represent illness and nausea 

Blue

  • Lowers blood pressure, has a cooling and soothing effect
  • Deep blue stimulates the pituitary gland, which regulates sleep patterns, and is associated with calm, restful nights
  • Inspires mental control, clarity and creativity
  • An overuse of dark blue can be depressing
  • Suggests the sky and ocean

Purple

  • Violet suggests purification, cleansing, peace, and balance
  • Combats shock and fear
  • All hues help with mental and nervous disorders
  • Stimulates compassion, intuition, and imagination
  • Associated with the right side of the brain
  • Relates to sensitivity, beauty, and idealism
  • Associated with royalty and nobility

Brown

  • Suggests earth, home, stability, and security
  • Can also suggest dirtiness or retreat and isolation from the world

Black

  • Comforting and protective
  • Mysterious, suggests silence and death
  • Can also be considered sleek and fashionable

White

  • Purity, clarity, peace, and comfort
  • Suggests freedom, although too much can be considered cold and isolating

Gray

  • Suggests independence and self-reliance
  • Can be a negative color, suggesting evasion and non-commitment, separation, lack of involvement, loneliness

Color use restrictions

  • Overuse of color creates clutter and confusion. Find one color for your background, one that represents your brand or message, and two complementary yet contrasting colors that can act as indicators for active, hovered, and visited links. This means a site should have a minimum of four colors. Any additional color should be chosen with care.
  • Underuse of color results in a dull design lacking in interest and meaning. It can also result in confusion. Imagine trying to find an embedded text link that is the same color as the surrounding words!
  • Improper use of color at worst can cause great offense. Remember color carries the weight of meaning, and this meaning is always wrapped in cultural contexts. Be aware of these meanings and use them, and their colors, with respect and purpose.
  • Color blindness affects roughly 10% of the male population. Keep this in mind as you choose contrasting colors. If the colors that serve different functions in your design do not contrast strongly enough, a sizeable portion of your user group could be negatively affected by your choices (see the contrast-checking sketch after this list).
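
The color-blindness and link-color points above lend themselves to a quick automated check. Below is a minimal TypeScript sketch, not from the original post, that computes the WCAG 2.x contrast ratio between two colors and tests a hypothetical four-color palette (background, brand, link, visited link) against its background; the palette, hex values, and function names are my own illustrative assumptions.

// Minimal sketch (TypeScript). The palette and function names below are
// illustrative assumptions; the luminance and contrast formulas follow
// the WCAG 2.x definitions.

// Convert a "#rrggbb" hex string to [r, g, b] components in the 0..1 range.
function hexToRgb(hex: string): [number, number, number] {
  const n = parseInt(hex.replace("#", ""), 16);
  return [((n >> 16) & 0xff) / 255, ((n >> 8) & 0xff) / 255, (n & 0xff) / 255];
}

// WCAG relative luminance of an sRGB color.
function relativeLuminance(hex: string): number {
  const [r, g, b] = hexToRgb(hex).map(c =>
    c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4)
  );
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio between two colors, ranging from 1:1 up to 21:1.
function contrastRatio(a: string, b: string): number {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// A hypothetical four-color palette: background, brand, link, visited link.
const palette = {
  background: "#ffffff",
  brand: "#2e7d32",      // an emerald-ish green
  link: "#003366",
  visitedLink: "#5e35b1",
};

// WCAG AA suggests at least 4.5:1 for normal body text.
for (const [role, color] of Object.entries(palette)) {
  if (role === "background") continue;
  const ratio = contrastRatio(color, palette.background);
  console.log(`${role}: ${ratio.toFixed(2)}:1 ${ratio >= 4.5 ? "passes" : "fails"}`);
}

A quick check like this, or a tool such as the Colour Contrast Analyser mentioned in the comments below, can catch low-contrast link and text colors before they reach users.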

How is color important in user experience?

Remember that user experience is, above all, affective. Both objective and subjective evidence supports the idea that color affects humans psychologically, physiologically, and emotionally. Importantly, these effects come wrapped in cultural contexts. This means that the reactions a color evokes in us can change depending on the culture or cultures in which we were raised, currently reside, or are currently acting as a user. Selecting and using color with thought, purpose, and care can enhance the user experience. We would love to hear about your experiences with color use and choice in your designs. Please write your comments below. Until next time, please enjoy the experience.



Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


@Nick Fine (2013 06 25)

Where are your references?  Much of this is regurgitated popular psychology without any evidence to support it.

@Benoît Larivière (2013 07 04)

Thanks. I would also add that a tool such as Colour Contrast Analyser (Paciello Group) is helpful to ensure proper legibility of the text.

@Steve Dolan (2013 07 14)

Great read! I wanted to add something I learned recently: When designing, it is important to choose the style of your hyperlinks so that they easily stand out. I mean beyond just changing the color; bolding and adding an underline also make a difference. The idea is that if you desaturate and blur your design, you should still be able to tell where a link is located in your text.


A slow triangulation


Authors: Jonathan Grudin
Posted: Tue, June 04, 2013 - 12:47:16

In the mid-18th century:

"Does Britannia, when she sleeps, dream? Is America her dream? - in which all that cannot pass in the metropolitan Wakefulness is allow'd Expression away in the restless Slumber of these Provinces, and on West-ward, wherever 'tis not yet mapp'd, nor written down, nor ever, by the majority of Mankind, seen,- serving as a very Rubbish-Tip for subjunctive Hopes, for all that may yet be true,-Earthly Paradise, Fountain of Youth, Realms of Prester John, Christ's Kingdom, ever behind the sunset, safe till the next Territory to the West be seen and recorded, measur'd and tied in, back into the Net-Work of Points already known, that slowly triangulates its Way into the Continent, changing all from subjunctive to declarative, reducing Possibilities to Simplicities that serve the ends of Governments,- winning away from the realm of the Sacred, its borderlands one by one, and assuming them into the bare mortal World that is our home, and our Despair."  —Thomas Pynchon, Mason & Dixon

Geography and monopoly

Everywhere we look, geography and monopoly align: in ecology, linguistics, economics, cuisine, and, I will suggest, our conceptions of innovation and creativity. And with transportation and digital technologies breaking down geographic boundaries, creating the long-anticipated global village, the potential range of monopoly is extended. Diversity is almost paradoxically more visible and more threatened.

In ecology, the competitive exclusion principle holds that one species will achieve control over a niche. However, physical barriers—oceans, mountains, deserts, jungles—enable different species to evolve in similar niches. When a barrier comes down—a land bridge forms, specimens hitch rides on floating logs, ships, or planes—competition ensues, and only one species survives.

Which country has the most languages? If you don’t know, you won’t guess. Papua New Guinea, most experts agree, with about 800 distinct languages. With communities isolated by jungles, mountains, and bellicosity, linguistic inventiveness flourished in each valley. The runner-up, Indonesia, is an archipelago. As transportation and communication technologies overcame geographic barriers, linguistic diversity dropped. As the planet’s population doubled and tripled, the number of languages was halved! Every week or two another language disappears. Surviving languages evolve more slowly than in the past, their inventiveness curtailed by grammar books, teachers, and copy editors everywhere.

Geographic isolation also facilitates economic monopoly. Farmers lured to the American West by a railroad company were dependent on the railroad to reach markets, so the robber barons could charge “all the traffic will bear.” Isolated mining communities had only the company store, which set prices that effectively enslaved the laborers. Big government evolved to control such monopolists, but the geographic metaphor endures: Warren Buffett looks for businesses with a “moat,” a non-geographic barrier to competition that enables them to raise prices and increase profits.

Monopolies are a natural development. One species occupies a niche. When an isolated culture contacted “the outside world,” adopting a dominant language was a path to extensive cultural, medical, industrial, and scientific lore.

Monopolies can be efficient, but there are downsides. Innovation may decline—less competition and natural selection, less diversity. Economic regulation might help—when its profit was controlled, AT&T set up Bell Labs. The company had largely overcome U.S. competitors but was contained by the oceans. Other countries developed telephone systems. Eventually AT&T was broken up, the oceans were crossed, innovation and competition ensued, and the less efficient evolved or disappeared. With fewer global players remaining, we move toward a new monopoly, as happens when isolated and relatively static species and cultures come into contact.

Although some intrepid critters made it over mountains or floated across oceans, geographic barriers generally came down only in geologic time until Homo sapiens arrived. Today those barriers are effectively gone. We have achieved the global village. It is great, of course, but the consequences of the true disappearance of frontiers are only starting to be understood. When I was growing up in a small village in the Midwest, a tension existed between the individual and the community—I had limited privacy but tangible benefits. Today, we don’t know the other global villagers intruding on our privacy, and the benefits are usually less tangible. As Pynchon noted with mild foreboding, our dreams are disrupted more deeply than we know.

Creativity and innovation

There's nothing you can know that isn't known
Nothing you can see that isn't shown
—John Lennon, “All You Need Is Love”

For most of our existence as a species, geographic isolation afforded monopoly protection to inventors, artists, and writers, just as it did to languages and species. An invention that was novel in the cave or town was valuable even if it had previously been invented in a thousand other places across the planet. Word traveled slowly if at all. If your neighbors used it, you were in business. Similarly, the most creative poet, songwriter, or storyteller in a village was appreciated, as was the best healer, strongest athlete, and most skilled hunter or gatherer.

In the 17th and 18th centuries, as population became more concentrated, patents and copyrights in the modern sense were devised to bestow a limited-duration monopoly. They originally had a limited geographic range, applying to a nation or even a single city. Today, the monopoly bestowed by patents and copyrights to reward innovation is often global.

Not anymore: the protection of distance is gone. The Internet and YouTube can help inventors, but on balance they are not the inventors’ friend. The best local storyteller vies with storytellers everywhere. An inventor vies with inventors everywhere. We have access to everything for inspiration, but when one of our six billion potential competitors beats us to the punch, our achievement becomes yesterday’s news.

Creativity is defined in different ways, but in the sense of inventiveness, technology has rendered creativity more difficult and less important with each passing year. When writing supplanted word of mouth for passing down knowledge, we competed with dead people. Today we compete with billions of others to be first or best. If you don’t invent and market it fast, someone else will.

Things will be invented, because people are inventive. We may be naturally selected for inventiveness. Resourceful and creative individuals improved the odds of a small, isolated community thriving.

Obsession with creativity

Paradoxically, as originality declined in significance, our interest in it grew. In an entertaining interview, Austin Kleon notes that the concept of originality is “kind of an invention of the nineteenth century,” when geographic barriers to communication were crumbling rapidly. People may have realized that local inventiveness mattered less and looked more broadly. With less personal acquaintance, the inventor and the process of invention became dissociated. But originality was less prevalent than people imagined. In The Act of Creation, Arthur Koestler documents 19th- and 20th-century advances that were credited to individuals but were “in the air,” widely discussed before anyone received the credit.

Now the final barriers are down, and handwringing about declining creativity is everywhere. Issues of Fast Company regularly trumpet the methods of “creative people.” NSF initiated a CreativeIT program. Amazon lists over 50,000 books with “creative” or “creativity” in the title. Discussions of education often focus on fostering creativity. It seems an unconscious response to the increased difficulty of being truly original. A good idea occurs to me, and with a search engine I can probably find it already enunciated several times over. Bad ideas too. If we do not realize that technology has shifted the playing field,