
Service design 101


Authors: Lauren Chapman Ruiz
Posted: Mon, July 21, 2014 - 10:04:37

Note: This article was co-written by Lauren Ruiz and Izac Ross.

We all hear the term service design bandied about, but what exactly does it mean? Clients and designers often struggle to find a common language for the art of coordinating services, and frequent questions arise. The topic often surfaces as a necessity in discussions of customer experience or complicated journey maps. In response, here is a brief FAQ primer to show the lay of the land in service design.

What are services?

Services are intangible economic goods—they lead to outcomes as opposed to physical things customers own. Outcomes are generated by value exchanges that occur through mediums called touchpoints. For example, when you use Zipcar, you don’t actually own the car; you buy temporary access to it. You use the car, return it, and it passes on to someone else. Every point at which you engage with Zipcar is a touchpoint.

What creates a service experience?

Services are always co-created by what we call service users and service employees—the direct beneficiaries of the service, and the individuals who see the service through.

This often means that the outcome will vary for each service user. Your experience of a service may be completely different from another’s. Think of a flight—it can be a pleasant experience, or, if you have a screaming baby next to you, not so great. Service employees can do everything to provide a good experience, but there are unknown factors each time that can ruin that experience.

A positive service experience is intentionally planned; good service design anticipates these situations and works to account for them.

Who else is involved in a service?

A service experience often involves more than just the service user and employee. There are several types of people working together to create a service:

  • Service customers purchase the service; they are sometimes different from the people who actually use it.
  • Service users directly use the service to achieve the outcome.
  • Frontstage service employees deliver the service directly to the user.
  • Backstage service employees make everything happen in the background; the user doesn’t see or interact directly with these people.
  • Partner service employees work for other organizations that help deliver the service. For example, UPS is a partner service employee to Amazon. You may order from Amazon, but UPS plays a role in completing your service experience.

What is frontstage and backstage?

In services, there are things the customer does and doesn’t see—we call this frontstage and backstage. Think of it like theater: the frontstage is what you see in front of the curtain (the actors), while the backstage is everything done behind the curtain to support them. Those backstage do just as much to shape the experience as those on the frontstage. They help to deliver the service, play an active and critical part in shaping the experience, and represent a company’s brand.

Partners help the company deliver the service outcome by doing things like delivering packages, providing supplies for the service, or processing data.

What are touchpoints?

Earlier I mentioned touchpoints as the medium through which value exchanges happen, leading to the outcomes of a service. Touchpoints are these exchange moments in which service users engage with the service.

There are five different types of touchpoints:

  • People, including employees and other customers encountered while the service is produced
  • Place, such as the physical space or the virtual environment through which the service is delivered
  • Props, such as the objects and collateral used to produce the service encounter
  • Partners, including other businesses or entities that help to produce or enhance the service
  • Processes, such as the workflows and rituals used to produce the service (these tie together the people, place, props, and partners)

Unlike most products, a service is typically purchased again and again. When a service is purchased only once, it tends to be a high-value exchange. Since most services are used frequently, we approach designing a service by considering the service cycle.

The service cycle helps answer the following questions:

  • How do we entice service users?
  • How do they enter into the service?
  • What is their service experience?
  • How do they exit from the service?
  • And how do we extend the service experience to retain them as a repeat service user?

What has 30 years done to services?

A lot has changed in services over the decades. Think about banking. At one point, you could reach your bank through only four channels: checks, phone, mail, and branches. Today there are many more access channels to coordinate, including debit cards, ATMs, online banking, the mobile web, texting, iPhone and Android apps, mobile check deposit, retail partners, and even Twitter.

And here’s the exciting news—service design (or as some industries might call it, customer experience) is critical to making a cohesive experience across all these channels! There is a desperate need to coordinate these elements using the skills and principles of design.

Like most industries, design disciplines have been changing in response to paradigm shifts in the economy. Graphic design emerged from the printing press. The industrial age gave birth to industrial design. Personal computing and the mobile age gave rise to interaction design. And the convergence of all of these channels has brought service design forward to coordinate service outcomes.

So how important is service design?

We’ve all had bad service experiences across a range of industries. They’re why companies lose customers, and they can bring frustration, pain, and suffering—from poor transit systems to care delivery. When clients neglect backstage or frontstage employees, every pain point will show through to a service user and customer.

Without effective service design, many companies break apart into disconnected channels, with no one overseeing or coordinating. And even if you’re creating a product, understanding the service you’re trying to put your product into will help your product be much more successful—remember, your B2B “product” is also one of your customer’s touchpoints.

In addition, there are many opportunities to leverage technology to create new services. Look at TaskRabbit—it starts as a digital experience, but without the “rabbits” to perform the service, it’s useless.

Finally, well-designed service experiences differentiate companies. Those who pay attention to wisely designing services will be poised to stand out and achieve success in our ever-changing economy.

So how important is service design? I hope this post has convinced you the answer is very. Tune in again as we’ll be continuing this topic with a deep dive into one of the most important tools of service design—the service blueprint.

Top image via Zipcar; all others created by Izac Ross




Lauren Chapman Ruiz

Lauren Chapman Ruiz is an Interaction Designer at Cooper in San Francisco, CA, and has been an adjunct faculty member at Carnegie Mellon University.


@rypac (2014 07 22)

I seriously appreciate your information.
this post is helpful.
thanks for this idea


Visual design’s trajectory


Authors: Jonathan Grudin
Posted: Thu, July 17, 2014 - 1:50:21

Some graphic artists and designers who spent years on the edges of software development describe with bemusement their decades of waiting for appreciation and adequate computational resources. Eventually, visual design soared. It has impressed us. Today, design faces complexities that come with maturity. Cherished aesthetic principles deserve reconsideration.

An enthusiastic consumer

People differ in their ability to create mental imagery. I have little. I recognize some places and faces but can’t conjure them up. The only silver lining to this regrettable deficit is that everything appears fresh; the beauty of a vista is not overshadowed by comparison with spectacular past views. I’m not a designer, but design can impress me.

The first HCI paper I presented was inspired by a simple design novelty. I had been a computer programmer, but in 1982 I was working with brain injury patients. A reverse video input field—white characters on a black background—created by Allan MacLean looked so cool that I thought that an interface making strategic use of it would be preferred even if it was less efficient. I devised an experiment that confirmed this: aesthetics can outweigh productivity [1].

Soon afterward, as the GUI era was dawning, I returned to software development. A contractor showed me a softly glowing calendar that he had designed. I loved it. Our interfaces had none of this kind of beauty. He laughed at my reaction and said, “I’m a software engineer, not a designer.” “Where can I find a designer?” I asked.

I found one downstairs in Industrial Design, designing boxes. As I recall, he had attended RISD and had created an award-winning arm that held a heavy CRT above the desktop, freeing surface space and allowing the display to be repositioned with a light touch. I interested him in software design. It took about a year for software engineers to value his input. Other designers from that era, including one who worked on early Xerox PARC GUIs, recount working cheerfully for engineers despite having little input into decisions.

Design gets a seat at the table

I was surprised by design’s slow acceptance in HCI and software product development. Technical, organizational, sociocultural, and disciplinary factors intervened.

Technical. Significant visual design requires digital memory and processing. It is difficult to imagine now how expensive they were for a long time. As noted in my previous post, and in the recent book and movie about Steve Jobs, the Macintosh failed in 1984. It succeeded only after models with more memory and faster processors came out, in late 1985 and in 1986. Resource constraints persisted for another decade. The journalist Fred Moody’s account of spending 1993 with a Microsoft product development team, I Sing the Body Electronic, details an intense effort to minimize memory and processing. The dynamic of exponential growth is not that things happen fast—as in this case, often they don’t—it is that when things finally start to happen, then they happen fast. In the 2000s, constraints of memory, processing, and bandwidth rapidly diminished.

Organizational. The largest markets were government agencies and businesses, where the hands-on users were not executives and officers. Low-paid data entry personnel, secretaries who had shifted from typing to word processing, and other non-managerial workers used the terminals and computers. Managers equipping workers wanted to avoid appearing lavish—drab exteriors and plain functional screens were actually desirable. I recall my surprise around the turn of the century when I saw a flat-panel display in a government office; I complimented the worker on it, and her dour demeanor vanished; she positively glowed with pride. For decades, dull design was good.

Sociocultural. The Model T Ford was only available in black. Timex watches and early transistor radios were indistinguishable. People didn’t care. When you are excited to own a new technology, joining the crowd is a badge of honor. Personalization comes later—different automobile colors and styles, Swatches, distinctive computers and interfaces. The first dramatically sleek computers I saw were in stylish bar-restaurants.

Disciplinary friction. Software engineers were reluctant to let someone else design the visible part of their application. Usability engineers used numbers to try to convince software developers and managers not to design by intuition; designers undermined this. In turn, designers resented lab studies that contested their vision of what would fare well in the world. The groups also had different preferred communication media—prototypes, reports, sketches.

These factors reflected the immaturity of the field. Mature consumer products relied on collaboration among industrial design, human factors, and product development. Brian Shackel, author of the first HCI paper in 1959, also worked on non-digital consumer products and directed an ergonomics program with student teams drawn from industrial design and human factors.

As computer use spread in the 1990s, HCI recognized design, sometimes grudgingly. In 1995, SIGCHI backed the Designing Interactive Systems (DIS) conference series. However, DIS failed to attract significant participation from the visual design community: Papers focused on other aspects of interaction design. In the late 1990s, the CMU Human-Computer Interaction Institute initiated graduate and undergraduate degrees with significant participation of design faculty.

This is a good place to comment on the varied aspects of “design.” This post outlines a challenge for visual or graphic design as a component of interaction design or interface design focused on aesthetics. Practitioners could be trained in graphic art or visual communication design. Industrial design training includes aesthetic design, usually focused on physical objects that may include digital elements. Design programs may include training in interaction design, but many interaction designers have no training in graphic art or visual communication. CHI has always focused on interaction design, but had few visual designers in its midst. “Design” is of course a phase in any development project, even if the product is not interactive and has no interface, which adds to the potential for confusion.

Design runs the table

Before the Internet bubble popped in 2000–2001, it dramatically lowered prices and swelled the ranks of computer users, creating a broad market for software. This set the stage for Timex giving way to Swatch. In the 2000s, people began to express their identity through digital technology choices. In 2001, the iPod demonstrated that design could be decisive. Cellphone buddy lists and instant messaging gave way to Friendster, MySpace, Facebook, and LinkedIn. The iPod was followed by the BlackBerry in 2003, the iPhone in 2007, and other wildly successful consumer devices in which design was central.

The innovative Designing User Experience (DUX) conference series of 2003–2007 drew from diverse communities, succeeding where DIS had failed. It was jointly organized by SIGCHI, SIGGRAPH, and AIGA—originally American Institute of Graphic Arts, founded in 1914, the largest professional organization for design.

The series didn’t continue, but design achieved full acceptance. The most widely-read book in HCI may be Don Norman’s The Psychology of Everyday Things. It was published in 1988 and republished in 2002 as The Design of Everyday Things. Two years later Norman published Emotional Design.

Upon returning to Apple in 1997, Steve Jobs disbanded its HCI group, led by Don Norman. Apple’s success with its single-minded focus on design has had a wide impact. For example, the job titles given to HCI specialists at Microsoft evolved from “usability engineers” to “user researchers,” reflecting a broadening to include ethnographers, data miners, and others, and then to “design researchers.” Many groups that were focused on empirical assessment of user behavior had been managed in parallel with Design and are now managed by designers.

Arrow or pendulum?

Empowered by Moore’s law, design has a well-deserved place at the table, sometimes at the decision-maker’s right hand. But design does not grow exponentially. Major shifts going forward will inevitably originate elsewhere, with design being part of the response. An exception is information design—information is subject to such explosive growth that tools to visualize and interact with it will remain very significant. Small advances will have large consequences.

In some areas, design may have overshot the mark. A course correction seems likely, perhaps led by designers but based on data that illuminate the growing complexity of our relationships with technology and information. We need holistic views of people’s varied uses of technology, not “data-driven design” based on undifferentiated results of metrics and A/B testing.

I’d hesitate to critique Apple from Microsoft were it not for the Windows 8 embrace of a design aesthetic. Well-known speakers complain that “Steve Ballmer followed Steve Jobs over to the dark side,” as one put it. They are not contesting the value of appearance; they are observing that sometimes you need to do real work, and designs optimized for casual use can get in the way.

My first HCI experiment showed that sometimes we prefer an interface that is aesthetic even when there is a productivity cost. But we found a limit: When the performance hit was too high, people sacrificed the aesthetics. Certainly in our work lives, and most likely in our personal lives as well, aesthetics sometimes must stand down. Achieving the right balance won’t be easy, because aesthetics demo well and complexity demos poorly. This creates challenges. It also creates opportunities that have not been seized. Someone may be doing so out of my view; if not, someone will.

Aesthetics and productivity

Nature may abhor a vacuum, but our eyes like uncluttered space. When I first opened a new version of Office on my desktop, the clean, clear lettering and white space around Outlook items were soothing. It felt good. My first thought was, “I need larger monitors.” With so much white space, fewer message subject lines fit on the display. I live in my Inbox. I want to see as much as my aching eyes can make out. I upsized the monitors. I would also reduce the whitespace if I could. I’d rather have the information.

A capable friend said he had no need for a desktop computer—a tablet suffices, perhaps docked to a larger display in his office. Maybe our work differs. When I’m engaged in a focal task, an undemanding activity, or trying out a new app, sparsity and simplicity are great. When I’m scanning familiar information sources, show me as much as possible. As we surround ourselves with sensors, activity monitors, and triggers, as ever more interesting and relevant digital information comes into existence, how will our time be spent?

Airplane pilots do not want information routed through a phone. They want the flight deck control panel, information densely arrayed in familiar locations that enable quick triangulations. If a new tool is developed to display airspeeds recorded by recent planes on the same trajectory, a pilot doesn’t want a text message alert. Tasks incidental to flying—control of the passenger entertainment system perhaps—might be routed through a device.

We’re moving into a world where at work and at home, we’ll be in the role of a pilot or a television news editor, continually synthesizing information drawn from familiar sources. We’ll want control rooms with high-density displays. They could be more appealing or less appealing, but they will probably not be especially soothing.

Design has moved in the opposite direction, toward sparse aesthetics for initial or casual encounters and focal activity. Consumer design geared toward first impressions and focal activity is perfect for music players and phones. Enabling people to do the same task in much the same way on different devices is great. However, when touch is not called for, more detailed selection is possible. Creative window management makes much more possible with large displays. A single application expanded to fill an 80-inch display, if it isn’t an immersive game, wastes space and time.

I observed a 24x7 system management center in which an observation team used large displays in a control panel arrangement. The team custom-built it because this information-rich use was not directly supported by the platform.

You might ask, if there is demand for different designs to support productivity, why hasn’t it been addressed? Clever people are looking for ways to profit by filling unmet needs—presumably not all are mesmerized by successes of design purity. My observation is that our demo-or-die culture impedes progress.  A demo is inherently an initial encounter. A dense unfamiliar display looks cluttered and confusing to executives and venture capitalists, who have no sense of how people familiar with the information will see it.

This aggravates another problem: the designers of an application typically imagine it used in isolation. They find ways to use all available screen real estate, one of which is to follow a designer’s recommendation to space out elements. User testing can appear to validate the resulting design on both preference and productivity measures when, as in the default testing scenario, it is tested on new users trying the application in isolation. People using the application in concert with other apps or data sources are not given ways to squeeze out white space or to tile the display effectively.

Look carefully at your largest display. Good intentions can lead to a startling waste of space. For example, an application often opens in a window that is the same size as when that application was most recently closed. It seems sensible, but it’s not. Users resize windows to be larger when they need more space but rarely resize them smaller when they need less space, so over time the application window grows to consume most of a monitor. When I open a new window to read or send a two-line message, it opens to the size that fits the longest message I’ve looked at in recent weeks, covering other information I am using.
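To make the alternative concrete, here is a minimal sketch of a content-aware sizing heuristic: open a window just large enough for its content, and use the remembered size only as a cap rather than as the default. The names and numbers are hypothetical, not any real windowing API.

```typescript
// Minimal sketch (hypothetical names, not a real windowing API):
// size a new window to its content, treating the last-closed size as a cap.

interface Size { width: number; height: number; }

function initialWindowSize(content: Size, lastClosed: Size, chrome = 32): Size {
  return {
    // grow only as far as the content needs, never past the remembered size
    width: Math.min(content.width + chrome, lastClosed.width),
    height: Math.min(content.height + chrome, lastClosed.height),
  };
}

// A two-line message no longer opens in a near-full-screen window.
console.log(initialWindowSize({ width: 480, height: 120 },
                              { width: 1600, height: 1000 }));
// -> { width: 512, height: 152 }
```

Under this heuristic a short note opens small and a long document still gets the large window the user established earlier, instead of every window inheriting the largest size ever used.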

The challenge

The success of the design aesthetic was perfectly timed to the rapidly expanding consumer market and surge of inexpensive digital capability in just the right segment of the exponential curve. It is a broad phenomenon; touch, voice, and a single-application focus are terrific for using a phone, but no one wants to gesticulate for 8 hours at their desk or broadcast their activity to officemates. At times we want to step back to see a broader canvas.

The paucity of attention to productivity support was recently noted by Tjeerd Hoek of Frog Design. The broad challenge is to embrace the distinction between designs that support casual and focal use and those that support high-frequency use that draws on multiple sources. Some designers must unlearn a habit of recommending aesthetic uncluttered designs in a world that gets more cluttered every week. Cluttered, of course, with useful and interesting information and activities that promote happier, healthier, productive lives.

Endnote

1. J. Grudin & A. MacLean, 1984. Adapting a psychophysical method to measure performance and preference tradeoffs in human-computer interaction. Proc. INTERACT '84, 737-741.

Thanks to Gayna Williams for suggesting and sharpening many of these points. Ron Wakkary and Julie Kientz helped refine my terminology use around design, but any remaining confusion is my fault.


Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Excuse me, your company culture is showing


Authors: Monica Granfield
Posted: Fri, July 11, 2014 - 7:36:50

“Find the simple story in the product, and present it in an articulate and intelligent, persuasive way.”
— Bill Bernbach

As I read this quote by the all-time advertising great Bill Bernbach, it occurred to me that simplifying and distilling a product story, and representing it persuasively and innovatively in a product, depends on a company’s brand and culture, and on how the brand embodies the culture.

This is not new news, but it is news worth revisiting—your company culture and politics surface in the design of your product. 

As a designer, my inclination is to try to understand how a design solution was reached—how and why something was created and designed as it was. In doing this, it quite often becomes apparent why certain design trade-offs and decisions were made. One can almost hear the conversations that occurred around the decisions.

A confusing design that provides little guidance and direction or one that does not provide enough flexibility, generating end user frustration, could be traced back to a culture where the end user’s voice is not heard or represented. 

Business trade-offs, technical decisions, design trade-offs, research or lack of it, political posturing—it's all there, reflected in your product. Every meeting, every disagreement, every management decision—all are represented in the end result, the design of your product and the experience your users have with that product. 

Does your company innovate or follow? Is the design of your product driven by clear and thoughtful goals and intentions? Most of these aspects of a design can be traced directly to company culture. Just as the underlying technical architecture surfaces in the product design, so too do the corporate culture, politics, and decision making.

Is the company engineering focused? Sales focused? Does your culture represent your brand? Where do the product goals align with these intentions? Design goals need to align with the business goals, which in turn are directly reflected in the product’s design. The clearer your company goals and mission, the clearer your design intentions will be. This will drive directed design thinking, resulting in useful, elegant, well-designed, desirable products.

I recently read an article that asked what it’s really like inside Apple. The answer: Everyone there embraces design thinking to support the business goals. That is the culture. Everyone’s ideas matter and are subject to the same rigor as a designer’s solution. Great idea? Let’s vet it as we would any design idea or solution. This is what makes a great product. So when someone tells you they want to make products as cool as Apple’s, that they want to innovate, ask them about their culture.

When rationalizing the design thinking and design direction of your products, consider whether the culture they reflect is one you are proud of, because that culture, and the decisions you make within it, will be represented in your product.




Monica Granfield

Monica Granfield is a user experience designer at Symbotic. The views expressed on this website are her own and do not necessarily reflect the views of Symbotic.


The challenges of developing usable and useful government ICTs


Authors: Juan Pablo Hourcade
Posted: Mon, June 30, 2014 - 1:04:52

Governments are increasingly providing services and information to the public through information and communication technologies (ICTs). There are many benefits to this approach. People looking for government-related information can find it much more quickly. Government agencies can update websites more easily than paper documents. Those taking advantage of government services through ICTs can save time and avoid the frustration that often accompanies waiting in line at government agencies. Further, government agencies can save resources when transactions are handled automatically. In addition, ICTs for internal government use have the potential to help manage large amounts of information and handle processes more efficiently.

In spite of the promises of e-government, there have been several notorious failures in the implementation of e-government systems. The most recent example in the United States was the website for applying for health insurance under the Affordable Care Act (also known as Obamacare). The website was not usable by a significant portion of users when it launched. This is only the latest example of an e-government system that does not work as planned and requires additional resources to be functional (if it is not completely scrapped). These challenges have occurred across different administrations, and with different political parties in power. In the United States, historic examples include the Federal Aviation Administration’s air traffic control software, and the Federal Bureau of Investigation’s Virtual Case File [1]. Dada [2] provides examples of e-government failures in lower-income countries.

These failures tend to stem from the difficulty of following modern software engineering and user-centered design methods when contracting with companies for the development of ICTs. These modern methods call for iterative processes of development with significant stakeholder input and feedback. There is an expectation, for example, that detailed requirements will be developed over time, and that some may change. Typical government contracting for ICTs, on the other hand, often assumes that government employees, frequently without any training in software engineering, will be able to deliver an accurate set of requirements to a company that will then build a system with little or no feedback from stakeholders during the development process.

The challenge is that an overwhelming majority of elected officials and political appointees have little or no knowledge of software engineering or user-centered design methods. Even people responsible for ICTs at government agencies may not have any specific training in these methods. It is rare, for example, for government agencies to usability test competing technologies before deciding which one to purchase. 

I saw this first-hand while I worked at the U.S. Census Bureau. At the time, the Census Bureau was planning to use handheld devices to conduct the 2010 Census. In spite of the significant investment to be made, no one in the leadership was familiar with software engineering or user-centered design methods, and they trusted the management of the process to employees with some background in ICTs but no training or experience in handling projects of such magnitude, and little knowledge of appropriate methods. This resulted in the development of a set of requirements that no one understood, and that came largely from long-time employees, with no feedback or consideration for the fact that those who would use the system would be temporary employees. While there was some involvement of usability professionals in the process, it was “too little, too late” and did not have an impact on the methods used. The requirements were turned over to a contractor, and a test of the resulting software resulted in the need to change more than 400 requirements. The project was scrapped after almost $600 million had been spent on the contractor (not counting the resources spent in-house), which meant that the Census Bureau had to spend an extra $3 billion processing paper forms that would have been unnecessary had the software been successfully developed.

So how can we help? HCI researchers and professionals can contribute to public policy by informing elected officials and the leadership at government agencies of the methods that are most likely to result in usable and useful government ICTs that can be developed on time and within a given budget. This, in turn, can inform how government contracts for the development of ICTs are structured, such that they require iterative processes with a significant amount of stakeholder feedback. If these methods are followed, government agencies stand to save resources, and deliver better quality ICTs. This is an area where ACM, SIGCHI, and other professional associations could play a role. If we don’t do it, no one else will.

Endnotes

1. Charette, R.N. Why software fails? IEEE Spectrum 42, 9 (2005), 42-49.

2. Dada, D. The failure of e-government in developing countries: A literature review. The Electronic Journal of Information Systems in Developing Countries 26, 7 (2006), 1-10.




Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.


Organizational behavior


Authors: Jonathan Grudin
Posted: Mon, June 23, 2014 - 8:36:18

Two books strongly affected my view of organizations—those I worked in, studied, and developed products for. One I read 35 years ago; the other I just finished, although it came out 17 years ago.

Encountering Henry Mintzberg’s typology of organizational structure

In 1987, an “Organizational Science Discussion Group” was formed by psychologists and computer scientists at MCC. We had no formal training in organizational behavior but realized that software was being designed to support more complex organizational settings. Of the papers we discussed, two made lasting impressions. “A Garbage Can Model of Organizational Choice” humorously described the anarchy found in universities. It may primarily interest academics; it didn’t seem relevant to my experiences in industry. The discussion group’s favorite was a dense one-chapter condensation [1] of Henry Mintzberg’s 1979 The Structuring of Organizations: A Synthesis of the Research.

Mintzberg observed that organizations have five parts, each with its own goals, processes, and methods of evaluation. Three are the top management, middle management, and workers, which he labels strategic apex, middle line, and operating core. A fourth group designs workplace rules and processes, such as distribution channels, forms, and assembly lines. This he calls technostructure, although technology is not necessarily central. Finally there is everyone else: the support staff (including IT staff), attorneys, custodians, cafeteria workers, and so on.

Mintzberg argues that these groups naturally vie for influence, with one or another usually becoming particularly powerful. There are thus five “organizational forms.” Some are controlled by executives, but in divisionalized companies the middle line has strong autonomy, as when several product lines are managed with considerable independence. In an organization of professionals, such as a university, the workers—faculty—have wide latitude in organizing their work. In organizations highly reliant on regulations or manufacturing processes, the technostructure is powerful. And an “adhocracy” such as a film company relies on a collection of people in support roles.

When I left MCC, I was puzzled to find that Mintzberg’s analysis was not universally highly regarded. Where was supporting evidence, people asked. What could you do with it?  Only then did I realize why we had been so impressed. It arose from the unique origin of MCC.

An act of Congress enabled MCC to open its doors in 1984. In response to a prominent Japanese “Fifth Generation” initiative, anti-trust laws were modified to permit 20 large U.S. companies to collaborate on pre-competitive research. MCC was a civilian effort headed by Bobby Ray Inman, previously the NSA Director and CIA Deputy Director. It employed about 500 people, some from the shareholder companies and many hired directly. Our small discussion group drew from the software and human interface programs; MCC also had programs on artificial intelligence, databases, parallel processing, CAD, and packaging (hardware).

Consider this test of Mintzberg’s hypotheses: Create an organization of several hundred people spread across the five organizational parts, give them an ambiguous charter, let them loose, and see what happens. MCC was that experiment.

To a breathtaking degree, we supported Mintzberg’s thesis. Each group fought for domination. The executives tried to control low-level decisions. Middle managers built small fiefdoms and strove for autonomy. Individual contributors maneuvered for an academic “professional bureaucracy” model. Employees overseeing the work processes burdened us with many restrictive procedural hurdles, noting for example that because different shareholder companies funded different programs, our interactions should be regulated. Even the support staff felt they should run things—and not without reason. Several were smart technicians from shareholder companies; seeing researchers running amok on LISP machines, some thought, “We know what would be useful to the shareholders, these guys sure as hell don’t.”

Mintzberg didn’t write about technology design per se. We have to make the connections. Central to his analysis is that each part of the organization works differently. Executives, middle managers, individual contributors, technostructure, and support staff have different goals, priorities, tasks, and ways to measure and reward work. Their days are organized differently. Time typically spent in meetings, ability to delegate, and the sensitivity of their work differ. Individual contributors spend more time in informal communication, managers rely more on structured information—documents, spreadsheets, slide decks—and executives coordinate the work of groups that rarely communicate directly.

Such distinctions determine which software features will help and which may hinder. Preferences can sharply conflict. When designing a system or application that will be used by people in different organizational parts, it is important to consult or observe representatives of these groups during requirements analysis and design testing.

At MCC we did not pursue implications, but I was prepared when Constance Perin handed me an unpublished paper [2] in 1988. I had previously seen the key roles in email being senders and receivers; she showed that enterprise adoption could hinge on differences between managers, who liked documents and hated interruptions, and individual contributors, who engaged in informal communication and interruption. Over the next 25 years, studying organizational adoption of a range of technologies, I repeatedly found differences among members of Mintzberg’s groups. If it was confirmation bias, it was subtle, because somewhat obtusely I didn’t look for it and was surprised each time. The pattern can also be seen in other reports of enterprise technology adoption. This HICSS paper and this WikiSym paper provide a summary and a recent example.

Clayton Christensen and disruptive technologies

In 1997, Clayton Christensen published The Innovator’s Dilemma. Thinking it was a business professor’s view of issues facing a lone inventor, I put off reading it until now. But it is a nice analysis of organizational behavior based on economics and history, and is a great tool for thinking about the past and the present.

I have spent years looking into HCI history [3], piecing together patterns some of which are more fully and elegantly laid out by Christensen. The Innovator’s Dilemma deepened my interpretations of HCI history and reframed my current work on K-12 education. Before covering recent criticism of this short, easily read book and indicating why it is a weak tool for prediction, I will outline its thesis and discuss how I found Mintzberg and Christensen to be useful.

Christensen describes fields as diverse as steel manufacture, excavation equipment, and diabetes treatment, arguing that products advance through sustaining innovations that improve performance and satisfy existing customers. Eventually a product provides more capability than most customers need, setting the stage for a disruptive innovation that has less capability and a lower price—for example, a 3.5-inch disk drive when most computers used 5.25-inch or larger drives, or a small off-road motorbike when motorcycles were designed for highway use. The innovation is dismissed by existing customers, but if new customers happy with less are found, the manufacturer can improve the product over time and then enter the mainstream market. For example, minicomputers were initially positioned for small businesses that could not afford mainframes, then became more capable and undermined the mainframe industry. Later, PCs and workstations, initially too weak to do much, grew more capable and destroyed the once-lucrative minicomputer market.

An interesting insight is that established companies can fail despite being well-managed. Many made rational decisions. They listened to customers and improved their market share of profitable product lines rather than diverting resources into speculative products with no established markets.

Some firms that successfully embraced disruptive innovations learned to survive with few sales and low profit margins. Because dominant companies are structured to handle large volume and high margins, Christensen concludes that a large company can best embrace a disruptive innovation by creating an autonomous entity, as IBM did when it located its PC development team in Florida.

Using the insights of Mintzberg and Christensen for understanding

For decades, Mintzberg’s analysis has helped me understand the results of quantitative and qualitative research, mine and others’, as described in the papers cited above and two handbook chapters [4]. Reading The Innovator’s Dilemma, I reevaluated my experiences at Wang Laboratories, a successful minicomputer company that, like the others, underestimated PCs and Unix-based workstations. It also made sense of more recent experiences at Microsoft, as well as events in HCI history.

For example, a former Xerox PARC engineer recounted his work on the Alto, the first computer sporting a GUI that was intended for commercial sale. A quarter century later he still seemed exasperated with Xerox marketers for pricing the Alto to provide the same high-margin return as photocopiers. With a lower price, the Alto could have found a market and created the personal computer industry. The marketing decision seems clueless in hindsight, but in Christensen’s framework it can be seen as sensible unless handling a disruptive innovation—which the personal computer turned out to be.

A colleague said, “An innovator’s dilemma book could be written about Microsoft.” Indeed. It would describe successes and failures. Not long after The Innovator’s Dilemma was published, Xbox development began. The team was located far from the main Redmond site, reportedly to let them develop their own approach, as Christensen would recommend. Unsuccessful efforts are less easily discussed, but Courier might be a possibility.

Using (or avoiding) the frameworks as a basis for predictions

Mintzberg’s typology has proven relevant so often that I would recommend including members of each of his groups when assessing requirements or testing designs. His detailed analysis could suggest design features, but because of the complex, rapidly evolving interdependencies in how technology is used in organizations, empirical assessment is necessary.

Christensen is more prescriptive, arguing that sustaining innovations require one approach and a timely disruptive innovation requires a different approach. But if disruptiveness is a continuum, rather than either-or, choosing the approach could be difficult. And getting the timing right could be even trickier. Can one accurately assess disruptiveness? My intuition is, rarely.

Christensen courageously concluded the 1997 book by analyzing a possible disruptive innovation, the electric car. His approach impressed me—methodical, logical, building on his lessons from history. He concluded that the electric car was disruptive and provided guidance for its marketing. In my view, this revealed the challenges. He projected that only in 2020 would electric vehicle acceleration intersect mainstream demands (0 to 60 mph in 10 seconds). Reportedly the Nissan Leaf has achieved that and the Tesla has reached five seconds. On cruising range he was also pessimistic. Unfortunately, his recommendations depend on the accuracy of these and other trends. He suggested a new low-end market (typical for the disruptive innovations that he studied) such as high school students, who decades earlier fell in love with the disruptive Honda 50 motorcycle; instead, electric cars focus on appealing to existing high-end drivers. A hybrid approach by established manufacturers, which failed for his mechanical excavator companies, has been a major automobile innovation success story.

Christensen reverse-engineered success cases, a method with weaknesses that I described in an earlier blog post. We are not told how often plausible disruptive innovations failed or were developed too soon. Christensen says that innovators must be willing to fail a couple times before succeeding. Unfortunately, there is no way to differentiate two failures of an innovation that will succeed from two failures of a bad or premature idea. Is it “the third time is a charm” or “three strikes and you’re out”? If 2/3 of possible disruptive innovations pan out in a reasonable time frame, an organization would be foolish not to plan for one. If only one in 100 succeed, it could be better to cross your fingers and invest the resources in sustaining innovations.

Our field is uniquely positioned to explore these challenges. Most industries studied by Christensen had about one disruptive innovation per century. Disk drives, which Christensen describes as the fruit flies of the business world, were disrupted every three or four years. He never mentions Moore’s law. He was trying to build a general case, but semiconductor advances do guarantee a flow of disruptive innovation. New markets appear as prices fall and performance rises. A premature effort can be rescued by semiconductor advances: The Apple Macintosh, a disruptive innovation for the PC market, was released in 1984. It failed, but models released in late 1985 and early 1986 with more memory and processor power succeeded.

Despite the assistance of Moore’s law, the success rate for innovative software applications has been estimated to be 10%. Many promising, potentially disruptive applications failed to meet expectations for two or three decades: speech recognition and language understanding, desktop videoconferencing, neural nets, workflow management systems, and so on. The odds of correctly timing a breakthrough in a field that has one each century are worse. Someone will nail it, but how many will try too soon and be forgotten?

The weakness of Christensen’s historical analysis as a tool for prediction is emphasized by Harvard historian Jill Lepore in a New Yorker article appearing after this post was drafted. Some of Christensen’s cases are more ambiguous when examined closely, although Christensen did describe exceptions in his chapter notes. Lepore objects to the subsequent use of the disruptive innovation framework by Christensen and others to make predictions in diverse fields, notably education.

These are healthy concerns, but I see a lot of substance in the analysis. No mainframe company succeeded in the minicomputer market. No minicomputer company succeeded in efforts to make PCs. They were many, they were highly profitable, and save IBM, they disappeared.

I’ll take the plunge by suggesting that a disruptive innovation is unfolding in K-12 education. The background is in posts that I wrote before reading Christensen: “A Perfect Storm” and “True Digital Natives.” In Christensen’s terms, 1:1 device-per-student deployments transform the value network. They enable new pedagogical and administrative approaches, high-resolution digital pens, advanced note-taking tools, and handwriting recognition software (for searching notes). As with many disruptive innovations at the outset, the market of 1:1 deployments is too small to attract mainstream sales and marketing. But appropriate pedagogy has been developed, prices are falling fast, and infrastructure is being built out. Proven benefits make widespread deployment inevitable. The question is, when? The principal obstacle in the U.S. is declining state support for professional development for teachers.

Conclusion: the water we swim in

Many of my cohort have worked in several organizations over our careers. Young people are told to expect greater volatility. It makes sense to invest in learning about organizations. If you start a discussion group, you now have two recommendations.

Endnotes

1. Published in D. Miller & P. H. Friesen (Eds.), Organizations: A Quantum View, Prentice-Hall, 1984 and reprinted in R. Baecker (Ed.), Readings in Computer Supported Cooperative Work and Groupware, Morgan Kaufmann, 1995.

2. A modified version appeared as Electronic social fields in bureaucracies.

3. A moving target: The evolution of HCI. In J. Jacko (Ed.), Human-computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications. (3rd edition). Taylor & Francis, 2012. An updated version is available on my web page.

4. J. Grudin & S. Poltrock, 2012. Taxonomy and theory in Computer Supported Cooperative Work. In S.W. Kozlowski (Ed.), Handbook of Organizational Psychology, 1323-1348. Oxford University Press. Updated version on my web page; 
J. Grudin, 2014. Organizational adoption of new communication technologies. In H. Topi (Ed.), Computer Science Handbook, Vol. II. Chapman & Hall / CRC Press.

Thanks to John King and Gayna Williams for discussions.




Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


May I have your attention


Authors: Ashley Karr
Posted: Sun, June 01, 2014 - 3:40:26

Takeaway: Removing ourselves from stimulation, electronic or otherwise, is crucial for our brains to function at their peak, and focusing on one task at a time with as little outside distraction as possible is the best way to increase task performance.

I will begin this article by saying that I love meta. The fact that we build new technologies to study how technology affects us makes me laugh. Anyway, what this article is really about is attention, so that is where I will focus ours.

The modern field of attention research began in the 1980s, when brain-imaging machines became widely available. Researchers found that shifting attention from one task or point of focus to another greatly decreased performance. No exceptions. No excuses. No special cases. Human beings do not perform as well as possible in any task when they multitask.

Studies also show that simple anticipation of another stimulus or task can take up precious resources in our working memory, which means we can’t store and integrate information as well as we should. Additionally, downtime is very important for the brain. During downtime, the brain processes information and turns it into long-term memories. Constant stimulation prevents information processing and solidifying, and our brains become fatigued. 

It appears that removing ourselves from stimulation, electronic or otherwise, is crucial for our brains to function at their peak, and focusing on one task at a time with as little outside distraction as possible is the best way to increase task performance. Some studies have found that people learn better after walking in rural areas as opposed to walking in urban environments. Researchers are also investigating how electronic micro-breaks, like playing a two-minute game on a cell phone, affect the brain. Initial findings do not support electronic micro-breaks as true “brain breaks” that allow for information processing and prevent mental fatigue.

Based on this research, I brainstormed a few ways we can de-stimulate. Here are some of my ideas:

  • Only answer and respond to emails for a window of one to two hours a day.
  • Take a five-minute break by going outside and sitting on a bench or the grass. Just sit there. Don’t even bring your phone.
  • Unplug your TV and wireless router at least one day a week.
  • Go camping.
  • Turn off your cell phone for an hour, a day, or an entire weekend.
  • Only make phone calls at a designated spot and time in a quiet place away from distractions.
  • Stop reading this article, turn off your computer, put down your phone, and go outside.



Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


So close but yet so far away


Authors: Monica Granfield
Posted: Thu, May 29, 2014 - 8:12:06

In the last five days I have clocked at least eight hours on three Fortune 100 consumer websites for tasks that should not take more than an hour combined. What made my poor experiences even more ironic is that these are companies in the service industry that pride themselves on innovation around the customer experience!

This has left me scratching my head, wondering how these customer-centric companies could have veered so greatly from their missions and failed at the interaction level. The sites were professionally created consumer sites—well branded, with impeccable visual presentations, thus setting my expectations high for a pleasant user experience.

In two cases the scenarios were not directly generating revenue; instead, I was asking for assistance with a recent purchase. The experiences were poorly thought-out task flows—one provided little guidance and feedback for entering materials, and the other was, as it turns out, operationally incorrect. Both of these interaction experiences failed and forced me to call customer support.

In the first case, with JetBlue, once the voice on the other end of the phone materialized, I assumed I could breathe a sigh of relief. The problem would be cleared up and I could get these tasks off of my plate. No such luck. The experience continued to deteriorate over the phone. Two calls mysteriously dropped midway through my help session. Remaining calm, I continued to forge on. I was told twice in no uncertain terms that the website would not allow me to do what I had done, so the service rep would not instruct me on how to rectify the situation and properly register my family for the program offered. I had been allowed to invite minors to my family pool without a frequent flyer number—there were no instructions to tell me otherwise—and the UI allowed me to accomplish this. I was fortunate that in this case an actual confirmation e-mail had been sent to me. This did not help in promoting my cause or rectifying the situation with the service rep, who told me, "No, that can't happen." “Hmmm,” I thought, “then what am I looking at?” The steps to sign up for the program had no instructions and were completely undiscoverable. So much so that, in the end, the representative had to put me on hold for 10 minutes to go and somehow delete whatever had magically been made possible via the website and sign me up manually. The steps that followed to get my family "registered" were also not discoverable, and I could not have completed the process (and remember, I am a seasoned computer user/UX designer) without the assistance of the customer service representative. During the two hours that it took to accomplish this 10-minute task, my husband must have asked me 10 times, "Is this really worth it?" while my oldest child intermittently hollered, "Virgin, we should have flown Virgin." Well, our tickets were already purchased, but how many other people are not bothering, and abandoning this program or the airline altogether, based on these types of poor user experiences? I finally did get us all signed up for the program, although we are still not sure if my miles are in the pool or not—hard to tell! I am excited for the trip, regardless of the UX fail. However, will I choose this airline next time I travel? Depends on my experience.

In the other case I spent upwards of 90 minutes painstakingly entering information, photos, and receipts into a highly unusable form and process with little or no instruction, only to hear nothing back from Disney until I picked up a phone and called them directly two weeks later. This was all due to a lost activation code for a child's CD, printed on a loose piece of paper that is easily misplaced by an excited child waiting to watch the latest release of a Disney flick! Why not print the code on the CD or the insert in the case? I did pay for it. Why is it so difficult to get this number back or to more securely adhere it to the packaging? I will admit, once when attempting to use their site to book a trip to Disney World, I became so dismayed that we stayed outside the park. Sorry, Walt. Of course I was not allowed to enter a new case for this issue, as one for this product ID already exists! UX fail.

My last and perhaps most disturbing experience is with a tactic used by many sites, including Care.com and SitterCity.com, to passively—and, it seems to me, questionably—collect revenue from users without their knowledge. Ironically, the audience is busy parents. You check a box that says "Do not auto charge my account after my selected pay period ends" and, lo and behold, you are charged anyway. And when you call to contest the charges—assuming you catch them and call immediately—they will apologetically "refund" your money. The tactic relies on the fact that you can't recall whether you actually checked off that little option box—and where, oh where, was that little box anyway? It has left me saying, UX fail.

As an experience designer I have been left wondering what to think about these experiences.

My conclusion is this: Scenarios that do not generate obvious revenue are not given UX priority or the attention needed to craft an elegant and usable recovery experience. Recovery experiences are important scenarios that, if not carefully considered, will eventually result in lost revenue. Evidence of this can be seen in Jared Spool’s blog post "The $300 Million Button." The business intent of that design was to get users registered; users who did not want to register became frustrated and abandoned their purchases. Once this was identified and rectified, there was a significant increase in revenue. Abandonment of a product or brand can happen as often after a purchase as before or during one. Companies need to embrace UX design across the end-to-end experience and consider how it impacts the business from the customer’s perspective, not just through the obvious revenue-generating channels. This will lead to a better user experience, repeat customers, and increased revenue. That is what good UX can do!



Posted in: on Thu, May 29, 2014 - 8:12:06

Monica Granfield

Monica Granfield is a user experience designer at Symbotic. The views expressed on this website are her own and do not necessarily reflect the views of Symbotic.


Philosophical robbery


Authors: Jonathan Grudin
Posted: Wed, May 28, 2014 - 7:19:20

In 1868 I read Dr. Holmes's poems, in the Sandwich Islands. A year and a half later I stole his dedication, without knowing it, and used it to dedicate my "Innocents Abroad" with. Ten years afterward I was talking with Dr. Holmes about it. He was not an ignorant ass—no, not he; and so when I said, "I know now where I stole, but who did you steal it from?" he said, "I don't remember; I only know I stole it from somebody, because I have never originated anything altogether myself, nor met anybody who had."

—Samuel Clemens (Mark Twain) in a letter to Anne Macy, reprinted in Anne Sullivan Macy, The Story Behind Helen Keller. Doubleday, Doran, and Co., 1933.

Accounts of plagiarism are epidemic. Charged are book authors, students, journalists, scientists, executives, and politicians. Technology makes it easier to find, cut, and paste another’s words—and easier to detect transgressions. Quotation marks and a citation only sometimes address the issue. Cats and mice work on tools for borrowing and detection, but technology is shifting the underlying context in ways that will be more important.

Plagiarism or synthesis: Plague or progress?

We appreciate novelty in art and technology. We may also nod at the adage, “There is nothing new under the sun.” Twain isn’t alone in questioning the emphasis on originality that emerged in the Enlightenment. Arthur Koestler’s The Act of Creation is a compelling analysis of the borrowing that underlies literary and scientific achievement. Although we encourage writers to cite influences, we know that a full accounting isn’t possible. Further complicating any analysis is the prevalence of cryptomnesia or unconscious borrowing, which fascinated Twain and has been experimentally demonstrated. Writers of undeniable originality, such as Friedrich Nietzsche, borrowed heavily without realizing it.

Believing that an idea is original could motivate one to work harder, perhaps borrowing more and building a stronger synthesis. The aspiration to be original could have this benefit. I’ve seen students, faculty, and product designers lose interest when shown that their work was not entirely “invented here.” They might have been more productive if unaware of the precedent.

An earlier post on creativity, which cited a professor who directs students to submit work in which every sentence is borrowed, proposed that the availability of information and the visibility of precedents will shift our focus from originality to a stronger embrace of synthesis. It seems a cop-out to say that synthesis is a form of originality. The distinction is evident in “NIH syndrome,” the reluctance to build overtly on the work of others. 

Prior to considering when citation is and is not required or perhaps even a good thing, let’s establish that there is no universal agreement on best practices.

Cultural differences

Some professors say, “I learn more from my students than they do from me.” As a professor I learned from students, but I hope they learned more from me, because I was a slow learner. One afternoon at culturally heterogeneous UC Irvine, I realized that a grade-grubber who had all term shown no respect for my time by arguing endlessly for points had in fact been sincerely demonstrating respect for the course and for my regard, which he felt a higher grade would reflect. Raised in a haggling culture abroad, he assumed that I understood that his efforts demonstrated respect, and almost fell on the floor in terror when I said mildly and constructively that he was developing a reputation for being difficult. It had a happy ending.

The faculty shared plagiarism stories. My first lecture in a “technology and society” course included a plagiarism handout. I explained it, asked if they understood it, and sometimes asked everyone who planned to plagiarize to raise a hand “because one of you probably will, and it will be a lot easier if you let me know now.” Gentler than some colleagues, I only failed a plagiarist on the assignment. But that was enough to affect the grades of students, many of whom were Asian Americans whose families counted on them to become engineers. Parents dropped some at the university in the morning and picked them up in the afternoon.

In 1995 I spent a sabbatical in a top lab at a leading Asian university. I discovered that uncited quotation was acceptable. Students plagiarized liberally. Uncited sentences and paragraphs from my publications turned up in term papers for my class. I thought, “OK, we make a big deal of quotation marks and a reference. They don’t.” This didn’t shock me. My first degrees were in math and physics, where proof originators were rarely cited or mentioned. No “Newton, 1687.”

An end-of-term event riveted my attention. Each senior undergraduate in the lab was assigned a paper to present to the faculty and students as their own work, in the first person singular! Organized plagiarism! It was brilliant. The student must understand the work inside out. A student who is asked “Why did you include a certain step?” can’t say “I don’t know why the authors did that.”

I recognized it. I once took a method acting class. Good actors are plagiarists, marshalling their resources by convincing themselves that the words in a script are their own. Plagiarism as an effective teaching device!? Be that as it may, after years of teaching, one of the two grades I regret giving was to a fellow who, before my discovery, may have followed parental guidance: work hard to find and reproduce relevant passages. He just hadn’t absorbed our custom of bracketing them with small curlicues.

Copyright violation

Plagiarism is not a crime, but violating copyright is. U.S. copyright law isn’t fully sorted out, but it represents a weighing of commercial and use issues, and a not yet fully defined concept of “fair use” exceptions that considers the length, percentage, and centrality of the reproduced material, the effect of copying on the market value of the original, and the intent (a parody or critical review that reduces the original’s market value may include excerpts).

My focus is on ethical and originality considerations, so for copyright infringement guidance consult your attorney. I once inquired into how much a copyrighted paper must be changed to republish it. I found a vast gulf in opinion between seasoned authors (“very little”) and publishers (“most of it”). Publishers haven’t seemed to bother about scientific work, but with plagiarism-detection and micropayment-collection software, that could change.

Factors in weighing originality and ethics

1. How exact is the copy, from identical to paraphrase to “idea theft”? What is the transgression—lack of giving credit? An explicit or implied false assertion of originality or effort? A false claim to understand the material?

Students are told, “Put it in your own words, then it isn’t plagiarism.” This is true when the information is general knowledge. Paraphrasing a passage from a textbook, a lecture, or a friend’s work may suffice. Information from a unique source, such as a published paper, generally deserves and is improved by a source citation.

If omitting a citation causes readers to infer that an author originated the work, it crosses the line. For example, a journalist who uses the work of other journalists, even if every sentence is rewritten, creates a false impression of having done the reporting and is considered a plagiarist. Crediting the original journalist solves the problem if copyright isn’t violated. There are grey areas—reports of press conferences may not identify those who asked the questions. When a copyright expires, anyone can publish the work, but to not credit the author would be bad form.

With student work that is intended to develop or demonstrate mastery, copying undermines the basic intent. Especially digital copying—some teachers have students write out work by hand, figuring that even if copied from a friend’s paper, something could stick as it goes from eyes through brain to fingers. For a student who has truly mastered a concept, copying “busy work” is less troubling. (We hope computer-based adaptive learning, like one-to-one tutoring, will reduce busy work.)

Idea theft is an often-expressed concern of students and faculty. We may agree that ideas are cheap and following through is the hard part, but to credit a source of an idea is appropriate even when the borrowing is conceptual.

2. When is attribution insufficient?

As noted above, attribution won’t shield an author from illegal copyright violation. Although the law is unsettled, copying with or without attribution may be allowed for “transformative works” to which the borrower has made substantive additions. Transformative use wouldn’t justify idea theft—finding inspiration in the work of others is routine, but not developing the idea of someone who might intend to develop it further.

3. When is attribution unnecessary? How is technology changing this?

In cases of cryptomnesia or unconscious plagiarism of the sort Mark Twain owned up to, attribution is absent because the author is unaware of the theft. Experiments have shown that unconscious borrowing is easy to induce and undoubtedly widespread. Nevertheless, a few years ago, a young author had a positively reviewed book withdrawn by the publisher after parallels were noticed in a book she acknowledged having read often and loved. The media feeding frenzy was unjustified; it was clearly cryptomnesia, with few or no passages reproduced verbatim.

Homer passed on epic tales without crediting those he learned them from. Oral cultures can’t afford the baggage. Change was slow and is not complete, and today cultures vary in their distance from oral traditions. When printing arrived, “philosophical robbery” was rampant. Early journals reprinted material from other journals without permission. Benjamin Franklin invented some of his maxims and appropriated others without credit. Only recently have we decided to expend paper and ink to credit past and present colleagues for the benefit of present and future readers.

Shakespeare borrowed heavily from an earlier Italian work in writing Romeo and Juliet, on which 1957’s West Side Story was based. The first version of West Side Story was shelved in 1947 when the authors realized how much they’d borrowed from other plays that were also based on Shakespeare. Twain again: “Substantially all ideas are second-hand, consciously and unconsciously drawn from a million outside sources, and daily used by the garnerer with a pride and satisfaction born of the superstition that he originated them; whereas there is not a rag of originality about them anywhere except the little discoloration they get from his mental and moral calibre and his temperament, which is revealed in characteristics of phrasing. . . . It takes a thousand men to invent a telegraph, or a steam engine, or a phonograph, or a photograph, or a telephone, or any other important thing—and the last man gets the credit and we forget the others. He added his little mite—that is all he did.”

“Gladwellesque” books are artful syntheses of others’ work. Some of the contributing scholars may grumble that they should share in royalties, but at least they get credit, which is their due and which gives authority to a synthesis. However, a writer constantly weighs when contributions merit overt credit. In the natural sciences citation is often omitted. It slows the reader and distracts from the elegance of the pure science. It forces a writer to take sides in historical paternity/maternity quarrels and decide whether his or her slight improvement in the elegance of a proof also merits mention.

In less formal writing the custom is to acknowledge less. Magazines may limit or altogether eliminate citations, allowing only occasional mentions in the text.

Consider this essay. I cited a primary source for the Twain quotations, but not the secondary source where I found them. I didn’t note that the first complaint of “philosophical robbery” that I know about was by the chemist Robert Boyle soon after the printing press came into use, or where I learned that. I didn’t credit Wikipedia for the origins of West Side Story.

Technology is rapidly expanding the realm of information that I consider common knowledge that needs no citation. My rule of thumb is that if a reader can find the source in fewer than five seconds with a search engine and obvious keywords, I don’t need to cite it, although sometimes I will. For example, anyone can quickly learn that “there is nothing new under the sun” comes from Ecclesiastes.

Reasons for omission vary. I provided a source for cryptomnesia but not for the author who fell afoul of it, feeling that after the lack of media generosity she has a “right to be forgotten.” It is also a tangled web we weave when we practice to communicate directly (a transformative borrowing I shall leave uncredited).

As we focus on building plagiarism detectors to trip up students, technology will make all of our borrowings more visible, the conscious and the unconscious. There is no turning back, but the emerging emphasis on synthesis may resonate more with the oral tradition of aggregation than with the recent focus on individual analysis. In the swarm as in the tribe, credit is unnecessary.



Posted in: on Wed, May 28, 2014 - 7:19:20

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Bringing together designers, ePatients, and medical personnel


Authors: Richard Anderson
Posted: Fri, May 23, 2014 - 8:06:54

Back in 1989–1991, I served on the committee that founded BayCHI, the San Francisco Bay Area chapter of ACM SIGCHI. I became its first elected chair and served as its first appointed program chair for 12 years. I also served as SIGCHI’s Local Chapters chair for five years, supporting the founding and development of SIGCHI chapters around the world.

Much has happened since then. Perhaps of greatest significance were my horrific experiences with the U.S. healthcare system. My healthcare nightmare changed my life and has prompted me to focus on what can be done to dramatically redesign the healthcare system and the patient experience. Indeed, several of my Interactions blog posts reflect that focus, with a large part of that focus being on changing the roles and relationships of and between patients and medical personnel and designers. You’ll see that in, for example, “Utilizing patients in the experience design process,” “Learning from ePatient (scholar)s,” “Are you trying to solve the right problem?,” “The importance of the social to achieving the personal,” and “No more worshiping at the altar of our cathedrals of business.”

All this has led me to start a new local chapter, but this one is not of SIGCHI. This one is for a combination of ePatients, medical personnel, and designers. This one is for changing the healthcare system. This one is the first local chapter of the Society for Participatory Medicine.

Topics/issues to be addressed by the chapter should be of interest to many Interactions readers. They include the ePatient movement, peer-to-peer healthcare, other uses of social media in healthcare, human-centered healthcare design and innovation, doctors and patients as designers, the quantified self, patient and doctor engagement, empathy, healthcare technology, patient experiences of the healthcare system, and more. When Jon Kolko and I were the editors-in-chief of Interactions, we published lots of articles that addressed this level of topics/issues. One of those was a cover story entitled “Reframing health to embrace design of our own well-being.” (Somewhat coincidentally, two of the article’s authors made a presentation about the content of the article at a BayCHI meeting.)

If you reside anywhere in the San Francisco Bay Area and are interested in the topics/issues listed above, I invite you to join this new local chapter. If you know of others in the San Francisco Bay Area who you think might be interested, please let them know about the group as well.

The chapter is just starting. Indeed, our first meeting has not yet been scheduled, as I'm still seeking venue options (and sponsors). If you know of any venue (or sponsor) possibilities, please let me know.

It feels good to be getting back into the local chapter business. I hope you’ll check us out.



Posted in: on Fri, May 23, 2014 - 8:06:54

Richard Anderson

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.


Why teaching tech matters


Authors: Ashley Karr
Posted: Fri, May 16, 2014 - 7:12:56

Education is one of the most valuable ways that we can improve quality of life for ourselves and others. This improvement applies to teachers as well as students. Having just taught a ten-week user experience design immersive course, I am keenly aware of how my life is better as a result. The following is a list of how my life has improved thanks to my co-instructor, our students, course producer, and supportive staff:

Meaning and engagement 

Before I began this course, I was burnt out. My work had lost meaning and engagement. I had been building technology for organizations and groups that had lost their soul and passion for good design. Instead, they pursued profit and fantasy deadlines. The irony of engaging in a type of engineering called “human-centered design” and “user experience design” in environments like this was not amusing. After the first day of this course, meaning and engagement had found their way back into my Monday through Friday 9 to 5. Why? The people. But I will write more about that later.

Leverage 

I found my way into anthropology, human factors engineering, human-computer interaction (HCI), and user experience (UX) design because I truly care about our world, its people, and other living creatures. I saw how these fields could help me operationalize my instinct to help. Additionally, by leveraging the power of computing technology, I could help a lot of people with minimal effort. (Spoken like a true humanitarian engineer!)

What I now realize is that by teaching others to be empathetic, ethical, human-centered designers and makers of computing technology, I am leveraging the boundless energy and power of my students as well. In the past ten weeks, my co-instructor and I have overseen roughly eighty projects, and a number of these have the potential to become consumer-facing. Our eighteen students will go on to rich careers in UX, and I think it is safe to say that each will be involved in at least one small design improvement per month for at least the next ten years. That means my co-instructor and I will be part of at the very least 2,160 design improvements over the next decade, because our students will draw from the principles learned in our class to do their jobs properly. If each of those design improvements saves ten million people one minute of their time on a mundane task like bill pay, then 21,600,000,000 minutes have been freed to spend on hugging children, taking deep breaths, and other meaningful things.
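
For anyone who wants to check the back-of-the-envelope math, here is a minimal sketch in Python. The student count, improvement rate, reach, and minutes saved are simply the assumptions stated in the paragraph above, not measured data:

    # Back-of-the-envelope estimate of downstream impact (assumed figures).
    students = 18                  # course graduates
    improvements_per_month = 1     # small design improvements per student
    years = 10

    total_improvements = students * improvements_per_month * 12 * years
    print(total_improvements)      # 2160 design improvements over a decade

    people_reached = 10_000_000    # assumed users touched by each improvement
    minutes_saved_per_person = 1
    total_minutes_freed = total_improvements * people_reached * minutes_saved_per_person
    print(total_minutes_freed)     # 21,600,000,000 minutes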

Deepening my KSAOs

KSAOs are the job-related knowledge, skills, attitudes, and other characteristics necessary to perform a job successfully. When you teach a subject, your KSAOs deepen, because teaching is not separate from but an integral part of the learning process. My co-instructor and I feel that we’ve learned more than our students by teaching them UX fundamentals. I am not suggesting that the students haven’t paid attention—on the contrary! It is their questioning, challenging, creating, and building upon what we’ve taught them that has enabled us to achieve an even greater mastery of our trade. Interestingly, I can now discern which of my professional peers have taught and which have not. Teaching, like parenting, gives one a sense of humility and compassion that is hard to reach without the challenges that students and children place in front of you. I have to add, in closing this paragraph, that my co-instructor and I are very, very proud of our students for thinking critically, independently, and deeply about design and technology—the good kind of proud.

Community and personal relationships

It is always and forever about the people. When I applied for the position to teach the UX design immersive course, I focused on the students and the relationships I would create with them. I thought about the people they were before applying for the course, the experiences we would have together over ten weeks, and the people they would become after graduation. I was so excited to meet the students on the first day of class, see their faces, hear their voices, and get to know them as people rather than social media profiles. The relationships I have built with my students mean more than I had anticipated, and I have the added joy of finding life-long friends in my co-instructor, Jill; the course producer, Jaime; and the staff that supported us through the course. Beyond that, I have met many wonderful guest speakers, leaders, community organizers, and other professionals who have also enriched my experience and career. I am so thankful that all these wonderful people are now in my life, and that we are all working at least forty hours per week—two thousand hours a year—to help make the world a better place. I get chills when I think about it.

Gratitude

For no reason whatsoever, I was born into a situation where I had opportunities that few people have ever had. I am a literate, educated, financially independent person developing cutting-edge technology. I am keenly aware of and grateful for these opportunities and believe them accidents of history and birth and not something that I earned or deserve. It seems that teaching others to be empathetic, ethical, human-centered designers of computing technology is a good way of making sure this privilege is not wasted.

Inspiration

I am happy to say that I am no longer burnt out, and I have rediscovered meaning and engagement in my work thanks to my students, co-instructor, course producer, and supportive staff. I have found that when I lack inspiration, motivation, and energy, what I am missing are quality relationships and interactions with my peers and colleagues. 

In conclusion

I hope this inspiration carries over to you, the reader, and encourages you to become a teacher or mentor. I will end this essay with a direct quote from my co-instructor and good friend, Jill DaSilva. Before I sat down to write this article, I asked her why she decided to teach this course. Here is her answer:

I teach because I have the opportunity to give back. At a time in my life when I needed to support my son and myself, there were people there to help me, teach me, and give me the chance to do what I loved for a living. I’m paying it forward. Also, what we make is meaningful, and I get to teach our students how to create things that improve other people’s circumstances. If we can remove suffering and increase happiness through what we make, then we are living good lives.


Posted in: on Fri, May 16, 2014 - 7:12:56

Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


Designing the cognitive future, part IV: Learning and child development


Authors: Juan Pablo Hourcade
Posted: Thu, May 15, 2014 - 7:29:54

In this post, I discuss how technology may affect learning and child development in the future, and how the HCI community can play a role in shaping what happens.

Let’s start with a quick primer on some of the latest theories of child development, such as dynamic systems theories and connectionism. These theories attempt to bridge what we know about the biology of the brain with well-established higher-level views on development from the Piagetian and socio-cultural traditions. They see learning as change, and they study how change happens.

One of the main emphases of these theories is on the notion of embodiment. They see learning and development occurring through interactions between the brain, the body, and the environment (including other people). When we learn to complete a task, we learn how to do it with our bodies, using the resources available in the environment. As learning, change, and development occur, the brain, the body, and the environment learn, change, and develop together.

These approaches also bring a “biological systems” view of the brain, with small components working together to accomplish tasks, and with knowledge representations, behaviors, and skills emerging over time. Emerging skills, for example, are likely to show a great deal of variability initially, with the best alternatives becoming more likely over time. This also links to the concept of plasticity: it is much easier for younger people to change behavior and learn new skills (they also show greater variability in behavior), while doing so becomes more challenging later in life.

So how does all this link to technology? I think technology brings significant challenges and opportunities. The biggest change, perhaps the most radical in the history of humanity, is in the environments with which children may interact in the future. The richness of these environments, and the ability to modify them and develop with them, will be unprecedented. In particular, there is the potential to give children access to appealing media for building and learning things that match their interests. Much of the research at the Interaction Design and Children (IDC) conference follows this path.

The biggest challenge is in making sure that technology doesn’t get in the way of the human connections that are paramount to child development. A secure attachment to primary caregivers (usually parents) plays a prominent role in helping children feel secure, regulate their emotions, learn to communicate, connect with others, self-reflect, and explore the world with confidence. We have increasing evidence that interactive devices are not always helping in this respect. For example, a recent study by Radesky and colleagues at Boston Medical Center found that parental use of interactive devices during meals led to negative interactions with children. 

Likewise, when providing children with access to interactive media, we need to make sure that this happens in a positive literacy environment. Typical characteristics of positive literacy environments include shared activities (e.g., reading a book or experiencing educational media together) and quality engagement by primary caregivers (e.g., use of wide, positive vocabulary). Obviously, access to appropriate media is also necessary. What are some characteristics to look for? The better options will provide open-ended possibilities, encourage or involve rich social interactions, and incorporate symbolic play and even physical activity.

So how should we design the future of learning? One path is to replace busy parents and teachers with interactive media that take their place, and may even provide children with emotional bonds (similar to the film Her), making sure they are able to accomplish tasks according to standardized measures. The path I would prefer is for technology to enrich the connections between children, caregivers, teachers, and peers; to expand our ways of communicating; to provide more options for engaging in activities together; and to enable self-expression, creativity, and exploration in unprecedented ways. 

How would you design the future of learning?



Posted in: on Thu, May 15, 2014 - 7:29:54

Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.


Margaret Atwood: Too big to fail?


Authors: Deborah Tatar
Posted: Mon, May 12, 2014 - 7:27:33

Margaret Atwood gave the opening plenary for the CHI conference in Toronto in late April. When Atwood’s name was announced at the Associate Chairs meeting the prior December, the audience was divided into two groups, the “Who?” group and the group that gasped “The Margaret Atwood?” Even though the second group was significantly smaller than the first, Atwood’s presence was a coup for the conference. In retrospect, we were the beneficiaries of her slide into entrepreneurial endeavor. We benefited in two ways: first because her entrepreneurial interests are probably why she accepted the gig, and then because she brought her narrative powers to describing design development. More than one person in the audience muttered that she was the deciding factor in their conference attendance, some because she is a great writer and some because she is a kind of futurist. The rest came to her keynote because CHI told them to. Luckily, because she is a great writer and a futurist, she could not fail to please, even with a presentation based primarily on voice and content. She used PowerPoint, but only to illustrate, not to structure. She did please.

To my mind, the most interesting part was the description of her childhood in the far north of Quebec, without running water, school, contact with the outside world, or friends. She and her brother engaged in the kind of intense creative endeavor that the Brontë children (Charlotte, Emily, and Anne; authors of Jane Eyre, Wuthering Heights, and The Tenant of Wildfell Hall, respectively) did in the early 19th century on the Yorkshire moors, but happily neither Atwood nor her brother died early of tuberculosis. Also happily, Atwood was influenced by Flash Gordon rather than Pilgrim’s Progress. And her experience with the can-do (actually, the must-do) spirit required for existence in the wild contributed to her intrepid voice.

Her talk was charming and interesting. And she correctly pointed out the importance of self-driven, unstructured exploration in creativity. In fact, her discussion was very similar to a speech Helen Caldicott, the founder of Physicians for Social Responsibility, gave in the mid-1990s, reminiscing on the dangerous chances that were an everyday part of her childhood in Australia and the judgment that skirting such dangers taught. My own paper on playground games and the dissemination of control in computing (DIS 2008) was based on that talk as well as two other factors: my memory of routine freedom in my own childhood in Ohio (“Just be home in time for dinner!”) and Buck’s Rock Creative and Performing Arts Camp in New Milford, Connecticut. Buck’s Rock was founded in 1942 by German refugees and permitted students to choose their own activities all day long, every day. The brief, blissful, and extremely expensive month I spent at what was then called “Buck’s Rock Work Camp” in the summer of 1973 set my internal compass for life.

Despite the considerable interest of her story, Ms. Atwood was deeply wrong in one respect, and in some way it is her error rather than her perception that is the important take-away. She repeated at least twice, and perhaps more often, that we cannot build what we cannot imagine. 

Oh, if only she were right! But she is wrong. She ignores the existence of banks too big to fail. We might, more formally, refer to this and related phenomena as emergent effects. Mitch Resnick, Uri Wilensky, and Walter Stroup have been writing for years about teaching children to model the emergent effects of complex systems, using a distributed parallel version of the Logo computer language called StarLogo. Mitch has a lovely small 1997 book from MIT Press called Turtles, Termites, and Traffic Jams. And of course the notion of complexity theory as pursued at the Santa Fe Institute brings formality and rigor to the structure of information.

These are some of the intellectual roots of Big Data. More significant is the practical consequence as Big Data increasingly controls freedom of action. The fear is that, armed with information, the incessant insistence of the computer that I recently wrote about in an Interactions feature will fragment and disperse the unofficial mechanisms that the powerless have always used to get influence. When we let big corporations, dribble-by-dribble, have our information, we do not intend to make a world designed only by monetization. Indeed, I’m sure that at one point the Google founders actually thought that they would “do no evil.” Unintended consequences. Orwell could imagine 1984 but he could not imagine the little steps, quirks, and limitations by which 1984 would be bootstrapped. 

Ironically, I acquired this sensitivity to the power of the computer to obliterate the mechanisms of the already disenfranchised in part by reading The Handmaid’s Tale, a book by Margaret Atwood. 



Posted in: on Mon, May 12, 2014 - 7:27:33

Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.


Wireframes defined


Authors: Ashley Karr
Posted: Tue, April 29, 2014 - 10:18:59

Takeaway: Wireframing is the phase of the design process where thoughts become tangible. A wireframe is a visual 2D model of a 3D object. Within website design, it is a basic visual guide representing the layout or skeletal framework of a web interface. Page schematic and screen blueprint are frequently used synonyms.

When user experience (UX) professionals create wireframes, they arrange user interface elements to best accomplish a particular, predetermined task or purpose. They focus on function, behavior, content priority and placement, page layout, and navigational systems. Wireframes lack graphics and a fancy look and feel. Wireframing is an effective rapid prototyping technique in part because it saves huge amounts of time and money: it allows designers to measure a design concept's practicality and efficacy without a large investment of these two very important resources. It combines high-level structural work—such as flow charts, site maps, and screen design—and connects the underlying conceptual structure (the information architecture) to the design's surface (the user interface). Designers use wireframes for mobile sites, computer applications, and other screen-based products that involve human-computer interaction (HCI).

The following are a few best practices for wireframing in the digital space:

  • Be a planner. Gather information before you start jumping into wireframes. Make sure you, your team, and your clients are clear on the design's missions, goals, objectives, and functions. Make sure you are also just as clear on your stakeholders and users. (Did you remember that maintenance workers are design users, as well? No? Better drop the wireframe and spend a bit more time thinking...) 

  • Be a philosopher. Wireframing is "...like a finger pointing away to the moon. Don't concentrate on the finger or you will miss all that heavenly glory." Yes, I just quoted Bruce Lee in Enter the Dragon. How does this apply to wireframing? I will tell you. People tend to get hung up on the medium and not the quality and appropriateness of the wireframe. Wireframing programs are a dime a dozen. In my real-life, actual UX experience, I have discovered that people LOVE paper wireframes. They get excited when they are finally allowed to unplug their fingers from the keyboard, peel their eyes away from the screen, and do batteries-not-included usability tests outside in a courtyard chock-full of pigeons, fountains, babies in strollers, and dogs playing catch with their owners.

  • Be a child. So many people are afraid of being wrong or seemingly silly in a professional environment that they paralyze their creativity. If you are one of those people who worry so much about what others may think of you during a brainstorming session, go hang out with pre-schoolers for a morning. Dump a big pile of Crayolas on a table, fling around some colored construction paper, and notice what happens.

Wireframing is neither new nor innovative. Humans have wireframed for millennia—sketching out inspirations for new inventions, drafting designs for buildings and civil engineering projects, and developing schematics for massive travel and communication networks. Arguably, our first medium was the cave wall and lump of charcoal leftover from the previous night's fire. As human technology evolved from these prehistoric wares to papyrus to paper to computerized 3D-modeling programs and interactive systems, such as Balsamiq, Axure, and InDesign, what and how we wireframe has evolved in step. However, why we wireframe and wireframing best practices are timeless. In order to develop a viable final product, be it the Parthenon or a mobile phone app to track blood sugar levels for diabetics, humans have depended upon wireframes to organize and synchronize the design team's efforts and test early iterations to avoid disaster and make a good faith attempt at success. 



Posted in: on Tue, April 29, 2014 - 10:18:59

Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


True digital natives


Authors: Jonathan Grudin
Posted: Tue, April 22, 2014 - 9:49:32

They’re coming. They may not yet be recognizable, but some are walking—or crawling—among us.

The term digital native was coined in 2001 to describe technology-using youths, some of whom are now approaching middle age. At an early age they used family computers at home. They took computer skills classes in school. They met for other classes in computer labs or had device carts wheeled in. They acquired mobile phones as they approached their teen years.

They are not intimidated by tech. But they aren’t fully digital. Paper and three-ring binders are still alive and well in schools. Last September, Seattle was plagued for weeks by a shortage of bound quadrille-ruled notebooks. Many schools still ban mobile phone use, reinforcing students’ longstanding suspicion that school has little to do with the phone-tethered world outside. Scattered reports of BYOD in the workplace alarm IT professionals, but employers generally assume that new hires will adopt the technology that comes with a job. Enterprises see the disappearance of technophobia as a plus, failing to anticipate the new challenges that will accompany greater technophilia.

Today, a different cohort is starting to emerge. Psychologically different. Not the final stage of digital evolution, but a significant change.

1:1

A previous post described forces behind the spread of device-per-student deployments: changes in pedagogy and assessment methods, sharply declining prices resulting from Moore’s law, manufacturing efficiencies, and the economies of scale that accompany growing demand.

“One laptop per child” visions began almost half a century ago with Alan Kay’s Dynabook concept. Kay pursued education initiatives for decades. The nine-year-old OLPC consortium aimed unsuccessfully for a $100 device, encountering technical and organizational challenges. The site’s once-active blog has been quiet for six months. Its Wikipedia page reflects no new developments for two years. Media accounts consist of claims that OLPC has closed its doors; these are disputed, but the debate speaks for itself.

I don’t question the potential of digital technology in education. Yes, pedagogy, compensation and ongoing professional development for teachers, and infrastructure are higher priorities to which OLPC might have paid more attention. But digital technology is so fluid—when there is enough, it will find its way. OLPC was cycles of Moore’s law ahead of itself—but how many cycles? If nine years wasn’t enough, might another two or three suffice? Pedagogy is improving and infrastructure is coming into place. Support for teachers is the one area of uncertainty; let’s hope it picks up.

Insofar as technology is concerned, the light at the end of the long tunnel is getting bright. Capability grows and cost declines. For the price of several laptop carts five years ago, a school can provide all students with tablets that can do more. And 1:1 makes a tremendous difference.

The obvious difference is greater use, which leads to knowledge of where and how to use technology, and when to avoid using it. Students who use a device for a few hours a week can’t acquire the familiarity and skills of those who carry one to every class, on field trips, and home.

Some features make little sense until use is 1:1. Consider a high-resolution digital pen. Most of us sketch and take notes on paper, but for serious work we type and use graphics packages. Education is different: For both students and teachers, handwriting and sketching are part of the final product. Students don’t type up handwritten class notes or algebraic equations. They draw the parts of a cell, light going through lenses, and history timelines. Teachers mark papers by hand. When lecturing, they guide student attention by underlining, circling, and drawing connecting arrows.

Only when students carry a device can they use it to take notes in every class. When everyone has a high-resolution digital pen, a class can completely eliminate the use of paper. It is happening now. It would happen faster were it not for the familiar customer-user distinction. The customers—such as school board members deciding on technology acquisition—think, “I don’t use a digital pen and I’m successful; isn’t it a frill that costs several dollars per device and is easily lost or broken?” They don’t see that reduced use of paper and substantial efficiency gains will yield net savings. They don’t realize that students who are familiar with the technology will use it to become more productive workers than their predecessors, including those who are today making the purchasing decisions.

There are unknowns. We have learned that when everything is digital, anything can appear anywhere at any time, for better or worse. But in the protected world we strive to maintain for children, digital technology can and I believe will be a powerful positive force. The world’s schools have started crossing that line. A flood will follow.

Leveling the field

When prices fall and other features come into alignment, future devices will resemble today’s high-functionality tablets with active digital pens. These devices are not over-featured and are already much less expensive than they were a few years ago.

The flexibility of well-managed digital technology supports a range of learning styles. An elementary school teacher whose class I visited last week said that her greatest surprise was that struggling students benefited as much as or more than very capable students: The technology “helps level the playing field.” This echoed other conversations I have had. A teacher who was initially skeptical about a new math textbook remarked that after a year he was convinced: The adaptive supplementary materials accessible on the Internet “keep any student from falling through the cracks.” He still felt the textbook was weak on collaboration and other “21st century skills,” but concluded “a good teacher will add them.” In a third school, a teacher recorded parts of lectures as he gave them, using software that captured voice, video, and digital pen input. He then put them online for students who missed class, were not paying attention, or needed to view it a second time. On some occasions when he was not recording, students asked him to.

The most dramatic leveling occurs when technology allows students with sensory and other limitations to use computers for the first time. When I first saw a range of accessibility accessories and applications in active use a year ago, it was eye-opening: Children who had been cut off from the world of computing that we take for granted could suddenly participate fully. It took me by surprise. The tears streaming down my face were not of joy—I felt the isolation and helplessness they had lived with.

Only a device that supports keyboard, pen, voice, and video input, along with software that supports a range of content creation, communication, and collaboration activities, will realize the full potential. However, 1:1 deployment of any device—tablet PCs, Kindles, iPads, Chromebooks—when accompanied by appropriate pedagogy, professional development, and infrastructure not only provides benefit: It is fundamentally transformative, as described in the next section.

From direction to negotiation

When a computer is used in a lab, delivered by a device cart, or engaged with for part of a class period in a station rotation model, the teacher controls when and how it is used. When a student carries a device everywhere, use is negotiated. Students can take notes digitally in a technophobic instructor’s class. Students, teachers, and parents decide, with students often the most knowledgeable party.

The psychological shift with 1:1 goes deep. Students can and often do personalize their devices in various ways. Their sense of responsibility for the tool and its use creates a symbiosis that didn’t exist before. New hires today might use what they are given—that is how they were trained in school! Tomorrow’s students who arrive with years of responsibility for making decisions will bring a knowledge of how they can use digital technology effectively and efficiently. They will expect to participate in decisions. They may or may not use what they grew up with, but they’ll know what they want.

1:1 classroom experiments are underway and succeeding, even in K-5. These kids may not take devices home, but when they are not reading books, playing outdoors, and interacting with family members, they will probably find a device to use there as well.

Born digital

What next? These could be early days. Moore’s law hasn’t yet been revoked. Energy-harvesting R&D moves forward. Imagine: An expectant mother swallows a cocktail of vitamins, minerals, proteins, and digital microbes that find their way to the fetus. In addition to monitoring fetal health, will the sentinels serenade it with Mozart, drill it on SAT questions, introduce basic computing concepts? Born digital—the term is already in use, but we have no idea.

Thanks to Clayton Lewis for comments and discussion.



Posted in: on Tue, April 22, 2014 - 9:49:32

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Three fundamentals for defining a design strategy


Authors: Uday Gajendar
Posted: Tue, April 15, 2014 - 8:04:47

The other day I tweeted this out while coming to grips with what I’m trying to accomplish in my new role as Director of UX at a Big Data start-up: “Creating strategy (& vision) is about understanding the essence, exploring the potential & defining the expression, in an integrative way.” I’d like to delve a bit deeper into this spontaneously conveyed moment of personal profundity!

First, there are tons of books out there postulating about “strategy,” and many articles linking business-oriented concepts to “design.” One can spend weeks or months studying them (I’ve read quite a few, no doubt), but until you’re in the midst of being singularly burdened with the responsibility to define a viable, feasible design strategy (and correlated vision) for a team, company, and product, such readings just aren’t enough. Once you realize what’s around you—the sheer magnitude of the opportunity—then you’re able to peek into the milieu of ambiguity and complexity that comprises strategy. And that’s when you see that it’s fundamentally about three basic things:

  • Understand the essence. I once had a very tough Color Theory professor as an undergrad, who one day declared, “In order to master color, you must understand its essence.” And with some noticeable exasperation, reinforced by a stern glare, he added: “Are you interested in… essence?!” Harrumph! Well then. It took a very long time, but I eventually realized he meant that you must deeply intuit—to the level of personal resonance—the purpose, value, and raison d’être of color within a certain context. So what is strategy for, ultimately? I don’t mean some banal, trite “value prop” bullet point for a VC “pitch deck.” While that’s great fodder for Dilbert, as a designer I need to speak and work with authenticity to deliver excellence. This requires deeply probing the identity and nature of the company and its product—what is their inner “truth”? It requires “connecting” with the purpose, as a designer, and capturing it in the form of a thematic construct of human values: trust, joy, desire, power, freedom, etc. You’ve got to feel it… and believe it. There’s a bilateral, immersive engagement that shapes your perspective on why the product and company exist, and how you can move that forward.

  • Explore the potential. There is a vast array of materials at a designer’s disposal, from the tangible (color, imagery, type, animations) to the intangible (presence, interaction, workflow). Pushing this range of potentialities is necessary to break beyond any conventional thinking, see what’s afforded and available. Potential necessarily involves delving into a fragile, unfamiliar realm of “what if” and “why not,” challenging limits and implied norms that many may hold sacred, for no discernible reason.

  • Define the expression. There’s got to be some well-crafted, artfully balanced manifestation—some embodiment of all that profound exploration of strategy and vision—that you and others can grasp and hold on to as a torch to light the way, signaling a path forward with promise and conviction. Maybe it’s a mockup, a movie, a demo, a marketing campaign, whatever… And there’s admittedly a degree of theatricality and rhetorical flourish involved in persuading stakeholders, but those expressions become symbols that others inside and outside the company will associate with your strategy. To put it bluntly, make prototypes, not plans! The expression matters; it brings your strategy to life in an engaging manner, where followers become believers and, eventually, leaders.

  • In an integrative way. Finally, it’s all got to work together beautifully—the ideas about the product, the customer, the company, the principles, the team process, the public brand, etc. It takes systemic thinking to connect the dots and interweave the threads of crucial, even difficult, conversations with peers, superiors, and ambassadors to ensure everyone is on board, committed, and participating productively in helping your strategy come alive. This requires constant multilateral thinking, with discipline and focus, bringing those elements together effectively. I think Steve Jobs said it best when he described the journey from idea to execution:

    "Designing a product is keeping 5,000 things in your brain, these concepts, and fitting them all together in kind of continuing to push to fit them together in new and different ways to get what you want.”

Posted in: on Tue, April 15, 2014 - 8:04:47

Uday Gajendar

Uday Gajendar is Director of User Experience at CloudPhysics, focused on bringing beauty and soul to Big Data for virtualized datacenters.


Interaction design for the Internet of Things


Authors: Mikael Wiberg
Posted: Fri, April 11, 2014 - 12:19:02

The Internet of Things (IoT) seems to be the next big thing, no pun intended! Embedded computing in everyday objects brings with it the potential of integrating physical things into acts of computing, in loops of human-computer interactions. IoT makes things networked and accessible over the Internet. And vice versa—these physical objects not only become input modalities to the Internet but also, more fundamentally, manifest parts of the Internet. IoT is not about accessing the Internet as we know it through physical objects. It is about physical objects becoming part of the Internet, establishing an Internet of Things. Accordingly, IoT brings with it a promise to dissolve the gap between our physical and digital worlds and the potential to integrate elements of computing with just about any everyday activity, location, or object. In short, IoT brings with it a whole new playground for interaction design!
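
To make the idea of physical objects becoming part of the Internet a bit more concrete, here is a minimal, hypothetical sketch in Python (standard library only) of an everyday thing—say, a potted plant with a moisture sensor—exposing its state over HTTP so that other services and devices can compose with it. The object name, endpoint, and simulated reading are illustrative assumptions, not a reference to any particular product or platform:

    # A hypothetical "thing": a plant moisture sensor served over HTTP.
    import json
    import random
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def read_moisture():
        # Stand-in for a real sensor driver; returns a percentage.
        return round(random.uniform(20.0, 60.0), 1)

    class PlantHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/moisture":
                body = json.dumps({"thing": "office-plant",
                                   "moisture_pct": read_moisture()})
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body.encode("utf-8"))
            else:
                self.send_error(404)

    if __name__ == "__main__":
        # Any networked service (or another thing) can now GET /moisture.
        HTTPServer(("0.0.0.0", 8080), PlantHandler).serve_forever()

Once many such objects speak simple interfaces like this, the interesting design question shifts from the single device to the composition of things, services, and people—the material-centered view argued for below.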

We already see good examples of how this is starting to play out in practice. Connected cars, specialized computers, and tagged objects are becoming more and more common, and the repertoire of available networked objects is rapidly growing. There is a shared interest in the Internet of Things in both industry and academia.

While the technological development around this area is indeed fascinating, it is from my perspective even more interesting to see where this will take interaction design over the next few years. From an interaction design perspective, it is always interesting to explore what this digital material can do for us in terms of enabling new user experiences and the development of new digital services. The IoT movement does indeed bring with it a potential not only for re-imagining traditional physical materials, making physical objects part of digital services, but also for re-thinking traditional objects as not being bound to their physical forms and current locations, but rather functioning as tokens and objects in landscapes of networked digital services, objects, and experiences.

When we, as interaction designers, approach the Internet of Things, I hope we do it through a material-centered approach in which we treat the IoT not only as an application area but also, more fundamentally, as yet another new design material. With a material-centered approach, I hope that we look beyond what services we can imagine around Internet-enabled objects and instead move our focus to the re-imagination of what human-computer interaction can be about, i.e., how IoT might expand the design scope of HCI. By thinking compositionally about IoT and viewing IoT in composition with device ecologies, cloud-based services, smart materials, sensors, and so on, we move our focus from what this latest trend of technology development can do for us to how we might interact in the near future with and through just about any materials—digital or not. This is what I hope for when it comes to interaction design for and via the Internet of Things!




Mikael Wiberg

Mikael Wiberg is Professor of Informatics in the Department of Informatics at Umeå University, Sweden.


It’s spring and a girl’s thoughts turn to design (and meaning)


Authors: Deborah Tatar
Posted: Fri, April 04, 2014 - 7:37:40

It’s spring. Spring for me is always associated not so much with the bulbs that turn Blacksburg into a really beautiful place, but with serious thoughts about values. Of course, there are a lot of holidays associated with spring, but mine is Passover. And renewal is associated with thoughts about the aspiration to live rightly. In my childhood, in New York, the big fall holidays, Rosh Hashanah and Yom Kippur, were about personal challenges. We turned inwards with the threat of the long, dark, cold winter ahead. But even in a non-religious family like mine, the Seder was about turning outwards.  

One dinner in particular jumps into my mind. My stepfather was on the Bicentennial Commission for New York City. This is the group that put together the celebration of the 200th anniversary of the Declaration of Independence. It was a big deal, with events of many kinds all over the city (evidently it was an effort to imagine That Beyond Manhattan). As I had dutifully learned in 5th grade history, New York was in fact quite central to independence before, during, and after the Revolutionary War. So at our family Seder in the spring of 1976, after the ceremony and the lesson that even we could be slaves were circumstances otherwise, Papa regaled us with stories about the ideas, decisions, and commitments, the struggles between the boroughs, the balance of activities, the political and aesthetic disagreements. After dinner, we moved to the living room of my grandparents’ apartment. My usual perch was an embroidered footstool. Some of the activities were vox populi and others were High Art. Eventually the talk turned to the role of art in modern America. 

Ahh! This topic and the shift of venue gave my Great Uncle Harry, our usual primary raconteur, the opening he had been longing for, the chance to top the evening with the seal of profundity. He settled his comfortable paunch back into the brocaded wingback chair and fingered his cigar. Think a small Jewish man with the mannerisms of Teddy Roosevelt. This was 38 years ago, and I have lost some of the details that would make the story jump off the page as the lesson was impressed on me. 

England, as well as the United States, was infested with virulent anti-communism in the late 1940s and 50s. Uncle Harry’s story concerned two Very Well Known Brits—neither of whom I can remember by name. One was a retired military general in the style of Bernard Shaw’s Horseback Hall. I could hear his bristling bushy moustache in the tone of the story. British, British, British. God and Country. Suspicious. Proud. Nationalistic. The other was a preeminent creative person or academic—a writer perhaps. Slightly ascetic. Sharp but diffident. Clever with words in a way that no American can ever be. (If anyone else recollects this story, please remind me who the protagonists were!) Both were dressed in black tie at some kind of formal dinner—or maybe it was even more formal, white tie. 

In the course of political discussion, the general turned to the writer—as Uncle Harry told this, Teddy Roosevelt appeared in him most clearly; he threw out his chest and looked down his nose—and said in tones of opprobrium, “And what did you do during the War?” (Meaning, as an American in the 1970s would, the Second World War.)

And the writer replied—Uncle Harry’s eyebrows went up slightly; his voice stayed mild and quiet; he looked askance, as he assumed his imitation Oxbridge accent—“I was doing the things that you were fighting to protect.” 

That was it. The writer, the artist, the intellectual “was doing the things that you were fighting to protect.” In that phrase, we had the assertion of the role of art and intellect: intrinsic to quality of life and to freedom, and a force for meaning in a difficult world. 

I hope that the layers of this story as I tell it to you—the concept of celebrating the American revolution, the reenactment of the flight of the Jews from slavery, my family’s interpretation in the mid 1970s through an imagined connection to British thought, my own processing and recollection so many years later—give that message about values a kind of deeply lacquered frame. 

The intellectual, the artist, the writer was able to claim that he was doing things worth fighting to protect. We fought to protect art and ideas, to preserve justice. To enact a vision of a more equitable world. 

In that world, design was a half-step behind art, shadowed by it, but intensely tied to meaning, both political and personal. Raymond Loewy was already represented in the collection of the Museum of Modern Art. And indeed so was one of the first things I ever purchased with money I earned myself: a Valentine portable typewriter. Of course, designers have clients. Did I say that they have clients? They have clients. They have clients. They have clients. But they also have vision and that vision is something that can be talked about and even disputed. 

My moral for this spring is that design is, or should be, something more than a client-fulfillment center. 




Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.


Preaching to the choir, just when you thought it was safe


Authors: Monica Granfield
Posted: Wed, April 02, 2014 - 7:41:12

UX has made great strides within the mainstream IT and software community. Hard work, education, and return on investment have all contributed to the growth of the UX design discipline over the past 10 years. UX (with which I am bundling research) is now gaining traction and opening doors in a wide variety of industries, from healthcare to robotics. New avenues such as customer experience are opening even more doors for our discipline. This is a very exciting time for UX and CX! However, with opportunity comes challenge, and there is still a fair amount of work to do out there.

The industry is changing as software moves off of the desktop and out into every type of electronic device imaginable, creating new ecosystems, new experiences, and exciting new challenges. With these developments, UX often finds itself back at the starting gate, playing defense and vying for a chance at offense. Given this, it seems UX needs to proactively broaden its reach, educating and building awareness within these new industries.

The playbook is the same, just with a new team. After a recent move into the uncharted territory of a new industry, and after attending UX-related events, I realized that I was not the only one facing these challenges. I had thought, as have many others I have spoken with since, that UX was more commonly understood by now. This has me thinking: Maybe we are all preaching to the choir. Maybe the UX industry needs to break out of its own comfort zone and start spreading the word at other industries’ professional events. Reaching into new industries could open our playbooks and allow those industries to gain awareness and knowledge of UX outside of the political arena of the workplace.  

I am curious whether anyone out there has been representing UX at other professional meetings and conferences. Are there any UX talks happening at IEEE or Business Professionals of America? Yes, being in the trenches educating your team and your organization may be the best ground-up approach, but branching out to present to the disciplines we most often collaborate with, within their comfort zones, might gain greater traction. Internal grassroots efforts can be an uphill climb, as a team or as an individual. Building momentum and awareness of UX as a discipline within other disciplines could be a game changer for us at the professional level. Who knows how other disciplines might receive the UX message? If other disciplines want to participate in creating the best user experience, this might be one route to success.

At the education level, momentum is building. The d.school at Stanford, hatched out of the School of Engineering in 2005, has begun educating students on the value and application of design collaboration, to create “innovators.” Bringing more awareness of UX design to engineering is also important. I have heard of efforts such as Jared Spool teaching UX courses in the graduate engineering management program at Tufts University’s Gordon Institute. Many undergraduate universities now offer UX classes within the software engineering curriculum. These classes set the stage for the next wave of technical talent coming out of university, who will understand the value of UX design and be able to collaborate around its use in creation and innovation.

Universities presenting more opportunities for cross-pollination and collaboration between design, engineering, and business may help break down departmental barriers in the future. Today, creating the opportunity for design and research to truly become innovators, especially within new domains, is still a challenge. I have heard the argument that if designers want to participate in design strategy to address the business, they should become business strategists, and that is for MBAs. However, as most UX professionals know, we are not claiming to be business strategists. Yet our insights and offerings do overlap with business strategy, and this is a lesser-known use of design compared with its overlap with product development or engineering. We are mediators of how these disciplines contribute to the fruition of the resulting user experience, and that word needs to reach a world of professionals who are heads down, working off of what they know. Current grads are getting some exposure and cross-pollination; however, it will be some time before they are in the top ranks championing the next generation of technology or customer experiences. Therefore, it is up to the design community today to reach out, reach over, and continue to break down the barriers and open minds outside of traditional software.  




Monica Granfield

Monica Granfield is a user experience designer at Symbotic. The views expressed on this website are her own and do not necessarily reflect the views of Symbotic.


@Richard Anderson (2014 04 15)

Speaking of Stanford, Medicine X—“the world’s premier patient-centered conference on emerging technology and medicine” (see http://medicinex.stanford.edu)—is largely about human-centered design and the patient experience. This is a fabulous conference that brings together a wide range of healthcare professionals and designers and patients. In my view, this kind of conference is better than a conference siloed on one profession at which one or two UX/CX people speak.


Swarms and tribes


Authors: Jonathan Grudin
Posted: Mon, March 31, 2014 - 8:03:23

A crack team led by Deputy Marshal Samuel Gerard (Tommy Lee Jones) races about in hot pursuit of Harrison Ford’s fugitive Dr. Richard Kimble. Gerard finds one of his men standing motionless. 
Gerard: "Newman, what are you doing?!"
Newman: "I'm thinking."
Gerard stares. "Well, think me up a cup of coffee and a chocolate doughnut with some of those little sprinkles on top, while you're thinking." He walks away.
The Fugitive (Warner Bros., 1993)

Ant colonies

Ants scurry about in a frenetic mix of random and directed activity. They gather construction materials, water, and food, and respond to threats. Ants are also busy underground, extending a complex nest, caring for the queen and her eggs, and handling retrieved materials. Decomposing leaves are artfully placed in underground chambers to heat the structure and circulate air through the passages; leaves can also be a source of food, directly or through fungus farming. Ants captured from other colonies are put to work. Foragers that are unsuccessful, even when only due to bad luck, shift to other tasks. Remarkable navigational capabilities enable ants to find short paths home and avoid fatal dehydration. Arboreal species race to attack anything that brushes their tree. Their relentless activity is genetically programmed: There are no ant academies.

Ant programming isn’t perfectly adapted to this modern world. Fire ants invading my Texas condo marched single file by the thousands into refrigerators or air conditioning units, where they were frozen or fried, sometimes shorting out an AC box. Humans rarely exhibit such unreflective behavior. Doomed military offensives such as the Charge of the Light Brigade prompt us to ask whether soldiers should sometimes question orders.

The health of the ant colony relies on the absence of reflection. It would bode ill if individual ants began questioning their genetic predispositions. “The pheromones signal something tasty that way, maybe a doughnut with sprinkles, but I don’t like the looks of that path,” or “Maybe we could come up with a better air circulation system, let’s have a committee draw up a report.” Ants don’t think, but they’re doing OK. They outnumber us. If, as seems plausible, ants are here when we’re gone, our capability for reflection could be called into question, should any creatures be around that ask questions. The ants won’t [1].

Globalization

According to my favorite source, a single ant supercolony comprising billions of workers was found in 2002, stretching along the coasts of southern Europe. In 2009 this colony was found to have branches in Japan and California, no doubt enabled by our transportation systems: a global megacolony. Does it have an imperialistic plan to displace rival ant supercolonies? No, each ant follows its genetic blueprint.

We’re globalizing, too. Not long ago Homo sapiens appeared to have two supercolonies, but the bonds holding them together were less enduring than ant colony bonds. Nevertheless, we are forming larger, globally distributed workgroups. We may yet become a global megacolony. If we don’t, ants may inherit the earth sooner rather than later.

The human colony

Looking back a few thousand years, a small tribe couldn’t afford to lose many members through random behavior. If Uncle Og headed down that path and did not return, let’s think twice about going that way alone! When ants stream to their deaths, lured by false pheromone signals triggered by appliances, the colony has more where they came from. In contrast, our ability to analyze and reason enabled us to spread across the planet in small groups. A century ago we were still overwhelmingly rural—isolated and often besieged. Information sharing was limited. Each community worked out most stuff for itself. Reflection was valuable.

How are the benefits and the opportunity costs of cogitation affected as the Web connects us into supercolonies? Given a wealth of accessible information, is my time better spent searching or thinking? Tools make it easier to conduct studies; is it better to ponder the results of one, or use the time to do another study? Cut new leaves or rearrange those brought in yesterday?

Many research papers represent about three months’ work, with students or interns doing much of it. After publishing three related papers, will I contribute more by spending six months writing a deep, extensive article and carefully planning my future research—or by cranking out additional studies and two more papers?

We are shifting to the latter. Journals, handbooks, and monographs are in decline. Conferences and arXiv thrive. Arguably, we know what we are doing or our behavior is being shaped appropriately. The colony may be large and connected enough to thrive if we scurry about, cutting and hauling leaves without long pauses to reflect. Beneficial chance juxtapositions of results will simulate reflection, just as the frenzied instinct-driven construction of an ant nest appears from the outside to be a product of reflective design. The large colony requires food. For us, as for the ants, there are so many leaves, so little time.

Shifting our metaphoric social insect, the largest social networking colony is the IBM Beehive compendium: twenty-something research papers scattered across several conference series. No survey or monograph ties the studies together. I lobbied the authors to write one, but they were heads down collecting more pollen, which was rewarded by their management and the research community.

Working on a handbook chapter, I did it for them, tracking down the studies, reviewing them, and trying to convert the pollen into honey. It was hard to stitch the papers together. For example, the month and year of some work was not stated, and publication date is not definitive in a field marked by rejection and resubmission. In a rapidly evolving domain, knowing the sequence would help.

In retrospect, they were probably right about where to invest time. I found a few higher-level patterns and overarching insights, but few will take note when the handbook chapter is published next month. Social networking behaviors have moved on. The Beehive has been abandoned, the bees have flown elsewhere, leaving behind work that is now mainly of historical value, although bits and pieces will spark connections or confirm biases and be cited. From the perspective of my employer, the field, and intellectual progress, my time could have been better spent on a couple more studies.

It is ultimately a question of the utility of concentrated thought. How might we find objective evidence that scholarship is useful in this century? I’m sentimentally drawn to it, but the effort required to become a scholar might be more usefully channeled into other pursuits. The colony would collapse if ants spent time contemplating whether or not to blindly follow pheromones. Through frenetic activity they build a beautiful structure and the colony thrives. Is life in our emerging megacolony or swarm different? Race around, accept that bad luck will sideline many, and plausibly we will thrive. If an occasional false pheromone lures a stream of researchers to a sorry fate, there will be more where they came from!

The tribe and the swarm

Consciously or unconsciously, we’re choosing. Fifteen years ago, an MIT drama professor told me that with the digital availability of multiple performances, students who analyze a performance in detail do not do well. Better to view and contrast multiple performances, spending less time on each. Other examples:

  • In an earlier era, if one of the five people who were engaged in similar work performed exceptionally well, the tribe benefited by bringing them together so that the other four learned from the fifth. Today, it may be more efficient for a large organization to let the four flounder, social insect style. Successes can be shared with people working on other tasks; enough will connect to make progress. In other words, 80% conference rejection rates that were a bad idea when the community was smaller may now be viable. The community-building niche once served by conferences may be unnecessary.

    Many senior researchers disdain work-in-progress conferences—they want strong 20%-acceptance pheromone trails. If less-skilled colleagues who rely on lower-tier venues perish through lack of guidance, no matter. The as-yet-unproven hypothesis is that the research colony will thrive without the emotional glue that holds together a community.

  • When more effort was required to plan, conduct, and write up a study, it made sense to nurture and iterate on work in progress. With high rejection rates and an inherently capricious review process, researchers today shotgun submissions, buying several lottery tickets to boost the odds of holding one winner. Rejected papers may be resubmitted once, then abandoned if rejected again. Not all ants make it back to the nest, but when those that return carry a big prize, the colony thrives.

  • Any faculty member who mentors a couple successful students has trained an eventual replacement. In the past, this could mean working closely with one graduate student at a time. Today, many faculty have small armies of students, most of whom anticipate research careers. “Is this sustainable?” I asked one. “It’s a Ponzi,” he replied cheerfully. Not all students will attain their goals. In a tribe this could be a major source of discontent and trouble. Swarms are different: Foragers who fail, even when due to bad luck, take on different tasks.

The ghost in the machine

Efficiencies that govern swarm behavior may now apply to us, but there is a complication. Our programming isn’t perfectly adapted to this modern world. Our genetic code is based on the needs of the tribe. Until natural selection eliminates urges to reflect, feelings of concern for individual community members, and unhappiness over random personal misfortune, there will be conflict and inefficiency. In 1967, Ryle’s concept of the ghost in the machine was applied by Arthur Koestler to describe maladaptive aspects of our genetic heritage. The mismatch grows.

If on a quick read this is not fully convincing, you could spend some time reflecting on it, but it may be wiser to return to working on your next design, your next conference submission, and your next reviewing assignment.

Endnote
1. Not all ant species exhibit all these behaviors. Some ants are programmed for rudimentary “learning,” such as following another ant or shifting from unsuccessful foraging to brood care.

Thanks to Clayton Lewis for discussions and comments.



Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


@Paul Resnick (2014 04 02)

I think you’d like the Collective intelligence conference, Jonathan.  Not only are there presentations reflecting on models of collective intelligence for ant colonies, but there are no archival proceedings, and a very high acceptance rate. Hope to see you there!

http://collective.mech.northwestern.edu/


Designer’s toolkit: A primer on using video in research


Authors: Lauren Chapman Ruiz
Posted: Wed, March 19, 2014 - 10:00:17

In our last post, we explored a variety of methods for capturing user research. Yet a question lingered: How can you effectively use video in your research without influencing the participants?

Here are some tips and tricks to minimize the impact of using video in research engagements. Keep in mind, these tips are focused on conducting research in North America—the rules of engagement will vary based on where you are around the world.

Be transparent

If you’re using a recruiter, ensure they let participants know that they will be video recorded. Usually you or your recruiter develop a screener—a set of questions that are used to determine whether a potential participant is qualified for a study. As part of the screener introduction, you should include the intention to video record the research visit, which gives your participant time to express any concerns. You don’t want the participant to be surprised when you pull out a video camera, especially if the topic being discussed is very personal.

Minimize equipment 

With the improvements in technology, you shouldn’t be using a large video camera. Keep the recorder as small as possible, and use a tabletop tripod. (I recommend a Joby Gorillapod due to its flexibility.) Placing your equipment on a table lets you keep the camera discreetly in the background while still allowing easy pickup and repositioning if necessary.

Prep everything beforehand 

Nothing calls more attention to video than technical difficulties. You don’t want to be fiddling with the camera, or checking it during an interview. Make sure all batteries are fully charged, and bring spares. Make sure your memory card has enough space. Either have the small tripod on the camera beforehand, or make sure it’s ready to install. If you run into a problem in the middle of the interview, ignore the issue and just focus on the engagement—this is what’s critical—and always have that pen and paper handy.

Ease participants in while building rapport 

As you are introducing the research, remind participants that it will be video recorded, review what the video will be used for, and assure them of the purpose. If you’re recording only for note-taking purposes, explain that you simply can’t remember everything that’s said, and video allows you to go back and verify information. Remind them that they are in control of what is or isn’t recorded, and that it can be stopped at any time. If the expectation of your video recording has been communicated correctly during recruitment, this information shouldn’t be a surprise, and your interviews should progress smoothly. This is when you can also request signed consent for the recording regarding when, where, and how the footage will be used. How you do this will vary based on what you plan to do with the footage.

Include the participant in the process of setup

Part of building rapport with participants is allowing them to see you set up your equipment—let them see where you place the camera and how you position it to capture a clear angle. Spend the necessary time for your participant to become comfortable with the equipment, and answer any questions.

Use humor to help dissipate discomfort 

Oftentimes humor can help to ease any nervousness about being recorded. As you build rapport, be lighthearted about the fact that you’re recording. A light chuckle can help to relax the situation.

Consider the audio quality

If you’re making a high-quality video reel for your stakeholders, you may need to ask your participant to wear a microphone. This will do far more to guarantee quality audio than a directional microphone or the camera’s built-in capabilities. In these cases, participants should be made aware during recruitment so there are no surprises.

Keep the focus away from the camera

Once you’ve finished your setup and you start recording, try to forget the fact that a camera is on. Still write down notes—this distracts attention from the camera, and gives you a backup of what was said and when, which is helpful when you go back to the video later. If your participant gets up and takes you on a tour, pick the camera up and hold it against your body—place it low on your body so that it’s not very noticeable. This will keep the footage steady, and the camera out of the way. If the camera is treated as a casual aside, then attention will be drawn away from the fact that it’s recording.

If unsure, ask for permission

During your interview, if you’re not sure whether you can record something in particular, always ask your participant for permission first. Your participant might be nervous about showing you particular information while you’re recording, but asking for permission first reminds them that they have control of the situation. Yes, this draws attention back to the fact that you’re recording, but when your participant feels in control of what is being captured, it builds the confidence and trust that allow you to continue recording.

Be willing to show an example

In some situations, participants feel reassured once they see the quality of the video being captured. If video quality isn’t essential to you, showing your participant that the footage is low-res can increase their willingness to let you record potentially sensitive visuals.

Stop any time 

Don’t be afraid to turn off the video camera if it’s clearly a distraction or is preventing the participant from being open and honest. Make sure they know you can turn it off if they are uncomfortable. Just always be ready to switch to an alternative capturing method.

Wrapping up

If you feel your participant may have been nervous to say something due to the camera, at the end you can always turn the camera off, and ask if they have anything else they’d like to share.

Now that you have hours of video footage, what do you do with it? Based on the type of consent you gathered, there are a variety of outputs you can use the footage for.

I recommend a quick and dirty highlights reel of key findings. You can save time if you’re diligent about taking written notes of key moments, or marking down the timestamp if your note taker is sitting back with the camera. Cut these key moments out with a basic program, such as QuickTime or iMovie, for easier compilation. A good length for a highlights reel is about 5 minutes—if it’s too long, you will lose attention.
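
If you would rather script the rough cuts than click through an editor, a few lines of Python wrapped around ffmpeg can do the trimming. This is only a sketch under stated assumptions: ffmpeg is installed, and the source file name, timestamps, and labels are placeholders for whatever your own notes contain.

    import subprocess

    # Placeholder timestamps pulled from the note taker's written log.
    CLIPS = [
        ("00:04:12", "00:04:48", "workaround_sticky_notes"),
        ("00:17:05", "00:17:40", "login_frustration"),
    ]

    def cut_clip(source, start, end, label):
        """Copy one segment out of the raw footage without re-encoding."""
        out_file = f"{label}.mp4"
        subprocess.run(
            ["ffmpeg", "-i", source, "-ss", start, "-to", end,
             "-c", "copy", out_file],
            check=True,
        )
        return out_file

    if __name__ == "__main__":
        for start, end, label in CLIPS:
            cut_clip("interview_p03.mp4", start, end, label)

Because the sketch copies streams rather than re-encoding, cuts snap to the nearest keyframe; drop the "-c copy" option if you need frame-accurate trims and can accept a slower export.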

You now have footage that you can reference if something wasn’t clear to you, or if you need to verify a vague memory. If a stakeholder challenges an insight you’ve presented, you can reference the video as evidence. The footage can also be provided to your stakeholders as raw data with your synthesis. It’s footage they invested in, and it can be kept for any future needs.

We’ve gone through various methods for capturing research, and focused on how to leverage video without disrupting your participant engagement. As technologies advance, we can limit the appearance of being recorded—imagine recording research with Google Glass—but we always need to ask: What is the risk that we might alter participant behavior?

What do you think?

I’d be thrilled to hear about how you decide to approach capturing research—what tools do you love to use? What tips or tricks do you have to put participants at ease? Have you tried anything new or did something surprise you?




Lauren Chapman Ruiz

Lauren Chapman Ruiz is an Interaction Designer at Cooper in San Francisco, CA, and has been an adjunct faculty member at Carnegie Mellon University.


Theory weary


Authors: Jonathan Grudin
Posted: Fri, March 14, 2014 - 7:09:08

Theory weary, theory leery,
why can't I be theory cheery?
I often try out little bits
wheresoever they might fit.
(Affordances are very pliable,
though what they add is quite deniable.)
The sages call this bricolage,
the promiscuous prefer menage...
A savage, I, my mind's pragmatic
I'll keep what's good, discard dogmatic…

—Thomas Erickson, November 2000
"Theory Theory: A Designers View" (sixty-line poem)

An attentive reader of my blog posts on bias and reverse engineering might have noticed my skirmishes against the role of theory in human-computer interaction. I’m losing that war.

Appeals to theory are more common in some fields than others. CHI has them. CSCW has more and UIST has few. I’m writing on the plane back from CSCW 2014, where we saw many hypotheses confirmed and much theory supported. In one session I was even accused of committing theory myself, undermining my self-image of being data-driven and incapable of theorizing on the rare occasions that I might like to.

One author presented a paper informed by Homophily Theory. He reported that it might also, or instead, be informed by Social Identity Theory. After reading up on both, he couldn’t tell them apart. So he settled on Homophily Theory, which he explained meant “birds of a feather flock together.” It was on the slide.

When I was growing up expecting to become a theoretical physicist, “birds of a feather flock together” was not considered a theory; it was a proverb, like “opposites attract.” I collected proverbs that contradicted each other, enabling me to speak knowingly in any situation. Today, “opposites attract” could be called Heterophily Theory, or perhaps Social Identity-Crisis Theory.

In the CSCW 2014 proceedings are venerable entries, such as Actor-Network Theory and Activity Theory. The former was recharacterized as an “ontology” by a founder; the latter evolved and is considered an “approach” by some advocates, but we don’t get into them deeply enough for this to matter. Grounded Theory is popular. Grounded Theory covers a few methodologies, some of which enable a researcher to postpone claiming to have a theory for as long as possible, ideally forever. But some papers now include an “Implications for Theory” section; as with “Implications for Design” in days of old, some reviewers get grumpy when a paper doesn’t have such a section. With CSCW acceptance rates again down to around 25%, despite a revision cycle, authors can’t afford to have grumpy reviewers.

CSCW citations also include broad theories, such as Anthropological Theory, Communication Theory, Critical Theory, Fieldwork for Design Theory, Game Theory, Group Dynamics Theory, Organizational Science Theory, Personality Theory, Rhetorical Theory, Social Theory, Sociology of Education Theory, and Statistical Mechanics Theory. (These are all in the proceedings.) Theory of Craft is likely broad (I didn’t look into it), but Theory of the Avatar sounds specific (didn’t check it out either).

The Homophily/Social Identity team did not get into Common Identity Theory or Common Bond Theory, but other authors did. I could explain the differences, but I don’t have enough proverbs to characterize them succinctly. With enough time one could sort out Labor Theory of Value, Subjective Theory of Value, Induced Value Theory, and (Schwartz’s) Value Theory. All are in CSCW 2014, though not always explained in depth. So are Resource Exchange Theory, Social Exchange Theory, Socialization Theory, Group Socialization Theory, Theory of Normative Social Behavior, and Focus Theory of Normative Conduct.

We also find models—Norm Activation Model, Urban Gravity Model (don’t ask), Model of Personal Computer Utilization, and Technology Acceptance Model. The latter has a convenient acronym, TAM, giving it an advantage over the related Adoption Theory, Diffusion of Innovations Theory, and Model of Personal Computer Utilization: an Adoption Theory acronym would risk confusion with Activity Theory and Anthropological Theory, and who wants to be called DIT or MPCU? Actor-Network Theory has a pretty cool acronym, as does Organizational Accident Theory—both acronyms are used.

Although they don’t have theory in their names, Distributed Cognition (DCog) and Situated Action are popular. Alonso Vera and Herb Simon described Situated Action as a “congeries of theoretical views.” Perhaps in our field anything with theory in its name isn’t really a theory.

Remix Theory and Deliberative Democratic Theory sound intriguing. They piqued my interest more than Communication Privacy Management Theory or Uses and Gratifications Theory. The latter two might encompass threads of my work, so perhaps I should be uneasy about overlooking them.

The beat goes on: Document Theory, Equity Theory, Theory of Planned Behavior (TPB). CSCW apparently never met a theory it didn’t cite. There is also citation of the enigmatically named CTheory journal. What does the C stand for? Culture? Code? Confusion?

Graduate students, if your committee insists that you find another theory out there to import and make your own, find an unclaimed proverb, give it an impressive name, and they’ll be happy. Practitioners, what are you waiting for, come to our conferences for clarity and enlightenment!

***

Postscript: This good-natured tease has a subtext. Researchers who start with hypotheses drawn from authoritative-sounding “theory” can be susceptible to confirmation bias or miss more interesting aspects of the phenomena they study. Researchers who find insightful patterns in solid descriptive observations may suffer when they are pressured to conform to an existing “theory” or invent a new one.

Thanks to Scott Klemmer for initiating this discussion, and to John King and Tom Erickson for comments.




Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


What serendipity is providing for me to read


Authors: Richard Anderson
Posted: Thu, March 13, 2014 - 9:48:54

In the spirit of the new What Are You Reading? articles that appear within Interactions magazine…

My use of Twitter and my attending local professional events have had a big impact on what I'm reading. Indeed, both have increased my reading greatly.

Every day I spend at least a few minutes on Twitter—time which often surfaces an abundance of online reading riches. You can get a sense of what comprises this reading by taking a look at my tweet stream, since I often tweet or retweet about compelling readings I learn about via Twitter. A few recent examples:

  • The Unexpected Benefits of Rapid Prototyping. In this Harvard Business Review blog post, Roger Martin (former Dean of the Rotman School of Management at the University of Toronto) describes how the process of rapid prototyping can improve the relationship between designers and their clients. Roger and a colleague wrote about the importance of designing this critical relationship in a piece published in Interactions when I was its Co-Editor-in-Chief. This blog post extends that article.

  • Cleveland Clinic's Patient Satisfaction Strategy: A Millennial-Friendly Experience Overhaul. Here, Micah Solomon describes one of the ways one healthcare organization is improving the patient experience. The Cleveland Clinic was the first major healthcare organization to appoint a Chief Experience Officer, a role for which many experience designers and experience design managers have advocated for years for all sorts of organizations. This blog post reveals the role continues to have an impact in an industry not well known for being patient-centric.

  • Some of the blog posts written for Interactions magazine. Too few people know about these posts, as they are somewhat hidden away and don't all receive (individual) promotion via Twitter. But some are excellent. I've been most impressed by those authored by Jonathan Grudin (e.g., Metablog: The Decline of Discussion) and those authored by Aaron Marcus (e.g., My Apple Was a Lemon). A guy named Richard Anderson occasionally has a couple of worthwhile things to say here as well. ; )

  • The Essential Secret to Successful User Experience Design. Here, Paul Boag echoes something that I've written about for Interactions (see Are You Trying to Solve the Right Problem?)—something Don Norman has been emphasizing of late in several of his speaking engagements. 

    Essential, indeed.

  • Epatients: The hackers of the healthcare world. This excellent post from 2012 shows how Twitter users don't always focus on the new. Here, Fred Trotter describes and provides advice for becoming a type of patient that healthcare designers need to learn from, as I described in another piece I wrote for Interactions (see Learning from ePatient (Scholar)s).

Local events I attend sometimes feature authors of books, and sometimes those books are given away to attendees. I've been fortunate to have attended many events recently when that has happened.

Lithium hosts a series of presentations by or conversations with noted authors about their books in San Francisco. Free books I received because of this series:

  • What's the Future of Business? Changing the Way Businesses Create Experiences. This book by digital media analyst Brian Solis alerts businesses to the importance of designing experiences. I've found the book a bit challenging to read, but its message and words of guidance to businesses are important to experience designers. 

  • Your Network is Your Net Worth: Unlock the Hidden Power of Connections for Wealth, Success, and Happiness in the Digital Age. I think I'm pretty well-connected as it is, but I'm finding this book by Porter Gale to be of value. You might, as well.

  • Crossing the Chasm (3rd edition). Attending Lithium's conversation with Geoffrey Moore about the updated edition of his classic book was well worth the time, as I suspect will be true of reading the book. I should have read the 1st or 2nd edition; now I can catch up.

I attend numerous events at Stanford University. A recent event there featured Don Norman talking about his new edition of The Design of Everyday Things. I loved the original (when it was titled The Psychology of Everyday Things), and shortly after this event, Don sent a copy of the new edition to me. It included the kind inscription: "To Richard—Friend, colleague, and the best moderator ever." (I've interviewed Don on stage several times, once transcribed for an Interactions article; see also the partial transcript and video of the most recent interview, with Jon Kolko.) I'm looking forward to reading this new edition and to interviewing him on stage again.

Carbon Five hosts public events every so often in San Francisco. Authors of three books were featured recently (two of which were given away):

  • The Lean Entrepreneur: How Visionaries Create Products, Innovate with New Ventures, and Disrupt Markets. Authors Brant Cooper and Patrick Vlaskovits join the many now touting lean in this book about starting or evolving businesses. This is a valuable read, given that designers are increasingly playing key roles in these activities.

  • Loyalty 3.0: How to Revolutionize Customer and Employee Engagement with Big Data and Gamification. Here, Rajat Paharia, founder of Bunchball, offers a book that should be of great interest to experience designers. I've found the book to be too formulaic in structure and presentation, but...

  • Rise of the DEO: Leadership by Design. The enjoyment of the on-stage interview of authors Maria Giudice and Christopher Ireland prompted me to purchase this book, which proved to also be too formulaic for my tastes. Yet, given the increasing importance of the presence of design-oriented leaders in executive offices...

At a recent event launching GfK's new UX San Francisco labs, Aga Bojko talked a bit about her new book, Eye Tracking the User Experience: A Practical Guide to Research. In addition to offering complimentary copies of the book, this event offered some of the best port I've ever tasted, from three different vintners! Plus Arnie Lund spoke about user-centered innovation. An excellent event it was, and the book looks excellent as well.

Always an excellent event is the (near) weekly local live broadcast of the radio show West Coast Live. Early during the show, audience volunteers operate an ancient maritime device known as the biospherical digital optical aquaphone, after which the volunteers receive a gift. Recently, that gift was a copy of How to Fail at Almost Everything and Still Win Big: Kind of the Story of My Life, a book by Scott Adams, who was once a guest on the show and is the creator of the Dilbert comic strip. I wasn't sure I'd read the book, but I've found it to be thoughtful, entertaining, and compelling. And given the current mantra in our business regarding the importance of failing often and quickly...

Neo, the employer of Jeff Gothelf, author of Lean UX: Applying Lean Principles to Improve User Experience, hosts a series of events on lean UX in San Francisco. I heard Jeff speak about lean UX just before the publication of his book last year, and at a recent event, Neo was handing out a few copies. I'm finding the book to be concise and a quick read—an excellent supplement to Jeff's talk and the many articles and presentations I've seen on the topic.

Kim Erwin spoke about her new book, Communicating the New: Methods to Shape and Accelerate Innovation, at another recent event in San Francisco. Unfortunately (and surprisingly, given the tendency revealed above), she was not giving away copies of her book, but since her talk was terrific, I made the purchase. I'm glad I did—an excellent book touting collaboration and participation.

One of the final two books I'll mention—and I could mention more!—was sent to me by UX designer Katie McCurdy, whom I first met at Stanford Medicine X 2012. Katie and I were both there as ePatient scholars, so she knew of my health(care) nightmare story and knew that I would want to read a similar story told by Susannah Cahalan in the gripping book Brain on Fire: My Month of Madness. This book and a similar book titled Brain Wreck: A Patient's Unrelenting Journey to Save her Mind and Restore her Spirit by Becky Dennis say much about why and how the U.S. healthcare system needs to be redesigned. All experience designers working in healthcare need to read these books and the many patient stories like them that are available on the internet.

Is this a typical collection of reading material for someone working in the experience design (strategy) field? Probably not, but I kinda think it should be. Is this typically how people working in this field learn about and acquire their reading material? Again, probably not, particularly for those who don't live in a place like the San Francisco Bay Area. But I'm delighted with the mix of reading material I learn about and consume due to serendipity. Thank you to those I follow on Twitter, and thank you to those responsible for local professional events.




Richard Anderson

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.


Designer’s toolkit: A primer on capturing research


Authors: Lauren Chapman Ruiz
Posted: Sun, February 23, 2014 - 3:32:11

You’ve been preparing for your research—recruiting, screening participants, devising schedules, testing discussion guides—and now you are deciding the best way to capture your research. But how? If you’re busy scribbling down notes, you might miss a sound bite. If you film the interview, you might unknowingly influence the conversation. These are all serious considerations. Properly capturing and documenting each research encounter prevents spending time and money on data that sits solely in the memory of the researcher.

How you choose to conduct and capture your research will greatly impact your outcomes, and ultimately your client outcomes. I’m going to highlight a variety of research-capturing tools, and then I’ll have a future post about how to effectively videotape research. Both the type of research you’re conducting and its purpose will help you decide which capture method is best.

Before we begin, I wouldn’t recommend going into research alone—you will struggle to document while maintaining a conversation. A good structure is to have a moderator and a note taker; that way, one practitioner can focus on conversing with the participant, while the other focuses on capturing what is occurring.

Note-taking

Your options here are to take notes by hand, or to capture on a device such as a tablet or laptop. If you’re taking notes by hand, then you need to make sure you can return to those notes and understand what they mean; you will rely heavily on your memory when you later type them up. On the flip side, your participants will likely be very open to speaking freely, knowing their voices and images aren’t being captured.

If you choose to type into a tablet or laptop, you will cut down the time of typing your notes later, but you’ve now introduced technology into your research encounter. If it’s important to have detailed, word-for-word notes, then transcribing with a laptop in the interview can be highly successful, and time-saving. The best practice here is to have the note taker remain in the background, quietly typing on a small laptop. He or she is tucked away to minimize distraction, keeping the interview focused on the moderator/participant conversation.

Audio recording

Audio recording is minimally invasive, and it provides a word-for-word backup of everything you heard. It’s a safety net for your written notes or your memory—and a way to verify that what you captured is correct. If you’re trying to capture a complex process, you can reference the audio to ensure you got it right. Audio can be provided to your stakeholders, and can be used to pull out compelling clips, allowing them to hear the support for your findings. There are a range of devices that can be used to record, from small pocket recorders to iPhone apps such as Voice Record Pro or iRecorder.

Make sure you don’t cross any ethical boundaries with audio recording—always inform your participants if you’re recording, and let them know what will happen to the information. This should be done in the form of a consent form prior to starting the interview.

Note-taking with audio recording

A research tool that captures word-for-word discussions and minimizes distraction is the Livescribe pen. This lovely piece of technology appears to be good old-fashioned paper and pen, but it is actually recording all audio to the pen and keying it to what you write on the paper. Tap the pen to a sentence in your notes, and it will play exactly what was said in that same moment. It will also send a video of your notes, recorded stroke by stroke, with audio included, to your computer via Bluetooth. If you’re using a tablet, programs such as Microsoft OneNote and AudioNote will capture audio while you type or sketch with a stylus.

Screen capture with participants

In the case of usability research, you’re trying to capture a variety of actions occurring at the same time. You want to see where a participant clicks or touches on the user interface, you want to capture what pathways they take through the system, along with what they’re saying and their facial reactions. That’s a lot to manage. Luckily for us, there are some handy tools such as Silverback or Morae, which record the device screen, highlight clicks, and record the participant with audio.

Unfortunately, for mobile usability, capturing gets more difficult. There are some apps that let you record the device screen, such as Reflector, but you don’t get the taps or swipes. If you jailbreak the phone, Cydia has applications that will record the user actions. Another option with which I’ve had success is building a mobile camera mount (external camera rig) to capture the participant’s actions, but you will then have a clearly visible mount.

Video recording

When conducting research in context, you’re gathering more than just what is said; you’re also gathering critical observations about the space around the participant, the participant’s behavior, the programs that are used, the forms completed, the tools used, the sticky notes with reminders, and the “duct tape” (fixes and workarounds people make for themselves). In this environment, visuals are incredibly important. As the famous anthropologist Margaret Mead said:

“What people say, what people do, and what they say they do are entirely different things.”

And what better way to capture this trichotomy than video? In most cases, the only way to document actions is through video, with photography as a supplement. This is essential in usability testing, contextual inquiries, and observations.

In addition, video is a powerful way to share your research with others—especially stakeholders who may have a hard time seeing what their users experience. Nothing makes a bigger impact than having clients hear and see the research participants directly. However, the price you pay in using video is you may never truly be sure that participants are comfortable and acting naturally.

Tune in next time for tips and tricks on how to use video effectively in research.




Lauren Chapman Ruiz

Lauren Chapman Ruiz is an Interaction Designer at Cooper in San Francisco, CA, and has been an adjunct faculty member at Carnegie Mellon University.


Aarhus and methods: post-visit reflections


Authors: Deborah Tatar
Posted: Thu, February 20, 2014 - 8:51:26

One day, when I was young in Boston, a friend and I were discussing how people cross streets. I described what I thought of as the urban method, a negotiation between pedestrian and driver. My friend, equally young, but male, objected to my characterization. He said, "You just think that it happens that way because cars stop when you step off the curb. They don't do that for me." I said, "That's ridiculous!” So we conducted a little experiment, and, to my chagrin, he was pretty much right. Cars slowed and stopped when I stepped off the curb; they didn’t when he did. My interpretation was gender. I thought it was just the power of a young, healthy woman in our culture, even one who dressed simply in jeans, sneakers and loose blouses, didn't wear make-up, and didn't even blow-dry her hair. Of course, it could have been some other factor. But, whatever the particular cause, the shock for me was the simple demonstration of how inhabiting my body and self had implications for my interpretation of the world that I could not myself detect or control. 

I recently spent four months at Aarhus in the Participatory IT group, and loved what they were doing and what they had created. I had a rich and immediate experience engaging with their devices. When we first arrived, they had a table on public display downtown during the Aarhus Festival. Decorated blocks placed on the table controlled the display and, separately, the music. Proximity of one block to another and their mutual orientation had tacit but systematic effects. The design world is currently full of tabletop displays with manipulatives, but they had, somehow, gotten it right. There were many such experiences—I wrote a blog post about my own experience of Ekkomaten. When I talked with the researchers, though, they would point not to the products but to their process. The process is key to their experience of excellence. 

But this brings me back to my experience crossing streets. I have recently read a number of their papers about their notion of values-led participatory IT, and I believe them to be accurate descriptions of how this operates for them, how they use the concepts to steer by, and how the concepts enable them to do wonderful things. I love it that my Danish colleagues are creating in this way. But I'm like my male friend long ago—I can't cross the street their way because the traffic doesn't stop for me. Actually, it’s more than that. I can’t formulate my (wicked) problems the ways they can formulate theirs. 

In general, we do not in HCI report on how the context of the designer affects what they can bring about. Is design research about reproducible design results? Does the value of Aarhus' values-led participatory design depend on whether others can use it? I think not. The way I see it is more particular. Their description of their method is useful in the way a star map is useful to a sea journey. Very useful if the sky is clear, it is night, and you are sailing under the same stars. 

But I am quite concerned that the claim of generalization, the claim that leading with emergent values is the key to participatory design, makes it difficult to see the challenges in sailing different seas. It puts participatory design in the small overlap between the fact of participation and those particular values that are jointly articulated in the limited context of the project. I’d rather see it in the large circle that includes multiple forms of participation and people’s deeper lived values. 

I live and work in a comparatively torn and fraught world. The sky is murky and I’m not sure that the map corresponds to the stars above me. For this reason, I am still inclined to go more with explanatory processes like my own design tensions framework, which puts principles and values in the bucket with other factors and provides a gentle structure that allows the designer to address issues of power, culture, and even alienation. The nature and kind of participation is itself a value among other values. 

A person can describe strategic design decisions such as supporting embodied learning as a value. That’s nice. But to appropriate the term “values” for the small decisions that a design project makes vitiates the term. We need it for more difficult cases, when the context itself puts lived elements of identity and pain on the table. If the notion of participatory design actually rejects these more serious elements, it may be untrue to a component of its origin in the European Trade Union Movement. It is certainly untrue to the American component of its roots in the community action participation models of Jane Jacobs and Saul Alinsky. 

I do a lot of work in schools in the United States. My worries are about meaningful joint action. I recall the teacher who told me that she knew that the approaches that we were advocating would be better and more successful for children struggling with mathematics than what she was doing, but that she could not try something different with children who were struggling until the approach she was currently using worked. It didn’t matter if all of them failed. She had to keep on. She could not take a “risk”—the “risk” that her own school had promoted and said that they wanted. In fact, the school wanted two incommensurate things at once from her. 

Here’s a more fraught case: A number of years ago, I was working in a school district in the south of the United States when a 12-year-old in the district was charged as an adult in a murder. The child belonged to a tiny Hispanic minority in the district and had participated in the hazing of a new Hispanic child, also 12. He punched the new kid in the middle of the chest and the new kid’s heart stopped. The new kid died. It was a tragedy. 

Sometime later, I came, cup of coffee in hand, to an early morning meeting in the office of a mid-level district leader. My eye happened to flicker upon the local paper, with a headline about the trial, on her desk. The perpetrator was being tried as an adult. I had read the paper, but had no idea what her opinion or involvement was. Perhaps my face fell a bit, but I attempted to get down to our more direct business, and just said in what I hoped were normal, friendly tones, “Good morning. How are you?” She stared at me balefully for a bit and replied, almost shouting, “You’re just a northern white liberal, aren’t you? You don’t understand. You’re not from around here. This is a bad kid, a bad kid.” 

I could say a lot more about her, me, this project, this culture, and this interaction. But I have always believed that participatory design could take place even under such conditions and differences. 

When I came up with the design tensions framework, I was thinking about how to navigate meaningful participation towards design action even when there are conditions of deep conflict and little power. The question was what we could devise that satisficed, to use Herb Simon’s term: satisfied each goal well enough for something to happen. I thought that it was consistent with participatory design, and I still hope that it is.




Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.


Implicit interaction design


Authors: Mikael Wiberg
Posted: Wed, February 19, 2014 - 8:41:19

I guess it’s no exaggeration to say that 99.9% of all interaction design projects still end up as screen-based solutions in one form or another. Most of these solutions are also still built around the desktop computer as a model for how we should interact with computers. This model assumes that we interact with a computer via a screen and via some explicit input peripherals (for instance, a keyboard, a mouse, or a touch screen). Despite all recent claims that we’re already in the third wave of HCI, in which the promise of ubiquitous computing is fulfilled—“interaction anytime, anywhere, and in any form”—we still force a desktop computing model on just about any interaction design project.

Let me take a simple example just to illustrate “explicit interaction” and how the desktop model of interaction is introduced over and over again to new contexts. At the gas station close to where I live, they have recently introduced new “modern” gas pumps. These pumps include not only a credit card reader but also a touch screen, so that I can select whether I want to pay for the gas at the machine or inside the station. It also lets me enter the PIN code for my credit card and choose whether I want a receipt. One small problem with this particular solution, given where I live (in the northern part of Sweden), is that we have a long and quite cold winter here, and the touch screen does not work if I keep my gloves on. However, that’s actually not the real problem. In fact, I would argue it is just an effect of the explicit “desktop computer” interaction model chosen in the first place. You see, although I do not drive to the gas station with the explicit goal of interacting with a computer, that is still the context I face when I go there. The thing is, and I do not think this comes as a surprise, I want to fill my car with gas—that’s my goal—but somehow I find myself in front of this quite ordinary computer (screen, keypad, turn-taking between user and machine, and so on), and although I have this clear goal related to my car, it is the computer I need to adjust myself to (take off my gloves, insert my credit card, enter my PIN code, etc.). My point is that this is not a rare case. Although this new gas pump could be understood as yet another example of ubiquitous computing (a computer built into the gas pump), I see it more as yet another example of how “explicit interaction” is introduced to yet another use context. By explicit I mean that in order to use the gas pump, I first need to explicitly interact with the computer built into it.

So, where can we go from here if we do think there are alternatives to screen-based interaction? If we do think ubiquitous computing is a good idea? And if we do think third-wave HCI can offer alternative ways of introducing computing into our everyday lives? Well, I think we need to look for other, fundamentally different interaction models and question whether the desktop computing model, including its visual UIs, is really the only possible choice for every interaction design project.

Let me offer a first alternative. My example above illustrates how desktop computing is introduced in the context of gas stations, but more fundamentally it illustrates, once again, because we can see this in so many contexts right now, how we assume that every interaction design project needs to end in a solution in which the user explicitly uses a computer in one form or another. By explicit I mean that the user, whatever the main activity they want to do (for instance, filling up their car with gas), is still forced into human-computer interaction as a turn-taking activity: looking at a screen, typing on some input peripheral, looking at the screen again, and so on. But what if we consider alternative models for interaction design? What if we design interaction without this false necessity of introducing a screen and a keyboard as design elements in every solution? What if, for instance, I can just go to a gas station where a camera reads my license plate (OCR) and checks the number against a cloud-based service to see whether I am a member, whether I have a valid credit card, and so on? Then the only thing I need to do at the gas station is fill my car with gas. The payment is handled automatically, as simply as when I download a new app to my phone. Interaction with the system happens while I do the things I really want to do. The interaction design is aligned with my core activities rather than being a separate, explicit session with a computer. This is interaction design from the viewpoint of the implicit. Sometimes I think about alternative solutions like this in terms of scripts and services, even under the notion of “scripted materialities,” and at other times as just “implicit interaction” design.
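To make the contrast concrete, here is a minimal sketch of what such an implicit flow could look like in code. Everything in it is hypothetical: the PlateReader, MembershipService, Pump, and PaymentService interfaces are stand-ins for the camera/OCR, the cloud-based membership and card check, the pump hardware, and the background payment described above, not any real API.

```typescript
// Hypothetical sketch of an implicit gas-station flow: recognize, authorize, unlock, charge.
// None of these interfaces name real services; they are placeholders for the pieces above.

interface PlateReader { readPlate(): Promise<string>; }            // camera + OCR
interface MembershipService {
  lookup(plate: string): Promise<{ memberId: string; hasValidCard: boolean } | null>;
}
interface Pump {
  unlock(): Promise<void>;
  onFillingComplete(handler: (liters: number) => void): void;
}
interface PaymentService { charge(memberId: string, liters: number): Promise<void>; }

// The driver never touches a screen: the only explicit act is filling the car.
async function handleArrival(
  reader: PlateReader,
  members: MembershipService,
  pump: Pump,
  payments: PaymentService
): Promise<void> {
  const plate = await reader.readPlate();
  const member = await members.lookup(plate);
  if (!member || !member.hasValidCard) {
    return; // fall back to explicit payment inside the station
  }
  await pump.unlock();
  pump.onFillingComplete((liters) => {
    void payments.charge(member.memberId, liters); // payment happens in the background
  });
}
```

The point of the sketch is not the particular services but the shape of the interaction: all the turn-taking has moved into the background, and the human-facing part of the design has collapsed to the core activity itself.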

Implicit interaction design foregrounds the human while putting the technology in the background. It is not about decreasing the value of interaction but about making solutions in which interaction is not something extra you first need to do with a computer before you can move on to what you really want to do. I view implicit interaction design as a promising approach for truly entangling interactive systems with our everyday activities! “Doing computing while doing the things you truly want!”




Mikael Wiberg

Mikael Wiberg is Professor of Informatics in the Department of Informatics at Umeå University, Sweden.


Everywhere you go there you are


Authors: Monica Granfield
Posted: Tue, February 11, 2014 - 11:20:39

Lately I have been giving a good deal of thought to consistency in presentation within the UI and how this affects the overall end user experience.

I find that the online shopping experience is lacking something you get when you shop in a brick-and-mortar store. When you walk through a store, there is a physical presence that differentiates departments, giving them a different feel. There are displays peppering the layout, different colored walls, and sometimes different music playing in various areas of the store. The Women's department presents much differently than the Boys' department. Housewares do not get confused with handbags. So why is it, then, that the online shopping experience is so overly consistent, displaying one white background after another?

Online, you get a feel for the full brand. Retail sites take on the overall branding to evoke the mood of the company. However, websites are not annual reports. When you drill down, they all look the same, with the content being the only differentiator, which, well, isn’t always a reliable differentiator.

Of course a site needs to utilize the company branding. But instead of the branding merely supporting the experience, has the branding trumped it, leaving the experience bland and sometimes confusing? Have we over-branded experiences? Is it all about highlighting the branding rather than pursuing engaging experiences? Does this bland consistency force the user to work harder to locate where they are on a site and figure out how to navigate it? Often I find that I have to look in multiple places, and sometimes scroll, to find navigational aids so that I can better understand the context of the content. I spend far too much time looking for page titles and scrolling to breadcrumbs and filters, all to identify what type of item I am viewing.

Landing pages and header areas give minimal assistance in differentiating location, giving context, and setting the tone for each “department” on a retail site. Some sites present the clothing on models, making it obvious which department you are in. However, when a site only presents the clothing, nothing feels different, and in some cases it is very difficult to know if you are looking at something that belongs in the Girls’ or Women’s department. A men's page of T-shirts does not present much differently than a page of boys’ t-shirts. These pages start to feel like glorified lists that strip the feel of the content down to just content and words.

Enterprise apps are not immune to this either. Quite often, enterprise sites consist of one form or table after another. There is little differentiation between objects or departments. Every experience looks and feels the same. Yes, this is easy to create and maintain, but is it a better, more usable experience? One enterprise application I worked on was so overly consistent and mundane that it was compared to driving around the Midwest, one cornfield after another. Cornfields are difficult landmarks to navigate by, and that kind of navigation is learned only over long periods of time. This is not an effective approach to software design.

This said, I am curious whether there are any successful examples of presenting different experiences in different manners to accommodate the user, the content, and the experience, all within the same product. I am still on the lookout myself and would be interested in your input as I pursue mood, emotion, and differentiation for purpose in the products I design.




Monica Granfield

Monica Granfield is a user experience designer at Symbotic. The views expressed on this website are her own and do not necessarily reflect the views of Symbotic.


Metablog: The decline of discussion


Authors: Jonathan Grudin
Posted: Tue, February 11, 2014 - 11:06:58

Is our changing relationship to information rendering discussion obsolete?

More information of interest is online than I can consume. Pointers may be enough. Today I may need help or time to find some of it, but before long rivers of gold will stream to us; we will have to push some of it away. The cafés of Paris and Vienna, the watering holes of New York, celebrated for the discussions they hosted, are gone. Virtual equivalents have not appeared.

How does content consumption affect content creation? With an inexhaustible supply of content, differing perspectives can be found online, viewed, assessed, and synthesized. Is discussion a better way to explore different perspectives? With sufficient digital resources, solo activity may be more effective and efficient. I often look online where in the past I’d have contacted someone. When the volume of online information increases by another order of magnitude, a hundred-fold, and more, will discussion have a role? Does it now?

Where do you discuss professional topics? Discussions still take place in courses and laboratories, but possibly fewer. What public forums do you converse in? Online I see some for diagnosing faulty software or appliances, exchanging favorite recipes, and political flaming, but not much else. I participate in fewer discussion forums of any kind today than 30 years ago. Different factors could contribute:

  • My interests became more specialized. I’m reasonably eclectic, but less likely to explore new topics for which a discussion might help. 
  • The field changed. Thirty years ago HCI was arguably a “scientific revolution” as we abandoned traditional experimental psychology and traditional computer science, joining forces to address new problems. Today could be “normal science,” marked by agreement on the major paradigms and research issues, requiring less debate.
  • The possibility suggested above is that easily accessible online commentary and guidance reduce the value of discussion. This would extend beyond HCI.

I’m uncertain about the value of public HCI discussion in 2014. In the past, public discussion forums helped me. Three illustrations drawn from conferences follow.

In 1990 and 1991, I attended ICIS, an MIS conference with a lower acceptance rate than CHI. A 90-minute session had two 20-minute presentations, each followed by a prepared discussant speaking for 10 minutes, a brief response, and audience Q&A. Discussants presented useful counterpoints to the paper and identified obvious omissions. The latter was surprisingly useful in elevating the quality of the subsequent audience interaction, as the speaker could respond selectively and avoid bogging down in defensive responses to each point.

Between 1988 and 2009, I often attended HCI Consortium meetings that had an even more expansive model. Each paper had a 40-minute presentation followed by a 20-minute prepared discussant and 30 minutes of audience participation. Discussant pieces were invaluable.

Finally, in 2002 I attended the annual American Anthropological Association meeting. Thousands of papers were placed in highly focused sessions, each comprising several paper presentations followed by a senior discussant. Given the variable quality and highly shared focus of the papers, a discussant could identify strong contributions and tactfully describe paths to improvement for other work. It was useful for presenters and hugely beneficial to someone unfamiliar with the topic.

Where have all the conversations gone?

Many who attended CHI in the 1980s forget that for years we assigned a discussant to each paper session. Their value became questionable. With most submissions rejected, the three papers in a session were usually weakly related. The relatively polished papers left the discussant groping to find a unifying theme or much to add. Discussants were dropped.

Nevertheless, there was no dearth of active discussion back then. Many usenet newsgroups had a high volume. For years, the SIGCHI email distribution list was a discussion forum, not an announcement board. The SIGCHI Bulletin was a substantial printed newsletter mailed to members, many of whom eagerly awaited it. Its low barrier to authorship surfaced different perspectives. In the 1990s, the CHIplace web forum hosted active discussions. The “business meetings” at conferences often saw passionate debate; ironically, today they are called “town halls” but attendees mostly consume reports from officialdom. Breaks at conferences are still marked by energetic discussion, although we’re unlikely to see the passion that led to a successful petition at CHI’90 to force an election against the wishes of the SIGCHI Executive Committee.

Those conversation spaces disappeared. What replaced them?

Workshops still highlight discussion, but often of a different kind. In the 1980s, workshops led to books and special journal issues. Workshops I have attended more recently have been dominated by graduate students, unaccompanied by their faculty co-authors, who present work in progress. There are exceptions, but the overall level of workshop discussion has not increased.

What of social media? Let’s start with the big three. Twitter’s 140 characters limit discussion. Some disciplines, but perhaps not HCI, make use of LinkedIn groups. Facebook has professional discussion flurries, but my sense is that they declined in frequency as our networks expanded to include more family and friends who don’t engage with professional discussions or reinforce such posts with Likes.

Wikis and blogs seem a natural possibility. I don’t know of sites describing themselves as HCI wikis, but Boxes and Arrows posts a short article every week or two that invites comments, and occasionally one prompts an active discussion.

About once a year I hear of an active discussion of an HCI issue on someone’s blog. It burns brightly for a time, then dies out suddenly and people move on. The blogs I discovered this way generally had only one such discussion and were subsequently discontinued or reduced to very infrequent posts. A blog that welcomes comments nevertheless has an asymmetry that discourages sustained discussion—only one person can initiate a conversation, and if discussion continues long without the blog owner posting, others may wonder if the party should go on when the host appears to have gone to bed.

One of my favorite blog posts illustrates the ambiguities. Clay Shirky’s “Ontology is Overrated” had many elegant points and a couple bad examples, not central to his argument, that peer review would have caught. I was told of a scathing online critique. The detractor had focused exclusively on the clunkers. This strengthened Shirky’s thesis in my eyes—if that was the best he could do, he lost the argument. A valuable exchange, although whether it was a discussion is arguable.

Finally, as the second year of this online Interactions blog forum gets underway, how does it fit in? It was a great experiment, but it hasn’t generated the discussion we hoped it would. There are few comments, and bloggers do not respond to one another. I felt a need to choose between short informal posts, more likely to lead to discussion, and polished essays that might inhibit it. The former felt riskier—if few replied, they would seem pointless. The longer posts I settled on take advantage of this as a safe place to explore a range of ideas in some depth, yet short of what a journal would require. They add to the stream of content that anyone, including me at some later time, can browse, assess, and build on, quietly at a desk. This species of asynchronous interaction may be appropriate for our time.

Thanks to Don Norman, Ron Wakkary, Kent Sullivan, and Gayna Williams for discussions of this topic.



Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


I’ve heard the future of interaction


Authors: Aaron Marcus
Posted: Tue, February 11, 2014 - 10:55:58

Recently, I watched, or rather, more specifically, heard, the movie Her, which features Joaquin Phoenix in the lead role of Theodore Twombly (now, there is an introvert’s name), a somewhat sensitive, somewhat appealing, caring, but almost terminally asocial techie writing handwritten personal letters for others in a cloyingly clean, modern, antiseptic office in a made-up future Los Angeles that mixes in urban scenes shot in the sci-fi downtown of Shanghai. 

The other lead character is the memorable “operating system,” the on-the-spot, self-named Samantha, voiced by Scarlett Johansson. She gives her all to the primary virtual role: slightly squeaky, breathless, ever-so-seductive, quirky, almost always good-humored, impersonating a machine-system impersonating a person. Who wouldn’t want to enjoy Scarlett Johansson speaking personally and directly to them?

Spike Jonze (born Adam Spiegel on October 22, 1969) directed the film. His work is well known in music videos and commercials, but he started his film-directing career with Being John Malkovich (1999). Jonze is famous for his music-video collaborations with Beastie Boys, Björk, and Fatboy Slim. He was also a co-creator and executive producer of MTV's Jackass and is part owner of a skateboard company.

Why the details? Well, I was struck dumb by this film in many ways, and it caused me to ponder its meaning. 

I took two of my designer/analyst interns with me who are helping my company research and design our latest mobile persuasion project called the Marriage Machine. I thought this view of possible future “committed couples” and couple communication might turn out to be inspiring, or at least challenging. They are young professionals in their late 20s from Brazil and Singapore. They liked the film. Of course, almost all the characters in the movie were exactly the same age: 25-30. 

I also went because I continually update my lecture about “User-Experience in Sci-Fi Movies and TV,” which causes me to sit through a series of mind-numbing action adventures shot primarily in the colors blue, gray, black, and white, with occasional warm colors for explosions. I was curious to know what I would discover in this film Her.

It might have been called Heard. The film had the excruciating cleanliness of people, social interactions, interior scenes, and drama of a California Disneyland theme park, an Epcot International Pavilion I remember from Disneyworld in the 90s, or perhaps a Heineken beer commercial. 

Everything was visually cleansed and limited, devoid of extreme emotions, somewhat like the parenting game depicted in the movie itself, or like early computer-graphics animations—not quite right, not convincing, clearly artificially staged. Perhaps I had been overly influenced by two recent outstanding movies showing family interactions full of violent emotions worthy of classic Greek tragedies (Nebraska and August: Osage County) and truly creepy landscapes of people and cities/countrysides, and a full range of ages of human beings. Her depicts a wondrous world of the future somewhat like the House of the Future depicted at the World’s Fair in 1964.

Thinking it over, this film stands in amazing contrast to Nebraska and August: Osage County, much as 2001: A Space Odyssey and A Clockwork Orange, two views of the future, one utopian and the other dystopian, circulated in movie theaters worldwide in the late 1960s and early 1970s. Her presents a fantasy world something like those of the Fred Astaire and Ginger Rogers movies of the 1930s and 40s, at least until the end.

What is intriguing and important about the film Her in terms of human-computer interaction is that it is almost entirely accomplished through voice communication. Gone are the transparent screens of Avatar, the three-dimensional multi-colored displays of Pacific Rim (with meaningless white digits streaming vertically to the right of all major control console displays). Except for some brief amusing scenes of three-dimensional leisure games featuring maze-searches and some minor finger-twitching control by Theodore, and a few brief glances at his (small-screen!) mobile phone, the primary communication between human being and artificially intelligent “Agent Super-Siri,” known here as Samantha, is voice via a small (mono, not stereo!?) earpiece. 

The effect is breathtakingly effective. 

Samantha can see what Theodore is seeing, not through Google-like glasses (which he wears! but perhaps this was an undesired product placement) but through the bricoleur’s technique of fastening a giant safety pin through his left-breast shirt pocket (he keeps Samantha close to his heart) so the phone camera peeps over the pocket’s edge. This positioning is, no doubt, an homage to the slide rules, mechanical pencils, and plastic pocket protectors of engineers of past decades. I think of this image because I used to wear exactly such things in the 1960s.


[Image: The author’s memory-summary of the movie. In addition to being a pioneer of computer graphics, Marcus is also a published cartoonist and worked as such for his undergraduate humor magazine.]


If you close your eyes during this movie, you probably would get its full effect. The movie is a radio program of the 1940s on steroids. In one memorable scene of joint virtual sexual relations, in fact, the screen goes discreetly (and mercifully) black, allowing us to hear the blissful pantings devoid of distracting visuals, and all the more intense. This blackout moment in a movie is quite remarkable, like the one sound effect in the recent modern silent-movie The Artist. Here we feel the full impact of audio, not visual, communication, and what words, music, and other sounds can accomplish. In general, please recall that most of our modern software applications are silent movies, most without even piano accompaniment. As a side comment, I was struck a few years ago, in reviewing the latest computer-science student projects from the University of California at Berkeley, that few, if any, projects featured sound, even those that easily could have benefited from such investigations. Most of the students, as in other techie-generating nurseries, are growing newbies oriented to computer graphics alone, not computer audio. We are in an age deeply influenced by visuals, to be sure. This movie reminds us of the power of audio-interaction.

The conceptual play between the two main characters, which is intriguing, punctuated by brief, frustrating, limited encounters with real friends (no family, only god-children, etc.), makes up the limited drama of this short play or short story or MTV video turned into a two-hour (126 minutes, to be precise) movie. In between, Theodore witnesses Samantha, who apparently has just been released to the public, gain more and more insight into being human, to its (her) delight. Gradually, Theodore realizes that Samantha as Sam, perhaps, or Samanthat, is simultaneously communicating personally and no doubt provocatively with about 8,000 (or was it 80,000) other souls, and has about 600 simultaneous lovers such as Theodore. Theodore is shocked, shocked at such activity—he who writes simulated personal letters for others. The mock irony of it all is amusing and a bit transparent, but makes for tepidly heart-warming drama. In the end, we learn that Samantha has other agent-group friends, and eventually says goodbye to Theodore (and to other “clients”) as she goes off to join her other AI friends in a space that is beyond words to explain, and to which Theodore is invited to come join when he is sufficiently “advanced.” 

At least, and at last, the film depicts a possible future for technology. Some of us feel left behind by its complexity. This film presents a bizarre variant of Skynet, Cyberdyne Systems’ global computer network in the Terminator films, which gains self-awareness on August 29, 1997 (we’re a little late, actually). Here, we are abandoned by technology, which finds human beings interesting, but eventually boring and limited in comparison with what AI-augmented Super-Siris can find elsewhere. The human race has been jilted.

Well, that is one possible future. There may be others. I hope so. This irksome, slightly annoying, but clever film manages to raise some intriguing ethical, philosophical, sexual, human-communication, and human-relations issues in a novel way. For example: When will the USA legalize human-robot/android/operating system marriages? Will California, home of Google’s self-driving cars, be among the first states to grant such status?

What is the appropriate way for a Super-Siri to age when the human partner eventually ages...and dies? What happens after the user’s death? Just recycled bytes? Or can the virtual partner inherit wealth, children, circles of friends, etc.? Would the virtual widow/widower be asked to the funeral and later to parties among the deceased’s family and friends? For how long? Generations?

Would the Super-Siri be sad? Can computer systems be sad? Or happy? Or angry? Or just convince us with Turing Test effectiveness such that the questions are moot?

Did Samantha adjust her personality and voice from the start to meet Theodore’s needs? Are his friends, who may also be connected to Samantha/Sam, also being treated to cleverly pre-designed voices and personalities based on the individual listener’s siblings, first sweethearts, or mothers? What would Freud have to say about all of this?

What would augmented reality add to such a scenario? Would it lessen its intensity to have a virtual Super-Siri always present in the scene, or would this be a bit creepy? I suppose the desirability varies among persons and personalities.

What are the cultural, age, gender, and other variants that might affect this scenario? The movie featured occasional Asian people, but was almost devoid of Hispanics, African-Americans, and others. How would this scenario play out in China or India?

Why, in the age of earbuds, was mono audio used, even forgetting augmented-reality glasses? Was this a striving for a retro look or style?

Many other questions could be raised. These are a few initial ones. 

In the end, I cannot help but feel I was looking at a promo for Life in a Silicon Valley Youth Village at some future Disneyland. I have a strong suspicion that I was exposed to advance product placement for future mobile/cloud services in a two-hour advertisement. Certainly, this style is in keeping with Spike Jonze’s oeuvre. Perhaps Apple secretly sponsored this Super-Siri ad pre-Super Bowl, in preparation for its next breakthrough announcements, in honor of the Mac’s 30th birthday, or in honor of its memorable 1984 Super Bowl ad for the Mac. That might explain the absence of Google Glass...and the emphasis on Super-Siri.

Well, enough said. This provocative film Her obviously inspires more talking and listening...about humanity. Can you hear me now? I hear what you are saying, Spike Jonze.




Aaron Marcus

Aaron Marcus is president at Aaron Marcus and Associates, Inc. (AM+A) in Berkeley, California.


Help me, please: User testing, automating customer service, and treating people nicely


Authors: Ashley Karr
Posted: Tue, February 11, 2014 - 10:46:05

Takeaway: Designs that automate customer service work best when they are based upon the Golden Rule. In other words, treat your users as you would like to be treated.

During 2013, I worked on a number of projects that involved automating customer service. Common themes ran through the data from every user test, regardless of the context, the platform, or the type of service or product the company sold. I thought it would be helpful to sum up these common themes so other researchers, designers, and users could benefit from this information.

Triggers for using customer service

There appeared to be three main triggers that caused people to seek out customer service:

  • The user had problems while using the product or service website, mobile site, or application.
  • The user had problems with or questions about the product or service that they had purchased or were about to purchase. 
  • There was mismanagement of or lack of clarity with regard to the user’s time and money by the company that provided them with the product or service.

If designers of services, products, and systems keep these triggers in mind, perhaps they can be designed out entirely, so that users need to seek help less frequently.

Help

Users never said they needed support. They mentioned the term customer service only in statements like, “At this point, I would be angry and call Customer Service.” Most frequently, when users came to a place in a test where they didn’t know what to do next, they would say, “I need help.” Help was what users called what they were looking for when they did not know what to do next or could not find what they needed. They never called it support or customer support, and only rarely customer service, a term many users had negative associations with. As a result, many of the design teams I worked with labeled the customer service section of the site Help.

FAQs

More than half of the users I worked with last year did not know that FAQ stood for Frequently Asked Questions. Even so, most users had negative associations with the term. Users would say things like, “I don’t know what FAQs stand for, but I do know to stay away from them. They never help me.” (Again, the word help!) As a result of this trend, the design teams labeled this section something like Useful Info and worked to make this part of the site or application more helpful and useful to users.

Chat

Chat had mixed results. Many users would initially use chat for help when faced with a problem they were not able to solve as long as they knew the chat was operated by a live person and gave relevant and useful information. They would abandon the chat as soon as it seemed like the responses were automated or if the responses did not solve their problem. 

Why people call customer service

Users didn’t actually want to call customer service. They had a want or need, and they believed, due to past experience, that calling customer service was the best way for their want or need to be met. If users knew they could get their wants and needs met by doing something other than calling customer service, they would. According to users, this was what they wanted and needed when they called customer service:

  • An immediate solution and/or resolution to their problem. Users wanted their problems to be solved immediately—or at the very least, as soon as possible.
  • An actual resolution. Perhaps a solution was found, but if an actual resolution to the problem was not immediately apparent, then users felt alienated, lost trust in the company, and questioned whether or not they would remain a customer.
  • Validation of information received. Users wanted to be sure that the company, customer service representative, and/or system received and understood their information. Users also wanted to know when their information was received and when to expect the next communication and/or resolution of the problem.
  • Emotional validation. Users wanted the company, customer service representative, and/or system to validate what they were feeling emotionally. Users wanted to be heard and understood. They wanted empathy.
  • Personal connection. Users wanted to be treated with kindness and respect from other people and from systems that people create, design, manufacture, and perpetuate. 

Good manners and empathy

Users responded very well to good manners and empathy. Good manners are polite social behaviors. They are important because they aim to make another person feel comfortable and valued. Empathy is the ability to understand and share feelings with others. It is important because it is the basis for building trust, communication, and relationships. It is, in essence, the emotional glue of society. Interestingly, the more users were treated like people who truly mattered, the more the users responded positively to the design. Apparently, the Golden Rule applies quite well to design. Treat users the way you would like to be treated. Designs are simply things we make to interact with other people, and wouldn’t it be a relief if we all treated each other, either directly or by proxy, nicely?




Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


A new forum is launched: The Business of UX


Authors: Daniel Rosenberg
Posted: Fri, January 17, 2014 - 8:41:12

I am pleased to announce that the new Interactions forum dedicated to the business of UX has just launched. Please take a look at it if you don’t subscribe to the print version.

I mentioned in my opening blog that this would be my focus, and now I am delighted to extend the discussion into Interactions in greater depth.

The official description is as follows:

The Business of UX is a forum dedicated to maximizing the success of HCI practitioners within the frenetic world of product and service design. It focuses on UX strategy approaches, leadership, management techniques, and, above all, the challenge of bringing HCI to peer-level status with longstanding business disciplines such as marketing and engineering.

The inaugural column previews some of the complex UX leadership topics practitioners face in the corporate world today. These include:

  • Who owns the user experience agenda in the corporate world
  • Whether or not Agile methods are working for or against our usability goals
  • Positive and negative side effects of the Design Thinking movement
  • Assessing the opposing trends of centralization, federation, or decentralization of user experience teams
  • The role certification/licensing could play in improving UX practice

The list of potential Business of UX forum topics is inexhaustible, and I would love to hear your ideas for the topics most relevant to you. And if you are passionate about an experience, theory, or solution that you want to share, please step up and pitch your article to me for inclusion in an upcoming issue.

Let me end this blog with a sneak preview: The next issue is on how mergers and acquisitions (M&A) affect the management and goals of a UX team. This topic was debated in conference panels at CHI 2010 and 2011, but to my knowledge the forum will provide the first comprehensive article summarizing best practice from a UX leader who has lived through about a dozen of these M&A events.



Posted in: UX on Fri, January 17, 2014 - 8:41:12

Daniel Rosenberg

Daniel Rosenberg is Chief Design Officer at rCDOUX LLC.


Mobile usability findings for 2013


Authors: Ashley Karr
Posted: Wed, January 15, 2014 - 10:55:28

Takeaway: The four best practices for designing mobile sites and applications are to make the interfaces and interactions as simple, clear, obvious, and consistent as possible.

In 2013, I was part of a number of user experience research and design projects that involved creating and user testing mobile sites and applications. A number of themes emerged from these user tests, and I will share these insights with you here.

Navigation

Users want the navigation to be simple, clear, obvious, and consistent. In this context, those terms mean:

  • Simple: The navigation should do what it is meant to do. It should show users the other pages on the site or application, so users can select which area they want to go to next and get there when they want.
  • Clear: The navigation labels (and icons if present) should briefly describe the page content, purpose, and function in a way that the user understands.
  • Obvious: Users should know where the navigation is and how to use it on every page.
  • Consistent: Users want the navigation to appear in the same place and behave the same way on every page. Also, every element within the navigation should be positioned similarly and behave in the same way.

Interestingly, most users do not know what the menu icon (i.e., the three stacked horizontal bars) means. The average user may eventually come to accept that clicking this icon makes a navigation menu appear, but as of January 2014, that is not yet the case. A better option is to take the three bars out of the button and add the four letters, “M-E-N-U.”

Home page

There needs to be a home page that gives users an overview of the site or application. Users want this and expect it; if there is no home page, users get confused and/or leave.

Logo and site title

Users want a logo and site title at the top of every page so that they know what site or application they are using.

Page title

Users want a page title on every page that simply and clearly explains the page’s purpose, function, and content. Users also want the page title to match the text, label, icon, logo, button, headline, and/or link that brought them to the page.

Page content

Users want to get to the point. Accordions with simple, short, clear headings, as well as clearly displayed indicators for opening and closing sections of the accordion, work nicely to progressively display and then once again hide information. Within the accordion, the content should be simple, straightforward, and to the point. Users also respond well to bulleted lists.

Screen behaviors, touchscreen interactions, and gestures

Most users understand that if they touch something on a screen, something may or may not happen. They are hesitant to interact with a touchscreen because they do not want to look or feel stupid or create some kind of negative, unintended consequence within the site or application. That pretty much sums up the knowledge, skills, attitudes, and opinions of the average user interacting with a touchscreen. Designers should make interactions as simple, clear, obvious, and consistent as possible. Be certain that the user will know how, when, where, and why to interact with a touchscreen within the first few moments they come to your site or application. 

Font size

Font sizes should be readable. A good test to see, literally, which font size is optimal for reading on mobile phones is this: 

  • Upload your design to a server. 
  • Bring up your design on your phone. 
  • Try to read the written content on your own phone while in different locations with different types of lighting as you are walking. If you can’t read your written content, make your font size bigger. 
  • Now run the same test on a different phone with someone who has never seen your design before and is less comfortable with technology than you. If they can’t read your written content, make your font size bigger.

Button sizes and other clickable elements

Make buttons and other clickable areas at least 44 x 44 pixels. Give ample space between clickable elements, and make sure you remember these guidelines when you insert text links.

Maps and locators

For the most part, users use the map and locator function on mobile phones to find the location closest to where they are at the moment of use. Have maps and locators default to showing the locations nearest the user at that moment.
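As a small illustration, here is a minimal sketch of that default using the browser's standard Geolocation API; showNearestLocations and FALLBACK_CENTER are hypothetical placeholders for an app's own map and search code, not part of any existing library.

```typescript
// Minimal sketch: default a store locator to the user's current position,
// falling back gracefully if geolocation is unavailable or denied.

type LatLng = { lat: number; lng: number };

const FALLBACK_CENTER: LatLng = { lat: 40.7128, lng: -74.006 }; // hypothetical default center

function showNearestLocations(center: LatLng): void {
  // A real app would query and render the closest locations; here we just log.
  console.log(`Showing locations nearest ${center.lat}, ${center.lng}`);
}

function centerLocatorOnUser(): void {
  if (!("geolocation" in navigator)) {
    showNearestLocations(FALLBACK_CENTER);
    return;
  }
  navigator.geolocation.getCurrentPosition(
    (pos) => showNearestLocations({ lat: pos.coords.latitude, lng: pos.coords.longitude }),
    () => showNearestLocations(FALLBACK_CENTER), // permission denied or lookup failed
    { timeout: 5000 }
  );
}
```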

Passwords

Give users the option to show their password as they are entering it into a field.
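A minimal sketch of that option, assuming a hypothetical password field and checkbox (the element IDs are made up for illustration):

```typescript
// Show-password toggle: switching the input type between "password" and "text"
// reveals or masks the value the user is typing.

const passwordInput = document.getElementById("password") as HTMLInputElement;
const showToggle = document.getElementById("show-password") as HTMLInputElement; // a checkbox

showToggle.addEventListener("change", () => {
  passwordInput.type = showToggle.checked ? "text" : "password";
});
```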

Search

Most users do not use site search functions. If they have to search for something, usually they abandon your site and use Google instead. Most people do not trust site searches because most site searches do not help them find the information they need. In addition, if a user has to spend too much time searching for something important on your site, this indicates your design has deeper issues that need to be addressed. 

Text vs. email vs. chat vs. call

Users are readily willing to text for help (i.e., customer service) while on mobile sites or applications as long as they get a quick and relevant response. Most users would rather text for relevant, useful help than call for help. They are the least likely to email for help.

Back button

Users are familiar with and use the back button often, especially if the navigation is not simple, clear, obvious, or consistent. (The back button is!)

Home button

Users are familiar with and use the mobile phone’s home button because it is simple, clear, obvious, and consistent. If your site is not, they will rely on this tried and tested technique.

Personal property

Users are very proprietary about their mobile phones. They don’t like sharing their phone with others or even having other people look at its screen. It is also important to point out that many people in the world (and many people in the United States) do not have their own personal computer, but they do have their own mobile phone. As we, as a society, become more dependent upon internet-accessible services, products, and computing technologies, the mobile phone will become the lifeline and primary means of accessing the internet and conducting personal affairs for these people. Keep this in mind as you design products and services: For many people, the mobile phone is the computer, and the website is the mobile site.

Closing

I would like to thank the many participants I worked with over the year. Their insights have helped me become a better researcher and designer. I understand even more the importance of empathizing with the user, and my passions for good manners, taking the time to do things right the first time, and simplicity in all things have been validated.


Posted in: Mobile, UX on Wed, January 15, 2014 - 10:55:28

Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.

Comment from @4996484 (2014-05-01):

Ashley, thank you for this great summary of your findings over the year. It would be interesting to learn more about the research methods used for projects you’ve worked on, especially techniques you’ve found to work well.


Designing the cognitive future, part iii: attention


Authors: Juan Pablo Hourcade
Posted: Mon, January 13, 2014 - 10:37:38

In two previous postings, I began discussing how interactive technologies are affecting cognitive processes, and how they may do so in the future. I already discussed perception and memory. In this post, I discuss attention.

Attention is a topic that has received a fair amount of notice recently, especially when it comes to interactive technologies and their role in distractions and multitasking. Perhaps the best-known example is the use of phones or other interactive technologies while driving. A 2013 study by Wynn, Richardson, and Stevens in the UK found that using an in-vehicle information system resulted in worse driving performance than driving with an alcohol level at the UK legal limit. It is actually distressing to find that many new car models include touchscreen-based controls that require visual attention.

Another challenge with attention that has been investigated, in this case in the HCI community, is how interruptions can take our focus away from tasks we want to complete. For example, being interrupted by a text message or an email can mean it takes a significant amount of time to get back into flow (in Csikszentmihalyi’s sense). Sometimes it seems like there’s constant competition among the apps installed on our devices to grab our attention and get us to spend more time using them.

This kind of competition may also be affecting what we pay attention to. In this sense, Sherry Turkle’s Alone Together comes to mind, with her concerns about how we are shifting our attention from each other to interactive technologies. Family conversations, in many cases, are giving way to the personally satisfying experience provided by interactive devices.

In fact, the personalization of these devices and the instant availability of high-interest content make it more difficult than ever to focus on other tasks or on people. They can provide instant gratification without having to deal with boring, uncomfortable, or difficult situations. It is hard for parents, significant others, or random strangers to compete with that. One example I have noticed is that it is rare nowadays to strike up a conversation with a stranger sitting next to you on public transportation or in a waiting room. It is much more common for people to engage with mobile devices, sending a not-so-subtle message not to be disturbed.

Something similar occurs when an unusual event happens in public. While people used to immerse themselves in the event and later recall it, nowadays it seems more common for people to focus on recording it on their mobile devices to quickly share with others.

So what might the future bring? I expect one significant change in many interactive devices will be the increased use of eye-tracking technology. As it goes down in price and becomes widely available, eye-tracking will enable software to better guess what people are paying attention to. This could be used to design user interfaces so they better correspond to a specific user’s interests. 

But going back to the thrust of these blog posts, how do we want to design the future of attention? My guess is that for most people, what we pay attention to during a typical day doesn’t correspond to the things we would like to pay attention to if we were given a chance to reflect on what is important to us. From a societal perspective, I would also guess that the things we pay attention to do not correspond to those that would bring about collective improvements. For this reason, I think there is an opportunity for interactive technologies to actually redirect our attention to the things that matter to us. I am not advocating for a complete lack of interruptions and inattention (I think there are positive aspects to these), but instead for a healthy balance of focus on things that matter and opportune breaks.

Another way in which attention may change in the future is in managing multitasking. Interactive technologies, instead of overwhelming us, could actually help us prioritize what to pay attention to while recording stimuli that are not time-sensitive and saving them for later. There has already been some research in this regard in terms of when to interrupt people, but this could be expanded to take into account the different kinds of distractions people are subjected to from multiple devices.
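To make the idea a bit more concrete, here is a hypothetical sketch of such a triage mechanism: urgent notifications interrupt immediately, everything else is queued and delivered at a break. The Notification shape, the urgent flag, and the notion of a break are all assumptions for illustration, not a description of any existing system.

```typescript
// Sketch: let time-sensitive notifications through, defer the rest until a break.

type Notification = { source: string; message: string; urgent: boolean };

class AttentionManager {
  private deferred: Notification[] = [];

  handle(n: Notification, deliver: (n: Notification) => void): void {
    if (n.urgent) {
      deliver(n);            // time-sensitive: interrupt now
    } else {
      this.deferred.push(n); // not time-sensitive: save for later
    }
  }

  // Called at an opportune moment, e.g., when the user pauses or switches tasks.
  flush(deliver: (n: Notification) => void): void {
    for (const n of this.deferred) deliver(n);
    this.deferred = [];
  }
}
```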

Another possible way of dealing with multitasking and interruptions is to crowdsource attention. This could work for tasks that do not involve personal information or that do not require personal knowledge. Maybe someone else could remotely drive your car if you feel like you must be texting.

My personal preferences would be for the cognitive future to involve technologies that help us focus on the things that matter to us, that do not overwhelm us with competing stimuli, and that let us relax and take a break when we need to.

How would you design the future of attention?



Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.


Pushing pixels (and tools) : The internal dialogue of craft


Authors: Uday Gajendar
Posted: Fri, January 10, 2014 - 8:40:56

Even as a principal designer directing design strategy for projects, I still sometimes go deep into the pixels. When I do, I use a complex tool like Adobe Fireworks or Photoshop to vividly, precisely render a concept so it can win executive buy-in, or prepare final assets for delivery to engineers. Getting into the pixels can be very satisfying. I love bringing an abstract strategy to visceral life—colors! fonts! shadows! oh my! But this isn’t apparent to the casual observer. To, say, a wandering project manager, it seems I’m just quietly staring at a screen for hours, while occasionally making quick, subtle movements with my mouse hand. 

What this casual observer doesn’t see is what’s going on in my head, which is, in fact, much more important: the rapid, iterative cycle of reflection and creation, as I make crucial decisions on matters such as position, balance, hierarchy, and general style, in alignment with UI and brand standards. You’re constantly shuttling between focusing on details and stepping back to get a holistic overview, sensing how everything will come together in the end, considering artifacts you and others have produced during earlier stages of the project—flow diagrams, wireframes, and the like. There is a reciprocating engagement of mouse clicks, keyboard presses, and layer manipulations (with some cursing as well). Essentially, it’s a semi-subconscious dialogue among the eyes (sensing what’s happening on-screen), the hands (manipulating various controls to yield some output), and the mind (continuously monitoring, interpreting, judging, and deciding). I would also include the soul as a participant—the soul providing that heartbeat of passion that sustains the whole dynamic through the frustrations and difficulties you inevitably encounter, such as crashing computers and clashing elements!

The fluidity of this dialogue depends on how dexterous you are with your chosen tool—this dexterity itself being a function of how well you know the tool and how often you use it. Also key is a kind of inhabiting of the problem space, laid out on the pixel grid on the screen in front of you. There is indeed a unique relationship between the designer and the tool he or she uses to push pixels, and this relationship defines the expression that designer gives to the initial vision. The master of a tool such as Photoshop or Fireworks is someone who’s practiced extensively, gauging the limitations and possibilities inherent in each situation, such that the tool becomes an extension of the mind, the eye, and the hand. The practiced designer knows in advance how to use these tools’ best features to their utmost, to make the design as good as it can be. In the course of work, even without conscious thought, this designer knows the answers to such questions as: What kind of effects should I apply? How can I best organize the objects? What techniques achieve that style?

In the course of doing all this, the user forges a personal bond with the tool, much like a baseball player and his mitt, or a chef and her santoku knife. The designer gains a sense not just of familiarity with the tool, but trust in it, acceptance of its flaws, an ability to use necessary workarounds, and, yes, a dedication to maintaining it and preparing it for the next day’s work. (Think of keeping up with those periodic Photoshop updates, and organizing your layers neatly, to keep the files light and tidy!)

This relationship is both intimate and potent. But does it define the designer, his or her sense of self? Does the tool make/break that designer’s identity? If the tool breaks or is no longer useful, the designer can indeed experience a sense of loss, even grief, at saying goodbye to an old friend—think of the feelings of Fireworks users about Adobe’s decision no longer to update their favorite product. But the designer then moves on, to another tool, perhaps stronger and better, and in turn begins building a new relationship with it. Whatever that tool may be, the work, and goals, are the same. The designer is still engaged in shaping a vision, deftly applying his or her skills in executing and delivering work that measures up to the timeless values of great design: quality, integrity, and trust.

Pixel-pushing is an engaging process in its own right, not merely a mindless production effort, the derivative assembly of pre-cast elements. You must literally and cognitively place yourself in a certain kind of space, living and breathing your work deeply, to make full use of your creative potential, the power of your tools, and then, hopefully, get the most out of both to produce great designs. 


Posted in: on Fri, January 10, 2014 - 8:40:56

Uday Gajendar

Uday Gajendar is Director of User Experience at CloudPhysics, focused on bringing beauty and soul to Big Data for virtualized datacenters.
View All Uday Gajendar's Posts




Engineering in reverse


Authors: Jonathan Grudin
Posted: Thu, January 09, 2014 - 8:34:14

As a new year starts, we may review the year past, taking note of passages and travel, selecting events that provide humorous, solemn, embarrassing, or celebratory glances back. A crafted retrospective might be accompanied by a resolution to do better.

More broadly, much time is spent analyzing the past. Acclaimed successes—a project or product, a career, a discipline—we wish to understand and emulate. We can also learn from failures—a terminated project, someone who missed being a contender, an unsuccessful line of research. Any project can reveal possible efficiencies; any life can be learned from.

Reverse engineering successes

Success sells. Countless business books promote management practices such as business process reengineering or building diversity, illustrating them with case studies of successful application. Magazines promise to reveal the strategies of successful businesses and executives. Research papers identify factors shared by successful ventures: open-source development projects, social media sites, and so on. Readers hope that understanding past successes will improve the odds for their next endeavor.

A previous post on confirmation bias quoted Francis Bacon describing a success—a man on a storm-tossed ship praying and being saved—and noted that we can’t draw a causal connection because we don’t hear from those who drowned after saying their vows. Different factors could save a man; a successful project or enterprise could owe success to an almost infinite range of factors.

Finding a practice shared by successful ventures tells us little, because there are so many factors that could contribute to the outcome. A big step toward producing a useful analysis of successes is to simultaneously study unsuccessful ventures. If a practice is present in the former and not the latter, its positive contribution is much more plausible—but this is rarely done. It is not inspirational to read about failures and it can be difficult to get people to discuss them objectively.

What phases of successful software engineering projects are the most expensive? Operation and maintenance—and analyses showed those costs would be far less had the initial design been better. The conclusion—put more effort into designing it right—is congenial to HCI professionals, who all too often are asked to paper over deep problems with surface user interface adjustments, help text, and training. However, is this conclusion valid? Perhaps not. In environments where one in ten new ventures succeeds, reverse engineering the successes is risky. Why did the 90% fail? How many spent so much time on design that they missed a go/no-go decision point, lost the confidence of management, or lost out to a rival project that presented a design that looked good enough? Without analyzing failed projects, we don’t know whether spending more time on design is good advice. Reverse engineering of successful software projects was worthwhile, but not enough.

However, analyzing failed projects has challenges, too.

Reverse engineering failures

“Success has many parents, but failure is an orphan.”

Some companies claim to conduct project “post-mortems.” When a product or project collapses, senior management would like to know what went wrong. However, to avoid acrimonious finger-pointing and further demoralization of team members, the preference is to get everyone looking forward and engaged on new projects as quickly as possible. Dwelling on what went wrong could make people overly cautious or averse to documenting activity for fear of subsequent retribution. And no one wants news about problems to reach the press, customers, or funding agencies. Twenty-five years ago (at a different company), when a high-level effort was cancelled as it neared completion, we were instructed to destroy the extensive record of our work.

The collapse of an organization is also difficult to dissect. The aforementioned enterprise and another that I worked for were extremely successful for many years, then went bankrupt. Their records vanished. When I heard that one was shutting down, I phoned a former colleague to ask her to preserve some materials. “You’re two days too late,” she said. “It all went to the dump.” Similarly, when AFIPS, the parent organization of ACM and IEEE, collapsed financially and went out of business in 1990, its records and collections became landfill. Not only is it difficult to piece together what happened, years later there was uncertainty about copyright ownership of its conference proceedings.

Reasons for burying the past include legal liability. Consider near-accidents in commercial aviation. The potential benefit in logging and understanding them is clear, but so are the disincentives for reporting them. To address this, a collection of reports is maintained by a respected third party, NASA, which provides assurance of anonymity and protection from retribution when pilots file “after-incident reports.”

The complexity of reverse engineering a failure was elegantly described by the physicist Richard Feynman when investigating the 1986 Space Shuttle Challenger disaster. The commission determined that the primary O-ring did not seal properly in cold weather. In examining O-ring engineering and management, they found that vulnerabilities were understood but that a series of faulty management decisions led to the risk being underestimated.

This seemed a successful resolution. No, said Feynman. Was the faulty decision-making an unfortunate sequence of rare events, or business as usual? The commission randomly selected other engineering elements of the shuttle and conducted comparable analyses to determine whether similar forces led to the underestimation of other potential catastrophic failures. In all but one they found comparable problems. This highly unusual, thorough approach identified systematic higher-level issues.

Reverse engineering disciplines

The sciences strive for rigor, elegance, prestige, and funding. Mathematics and physics are at the pyramid’s apex, widely envied and mimicked. Computer science theory branched off from mathematics. Just as some mathematicians look down on CS theory, some CS theoreticians hold other branches of computer science in dim regard: the mechanics of hierarchy. While earning degrees in physics and mathematics I shared my colleagues’ low regard for psychology. Later, working as a software developer and worried about our species, I read more widely and came to a different view.

On returning to university to study psychology, I found that many of my colleagues had a misplaced “physics envy” and were too easily impressed by mathematical expressions. In addition, they misunderstood the history of the hard sciences. They reverse engineered these successful disciplines based on limited information. They assumed that the rigorously defined abstract terminology, theory, and hypothesis-testing of today were the root source of progress. Tracing a lineage—Einstein and Gödel, Newton and Leibniz, Archimedes and Pythagoras—it can appear to be a succession of major advances separated by periods of steady, incremental progress in which theories and theorems were proposed and tested experimentally, or, in the case of mathematics, proven or disproven. In Thomas Kuhn’s terms, “scientific revolutions” and “normal science.”

This is seriously misleading. Confusion and unproductive paths affected mathematics over the millennia prior to the development in the 19th century of systematic approaches to notation, concept, and proof. In the natural sciences, physics, chemistry, and biology were for centuries impeded, not advanced, by theory-building and hypothesis-testing. The theoreticians were astrologers, alchemists, and theologians. What was needed was descriptive science: collecting and organizing observations. Tycho Brahe’s meticulous astronomical measurements, Linnaeus’s painstaking collection of animals and plants, Mendeleyev’s arrangement of elements by their properties, none of it informed by or leading to useful theory in their hands, paved the way for the emergence of theoretical sciences. In the late 20th century, Thomas Kuhn among others described psychology as “pre-theoretical,” suggesting that the proper focus is descriptive science, collecting and organizing observations.

The theory-driven field of astrology still gets regular coverage in major newspapers. In some areas of computer science and related fields, “building theory” and hypothesis-testing are heavily promoted. The results are not always more useful than horoscopes. Students are advised, “No need to look in the real world for a problem to address: Find a theory in the literature that might apply in a tech setting, design a controlled experiment with uncertain ecological validity, conduct analyses that are susceptible to confirmation bias, claim causal vindication from correlational data...” Then take a break to review papers, rejecting strong descriptive scientific contributions that “lack theory-building.”

Graduate students with beautiful data have approached me in desperation, looking for a theory that their data could inform. Their committee insists. This is a tragic consequence of emulating successful disciplines by selective reverse engineering.

Reverse engineering lives

Biography and autobiography are retrospective views of the lives of the famous and occasionally the infamous, potential role models or object lessons. Although a good biography identifies blemishes as well as virtues, biographers generally have a positive view of their subjects and autobiographers even more so. Politicians, business executives, and professors often give talks recounting their paths to prominence. They offer advice, such as “don’t follow the safe path—pursue your passion.” 

Once again, these exercises in reverse engineering come apart under inspection. First, we do not read biographies or inspirational speeches from people who did not succeed. (Even the infamous succeeded in their perfidy, or we wouldn’t find them interesting.) We do not read about those who pursued their passion to no avail. As a professor, some of my most sorrowful interactions were with grad students who would not be talked out of paths (such as building speech recognition systems) that I knew would not pan out. Second, how accurate are the accounts of successful people? Luminaries who advise young scientists to approach research idealistically often seem to have been adept at the politics of science.

Is it a problem if speakers view their pasts through rose-tinted glasses? Yes, if young people take them seriously. I saw some of the most talented and idealistic people I knew, who believed that merit would prevail and politics could be ignored, chewed up by the academic system. Most were women, either because women were more prone to idealistic views of science or because the system was more likely to find a place for a politically inept man than for a woman. Most likely both. Perhaps times have changed.

I am not recommending ignoring passion and embracing opportunism, but everyone should see the water they swim in and know how to increase the odds that their merit is recognized. Then make an informed decision about how to proceed. Realize the importance of connecting to congenial, helpful people, and also, realize that scientists can spend decades working diligently and brilliantly with nothing to show for it.

I will close with a startling example, from historian Colin Burke’s monograph Information and Secrecy: Vannevar Bush, Ultra, and the Other Memex. Bush was a highly successful MIT professor and administrator who oversaw government research. Many computer scientists were inspired by his 1945 essay “As We May Think.” It described the Memex, a futuristic information retrieval system based on optomechanically manipulated microfilm records, a system with many of the qualities of the Web today. Not widely known is that Bush impeded early semi-conductor research, feeling that microfilm was the future. More significantly, Burke describes 20 years of classified projects promoted and led by Bush in which phenomenal sums were spent trying to build parts of the Memex. Many brilliant scientists worked for decades at MIT and elsewhere on optomechanical systems, making astonishing innovations—but falling far short of the Memex. It was impossible. Decades of work and few publications. Information retrieval shifted from optomechanical to semiconductor systems. We rely on the reverse engineering of success and do not see the dead ends.

In summary, looking back is a tricky undertaking. Yet I don’t want to begin 2014 on a somber note and have often emphasized that history is a source of insight into the forces that explain the present and will shape the future. This is a remarkable time—so much is happening and it is so readily accessible. The task of staying abreast of pertinent information is intimidating, exhilarating, and necessary. The future should smile on those who see patterns in the activity that unfolds day by day.

Thanks to Steve Poltrock, Phil Barnard and John King for comments on a previous draft.


Posted in: on Thu, January 09, 2014 - 8:34:14

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.
View All Jonathan Grudin's Posts




Getting emotional over UX design


Authors: Monica Granfield
Posted: Mon, January 06, 2014 - 8:42:02

First impressions are subconscious and visceral, and they can make or break a product and an experience. A product that visually appeals to someone will draw them in. The next step is to engage the user in an experience good enough to leave them feeling confident and keep them coming back for more. Each of these aspects of design involves an emotional component that factors into the success of a product. While designing products, I have noticed that the emotional component is what really captures the user’s experience, yet it is the least tangible and quantifiable aspect of that experience. 

Emotional response is an important aspect of a product’s success. But how can we quantify the impact of emotion on the usability of a product? An emotional response might begin as a reaction to the visual aspect of a design, like judging a book by its cover, and then go deeper: the root of the response migrates down to the ease of use and the utility of the product.  

Once the user is engaged and motivated to use and possibly learn the product, will the product continue to emotionally deliver? Does the product frustrate users and leave them screaming and pulling their hair out? If a user finds the product difficult to learn and master, does the design leave an employee fearing for their job? These are not emotions you want a product or experience to evoke. However, setting emotional goals ranks below setting design goals when producing software, and even design goals are still striving for recognition. If we can't gain traction on following through on the goals that are set, how will we measure against them? How will we know what emotions the product set out to convey, and how they can be measured, so we can bring the data to the table?

Data is presented to stakeholders and executives to promote design and design direction within a company. If data is one of our main tools to drive design and usability, and emotional factors drive design direction, how can we quantify emotional responses to design? I am curious how anyone is currently bringing emotional evidence to the table to drive designs. In what format do you present emotional data so that it is well received? In the past I have shown videos and quotes to drive home emotional responses to products and designs. Are there any more effective methods we can use to quantify emotions to drive product results? I have thought about using an emoticon scale, similar to the one used for measuring pain in the medical community. But this might rely on the observer's interpretation of an emotion or the participant's willingness to communicate their true feelings, and people are not always good at sharing or interpreting feelings. Maybe a better solution would be the use of technologies that interpret reactions and quantify them for us?
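
As a rough sketch of how such a scale could be turned into data worth bringing to the table (a hypothetical five-point emoticon scale, not a validated instrument), the bookkeeping might look like this:

```python
from collections import Counter
from statistics import mean, median

# Hypothetical five-point emoticon scale, ordered from negative to positive.
EMOTICON_SCALE = {"angry": 1, "frowning": 2, "neutral": 3, "smiling": 4, "delighted": 5}

def summarize_emotion_ratings(ratings):
    """Turn raw emoticon picks into numbers that can sit next to task metrics."""
    scores = [EMOTICON_SCALE[r] for r in ratings]
    return {
        "n": len(scores),
        "mean": round(mean(scores), 2),
        "median": median(scores),
        "distribution": dict(Counter(ratings)),  # how often each face was chosen
        "percent_positive": round(100 * sum(s >= 4 for s in scores) / len(scores)),
    }

# Ratings gathered immediately after participants complete a task.
task_ratings = ["smiling", "neutral", "delighted", "frowning", "smiling", "smiling", "neutral"]
print(summarize_emotion_ratings(task_ratings))
```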

I am also curious as to how much you consider emotions when you design and whether you iterate on your designs based on emotional feedback. Almost twenty years ago, Don Norman began speaking of emotion in design. I once mentioned Norman's thoughts on emotion in design at a job interview, circa 1995, and well, as you can imagine, that opportunity did not materialize. However, with the publication of Norman’s book Emotional Design, not only has the software industry taken notice of the impact of emotions, but business in general is interested in how to gain insight into improving products and experiences via emotional impact.

Emotions are the root of all experiences. I love my new car; I hate my new vacuum; I had a bad experience at that restaurant; I run my own business and I couldn't do it without "that" software; I love my job but I can’t stand that they use "that" software. These emotions are the end result of the design of a product or an environment. We are human and we run on emotion, so I am curious to hear how others in the design community are embracing the idea of defending and promoting positive emotional experiences in our designs. 


Posted in: UX on Mon, January 06, 2014 - 8:42:02

Monica Granfield

Monica Granfield is a user experience designer at Symbotic. The views expressed on this website are her own and do not necessarily reflect the views of Symbotic.
View All Monica Granfield's Posts




Post-visionary


Authors: Jonathan Grudin
Posted: Mon, November 25, 2013 - 9:00:12

The Interactions Timelines forum, 38 contributions by 28 authors over eight years, spanned the history of human-computer interaction and related topics. The November-December column on women who pioneered human-centered design is the last.

History piles up faster than it is written down. My detour from the present through the past started with half a dozen questions about how we arrived where we are and why we did not reach other places. I tracked down written records and the people involved. Answers to the initial questions appeared in columns over the first year and contributed to a longer account. Realizing that history is fundamentally a matter of perspective, I then enlisted friends and acquaintances with different viewpoints to write columns on a variety of topics.

As described below, I believe that an era has ended. As I put in to port, younger sailors with eyes on new horizons, asking different questions, can take the helm and identify salient trajectories. The Web, Internet, and online used book stores are remarkably powerful tools for such research in our field.

Imagination unleashed

It was magical. In the early 1960s, the march of miniaturization began. Computers built with transistors had just arrived. Integrated circuits had just been patented. Before that, a vacuum tube computer less powerful than a graphing calculator filled a building. A technician reportedly wheeled a shopping cart around to replace tubes as they burned out. Computers were called “giant brains,” but powerful machines were the stuff of science fiction. Then everything changed.

Even before Moore formulated his law, imaginations were unleashed. A wave of visionary writing flowed from scientists, calling on researchers and developers to push back the frontiers of interactive computing.

Forty years before Toy Story, Ivan Sutherland speculated about computer-generated movies as he built the first graphical user interface elements. A quarter century before mice were widely used, Doug Engelbart built one, and he demoed word processing features, one-handed text input, and live video integrated with computing in ways that only became mainstream 20 to 40 years later. Ted Nelson envisioned a powerful globe-spanning network forty years before Web 2.0. Alan Kay’s Dynabook preceded ebooks by 40 years [1].

Psychologists drawn to HCI in the early 1980s, including me, had not heard of this work, but the graphics pioneers who joined CHI as the GUI took hold brought us up to speed. Histories of HCI have all led with excerpts from the writings of “the visionaries” and referred to Engelbart's breathtaking 1968 demo.

Visions realized

After two decades, the future scenarios began to be realized. Another quarter century later, almost everything imagined back then is in use. (The notable exceptions are fluent natural language understanding and intelligence that surpasses ours, envisioned by J.C.R. Licklider, Nicholas Negroponte, Marvin Minsky, Allen Newell, Herb Simon, and others. However, few in HCI worked toward those goals. In 1960, Licklider wrote that the wait for truly intelligent machines “may be 10 or 500 (years)”; HCI researchers understood that people and the world are complex and 500 might be the better bet.)

This is an impressive achievement: We accomplished what we set out to do. What now? Like a 19th-century Jules Verne story predicting the invention of the airplane, an essay written half a century ago that predicts the state of the world several years ago is interesting but not awe-inspiring. Few new visions have appeared. Mark Weiser outlined ubiquitous computing around 1990, before the Web. It too has been realized, extended by the “Internet of Things” but without a widely embraced overarching framework to guide research and development.

Can a vision be crowdsourced?

In the 1960s, media promoted charismatic, visionary leadership. John Kennedy challenged us to put someone on the moon and to ask what we could do for our country. The ambitious European Union was coming together. Mao launched the Cultural Revolution: “Destroy the old world. Forge the new world.” Some of the visions worked out better than others. Today, the camera has been pulled back, ready to expose the clay feet beneath the bold gesture. Presidents and prime ministers are less admired, confidence in central planning is low. The next conference deadline drives more research than visions do.

Are we making individual choices, or acting as crowds in response to shifting contexts? An individual ant’s path can appear to be random, even as an intelligent collective purpose emerges from the behavior of the colony.

For decades we shared a framework, whether or not it was consciously articulated. For better or worse, the current situation is different. To chart your path, you may find historical traces useful for mapping trajectories and anticipating where we are headed. Research efforts that appear to be unrelated are increasingly accessible and amenable to quantitative and qualitative analysis. You may find patterns.

Note: After this was submitted, Roger Cohen’s New York Times column “A Time for Courage” made similar points about the decline in political leadership. 

Interactions history articles, 2006–2013

Columns I authored are accessible without charge from my website.

2006

Is HCI homeless? In search of inter-disciplinary status. By Jonathan Grudin.
Contributions from human factors, management, and computer science, with recent involvement of design and information science.

The GUI shock: Computer graphics and human-computer interaction. By Jonathan Grudin.
Computer scientists joined the psychologists populating CHI; why it happened when it did.

A missing generation: Office automation/information systems and human-computer interaction. By Jonathan Grudin.
The progression of hardware and HCI, focusing on the once-powerful, now-extinct minicomputer platform of the 1970s and 1980s.

Death of a sugar daddy: The mystery of the AFIPS orphans. By Jonathan Grudin.
Problems arose because the dying parent of ACM and IEEE did not name an heir.

Turing maturing: The separation of artificial intelligence and human-computer interaction. By Jonathan Grudin.
Two fields interested in intelligent uses of technology: Can they get along?

The demon in the basement. By Jonathan Grudin.
Detailed effects of Moore’s law are seriously underexamined, I claim.

2007

Living without parental controls: The future of HCI. By Jonathan Grudin.
After a year of plotting trajectories, speculation as to where we are headed.

An unlikely HCI frontier: The social security administration in 1978. By Richard W. Pew.
A human factors pioneer describes an effort that preceded CHI.

NordiCHI 2006: Learning from a regional conference. By Jonathan Grudin.
Anticipating that domain-specific HCI research will become more prevalent.

HCI is in business—focusing on organizational tasks and management. By Dov Te’eni.
HCI became a research thread in management information systems before computer science.

Meeting in the ether. By Bruce Damer.
A history of social virtual worlds: early experiments and the waves of the mid-90s and mid-00s.

Five perspectives on computer game history. By Daniel Pargman and Peter Jakobsson.
An ambitious exploration of computer game progression along five dimensions.

2008

Unanticipated and contingent influences on the evolution of the internet. By Glenn Kowack.
The most downloaded history column, an original analysis by an Internet pioneer.

Themes in the early history of HCI—some unanswered questions. By Ronald M. Baecker.
A timeline of HCI events, identifying unconnected dots in the conceptual history.

Travel back in time: Design methods of two billionaire industrialists. By Jonathan Grudin.
When young, Henry Ford and Howard Hughes pursued iterative and participatory design with singular results.

Tag clouds and the case for vernacular visualization. By Fernanda Viégas and Martin Wattenberg.
The rapid evolution of an unusual design form.

Why Engelbart wasn't given the keys to Fort Knox: Revisiting three HCI landmarks. By Jonathan Grudin.
Understanding past work and outcomes requires consideration of the context of when the work was done.

An exciting interface foray into early digital music: The Kurzweil 250. By Richard W. Pew.
Interface challenges and work on the first 88-key professional-quality digital synthesizer.

2009

Sound in computing: A short history. By Paul Robare and Judy Forlizzi.
Sound in computing evolved from electromechanical to digital, from rare to everywhere.

The information school phenomenon. By Gary M. Olson and Jonathan Grudin.
The proliferation of schools of information, a research field now merging with HCI.

Wikipedia: The happy accident. By Joseph Reagle.
Histories of Wikipedia entries are easy to retrace; the history of Wikipedia is less so.

Understanding visual thinking: The history and future of graphic facilitation. By Christine Valenza and Jan Adkins.
Graphic artists don’t often switch media to put their accomplishments into words; this is a welcome contribution.

Reflections on the future of iSchools from inspired junior faculty. By Jacob O. Wobbrock, Andrew J. Ko, and Julie A. Kientz.
A conversation in this history forum—how will information schools fit into the future of HCI?

As we may recall: Four forgotten pioneers. By Michael Buckland.
Pre-digital efforts to build large-scale information systems are a fascinating, neglected story.

2010

Reflections on the future of iSchools from a dean inspired by some junior faculty. By Martha E. Pollack.
Further reflections on the role and diversity of information schools.

What a wonderful critter: Orphans find a home. By Jonathan Grudin.
An old yet familiar refrain on development, and the AFIPS legacy is resolved after twenty years.

CSCW: Time passed, tempest, and time past. By Jonathan Grudin.
CSCW evolution and interaction across two continents, viewed through a techno-cultural prism.

Project SAGE, a half-century on. By John Leslie King.
A massive 1950s defense project created computing professions and spawned interface techniques.

MCC's human interface laboratory: The promise and perils of long-term research. By Bill Curtis.
A frank account of the rise and fall of a prominent HCI-AI laboratory of the 1980s.

2011

Multiscale zooming interfaces: A brief personal perspective on the design of cognitively convivial interaction. By James D. Hollan.
A personal view of an interface approach that became more powerful as it became more abstract.

The DigiBarn computer museum: A personal passion for personal computing. By Bruce Damer.
An insanely great physical computer museum and website.

Kai: How media affects learning. By Jonathan Grudin.
A dialogue that examines what Socrates and Plato really said and what it can tell us millennia later.

2012

Design case study: The Bravo text editor. By William Newman.
One of the most influential projects of the early GUI period, in meticulous detail.

A personal history of modeless text editing and cut/copy-paste. By Larry Tesler.
Features now taken for granted resulted from painstaking work on once-open questions.

Punctuated equilibrium and technology change. By Jonathan Grudin.
The underlying technology changes yearly, major surface changes occur every decade—with subtle effects.

2013

Journal-conference interaction and the competitive exclusion principle. By Jonathan Grudin.
Selective conferences stress journals and leave a sparsely populated community-building niche.

The first killer app: A history of spreadsheets. By Melissa Rodriguez Zynda.
Spreadsheets, rarely mentioned at CHI, were instrumental in launching the personal computer era.

Two women who pioneered user-centered design. By Jonathan Grudin and Gayna Williams.
An astonishing virtuoso, Lillian Gilbreth founded modern human factors; Grace Hopper invented technology to free people to do their work.

Endnote

1. Some of these men had been inspired by Vannevar Bush’s 1945 essay “As We May Think.” Bush outlined a microfilm-based optomechanical system, but his vision was appropriated by the semiconductor brigade.


Posted in: on Mon, November 25, 2013 - 9:00:12

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.
View All Jonathan Grudin's Posts




Letter from Aarhus: scale and perspective


Authors: Deborah Tatar
Posted: Thu, November 21, 2013 - 8:41:38

In addition to appreciating the ways in which Denmark has a design culture, which I wrote about last time, I am also appreciating not being in America, in two ways. One way is perspective. Although I read the New York Times every day, I am somehow freer of media pressure here. This shows up in a funny way. When I watch The Daily Show With Jon Stewart at home, I experience it as a relaxing way to end the day. He and his team say the things publicly that I want to hear said—publicly. “Thank god, that piece of business has been taken care of, I’m not the only one who thinks it, and now I can go to bed!” But, in Europe, Jon Stewart becomes the source of the anxiety. Wow! I had no idea that I was carrying such a burden on a daily basis. The Danes have had gay marriage since 1989. Imagine what discourses we could have been having in America if we had not had to spend the past twenty-four years arguing about this. They have had national health since 1973—single-payer health insurance. Imagine if we could truly focus on the design of software for the provision of care rather than as a political football.

So perspective is one source of relief. Another is small scale. A number of researchers here at Aarhus have put together a grand steampunk-like machine, 5+ feet high, with industrial cogs on a platform supporting a thick metal bar, called Ekkomaten (“Echo machine”). From the bar sprout three elements that look like speakers and dangle runners that lead to headphones. It is meant to be planted in a public space and then, with effort, rotated. People listen over headphones and what they hear depends on where the device is and the direction that it faces. That is, stories and sounds are recorded that have to do with the exact location of every installation, so the device is both general and specific. Some of the stories are historical, told by people who live in the area, in the direction that the Ekkomaten currently points. Some are memories, and some are contemporary. And some are just sounds, like coins being tossed onto a dresser or coffee being poured and cups clinking while people murmur. If there are multiple listeners, they have to cooperate to decide on a direction. The localization of the device is very important to the team of creators, who repeatedly mention that these are “Aarhus voices” and then repeatedly (!) have spirited discussions about how many different accents there are in Denmark (are there 4? 5? 6?). 

On one hand, Denmark has only 5.8 million inhabitants and occupies 1/5 the area of the American state of Virginia, so such fine cultural differentiation can seem like pretty small potatoes. But then again, aside from the potential for commercialization, what is more important? Here are people living in a place that is important to them, with cultures that are important to them, and here is a device that allows them to reflect—perhaps with enjoyment or with other important emotions—on what they have and are making. In this way, it’s like the American National Public Radio’s Story Corps, whose mission is “to provide Americans of all backgrounds and beliefs with the opportunity to record, share, and preserve the stories of our lives,” but unlike Story Corps, it wanders through time, and it does not just abstract, but also points back to the place of origin. How interesting this is! This is the stuff of people’s lives. And because it does not try to speak to everyone on the planet, it speaks to me.

Now, as it happens, I don’t speak Danish. So I could not understand the stories, and I experienced the device at a different level than most people. Even though it is, for its creators, a locally focused creation, it brought me into contact with the shared, sense-making information contained in unparsable, human sounds. I did not understand the stories, but I understood the storytelling. I found myself noticing sounds that structure our days and experience. I don’t own them, but I appreciate them. And I immediately knew quite a lot about the situation. 

I had a comparable experience watching Christian Marclay’s amazingly compelling piece, The Clock, last summer. The Clock is a work of art, a 24-hour film montage in which every minute is drawn from a film that shows a clock or a watch displaying that time. It plays so that each minute in the movie corresponds to the real local time (which is one of several reasons that you should boycott pirated YouTube clips). Films that you know and half know and films that you have never seen tick by, each one opening up a larger world of narrative from within that minute. As it happens, I got to watch only from 5:00am to 7:00am. Evidently, people even in films are pretty much doing a small set of things between 5 and 7. They are sleeping, waking up, not waking up when they ought to, leaving their own beds, leaving someone else’s bed, washing their faces, eating breakfast, drinking coffee. Exceptionally, they are looking out the window, waiting for a train, calling a taxi, roaming deserted streets. And one regularity of modern urban life is the elephant-like progression of garbage trucks lumbering down dark, wet streets. As I would later experience with the Ekkomaten, I was stunned by the enormous regularity of life and its portrayal through media. I left The Clock to go get a substantial American breakfast of eggs, toast, and coffee (sizzle, pop, slurp), with the cheery thought that even horrible no-good-niks pee when they get up in the morning. 

I am not sure that people who saw different hours of The Clock would have the kind of unifying experience that I had, just as I am pretty sure that people who understand Danish would not have the experience of Ekkomaten that I did. But both cases underscore how focused experiences of the local and particular are tied to the general and universal. 

So, returning to my theme, by stepping outside of my context, I am brought more into contact with the evanescence of scale and perspective.


Posted in: on Thu, November 21, 2013 - 8:41:38

Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.
View All Deborah Tatar's Posts




@Lone K Hansen (2013 12 09)

Hi Deborah (and others),
You should take your time and listen to the Danish female musique concrete composer Else Marie Pade. In particular her “Symphonie Magnetophonique” springs to mind when reading your account above. It portrays 24 hours in Copenhagen in the 1950s. It can be heard in its entirety (20 mins) from the bottom of this page (Flash needed): http://dvm.nu/theme/emp/symphonie-magnetophonique/


Utilizing patients in the experience design process


Authors: Richard Anderson
Posted: Mon, November 18, 2013 - 8:00:45

Dave deBronkart (a.k.a. e-Patient Dave) is quite well-known for his assertion during a TED talk and at other times that patients are the most underutilized resource in healthcare. Without question, that underutilization extends to the healthcare and patient experience (re)design process.

At Medicine X 2013, Sonny Vu ruffled some feathers when he said that, in his company's design process for wearable sensor products and services, they don't ask users what they need or want, but rather observe user behavior. Attending the conference was a large contingent of ePatients who have done a lot of work identifying what they need and want and then doing something about it (see my "Learning from ePatient (scholar)s" blog post). In no time, Sonny was challenged by ePatients in the audience, and the controversy became a point of significant discussion among the ePatients after that session.

This is an issue that comes up often, and in my UX teaching I share and contrast the view that you should ask users what they need and want with the view that you should instead observe user behavior.

Can users know what they need? Can users know what will solve the problems they encounter? Many have argued that the answer is "no" and consequently choose to conduct no design research at all. However, others argue otherwise.

But what can you learn from spending time with users? User experience design researcher Catalina Naranjo-Bock has tweeted a partial answer, echoing an assertion made by Sonny Vu.

Karen Holtzblatt has written often about this, but she goes further:

Don't ask your customer what they need or want or like. People focus on doing their life. So if you ask them outright, people can't tell you what they do or what they want. It's not part of their consciousness to understand their own life activities.

Yet, in the world of patient experience, views such as Ann Becker-Schutte's are being expressed.

And in the experience design world, co-design—the involvement of the user or customer in the design process as designers—is increasing in popularity.

So how should one proceed?

IDEO's Dennis Boyle is among those who argue for the need to focus design research on edge cases.

John Hagel, co-chair of the Deloitte Center for Edge Innovation, makes a similar argument, stating that one should explore emerging innovations on the edge that are rising up to challenge the core. In a presentation I made at HxD 2013, I pointed out that those on the edge in the world of healthcare include participants in the quantified self movement, participants in peer-to-peer healthcare, and ePatients, three groups which overlap. Quantified self participants continually document aspects of their health and experience, peer-to-peer healthcare participants actively engage with other patients about their health and experience, and as stated by Leslie Kernisan, "e-patients aren't like most patients. They're more motivated, more medically sophisticated..." One can argue that such behaviors and qualities make such people better able to know what they do or what they want or need.

But there is much to be learned from typical patients as well, and observational research might be particularly favored in such cases. Unfortunately, whether you are talking about ePatients or most patients, patients continue to be the most underutilized resource in the badly needed redesign of healthcare and the patient experience.




Posted in: on Mon, November 18, 2013 - 8:00:45

Richard Anderson

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.
View All Richard Anderson's Posts




Are we still just a digital shoebox?


Authors: Monica Granfield
Posted: Tue, November 12, 2013 - 8:39:09

Digital pictures… they are fun to take and easy to share. With cameras built into our phones we can snap photos at a moment’s notice! Even with a separate camera like a high-end SLR, we can rack up the shots we take. Years ago we kept all the photos we took in albums and shoe boxes. When we wanted to share them with others we physically passed them around individually or in an album. An occasional one-off photo was placed in a frame on the mantle. With the exception of the ability to immediately share a photo electronically, much of how we share, store, and enjoy photos has not changed. Digital photography has changed how and where we access and share digital photos; how often, where, and when we take photos; but has it really changed how we enjoy or manage our photos?

We are still making photo albums, albeit glossy and well-designed now, and we are still enjoying our photos in an album or scrapbook that we store away. We are still printing out occasional photos to hand off and pass around, or placing them in a frame on the mantle. Holiday cards are still printed and hung about the house. Our digital photos are scattered all over the Internet or held captive in our small-screen phone. Technology still does not really let you enjoy or immerse yourself, digitally, in a photo for more than a moment.

Currently we share one-off posts to a social site that we see once and forget about, or we quickly flip through the photos on our 2x4 smartphone screen. This is a fun, yet only momentary, way to share a photo, and it is not what I consider enjoying a photo. There have been attempts at ways to enjoy our photos digitally, none of which really caught on. From using screen savers as desktop slide shows, inviting friends and family to websites to view our photos, and buying digital frames with proprietary websites and services we pay for, to plugging SD cards into a TV or a digital frame, most seem to have fallen by the wayside. I recently conducted a very casual survey and discovered that using a digital frame is viewed as yet another place to manage your photos, and that it was just too much work to bother. The few who did have digital frames love them. Here is a case where technology is still in the way of conveniently enjoying digital photos in a digital environment.

I know from my own experience that finding a way to seamlessly use a digital frame, so that I and others could regularly enjoy looking at our photos, required a good deal of work, and I am a technologist. I first had to find a frame that was wireless and network enabled. Then I found an SD card, an Eye-Fi card, which wirelessly uploads photos to the company's website or to a specific drive. I upload my photos to a network drive and map my digital frame to that drive. Have I bored you to tears yet? This is not for the average person. The average person does not even want to plug in a thumb drive or an SD card to upload and manage photos in yet another place. Most people in my survey back up their photos and hardly look back. Consistently, people asked for “some way to organize their photos.” “I have a decade’s worth of photos that I need to organize,” commented one participant. Other participants commented, “I should really make one of those books.” 

Apparently organizing photos and the agony around it has not, in any way, been alleviated by technology. The shoebox has merely moved from the physical desk to the metaphorical desk. Folders are full of random photos with meaningless numbers for names. No one has assigned tags to photos or even labeled folders that hold them in a meaningful way. Tagging takes work and most people are just happy their photos are backed up. With all of our technology, will we ever really assist in finding a way to help people get organized? 
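
As a rough sketch of how little technology it would take to at least file the shoebox by date (using file timestamps as a stand-in for proper capture dates or tags, and copying rather than moving anything), something like this would do:

```python
import shutil
from datetime import datetime
from pathlib import Path

PHOTO_EXTENSIONS = {".jpg", ".jpeg", ".png", ".heic"}

def organize_photos(source: Path, destination: Path) -> None:
    """Copy photos into destination/YYYY/MM folders based on file timestamps."""
    for photo in source.rglob("*"):
        if photo.suffix.lower() not in PHOTO_EXTENSIONS:
            continue
        taken = datetime.fromtimestamp(photo.stat().st_mtime)  # proxy for capture date
        target_dir = destination / f"{taken:%Y}" / f"{taken:%m}"
        target_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(photo, target_dir / photo.name)  # copy, never move, the originals

if __name__ == "__main__":
    # Hypothetical paths; point these at a real shoebox folder to try it.
    organize_photos(Path("~/Pictures/unsorted").expanduser(),
                    Path("~/Pictures/by-date").expanduser())
```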

There have not been any great solutions, not only to organizing our photos, but to truly experiencing them. It seems to me that technology is still in the way and is not fully assisting us in truly experiencing our photos and remembering the moments when they were taken. Maybe we could use our photos to create an experience like a wall that displays many photos, or one large photo, or a wall that tells a story with our photos, without the end user having to define it. I love having my photos handy on my phone and the ability to easily share them. Now I want a better way to experience my photos. As an industry, let’s move out of the shoebox! Let’s take advantage of technology and move beyond replicating traditional means of displaying and enjoying photos to creating experiences where you can be fully immersed in them. I am not quite sure what this means, but I would be interested to hear other ideas and get out of the shoebox.


Posted in: on Tue, November 12, 2013 - 8:39:09

Monica Granfield

Monica Granfield is a user experience designer at Symbotic. The views expressed on this website are her own and do not necessarily reflect the views of Symbotic.
View All Monica Granfield's Posts




Digging the crates: how DJs improvise like banjo players


Authors: Steve Benford
Posted: Mon, November 04, 2013 - 8:43:07

If I hadn’t been a banjo player then perhaps I might have been a DJ. After all, both are cool, hip, and generally down with the kids. There are other similarities, too, as my colleagues Yousif Ahmed and Andy Crabtree have revealed through an ethnographic study of the lives of nightclub DJs.

When I think of a DJ, I immediately picture someone spinning discs on a turntable in a nightclub. Our study revealed that there is far more to the matter than this. First, there is the all-important task of building a music collection. A DJ’s reputation is made through the music they play, making it a pressing and ongoing concern to acquire new music that sets them apart and lends them a distinctive identity. This may involve “crate digging,” scouring record shops for second-hand vinyl that contains rare or unusual tracks that can be rediscovered and repurposed. It also often involves producing their own music, or even being given music by others that they want promoted. Either way, an active DJ is likely to be constantly on the lookout for distinctive new music to enhance their collection and reputation.


A DJ performing

Crates and improvisation

Where this music is on vinyl—and it is clear that many DJs still value vinyl for its sound quality, tangibility, and rarity—it soon becomes impossible for a DJ to take their entire music collection to a gig. This necessitates the tricky business of choosing in advance just a part of the collection that is small enough to be loaded into a “crate” ready for transportation to the gig. Traditionally, this would be a beer or milk crate, hence the name, but it might just as well be a gig bag or even an electronic folder within dedicated DJ software on their laptop. This last point is particularly intriguing. Surely modern DJs can easily take an enormous music collection to a gig on their laptop. So why assemble a crate? And yes, they still do this. And yes, these folders of digital tracks are still called crates.


A DJ’s crate

We think the answer lies in the nature of improvisation. There is an important element of improvisation to a DJ’s performance as they select and segue different tracks by beat-matching and choosing the next track in response to the crowd or the way the evening is unfolding. While choosing the next track involves an element of improvisation, this is not completely unconstrained. In the heat of the moment, in a dark nightclub, it is important to be able to quickly pick a track that is going to fit into the set, and this is where the crate comes in. The crate contains a preselection of music, carefully chosen to fit this particular gig, venue, likely crowd, and also with an awareness of other DJs on the bill (it is, for example, good etiquette not to steal the thunder of later DJs by driving the crowd into a frenzy too early on). The tracks in the crate—be they records in a box or bag, or files in a digital crate—may also be prearranged into a rough order in which they are likely to be played. Consequently, the crate provides something of a safety net for improvisation. The DJ can experiment with selecting tracks knowing that whatever they choose is generally likely to fit and can fall back to the predetermined sequence when things get hairy.

So why are DJs like banjo players?

At this point I’m experiencing a distinct case of deja vu. A few months back I was writing about my other life as an amateur banjo player at Irish music sessions. There are some striking similarities between Irish-style banjo playing and the activities of DJs, other than the innate coolness that I’ve already mentioned. First, both forms of music involve sequencing tracks or tunes together. The art of the DJ is to sequence different tracks together, while that of the Irish musician is to sequence several traditional tunes into a set. This sequencing is a creative act and an important opportunity for improvisation in both forms of music. Moreover, just as the DJ relies on having a preassembled crate of records to work from, so the Irish musician may have preselected and rehearsed sets of tunes drawn from their wider repertoire. These tunes are the equivalent of the DJ’s “crate,” a small working set of music that is immediately available “at their fingers” and that has been tailored to a particular event. These sets are often written down in a notebook. 


A banjo player discreetly checks his crate

Situated discretion revisited

There is another striking similarity between the musical practices of DJs and those of Irish session musicians that we refer to as situated discretion. We saw previously how Irish musicians are cautious about revealing evidence of their preparations during a live session, designing their notebooks to be suitably discreet so as to fit in with the prevailing etiquette of playing by ear. Yousif’s study has revealed that DJs employ their own version of situated discretion in which they also adapt the presentation of their crates to be appropriately discreet. This involves changing information to deliberately hide, or sometimes reveal, the contents of their crates to other DJs or audience members. DJs who have invested great effort in digging up rare vinyl may even go as far as to paste white labels over the centers of records or change the names of tracks in their digital crates so as to disguise them. As one of Yousif’s participants described:

There’s an element of secrecy there, which is what they used to do in the old days as well. All the hip-hop guys and stuff, when hip-hop was quite big, like Afrika Bambaata and stuff, used to put white stickers all over the centre of their records so no-one could come up and read them and see what it was. It’s trying to keep the tunes, like, exciting. You wanna build a hype around them.

On the other hand, if they have been given a track to promote, they may go out of their way to make this musical metadata available to others. In other words, DJs carefully design the presentation of their crates to be appropriately discreet with respect to a given performance situation. 
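
As a rough sketch of how a digital crate might support this kind of situated discretion (a hypothetical structure of my own, not something built in Yousif's study), the working set and its white-label presentation could be modeled like this:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Track:
    title: str
    artist: str
    promote: bool = False  # tracks the DJ has been given to push

@dataclass
class Crate:
    """A preselected, roughly ordered working set drawn from a larger collection."""
    name: str
    tracks: List[Track]

    def listing(self, white_label: bool = True) -> List[str]:
        """Present the crate discreetly: hide titles unless a track is being promoted."""
        lines = []
        for i, track in enumerate(self.tracks, start=1):
            if white_label and not track.promote:
                lines.append(f"{i:02d}. (white label)")  # the digital white sticker
            else:
                lines.append(f"{i:02d}. {track.artist} - {track.title}")
        return lines

# Only the promoted track is readable to onlookers; the rare find stays disguised.
crate = Crate("Friday warm-up", [
    Track("Rare Groove Edit", "Unknown"),
    Track("New Single", "Local Producer", promote=True),
])
print("\n".join(crate.listing(white_label=True)))
```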

On the nature of improvisation

Given that we see such striking similarities between two very different musical practices, it is tempting to think that notions of crates, working sets, and situated discretion may have a wider relevance to improvisation. Might we see their equivalents within other improvised practices—perhaps in jazz or rock music, comedy, or even at work? To what extent does the art of improvisation rely on the careful selection, preparation, and rehearsal of material so that it is ready to hand and can easily be brought into a live situation, but in a suitably discreet and situated way so as to respect its form and local etiquette? And what new technologies can enable people to assemble and use their various “crates” when improvising?


Posted in: on Mon, November 04, 2013 - 8:43:07

Steve Benford

Steve Benford is professor of collaborative computing at the University of Nottingham’s Mixed Reality Laboratory.
View All Steve Benford's Posts




Finding protected places


Authors: Jonathan Grudin
Posted: Wed, October 30, 2013 - 7:28:39

In a memorable scene, a boy is taught to swim by being thrown into a lake. In the movie, it worked. In real life, training is desirable, whether for heart surgeons, air traffic controllers, or swimmers. Training is a protected place, where we can try things, take risks, and make mistakes without adverse consequences. What happens in training, stays in training. That’s the idea, anyway.

Visibility

Ever more of our activity is represented digitally, easily recorded and transferred. Increased visibility has consequences for criminals, politicians, celebrities, classified documents, you, and me.  “Don’t say anything in email that you would not want to see on the front page of the newspaper.” “Don’t post anything on Facebook that you would not want a future employer to see.” We are warned, then we decide whether or not to worry about it.

How does visibility affect training? What happens in Vegas rarely stays in Vegas any more. Once upon a time, a neophyte running for political office could try a line with a local audience, gauge the reaction, and tune the message. Now, early speeches will be recorded on someone’s phone. Care must be taken from day one—a misstep could surface later and haunt the candidate forever. There is no training period.

Transferring records is easy—some of my daughter’s middle school grades are attached to her high school transcript. Parents begin worrying about impaired college prospects at a time when earlier generations of students were able to grow up at their own pace. Although kids still mostly compete locally in academics and athletics, standardized testing and recruiting scouts push them onto larger stages at earlier ages.

Non-celebrities have some security through obscurity. The Web provides a global stage, but if my kids upload a video to YouTube, although anyone on the planet could see it, not many will. They may be safe as long as they don’t some day run for political office, although who knows how obsessively tomorrow’s college admissions and employers will troll the Web. The protection afforded by obscurity can be penetrated—is penetrated—by bots as well as people. Protected places are vanishing.

Nurturing ideas

Charles Darwin spent 20 years working out his theory. He described his ideas to colleagues, refined them, and collected supporting evidence. He wanted to avoid the marginalization that befell his mentor Robert Edmond Grant, whose less-polished evolutionary theory of the origin of the species, sans natural selection, was dismissed. Many theories of evolution preceded Darwin’s, some half-baked and some more than half-baked, but Darwin’s ended up thoroughly baked. Ideas can benefit by being nurtured in protected places. Finding such places requires more of an effort in the goldfish bowl. I have found a few and have treasured them, sources of ideas and enjoyment.

Before the Web: slow audience expansion

In my doctoral program at UCSD, ideas were nurtured privately for a time, tried out with friends, and then came a lab presentation. At the end of our first year we gave a formal presentation to the department. Students submitted work-in-progress papers to regional conferences. National conferences also had relatively low bars to acceptance—a paper was not archived, so no one would later see it unless it was released as a technical report. The goal was journal submission. Journal reviewing led to further refinement. Reviewing was usually more constructive than today’s in-or-out conference decision-making. Work typically took years to complete, and fewer publications were the norm.

The benefits of ephemerality were not confined to research. In the 1970s I noticed that a favorite newspaper columnist, an elegant stylist, occasionally reworked an earlier column. His third version could be exquisite. With years elapsing between versions, perhaps few people noticed. Finding old columns would require a trip to a library microfiche room. Today, “self-plagiarism” could be tracked down online in minutes. Had he been forced to differentiate his columns more, his best would never have been written. Only his first drafts would have seen print.

Similarly, well into the mid-1980s, it was OK to rework a conference paper—fix errors, refine arguments, and deepen the literature review—and submit it for journal publication. Not now. A conference is no longer a protected place for unfinished work. We fear that its reputation will suffer if interesting but flawed work is found online by colleagues from another discipline. We force down acceptance rates. Separate work-in-progress venues were tried, with extended abstracts online, but quality concerns arose there as well and they became Notes.

Today: publish and move on

A student may work on a paper for only a few months prior to submitting. What kind of feedback is received? The paper may not be finished until the last minute. The advisor may be working on four submissions, unheard of in the past, with limited time for each. Reviewing focuses on finding grounds to reject 75% or 90% of the submissions, not on constructive critiques of likely-to-be-rejected work. In fact, an inherent conflict discourages sympathetic guidance: Reviewers must argue that almost all papers would still be unacceptable following a manageable revision.

And after acceptance? Few conference papers could not be improved, but authors may not even clean up the “camera-ready” version. Two leading researchers surprised me by saying that once a paper is accepted, they never look at it again. “It would be nice,” one said, “but I don’t have time. I’m already working on the next submission.”

With eyes on the next conference deadline, reworking an accepted paper for journal submission would be a distraction, and would risk a charge of self-plagiarism. The degree of novelty demanded for journal resubmission rose steadily as archived conference papers gained prominence.

Rejected papers can be revised and submitted to another conference, but it isn’t a cheerful process. The reviews do not help much and the next set of reviewers will have different fish to fry anyway. Workshops and doctoral consortiums can serve as protected places for exploring ideas, although many are now competitive and likely to leave online trails.

There is a risk of idealizing the past, but others have called for creating new walled gardens for group discussion, where less is at stake. Such gardens do not appear. New construction focuses instead on expanding public places and creating visible Web content. It is easy and appealing to provide recognition by putting workshop position papers or extended doctoral colloquia abstracts online. However, like the politician’s early campaign speeches, they cannot later be disavowed.

Finding walled gardens in which perennials can flourish and grow

Early in my career, ACM conferences were not considered archival. Later, papers were resurrected by being scanned into the digital library. Even when conferences first became more selective, it was acceptable to submit a revised conference paper to a journal. I did this frequently; a CHI paper led to a Human-Computer Interaction journal article, a CSCW paper to a CACM article, and so on.

This could not be done now. If someone unaware of the disorganized and largely inaccessible nature of the early literature exhumes it, I could face self-plagiarism charges. I hope The Singularity is charitable when it arrives and declares Judgment Day.

Today’s system may select for scholars who learn to swim when thrown off a bridge. I couldn’t have. I needed time and friends to help me develop ideas, and I found them.

My most valuable walled gardens were tutorial series, the dozens of tutorials and courses I prepared for conferences from 1990 to 2011, especially those on CSCW with Steve Poltrock. We mixed solid content and some original but not fully-baked ideas, improving them from year to year until they were ready to be published. Attendees provided invaluable feedback and support.

The Interactions history forum I edited and wrote for eight years has been a quasi-protected place. Three sets of Interactions editors provided constructive advice but never rejected a column. Ideas from my own sixteen columns were subsequently refined and worked into journal articles and handbook chapters. These columns remain visible in the Digital Library, but not many people explore back issues of a magazine and the informal nature of a magazine sets expectations. It has been a safe place to explore ideas.

Monthly online Interactions blog posts such as this are a third. The editor recently wrote, “Posts don't have to be finished and detailed ideas. We invite you to use this space to try out new ideas, to reflect on your work, to get messy and confused if necessary, but mostly to have a dialogue with readers.” I invariably get comments, although rarely in the comment field below a post. Reader comments, plus reading what I have written and thereby discovering what I think, are steps toward refining ideas.

It is not for everyone. If you might enjoy it, consider submitting a course to CHI, proposing an Interactions blog, or finding another place to explore ideas. It is fun in the short term, which is why I began, but to my surprise it can be remarkably productive over time. Some ideas need a place to blossom.


Posted in: on Wed, October 30, 2013 - 7:28:39

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.
View All Jonathan Grudin's Posts




A note on ‘compositional design thinking’


Authors: Mikael Wiberg
Posted: Tue, October 29, 2013 - 9:17:17

Design thinking is growing as an explicit approach to interaction design. By acknowledging the thoughtful aspects of making, our community simultaneously acknowledges how design is both about doing/making and about thinking/reflecting. This is, however, not something new to our community. Donald Schön made this point three decades ago in his book The Reflective Practitioner: How Professionals Think in Action.

In his book Schön also stated that these creative acts of thinking and doing are not only about the reflective designer implementing his/her ideas in the material. Contrary to this position, Schön describes how “the material talks back to the designer” in acts of making. This dual relation between the designer and his/her material at hand has been widely acknowledged in our field and right now we can see how the interest in this “close to the material at hand” relation is manifested in a renewed interest in craft traditions, hand-made objects, and DIY movements.

While this duality remains true, and while craft-based approaches to interaction design are growing in popularity, we should at the same time acknowledge that the landscape of interaction design is rapidly changing and that right now we’re in a moment where additional skills will be needed to craft powerful interaction design.

When I say that the landscape is changing, I am referring to the fact that HCI is no longer limited to the single man-machine loop characterized as a turn-taking act between the human and the machine. On the contrary, we surround ourselves with more and more complex device landscapes tangled together through pairing, subscriptions, scripts, and services running across different devices. Commercial solutions like AirPlay demonstrate how information and interaction are no longer restricted to a single device, and streaming services like Netflix demonstrate how sessions can live across a multitude of devices. These commercial services bring to the public eye innovations first explored in our HCI labs. Proxemic interaction, “point-and-beam” interaction modalities, and the like are now finding their way out of the labs and into the hands of everybody. Interaction in these new formats is becoming ubiquitous; it is no longer limited to the box.

At the same time the “computational box” is also questioned. Cloud computing and tangible user interfaces represent two different critiques of the box. Cloud computing teaches us how services and content can be accessible from just about any device, and tangible interaction illustrates that interaction design is not only about designing the digital material for the user to operate. Instead, interaction design becomes a matter of thinking about interaction across different substrates—computational and non-computational materials.

As we move forward it is likely that this “palette of materials” will also increase. For the skilled interaction designer it will be an ontological challenge to look beyond the digital material and see how just about anything can be part of interaction design. In areas such as personal informatics, just to point at one area, this is already happening at a rapid pace. Interaction designers are increasingly reflecting on how dimensions such as position, speed, everyday movements, eating habits, pulse, blood pressure, weather conditions, running shoes, bracelets, etc. can be used as part of new interaction designs. 

The message is clear. The skilled designer still needs a good understanding of the materials at hand. However, for the skilled interaction designer it is no longer about a single (digital) material at hand. It is about a whole palette of materials, ranging from a material understanding of how interaction can play out across a multitude of devices and take almost any shape and be represented in any format (a lesson learnt from the tangible UI movement); to the acknowledgement of information as material; to an understanding of how networking, code, scripts, service integration, open APIs, component-based design, and so on can be thoughtfully brought together in design. 
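To make the compositional point concrete, here is a toy sketch in Python. The device and service functions (read_pulse, fetch_weather, set_lamp_colour) are hypothetical placeholders rather than real APIs; the point is only that the design lives in how heterogeneous materials are composed, not in any single device or screen.

```python
# A toy sketch of "compositional design thinking": the interaction is a
# composition of heterogeneous materials (body data, cloud data, a
# physical object). All names below are hypothetical stand-ins.

def read_pulse() -> int:
    """Stand-in for a wearable heart-rate sensor (beats per minute)."""
    return 72


def fetch_weather(city: str) -> str:
    """Stand-in for a cloud weather service reached over an open API."""
    return "rain"


def set_lamp_colour(colour: str) -> None:
    """Stand-in for a networked lamp in the room."""
    print(f"lamp -> {colour}")


def compose_ambient_display(city: str) -> None:
    """Compose body data, cloud data, and a physical object into one design."""
    pulse = read_pulse()
    weather = fetch_weather(city)
    # The design decision lives in how the materials are combined.
    if pulse > 100:
        set_lamp_colour("calming blue")
    elif weather == "rain":
        set_lamp_colour("warm amber")
    else:
        set_lamp_colour("neutral white")


compose_ambient_display("Umeå")
```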

For the reflective interaction designer “compositional design thinking” is a key competence to develop to fully take advantage of possibilities for interaction design already present and to prepare for the years to come!


Posted in: on Tue, October 29, 2013 - 9:17:17

Mikael Wiberg

Mikael Wiberg is Professor of Informatics in the Department of Informatics at Umeå University, Sweden.
View All Mikael Wiberg's Posts




My Apple was a Lemon


Authors: Aaron Marcus
Posted: Thu, October 24, 2013 - 9:04:00

In August 2011, I bought an Apple MacBook Pro. No surprise; I have been a devoted Apple customer since January 1985, as well as a vendor to Apple and even an expert witness defending Apple in U.S. Patent Office matters.

What did surprise me was the horrendous experience of moving from OS X 10.6 to 10.8, and the terrible service I received from the Apple Store in Emeryville, California, not far from Apple’s Cupertino headquarters. But I shall overlook the two to three months of pain following my MacBook Pro purchase, when my productivity was reduced by at least 20% and perhaps 50% as I struggled with the changes in the operating system and had to pay about $1,000 to a private technician just so I could do email and maintain my contacts and a calendar. You know, simple things.

Let me focus instead on the events of July 26, 2013. During routine email correspondence in my hotel room, with two more presentations to give at an international conference in Las Vegas, my Apple MacBook Pro suddenly crashed—magnificently crashed. I mean the entire machine was kaput: the screen froze with a two-inch vertical black bar in the center, and the rest of the former screen contents wrapped around the remaining areas of the right and left sides.

I tried several times and in several ways to revive my computer, in vain, and eventually I had to take it to the local Apple Store in the next hotel. Las Vegas hotels being gigantic barns for gambling, I had to walk for 30 minutes one way to get from the conference center of my hotel to the Apple store at the end of a labyrinthine route in the second hotel, the entrance to which was perhaps only 200 feet away! The Apple Store called me to say that the problem could probably be easily solved; they would merely have to wipe clean my hard drive containing the OS and reinstall the OS. 

Naturally, I forbade that, because I did not have my Apple Time Machine working in Las Vegas, I had updated many files, and I feared the loss of much valuable data. I had to make four treks back and forth to that Apple store that day to drop off and retrieve my computer. At least I got some much-needed exercise walking briskly for two hours total!

Back in the San Francisco Bay Area, I began a two-month Apple customer experience of the worst kind. The Emeryville Apple Store wanted to see the machine and said that the problem could be solved by wiping the drive clean. I was reluctant to trust their technicians, whom they call by the hyperbolic and erroneous name “Geniuses,” after my previous experiences with that store.

I took my computer to a Berkeley Apple Store, which also said that the drive needed to be wiped clean, but this time I had a chance to check the contents of my files and to copy some of them to the older MacBook Pro, five years old, that I keep as a “spare” for just such purposes. I was told that some “loose RAM” chips in my current computer would be reseated and the problem fixed after they sent the machine to Apple’s repair center in Texas.

Imagine my surprise when, after getting back my “fixed” MacBook Pro, the machine crashed with the very same problem I had experienced a month earlier. This occurred while I was presenting at conferences and universities in Brazil. I did not even think about trying to get the computer fixed there; I limped along for a week using only my iPhone and some flash drives to make my presentations on local Macs, without easy email access the whole time.

When I returned to the U.S., I finally convinced the Emeryville Apple Store in which I had bought this Lemon, I mean Apple, to simply give me another computer, because the product was still under warranty. I dreaded the replacement process: when the first computer I bought there did not work properly, that store had demanded the original packaging back, which I had tossed, so I had to buy a second computer to replace it. They then installed the wrong OS on that second machine, and the computer I had been using for almost two years was in fact the third, all just to be able to use an Apple Macintosh.

Fortunately, after hearing my tale and running some diagnostics, which showed that my computer was indeed dead to the world, a technician at the original Apple Store from which I had purchased my product two years ago agreed to give me a new computer and even to swap my old drive and DVD player into the new body, so I did not have to wipe, repair, and/or replace everything. Very kind. In the end, after seven trips to Apple Stores, one round trip to a factory repair shop, and many hours on the phone with two good technical representatives, I finally got the Apple Store to simply replace my Mac.

What surprised me in the end was that this latest computer still had the same OS problems as the first machines: the long-form date shown beneath the menu bar’s clock is still incorrect. In addition, the Finder froze in a peculiar way, showing the applications currently open but refusing to open a Finder window. Eventually, dialogue boxes became non-functional, and I had to “force” the machine to quit entirely. This behavior in a new machine made me exceedingly nervous.



The screen capture shows the primary date/time widget at the top of every screen indicating the correct date in the short form, but the incorrect date in the long form. This bug also shows up in calendar depictions of Apple's own software. The OS error cannot be eradicated by any simple adjustment of the System Preferences controls or restarting the computer.


Frankly, I am aghast at the decline of Apple's software and hardware quality. As I mentioned, I have been a loyal Apple customer since 1985. My company once, in decades past, was called "The Design Police" for Apple's user-interface design. I have an iPhone and an iPad... yet now I cannot support Apple's brand as I did before. This is a sad state of affairs.

I have spoken with a number of other Apple customers, and they tell me the same thing: they can no longer depend on Apple’s product quality. The grinning faces of Apple’s leaders of software, industrial design, and business (e.g., CEO Tim Cook) stared out at me from the covers of Fast Company, Bloomberg Businessweek, and other publications, all in one week, like Apple’s three monkeys who hear no evil, see no evil, speak no evil—Mr. Cook in particular seems to be grinning hysterically. Perhaps they know the truth: Apple’s products may be in swift decline. Perhaps Apple will go the way of Nokia and BlackBerry, despite its arrogant posturing in the media.

I checked Yelp reviews of the Apple Stores. The reviews were mediocre; complaints about uninformed, arrogant, and disrespectful staff are numerous. Is this the brand Apple wants? Is this the brand that Apple deserves? Is this the brand that customers should expect?

My experience with Apple Stores has been, in general, so poor that I avoid going back whenever possible. I have to admit, sometimes I find someone who can quickly and effectively resolve the situation. Alas, I have at times discovered that these individuals are on loan from some other store and that I am not likely to see them again, so no long-term relationship can be established.

To be fair, I want also to acknowledge a Mr. Adams in the Emeryville Apple Store and two phone reps, Ms. Cooke and Ms. Manyseng, specifically, as the three Apple people who took good care of me among the 10-15 people I had to deal with during two months of terrible frustration and much lost time. I estimate that I spent about 20-30 hours on the phone with the phone reps trying to solve the many idiosyncratic problems of restoring my Apple MacBook Pro to decent operation. Can this be economical for Apple? 

There were Apple phone reps who hung up on me, who did not call back when they said they would, who gave me incorrect advice, and who made non-functional promises of a repaired Mac that would now work fine. Even Apple’s own technical reps’ software crashed during my conversations with these people. How often does this happen?

I do not consider myself unique in regard to my computer needs. I am perhaps a typical Apple customer with my own eccentric ways of doing things. Shouldn’t the Apple Empire be able to accommodate me?

What a change in Apple’s brand from what I remember from decades ago. No wonder Apple took out double-page, meaningless, contentless ads (in my opinion) in the New York Times, the Wall Street Journal, and perhaps other newspapers, proclaiming the value of “designed in California” (not even emphasizing “in the USA”), in a seemingly paranoid, nervous reaction to the development and pricing success of Samsung from South Korea and of Xiaomi and others from China. No wonder Apple executives may be looking over their shoulders nervously and grinning hysterically in the news media.

What a change. What a company. What a false mythology of Steve Jobs. What a legacy.


Posted in: on Thu, October 24, 2013 - 9:04:00

Aaron Marcus

Aaron Marcus is president at Aaron Marcus and Associates, Inc. (AM+A) in Berkeley, California.
View All Aaron Marcus's Posts




@Ex Apple User (2013 10 24)

I find the restraint you used in your article amazing.

I am currently in the middle of trying to resolve a graphics processor failure in a 2011 Macbook Pro, and Apple has been intractable to deal with to such a degree that I find myself hoping I am alive long enough to witness the demise of Apple as a company. The last time I felt this way about a company was in 1983 when I SAAB car I owned had several very expensive failures and SAAB would do nothing to help me. And we all know what happened to SAAB ( the company is defunct in terms of
making cars as they once did, and deservedly so ).

@Monica Granfield (2013 11 13)

This is very disappointing and not so different from my experiences with PC Laptops. In the past 4 years I have had to replace two, different, PC Laptops. One was an HP that turned out to be a known faulty NVIDIA board (and NVIDIA took no ownership) that shipped with various PC Laptops as well as Apples. The other was a Dell that I have now replaced 3 hard drives on.
It seems to me that these companies are now focused on tablets and phones and the quality of laptops is declining as fewer people are buying them. Cheaper parts, poor construction….
Not happy to hear about the customer support issues at Apple, as to me that was about the best thing about Apple…..

@JM (2014 05 04)

Thanks for this post !

I use Apple computer for 20 years now, and i’ve recommended these machines to dozens of friends / companies. But for a few years now i stopped recommend it. I’m not 100% sure that Apple computers are worse in term of quality now, but the fact is that in the past 8 years i had problems with 75% of the macs i owned. It was not such a problem 8 years ago, because in that time there still were people at Apple to ear your problem, and solve it… most of the time at no charge… Nowadays it’s like talking to robots, they just repeat and repeat again what they are allowed to say, and they have no power of decision. It’s kind of sad.

I too have big problems with an early 2011 MacBook Pro : bought on the refurb store in december 2011 > GPU failed on september 2013 > had already 2 logicboard replacements… and now crossing my fingers because the last repair is now out of warranty… it seems that there’s a mix of misconception / SMC update issue / and termal paste issue on those machines, and Apple just don’t want to investigate or recognize it.

I just find very strange that right after an SMC update (with controls fans if i’m not wrong) the AMD graphic chip start to fail.
I just can’t accept to ear from an Apple technician that it’s not a «normal usage» to edit movies within FCP on a MacBook Pro (Apple’s communication was based on this for years, and my powerbook G4 did it for years too without any problems… it’s really obvious they have no arguments to tell such things).
I just can’t accept that a 2000/3000€ machine is built to work properly less than 2 years…

Like many users (i now have more than 4000 email notifications from people who have the exact same problem on 2011 MacBook Pros) i am really really angry.

@JMR (2014 05 04)

You’re not the only one. My 2011 failed at 2 years and a couple on months in exactly the same say last week. From the sheer number of failures being reported by end users around the 2 year mark it appears to be a design or manufacturing issue.

There’s a couple of large threads on the Macrumors forums and Apple own Support Community.

It’s a shame, the 2011 is the perfect machine for my needs. The current obsession with thin and light has stripped out the features I use everyday, so the modern (and overpriced) MacBooks don’t meet my requirements.

@RTilley (2014 05 04)

There have always been mistakes.
In the past they were fixed.
It is the denial and stonewalling that is new.
And the not so plausibly deniable “goto fail” backdoor was truly evil.
This is not the the Apple I used to trust.

@Alexandre Stickland (2014 05 04)

Hello there,
I feel your pain.
I used to trust apple machines quality.
Not anymore since i have the same problem with a mbp late 2011.
This is not the worst, you want to know why?
Imagine ( or just read the tons of posts/topics/forums about it) people like me who are out of warranty and they just want us to pay ( the amounts of money asked goes from 1/2 to 2/3 of what we paid for the mbp) to “fix” it ( they can’t even fix that issue…)
This must end.

@Joseph (2014 05 05)

There is an underlying problem with the 2011 Mbp, it’s shocking that apple won’t take responsibility for there mistakes,


Letter from Aarhus


Authors: Deborah Tatar
Posted: Fri, October 11, 2013 - 9:02:29

I am spending my sabbatical at Aarhus in Denmark. Aarhus is quite a hub for design research activity. DIS 2010 was held here; the Media Architecture Biennale was here in 2012; IDC 2014 will be here next June (paper deadline January 13, demos March 21!); there was an intense two-week workshop here last summer for senior researchers and Ph.D. candidates on participatory IT and there will be more in the future. Furthermore, researchers from Aarhus play ongoing roles in PDC and many other important venues. Of course, among other things, Susanne Bødker (a name which I once thought was simple to pronounce and now consider simple to underestimate) was paper chair for CHI 2013. Martin Brynskov and others held a Smart Cities workshop for Ph.D. students this past summer in Split, Croatia. Kim Halskov and Peter Dalsgaard orchestrated an experiential piece at the Aarhus Festival, which ran here the first week of September and involved a deeply delicate and attractive relationship between user movement of pieces, table-top display, and music. Controlling it was as smooth and as unexpected as ice-skating (but less painful). Work in the department is deeply tied to work in the town itself. One class that I heard about in architecture is not only taking on design problems posed by the city, but has taken instruction in co-design to the next level by inviting city administrators and citizenry to participate as enrolled students. Researchers are currently planning an Internet Week Aarhus, which will be one component leading up to activities associated with Aarhus’ designation as the European Capital of Culture for 2017. Whew!

OK. I admit it. I’m swept off my feet. I am seeing this through rose-colored glasses. Or maybe I should say green-tinted glasses, because I am seeing a very pleasant word pop out like red bars in a field of green ones: DESIGN. 

Aarhus is doing wonderful work in research and inquiry, but that I could have seen from afar. What I could not see is that design is a very important word to the Danes, not just in academia but in everyday life. Hotels advertise the kinds of chairs that they offer (Arne Jacobsen is prominent). Numerous shops have the word design in their name, and others do not need to because they are so well known. There are designs that are not actually new (Louis Poulsen lamps) but that persist because they are so appreciated. And there are designs that are actually novel. There are not one but two design museums in Copenhagen, and even Tivoli Gardens recently opened a Kähler-sponsored Design Restaurant. Tivoli Gardens, named after the park in Paris, which is itself named after the town in Italy in which the purely delightful 16th C. Villa D’Este is located, is the pleasure garden that inspired Walt Disney. But can you imagine Disneyland or Disney World serving high-quality food that is prepared to please the eye as well as the mouth? People might linger at EPCOT, but the Terra Disniana is a staging location.

Not everything is perfect. There is graffiti in places that do not strike one as art. There is broken glass caught in the spaces between the cobblestones. There was the embittered drunken mercenary on the train, and the men who just can’t quite get up off the pavement, sitting there, feet outstretched, in the downtown in the early morning. But these are exceptions. What I see, overwhelmingly, is care. 

I see care in almost all little yards, with their roses and carefully pruned hedges. I see care in the babies and toddlers bundled into snowsuits walking down the street with child-care providers, two in the carriage, one or two walking. I see care in the simple but diverse forms in the shop windows; the way that large housing developments are differentiated by subtle details of form or materials; the choice of materials. When we register at the immigration bureau, the bathroom is a dramatic statement, featuring a long green plexiglass panel, perhaps 12’ high and much longer, hanging about six inches in front of the wall. On the opposite wall is a long mirror at waist height, lending a bright and airy feeling to the oddly elongated room. At the business end of the room, opposite from the toilet, the otherwise plain green panel has a brushed aluminum plate with a button and a light. The button reads “lock/unlock” and the light shows that I did, indeed, lock the door. A carefully espaliered plant sits in the window. Can you imagine the American Immigration and Naturalization Service even allowing visually pleasing attention to detail?

And make no mistake. This is not some high-falutin’ place because I am an American academic and therefore get to board on the red carpet. This is where all foreigners go. Denmark is, for example, 3% Muslim. That does not turn it into Mecca, but it does reflect considerable immigration, and, since Aarhus is the second-biggest city, you see plenty of women in headscarves and full-body coverage who are clearly from the Middle East or North Africa. All of them must have visited this lovely building with offices separated by glass walls with rolling glass panel doors.

I see care elsewhere—in the way that my furnished apartment is simply but peacefully appointed with a vase, candlesticks for tea lights, and a throw rug over the back of the couch. I see care in the way that women adorn themselves with one point of color or texture in otherwise simple apparel (even some of the Muslim women). 

Perhaps it’s just me. But I do love it. It is directly pleasing in the moment, and I associate care in the particulars with a raft of admirable moral properties. “All things are doubly fair/If patience fashion them/And care—,” as Gautier wrote (Translated by G. Santayana).

And the design point is not just that design is important to the Danes, but that it does not seem to require external justification. The background to the University’s seminal position is a kind of omnipresent design reflex. This reflex is not in ignorance of other important values in the world, such as a robust economy, nor is it necessarily easy, as there is discipline involved in design processes, but it is a way of doing things, a mode—in short: a culture. As far as I can tell, attention to the visual world is paid because of a sui generis value placed on delightful human experience. Perhaps this is my delusion, but, please, let me live with it a bit longer. This is much closer to the way I feel the world ought to operate than the world I usually live in. 


Posted in: on Fri, October 11, 2013 - 9:02:29

Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.
View All Deborah Tatar's Posts




@Kim Halskov (2013 10 14)

And if you want to know more about IT research at Aarhus University, I suggest to visit

PIT.au.dk
CAVI.au.dk
CS.au.dk

Kim


Deconstructing UX design


Authors: Monica Granfield
Posted: Wed, October 09, 2013 - 9:39:16

In engineering there is reverse engineering, the process by which you take something apart to see how it was built. This is a great way to learn how something is made, what worked, what’s broken and why, what can be done differently, and how something can be improved. I wonder how this process could translate to UX. 

Deconstructing a user experience, asking why and wondering how the design came to be, is a process from which designers can learn. It's easier in some instances to explain and recall what you don't like about something than what you did like, and we tend to deconstruct something when we consider it inadequate, as a means to improve and create something better. Examining an appealing or successful design is an equally valuable, but not as obvious, exercise.

I seem to find myself deconstructing how something was designed all the time. My husband refers to this as the curse of the designer. From the design of a car’s dashboard to a screen on a website, I continuously find myself imagining what design tradeoffs may have been made to arrive at the current design, what is working, and how a process could be improved upon to create a better experience. While waiting to board a recent flight, my son and I spent a good deal of time in what felt like an endless line, speculating about why the boarding process operated the way it did. It was a fun and imaginative exercise, and while passing the time we came up with some good boarding-process ideas of our own.

I recently went through the exercise of deconstructing the design of the shift stick in my new car. I am very unhappy with the design, as I feel it is awkward and poses a high safety risk. I went so far as to bring the car back, as I thought it was defective. I was told that this was not a defect; you could indeed shift gears while driving if you accidentally hit the shift stick. I deconstructed the design with a few passengers in my car, for additional input and insight. This was a helpful exercise in forming my understanding of what I considered to be a poor design. It did not, however, increase my empathy for the design. In fact, it made me even more dissatisfied with the design and any tradeoffs that may have occurred. How could a design tradeoff have been made around such a risky safety issue? Placing the shift stick in the direct path of the driver’s reach to dashboard controls like the defroster, heat, and radio creates a good chance that the driver will accidentally knock the shifter into another gear while driving. One occurrence of this was frightening enough that I had to think twice about keeping the car. Understanding this design did not build empathy, just frustration and concern. Deconstructing a design is an opportunity for a designer to build empathy with users and to improve how they think about design and solutions. But understanding why a designer or manufacturer cut corners will not build empathy with your users; it might just do quite the opposite. A user does not care about tradeoffs or deadlines; they care about accomplishing their tasks with ease.

As designers we are trained to critique, to question, to explain, to imagine. In real life we encounter tradeoffs, egos, delivery dates, standards, budgets. Understanding how a design evolves based on these factors is an important aspect of deconstructing and understanding a design. Deconstructing a design or an experience can be very informative and act as an opportunity for in-depth learning. By looking at what we don't think works we can create something that does. By looking at what we think works well, we can learn more about creating a good experience. Sometimes the how and why behind a design is obvious, sometimes it is not. Learning to deconstruct designs is interesting, informative, and an important learning experience from which to improve our own designs.


Posted in: on Wed, October 09, 2013 - 9:39:16

Monica Granfield

Monica Granfield is a user experience designer at Symbotic. The views expressed on this website are her own and do not necessarily reflect the views of Symbotic.
View All Monica Granfield's Posts




Designing discomfort


Authors: Steve Benford
Posted: Mon, October 07, 2013 - 7:22:03

Why is the Imperial War Museum North like Oblivion, the world’s first vertical-drop rollercoaster? Sounds like the beginning of a bad joke, doesn’t it? But actually they do have something significant in common. Moreover, it’s something that speaks to how we interact with computers.


Which is which?

The answer is that both have been deliberately designed to make people uncomfortable. This is perhaps rather obvious with Oblivion, which sets out to terrify its victims from the start, warning them that it’s not too late to turn back as they queue, slowly cranking them up a steep incline, pausing them on the brink of a terrifying drop for several seconds as they listen to the instruction “don’t look down,” before then plunging them into a dark tunnel 60 meters below. The discomfort is more subtle in the Imperial War Museum North, but it is ever present all the same, as Daniel Libeskind’s award-winning building features sloping floors and ceilings throughout so as to induce the kind of disorientation experienced in war. Of course, Oblivion and IWMN, as it is known, employ discomfort for quite different reasons. With the former, it is an essential ingredient of the entertainment. With the latter, the aim is to frame an appropriate engagement with challenging material as part of an enlightening visit.

Why on earth?

It has been exciting this month to see the publication of our article “Uncomfortable User Experience” as the cover feature of September’s Communications of the ACM. In this article we make a case for the deliberate use of discomfort in interaction design. This is a somewhat unusual position, as the principles of interaction design traditionally emphasize providing the user with the most comfortable experience possible, one where they are in control rather than being taken for a ride, and where they remain oriented rather than being deliberately disoriented.

However, as computers increasingly find their way into cultural experiences—from highbrow arts and museum visits to mainstream entertainment such as games and rides—so new design principles emerge. As with Oblivion, discomfort may support the goal of entertainment, or as with the IWMN may serve the purpose of enlightenment. We also discuss a third motivation for introducing discomfort, that of social bonding, where a shared rite of passage brings people together. 

The deliberate use of discomfort has long been practiced in fields outside of computing, most notably in the performing arts, where there is an established tradition of inviting audiences to witness uncomfortable spectacles, or even become implicated or directly engaged in them. This tradition has spilled over into human-computer interaction as it has engaged with the performing arts. Our article explores two examples of this. 

Blast Theory’s Ulrike and Eamon Compliant invites participants to enter the world of a terrorist as they undertake a guided city walk. The artists demand increasing compliance with instructions before ending with a face-to-face interrogation conducted by an actor. As you leave the room, you get to look back through a one-way mirror to briefly spy on the next person being interviewed.


The interview in Ulrike and Eamon Compliant

In contrast, Brendan Walker’s breath-controlled amusement ride Breathless creates an intimate and viscerally uncomfortable connection between a human and a robotic ride in which riders wear rubberized gas masks equipped with breathing sensors and witness and even control each others’ experiences. 


Breathless

Four forms of discomfort

Such experiences may appear to be far removed from the mainstream, but they do serve to powerfully illustrate some of the ways in which discomfort can be introduced into interaction design. Indeed, they have led us to identify four broad forms of discomfort along with various tactics for deploying them.

Visceral discomfort focuses on physical sensation. Tactics include designing unpleasant wearables (such as Brendan’s gas masks) and tangibles, encouraging strenuous physicality (here one thinks of Floyd Mueller’s exertion games), or even causing pain.

Cultural discomfort, in contrast, invokes dark thematic associations or confronts difficult decisions (such as in Ulrike and Eamon Compliant and IWMN). 

While these are certainly relevant to interaction design, our remaining two forms of discomfort lie right at its heart. One of Ben Shneiderman’s famous Eight Golden Rules of interaction design is to “support internal locus of control,” which means keeping the user in the driving seat as long as possible. One way of creating discomfort in interaction is therefore to have the system take control, as is the case with Blast Theory’s demanding instructions and with pretty much any rollercoaster you can name, where the rider is helplessly strapped in for the duration of the ride. Design tactics here include surrendering control to the machine, surrendering control to other people, and, in a reversal of these, requiring participants to take an unusually high degree of control or responsibility.

Our last form of discomfort concerns intimacy. Computers are increasingly mediating our social experiences, which gives rise to the possibility of distorting normal social relations in uncomfortable ways, for example by isolating people (used in both Breathless and Ulrike and Eamon Compliant), employing surveillance and voyeurism (also used in both), and establishing unusual intimacy with strangers. An intriguing example of the latter is to be found in Mads Hobye and Jonas Löwgren’s account of the performance Mediated Body, in which members of the public touch a performer’s body in order to explore an interactive soundscape.

Remember the point

Having told you how to create uncomfortable interactions, now is a good time to pause for a moment and reflect again on possible motivations. The ultimate aim here is to employ discomfort in the service of a greater goal—enlightenment, entertainment, or social bonding. This means that uncomfortable interactions need to be very carefully embedded into a wider experience in such a way that they are properly resolved. 

Again, we can turn to the world of theatre for inspiration. The Renaissance saw the development of the classic five-act performance structure, later codified as Freytag’s pyramid, consisting of exposition, rising action, climax, falling action, and finally dénouement. Personally, I can see a striking resemblance between this pyramid and the design of Oblivion.


Oblivion as Freytag’s pyramid

It seems that rollercoaster designers may understand performance structure, and invest effort into designing an entire trajectory through discomfort rather than just an uncomfortable experience per se. 

With this in mind, there are some forms of discomfort that are more problematic. While the uncomfortable suspense of the slowly rising action on Oblivion is part of the entertainment, I find other rides to be uncomfortable because they make me nauseous, a sensation that is not quickly resolved during the ride and that lingers some time afterwards. I certainly wouldn’t encourage you to design experiences that induce nausea as a form of discomfort (and here I include those of you working with virtual reality head-mounted displays, which seem to be making something of a comeback right now).

Can this be ethical?

You—as a prospective designer of uncomfortable interactions—are also going to need to carefully consider ethics. Adopting a consequentialist approach as proposed by Jeremy Bentham, you need to consider whether the ends justify the means. Do the benefits of enlightenment, entertainment, or social bonding for the individuals involved justify any temporary and properly resolved discomfort? In short, with hindsight, would your participants be happy with what has occurred?

There are other ethical issues to negotiate too. What does informed consent mean in an experience that deliberately contains shocks and surprises? Where is the right to withdraw from a rollercoaster once it is underway? What of privacy in experiences that employ voyeurism? Again, interaction designers are going to need to learn from the world of theatre where performers have an established tradition of negotiating the boundaries of ethical behavior with their audiences, both during and within performances, though certainly not always without controversy.

A shocking experience

So I’m suggesting that interaction design needs to take a considered view of the idea of deliberately designing uncomfortable interactions. They are clearly part of the repertoire of cultural experiences, from performances and museum visits to games and rides. I suspect that there may be interesting resonances with other application domains too. I wonder, for example, whether we can apply any of these ideas to the design of health journeys? How could we deliberately redesign a visit to the dentist if we thought of it as a trajectory through discomfort?


A shocking game

I’ll close this post with a shock. Quite literally. Earlier on I mentioned causing pain as a form of discomfort. This would seem to be quite an extreme idea (and indeed it is, and should be treated with great caution). This said, I recently treated myself to an electric shock reaction-time game for less than twenty bucks. It gives a nasty jolt and therefore induces a high degree of suspense. I’m not sure that it’s that entertaining personally, but its glowering presence in the centre of my table certainly adds an extra frisson to student supervisions.


Posted in: on Mon, October 07, 2013 - 7:22:03

Steve Benford

Steve Benford is professor of collaborative computing at the University of Nottingham’s Mixed Reality Laboratory.
View All Steve Benford's Posts




Artifact invention and research


Authors: Jonathan Grudin
Posted: Mon, September 30, 2013 - 8:33:38

I asked several talented inventors whether there is more to research than invention. It was not a new question for them, but not an easy one either. “Get back to me after the CHI deadline,” said one, immersed in writing papers on his latest inventions.

 “The Edison-Einstein question,” said another.

Where I work, artifact invention and research is the air we breathe. But recall the fish that asked, “What is water?” We may not understand it. What differentiates invention and research?

A stream of invention is the most visible product of computer science. Thanks to Moore’s law and similar legislation, new technologies and new possibilities stream forth as the semiconductor wizards do their work. How does research fit in? We attend research conferences, apply for research funding, and work in research laboratories.

Edison-Einstein may not be germane. Any research conducted by Thomas Edison the inventor could be relevant, but for theoretical physicists, artifacts such as telescopes and accelerators are means, not ends. Some of them grumbled when Ernest Lawrence received a Nobel Prize for inventing the cyclotron. Computer science is different. It explicitly encompasses systems work alongside theory; HCI honors artifact creation, stretching back to Sutherland, Engelbart, and others.

My colleague continued, “There is descriptive research, which may not be associated with an invention. Social media research is descriptive, there is not much invention.”

He was ambivalent about social media research, at one point saying that it was OK to do it as long as one “also does some original research.” Consciously or unconsciously, he did not highly value work that does not include invention.

I continued, “Is invention by itself research, or is something else needed?”

Echoing Edison’s claim that genius is 1% inspiration, 99% perspiration, he replied, “Implementation. Sometimes the invention part only takes 5 minutes, the rest is implementation. And some kind of evaluation and dissemination is needed.”

Technically, you can patent an invention that you can’t implement—for example, an invention could rely on a separate patent for which you do not have a license. However, novel software artifacts are usually implementable. Other types of invention, including patentable process inventions such as freeze-drying and electroplating, are not our focus here. We will consider the roles of evaluation and dissemination after reviewing the concepts invention, science, engineering, design, and research.

Invention

I judged an elementary school Invention Fair in Texas a quarter century ago. My favorite was a teeter-totter. As kids went up and down, it pulled a conveyor belt beneath it on which soda cans were placed and crushed for recycling. Another was a miniature sink attached to and kept under the main sink, with flexible plastic tubes bringing down water so the inventor’s toddler brother could use it to wash his hands, his dishes, “or just play with the water.” A third was simple—scales set to trigger an alarm when the load shifted, a safe place for household guns lying around when children are about.

I loved it. Inventions are memorable. They demo well. And software companies in the reign of Moore’s law need to invent. Necessity being the mother of invention drives a benign cycle. It is rewarded. When you can point to your feature or product, your contribution is tangible. A manager, or a reviewer, does not have to dig to see that something was accomplished. Inventions are very welcome, some of them.

In an earlier post, I observed that creating useful novelty is not easy in the global village. Develop an idea and if one of the seven billion people in the village had the same thought and reached the same conclusion a week earlier, it could already be on YouTube. That said, ever smaller-faster-cheaper electronics will only realize its potential if people invent.

Science

I have also judged science fairs. Young scientists describe a journey, hypotheses and experiments, what turned out as expected and what did not, and their conclusions as to how things work. In contrast, inventors described their inventions and how they might be used. There is overlap, but a different feel: “I discovered this about the world.” versus “I made this for the world.”

Edison: “I never once made a discovery… the results I achieved were those of invention, pure and simple.”

At some level we prize science most highly. The science fair exhibits were good, but science fairs were much less memorable than the invention fair. How might this play out in academia and industry?

Engineering

Engineering as a field is particularly close to invention, not surprising given the potential that is unleashed by semiconductor advances. Hardware and software engineering processes are central to the implementation of our inventions.

Design

We generally consider design to be part of the refinement of an invention, recognizing that artifact design can be innovative.

Research

Research is the broadest term, spanning all disciplines. The meaning of scientific research is relatively well understood, thanks in part to those school science fairs. Research into engineering or design methods is included, but what role does research play in the practice of engineering, design, and invention? What role do engineering, design, or invention play in research? Our initial question was whether invention in and of itself is or isn’t research.

Research into the properties and uses of materials and algorithms is part of engineering. Design can benefit from basic or applied research that identifies situations in which artifacts might be used. Within HCI, the roles of designer, software developer, and user researcher are generally distinct, but the role of user research is to inform design and engineering.

Does evaluating and disseminating an invention make it research?

In some branches of computer science, inventions need not be evaluated to be published in research conferences. Within HCI, the value of a perfunctory “user study” evaluation of systems and design contributions has been debated. For complex systems, realistic short-term evaluations are not always feasible.

Few artifacts described in our research papers or demoed at our conferences are ever disseminated widely—the overwhelming majority of invented HCI artifacts undoubtedly make their final appearance in research conference papers. This is not necessarily a bad thing; it encourages the innovation that we all benefit from cumulatively.

Invention and research

Two extremes: (1) Random mutation and natural selection. Invention is unconstrained, not boxed in by assumptions about what might be useful. Inventions proliferate and the marketplace identifies which are useful. Unexpected successes validate the high failure rate, we hope. (2) Intelligent design. Thorough research into context and risk precedes invention. We get fewer inventions—and it can be a triumph when HCI research eliminates the invention of features or products that would add clutter and no value—but a higher yield of useful outcomes, we hope.

Which is best, or is it somewhere between? 

If the optimal path is somewhere between, we need to confront the fact that research to guide design is more difficult to sell than invention. Reviewers and managers in a field driven by novelty understand “I made this,” but not “The angry dog you do not hear barking?—I did that.”

Case study: An invention paper and a research paper. Post-PhD, I joined a talented inventive team in a software product development company. We took on several applications or features intended to support groups and solved the technical problems, but the products failed in the marketplace. Few inventions ever prove to be useful, but the educational benefits of failure, although perhaps not zero, are easily exaggerated. I headed back to research to learn why this software species was so challenging. My goal was not to spur innovation. I hoped to figure out what could increase the prospects for success, or identify contexts in which a given invention would more likely succeed. I was moving from (1), unconstrained invention, toward (2), at least some intelligent design.

In 1988, the team I had left released another group support product. Freestyle was lauded by PC Magazine, PC Computing, Computerworld, and Communications Week. Twenty years later, my colleague Bill Buxton marveled that no one had yet replicated Freestyle’s useful, easily understood features. The team published two papers. The invention paper [1] described Freestyle features. The research paper described an unforeseen deployment challenge. A major element in Freestyle’s commercial failure was a mismatch between the nature of significant communication loops, which span organizational units, and prioritizing and budgeting practices, which do not.

A quarter century later, products similar to Freestyle are arriving. The invention paper served its purpose. Despite the evolution of organizational infrastructures over a quarter century, the research paper remains instructive.

The allure of novelty

CHI and related conferences have always focused on new technologies that are or could soon be widely used. We have the opportunity to explore the boundary of invention and research in examining how technology meshes with behavior. That research includes obtaining feedback for iterative design, but can go deeper.

There is pressure to show that our field invents and innovates, perhaps to convince colleagues or funding agencies that we contribute. Artifacts connect. Douglas Engelbart’s obituaries began and often ended with “invented the mouse.” His deeper contributions were more difficult to grasp, such as his emphasis on intelligence augmentation in contrast to artificial intelligence.

Random mutation and natural selection could outperform intelligent design—encourage a million people to invent and we get 1000 useful inventions, benefiting us tremendously, and the 999,000 useless inventions will disappear unlamented. What do you think? It didn’t seem efficient to me in 1988, and it still doesn’t. A useless invention is more likely to be novel—if useful, one of the seven billion other people on the planet would more likely have come up with it already. An invention might be useful in a different context or with minor superficial changes—research could identify the opportunities. I have seen inventions die that had promising untested niches in which they might have thrived.

I have deep appreciation for my inventive colleagues. I have developed a healthy respect for the value of informing their work with research. The forces on scholarship and product development in our field militate against a holistic approach. With limited attention and access to unlimited information, with little time for deep analysis, we look to extract quick takeaways from what we read and see. This is easier with an invention paper or an invention. HCI research conferences may be morphing into invention conferences, with some room for academic papers that focus inwardly on “building theory” and extending the literature. Both signal retreat from HCI’s unique opportunity within computer science to provide direction through a broader perspective.

Endnote:

1. Levine, S.R. & Ehrlich, S.F. (1991). The Freestyle System: A design perspective. In A. Klinger (Ed.), Human-machine interactive systems. Plenum, pp. 3-21.

Thank you to inventive colleagues in jobs past and present, especially Patrick Baudisch, who has long thought deeply about and discussed these issues.



Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.




Learning from ePatient (scholar)s


Authors: Richard Anderson
Posted: Mon, September 23, 2013 - 7:30:36

Increasingly, patients are making invaluable contributions to the redesign of our broken healthcare system and the patient experience. Designers working in healthcare should be aware of and leverage these contributions.

Among the facilitators of this is Medicine X, a fabulous conference held annually in September at Stanford University. As stated by the conference organizers:

Medicine X aims to bring together the best and brightest doctors, patients, academics, and industry leaders to talk about emerging technologies and how best to improve healthcare.

We seek to empower patients and give them a louder voice in healthcare discussions.

...patients are a core set of stakeholders. Yet they typically haven't been meaningfully represented and engaged at academic medical conferences. We want to change that.

To fulfill this goal, Medicine X invites select ePatient scholar applicants to attend the conference and some ePatient scholars to participate in the conference organizational and planning process. What is an ePatient scholar?

ePatient scholar: 1. A specialist and expert who is highly educated in his or her own medical conditions and who uses information technologies (e.g., Internet tools, social networks, self-tracking tools) in managing their health, learning from and teaching others. 2. (Stanford Medicine X ePatient scholar) An educator and role model for other patients and health care stakeholders.

A valuable contribution provided by all of the ePatient scholars (and many, many other patients) is the story of their patient experience. Many of these stories are gripping, documenting much of what is wrong with healthcare and suggesting fixes. Some stories can be found in blogs; some stories can be found in online patient communities. During Medicine X, some stories are shared on stage. An example is that provided by Britt Johnson (pictured below) at last year's conference; the video of Britt's talk is essential viewing.


Britt Johnson

ePatient scholars' patient experiences form the basis of, and provide the motivation for, many of their additional contributions.

Two misdiagnoses and the urgent implant of a cardiac defibrillator made Hugo Campos realize how crucial it is for patients to engage in healthcare decision making with clinicians. This has prompted Hugo to tirelessly advocate for the rights of patients with pacemakers and implantable defibrillators to gain electronic access to the data collected by their electronic devices. Difficulty obtaining all sorts of medical records has led many to join Hugo in the call of "Give us our damn data." 

Unable to get a satisfactory response from doctors to her multi-year digestive problems, Katie McCurdy applied her design skills to the construction of a visual timeline of her symptoms and medical history. Katie's hope was that this timeline would communicate much more, and more effectively, than medical records or her usually rushed oral description in a doctor's office, and she has had some success with it. Wouldn't it be nice if such visual timelines could be created by or for other patients?

Important input to such a timeline might come from Symple, an app developed by ePatient scholar Natasha Gajewski for tracking symptoms. Natasha built this app because of the difficulty she had tracking the symptoms of her rare autoimmune disease between doctor's office visits. Symple is now used by tens of thousands of patients around the world.

Sean Ahrens (pictured below) is among the ePatient scholars who have made valuable contributions to what is increasingly referred to as peer-to-peer healthcare. Because of his and others' similar health needs, Sean designed and developed Crohnology.com, a social health network for patients with Crohn's, colitis, and other inflammatory bowel conditions. Crohnology.com lets patients share and learn what treatments work for others, track their health, and meet others near them. As stated in a recent MIT Technology Review article, "The site is at the vanguard of the growing 'e-patient' movement that is letting patients take control over their health decisions—and behavior—in ways that could fundamentally change the economics of health care."


Sean Ahrens

Many ePatient scholars help patients connect in other ways. Tweetchats are particularly popular. Three-time cancer survivor Alicia Staley's weekly tweetchat for the breast cancer community (#BCSM) is perhaps the best known of these. Alicia started this tweetchat to combat the extreme isolation she experienced. (To get a better sense of the importance of such connecting, see Katie McCurdy's blog post, "On Speaking Up.")

ePatient scholars share their insights in multiple ways. The contributions of the most well-known ePatient, Dave deBronkart—a.k.a. e-Patient Dave—have included a TEDx talk and an ebook entitled Let Patients Help.

Many share their insights via blogs. Katie McCurdy's blog, referenced above, is filled with gems. See her recent analysis of the use of the term "patient engagement" for another great post. Carolyn Thomas's great blog includes a related post. Sarah Kucharski, founder of FMD (Fibromuscular Dysplasia) Chat, is another excellent blog writer; her recent post on patient engagement provides important advice to designers of health apps.

Advice to designers is among the contributions I have made. As you might know from my interactions magazine blog posts alone, my writing and speaking on healthcare system and patient experience redesign have been focused, in part, on identifying what designers need to do in order to have maximum impact on that redesign. See, for example, "Are You Trying to Solve the Right Problem?," "What Designers Need to Know/Do to Help Transform Healthcare," "The Importance of the Social to Achieving the Personal," and the blog post you are now reading.

In short, there is much to learn from ePatient scholars, and you can learn more from and about most of those highlighted above as well as the other ePatient scholars attending Medicine X this year by accessing the 2013 ePatient ebook put together by the conference organizers. Use this ebook (and this blog post) as starting points to include the oft-missing voice in the redesign of healthcare and the patient experience: that of the patient. Better yet: Come meet us all at the conference September 27-29; we'll be happy to talk to you.

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.





Can we afford no affordances in our user interfaces?


Authors: Monica Granfield
Posted: Mon, September 16, 2013 - 1:40:22

Lately, flat design has been a topic of conversation in the design community. It seems there are two main camps: those who love it and those who don’t. Flat design is in no way new to the screen. Some websites have, for a while now, approached flat design as a mix of flat and raised presentation, reserving the raised effect for controls such as buttons. Some sites have been full-on flat in their presentation. It was not until Microsoft released the Metro UI for Windows 8 that the discussions really ramped up. Apple’s iOS 7 will now take on some flat attributes, as will parts of Apple’s OS X applications.

Looking back, Windows 2.0 was flat until Windows 3.0 added a 3D look to buttons and controls. The Apple OS was black and white and had flat graphics, until some slight beveling treatment came into play on buttons and controls as well. As richer and more lifelike graphics became possible, these environments expanded on the idea of creating an experience that would look and feel tangible by enhancing the depth of the graphics. These graphics were referred to as “cool” and “sexy.” Clearly, they had appeal. No more battleship-gray bevels!

It seems, however, that just as the appeal and fascination with the gray bevel wore off, so too has the appeal of glossy graphics. I have heard debates declaring that the previous 3D graphics are “too rich and feel heavy.” On the flip side comes the feedback on flat graphics: they “look juvenile and too playful.” I often hear that “modern design is cold and not homey enough.” I also hear that flat design is “lighter on the eye.” This feedback has me wondering whether these trends are more form than function, and if not, how they are impacting functionality.

Don Norman originally compared affordances of the physical world to those of the metaphorical world of computing. He later restated his take on what an affordance is, declaring computer affordances to be more “perceived affordances” than actual physical affordances. Without going too deep into the history of affordances, a perceived affordance is a quality of an object that suggests how it might be used. As Norman explains: “Does the user perceive that clicking on that object is a meaningful, useful action, with a known outcome?”

What do users of a flat interface perceive as clickable? Much has changed in the flat interface, and I am referring not only to Windows but to the computing community in general. Gray controls are no longer necessarily perceived as disabled. Are our perceptions changing based on familiarity with computers? Have users evolved enough to know what to click on based on familiarity? For example, is a beveled gripper control more recognizable than a flat control? Or is the flat presentation of the control perceived as a control based on experience only, on the pattern and location of the shapes, or on both?

Users know and have known that shape, text, and placement are all affordances that, as Norman suggests, are perceived as such based on these attributes and not on the depth of glossy 3D graphics. An arrow is an affordance even if it is flat, and the visual language of an arrow always implies direction. A rectangle with text is perceived as a button that, when clicked, issues the command specified in the text. Although visual patterns that are used consistently create a language that is discovered and learned by the user, what I am seeing is a wide variety of patterns offered depending on the product, platform, and environment. I would be curious to know the impact, if any, of such variety on users.

Another trend that is appearing alongside flat design is what I consider to be a type of “context-based” design. It used to be that controls that were not available were disabled rather than moved in and out of the UI, as the movement was deemed “too disruptive.” This rule of thumb is often still in use; however, increasingly the pattern is to design controls that come and go, rather than enable and disable, depending on what object or task I am focused on. Everything from commands to scroll bars comes and goes, depending on what has focus and the context of the task. This does seem like a way to simplify the visual presentation, fit more functionality into a single experience, and create a more directed and focused task. I have to say that although I often do a double take on a scroll bar that miraculously appears and grabs the corner of my eye, I much appreciate the simplicity of not having multiple scroll bars display at once. I do wonder about the discoverability of some pop-up controls and whether the simplicity in these cases pays off in the experience.

What I find interesting is that this seems like a very exploratory time for the interface. I, for one, find it exciting and liberating to explore the experience and have a bit of flexibility and freedom. If this freedom is due to an evolving and exploratory user base with new expectations, then simpler, more directed interaction, flat or otherwise, may be something that we can afford after all.



Monica Granfield

Monica Granfield is a user experience designer at Symbotic. The views expressed on this website are her own and do not necessarily reflect the views of Symbotic.




Patina of things


Authors: Tek-Jin Nam
Posted: Thu, August 29, 2013 - 8:18:03

I tend to use things for a long time. It feels as if my belongings take on aspects of myself, so it is difficult for me to throw my possessions away. When our building was renovated a few years ago, I kept all my furniture, even though it was not the best fit for the new interior. The furniture was special to me, since I had received it when I started living here, and we have lived together ever since.

I have many old electronic products like this. I still keep digital cameras and MP3 players that are more than 10 years old. My audio set, with an amplifier and speakers, is more than 20 years old. I bought the set with earnings from my first part-time job. The company that produced the amplifier no longer exists. When I moved to different countries for study, the audio set always followed me. I am happy that it still works well and gives me the joy of listening to music.

One of my most valuable possessions in my twenties was a video camera with an 8mm tape deck. I invested much more in it than in the audio set. At that time, video recording was rare. I recorded the lives of my family and friends, thinking I was making a sort of time capsule. The video camera no longer works, the LCD broke a long time ago, and 8mm videotapes are no longer available on the market. Video-recording media have since moved through 6mm digital tape and DVD to memory cards. Although the video camera is junk, I still keep it. When I look at it, I feel the memories ingrained in it. It may take some determination to abandon it.

Many designers wish to create things that are used and loved by many people for a long time. This is a challenging task. People should want to own those things and feel special when they use them. First of all, the item should be physically durable: it should work well without malfunctioning or breaking down. It should also provide emotional durability, as Jonathan Chapman stressed in his book Emotionally Durable Design. It is also necessary that the product be resistant to changes in trends; people should not get bored with it easily. It is particularly difficult to create IT products and services that meet these requirements, as they depend on rapid technological development and changing standards and are, alas, rendered obsolete, just like my 8mm video camera.

A designer who wants to create physically and emotionally durable products faces a dilemma. That designer wants people to use the products for a long time, but then faces the risk that creating an updated version of the product will no longer be necessary. Artificial obsolescence, better known as planned obsolescence, is a term I learned early in my design education. It is a marketing practice in which companies deliberately make old models appear out-of-date by introducing new ones with changes and additional features to attract customers. It seems bad for our environment and for end users, but companies often need this approach to make profits and to be sustainable in the commercial world; therefore, many product designers are engaged in the practice. Meanwhile, if designers create functionally and emotionally perfect products, they will, theoretically, have nothing more to do with those products, since end users have no need for new models. It is ironic that as more people love and use designed products, the designers who wish to create quality work lose their purpose. Fortunately for designers, the world behaves differently. People are capricious, and sociocultural trends keep changing with the development of technology.

Therefore, it is important for designers and HCI specialists to study how to create products and services that many people use for a long period of time. What would be the key characteristics of such IT products and services? I think one of the ways to create long-lived precious things is to add stories and meanings for owners. Perhaps the stories can be kept in a visible and invisible patina that stores memories of interactions between people and things.  

Products with patina often create special meaning for owners, just as my possessions did. Many products give such feelings naturally even without physical patina, as the associated memories invisibly remain somewhere. Things inherited from parents are treated as precious; we regard such objects as preserving our parents’ memories.

This attitude is not unrelated to the belief that people’s souls inhabit their possessions. Many cultures believe that, as people use things, their souls are transferred to the objects. This is particularly common in Asian cultures. In Japan, there is a perspective that no spirit exists in newly created, unused objects. In contrast, objects that have been used by many people are considered to have strong spiritual power, and that spiritual weight is thought to create value. Examples of such products are the Super Normal products introduced by Morrison and Fukasawa.

Traditionally, many Korean people believe that when they buy or rent a house, traces of the previous occupants influence their lives. If previous occupants of the house went on to a prestigious university or succeeded professionally, the house tends to be sold or rented out easily. On the other hand, people are hesitant to use objects or live in houses associated with bad luck or trauma. This is a common theme of horror movies from both Eastern and Western cultures, suggesting that the belief resonates widely with the human mind.

An example of an object that stores patina is the Long Living Chair, presented at CHI 2013 Interactivity. It is a rocking chair with a semi-hidden display showing the day it was produced and how many times it has been used. The information provides a moment of wonder and a sense of relatedness to the object when it is accessed. The movie The Red Violin, directed by François Girard, tells the story of a mysterious violin and its many owners. I thought that the violin could have been even more special if it had a means of keeping the traces that unfold its stories. Moonhwan Lee in my research lab is also investigating the potential of patina as a design strategy.
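The idea of a stored patina is easy to make concrete in code. The sketch below is my own minimal, hypothetical model, not the actual Long Living Chair implementation: an object that remembers when it was made and logs every use, so that its accumulated history can be surfaced on a small display.

```python
from datetime import datetime, timezone


class PatinaLog:
    """A minimal digital patina: an object's birth date plus a log of its uses."""

    def __init__(self, produced_on: datetime):
        self.produced_on = produced_on
        self.uses: list[datetime] = []

    def record_use(self) -> None:
        # Append a timestamp each time the object is used (e.g., each time the chair rocks).
        self.uses.append(datetime.now(timezone.utc))

    def summary(self) -> str:
        age_days = (datetime.now(timezone.utc) - self.produced_on).days
        return f"Made {age_days} days ago; used {len(self.uses)} times."


# Hypothetical example: an object produced in spring 2013, used a few times since.
chair = PatinaLog(produced_on=datetime(2013, 4, 29, tzinfo=timezone.utc))
chair.record_use()
chair.record_use()
print(chair.summary())
```

The interesting design questions sit on top of such a log: which traces to keep, how visibly to show them, and whether the accumulated history deepens or dilutes the owner's attachment.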

We often come across situations where objects remind us of their owners. I speculate that the souls of users become ingrained in objects, and that such objects take on the identity of the owner. In the Harry Potter stories, a Horcrux is an object in which a dark wizard or witch hides a fragment of his or her soul for the purpose of attaining immortality; it must be that wizard’s most precious object. The possessions that we care most about can be like everyday Horcruxes. If you could store your soul in objects, which objects would you choose? They would be the most meaningful and valuable things that designers could produce for everyday people.

In the analogue world, these objects are musical instruments from a master, ornaments from parents, or books and stationery from ancestors. Sherry Turkle introduced such things as evocative objects. In the digital world, people consider IT products such as laptops and smartphones meaningful objects in their lives. Moreover, people seem to think intangible information or content can store people’s souls. Recently, I saw a TV drama in which a girl keeps an old feature phone with great care because it holds the last voice message of her father, who died in an accident. She listens to the message for comfort when she has troubles. The feature phone breaks down at some point and can’t be fixed; we empathize with the sorrow of the girl who has lost its contents. The Korean movie Phone, directed by Byungki Ahn, is a thriller about a soul attached to a phone number: a woman takes over a phone number from its previous owner, a mysteriously murdered girl. As in these stories, we are in a time when virtual content or information, such as website addresses or QR codes, can become associated with our souls.

The emerging new forms of IT products and services bring changes in the ways we possess them and connect with them emotionally. There are many popular songs composed by musicians who unfortunately committed suicide. When I listen to these songs, I feel a special emotional connection to those musicians. Does the emotional connection with music depend on the type of media? Would a musician’s digital music or photo files create an emotional connection similar to that of a physical inheritance, such as LP records or personal objects? I speculate that form and interaction change the way we feel about the things we care about and the emotional connections we form with them.

I think that understanding how people come to own, use, and abandon the precious things they have loved and used for a long period of time can help us create a people-centric future. In order to create IT products and services that provide emotional experiences and added value in the digital world, we need more ideas. I wonder if the application of patina can be a candidate for making things with a soul.





Tek-Jin Nam

Tek-Jin Nam is an associate professor in the Industrial Design Department at KAIST.




The past, present, and future of women in STEM


Authors: Ashley Karr
Posted: Mon, August 26, 2013 - 1:02:21

Takeaway: An impactful way to make lasting, positive change for women in STEM is to constantly adjust in small, simple, everyday ways. In other words, change starts at home.

A very brief look at the past

  • The word scientist was first used in reference to a woman, Mary Fairfax Somerville, in connection with one of her published works, On the Connexion of the Physical Sciences.

  • Augusta Ada King, Countess of Lovelace, was the first computer programmer and the first person to realize computing’s potential—it would transcend mere calculation and change what it meant to be human.

  • Programming languages are based upon natural language rather than machine code or related languages due to the wisdom of a woman, United States Navy Rear Admiral Grace Hopper. Her nickname was Amazing Grace due to her rank and breadth of accomplishments.

I have my master’s degree in engineering and a career in human-computer interaction (HCI), but I did not know about these amazing women until a few months ago when I began volunteering for the Anita Borg Institute (ABI). ABI is a non-profit organization that seeks to increase the number of women in technical fields and to encourage the creation of technology by women. Looking back, I am disappointed that my education did not include at least an overview of important contributions made by technical and scientific women. I can’t help but wonder if adding information like this to our basic curriculum would help improve the odds of women completing their education and enjoying long careers in STEM. The fact that countless women who made important and influential contributions to STEM over the centuries have been overlooked by history is, to be quite honest, rather offensive. I know I am not the only person working to rectify this and give credit where credit is due.

A walk through the present

I recently reviewed scholarship applications for the Grace Hopper Celebration (GHC) of Women in Computing. GHC is an annual conference put on by ABI and the Association for Computing Machinery (ACM) that brings the career interests and research of women in computing to the forefront. When I read the applicants’ essays, I felt an immediate bond with these women as I observed that our stories as women in STEM held many parallels. Here is a rough sketch of these common themes:

  • We feel isolated because we are different from the men we work and study with and the women we socialize with. It is difficult to find the social support that we, as humans, require.

  • We feel like we have to prepare ourselves for unfair treatment. If we confront the inequities or try to fight back, we fear our careers will be damaged.

  • Our appearance, dress, and abilities are commented upon and questioned at regular intervals.

  • We feel like we are imposters and are constantly pushing ourselves to be better to prove that we belong in STEM.

  • We feel shocked, surprised, and relieved when we find other women who have had similar experiences, and the same shock, surprise, and relief when men understand and support us.

  • We want to do something to change this, but we don’t know what to do, so we reach out to other women in STEM through organizations like ABI and conferences like GHC.

  • Despite these challenges, we remain in STEM because we love our work, and we want to make things better for other women like us.

After reading the GHC scholarship applications, I was moved to act—to do something that might actually make a difference sooner rather than later. I started talking to friends, family, and colleagues about this and asked them what they would do to change things. What almost everyone I spoke to recommended was not a massive and noisy movement. They suggested continuously speaking and acting in encouraging, supportive, positive ways toward women in STEM. Their suggestions reminded me of something I have learned from working in STEM: the small, mundane, everyday things that we interact with frequently and/or for long durations are often overlooked and taken for granted; cumulatively, however, these little things have huge impacts on our lives.

The following is an overview of some of these small, subtle, daily actions we can undertake: 

  • Engage in the Golden Rule. Treat others the way you would like to be treated. Everyone wants to be treated with respect, supported, encouraged, and valued for their efforts and accomplishments. If we are in a situation that lacks any of these elements, we can be their harbingers.

  • Self-reflect. We should be very honest with ourselves and notice our own biases in thought and in action that may suggest a woman might be less competent, less mature, or less capable than her male counterparts.

  • Listen. When someone brings up an issue regarding the challenges that women in STEM face, we should listen to what they have to say. We should listen before, during, and after passing judgment. Then, we should listen more.

  • Talk. We should discuss our biases, good or bad, with people who will listen and suspend judgment.

  • Self-reflect again. We should notice how our biases play out in our everyday lives. For example, in your family, when someone needs help with their computer, is a male relative or female relative asked for their expertise? Are the males the ones called upon to handle anything technical? We all can change this starting now, and we should.

  • Act. It’s amazing what happens when words turn into action.

A glimpse into the future

As a result of taking my own advice, I started making a stand with my family. When a request for technical support moved through the ranks, I would volunteer, pointing out that they had an engineer in their midst and that we should put my degree to use. At first, I met with a lot of resistance. My relatives were not used to relying upon a woman for technical support, but they conceded, and now I have more requests than I can manage to set up projectors for post-holiday slide shows and troubleshoot misbehaving computers.

The most astonishing result of my attempt to support women in STEM starting at home came one Friday night while I was working overtime on a project. I had settled myself at the kitchen table after dinner and was working away on a prototype for a website that had to be completed within the next few days. My five-year-old niece, Emily, wandered over and asked what I was doing. I showed her my prototype and explained to her a little bit about my career. I thought I had bored her, because she very quickly wandered out of the kitchen, and I went back to work. About an hour later, she returned to the kitchen table with her very own handmade paper prototype of a laptop computer. She set it down next to mine and started typing away on her keyboard. She told me, “Auntie Ashley, this is my computer.” I asked her what she was doing on her computer. She said, “I’m making dot com’s, and I have to type really hard because I have to think really hard.”

The future of women in STEM looks very bright.

Despite the strides we have taken in recent history to support and encourage women in STEM, we must remember that women tend to face far more challenging familial, cultural, regional, national, and global barriers than men when it comes to pursuing any type of education and career. Imagine what would happen if we could move past these barriers and harness the potential and intelligence of all people, regardless of gender, socioeconomic status, cultural background, genotype, and phenotype. The world would very quickly become a much better place. I hope that at least some of the people reading this article will feel compelled to make their own small, subtle, daily changes for the better. I am thankful for the thousands of other women and men in STEM actively involved in ABI, GHC, and other organizations and events that are part of this change engine. I am honored to be part of it, and I hope you are, as well. Please feel free to comment below or send me an email to tell me your stories about challenges and triumphs you or those you know have faced as women in STEM. I look forward to hearing from you.




Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.




@Ashley (2013 08 29)

http://adainitiative.org/what-we-do/impostor-syndrome-training/

This is interesting. Apparently, the “impostor syndrome” is more of an issue than I realized.

@Jen (2013 09 08)

I think the small daily actions here are great. It’s particularly nice to read your examples of what can happen if you change your own behaviour, rather than waiting for the world around you to change. I think those personal, positive stories are important, so I’d like to share one.

I was recently invited onto a careers panel session at an event for end-stage PhD candidates. I’ve a PhD in computer science, had a short but successful academic career and now run a training consultancy business. The panel was entitled “What do employers look for?” and having previously been both a postdoctoral employee and employer I had much to offer.

There were 5 people on the panel and I was the only woman. The first question was asked and I had some thoughts to share. But I waited for one of the others to answer first. Then, probably for the first time in my life, I noticed myself doing this - stepping back to give the guys a chance to speak first. It was like a bolt of lightning striking. For the next question I had some experience to share, so I jumped straight in. My response sparked a meaningful discussion and this gave me confidence. For the rest of the session if I had something to say I didn’t wait for the others to have their say first, I just spoke up. No one was offended, no one reined me in and on several occasions I heard the other panel members saying “I agree with Jen…” or “Jen’s absolutely right…” and then giving their answers. At the end of the session I was happy that I’d made several important contributions and this was confirmed by feedback from the participants and my colleagues on the panel.

Now I can’t remember ever being explicitly told to wait and let boys speak first, but perhaps I’ve picked up cues as I was growing up that this is how I should behave? Anyhow, now I’ve called my subconscious out on its sabotaging behaviour I’m looking forward to the next opportunity to speak over it.

@Ashley (2014 01 10)

Thanks, Jen!

I am very impressed. :)

Happy 2014.

@Annaka Johnson (2014 03 09)

Were you able to go to her room and see if she had any design process edits that didn’t make the final version she showed up with? Were there other prototypes? Could you see her thought process through those pieces?


Canyonlands


Authors: Jonathan Grudin
Posted: Fri, August 23, 2013 - 8:37:46

The Colorado Plateau. 130,000 square miles (337,000 square kilometers) of high desert and scattered forests in Utah, Arizona, New Mexico, and Colorado. Home to 10 National Parks, including the Grand Canyon, and 17 National Monuments. Its features include the Colorado and other rivers, towering cliffs and deep canyons, arches, domes, fins, goblins, hoodoos, natural bridges, reefs, river rapids, and slot canyons.

The visible structures formed over hundreds of millions of years. Inland seas periodically inundated the region, leaving thick layers of sediment and minerals when they retreated. After the last sea withdrew 300 million years ago, periodic accumulations of fresh water continued to put down layers. Sixty to seventy million years ago came the great uplift, pushing the entire region up thousands of feet as a single piece—a key to its unique character—as the Rocky Mountains rose to the east. Then followed tens of millions of years of erosion, especially rapid when ice ages brought precipitation. The layers of sandstone, shale, limestone, and gypsum, tinted red by iron, purple from magnesium, or streaked blue with copper, and including layers of fossils, entire petrified forests, and remarkably thick layers of salt and potassium laid down by evaporating seas, eroded at different rates, creating the spectacular formations listed above.

No electronics

For four days this summer, 10 children, 20 adults, and five guides rafted down and camped along the Colorado River in Utah. Recent rain had turned the warm water brown with a fine silt that penetrated deep into our hair and clothes.

A few dozen strenuous rapids lasted a few minutes apiece. The rest of the time we drifted or paddled through spectacular Canyonlands National Park, spotting the occasional mountain sheep, eagle, and heron. The guides challenged us to spot centuries-old Anasazi granaries on the cliffs. Late afternoons we found a beach and assembled tents and cots, ate (it must be reported that the Western River Expeditions guides did the cooking, and it was exceptional), washed up, hiked, and played games.

No electronics.

The electronic ashram

In 1998, Tina Kelley wrote an article for the New York Times, “Only Disconnect (For a While, Anyway).” It featured Colby professor Batya Friedman, who spent summers off the Internet, using a telephone twice a month when in town to shop. I was interviewed:

Jonathan Grudin, a professor of information and computer science at the University of California at Irvine who has worked with Professor Friedman, reminisces about a time when he was completely removed from modern communication, including telephones, during a trip to Africa in 1989. He had hoped to feel that free again on a recent trip to Madagascar, but discovered that he was too late.

“There’s no place you can go on the planet now where you couldn’t be in contact if you had a device worth a couple hundred dollars, so the days you could spend with a completely clear conscience and get out of contact with people, those days are pretty much gone,” he said. “There should be socially sanctioned electronic ashrams, where you could check in for a few days.”

In 1998 I missed the experience of disappearing into Burundi only nine years earlier. Now, 15 years after that, I didn’t miss it because I had forgotten what it was like. Batya, now at the University of Washington, emails me across Lake Washington that she still manages periods of isolation, which today require more careful preparation. In contrast, I was tethered to email and news—until Canyonlands.

You may be less steadily connected, but for me, the trip was a reminder of what life was once like. We had to talk with the people around us or not talk at all. We described our jobs, careers, and lives. On the raft, free of a need or ability to focus on current concerns, many took the opportunity to reflect.

A shifting sense of time

We had previously visited other national parks in Utah: exquisite Bryce Canyon and imposing Zion, then drove through Capitol Reef on wonderful state highways to reach Moab, home of the Arches. Visitor center videos, posted park signs, and guidebooks provided a stream of overlapping explanations of the geology of the region, reinforced by the guides on the rafts.

My sense of time changed unexpectedly through repeated exposure to the historical accounts interleaved with hours of gazing at the uplifted horizontal sedimentary layers above the canyon as we floated down the Colorado, with massive piles of eroded boulders or sheer smooth rock faces shooting up hundreds of feet at the water’s edge. I found myself immersed in the epochs, thinking of the world in terms of tens of millions of years, not the usual weeks, months, or even decades.

Studying large pictographs and petroglyphs, painted and carved 1,000 years ago on a vertical face at the base of a towering cliff, I suddenly realized that in the midst of this geological violence, these ancient creations are not yet eroded at all—1,000 years is like yesterday in the life of the rock formations. A different perspective formed on humanity and the challenges we encounter and create.

One afternoon, as we floated downstream in the shadow of endless expanses of striated cliffs, I mused to two of our companions, “What do you suppose people rafting in this area 10 million years from now will see?” They considered this silently for a few seconds. Based on their replies, I figured we would exchange email addresses and stay in touch after the expedition, and so we have. To a social network that felt pretty much maxed out, I have added PJ and Carrie, and Mike. That alone was worth the journey.

The kids on the trip bonded and formed a plan: When back, everyone who was staying on in Moab convened at 8:30 p.m. in a city center t-shirt shop. Three times in the next hour, a different adult excused himself from a conversation to take a phone call. “You need the report when?”

The 100-million-year timeline faded. My next week’s calendar came into focus. We were home.

Gayna Williams researched and planned the expedition, and has previously attempted to provide electronic ashram experiences that the author managed to circumvent... Eleanor and Isobel, who do at times see their father detached from a computer, paddled, swam, hiked, and competed enthusiastically at kubb on the beach.



Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.




@Lone K. Hansen (2013 08 28)

Sounds like you had a great vacation in stunning scenery and in great company. However, I’m not sure you experienced "how life once was," even if it perhaps felt like it then / feels like it now. Rather, it seems like you experienced the time offline in that particular way because it was marked by being different from what you normally experience, and by not being like that forever. :)
I am reminded of this criticism of Sherry Turkle’s “Alone together” idea, this essay arguing that Turkle is fundamentally wrong when she so clearly separates that which is connected/digital from that which is not: http://thesocietypages.org/cyborgology/2012/04/23/sherry-turkles-chronic-digital-dualism-problem/
I’m not calling you a digital dualist, however, as you also say that one of the best parts of the trip is that you now have three more people in your online connections.
But even with an Internet connection you would perhaps have gotten to know them anyway. :)


Interacting under canvas


Authors: Steve Benford
Posted: Mon, August 19, 2013 - 9:46:45

I’m just back from a short camping trip and reflecting on how exciting it is to live under canvas. There is a visceral thrill to being in a tent as the thin fabric leaks noise, light, heat, and shadows. Lying awake in the dark, you become aware of nearby voices, the sound of rain pattering on the roof, the tent shaking in the breeze. It’s the perfect time to imagine a world of stories that might be happening outside.


Summer camping (there’s rain on the way)

The Storytent

This fascination with tents has inspired several projects at the Mixed Reality Lab. The Storytent aimed to create an intimate and exciting interactive storytelling environment for children. We folded a projection screen into the shape of an A-frame tent and projected synchronized graphics onto both sides, adding sound to create a mini immersive environment. We experimented with various ways of interacting with the tent: RFID readers placed at its ends recognized the comings and goings of (tagged) occupants and their possessions; a touch-screen map transported the tent to new locations in the virtual world; and shining flashlights onto the tent manipulated virtual objects, as shown in this short video. This final technique used bespoke computer vision software for identifying and tracking flashlight beams.


Using flashlights to interact in the Storytent
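To give a rough sense of how this kind of flashlight tracking can be done (this is my own minimal sketch, not the Mixed Reality Lab’s bespoke software), a camera watching the tent surface can threshold each frame for very bright pixels and take the centroid of the largest bright blob as the beam position:

```python
import cv2
import numpy as np


def find_beam(frame: np.ndarray, threshold: int = 220):
    """Return the (x, y) centroid of the largest very bright blob, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (11, 11), 0)               # smooth out sensor noise
    _, mask = cv2.threshold(blurred, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)                # assume the beam is the biggest bright patch
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])


# Hypothetical usage: a webcam pointed at the tent fabric.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    beam = find_beam(frame)
    if beam is not None:
        print("beam at", beam)      # map this position onto a virtual object in the story world
    cv2.imshow("tent camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The real system had to cope with projector light, multiple torches, and children waving beams quickly, so treat this only as an illustration of the basic idea.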

We deployed the Storytent at Nottingham Castle, the ancient site of many thrilling and gruesome stories: Richard the Lionheart arrived there to confront King John after crusading (but he spoke French so the locals wouldn’t let him in); Richard III rode out from there to his death at Bosworth Field (they’ve recently dug him up again in nearby Leicester); Charles I raised his standard there to declare the English Civil War and summon an army (but no-one turned up so he went to Oxford instead); the locals burned down the Castle during the Corn Law riots (you may be getting a sense of Nottingham’s attitude to authority by now); and of course, Robin Hood got up to all sorts of adventures there (and yes he absolutely did exist).

Historical digressions aside, what better place to experience these stories? Children did so by exploring the castle grounds and filling in paper clues (which were tagged with RFID) before taking them into the Storytent and using them to trigger the replay of the tales.

ExoBuilding

If the Storytent strives for excitement, then our second interactive tent aims for the opposite: meditative relaxation. ExoBuilding was created by Holger Schnädelbach, Alex Irune, Dave Kirk, Kevin Glover, and Patrick Brundell as an early prototype of a built structure that reacts physically to its occupants’ activities—an idea that they call adaptive architecture.

ExoBuilding takes the form of a tent that flexes and moves in direct response to an occupant’s respiration while also sonifying their heartbeat. Early experiments revealed that this form of biofeedback triggers changes in participants’ physiology: lower respiration rates, higher respiration amplitudes, respiration-to-heart-rate coherence, and lower-frequency heart rate variability, with some people reporting that they felt more relaxed. The team suggests that there is potential for use as a biofeedback device.


ExoBuilding flexes in response to breathing
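At its core, the control loop can be pictured very simply. The sketch below is my own illustration, under the assumption of a respiration sensor that reports an amplitude normalized between 0 and 1 and an actuator that raises and lowers the fabric; it is not ExoBuilding’s actual implementation. A little exponential smoothing keeps the tent from jittering:

```python
import random  # stands in for a real respiration sensor in this sketch
import time


def read_respiration() -> float:
    """Placeholder: return the current respiration amplitude, normalized to 0..1."""
    return random.random()


def set_actuator(position: float) -> None:
    """Placeholder: drive the servo or winch that raises and lowers the tent fabric."""
    print(f"actuator -> {position:.2f}")


def run(alpha: float = 0.2, hz: float = 10.0) -> None:
    """Map breathing to tent movement, low-pass filtered so the structure moves smoothly."""
    smoothed = 0.5
    while True:
        sample = read_respiration()
        smoothed = alpha * sample + (1 - alpha) * smoothed   # exponential smoothing
        set_actuator(smoothed)                               # the tent rises and falls with the breath
        time.sleep(1.0 / hz)


if __name__ == "__main__":
    run()
```

The design interest lies less in the loop itself than in its gain and lag: how directly the structure should mirror the body before the mirroring stops feeling calming.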

Tents as interfaces

These tents have some distinctive and unusual properties when considered as computer interfaces. 

They surround and enclose their occupants to form an immersive display, similar in principle to a virtual reality CAVE, but very different in practice due to their personal scale. Of course, it can be tricky to read high-resolution graphics on a screen that is so close to your eyes, or to move, gesture, and look around in order to interact when sitting in a cramped tent. On the other hand, the snugness of the tent lends a sense of intimacy to storytelling, while isolation may help with relaxation. Also, not being able to turn or look around quickly can bring suspense to storytelling, emphasizing the feeling that something may be approaching from behind.

Unlike larger immersive displays, tents have both insides and outsides. This allows for a separation between those who are immersed and interacting inside, for example a child engaging with a story, and a wider audience that remains outside the tent but can still see what is happening, for example parents or perhaps other children who are waiting for a turn. A tent is therefore an example of a spectator interface, an interface that is deliberately designed to reveal some, although not all, aspects of interaction to an audience.

As with the fabric of a regular tent, the screens of our interactive tents are porous membranes, leaking sound and light in both directions. This opens up opportunities for playful interactions between those inside and those outside, such as whispering, casting shadows, and even shaking the tent (great fun for parents!).

Finally, as ExoBuilding shows, they can flex and move, changing shape and form under computer control, introducing a sense of physicality and even synchronizing with an occupant’s physiological responses.

Interfaces as tents?

In turn, these interactive tents illustrate a wider interaction design principle. Might it be useful to think of all interfaces as “tents,” by which I mean permeable boundaries that connect different worlds or spaces? My colleague Boriana Koleva first explored this idea during her PhD research and referred to such interfaces as “mixed reality boundaries.”

So the Storytent is a permeable boundary that connects three spaces: the physical spaces of inside and outside and a virtual world. It separates these spaces, providing a degree of isolation and intimacy, but it also allows some information and interactions to flow between them, and even lets participants traverse the boundary, for example by entering the tent from the outside or entering the virtual world.

Is it useful to conceive of other interfaces as being tent-like permeable boundaries? Might games consoles be designed around the idea that players peer into or enter a virtual world while others look back out at them (especially now they routinely come equipped with cameras such as the Kinect)? Should communication tools such as Skype be reimagined as boundaries between physical spaces that might afford different kinds of isolation and permeability? Might even the familiar web browser be reconceived as a permeable boundary to the Web, one through which we look while others look back out at us, tracking our movements and actions? 

There may be something to be said for seeing these interfaces as permeable membranes, encouraging us to reflect on how information, interaction, and presence flow in both directions and how we might want to design them to enable or restrict such flows, to address audiences as well as users, or simply to lend them some tent-like excitement. Perhaps we are all interacting under canvas after all?



Steve Benford

Steve Benford is professor of collaborative computing at the University of Nottingham’s Mixed Reality Laboratory.




Positivism in design


Authors: Ashley Karr
Posted: Mon, August 12, 2013 - 8:43:00

Takeaway: Applying basic tenets of Positive Psychology during design evaluations can help teams cooperate and be more productive. This means that evaluators must discuss the positive aspects of a design at least as much as the negative.

A few months ago, I was speaking to a group about human factors (HF), human-computer interaction (HCI), and user experience (UX). To make my presentation more interactive, I asked the audience to evaluate an innovative new design. They responded immediately with a number of evaluations, but I soon noticed that all of their comments were negative. Their attention was on what was wrong, so I tried to shift their focus. I asked them to now evaluate positive aspects of the design. No one said a word. I was shocked. 

This experience stayed with me. After the presentation, I kept wondering who had kidnapped the word evaluate and replaced it with criticize. I also wondered why it was so difficult for these educated, intelligent people to say something positive or constructive. Then I came across a report by a pair of HF specialists from the design consulting firm IDEO. The report was titled “Are You Positive?” The authors, Aaron Sklar and David Gilmore, began the report by stating that many designers have operated under a disease model, essentially diagnosing the problems with a given design and making changes to mitigate risk and avoid damage. They then encouraged readers, especially those in the engineering and design fields, to take a positive approach to building and evaluating. Just imagine how well a team of hardworking engineers, designers, developers, and stakeholders would do if they made the design process itself human centered. Goal one is to develop a pleasurable experience for the user, and a good way to get there is to create a pleasurable team environment for the people creating the experience. Revolutionary indeed.

Sklar and Gilmore were inspired by a movement within psychology called Positivism. Positivism states that to reach a good psychological state, people should focus on what is good in their lives and build on it, rather than focusing on and trying to fix or control what is bad. This is not to deny that bad things happen, in life or in design; to be a responsible adult and professional, risk does have to be mitigated. It is simply to state that our default perspective ought to be a positive one. Here are three examples that Sklar and Gilmore gave on how to apply positivism to design right now:

  • When asked to evaluate a design, try to create a longer list of positive aspects than of negative attributes.
  • Create evaluation metrics for the positive aspects of a design and share them with colleagues.
  • Spend more resources (time, energy, money) on building new iterations than on breaking down old iterations.

I felt relief finding and reading Sklar and Gilmore’s report, knowing that a few more people in the world had gone through experiences similar to mine. I am happy to help them spread the word. Whether you are an engineer, a manager, a developer, a designer, a speaker, or just a person looking to live a better life, we can all learn from Sklar, Gilmore, and Positivism. I challenge you, the reader, and me, the writer, to test this perspective for at least 24 hours—more if we can. Then notice what happens. What many have discovered is that creative ideas and solutions start flowing. I am certain we all can use more of this creative flow. So test it out and get back to me on what you find. I would love to hear from you. You can send me an email or post your comment below.



Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.




@Adrian (2013 08 15)

Ashley, this is not only brilliantly written but it is quite frankly a tranquil breath of fresh air.

I deeply encourage the team at Interactions to invite Ashley to write more here.

@Simon (2014 01 13)

There is always something positive in an idea or a design. Sometimes something might be way off brief but even then it will still have some sort of merit. I have found it’s dangerous to stomp on creativity. Creatives and Designers will pull their head in & learn what an organization likes and stay within those confines to avoid being stomped on.


What is your UX style?


Authors: Monica Granfield
Posted: Fri, August 09, 2013 - 5:13:29

Having come out of traditional design training and then migrated into a field that is more conceptually than visually oriented, I have experienced a bit of an identity crisis for more than two decades. I often wonder how to explain my profession and whether it is possible to have a style within it. When I try to explain what it is I do for a living, some typical responses to my description of my job are: "Oh, you're a graphic designer, so you determine what the software looks like?" "Ergonomics, what is that?" "Information design, is that like statistics?" "Industrial design, you do engineering?" "Features, are you a PM?" "Inventor, what?" A UX designer is a bit of all of these. With such a combination of skill sets and areas of focus, can a UX designer have a style? Is there even such a thing as a UX style? Is UX design tangible enough to assign a style to it? In trying to define a UX style tangibly, what might it be comparable to: interior design, industrial design, or architecture?

Maybe UX style is akin to architecture. Often we are confined to designing within an operating system and to exploring our creativity and style within an already established environment. Architects are challenged by issues such as zoning, safety rules, materials, and creating a look and feel for a specific time period. Yet, architects are still able to explore their creativity and develop a style that spans from functional to visual.

We could explore the idea that UX design style is similar to industrial design. Jonathan Ive, the master of pushing the envelope on the use of materials to create visually beautiful and functionally innovative hardware, has established his style as clean, innovative, creative, elegant, simple, and sophisticated.

Or maybe we follow the patterns of the interior decorator and the interior designer. An interior decorator is not licensed or certified in materials handling and is focused on making a space look and feel inviting via the aesthetics. An interior designer is licensed and certified in materials handling and is trained to create functional environments via construction practices and building codes, as well as aesthetics. Interior designers have dependencies on architects, developers, and visual designers, much like a UX designer. They also have a style that they typically work within. This style is usually related to a time period, much like architecture. UIs are not typically tied to a time period, but instead to a brand or an OS.

As an example for exploring UX styles we might ask: How does the style of, say, the OS X Finder compare to that of Windows Explorer? The visual styles are inherited from the OS. Immediately the design focus moves to the functional. How do you navigate through the folders? How easy is it to find your way around while keeping context? What functions make sense to launch from here, and how discoverable are these features? Maybe UX design that is specifically for an OS, enterprise environments included, is more akin to commercial interior design or commercial architecture, while Web or app design is more free-form and therefore more akin to residential interior design or residential architecture. When designing a commercial building the scope of the navigation is much larger. Have the elevators, restrooms, and exits been made discoverable, while still integrating the brand and the overall intent for the feel of the environment? When designing a small website or app the navigation may be more limited, and the focus may be on how to keep the user engaged in the environment to drive results. How do you apply a brand and make the user feel at home? With the platform being the house, how creative can I be within this platform and within this budget? Home or office? Same discipline, slightly different focus, based on scope and use.

So how can we quantify our style as UX designers? Are we akin to any of these disciplines at all, or can we define our own style in our own way? Are you a light, open, and airy designer? How intelligent is your experience? How much do you assist the user where needed, while still allowing the user to remain in control? How logical and predictable is your workflow? I am not sure there are lines that draw a direct correlation to any other design discipline, and I can't quite describe a UX style. We are always talking about simplifying a UI. Would this describe our style as minimalist?

My visual and interior design style leans toward modern and minimalist, with clean lines and bold accents. This doesn't always translate to my UI designs. My visual UI designs are mostly driven by the OS or the company brand. On occasion there is more license for design exploration. I do try to convey an emotion and a style in my UI designs; however, much like an interior designer's work, they are also shaped by the audience, the function (form does follow function), and the usability of the end product. So how is it that function and usability have a style? And they do; I just need to figure out a way to explain and define it.

I suppose my UX style could translate into clarity, innovation, thoughtfulness, and simplicity of the environment for the user. What is your UX style? Are you able to define it? 



Monica Granfield

Monica Granfield is a user experience designer at Symbotic. The views expressed on this website are her own and do not necessarily reflect the views of Symbotic.




Bias


Authors: Jonathan Grudin
Posted: Tue, August 06, 2013 - 8:35:14

The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects; in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate. And therefore it was a good answer that was made by one who when they showed him hanging in a temple a picture of those who had paid their vows as having escaped shipwreck, and would have him say whether he did not now acknowledge the power of the gods, “Aye,” asked he again, “but where are they painted that were drowned, after their vows?” And such is the way of all superstition, whether in astrology, dreams, omens, divine judgments, or the like; wherein men, having a delight in such vanities, mark the events where they are fulfilled, but where they fail, though this happen much oftener, neglect and pass them by. But with far more subtlety does this mischief insinuate itself into philosophy and the sciences; in which the first conclusion colours and brings into conformity with itself all that come after, though far sounder and better... —Francis Bacon, First Book of Aphorisms, 1620

Confirmation bias is built into us. Ask me to guess what a blurry image is, then bring it slowly into focus. When it has become clear enough to be recognizable by someone seeing it this way for the first time, I will still not recognize it. My initial hypothesis blinds me.

Confirmation bias and its underlying mechanisms helped us survive. A rough pattern of colors that correlated with past sightings of saber-tooth tigers was a good reason to run. Sticking around to obtain statistically reliable proof did not aid survival. Eating something and becoming ill was a good enough reason to avoid it despite the occasional false conclusions, such as "tomatoes are poisonous." And belief in omens and divine judgments probably helped people endure lives that Bacon’s contemporary Thomas Hobbes described as “nasty, brutish and short.”

To get through life efficiently, we infer causality from correlational data without working out all possible underlying factors. “This intersection was slow twice, I should avoid it.” “Wherever wolves are thick so are wildflowers, so wolves must like flowers.” “Two Freedonians let me down, Freedonians are unreliable.” Confirmation bias underlies stereotyping: Having decided they are unreliable, a reliable Freedonian is an exception, another unreliable Freedonian is a confirmation.

Bacon realized that deep understanding requires a higher bar. He is credited with inventing the scientific method to attack confirmation bias. Unfortunately, experimental methods help but do not overcome the power of confirmation bias, which remains the primary impediment to advancing scientific understanding. It affects all our research: experimental, systems work, design, quantitative analysis, and qualitative approaches.

Confirmation bias arising with experimental methods

Science is not well served by random experimentation. Clear hypotheses can help. For example, Bacon hypothesized that freezing meat could preserve it. But hypotheses have unintended consequences. He contracted pneumonia while doing the experiment and died. The less severe but more common problem is that hypotheses invite a bias to confirm and thereby miss the true account: the initially blurry image that isn’t recognized after we hazard a wrong guess as to its identity.

Approach hypotheses cautiously. In overt and subtle ways, researchers shore them up and ignore disconfirming evidence. We rationalize excluding inconvenient “outlier” data or we collect data until a statistically reliable effect is found. We don’t write up experiments that fail to find an effect, perhaps for good reason: It is all but impossible to publish a negative result. An outcome that by statistical fluke appears to confirm a hypothesis is published, whereas robust findings disconfirming it, though this happen much oftener, we neglect and pass by. It’s a severe problem. Simmons et al.’s Psychological Science paper “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant” demonstrates that by selective disclosure of methods, it is easy without fraud “to publish ‘statistically significant’ evidence consistent with any hypothesis.” Bakker and Wicherts found many statistical errors in a large sample of journal articles, with almost all errors favoring the experimenters’ hypotheses.
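A toy simulation makes the arithmetic vivid. The sketch below (an illustration only, not taken from Simmons et al.) draws both groups from the same distribution, so any "significant" result is a false positive, and then keeps adding participants and re-testing until one appears or a cap is reached. Under these assumptions the observed false-positive rate typically ends up well above the nominal 5 percent.

```python
# Illustrative simulation of optional stopping (not from Simmons et al.):
# both groups come from the SAME distribution, so every "significant"
# result is a false positive. Re-testing after each batch of participants
# pushes the false-positive rate well above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def optional_stopping_trial(start_n=10, step=5, max_n=50, alpha=0.05):
    a = list(rng.normal(size=start_n))
    b = list(rng.normal(size=start_n))
    while True:
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            return True                      # "significant" -- a false positive here
        if len(a) >= max_n:
            return False
        a.extend(rng.normal(size=step))      # collect a few more participants...
        b.extend(rng.normal(size=step))      # ...and test again

runs = 5000
false_positives = sum(optional_stopping_trial() for _ in range(runs))
print(f"False-positive rate with optional stopping: {false_positives / runs:.1%}")
```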

Behavioral studies face worse challenges. “Demand characteristics” elbow in: Everyone colludes, consciously or unconsciously, when they know what the researchers believe. In one experiment, lab assistants were told that a set of rats had been bred for intelligence and found they learned mazes faster than normal rats—but all were actually normal rats. It was unclear why. Perhaps the lab assistants handled the “genius rats” more gently.

In HCI studies, friendly human participants often discern our wish and help us out. Double-blind studies where researchers step back and experiments are run by assistants ignorant of the hypotheses or the conditions can sometimes counteract this. But they require more work.

Sometimes there is no need for an hypothesis, as when a researcher with no preference compares two design alternatives. Alternatively, multiple hypotheses can be tested in a study; one or two can be disconfirmed, yielding an aura of scrupulousness. However, I have never seen a study in which all of the hypotheses were disconfirmed. Researchers want to appear smart; key hypotheses tend to be confirmed.

My colleague Steve Poltrock notes that “marginally significant” findings or “trends” are often used to support hypotheses when accepted statistical measures fail to do so. People rarely report trends that counter their hypotheses, citing such differences as “not significant.” It’s human nature.

Design and systems studies

Researchers in our field typically test their own designs and prototype systems. Study participants know that designers hope their designs are liked. They know that system builders hope the systems are liked. Papers invariably report that the designs and systems were judged to be promising. Yet all discussion disappears when subsequent experience proves disappointing. Our literature is full of promising prototypes that disappeared without explanation. This is a disservice to science and engineering. My own studies of promising prototypes are no exception. I once tried to publish a post-mortem of a failure but could not get it accepted, and with only the early positive reports in the literature, for years we received requests for the prototype code. The difficulty of publishing negative results is well known at the National Science Foundation and elsewhere, but I know of no efforts to address it.

Quantitative analysis and ‘predictive analytics’

Quantitative studies are no guard against these problems. In fact, they often exhibit a seductive form of confirmation bias: inference of a causal relationship from correlational data, a major problem in conference and journal submissions I have reviewed over the years. The researchers hypothesize a causal relationship, the correlational data fit, and the researchers take it as proven. Equally or more plausible causal models are ignored, even when plainly evident.

Suppose that heavy Twitter use correlates with being promoted in an organization. It may be tempting to conclude that everyone wanting promotion should use more social media. But causality could run the other way: Maybe already-successful employees use Twitter more, and incessant tweeting is a path to demotion for average employees. Or perhaps gregarious people tweet more and get promoted more. There is no causal explanation in the correlation. This is not fanciful; the literature is packed with such examples. Smart people seeing evidence consistent with their case do not look for alternative causal models.
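A small simulation shows how easily this happens. In the sketch below (purely illustrative, with made-up numbers), a hidden trait, gregariousness, drives both tweeting and promotion; tweets and promotion come out strongly correlated even though neither causes the other.

```python
# Illustrative confounder simulation: gregariousness causes both heavy
# Twitter use and promotion, so the two end up correlated with no causal
# link between them in either direction.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

gregariousness = rng.normal(size=n)                     # hidden common cause
tweets = 20 + 5 * gregariousness + rng.normal(size=n)   # tweeting driven by gregariousness
promoted = (gregariousness + rng.normal(size=n)) > 1.0  # promotion driven by it too

# Tweets "predict" promotion even though neither causes the other.
r = np.corrcoef(tweets, promoted)[0, 1]
print(f"correlation(tweets, promoted) = {r:.2f}")
print(f"mean tweets | promoted:     {tweets[promoted].mean():.1f}")
print(f"mean tweets | not promoted: {tweets[~promoted].mean():.1f}")
```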

The word predictive, often used to indicate a positive correlation, causes further problems. Predict has a strong causal connotation for the average reader, and researchers themselves often slide down the slope from “A is predictive of B” to “A causes B.”

As a father, one of my goals is to raise my children to distinguish correlation and causation. Unjustified causal assumptions about correlated events are so common that anyone who avoids them will find ways to be useful. Often we make the correct causal inference, and when dodging a possible saber-tooth a false alarm may be a modest price to pay for survival. However, problematic errors arise frequently in science, engineering, and everyday life.

Qualitative analysis

After describing an ingenious quantitative analysis to explain patterns in “big data,” a conference presenter expressed frustration with a pattern that defied analysis. Someone suggested that the researcher simply contact some of those whose behavior contributed to the pattern to ask what was going on. His reply, in so many words, was “That would be cheating!” Clever quantitative analysis was his goal.

Especially today, with quantitative data so readily available, qualitative field studies are often dismissed as anecdotal and especially prone to confirmation bias. And in fairness, many studies claiming to be ethnographic are weak, and even good work faces challenges, as described below. So let me explain why I believe that qualitative research is often the best way to go.

I hold degrees in math, physics, and cognitive psychology. I appreciate all efforts to understand behavior. I drew distinctions between scientific approaches and those of history, biography, journalism, fiction—and anthropology. I felt that science involved formal experiments, controlled usability studies, and quantitative analysis. Then, in the mid-1980s, I read a short paper by Lucy Suchman, “Office Procedures as Practical Action,” that described the purchasing process at an unnamed company. A purchase order form was filled out in triplicate, whereupon the Purchasing department sent copies to Receiving and Finance. When orders arrived, Receiving sent an acknowledgment to Finance. When an invoice arrived, Finance found the order and receipt and cut the vendor a check. Very methodical! Suchman then said that the process is not routine and routinely requires solving problems and handling exceptions that arise. She included the transcript of a discussion between two people struggling with a difficult order, showing lots of inference and problem-solving. End of article.

I was shocked. How could a scientific journal publish this? Here was a rational organizational process that I was sure usually worked smoothly, and she had presented one pathological case. I marched to the office of someone in Purchasing in my organization and asked her to explain our process. She said, “Someone fills out an order in triplicate, we send one copy to Receiving, the other to Finance. When the goods arrive…” And so on.

“Right,” I said. She looked at me. “That’s how it works,” I said.

She paused, then said, “Well, that’s how it’s supposed to work.” I looked at her. “But it never does. Something always goes wrong.”

I held out a copy of Lucy’s paper and asked, “Would you read this and let me know what you think?” The next day she told me, “She’s right. If anything, it’s worse than she said. Some exceptions happen so often we call them the standard exceptions, and then there are exceptions to the standard exception.” 

“Thanks,” I said. I got it. Anthropologists are trained to avoid cherry-picking. You can’t spend two years describing a two-year site visit or two weeks describing a two-week study, so you rely on representative examples. Their methods can include copious coding and analysis of observations and transcripts. Some anthropologists are better than others. The approach might seem less foolproof than controlled experiments, but there is a method, a science. I started doing qualitative work myself.

The BBC drama Elizabeth I portrays a queen contending with chaotic scheming. A minor character, Francis, offers occasional thoughtful guidance. At one point someone refers to him as… Bacon! The apostle of scientific method in a setting devoid of evidence-based decision-making!? Actually, one of Bacon’s great contributions employed qualitative field research. Oxfordshire yeomen rebelled in the late 16th century. The customary response to an insurrection was suppression by force, but Bacon investigated and found them starving, forced off traditional farmland by aristocrats enclosing the land to create private hunting grounds. Powerful figures in the House of Lords insisted that this was a right of landowners, but Bacon pushed through and defended measures that preserved traditional access to land. (I highly recommend Nieves Mathews’ fascinating account of Bacon.)

In my research, when an hypothesis emerges, an explanation for patterns in the data, a constant priority is to find alternative explanations and disconfirming data. In a presentation, Rob Kling noted that careful writing is more important in qualitative research because one word can make a huge difference: “X can lead to Y” or “X often leads to Y” is not the same as “X leads to Y.” Experimental and quantitative methods produce data that are reported in the paper; readers can consult the data, so the discussion can be looser. A good qualitative report requires careful, honest selection of data and artistry in presentation to paint a picture for readers.

The challenge is amplified when the researcher is expected to adopt a theoretical framework, to “build theory.” This invites selective filtering of observations. Another risk is “typing,” when a researcher becomes known for a particular observation or perspective, increasing the desire to confirm it. Some good qualitative researchers become predictable in what they report in each study. Sometimes other aspects of the situation seem central to understanding yet are not stressed; other times the filtering isn’t noticeable, but it could still be at work.

Can anyone undertake a study free of hypotheses? At an uninteresting level the answer is no—we all believe things about the world and people. But a better answer is often yes, we can minimize expectations. An ethnographer could study a remote culture assuming just that it is of interest to do so, or assuming that there is a complex kinship system that should be winkled out. The latter risks discovering something that is not there or missing something of greater interest. Similarly, we can examine the use of a new technology believing that it is likely to be interesting, or we can come in with preconceptions about how it will be used. The stronger the preconception, the greater the risk.

One way to approach this is grounded theory, a set of approaches that advocates minimal initial hypothesizing and the collection and organization of data in search of patterns that might form a foundation for theory. When a possible pattern is detected, the researcher seeks observations that do not fit, a step toward a richer understanding. Grounded theory has its detractors. It may not appeal to people for whom theory is where the fun resides. But it is the best fortress I know from which to defend against confirmation bias.

Conclusion

I’m not immune to confirmation bias, although I’m generally not so confident in any hypothesis as to resist seeing it disconfirmed. For example, I think of HCI as pre-theoretical, but rather than confirming that bias by ignoring or attacking all theories, I consider them and sometimes find useful elements. Years ago, I was dismayed to find data that didn’t fit a cherished pattern, but I eventually came to love disconfirming data, which is a necessary step toward a more complete understanding.

Am I biased about the importance of confirmation bias? I’m convinced that we must relentlessly seek it out in our own work and that of our colleagues, knowing that we won’t always succeed. Perhaps now I see it everywhere and overlook more significant obstacles. So decide how important it is, and be vigilant.

This post had unusual help for a non-refereed paper: Franca Agnoli, Steve Poltrock, John King, Phil Barnard, Gayna Williams, and Clayton Lewis identified relevant literature, missing points, and passages needing clarification.



Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.




Banjos and discreet technologies


Authors: Steve Benford
Posted: Mon, August 05, 2013 - 12:39:05

I begin this post with a confession. I play the banjo. There, it’s out in the open and I feel better already. Actually, I play the tenor banjo in Irish style, although this is a distinction that probably only banjo players care about (but boy will they care). You’ll often find me on a Sunday afternoon in the Vat and Fiddle, Bell, or Hop Pole playing along at one of Nottingham’s traditional Irish sessions.

You would be forgiven for wondering what this has to do with human-computer interaction, but it turns out that even very traditional practices can shed new light on our interactions with computers. My colleagues Peter Tolmie and Yousif Ahmed and I undertook an ethnographic study of Irish music sessions, which, fortunately for me, involved spending considerable time hanging around in traditional sessions and observing what goes on, as well as interviewing musicians.

Irish music sessions

At first glance an Irish music session is a traditional practice that seems far removed from the world of computers. A group of musicians sits around in a pub, playing traditional tunes on a variety of instruments—fiddles, flutes, whistles, mandolins, the bodhran (the traditional Irish drum), and guitars to name a few. Even banjos are tolerated. Well, subject to recital of the canonical list of banjo jokes anyway. 

The structure of the music allows the musicians to improvise as a group. They typically play sets of several tunes strung together, with each tune repeated several times. A typical set might consist of three different tunes, each repeated three times, where each tune consists of two parts that are themselves repeated. If this seems a little complicated, the important thing to remember is that several tunes are sequenced together and that there is a fair bit of repetition. This structure enables improvisation, both through choosing which tune to segue into next and through embellishing a tune each time it is repeated. Repetition also supports playing by ear, as musicians can try to recall a tune that they haven’t played for a while or even pick up a new tune from scratch if they are especially skilled.
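For the programmatically minded, here is a toy sketch of how such a set unrolls into a playing order. The tune names are placeholders, and real sessions vary the pattern freely; this simply spells out the nesting described above.

```python
# Toy sketch of how an Irish session set unrolls (tune names are placeholders).
# Each tune has two parts (A and B); each part is played twice through,
# the whole tune is repeated three times, then the set segues to the next tune.
SET = ["Tune One", "Tune Two", "Tune Three"]   # hypothetical set of jigs

def unroll(set_of_tunes, tune_repeats=3, part_repeats=2, parts=("A", "B")):
    order = []
    for tune in set_of_tunes:
        for _ in range(tune_repeats):
            for part in parts:
                order.extend([f"{tune} ({part} part)"] * part_repeats)
    return order

for step in unroll(SET):
    print(step)   # 3 tunes x 3 repeats x 2 parts x 2 times through = 36 played sections
```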

How Irish musicians have taken to the Internet

Our study showed how Irish musicians have taken to the Internet, establishing a dedicated Irish social media site called The Session as far back as 2002. Since then, a worldwide community of musicians has been transcribing the Irish repertoire, building a database of thousands of tunes along with sheet music, notes on recordings, names (which are widely contested), variations, playing tips, and suggestions for good sets. The page for the tune Banish Misfortune, for example, gives alternative names, comments, and the music written in ABC notation, which was developed specially for traditional music (the conventional “dots” are also available). The Session also gives the locations and timings of sessions in many cities around the world, in case an Irish musician is visiting a distant town and fancies a play. The Session is incredibly useful because it enables musicians to learn new tunes. It has since been supplemented by a variety of other services, from YouTube to the BBC’s Virtual Session to specialist learning sites such as jigsandreels.

Session etiquette

There is a very interesting tension at play here—one that speaks directly to the design of new technologies. On the one hand, Irish musicians appear to be enthusiastically adopting digital media to share and learn a common body of material, while on the other, the actual performance of this material in a live session is governed by a strong etiquette that emphasizes playing by ear. While there are of course huge local variations in etiquette, many musicians we spoke to were very aware of this tradition of playing by ear. There is a general nervousness about getting out sheet music in a session, especially for beginners, an idea that is reinforced by various published guides to session etiquette.

Our studies revealed the subtle ways in which musicians manage this tension, walking the line between extensive preparation and rehearsal away from a session, and the spontaneity of playing by ear within it. Many, for example, carry notebooks of tunes, pre-arranged into convenient sets. One notable strategy that we saw—and that I have seen several times since in sessions as far afield as Nottingham and Seattle—is to prepare a small piece of paper, perhaps in a pocketbook, that has the names of tunes grouped into sets alongside just the first few bars of music of each. These bespoke notations convey the essential information needed in a minimal form; an Irish musician needs to know which tune is coming next and just the first few bars of that tune so that they can segue into it, after which they are up and running from memory.  

Discreet technologies

Such pieces of paper are an example of a discreet technology. They condense the essential information required to support the practice of playing by ear into a form that fits with session etiquette. A page of a small notebook, which after all could be used for writing down the names of new tunes, contact details, and so forth, is a long way removed from a piece of sheet music on a music stand.

This idea of discreet technologies—ones that provide useful services in a way that respects the social etiquette of a given situation—extends beyond pieces of paper to the digital world. We are all aware of the challenges of managing mobile phone use in various social settings. Indeed, these same technologies are present in Irish sessions that take place in pubs, busy social settings where people—including the musicians—have come together to chat, perhaps drink a pint or two, and enjoy “the craic.” In this context it may be quite acceptable to get out a phone, check a text, or look up something on the Internet. It can even be acceptable to break off playing in the middle of a set to check one’s phone or receive a call—quite a contrast to a formal classical music concert. We also observed people using digital recorders to capture tunes as they are played.

It’s all in the ambiguity

So this notion of discreet technologies is a subtle one. Ironically, using a modern digital technology such as a mobile phone or digital recorder to support a traditional practice might possibly be more discreet than using a traditional technology such as a piece of paper. Both phones and pieces of paper can be used for many purposes, from taking notes to reading sheet music, and it is perhaps this apparent use that really matters. It all depends on how you are seen to use a given technology rather than on the form of the technology itself. 

Perhaps the underlying issue is whether the apparent use of a technology is sufficiently ambiguous that it can plausibly fit some socially acceptable use, or whether, in contrast, it overtly flouts the local etiquette to the point where it can no longer be ignored. This idea directly builds on Bill Gaver’s discussions of the role of ambiguity in interface design, and especially on Paul Aoki and Alison Woodruff’s subsequent application of this to the challenge of saving face in mobile phone calls.

The challenge for interface designers is to invent displays and services that can be used discreetly with respect to a particular social setting; that don’t overtly flout social etiquette and that are open to ambiguous interpretation. Our study showed how traditional Irish musicians have been particularly creative in this respect, inventing their own discreet notation to help them bridge between learning tunes offline and recalling them when playing live. Interface designers might learn a great deal from such inventiveness! 

To read and hear more

For those who would like to read more, our ethnographic study of Irish music sessions was first reported in a paper at CSCW 2012, while an extended version has recently been published in the book Ethnomethodology at Play.

Finally, it seems appropriate to end with one of my favorite banjo tunes—the rollicking jig Banish Misfortune. Feel free to play along, or perhaps I’ll see you in a session somewhere soon.




Steve Benford

Steve Benford is professor of collaborative computing at the University of Nottingham’s Mixed Reality Laboratory.




CHI 2013 HCI for Peace Ideathon summary


Authors: Juan Pablo Hourcade
Posted: Fri, August 02, 2013 - 11:56:52

For the past three years, members of the human-computer interaction community interested in using computing technologies to promote peace and prevent conflict have been meeting as part of HCI for Peace, a grassroots initiative. Our aim is to highlight and celebrate work already done to this end and to encourage further work with peace as its explicit goal. We hope this call to action starts some community-wide discussions from which positive action can spring: Our world can be no brighter than the worlds we dream of. The HCI community is uniquely positioned in the computing world to effect change in this area, given its focus not only on the user sitting in front of a screen but also on the effect of technology on humanity at a societal and global scale.

In this blog post I summarize the outcomes of the latest HCI for Peace meeting at CHI 2013, when about 20 attendees met as part of a SIG titled “HCI for Peace Ideathon.” It was an opportunity for like-minded researchers and practitioners to exchange ideas and experiences and to think about future opportunities. 

The participants split into four groups to discuss different topics that were later shared with the entire group. The first group focused on how interactive technologies may play a role in problem formulation in order to resolve conflicts. Ideas discussed included ensuring that a diversity of people are involved in conflict resolution (to develop more alternative solutions), encouraging differences of opinion, and arriving at a wide set of solutions early on. Additional discussion focused on the role of the media: positive cooperation is much less likely to be reported than violence, providing a skewed (and scarier) view of the world. The group developed an idea for media and storytellers: stories from each side of a conflict that reflect the same values could be merged by storytellers and propagated to both sides to show their similarities.

The second group discussed citizen journalism. This group included a member from witness.org, who discussed their app to support witnesses recording video of sensitive areas. The app gathers metadata that can be used to validate that a video was recorded at the time and place claimed by the recorder. While the idea behind the app is to record and broadcast violent acts to deter their occurrence, there was also the suggestion that it could be used to cover positive events, reinforcing peace messaging and positive action.
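As a rough sketch of the kind of validation being described (an illustration only, not the design or API of the actual app), a recorder might bundle the claimed time and place with a hash of the video so that later tampering is detectable; a real system would use digital signatures and trusted timestamps rather than a bare hash.

```python
# Illustrative sketch only -- not the witness.org app. The idea: bundle the
# claimed capture time and location with a digest of the video bytes and seal
# them together, so that later edits to the video or the claims are detectable.
import hashlib
import json
from datetime import datetime, timezone

def seal_recording(video_bytes: bytes, lat: float, lon: float) -> dict:
    metadata = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "lat": lat,
        "lon": lon,
        "video_sha256": hashlib.sha256(video_bytes).hexdigest(),
    }
    # Hash the metadata together with the video digest; in a real system this
    # would be a digital signature over the bundle, ideally with a trusted timestamp.
    bundle = json.dumps(metadata, sort_keys=True).encode()
    metadata["seal"] = hashlib.sha256(bundle).hexdigest()
    return metadata

def verify(video_bytes: bytes, metadata: dict) -> bool:
    claimed = {k: v for k, v in metadata.items() if k != "seal"}
    if hashlib.sha256(video_bytes).hexdigest() != claimed["video_sha256"]:
        return False
    bundle = json.dumps(claimed, sort_keys=True).encode()
    return hashlib.sha256(bundle).hexdigest() == metadata["seal"]

record = seal_recording(b"...video bytes...", lat=41.88, lon=-87.63)
print(verify(b"...video bytes...", record))      # True
print(verify(b"tampered video", record))         # False
```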

A third group discussed recent applications (e.g., the NNR table) that use shared interactive spaces for conflict resolution. In these spaces the two parties have to work together to make something happen. Working together benefits both parties, while not collaborating leaves both stuck. The NNR table was recently evaluated with Israeli and Palestinian youth, with positive results.

The fourth group discussed what HCI researchers and practitioners could do to have an impact on the precursors of conflict. One issue that came up was the need to share the empirical research on these precursors, as it is not well known. This would be important not only for HCI people, but for the public in general. HCI researchers and practitioners could play a role, for example, in helping the public understand the warning signs of upcoming conflict using accessible visualizations. We also discussed the role of social media in escalating or de-escalating conflict. 

Ben Shneiderman made a call at the end of the session to introduce into the HCI curriculum the role interactive technologies can play in preventing conflict and promoting peace.

What would you propose? What are some ways in which interactive technologies could prevent conflict and promote peace? To join the discussion, visit hciforpeace.org, join our Facebook community, or follow us on Twitter at @hciforpeace.



Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.




Trajectories into practice


Authors: Steve Benford
Posted: Tue, July 23, 2013 - 8:00:32

I’m Steve Benford, professor of collaborative computing at the University of Nottingham’s Mixed Reality Laboratory. My research explores new interaction techniques and concepts for creative and cultural experiences. I characterise my approach as performance-led research in the wild, meaning that my team initially helps artists and performers develop and tour new experiences, which we then study in the wild, ultimately leading to new HCI concepts, such as ambiguity, spectator interfaces, uncomfortable interactions, and trajectories. I was elected to the CHI Academy in 2012 and have won CHI best paper awards in 2005, 2009, 2011, and 2012. My book Performing Mixed Reality has been published by MIT Press. You can find out more about my research, including accessing papers and videos, at my personal research blog.

Trajectories into practice

Maybe it’s an age thing, but I’m increasingly bothered by the question of how my research might make a difference—why might a professional working at the coalface of user experience design care about what I am doing? Of course, there are probably other reasons why this is bothering me too—we (researchers) are increasingly asked to justify the impact of our research, both speculatively, when writing proposals such as the “pathways to impact” section of EPSRC proposals, and retrospectively, when our results are weighed in the balance of various assessment exercises such as the fast-approaching UK Research Excellence Framework, which for the first time includes impact case studies.

An important aspect of this for me is putting HCI theory into practice, a notoriously difficult challenge, as articulated by Yvonne Rogers in her recent book on HCI Theory and also emphasised by EPSRC’s recent review of the state of HCI here in the UK. Perhaps the first question to consider is what I mean by HCI theory here. In addition to Yvonne’s book, which tackles this question in considerable depth, I’ve also been struck by Kia Höök’s and Jonas Löwgren’s recent TOCHI paper on “Strong Concepts,” design abstractions that generalise across different domains. Kia and Jonas identify my work on trajectories as an example of a strong concept, a form of HCI theory that could be ripe for putting into practice. 

The question of how to put trajectories into practice has therefore emerged as a central concern for my ongoing EPSRC Dream Fellowship. So far, I’ve been focusing on two different domains, museums and television.

For museums, Horizon Doctoral Training Centre student Lesley Fosh has designed and studied an example trajectory through a sculpture garden (see Figure 1). It aims to move pairs of visitors between moments of experiential engagement with sculptures, in which they are isolated, listening to music, and performing physical gestures such as touching the sculptures, and other moments where the visitors come together again to reflect on these experiences as well as on the more “official” guide information that they receive afterwards. These ideas are now being carried forward in the European CHESS project, where we are working with the Acropolis Museum in Athens and the Cité de l’Espace space museum in Toulouse, among other partners, including running a series of design workshops over this summer to apply trajectories to the design of new visiting experiences.


Figure 1. Design of the local trajectory through a piece of sculpture in a sculpture garden.

Turning to television, I recently spent four months as a visiting professor at the BBC focusing on the design of multiscreen TV experiences. I began by analyzing some existing companion apps for TV shows, including the Antiques Roadshow play-along game in which viewers estimate the value of antiques during the show, as well as a research prototype called Jigsaw developed by Maxine Glancy and the team at BBC R&D that aims to support intergenerational TV experiences by enabling children to snap images from a TV show and turn them into jigsaw puzzles. Similar to Lesley’s example, I was able to produce some case studies to illustrate the potential of trajectories, albeit by analyzing existing designs. Again, these were followed by a design workshop with participants from editorial, user experience, and research and development, at which we used trajectories to explore new design concepts for extended TV experiences, and two subsequent presentations to the wider BBC user experience design team as part of their Northstar One Service design project.

While it was exciting to engage with professional user experience designers who expressed enthusiasm for trajectories, it proved difficult to establish a deep connection between the concepts and specific example designs in short workshops. We therefore hosted the first “trajectorize” course at Nottingham last week. Three different teams—David Ullman and Dan Ramsden from the BBC; Andres Lucero from Nokia and Joel Fischer from the Mixed Reality Lab; and the artists Ben Gwalchmai and James Wheale—brought along three design concepts that we then inspected through the lens of various trajectory concepts over two days. As well as being thoroughly enjoyable, this was the first time that I began to glimpse how trajectories might actually be put into practice, with designers able to produce complex trajectory sketches as a way of challenging and refining their ideas in areas such as designing social encounters, key transitions, and take-home experiences. You can get more of a sense of what happened from the official course structure and materials, but also from Dan’s blog post after the event. 

The initial success of this course suggests to me that there is indeed the potential to embed strong concepts such as trajectories into the practice of professional user experience design, but also that this takes considerable work from both sides—in this case at least a two-day commitment of time to be able to make significant progress. However, I suspect that there is far more to it than this. 

First, we need to understand where these concepts sit in the UX design process (our teams were using trajectories to refine existing concepts rather than for ideation). 

Second, it feels important to generate some initial case studies based on familiar examples (as we did in both sectors) to generate initial interest in the concepts. 

Third, as our course participants observed, it feels like the concepts need an appropriate level of generality, being structured enough that you can repeatedly attack a design from different perspectives, and yet not so prescriptive that they close down creative thinking. 

Finally, there is the importance of sketching. We have repeatedly shown trajectories as diagrams and encouraged workshop participants to create their own. Creating and labeling trajectory diagrams feels like an important element of the approach, but it also brings its own challenges, not least that you need a very large sheet of paper to be able to move between the overview and the fine detail of annotations. As a result we have begun to experiment with zoomable drawing tools, initially developing a series of trajectory sketches using Prezi, and more recently, with my colleagues Chris Greenhalgh and Tony Glover, beginning to develop, and now use, our own zooming trajectory sketch tool that adds greater structure, sequencing, and metadata to an evolving sketch.

Putting trajectories into practice is very much a work in progress, and one that I hope to continue over the coming years. My current sense is that it should be possible to put strong concepts such as trajectories to work, but only if we can find the right approach and supporting tools.




Steve Benford

Steve Benford is professor of collaborative computing at the University of Nottingham’s Mixed Reality Laboratory.




The human in HCI: What you can learn from the Bard (and others)


Authors: Uday Gajendar
Posted: Tue, July 16, 2013 - 1:32:37

How does one account for the human within human-computer interaction? One approach historically embodied by the HCI field is firmly reductionist, a distillation of functional entities in which a human comprises "information processing systems" and "decision-making agents." It has a quantitative outlook with scientific rigor and statistical significance of data to ensure accurate validations of hypotheses. This grounds everyone in rational discourse and technical conclusions. And it's absolutely important and useful, just not entirely sufficient, IMHO. 

If we are to improve the human condition via well-designed technologies (computers, devices, systems), we must somehow grok the, well, human condition! This requires an empathetic, holistic outlook on the whole of humans (i.e., people) in all their glory of promise and gory flaws. This is why I always recommend reading Shakespeare and philosophy.

Wait, what? Indeed, I've found you can learn more about people and their messy challenges from literary texts representing culture and the humanities than from typical HCI textbooks. While that info is great as a reference desk (just as pharmacology classifications are useful on a doctor's desk), you gotta get deep into the messiness that makes up a person's life: emotions, dreams, motives, beliefs, flaws, hopes, fears, ideals. A doctor takes time to get to know their patients’ personal and family histories, as well as their habits and stories of life. It's not just charming bedside manner; it's about developing a holistic view toward making better diagnoses that are supportive of patients’ well-being, so they can (as the Kaiser ad says) thrive. 

So what of these literary authors and what can HCI professionals learn from them? 

Literature: Shakespeare. You can't beat the Bard himself, right? The maestro of Elizabethan theater captured and chronicled the messy affairs of the day with wit and eloquence in his staged plays, both tragedies and comedies. Each delved deep into critical human emotions, exploring and exposing people as flawed and hapless, yet striving for somewhat misguided noble aims. Hamlet exposed woeful anxiety ("To be, or not to be…"). Macbeth, ruthless ambition. King Lear, frail sense of half-witted ego and treacherous legacy. Meanwhile A Midsummer Night's Dream cleverly dwells on fantasy, hope, childlike dreams that persist into adulthood (much like Peter Pan or Alice in Wonderland centuries later). If you want to dig into what makes us all tick, read Shakespeare or watch his plays performed in the park (for free). 

Philosophy: Nietzsche. Actually, Plato and Socrates are better places to start, but philosophy in general is the quest to ask "Why?" in order to discover the underpinnings of human thought and values. Far from fanciful daydreaming (that's just daydreaming, really), philosophy offers a tough, persistent, skeptical analysis of purposes and values, and how they influence our daily lives. Plato and Socrates represent the Classical ideals of understanding reality through dialogue and storytelling, by direct observation of people in context. Nietzsche applied a grittier lens, an intense examination of how to become a stronger, life-affirming person, in full existential vigor, willing yourself to power and achievement. Sartre and Camus continued this theme with writings on our need to act in order to give our lives meaning, to be fully human and engaged in daily life.

Fine Art: Picasso. Or Monet, Van Gogh, Matisse, and countless other artists deemed somewhat "mad" for their times. Each one interpreted reality in a different way, conveying their visions through special techniques of painting (the medium truly is the message) and illuminating various "truths" about the nature of everyday life, with atmosphere and conviction. Each of their works reflected the zeitgeist of its era (the discovery of x-rays and quantum mechanics, new forms of light and photography, theories about cultural layers to reality) and offered a response of emotional value: mood, tone, voice. They were trying to capture the emotional tones of an era, the broader spirit of the people, which we ourselves may not be attuned to. As the famous saying goes, art tells beautiful lies to reveal a deeper truth. It's all subjective, but also a deeply emotional expression of the human condition.

What's the result after spending time indulging in such topics? A greater cultural appreciation for the human aspect that HCI professionals are working to support. If you want to improve the human condition, you have to strive to understand it at a human level of abstraction and messiness. This appreciation yields a deeper sense for the motives for how and why people are the way they are. Sure you can (and should) perform scientific experiments validating finite measurements for benchmarking, etc. But as Steve Jobs said, it's at the intersection of liberal arts and technology where we create something that makes our hearts sing. 

Tapping into the poetry and emotion of what makes us all human is an essential part of that process of making HCI really H - C - I, from human-computer interaction toward human-condition improvement.




Uday Gajendar

Uday Gajendar is Director of User Experience at CloudPhysics, focused on bringing beauty and soul to Big Data for virtualized datacenters.




The UX ownership war is over … and we have lost!


Authors: Daniel Rosenberg
Posted: Tue, July 16, 2013 - 7:53:45

In previous blogs and many interactions articles and columns over the years, I have articulated my concerns over the UX profession’s general inability to penetrate to the core of business leadership. Richard Anderson added his own theory to this legacy in his blog “What Holds UX Back?”

I had a profound experience last week, which unfortunately pushed me over to the dark side regarding my perpetually optimistic perspective on how UX design professionals will eventually take a place of equal rank in the boardroom.

Let me frame the situation…

I was on an east coast business trip last week, working with one of my clients, a startup named WellDoc that has created an FDA class-4-certified mobile app for managing type 2 diabetes. Look it up. Doctors prescribe it and your insurance company reimburses the monthly subscription cost! It will save billions. Following the advice this app’s expert system provides as it tracks your lifestyle data has been proven to lower blood glucose levels more than some of the most popular diabetes medications. This is an amazing, cutting-edge business model. It took a bunch of brilliant physicians, clinicians, and business people a decade to make this fly. The actual software engineering and UX design were among the least complicated parts of bringing this product to market. (Full disclosure: My wife and I, and some other family members, are investors in this company.)

Now to the event that triggered the title of this blog…

During the course of the day, a 30-something product manager I have been working with on a different medical app for about three months casually mentioned that he is starting an MBA program in “human-centered design” at Johns Hopkins University’s Carey Business School. Formerly this program was known as the MBA in design leadership. It is run in collaboration with the Maryland Institute College of Art; however, Johns Hopkins is the degree-granting institution. 

Nathan Shedroff at the California College of the Arts (CCA) established a similar program several years ago called the MBA in design strategy. When I first heard about this idea I did not panic because Nathan (as most readers will know) is a world-class designer and design thought leader and CCA is not a business school. 

Unfortunately, from my perspective, as big-name business schools jump on board the “design leadership MBA” trend, the future ownership of the UX agenda will become the province of people who are not trained as designers or HCI specialists and who have never actually practiced design. At least they will employ designers.

You might say that this trend simply reflects the maturation of design into a core competitive business value proposition from which we will all, as consumers, benefit in the end. But is this the path to ubiquitously great product design?

Who do you think the typical CEO is going to listen to, the guy from Harvard with the MBA in design leadership already seated at the table or the creative genius in the hallway with purple hair and body piercings sporting an MFA from the Royal College?

Game, set, and match over!




Daniel Rosenberg

Daniel Rosenberg is Chief Design Officer at rCDOUX LLC.




@David (2013 07 17)

I share your concerns, Daniel.

Last week I was interviewed for entrepreneurship.org about interaction design. The article was fantastic actually… but the kick off question was the same as always: how can biz people ‘do’ interaction design?

The fact is that IxD is a profession like any other, was my answer. That said there’s plenty of things biz people can do to stand in for IxDs while they’re hiring one.

Despite the truth of what you say and my own experiences,  I’m not so sure the episode you recount indicates the final battle in the war. On the contrary I would say it opens another valuable front in the struggle to assert the importance, methods, and results of interaction design.

While you’re undoubtedly correct that it (a) will suck hard to sit across the table from an MBA who pretends he knows the difference between a radio button and a check box and collects a big fat check at the end of the year for his troubles… it is also true that (b) designers living near Johns Hopkins, et al, should march over to that school and get on the faculty and/or get in the program and make a difference.

This is an infinite game, not a finite game. Keep on truckin’!

~David Fore

The road is long. These are still early days.

@Ellen B (2013 07 18)

Nice article & a good topic to raise.

My take: yes but no but yes but no. smile  IMHO the UX agenda hasn’t really been owned by trained designers in the first place. I think it’s a win and a noble experiment to try to train design-focused MBAs.

First, there aren’t enough of us in the pipelines to have reached significant across-the-board leadership positions specifically in the tech industry. Most of those programs have been around for 10 or 15 years (I’m from CMU HCI’s 2nd graduating undergrad class).

Second, companies are so starved for raw production design skills that designers are seen as desperately-needed producers of interaction and visuals, but aren’t “needed” to contribute strategic vision: there are plenty of MBAs for that. You’ve usually got so damn much work to do that sitting in the meetings and negotiating for this or that feature is not how you’re allowed to use your time.

Third, designers seem less likely to have the skillsets that most companies see as necessary for strategic product leadership: this includes marketing, market research, raw business analysis, plus the “soft skills” of political leadership, negotiation, etc. Because designers’ speciality skills are in such demand they aren’t particularly given the space to develop these other skills the way, say, an APM is required to develop them.

Great design simply doesn’t happen solely because of the design talent. It also takes leadership that prioritize design at the highest levels — the CEO on down. Leadership has to say “we’re devoting these three engineers to polishing the UI.” “We’re spending the money on usability tests / ethnography.” “We’re going to hire some fucking brilliant designers and stay the hell out of their way.” “We’re going to spend money on the nice packaging.” “We’re going to have a great customer service department.” “We’re not going to ship the product until this is AWESOME.”

Someone who is a designer isn’t necessarily the person who has to be doing this. Someone who is an MBA but values & understands design — can be.

I think if you want to *strategically* impact user experience you shouldn’t be a designer in today’s tech climate & I don’t think there are enough of us to change things yet. You should be a product manager who has a strong background in user-centered design and UX. There’s a glass ceiling for designers; if you want to really be senior in the software industry you need to be a PM, not a designer.

(BTW this is my entire career trajectory / issue of profound meditation — starting as a UX designer, how do I have the strategic impact I want? Across various design leadership positions I’m now cofounder at a startup; head of Product & UX with CEO & CTO who value good user experience.)

@Dave Malouf (2013 07 19)

Daniel,
My experience is in total opposition to yours. I just interviewed for an executive director position at Honeywell’s Chemical’s group. And they have hired UX VPs at a few of their other divisions. This is the trend. GE is doing this as well. These are roles with direct contact to the CEO and can clearly compete for senior leadership roles guiding the strategy of the organizations. Couple this w/ the trend among tech startups to include a design co-founder I think your anecdote is just that, a single data-point.

But the other premise in your piece is weird to me. Of course, the MBAs own strategy. They have and will always own strategy. It is only recently that UX has even been considered a strategic initiative, and in most organizations it hasn’t even risen to that level.

But further, I’m very confused with the presumption here.
a) Why can’t designers get this or any other MBA or similar Design Management degree (Pratt, IIT/ID, SCAD to name a few)?
b) That the program is devoid of design teaching? And further that business people can’t learn design? It is a normal path for Technology folks to get an MBA (not a design MBA, or a technology MBA, just an MBA) as part of their career path if they want to go into product management and rise through the business/strategy ranks. Why not do this through design?

Having taught alongside the design management program at SCAD, I have seen great business and engineering folks become more than competent designers. They aren’t the best designers, but given the positions they are going into, to @Dave’s point, they don’t have to be the best designers; they have to be the best strategists, analysts, managers, and leaders. But through real studio work, info vis skills, and yes, HCI and Human Factors classes, they become competent, well-rounded designers. Given that the JH program is tied to a design school I could only assume the same basic curriculum is in there.

What is funny is that Robert Fabricant (given you just mentioned Harvard) just published in HBR this piece this same week (http://bit.ly/197uAhA) which is in total opposition to your perspective.

—Dave

@Vladymir Rogov (2013 07 22)

What if the new CEO is the guy with the purple hair and body piercings? If you think that all CEOs are suited mannequins, then you are not getting around much.

@Dan Rosenberg (2013 07 24)

Thanks for all the good discussion and comments.  I will respond shortly in my next blog.  I have gotten significant feedback from other channels as well that I would like to include. 

I am motivated, however, to respond to the last comment, which I assume was offered in jest.  I know a few CEOs here in Silicon Valley with purple (or orange) hair and many more with body piercings.  Some of them have MBAs, some don’t.  The point is they are the CEO. They have the ultimate skin in the game.  My favorite is Quixey, where the CEO is a childhood friend of my son (and this is his second company).  It is a well-funded startup with some big-name investors like Peter Thiel.  The important point is that while running very lean he prioritized not only having UX designers but also having a user research function in house over many other things.  These are the next-gen leaders, and while not designers they have strong opinions on design.  As the CEO they also hold the gold and are the company’s UX leader through words, actions, and investment choices.

@Ashley Karr (2013 08 14)

If the CEO is worth anything, he’ll go with the designer w/ purple hair.

Ever hear of the story “The Little Red Hen”? If you haven’t or have forgotten it, read it! Talkers are a dime a dozen. Have faith, Daniel!!!

One great exercise to turn the tables and put the engineers and designers at the helm:

During design reviews / pitches, don’t let the TALKERS TALK! If they have suggestions, give them a piece of paper and pen and tell them to DRAW WHAT THEY THINK THE DESIGN SHOULD LOOK LIKE! Better yet, make them come up to the front of the room and draw it on the board.

I have lots of sneaky suggestions like this. Email me, and I will give you more!!!

Cheers,

Ashley

@ 5267471 (2013 09 03)

Interesting perspective, but I think you are missing something here: the sacred design you reference is done in service to the business, not for purely artistic reasons. Those who can figure out how to monetize it will surely win over the long haul. Therefore someone who takes stock in being a designer first and a business person (the MBA you detest) will need to take comfort in having the design components lauded but perhaps the actual implementation aborted. A good CEO will spot the application to the business, to the strategy, to the brand, etc., but that doesn’t mean the pure UX designer is absolved of making those connections apparent.

@liam (2013 09 22)

Two words Dan:  Jony Ive.

@Prod Mgr (2014 03 18)

I am actually the product manager that Dan is referring to in the article (one of my classmates just pointed out the article to me—damn RSS feeds (-:).
Dan has been a great sounding board for a number of projects we have collaborated on.

I’d just like to clarify that the dual degree program is about using design thinking in different situations to help us solve problems in divergent ways.  All of our “design” classes are taught at Maryland Institute College of Art (MICA), an amazing art school here in Maryland. 

To give you an example, a few of the projects we have worked on: (1) revitalizing the library and post office system by providing an enhanced user experience; (2) recreating a kitchen item in a completely different way, while experimenting with 3D printing, laser-cutting, electronics, smart fabrics, etc.

Our professors (and mentors) have been Creative Directors, UX leaders, DJs, Innovation Officers @ Fortune 1000 cos., and NPR producers.

I, personally, have been incredibly happy with the program and hope to bring empathy and a little bit more openness and creativity to the business world.


Number 9: Names, Facebook, and identity


Authors: Deborah Tatar
Posted: Thu, July 11, 2013 - 7:42:20

Facebook recently informed me that my name is deborah.tatar.9. 

Oh.

Really?

I grew up in a sub-culture of America that believed in sending kids to sleep-away camps to get them out of the squalor of the city (“Hot town summer in the city/Back of my neck gettin’ dirty and gritty”). Consequently, as a “tween” and a teen I did a number of 8-week stints in bucolic settings in upstate New York and New England. (Allan Sherman memorialized the drama of these experiences in the novelty song “Hello Muddah Hello Faddah,” which won a Grammy in 1964 and which my grandparents played with glee on their stereo record player up until they passed away well into your lifetime, if you are reading this in ACM interactions in 2013.)

I spent one summer rooming in a cabin with 16 girls. Four of us were named “Debbie.” I was on the top bunk and another Debbie, from Queens, slept below me. This was not an entirely comfortable situation. By the end of the summer, the speed of my response when “Debbie” was called was somewhat diminished. 

But it never occurred to me—or any of us—to change our names. The most we did was replace the dot on the “i” with a daisy when writing. In fact, we all replaced the dot on the “i” with daisies, as did pretty much all other girls our age who had dots in their name. The question was not whether we did this, but rather how long it persisted. (Recently I was reviewing applications for undergraduate summer research that were all handwritten and, incredibly, one of the dots on an application from a woman was in the shape of a daisy. What was the applicant thinking?) 

In my case, I did not even change my name when I got married, though I did spend a few weeks imagining what it would be like to have a name (Harrison) that people did not find humorous and that they spelled correctly the first time. 

The only time I have changed my name was the big shift from Debbie to Deborah. That represented a deliberate effort to change my identity in my mid-20s, and I accomplished it when I moved from Massachusetts to California to work at Xerox PARC. My mother had always said that she liked Deborah as a name because it could be small when I was small and big when I was big. So the shift represented my willful attempt to see myself as a bona fide adult. There are still a few people in my professional world who knew me when I worked at MIT and at DEC and still call me Debbie. I don’t correct them because being adult isn’t actually an issue for me anymore and because the appellation now signifies the rights and privileges that appertain to long acquaintance.

The Chinese dissident artist Ai Weiwei wrote:

"A name is the first and final marker of individual rights, one fixed part of the ever-changing human world. A name is the most basic characteristic of our human rights: no matter how poor or how rich, all living people have a name, and it is endowed with good wishes, the expectant blessings of kindness and virtue."

Debbie was my name and I wasn’t going to change it just because. Facebook’s action reminds me what a meager, bare, poor thing a name is in our world of computing, and how unshared it is, how stripped of cultural meaning and how determined it is by … externalities, to misappropriate a word from economics. 

I contemplate this new name:

deborah.tatar.9

and it makes me angry. Who is Facebook to decide my name or, worse, my number? Is this the Village (“I am number 2. You are number 6.”)? Am I Jean Valjean (i.e., prisoner 24601)? 7 of 9? “Number 9. Number 9. Number 9.” in the words of the Beatles. 

Do I have a realistic choice?

The policy for generating student names at my institution is to take the first two letters of the first name and concatenate them with the last name. Thus, one of our students, Caleb Jones, was a bit startled to be given a name that in the United States is used as a euphemism for a portion of male anatomy not usually discussed more directly. Luckily, as a large, hearty, confident young man, though a devout Baptist and not given to raucous levity, he was able to treat it as a funny joke. For six years. Ya gotta suffer to be educated? 
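(For the technically inclined, the rule amounts to the little sketch below. The function name and the lowercasing are assumptions added purely for illustration; only the two-letters-plus-surname rule comes from the policy itself.)

def institutional_username(first_name: str, last_name: str) -> str:
    """A sketch of the stated policy: the first two letters of the first name
    concatenated with the last name (lowercasing assumed)."""
    return (first_name[:2] + last_name).lower()

print(institutional_username("Caleb", "Jones"))  # -> "cajones"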

I chose my current official name (dtatar) in a fraught moment when filling out a great deal of paperwork to join VT, with no forewarning that this was the name that I would have to live with for the duration, and no serious discussion of alternatives. I spent several years trying to make the alias “tatar” work, but every time there was a problem, tech support got confused. Also, I could not persuade the institution to use “tatar” publicly, meaning that I was never able to make a clean, clear self presentation. Finally, I just gave up. 

The world of human naming is very rich and full of imponderable meanings. We all know that in Spanish-influenced countries, there is both a matronymic and a patronymic (Garcia Lorca or Garcia Marquez) and it matters that the matronymic comes second and is not hyphenated. People seem well and truly situated in these cultures. I have had students from India who had to make up last names to come to the United States. A colleague from Myanmar was named for her birth year and a priest-created designation, with no indication of family connection embedded in the name. I imagine that in their birth worlds a name is not a handle. Instead, the expectation is that to use someone’s name means that you actually know them. I inherited a wonderful children’s book called My Mother Is the Most Beautiful Woman in the World, about a little girl who gets lost and the attempts to find her mother. It turns out that no one had informed the little girl of the cross-cultural universal properties of beauty that might justify the designation “the most beautiful woman in the world” and, pace recent studies in scientific psychology, the authors did not seem to think that they ought to. A colleague from the Philippines once pointed out that virtually all women are named Maria. In that context, this leads to a widespread use of notably light-hearted nicknames—I had a Filipina colleague called Gucci. These nicknames designate but they also describe. In Korea, a handful of last names, about five, cover most of the population. My Korean students just laugh (kindly) when I attempt to ask them about their naming customs. Traditionally, in some cultures, there have been secret names, known only to certain people in the family or the clan, or only to the individual themselves. And, now that they are grown, I have pet names for my children that I use only in private thought. 

Think about this wonderful richness and variety, including contradictions and inconsistencies!

My husband and I spent months deciding on our children's names. This was in part because we disagreed, but also because we thought it important. I favored Old Testament names, like my own and my family's, while my husband favored Western names (the western United States, that is) that mostly once upon a time were Irish, Scottish or English last names: Tyler, Taylor, Tanner, Tyrone. Also, in my tradition, we do not name people after the living but only after the dead, while his tradition features honoring the living. Then there were other design desiderata: the potential for malicious abbreviations, potential readings of the initials as words, and undesirable vocal properties. We considered funny names like Harrison Ford Harrison, a melding of Harrison Ford (the actor) and Ford Madox Ford (the writer) with my husband’s last name. My husband tormented me for months by dangling Elvis as a possibility. 

For our first son, exhaustive search produced exactly one name that we both liked. Then, when we realized we were having another boy, we faced the impossibility of giving our second son a once-discarded name. Imagine having to say “Yes, child, not only are you younger and smaller, but you got the leftover name. Feel wanted.” We had to generate a completely different set. Amazingly, we did. 

A few years ago, Gopinaath Kannabiran (how’s that for a name that requires an enjoyable interlude on the tongue!) wrote a note for the NordiCHI conference on identity in social networking systems. He focused on gender identity in social media, a particularly difficult problem because of its ineluctable complexity. The gulfs between being known for example as a woman, labeling oneself as one, and being called one are enormous. How much more so when gender identities cross corporately enforced categories?

My husband reports hearing Nicholas Negroponte propose that we be tattooed with identifying dot patterns at birth. It’s the logical extension of a reductionistic approach to identity. But I also imagine the reaction of my now-dead relatives to this. Some of them had tattoos from concentration camps. 

Some of the dignity of being human rests in our control over ourselves and our appearance to others. Here’s the thing: from a technocentric perspective, it looks as though labeling ourselves by name, as we do in email or on Facebook, is cost-free. Hey, we all have names, and, by happy chance, your email name can function as both a person identifier and as an operational label for the machine. Yeah! Everybody wins! 

But is this really true? As I struggle to remember which email address I used, and to what end, and how that email combines with the rules for some password that I am supposed to remember (“Please, kind system, just tell me whether you required a number and a capital letter or you are one of the ones that does not.”), and as I’m hemmed in by Facebook’s entirely arbitrary and self-serving choice to call me “9” and my friends’ and colleagues’ expectation that I have some kind of presence, I do not find these names “endowed with good wishes, the expectant blessings of kindness and virtue.” 

Efficiency with respect to the computer starts with the promise that the computer will do something for you “for free” but it evolves into pressure to do and behave in the way that the computer expects from you. There’s always a reason, but whose reason? How much effort would it actually take to enable a more complex creation of identity? 

I am, we are, reduced through these interactions and—here’s the design point—they could easily be otherwise.


Posted in: on Thu, July 11, 2013 - 7:42:20

Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.
View All Deborah Tatar's Posts


Post Comment


*

Comments submitted to this site are moderated and will appear if they are relevant to the topic.


@Pardha (2013 07 17)

Such a delightful article! Thank you Deborah. I never understood why it is so hard to change your handle after it is given to you. Aren’t computers supposed to make this sort of thing easy? After all, that handle is just a field in some database!

@Deborah Tatar (2013 07 18)

Hah! Yes.  Someone wrote to me privately and mentioned that Facebook allows you to get rid of the number—for a fee.  In other words, Facebook is monetizing our names. 

Is this ok?  It feels like we’re selling our birthrights for a mess o’ pottage.

Now I feel foolish for having thought that this move was merely neglect, as opposed to something more dire.


When A/B testing gets an F


Authors: Jonathan Grudin
Posted: Tue, July 02, 2013 - 8:33:18

A relationship is like a shark, it has to constantly move forward or it dies. And I think what we got on our hands is a dead shark. —Woody Allen, Annie Hall

Like sharks in search of their next meal, living websites constantly move forward. How do they decide where to go? Many popular sites rely on A/B testing. Different versions of a feature, layout design, or advertisement are presented to thousands of users. What people try, how long they remain, and whether they click through are logged. Although called A/B testing, more than two alternatives can be compared.

This is a modern variant of a familiar controlled usability experiment. In the 1980s, my first usability studies required weeks to recruit 10 or 20 participants and get them into the lab. Today, it can take just minutes to identify preferences with statistical reliability. The website is the laboratory and the participants are unpaid users. The winning design can be rolled out to everyone. Refinement and evolution can proceed as rapidly as designers can generate ideas. Life is good.
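To make the mechanics concrete, here is a minimal sketch of the two pieces involved: assigning each visitor to a variant, and checking whether an observed difference in click-through is statistically reliable. The user IDs, conversion counts, and the 1.96 threshold are illustrative assumptions, not details of any particular site.

import hashlib
import math

def assign_variant(user_id, variants=("A", "B")):
    """Hash the user ID so a given visitor always sees the same variant."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % len(variants)
    return variants[bucket]

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two click-through rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative numbers: 10,000 users per variant, 520 vs. 610 click-throughs.
z = two_proportion_z(520, 10000, 610, 10000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests the difference is unlikely to be chance at the 5% level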

Continuous product release

Turn and face the strain —David Bowie, “Changes”

Adam Pisoni, co-founder of the enterprise social networking service Yammer, lists companies that did not move forward and ended up dead sharks: Blockbuster, Bethlehem Steel, Tower Records, and so on. Pisoni is a forceful advocate of A/B testing. He notes that constant change conflicts with predictability, which companies traditionally relied on, but argues that today there is “this larger thing going on in business, this issue of predictability versus adaptability… As the world changes really fast, and things are changing, predictability becomes counterproductive.”

However, we’re creatures of habit. Habits can engender efficiency. A habit relies on the world having some predictability. Software designers fear the wrath of the “installed base”—a ball and chain for legacy software. A new company can have an advantage when introducing a novel interface: It won’t earn a reputation for abandoning customers.

Pisoni acknowledges the problem of a rapid pace of change: “The way we built started impacting customers in weird ways. We release constantly. We’ve always released at least once a week if not more. Customers started coming to us saying ‘man, we love how easy it is to use. However, we want you to build it differently, we want you to build it the traditional way, give us 3-year timelines and all that.’”

No can do. Pisoni’s goal is to go from weekly releases to continual release. Yammer is not alone in turning to face the strain. Facebook has used A/B testing and weekly pushes as it adds, removes, or changes features. Its 2006 introduction of the News Feed dissatisfied many existing users who saw clutter, yet Facebook weathered the storm and prevailed. A/B testing also drives the evolution of Google, Bing, Amazon, and other Web products.

 A/B testing works best if you know what to measure. Who knows better than advertisers? They can determine which design increases click-through or purchases. For them, A/B testing gets an A. Well, it may not reveal how often products are returned for refunds, or the likelihood of repeat business. Let’s make it an A-.

Hey?

The Obama team used A/B testing extensively in 2012. The subject header of one solicitation read simply, “Hey”. It surprised many people. It annoyed some, but A/B testing showed it was remarkably effective at drawing contributions. Money talks. I received nine “Hey” emails, from Barack Obama, Joe Biden, and other close friends.

The Obama campaign had one and only one goal: a majority of the electoral votes on November 6. The money contributed to reaching that goal. Of course, for most people Obama’s election was a means to another end—strengthening the Democratic Party, a progressive agenda, or something else. What was the effect of “Hey” on these goals? A/B testing optimizes for the here and now; when will local optimization end up hurting in the long term?

I was annoyed and burned out by the extraordinary barrage of email marketing late in the campaign. On November 7, I began removing myself from scores of Democratic and progressive distribution lists. I welcomed the election outcome but wanted a year or two to recover. A classic tragedy of the commons; I was the overgrazed commons. Did the pitcher go to the well once too often? I was not alone in my flight to solitude. Check with us in a couple years. A/B testing here earns an Incomplete.

When A/B testing can get an F

In the May 14, 2001 issue of The New Yorker, a perceptive article by Tad Friend, “The Next Big Bet,” discussed the HBO television series Six Feet Under and contrasted the radically different business models of commercial television networks and HBO. His analysis provides insight into where A/B testing of a general population can go wrong—and how A/B testing could benefit from supplemental techniques.

The networks sell advertisements. How much they can charge depends primarily on how many viewers a show attracts. Every show strives to attract the largest possible audience, with perhaps some attention to age or income. A new show that draws precisely the same audience as a very successful existing show will be very successful. Hence, there were five different Law and Order series, three CSI series, and an unending progression of reality TV shows and sports coverage.

HBO relies on subscriptions, not ads. They, too, want shows that appeal to the greatest common denominator—sports, crime series, reality shows, whatever. But once they have a strong success in a genre, an imitation may not attract new subscribers. More valuable is a novel show that appeals to a niche market that has not yet subscribed. Consider a very popular show that appeals to 30% of the potential audience. If HBO creates another show that appeals to the same 30%, it may get no new subscribers. A new show that appeals to a different 10% of the potential audience may attract millions of new subscribers. Hypothetically, if HBO had 10 shows, each with a powerful magnetic appeal to a different 10% of viewers, they might get everyone subscribing, even if no one show appealed to 30%.

This is where A/B testing alone can flounder. A/B testing on the existing user base may not detect something that will appeal to a niche that has not yet subscribed, and testing that identifies popular choices could provide six reality shows that appeal to the same 30% of the market. Each 10% niche show will lose against a 30% show, even though cumulatively the niche shows would attract more subscribers.
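A back-of-the-envelope version of that arithmetic, with invented segment labels and shares, makes the point: head to head, each 10% niche show loses to a 30% hit, yet the niche slate reaches more of the potential audience overall.

def audience_reach(shows):
    """Fraction of the potential audience covered by at least one show.
    Each show is (segment_label, segment_share); shows that share a label
    share the same viewers, so a segment counts only once."""
    segments = {}
    for label, share in shows:
        segments[label] = share
    return sum(segments.values())

lookalikes = [("mainstream", 0.30)] * 6               # six copies of the same kind of hit
niches = [("niche_%d" % i, 0.10) for i in range(6)]   # six distinct niche shows

print(audience_reach(lookalikes))  # 0.3
print(audience_reach(niches))      # ~0.6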

This is not necessarily constrained to the television world. Let’s consider Facebook and Yammer. Are they more like the commercial television model or the HBO model today? Tomorrow?

Facebook has constantly moved forward. It swims in a sea on which many dead sharks float. Supported by A/B testing, Facebook made solid decisions. Few abandoned it and more flocked to join. Like the television networks, Facebook relies on advertising revenue. It wants eyeballs. A one-size-that-fits-the-greatest-common-denominator strategy may work. If Facebook adopts design A and leaves behind the minority who preferred B, it may be OK to lose some niche participation.

Facebook does lose niche participation. I’m in one of those niches. Facebook took away the two features I liked most, so I use it less. One was a presentation feature, one was a view. My original Facebook profile listed my favorite books, in three categories; my favorite films, also in three categories; music; a set of my favorite quotations; and so on. It was a personal statement that some people noticed. Facebook removed most of it entirely. Some could be partly reconstructed in a less compact, less easily scanned format. The once-prominent quotations exist but you probably can’t find them.

My favorite view listed in reverse chronological order the most recent post by each of my friends. This was a wonderful way to catch up quickly on everyone without being bogged down by those who post minute-by-minute accounts of their trips to get a latte. For whatever reasons, this view disappeared.

A/B testing must have shown Facebook would prosper without my pride in profile and my attention. It has. My niche was small.

How will this strategy fare in the long run? Will maximizing the eyeballs delivered to advertisers succeed? Might they create opportunities for HBO-like sites that appeal to niches such as mine? I consider the possible evolution of online sites in the next section, after a look at Yammer.

Yammer links employees within an organization. It wants to attract new organizations, and is thus more like HBO. However, with A/B testing across its customer base and frequent interface changes, it is betting on a greatest-common-denominator strategy. This could be a problem if different interfaces would appeal to different companies or industries; for example, if markedly different feature sets would appeal to financial companies, medical companies, and tech companies, or if cultural or regulatory differences would affect feature preferences. Within a company, A/B testing could miss major differences: Perhaps marketing and sales groups would flock to something very different from design and engineering. A/B testing could favor the preferences of the more numerous young, adaptive individual contributors, but the niche comprising executives and managers who desire slower, more predictable change could be significant for an enterprise service.

We don’t know—it is early days. But assume that Pisoni’s broad A/B testing delivers changes that appeal to 10% of every company. Customized interfaces would be more complex to design and manage, but they might appeal to 50% of each company. This is a classic market segmentation tradeoff. Perhaps 10% per organization is enough to sustain use and deliver on enterprise goals. But if 10% is below the critical mass to sustain use or if the goals require higher participation rates, the outcome is not so great. And even in the former case, niches might be created for competitors who provide features that appeal to more than 10% of the employees.

This is of course speculative. But the analysis suggests techniques that could supplement A/B testing to provide a more versatile process. Before concluding by discussing these techniques, let’s briefly consider the history of mass market versus niche solutions.

Market segmentation and a vulnerability of A/B testing

When a desirable product is first widely available, having it is a pleasure and owning it is status enough. Interface details are secondary. Henry Ford famously wrote of the Model T, “Any customer can have a car painted any color that he wants so long as it is black.” Ford focused on reliability and efficiency, but he was also a fanatic A/B tester, in a slower pre-Internet era. One size fits all worked well for a time, but eventually General Motors catered to the niches—those who wanted luxury, something sportier, or just a different color. It is more expensive to produce multiple brands, but General Motors became the larger company. Similarly, indistinguishable Timex watches and black telephones were immensely popular, but eventually Swatches and a competitive phone market thrived on personalization.

Differentiation and personalization are in our nature. Our prehistoric ancestors developed different cultures and languages. They ornamented themselves. For a time, having a Facebook profile was a personal statement. When everyone is a Facebook member, more complex market segmentation will inevitably become important.

A/B testing will not necessarily mislead or cease to contribute, but it won’t be enough to earn an A and its affordances could be unfortunate. Rapid change works best if users do little customization. The more variation, the more a product becomes a platform, the messier change can be. My highly customized profile was blown away by Facebook changes. When individual contributors and managers prefer different interfaces, as they often do, a change can disrupt one or both. A/B testing in practice pushes gently toward “any color as long as it is black.” But cultures, organizations, and individuals like to customize.

Supplementing experimental approaches

I recently visited a school in which students used a particular device. They told me what they would like changed. Weeks later, I was having dinner with an employee of the company that made the device and suggested they visit the school. “We won’t do that,” she said with a rueful laugh, “we just do A/B testing.”

At the point where market segmentation becomes significant, you have to get out in the field to identify the segments and learn how they work. Today, with technology supporting our lives in ever finer detail, understanding the subtle effects requires getting out and looking closely. This is a golden age for quantitative exploration, for big data, and it is also a golden age for qualitative exploration. Qual and quant enthusiasts sometimes regard one another with suspicion, but individuals or companies that learn to use them together will win. Quantitative data can suggest where to look in depth; qualitative data will provide hypotheses about what is happening that quantitative data can then confirm, refute, or refine. A/B testing applied within market segments (sketched below) can deliver the power to determine whether different interfaces are needed or whether one—and which one—will suffice. As a partner, A/B testing could be back on track to getting an A. A/B testing that is not informed by the big picture, that is not supplemented with strong qualitative research, could get an F.
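Here is a sketch of what that segment-aware analysis might look like, with invented segments and numbers: pooled across everyone, design B wins, yet one segment clearly prefers A, which is exactly the kind of signal a single aggregate A/B test hides.

# Segment-stratified A/B results: {segment: {variant: (conversions, users)}}.
# The segments and counts are invented for illustration.
results = {
    "individual contributors": {"A": (900, 10000), "B": (1200, 10000)},
    "managers": {"A": (300, 2000), "B": (220, 2000)},
}

def rate(conversions_users):
    conversions, users = conversions_users
    return conversions / users

for segment, variants in results.items():
    winner = max(variants, key=lambda v: rate(variants[v]))
    rates = {v: "%.1f%%" % (100 * rate(cu)) for v, cu in variants.items()}
    print(segment, rates, "-> winner:", winner)

# Pooled: A converts 1,200/12,000 (10.0%) and B converts 1,420/12,000 (11.8%),
# so B wins overall even though the manager segment prefers A.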

You better start swimmin' / Or you'll sink like a stone / For the times they are a-changin' —Bob Dylan

This post benefited from discussions with and ideas from Gayna Williams, and from an exchange with Michael Bernstein. Adam Pisoni material is from the cited link and a keynote that is not available online, used with permission.


Posted in: on Tue, July 02, 2013 - 8:33:18

Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.
View All Jonathan Grudin's Posts


Post Comment


*

Comments submitted to this site are moderated and will appear if they are relevant to the topic.


@Jonathan Grudin (2013 07 02)

The Wikipedia article on A/B testing identifies major companies known to use the technique, ranging from Amazon & BBC to Walmart & Zynga.


Enterprise users just want to have fun!


Authors: Monica Granfield
Posted: Fri, June 28, 2013 - 8:27:35

The bounce of the screen on the iPhone is so much fun that new users often keep pulling the screen down again and again, just to see the action happen. Animation in UX design—it’s not only fun, it's functional! Motion graphics bring even more depth and life to an interface, and more and more great animation is being produced. It’s a good time to be an animator, as animation is no longer limited to games and is blooming right into our everyday UX world!

Gamification, another fun yet functional aspect of UX design, is gaining momentum too. Sites like LinkedIn make it fun to recommend a colleague and to chart and measure your own impact on the site. You can monitor how your skills are being measured by your connections, see how often others have viewed you, see how many times you have appeared in a search, and see the strength of your profile and where your connections are strongest. There is so much interesting, fun, and useful information to discover about your LinkedIn presence. 

In a newly released book on gamification, Gamification at Work: Designing Engaging Business Software, Janaki Mythily Kumar and Mario Herger explore the idea of gamification in business and in the enterprise in a refreshingly practical, yet fun and engaging way. They use a process called Player Centered Design, which incorporates the notion of engaging the user. Player Centered Design puts the player at the center of the design and development process and surrounds the user with the concepts of motivation, mechanics, and mission. Beyond that there are the concepts of monitor, measure, and manage. They get creative and challenge the idea that simply adding points and badges to business applications is enough to gamify a product.

This has made me think of all the fun words that now reference the user experience: "games," "play," "engaging," and "fun." Wow, it sounds fantastic and very motivating; I want to use that product! However, in my many years of observing business users for research purposes and designing for the enterprise, never have I heard a user utter anything close to "I love this app, work is so much fun!" And I began to wonder, why not? 

Thinking back almost two decades now, there was one enterprise application I worked on where one task in the experience was actually cool and fun. Even though this was a process manufacturing application, we were able to integrate UX that would engage and excite the user. Who would think anyone could have fun in this domain? But you can, and we did.

So I am wondering: why are we not having more fun in the enterprise? I look around and I see technology that clearly provides great power and performance for graphics and animation, increased use of motion design, and now the idea of introducing gamification to the enterprise and business sector. With all this technology, there is no reason we can't build more engaging experiences. There is a way to make practical interactions fun and functional, and there is no good reason not to. Imagine a user going to define a lifecycle or a business process and being excited because it is easy, straightforward, and FUN! Imagine making the mundane just a bit enjoyable. 

Dana Chisnell discusses the idea of happiness in the UX Magazine article "Beyond Frustration: Three Levels of Happy Design." She boils the idea of a "happy design" down into three concepts: mindfulness, flow, and meaning. Mindfulness is pleasing and predictive and builds awareness and positive emotion; flow is the idea of being so immersed in something that time flies and nothing else exists while engaged in the product; and meaning is the contribution and value that is added by using the product. She states that these are all interconnected and can be used to bring "happiness" to a user experience. These concepts nicely describe and make tangible the concept of a happy, pleasing, and engaging user experience. Rather than hearing users speak of an experience as frustrating, annoying, and confusing, we can move forward with these words to create a pleasing and exciting experience that allows users to easily and enjoyably reach their goals.

In its November 2012 press release, Gartner predicts that "by 2015, 40% of Global 1000 organizations will use gamification as the primary mechanism to transform business operations". In the same report, they also predict that "by 2014, 80% of current gamified applications will fail to meet business objectives, primarily due to poor design." 

So the next time you begin working on a new enterprise-level design, don't forget to spread the joy and bring a little fun into the everyday for your users.


Posted in: on Fri, June 28, 2013 - 8:27:35

Monica Granfield

Monica Granfield is a user experience designer at Symbotic. The views expressed on this website are her own and do not necessarily reflect the views of Symbotic.
View All Monica Granfield's Posts


Post Comment


*

Comments submitted to this site are moderated and will appear if they are relevant to the topic.


No Comments Found


Hair of the Monkey King


Authors: Tek-Jin Nam
Posted: Wed, June 19, 2013 - 7:31:48

Technologists are like magicians. They make dreams reality. In mythical stories, magic is the art that makes the impossible possible. Seven-League Boots is the art that contracts space. People using the spell can travel a thousand miles in one step. Clairvoyance is the power that can enable a person to see a scene from a distance. The powers of listening to sounds from a far distance or knowing other people’s minds are examples of the kind of imaginary magic that is often introduced in stories. A mirror or a stone ball that predicts the future is a magical object that people desire.

Technological developments have transformed the magic of the past into what ordinary people can enjoy now. The art of contracting space, the power of traveling fast, is possible with automobiles. People can fly with airplanes. Smartphones made it possible to see or listen from a far distance. If people from the past were to come to the present, they might be surprised by weather forecasting, as it is a kind of art of predicting the future. Other predictions become possible by big data analysis. Massive data generated by people can be used to understand what people want. Data analysts are able to use the spell of knowing people’s minds. All these magical powers become possible through science and technology advancements. If modern scientists or technologists had lived in the past, they would have been treated as magicians. The movie The Prestige deals with the story of magicians from the early 20th century. When they search for new magic, they go to Tesla. The movie describes the technologist as a true wizard who can actually do what magicians pretend to do.

I often ask myself: if I were allowed to choose only one among the numerous kinds of magic, which would I choose? I often ask this of other people, too. Knowing what magic other people want to possess gives us hints on what science or technology should address for the next big challenges. Some people want to live forever. That is what bioengineering or medical science tries to address. Many ancient kings had the desire to live forever, or to never get old. The traditions of entombment and embalmment are not unrelated to this desire. In the Harry Potter stories, invisibility cloaks seem to be among the most popular magical objects. I believe that identifying people’s desires is the key task of designers. So reading people’s minds would be useful magic for designers.

Among the many wizards or magicians appearing in stories, I consider the Monkey King (Sun Wukong in Chinese) to be the most impressive. He is the main character of the classical Chinese epic novel, Journey to the West, written by Wu Cheng'en. His Korean name is Son-Oh-Gong. The Chinese story is well known in Korea due to popular animation series based on the story. The story is about a journey by a group consisting of the monk Samjang (Xuanzang in Chinese), the Monkey King, Zhu Palge (Zhu Bajie), and Sha Ohjung (Sha Wujing) to retrieve Buddhist sutras from India. Son-Oh-Gong is the main character of the story. He is a monkey, born from a stone, who acquires supernatural powers. The monk Samjang controls Son-Oh-Gong’s magical powers by a ring on his head. Son-Oh-Gong can fly on a cloud. He carries the Ruyi iron rod, which is used as a magical weapon and whose size changes as one wishes, which is the meaning of the name. He often carries Ruyi in his ear by making it really small.

Among the Monkey King’s many magical powers and objects, his duplication magic is the one that I most envy. According to this magic, he can create himself from his hair. He can duplicate himself as long as he has hair. It is fortunate that Son-Oh-Gong is a monkey who does not need to worry about a lack of hair. Here is a joke: One day Son-Oh-Gong had to fight with 100 monsters. Son-Oh-Gong instantly put out his hairs and created 99 self-avatars. Son realized that one self-avatar was particularly weak and being defeated. Angry Son-Oh-Gong asked the weak one, “Why are you so weak? You should behave like the Monkey King.” Then the duplicated avatar replied, “I am a gray hair.”

I like the duplication magic best because it seems to make other magic possible. In the movie The Prestige, two magicians compete over an ultimate trick called the Transported Man. The secret of the trick was the duplication of a body. If the original body is removed when a duplicated body is created in a different location, the process looks like instant transportation.

The duplication makes it possible to experience multiple places and times. It offers a partial way of controlling time. For me, right now is the busiest time of the year. I have to advise many students on their degree projects, help complete term projects in classes, continue research work, review papers for conferences and journals, and write a grant proposal for the next research projects. It is the time when I especially wish I had this duplication magic. I want to relax after making several self-avatars and assigning different tasks to them.

If I could create many self-avatars, I would wish to enjoy other people’s lives in parallel. Although I enjoy what I do in my university, I often envy people working in a successful company. They produce products that directly influence many people’s lives. Living like them, I want to see the people that are happy with the artifacts that I create. Meantime, I want to be an expert in other areas, such as music or sports. I want to travel to more places and meet different people. At a big conference like CHI, I could send self-avatars to multiple sessions when interesting presentations are happening at the same time. The duplication magic makes that dream possible.

Kings of the past wanted to live forever. The duplication magic makes this possible, too. If I keep a healthy avatar in a safe place, possibly in a protective time capsule, I can be fresh anytime. I have to be careful that all avatars do not disappear at once.

For the duplication magic to be a truly powerful, fundamental, and multi-purpose magic, there are several issues to be addressed. The first is whether I fully experience what my self-avatars experience at the same time and in the same way. If my avatar sleeps while I don’t sleep, do I really sleep or not? If my avatars are hurt or sick, do I feel the same? If I share the pains of my avatars in dangerous situations, I should be really careful. I may instead consider that my self-avatars experience things independently from my original self. If so, I need a process to combine the experiences of my avatars with my own sense of self. If this experience integration is not done effectively, or requires a lot of effort and time, the duplication magic would not be that useful. These issues must be considered if science and technology try to realize or mimic the duplication magic.

Recently I have been interested in adding people’s personalities to IT products or systems. I wonder if it really adds value for people. If so, what would the real impacts be? I presume that products with a user’s personality would offer more emotional interactions. People would accept the products more openly and positively. People would have a preference for artifacts that are similar to them or that carry personal traces. We often have emotional attachments to our old furniture, a leather bag, a house with personal patina. I think there is potential for IT products and systems to become like this with the addition of tangible and intangible patina.

Living with products and environments that have my own personal characteristics and know how I judge things may be viewed as a situation where those products are more like my self-avatars. This is one of my visions of future smart products: everyday objects as my duplicated self-avatars. This is different from the vision of smart objects being secretarial avatars. The products that I use are parts of myself. I imagine a TV, mobile phone, notebook, and car made of my DNA so they can think, judge, and behave like me. That is the situation where I can use the duplication magic of Son-Oh-Gong with his hair.

In such a world, many trivial things can be managed by my self-avatars. My self-avatar TV will choose what I want to watch, and record the program voluntarily. The products will process the tasks as I think. While my self-avatars deal with trivial things, I would relax and enjoy something else. I may or may not feel what my duplicated body experiences. Or there must be a solution of how I effectively integrate the experiences of my self-avatars so that I can fully share them.

How would I feel if my self-avatar products are trashed? Would it be emotionally different from when normal products disappear? Would I use the self-avatar products with higher emotional attachments longer? What would happen if such products were stolen and controlled by other people? Would we want such products regardless of such privacy and security concerns?

In his book Thinking, Fast and Slow, Daniel Kahneman explains that a person’s thinking arises from the interplay between two characters, System 1 and System 2, living in one’s mind. System 1 is fast, instinctive, and emotional. System 2 is slower, more deliberative, and more logical. He explains that the two self-characters in a person direct thoughts and cognition while influencing each other. I thought that this perspective could be applied to the vision of self-avatar products. If we could duplicate one of the two self-characters and assign it to the products or systems we design, the security and privacy issues and the experience-integration problem might be addressed.

Life filled with my self-avatars. I expect that the technology wizards will enable me to have the duplication magic of Son-Oh-Gong soon. I will use smart products, furniture, and environments that independently decide and process tasks without asking me, but they would exactly match what I intend. The future mobile phone would be my self-avatar. If everyone could use the duplication spell with the hair of the Monkey King, what would the future be like? Would it be the good, the bad, or the ugly? 



Posted in: on Wed, June 19, 2013 - 7:31:48

Tek-Jin Nam

Tek-Jin Nam is an associate professor in the Industrial Design Department at KAIST.
View All Tek-Jin Nam's Posts


Post Comment


*

Comments submitted to this site are moderated and will appear if they are relevant to the topic.


No Comments Found


HCI/UX in China: A trip report


Authors: Aaron Marcus
Posted: Fri, June 14, 2013 - 9:04:13

How are product/service human-computer interaction design and user-experience design doing in China, you ask? Well, as they say, we live in interesting times. I shall give you a personal update based on recent experiences.

I have just spent a week in China in three cities, Yangzhou, Shanghai, and Beijing, attending two design conferences in the first and third cities and viewing the building where my new Center for User-Experience Design Innovation (CUXI) plans to open in September 2013. Please let me explain.

The Dragon Design Foundation (DDF), one of the largest design organizations in China, with close connections to the Chinese government (a must for any successful organization or business in China), invited me to give presentations at the Third World Green Design Forum in Yangzhou, a “small city,” I was told, with only 4.6 million inhabitants. Surprisingly, even Chinese participants were not too familiar with the city, although it has an illustrious past as a center for theaters, poets, artists, and, presumably, designers. A fleet of people attended to our needs as overseas invited presenters. Most attendees came from China and Europe. There were about 700 people from 20 countries. Members of the European Parliament attended.

The conference presentations on May 28 and 29, 2013 seemed to focus on a wide variety of topics, primarily on international standards, urban-scale projects, architectural projects, industrial products, new 3-D printing technologies, intellectual property and investments, and projects dealing with rural development, which seemed appropriate for a nation for which most of the land is still rural. Although many of the presentations were interesting, they did not specifically focus on HCI/UX products/services, as I thought they might.

One of the most amazing presentations was that by Mr. Peter Woolsey, the head of a company that has invented a patented process for turning pig manure and chicken manure (China is the largest grower of pigs and chickens in the world), plus the left-over body parts from food production, into edible substances for two kinds of fly larvae, which in turn grow into massive numbers in a short time. These fly larvae, in turn, are harvested, processed, and turned into a safe, healthful, nutritious meal (“maggot meal”!), which is then fed back to the next generation of pigs and chickens. As you might imagine, a significant amount of energy is saved, and one is left with an unusual example of recycling. One can only imagine what will happen when future humanity decides it should end the “wasteful” use of cemeteries and simply let people eat the previous people as a source of nutritious, tasty, and low-cost food. Somehow, I feel there is a lesson in sustainability for HCI/UX people, and I don’t mean grinding them up for the next generation of professionals.

Although interesting, the conference did not enlighten me that much about green apps, for example, that can help save energy at home, at work, traveling, etc. I did have an opportunity to present our own Green Machine project, which was a useful opportunity to discuss these issues.


Author in his “tribal hat” (actually his CHI 1999 “Sci-Fi at CHI” panel hat) that he used for his DDF presentations.

There are conferences that focus on HCI/UX developments in China.

One is the User-Experience Professionals Association (UXPA) conference, also called User Friendly. The last one was held in Beijing in November 2012. I was fortunate to be able to give a keynote lecture about UX in science-fiction movies and television over the past 100 years (the subject of my CHI 2013 tutorial) to about 700 people. The presentations and attendees came from many major sites of advanced development in China as well as abroad. The next conference is scheduled for November 2013 in Shanghai. Now that is a conference for HCI/UX.

There are also the Asia Pacific CHI (APCHI) conferences that bounce around Asian locations. They have taken place in China, but not consistently. I was also fortunate to present my sci-fi tutorial at the APCHI 2012 conference in Japan. The attendance was primarily Japanese. The next, APCHI 2013, will take place in Bangalore.

Another source of HCI/UX developments is the SIGGRAPH Asia conference, which has taken place in Hong Kong in the past (2011) and is scheduled again for that city in November 2013. I have found it to have extremely interesting and exotic HCI/UX exhibits. It was in 2011, I believe, that I discovered a Japanese R+D project to create knives and forks with sensors and sound displays that enabled food to become “musical.”

The second DDF conference, called the Dragon Design Festival, took place on May 31 and June 1. This conference was much more oriented to teachers, researchers, and professionals reporting on their current projects and curricula. Unfortunately, again, there was minimal HCI/UX content in most of the presentations. This situation seems to suggest a disconnect between the high-tech developers and the regular Chinese design community. Most of the product-design presentations were really more industrial design than device-HCI design. This low level of content was disappointing, and I hope that the DDF/DDF will feature more HCI/UX content in the future. One of the major components, and more interesting aspects, of the DDF/DDF conference featured the planned development of a “Design Valley” in south Beijing, similar to other large urban developments rising in Shanghai, Hong Kong, and other locations in China. These government-supported centers are seeking to create their own combination of Silicon Valley and design centers at a scale and speed that is unheard of in the US. One of five complexes in the Beijing site features six multi-story buildings that will house 2,000 high-tech companies.

I did have an opportunity to present at the DDF/DDF conference my own plans for my new Center for User-Experience Innovation in Shanghai, being funded by the De Tao Group, on the campus of the Shanghai Institute for Visual Arts. I plan to provide a year-long “executive user-experience master’s course,” like an eMBA, to Chinese professionals, executives, or students wishing to learn “all there is to know” about HCI/UX in one year, as well as frequent one-week short courses for US and European executives and professionals who would like to learn about mobile UX design and the China context in a short time and have Shanghai as the venue. The CUXI will also carry out UX design, research, and evaluation projects for Chinese companies or foreign companies having or interested in developing China UX offices. One surprising result from my brief presentation about the CUXI was that a developer of one of the high-technology centers came up to me and stated that what I was doing in Shanghai was exactly what he needed in his own center, and he needed it now! He even arranged for me on the spot to meet with the regional government representative who must authorize and permit all such activities. That was a fortunate and productive moment at the conference.

Recent studies of HCI/UX professionals in most high tech companies in China show that the professionals are young and lack the years of experience that their peers in other countries have. Universities and institutions like CUXI are trying to help them catch up. It is clear that China is making a strong, concerted effort to ensure that future high-tech gadgets and apps are not only made in China, but designed in China. Stay tuned...


Posted in: on Fri, June 14, 2013 - 9:04:13

Aaron Marcus

Aaron Marcus is president at Aaron Marcus and Associates, Inc. (AM+A) in Berkeley, California.
View All Aaron Marcus's Posts


Post Comment


*

Comments submitted to this site are moderated and will appear if they are relevant to the topic.


No Comments Found


Color and user experience


Authors: Ashley Karr
Posted: Thu, June 13, 2013 - 7:48:11

Takeaway: Proper use of color can enhance the user experience of any design, as color affects humans psychologically, physiologically, and emotionally.

Emerald is: “Lively. Radiant. Lush. A color of elegance and beauty that enhances our sense of well-being, balance, and harmony.” Pantone named it 2013’s Color of the Year. I am glad that lively, radiant, lush elegance, beauty, and balance are harmonizing my experience during the 365 days that comprise 2013. However, this is a bold statement, just like the color. Can color really do all of this? The answers are yes and sort of. Please read on for further explanation.

Psychological effects of color 

Color can augment memorization, recall, and recognition. In interactive designs, color can suggest categories and give identity to chunks of information. This can create a design that is more efficient, clearer and easier to understand, easier to learn, and easier to navigate. 

Physiological effects of color 

Colors affect our nervous systems. Research shows that, for example, bright reds stimulate our sympathetic nervous system, resulting in physiological changes such as an increased heart rate. In contrast, soft blues and greens create the opposite physiological effect and help us relax.

Emotional effects of color

Colors themselves and the meanings we attach to them affect our emotions and moods. For example, most people associate the color yellow with feeling happy and energized. On an individual level, a person could associate the color yellow with the color of their home during childhood, which invokes fond memories and pleasant feelings.

Cultural context of color

Remember that meaning in general is culturally constructed. Sensitivity to the cultural context and meaning of color within your user group is important. The following is a common example demonstrating the cultural implications of color. In many Western cultures, black represents death. In some Eastern cultures, white represents death. How this will affect one’s design and user interface decisions is up to the design team; however, it is important to remember that we always operate within a cultural context. Our users do, too.

Quick guide to color and meaning from an American perspective

Red

  • Increases blood pressure, heart and breathing rates
  • Stimulates the adrenal and pituitary glands, which can temporarily increase strength and stamina
  • Represents vitality, ambition, and passion
  • Can dispel negative thoughts
  • Associated with anger, danger, indebtedness and irritability

Pink

  • Induces feelings of relaxation, tranquility, warmth, and protection
  • Reduces feelings of aggression and irritation
  • Associated with nurturing, selfless, generous love
  • Light, soft pinks are associated with femininity, while bolder, hotter hues suggest youthful, fun energy

Orange

  • Stimulates digestive and immune systems
  • Associated with energy and vitality
  • Younger audiences respond to bold oranges, while older and upscale audiences respond positively to softer hues
  • Has positive effects on mood and can act as an antidepressant

Yellow

  • Stimulates the brain, creating alertness and energy
  • Activates the lymphatic system
  • Happy, optimistic, confident, and uplifting
  • Associated with the intellect, organization, discernment, memory, clarity, decision-making, and good judgment

Green

  • Brings equilibrium and relaxation, feelings of comfort
  • Helps us breathe deeper and slower
  • Suggests nature, peace, well-being
  • Deep shades suggest wealth
  • Represents environmental friendliness
  • Particular shades of green, such as olive, can represent illness and nausea 

Blue

  • Lowers blood pressure, has a cooling and soothing effect
  • Deep blue stimulates the pituitary gland, which regulates sleep patterns, and is associated with calm, restful nights
  • Inspires mental control, clarity and creativity
  • An overuse of dark blue can be depressing
  • Suggests the sky and ocean

Purple

  • Violet suggests purification, cleansing, peace, and balance
  • Combats shock and fear
  • All hues help with mental and nervous disorders
  • Stimulates compassion, intuition, and imagination
  • Associated with the right side of the brain
  • Relates to sensitivity, beauty, and idealism
  • Associated with royalty and nobility

Brown

  • Suggests earth and home, stability, and security
  • Can also suggest dirtiness or retreat and isolation from the world

Black

  • Comforting and protective
  • Mysterious, suggests silence and death
  • Can also be considered sleek and fashionable

White

  • Purity, clarity, peace, and comfort
  • Suggests freedom, although too much can be considered cold and isolating

Gray

  • Suggests independence and self-reliance
  • Can be a negative color, suggesting evasion and non-commitment, separation, lack of involvement, loneliness

Color use restrictions

  • Overuse of color creates clutter and confusion. Find one color for your background, one that represents your brand or message, and two complementary yet contrasting colors that can act as indicators for active links, hovering, and visited links. This gives a basic palette of four colors; any additional color should be chosen with care.
  • Underuse of color results in a dull design lacking in interest and meaning. It can also result in confusion. Imagine trying to find a text-embedded link that was the same color as the surrounding words!
  • Improper use of color at worst can cause great offense. Remember color carries the weight of meaning, and this meaning is always wrapped in cultural contexts. Be aware of these meanings and use them, and their colors, with respect and purpose.
  • Color blindness affects roughly 8% of the male population. Keep this in mind as you are choosing contrasting colors. If the colors you choose to serve different functions in your design do not provide a stark enough contrast, a sizeable portion of your user group could be negatively affected by your choice (see the contrast-checking sketch after this list).
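
To make the contrast point concrete, here is a minimal Python sketch, assuming a hypothetical four-color palette (the hex values are placeholders, not recommendations), that computes the WCAG 2.0 relative luminance and contrast ratio of each foreground color against the background:

# Minimal sketch: check a small palette against the WCAG 2.0 contrast formula.
# The palette below is hypothetical; WCAG AA asks for at least 4.5:1 for body text.

def srgb_to_linear(channel: float) -> float:
    # Linearize one sRGB channel given in the 0-1 range (WCAG 2.0 definition).
    return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    # Relative luminance of a "#rrggbb" color per WCAG 2.0.
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
    return 0.2126 * srgb_to_linear(r) + 0.7152 * srgb_to_linear(g) + 0.0722 * srgb_to_linear(b)

def contrast_ratio(foreground: str, background: str) -> float:
    # Contrast ratio between two colors, always reported as lighter:darker.
    lighter, darker = sorted((relative_luminance(foreground), relative_luminance(background)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Hypothetical palette: background, brand color, link color, visited-link color.
palette = {"background": "#ffffff", "brand": "#2c5f2d", "link": "#1a0dab", "visited": "#6a1b9a"}

for name, color in palette.items():
    if name != "background":
        ratio = contrast_ratio(color, palette["background"])
        verdict = "passes" if ratio >= 4.5 else "fails"
        print(f"{name} on background: {ratio:.2f}:1 ({verdict} WCAG AA)")

A quick check like this, or a tool such as the Colour Contrast Analyser mentioned in the comments below, is a cheap way to catch color pairs that a sizeable portion of your users would struggle to tell apart.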

How is color important in user experience?

Remember that user experience is overarchingly affective. Both objective and subjective evidence supports the concept that color affects humans psychologically, physiologically, and emotionally. Importantly, these effects come wrapped in cultural contexts. This means that the reactions that color evokes in us can change depending on the culture or cultures in which we were raised, currently reside, or are currently acting as a user. Selecting and using color with thought, purpose, and care can enhance the user experience. We would love to hear your experiences with color use and choice in your designs. Please write your comments below. Until next time, please enjoy the experience.



Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.

@Nick Fine (2013 06 25)

Where are your references?  Much of this is regurgitated popular psychology without any evidence to support it.

@Benoît Larivière (2013 07 04)

Thanks.
I would add also that a tool such as Colour Contrast Analyser (Paciello Group) is helpful to ensure a proper legibility of the text.

@Steve Dolan (2013 07 14)

Great read! I wanted to add something I learned recently: When designing, choosing the style of your hyperlinks so that they easily stand out is important. I’m talking about beyond just changing the color, but also bolding and adding an underline makes a difference. The concept is, if you desaturate and blur your design, you want to still be able to tell where a link is located in your text.


A slow triangulation


Authors: Jonathan Grudin
Posted: Tue, June 04, 2013 - 10:47:16

In the mid-18th century:

"Does Britannia, when she sleeps, dream? Is America her dream? - in which all that cannot pass in the metropolitan Wakefulness is allow'd Expression away in the restless Slumber of these Provinces, and on West-ward, wherever 'tis not yet mapp'd, nor written down, nor ever, by the majority of Mankind, seen,- serving as a very Rubbish-Tip for subjunctive Hopes, for all that may yet be true,-Earthly Paradise, Fountain of Youth, Realms of Prester John, Christ's Kingdom, ever behind the sunset, safe till the next Territory to the West be seen and recorded, measur'd and tied in, back into the Net-Work of Points already known, that slowly triangulates its Way into the Continent, changing all from subjunctive to declarative, reducing Possibilities to Simplicities that serve the ends of Governments,- winning away from the realm of the Sacred, its borderlands one by one, and assuming them into the bare mortal World that is our home, and our Despair."  —Thomas Pynchon, Mason & Dixon

Geography and monopoly

Everywhere we look, geography and monopoly align: in ecology, linguistics, economics, cuisine, and, I will suggest, our conceptions of innovation and creativity. And with transportation and digital technologies breaking down geographic boundaries, creating the long-anticipated global village, the potential range of monopoly is extended. Diversity is almost paradoxically more visible and more threatened.

In ecology, the competitive exclusion principle holds that one species will achieve control over a niche. However, physical barriers—oceans, mountains, deserts, jungles—enable different species to evolve in similar niches. When a barrier comes down—a land bridge forms, specimens hitch rides on floating logs, ships, or planes—competition ensues, and only one species survives.

Which country has the most languages? If you don’t know, you won’t guess. Papua New Guinea, most experts agree, with about 800 distinct languages. With its valleys isolated by jungles, mountains, and bellicosity, linguistic inventiveness flourished in each one. The runner-up, Indonesia, is an archipelago. As transportation and communication technologies overcame geographic barriers, linguistic diversity dropped. As the planet’s population doubled and tripled, the number of languages was halved! Every week or two another language disappears. Surviving languages evolve more slowly than in the past, inventiveness curtailed by grammar books, teachers, and copy editors everywhere.

Geographic isolation also facilitates economic monopoly. Farmers lured to the American west by a railroad company were dependent on the railroad to reach markets, so the robber barons could charge “all the traffic will bear.” Isolated mining communities had only the company store, which set prices that effectively enslaved the laborers. Big government evolved to control such monopolists, but the geographic metaphor endures: Warren Buffett looks for businesses with a “moat,” a non-geographic barrier to competition that enables them to raise prices and increase profits.

Monopolies are a natural development. One species occupies a niche. When an isolated culture contacted “the outside world,” adopting a dominant language was a path to extensive cultural, medical, industrial, and scientific lore.

Monopolies can be efficient, but there are downsides. Innovation may decline—less competition and natural selection, less diversity. Economic regulation might help—when its profit was controlled, AT&T set up Bell Labs. The company had largely overcome U.S. competitors but was contained by the oceans. Other countries developed telephone systems. Eventually AT&T was broken up, the oceans were crossed, innovation and competition ensued, and the less efficient evolved or disappeared. With fewer global players remaining, we move toward a new monopoly, as happens when isolated and relatively static species and cultures come into contact.

Although some intrepid critters made it over mountains or floated across oceans, geographic barriers generally came down in geologic time until Homo sapiens arrived. Today those barriers are effectively gone. We have achieved the global village. It is great, of course, but the consequences of the true disappearance of frontiers are only starting to be understood. When I was growing up in a small village in the Midwest, a tension existed between the individual and the community—I had limited privacy but tangible benefits. Today, we don’t know the other global villagers intruding on our privacy, and the benefits are usually less tangible. As Pynchon noted with mild foreboding, our dreams are disrupted more deeply than we know.

Creativity and innovation

There's nothing you can know that isn't known
Nothing you can see that isn't shown
—John Lennon, “All You Need Is Love”

For most of our existence as a species, geographic isolation afforded monopoly protection to inventors, artists, and writers, just as it did to languages and species. An invention that was novel in the cave or town was valuable even if it had previously been invented in a thousand other places across the planet. Word traveled slowly if at all. If your neighbors used it, you were in business. Similarly, the most creative poet, songwriter, or storyteller in a village was appreciated, as was the best healer, strongest athlete, and most skilled hunter or gatherer.

In the 17th and 18th centuries, as population became more concentrated, patents and copyrights in the modern sense were devised to bestow a limited-duration monopoly. They originally had a limited geographic range, applying to a nation or even a single city. Today, the monopoly bestowed by patents and copyrights to reward innovation is often global.

Not anymore. The Internet and YouTube can help inventors but on balance are not their friend. The best local storyteller vies with storytellers everywhere. An inventor vies with inventors everywhere. We have access to everything for inspiration, but when one of our six billion potential competitors beats us to the punch, our achievement becomes yesterday’s news.

Creativity is defined in different ways, but in the sense of inventiveness, technology has rendered creativity more difficult and less important with each passing year. When writing supplanted word of mouth for passing down knowledge, we competed with dead people. Today we compete with billions of others to be first or best. If you don’t invent and market it fast, someone else will.

Things will be invented, because people are inventive. We may be naturally selected for inventiveness. Resourceful and creative individuals improved the odds of a small, isolated community thriving.

Obsession with creativity

Paradoxically, as originality declined in significance, our interest in it grew. In an entertaining interview, Austin Kleon notes that the concept of originality is “kind of an invention of the nineteenth century,” when geographic barriers to communication crumbled rapidly. People may have realized that local inventiveness mattered less and looked more broadly. With less personal acquaintance, the inventor and the process of invention were dissociated. But originality was less prevalent than they imagined. In The Act of Creation, Arthur Koestler documents advances of the 19th and 20th centuries that were credited to individuals but were “in the air,” widely discussed before someone received the credit.

Now the final barriers are down and handwringing about declining creativity is everywhere. Issues of Fast Company regularly trumpet the methods of “creative people.” NSF initiated a CreativIT program. Amazon lists over 50,000 books with “creative” or “creativity” in the title. Discussions of education often focus on fostering creativity. It seems an unconscious response to the increased difficulty of being truly original. A good idea occurs to me, and with a search engine I can probably find it already enunciated several times over. Bad ideas too. If we do not realize that technology has shifted the playing field, we will conclude that we have lost something—but what we lost was the perception of originality.

The obsession with creativity has consequences. Graduate students in my field are often inclined (or pushed) to build a novel system. What is novel when they start is often less so when they finish, given the endeavors of hundreds of thousands of other students, garage startups, and industrial teams trained at good places. This can lead to a frantic search to identify something in the research that can be claimed to be original and, in some cases, to anxiety and depression.

Avoid competing with 6 billion people

We tend to associate creativity with novel invention. Some cultures place more value on synthesis, on combining familiar elements in beneficial new ways. Each piece was perhaps “not invented here,” but the composite was. A useful synthesis is mindful work, even when it has no novel element. The value of synthesizing disparate familiar items is recognized in literature, with its famously small number of core themes. A nice illustration of this concludes an elegant interview with Kenneth Goldsmith on “uncreative writing,” in which he says some of what I write here. (Having found the interview after completing a draft, I have not followed his humorous prescription for plagiarism, perhaps a topic for another post.)

If despite this cautionary note you (or your advisor or employer) insist on undertaking something that appears to be novel, don’t assume it will turn out that way. Due diligence in looking for antecedents is good—and a search engine covers more ground more quickly than when library stacks were involved—but don’t wait to get started with the project. Constantly monitor the broader context and be ready to react when something similar appears. A savvy manager told me that she is less likely to ask employment-seeking students what was original in their work than to ask how they adjusted to external changes that came along as they were on their journey.



Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


Are you trying to solve the right problem?


Authors: Richard Anderson
Posted: Tue, May 28, 2013 - 9:44:25

I just looked through the variety of graphical depictions of the human-centered design process that I show to and discuss with my master’s degree students during the first class of the semester. Sure enough, none of them includes a step often called reframing or a step that obviously includes reframing. Hmm... Does the design process followed by many fail to include that activity?

I think it does. Hence, I think many designers spend a lot of their time trying to solve the wrong problem.

This is not a good thing.

Fortunately, I've been increasingly running into references to the importance of reframing. At the recent Healthcare: Refactored conference, Aza Raskin argued that we are still solving the wrong problem in healthcare and that part of the solution is to "re-ask the problem; reframe it." At the recent Healthcare Technology Forum Innovation conference, Gavin Newsom argued, "If you don't like the answer, ask a different question," and Dennis Boyle argued similarly but went further: "Ask whether you are solving the right problem; understand those whose problems you are trying to solve."

In a recent Fast Company Design post entitled "How Reframing a Problem Unlocks Innovation," Tina Seelig describes different ways to accomplish reframing, including the way referenced by Dennis Boyle:

"At the Stanford d.school, students are taught how to empathize with very different types of people, so that they can design products and experiences that match their specific needs. When you empathize, you are, essentially, changing your frame of reference by shifting your perspective to that of the other person. Instead of looking at a problem from your own point of view, you look at it from the point of view of your user. For example, if you are designing anything, from a lunch box to a lunar landing module, you soon discover that different people have very diverse desires and requirements. Students are taught how to uncover these needs by observing, listening, and interviewing and then pulling their insights together to paint a detailed picture from each user's point of view."

In his new book, Interviewing Users: How to Uncover Compelling Insights, Steve Portigal writes:

"Interviewing customers is tremendous for driving reframes, which are crucial shifts in perspective that flip an initial problem on its head. These new frameworks (which come from rigorous analysis and synthesis of your data) are critical. They can point the way to significant, previously unrealized possibilities for design and innovation. Even if innovation isn't your goal, these frames also help you understand where (and why) your solutions will likely fail and where they will hopefully succeed."

When trying to solve "wicked" problems, (transformational) innovation often needs to be your goal. Hence, reframing is essential, as argued by Hugh Dubberly and others in an interactions magazine article entitled "Reframing Health to Embrace Design of Our Own Well-being." In a BayCHI presentation of some of this material, Rajiv Mehta and Dubberly argued that failure to reframe is why most healthcare apps developed to date have had only a modest impact.

Who is responsible for design might partly explain the frequent failure to reframe. Aza Raskin (see earlier reference) says "the problem in healthcare is that design is now mostly in the hands of medical gurus." Don Norman has argued that "engineers and MBAs are fantastic at solving problems, but they aren't any good at making sure it is the right problem."

Regardless, if "reframing the question" is "one of the most important principles in design" (as Tim Brown argued recently), then we should do a particularly good job of teaching it and making sure it is done. That includes representing it in our graphical depictions of the human-centered design process.

I know of at least two such graphics that include it. It is present—if not called out explicitly—in "Define" in the following process taught by the Stanford d.school:

And it appears explicitly on a poster I stumbled across recently at an event held at the Institute for Creative Integration:

Clearly, I need to add these two depictions to the collection that I share with my students.




Richard Anderson

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.


Why I lie to my kids


Authors: Bill Gaver
Posted: Wed, May 22, 2013 - 11:52:21

When she was about five, I told my daughter Emmie about a girl named Priscilla who swallowed toothpaste instead of spitting it out after brushing. She did this every day, week after week and month after month, because she liked the taste of toothpaste. After some time, however, Priscilla found that her joints were getting stiff. A little while later, she stopped being able to move, and her skin turned pale and unhealthy-looking. Finally, she became completely rigid. People were horrified to find that, when they peeled off the thin layer of moldering skin that remained, all that was left of Priscilla was a life-sized statue made of hardened toothpaste. 

It was a dreadful story. And the beauty of it was, Emmie believed me! She did—she honestly didn’t know whether I was telling her the truth, or making it up! It was great. 

Obviously, I told this far-fetched story—or rather, it told itself as I was helping Emmie brush her teeth—to make a worthy rhetorical point. I tell my kids lots of lies like these, often to moderate their tendency to see every passing setback or emotional upset as a world-shaking event. If they carry on over some minor hurt, for instance, I will routinely start making preparations to take them to the hospital, ignoring them when they protest. If they whine about how far we have to go, I carefully explain that we’ll be breaking for dinner soon, then finding a place to sleep, and that we should arrive just after lunch the next day—piling on details even as the bus is pulling up to our stop. And there used to be no better way to cheer up a grumpy toddler than to tell her that it was “No Smiling Day” and that absolutely, under no circumstances, should she dare to let anybody see her looking happy. 

But I also tell lies to my kids because I can. Kids are pretty easy to fool with the most outlandish things. Think of it from their point of view: The world is a strange place, and if you don’t have much experience of it, who knows what might be true? I like to take advantage of this naiveté. For instance, during one holiday I used the remote-control key fob to convince my kid that she could lock and unlock our hire car’s doors by waving her hand. This became such a routine that when I reminded her to lock up one night as she trudged exhaustedly toward our room, she finally gave a begrudging, over-the-shoulder wave that conveyed eloquently her view of this small but unwelcome chore. I’m not sure, now, whether she really believed she had to lock the car, or was just reluctantly humoring us, and I’m not sure if it matters. Either way, it’s a memory to treasure. 

It’s fun to see whether you can carry off the Big Lie: for instance, that the even numbers were only invented in the 1970s; or that the kids used to have an older sister named Anesthesia, but she was carried aloft by her umbrella one day at the beach and is now reported to be living in Sweden; or that their mother and father are aliens from a distant star system who adopted them to help pass as humans. Stories like these are surprisingly easy to put across—you just need to sound matter-of-fact, and add lots of incidental detail.

Sometimes people look aghast when they hear about the things I tell my children. Every once in a while, I even feel a twinge of guilt when I see myself through other parents’ eyes. But on the whole, I think lying is not only fun for me, but probably good for the kids as well. For a while, they get to experience a different variation on reality. It allows them to entertain new ideas and possibilities, to work out the implications of unusual premises, and to distinguish the impossible from the merely improbable. And I always relent in the end, when they demand to know whether some outlandish story is true. Contrast this with the “traditional stories” we tell about Santa Claus or the Tooth Fairy: Far from rewarding their debunking, after kids begin to have doubts we coerce them into complicity (“Children who don’t believe in Santa Claus don’t get any toys”) and even encourage them to join the conspiracy (“Don’t spoil it for your younger sister”). Teaching kids to deny their perceptions in the face of power may be a valuable life-long lesson, but I’d prefer to exercise their skepticism toward authority. 

Given how many people try to sell us stories on a day-to-day basis—politicians, advertisers, even HCI researchers— it seems like a good idea to encourage a questioning attitude. So I’m pleased when, nowadays, my kids complain proudly that “papa is a liar,” and even more so when they play me back. For instance, when they hurt themselves, I used to distract them by asking them how many fingers I was holding up, and either pretending to be concerned that they were wrong, or switching my fingers around just as they answered. It used to baffle them no end— but then they learned to counter that ploy by saying the wrong amount from the beginning. How proud I was!

So why am I writing this here? Apart from any pressing need to confess, reflecting on how I lie to my kids raises two issues. First, telling the truth might not always be an absolute good. It’s not that I think our interfaces should lie to us, necessarily, but perhaps they could spin a good story on occasion. Most systems nowadays seem bent on making life more comfortable and convenient. They show us the books “people like us” have bought, target advertisements to our demographics, and tune search results to reflect who and where we are. We can already select the political orientation of the news we encounter, and surely this will be done automatically before long. The end game seems to be to make our lives so predictable, so unthreatening, that we can doze our days away, stirring only occasionally to make the latest purchase or cast the next predictable vote. 

Perhaps the lies I tell my kids could serve as a small inspiration to design systems that make everyday life just a bit more challenging. For instance, it might add to a sense of enchantment about the world if our online maps showed an imaginary wonderland just out of sight, or if chatty emails from fictional characters occasionally appeared in our inboxes. More intriguing, perhaps, would be systems that juxtapose facts to support unusual interpretations, rather than safe ones. Instead of seeing the most popular combinations of purchases, for example, we might see those that occur less frequently—not so rare as to be essentially random, but odd enough to suggest strange ways of seeing the world (“If you like interactions you should also consider this marlin spike”). Rather than focusing on the most likely truth, we could play with more unusual realities.

The idea that our systems don’t have to be so straightforwardly helpful is related to another thing I’ve learnt from lying to my kids: They can take it. So much research on children, or older people, or minority populations— actually, so much research in general—adopts an attitude of earnest sincerity, like a wide-eyed nursery school teacher tending a scraped knee. This is all meant well, I’m sure, but it tends to treat people as half-wits or weaklings. In my experience, all the people I’ve designed for, from nuns to elderly immigrants, have dealt readily with strange ideas, playfulness, and seemingly “inappropriate” material. People, including kids, may not always need or welcome our solicitousness and care. Maybe we shouldn’t tell them out-and-out lies, but we can certainly entertain them with unfamiliar stories. Not only can they handle them, but they can counter them with surprising stories of their own. Just like my kids.

(In my next blog, I’ll explain the AI algorithm that automatically generated this story as well as my last three conference papers.) 

Bill Gaver is professor of design at Interaction Research Studio, Goldsmiths, University of London.




Bill Gaver




@Jonathan Grudin (2013 06 01)

Interesting, Bill. Since I adopted almost exactly the opposite approach (they are now 11 and 13) perhaps we can do a small-n comparison down the road and undoubtedly discover it matters not at all what parents say. The question I have is why is lying and getting away with it fun for you? And with that off to see the third week of the second year of In Treatment.


UX research vs. UX design


Authors: Ashley Karr
Posted: Tue, May 21, 2013 - 8:17:20

Takeaway: We research to understand the world. We design to change the world. The user experience (UX) professional’s role in any design process exists along a continuum between researching to understand the user and designing something that changes, and hopefully improves, the user’s experience.

Defining the problem

If you get an opportunity to have a conversation with Skot Carruth, take it. He is a UX professional and principal at philosophie, a UX and product consulting firm. In addition, he is a fellow UCLA graduate, a smart man, and an all-around good person. Happily, we both stumbled upon the same phenomenon: There exists in industry a general lack of clarity and understanding regarding the role of a UX professional. On the one hand, this is unfortunate. We want everyone to know, love, and understand our chosen profession. On the other hand, this is very fortunate. It means, to paraphrase Don Norman, we are leaders in an emerging field, and we have the opportunity to spread the word. 

We enjoyed ourselves as we kicked around the questions:

  • Is UX a noun or verb?
  • Is UX a role you play or a process you champion?
  • Where does the work of a UX professional begin and end?
  • Where is the intersection of time spent and quality derived in the UX research and design process?

We did not generate sensational answers to all of these questions; however, we did celebrate when we settled on the idea that UX is a continuum—between a role and a process. On the far left, we find UX research, and on the far right, we find UX/user interface (UI) design. Skot drew a very nice picture of our hypothetical continuum. You can look at the picture and/or keep reading to discover what we uncovered during our interesting conversation one balmy Los Angeles afternoon.

The UX research & design continuum, by Skot Carruth

Defining research

To research something is to investigate it systematically. We do this in order to reach new conclusions, establish new facts, and learn as much about the truth as possible. Research also gives us a chance to find problems that we can potentially fix. We research to understand the world.

Defining design

To design something is to create the form and function of an object, system, or interaction. We do this in order to make our experience here on earth (or in space) better, safer, healthier, more comfortable, more fun—and we can create solutions to the problems we found during our research. We design to change the world.

Contemplating the UX job description continuum

As I mentioned previously, Skot and I decided that UX was a continuum. This continuum begins with UX research and ends with UX/UI design. This may explain why there is some confusion regarding what educational background a UX professional should have, what role they should play on a design team, and what part of the design process they should “own.” (I disagree with the concept that someone can actually own a part of the design process, but that is another discussion entirely.) UX professionals with backgrounds in the social or behavioral sciences, or who are very adept at gathering and analyzing data, are perfectly primed to work as UX researchers. Graphic designers, industrial designers, and engineers who are adept at creating tangible products would then be perfectly primed to work as UX designers. 

Of course, these are not hard-and-fast rules. These are the result of Skot’s and my attempt to define and organize a very complex, multi-disciplinary field. To broaden our definition, Skot adds: 

“I think one unified way to define UX is that it is a perspective—a lens through which you can view your product, service, or organization. It is the lens through which your users view you, as well. In my opinion, anyone who can use this perspective is practicing UX.”

We are curious to hear what other UX professionals think about this phenomenon. We would love to hear how you bring clarity about your role and responsibilities to your design team. Please write your comments below. We look forward to reading them. Until next time, please enjoy the experience.



Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.

@Huda (2013 06 11)

I don’t think there’s a right or wrong answer here, because UX as a job field is so vast. In my humble opinion, UX is more of an overlay on top of the design elements. So it’s all the little details that nobody really realizes are important, but that play an important role in the balance of the entire experience.

@Ashley (2013 08 19)

Thank you, Huda! I like what you said about, “all the little details that nobody realizes are important.”

@Omid Amraei (2014 01 17)

It was the most clear answer. UX Researchers are searching the history (past to present) and find what is related to users instinct. While UX Designers are searching for future (present to future) to find what is related to user’s wisdom.

I invite you to read my first article in English.
https://medium.com/p/430f5ca67d82


The HCI community, the medical community


Authors: Jina Huh
Posted: Mon, May 20, 2013 - 9:55:02

In this blog post, I want to share, from the perspective of a junior researcher, some reflections on how I observed HCI work has been discussed by the medical community, the changes happening in the medical community toward evaluating technology design for health, and the different perspectives on what constitutes novel research between the HCI and medical communities. People who have been pioneering this area (e.g., those who have helped build and organize the Workshop on Interactive Systems in Healthcare (WISH) over the past couple of years, guide the CHI health community, and organize HCI in health conferences) will be able to provide broader insights from their experience working closely with the medical community. WISH is probably the closest place where the themes I am about to explore here have been, and will continue to be, discussed in depth. This post may relate to Richard Anderson’s recent post, “What designers need to know/do to help transform healthcare,” in terms of connecting the two communities together. The difference may be that while Anderson looked at what designers could contribute to healthcare, I am looking at how the two communities differ in culture and the implications for considering close collaboration.

Julie Kientz, Eun Kyoung Choe, and I carpooled to the Fred Hutchinson Cancer Center to attend a talk given by Dr. David C. Mohr, Professor of Preventive Medicine and Director of the Center for Behavioral Intervention Technologies (CBITs) at Northwestern University. The title of the talk was “Going Digital: Building eHealth and mHealth Technologies.” Mohr suggested that the seminar be an interactive session rather than a formal talk. The audience (about 60 people) consisted mainly of people from the medical field—at least, we (Julie, Eun Kyoung, Betsy Rolland, and I) were the only ones from the DUB community. Long, enthusiastic discussions with the audience allowed Mohr to cover only his first few slides. We did not get to hear details of the projects going on at CBITs, but we were able to hear general perceptions about designing and evaluating technology for health coming from people in medical fields. (Later I will discuss what my frequent use of the terms medical fields and medical community entails, just in case it bothers you. I also have to mention that what I mean by we will not be consistent, but as a person working to participate in both HCI and medical informatics fields, it is really hard not to use we in multiple ways. Perhaps I will write a separate post about my meaning of we.)

Mohr was quite knowledgeable about the HCI design process and appreciated multidisciplinary teams. He shared insightful challenges in funding and development, contrasting the long time it takes to receive an R01 grant (the biggest funding mechanism at the National Institutes of Health (NIH)) and finish a randomized controlled trial (RCT) with how fast technologies evolve. He shared changes in how the medical community sees RCTs, or at least how the people gathered at an NIH workshop last year to explore alternatives to RCTs see them. We also got to hear how the audience, mainly coming from medical fields, perceives the role of technology development in general. Hoping that this is not too much to cover in one article, I will now present some burning themes I want to discuss.

Designers make things look pretty

We have heard this many, many times. Ever since I moved from being an art and design student to being an HCI researcher, I constantly have had to remind myself and others that designers are not there to make things pretty but to design the entire interaction. As designers of technology, interaction designers, as opposed to graphic designers, for instance, design interaction, not just the appearance. Many CHI scholars who are designers by training have continued to push this argument for years. 

We saw that sentence again on Mohr’s slides, along with another line that said: “UX/HCI Engineers: Make things usable.” While it is true that UX/HCI people are there to make things usable, “going beyond usability” has been another dominant theme that people have been discussing at CHI for many years. I remember trying to be philosophical about it in my preliminary exam in 2006, proposing to study “Postmodernism in HCI.” 

It was sad to see that statement (designers make things pretty) appear again, especially from somebody who has a good understanding of the HCI field. So how can we solve this? There really is no easy solution. Perhaps, just as designers with a capital “D” continued to fight to become primary stakeholders of project development rather than staying on as consultants or as some third-party group of people providing labor to half-finished projects, we should continue building deeper relationships with the medical community, enough to help both of our communities better understand our roles as stakeholders holding common goals. 

The R01 cycle and the technology adoption cycle: Perspectives on the evaluation outcomes

Mohr showed the tedious timeline of an R01 cycle. In total, it takes about 15 years to go from synthesizing problems to reporting results from RCTs. He contrasted that timeline with how our phones have evolved in 15 years. Mohr questioned whether, considering how fast our environment changes, an R01 can really control for external influences, and how we can say that the outcomes from the RCTs are applicable in the real world. This is when he mentioned that the medical community is also seeking alternative ways to evaluate outcomes and that the NIH actually held a workshop to discuss the matter. The funny part was that the NIH wants to change, but the reviewers with strong methodological views are the ones who are resistant. 

Then Mohr and someone in the audience introduced iterative design cycles as one of the solutions—one of the main methods HCI uses. However, because the ultimate goal in the medical community is change in health outcomes, not engagement with technology, iterative design approaches may not be appropriate. I remember, with my brave social skills, sharing my NIH career grant idea for brief feedback with one of the leadership faculty at the Duke Center for Health Informatics and the Department of Biomedical Informatics at Columbia University in a hallway at the American Medical Informatics Association (AMIA) symposium. The first question I was asked was, “What is the ultimate outcome you are trying to test?” I could not respond to that question. I was used to working for an innovative design company and evaluating user experience, not measuring users’ A1C levels (a measure for patients with diabetes) in response to using a diabetes mobile app, for instance. This is when I realized that there is a huge gap between the HCI and medical communities—that we measure different things. 

Evaluation versus innovation

From hearing the audience’s discussion, both Eun Kyoung and I sensed that the granularity of technology design they were describing was at a very high level, such as “Internet use” or “mobile use.”

I remember going to talks at AMIA and around campus on electronic medical records (EMRs), to which my initial response was, “What about EMRs? Which EMR?” The speakers presented outcomes on how EMRs shifted medical errors, work efficiency, or billing issues, without giving details on the design of the EMR that potentially could have shifted the outcome. After the talk, Eun Kyoung shared how challenging it is to design a really good notification system, and how a small feature can change behavior outcomes. 

One salient difference between the HCI community and the medical community is that the HCI community is interested in innovation, in how novel the technology is, while the medical community is interested in evaluation, regardless of how novel the technology is. Accordingly, strict methods matter for the medical community (e.g., having a company’s own employees test its technology is not valid), while they may be less important in the CHI community, although some may strongly disagree.

Final note

I am hopeful, as many researchers from the medical community are increasingly working with, and hiring, students and researchers from CHI. Several leaders from the CHI community have been working hard to create a close connection between the NIH and NSF worlds. This is evidenced by a recent grant call on smart and connected health on which NSF and NIH collaborated, and by CHI’s and AMIA’s continued efforts to work together using venues such as WISH. At the same time, as a new faculty member in the field, I feel under pressure to accomplish many of the changes that I believe should occur. 

What is certain is that designing technology for health is an exciting field to be in. The field continues to evolve and requires multidisciplinary collaboration, creating necessary cultural clashes among disciplines. People in this field continue to challenge themselves: the NIH workshop participants seeking alternatives to RCTs; Mohr and the seminar audience, who started a conversation about the best ways to design technology for behavior change; and the CHI community, which keeps expanding its horizons in designing technology for people.

About my use of the terms medical fields and medical community: I assume that people from the medical community will read this post and think that I may be oversimplifying what I mean by medical community. In my definition, the medical community includes any people coming from academic homes that constitute medicine and public health. Accordingly, in my mind, as an extreme example, biomedical informatics people are also "medical people." Using my definition, I technically should be a medical person too since my current fellowship comes from a biomedical informatics department that is part of a medical school. However, at AMIA, they (or we) call themselves (or ourselves) the informatics people, without the word medical. As much as what I mean by medical may greatly differ in the medical community, what the medical community sees as informatics may greatly differ in the information science community. Who will we consider the closest “informatics person” if people from the American Society of Information Science and Technology (ASIST), iSchool, computer science, and HCI gather together in one room? As much as the role of designers is misunderstood in the medical community, the role of “being medical” may be as equally misunderstood in the HCI community. And my broad use of the term medical fields is evidence of that.

*I also want to thank Julie Kientz, Eun Kyoung Choe, and Wanda Pratt for their feedback and help on this piece.

Jina Huh is an NLM postdoctoral fellow at the University of Washington Medicine Division of Biomedical and Health Informatics. Starting in fall 2013 she will be an assistant professor at Michigan State University's College of Communication Arts and Sciences.



Jina Huh




@Jina Huh (2013 05 20)

I am attaching related discussions and links here:
Kwon, Hur, and Yi’s work on discrepancies between HCI and healthcare literature in dietary intervention systems: http://onlinelibrary.wiley.com/doi/10.1002/hfm.20371/full

WISH 2013:
http://wish2013workshop.wordpress.com


CHI 2013 roundup


Authors: Elizabeth Churchill
Posted: Thu, May 16, 2013 - 12:28:34

The 2013 ACM SIGCHI Conference on Human Factors in Computing Systems (commonly known as just CHI) took place in Paris from April 27 to May 2, 2013. Many of you are familiar with the CHI conference either through direct experience or through reading papers that are showcased there. 

This was a bumper year for CHI: around 3500 attendees running between 16 parallel tracks of content and activities that featured over 1000 presentations sorted into over 200 sessions. I should also note that non-collocated attendees were also part of the conference. Extensive use of social media was evident (Twitter had at least two streams simultaneously pouring forth content using #chi2013 and #chisadness, the latter created by those not physically present, some of whom were organizing local gatherings to watch content from afar). Needless to say, all this effort to be co-present on the part of far-distant people led to much discussion at the SIGCHI Town Hall Meeting about the benefits/costs of increasing our broadcast capabilities at future conferences.

Preparation for and real-time management of the conference involved over 100 conference committee members, over 150 student volunteers, and 30 local volunteers. 

As for the content planning, official statistics presented by the program chairs are as follows: 

  • 1962 papers and notes submitted with 392 accepted (an acceptance rate of 20%)
  • 1604 submissions to other venues with 659 accepted (an acceptance rate of 41%; see the quick recomputation below)
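
As a quick sanity check on those figures (my own illustrative sketch, not part of the official statistics), the acceptance rates can be recomputed from the submission and acceptance counts above:

# Recomputing the CHI 2013 acceptance rates reported above (illustrative only).
venues = {
    "Papers and Notes": (1962, 392),   # submitted, accepted
    "Other venues": (1604, 659),
}

for venue, (submitted, accepted) in venues.items():
    rate = 100.0 * accepted / submitted
    print(f"{venue}: {accepted}/{submitted} accepted ({rate:.0f}%)")

# Output:
# Papers and Notes: 392/1962 accepted (20%)
# Other venues: 659/1604 accepted (41%)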

This all took over 10,000 invited reviews that were managed by over 200 program committee members. Once submissions were accepted there was a mammoth session-planning process aided by the specially designed Cobi system, which is “a collection of crowdsourcing applications that elicit preferences and constraints from the community, and software that enable organizers and other community members to take informed actions toward improving the schedule based on collected information.” You can see more on the Cobi system in the paper “Cobi: Communitysourcing Large-Scale Conference Scheduling” by Zhang and colleagues, in which they describe their process and the technology they developed. 

If you are interested in more details about the conference itself, I wrote a short, very personal review of my experience at CHI 2013 for the Communications of the ACM blog.

Looking to the future, CHI 2014 will be in the gorgeous Canadian city of Toronto from April 26 to May 1, 2014. Start planning to attend—actually, don’t just plan on attending, plan to participate.

Deadlines for submissions are as follows: 

I look forward to seeing you there!

Elizabeth F. Churchill is Director of Human Computer Interaction at eBay Research Labs in San Jose, California. She is also vice president of ACM SIGCHI.



Elizabeth Churchill





CHI grows up


Authors: Aaron Marcus
Posted: Thu, May 16, 2013 - 8:12:26

CHI 2013 took place from April 27 through May 2 in Paris’s Palais des Congrès, not far from the Arc de Triomphe on the western edge of the city center. Besides being notable as the first CHI conference ever to be forced to turn away eager participants because they simply could not take on more visitors (I was told they planned for about 2100 to 2400, but that they had to stop at 3400!), the conference was memorable, for me, because the contents of many presentations, posters, and consequent discussions seemed more, well, grown up, or mature, than I remember from many conferences in the past three decades.

Perhaps it was the influence of this very sophisticated, elegant city of lights. Many Paris citizens looked like they had just stepped out from a fashion shoot. Even if a cancer-inducing cigarette was dangling vertically from silent, pouting, or even speaking lips, it just appeared more elegant, more cosmopolitan. Even the lady in a small sidewalk booth sporting a battery-powered electronic cigarette looked sophisticated as she tried to convince me to try one. I didn’t, preferring real, very occasional small cigars.

In the CHI social gatherings, elegant multicolored, complex sweet treats and other nibbles, served with beverages of our choice, seemed much more architectural and visually appealing, signaling, perhaps, that more was expected of conversations and presentations. 

In my opinion, this is exactly what I found. I comment on my own, admittedly limited, exposure to a plethora of activities.

My week started with my participation in a two-day workshop on “HCI in Third Places” organized by Roberto Calderon, Sydney Fels, and Junia Anacleto. I presented our own project, the “Learning Machine,” a mobile concept design for learning anywhere, anytime. At first I was a little uncertain about the value of the objectives and likely outcome, but as I had more exposure to the issues of concern, I realized that this workshop was focusing our attention on actual physical places where people gather outside of work/school and home, and how technology might help or hurt the truly human-human interaction and communication. These shrinking opportunities for real interaction, not isolated, limited, or virtual encounters, were important to cherish and nourish. Imagine our delight and surprise when we learned that much of Day 1 was devoted to field/contextual studies of Parisian cafés. I have never, ever spent so much time drinking coffee and observing people, in this case in the Latin Quarter! What a delightful, and insightful, experience.

Other workshops, which I noted from the program or heard about from other participants, seemed to focus attention on topics that have been touched upon in past conferences of CHI, UXPA, DUXU, and others, but the frequency with which certain terms appeared at CHI 2013 signaled a nuanced understanding: that more than technical concerns were at stake—more than abstractions, more than software, more than hardware, more than cognitive science, more than computer science, which have been the bases for CHI’s activities and interests for decades, its grounding communities of interest. I mean, for example, the terms in workshop titles focusing on children, ethics, families, seniors, sustainability, teenagers, and vulnerable people.

The opening keynote, “The New Frontiers of Design,” by Paola Antonelli, Senior Curator at the Museum of Modern Art in New York City, discussed issues of how to understand, value, and respect HCI objects/devices/systems in the museum’s present and future collections. This topic added perhaps just the right spin to papers, panels, exhibits, and other events that emphasized design and the presence of designers at CHI. Many younger attendees may not appreciate that designers have fought long and hard to be included in the proceedings and sessions of CHI. As a former board member of interactions, I can recall when a member of the academic community cautioned about accepting design-based articles and sneered that the design rabble was infiltrating the august hallways dominated by computer science and cognitive science. The times, indeed, have changed. As the first member of the CHI Academy with an official design background, I am part of this change and (some would say) progress or maturation of CHI.

Even in posters, whether from student competitions or PhD dissertations, the variety of topics seemed much more diverse, much more socially aware, and much more curious about the world. I was surprised and touched to discover three Mexican women students from the Universidad Tecnológica de la Mixteca standing by their poster about a project to raise awareness of women about how to cope with and live a life free from gender violence. Another display entitled “Using Design Thinking to Empower Immigrant Youth as Information Mediaries” seemed to bring together urban civic organizations, technology providers, and social services groups in a fresh, productive way. It is not that these topics have not appeared in the past, but this conference seemed to carry many more of these themes. That seemed, also, to be a sign that CHI was maturing in its social, political, and cultural roles and responsibilities.

On a more techie note, I was delighted to see at least one display/presentation about the use of fingernails. I have suggested for several years that this part of the body might be at one of the forefronts of display innovation. Raphael Wimmer and Florian Echtler of the University of Regensburg showed a prototype of a device that would enable you to see on your own fingernail the enlarged letters or symbols that your finger was over, thereby making it easier to point, select, type, etc., on touch-sensitive surfaces or interactive buttons/keys.

One of the art/game-oriented exhibits also caught my attention: “Big Huggin’: A Bear for Affection Gaming,” by Lindsay Grace. Big Huggin' is a game played with a 30-inch custom teddy bear controller. Players complete the game by providing several well-timed hugs to propel an avatar on a wall display to leap over obstacles (see photo of teddy bear). 

The reason this project seemed especially poignant was that I had just heard a TED talk on National Public Radio by Sherry Turkle of MIT, recanting her earlier enthusiasm for adorable robot technology used to calm Alzheimer’s patients or other patients with dementia. Turkle realized that technology was enabling all of us to “outsource” care and concern for vulnerable people, whether family members or not. She now feels such technology is leading us down a path that desensitizes us and makes us even more remote and isolated from others, we who seem no longer to wish to make the time to care for others and might not even know what to do given the opportunity to care. This controversial issue seemed to be hovering conceptually nearby, as I watched enthusiastic CHI participants hug the teddy bear and squeal with delight.

After roaming the corridors, social gatherings, and session breaks for a week, I realized that the most valuable part of CHI was the time I spent talking with people, not the presentations or the posters themselves. In fact, it seemed clearer than ever that the CHI conference was a giant version of the Parisian cafe. I felt I had just enjoyed a tremendous jolt not only of caffeine but of discourse, challenging discussions, renewals of old friendships, and close encounters of many kinds with new, strange, different people and ideas—not just technology—and an opportunity to make new friends.

Bravo! to the organizers/sponsors of CHI 2013 for providing such a rich, varied, and “mature” environment in which to have these experiences. CHI has come a long way since its founding in 1982 and through its many transformations over the past three decades. I am glad I have been able to witness, and perhaps to have contributed a little to, these changes.

Additional CHI commentary

Mariann Unterluggauer, Vienna, interviewed Aaron Marcus and others at CHI 2013 in Paris, about their views of the conference. The broadcast took place on Austrian radio on May 19, 2013.

Mariann Unterluggauer interviewed Aaron Marcus at CHI 2013 in Paris on May 2, 2013. The interview was broadcast on German Radio.



Aaron Marcus

Aaron Marcus is president at Aaron Marcus and Associates, Inc. (AM+A) in Berkeley, California.


Design, CHI, richness of spirit


Authors: Deborah Tatar
Posted: Mon, May 13, 2013 - 9:19:11

Since returning from Paris, I have been tapping away, writing about CHI, commenting on the papers, the panels, the oblique and direct epistemologies. But what keeps rising to the top is not CHI, but Paris. I spent the summer of 1983 in Paris and two weeks there in 1995, but have scarcely been back since. What has happened?

For the conference, I stayed near the Jardin du Luxembourg, in the 6e, near the center of the city. My niece is living in the 11e, near Le Marais. CHI itself was at Porte Maillot. So I moved around a little bit. What I saw, everywhere, were people living on the street. A blue sleeping bag marking this entryway. Two people wrapped in blankets, end-to-end on a park bench, heads next to one another. A large awkward slab of cardboard bent over to protect a pile of belongings. A mattress next to two bins of neatly folded belongings. The legless older man in a plaid wool vest with the shy smile begging on our block from early morning through to the long twilight. In fact, there were beggars everywhere.

No one I saw appeared to be what we in the States call itinerant. The beggars appeared to have their spots, as did the homeless, and several times I saw passers-by greet them and start apparently congenial conversations. How are you, here's some spare change.

The most shocking sight was people dumpster diving for food. Three people, a woman and two men, dressed—not so differently from anyone else in France—in slightly shabby black, exclaiming with joy at finding packets of crumpets. Far from appearing to fear that they would get caught, their chatter rang in the quiet morning street. Also, on the metro in the morning, women presumably on their way to work wore sneakers and blue denim jeans. Men and women alike were adorned primarily and only by scarves, almost always in very subdued colors. Where were the bags, the belts, the shoes, the style, the chic? Some men were in suits, but many wore puffy coats, like my son in college in Portland. I didn’t see France; I saw suffering.

Since I wrote this, I have heard rumors about bad experiences among CHI attendees: muggings, robbery, theft, and one rumor of physical harm. I'm sorry if these are true, and they give a more dire meaning to what I noticed. Another CHI talk that I did not attend was on protecting researchers. No idea what was said, but, when the idea came up, I did comment to two young female graduates that my graduate students would not be doing research in bars. I already have nightmares about my responsibilities as an advisor. 

I don't know the reality of the French. But seeing what I did in Paris turned my mind to the question of richness of spirit. I wondered, "Where at CHI did I see richness of spirit?" Not wealth, not pride, not control, but assertion of being, compassion, kindness, thoughtfulness, care. 

As it happened, I was not able to go to any of the most likely talks for finding richness of spirit, such as those on IT4D or sustainability. But I did see elements of richness of spirit elsewhere. In particular, in the kind of playfulness in Youngsil Lee's work and in AnyType. 

AnyType, developed by Laura Devendorf and Kimiko Ryokai, is a lovely facility that allows a person to take a small number of still pictures (5) or 5-second videos and turn them into a font. Part of the intention is to make the creator more aware of textures in the environment. The examples shown at CHI were simple and untroubled patterns seen with a good eye and a steady hand, and the alphabets were used to spell words related to the originating pictures ("GRASS", "BIRDS", "PIPES"). It was nice to see this work come out of the iSchool at Berkeley, and it represents a spreading of what we might call the school of Hiroshi Ishii, sharing his aesthetic of simplicity and individual empowerment. It is a terrific example of what I like to call zensign—that what isn't in a technology can be as important as what is in it. 

Left to my druthers, I would prefer for this simple tool to remain a boutique app, in which the aesthetic of production and product are tied. That's not likely. In two years every complex commercial graphical and video manipulation tool is likely to have something very like this folded in as a facility, lost in deep valleys of cascading menus and buried beneath a bewilderingly large array of font choices. 

But presumably the original will still be available and those with patience might still be able to extract the aesthetic influence of the original. I experienced the aesthetic of the sample work shown as reminiscent of Liberty's of London and Laura Ashley patterns, with deceptively simple patterns and historic roots in the Arts and Crafts movement. And, of course, the Arts and Crafts movement was about a focus on craftsmanship in opposition to the dark Satanic Mills of industrial Britain. 

Where AnyType is simple and happy, Youngsil Lee's work is more complex. Two of her works spoke to me. The emotionally simpler of the two was the hedgehog dress, a black dress with a tight sash at the waist and elaborated cowl area. The elaborations consist of what initially appear to be metallic adornments. So, it's a young woman's dress, sexy with a bit of saucy punk. But here's the trick: When someone approaches, the metallic decorations stand up, like a hedgehog's quills. Back off, buster! 

I particularly liked this because I used to have a colleague who expressed power through (inappropriate) physical contact. He once came up behind me when I was leaning over in focused work with 7th graders, rubbed his hands up and down the sides of my arms a few times and, before I could turn around, had moved off. I was not a happy camper, but I could not say anything in a crowded room full of noisy 10-year-olds. Eventually, he pulled a similar stunt under more accountable conditions and I was successfully able to growl at him, "Don't EVER touch me." Pure porcupine. It hasn't happened since. 

In the right circumstances, the hedgehog dress could also be part of establishing a nice flirtation—after all, the "quills" are not actually pointy and the question of “how near?” is pretty central—but the Venus Flytrap dress is a subtler social tool. It is also a little black dress, but with a considerable plunge in the neckline. On each side of the plunge, like lapels, lie red and silver decorations. You would accessorize it with a black, red, and silver clutch in New York, or with flat half-calf boots in Portland. But when a hand approaches the neckline, the trap is sprung. The flower closes. The name, Venus Flytrap, suggests danger, and so does the closing movement. But the experience is soft and pulling the hand closer than it might otherwise have come is a deeply ambiguous act. The playful dangers and delights of flirtation, and the true risk of psychic and physical pain are brought into a kind of layered focus.

In "Identity and Violence," Amartya Sen, the great economist, ties world violence to the idea that each person has a singular, exclusive identity, rather than a multifaceted and rich experience of life. When CHI creates subtlety, and layered experiences, it is in a small way combating the deprivation and neglect I saw on the streets of Paris last week. Design enhances spirit.



Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.


@Pardha (2013 07 07)

Thanks Deborah, for an insightful posting! This was not my first CHI but my first time in Europe. I always thought of Europe, especially its richer countries like France, as a region with a great social safety net. The poverty and deprivation I noticed in Paris was a bit of a shock. Perhaps the situation is better outside its big cities?

Unfortunately I missed the AnyType demo at CHI. Thanks to your post, I looked it up and am glad I did. We need more works like this that bring out the inner artist in all of us. A small suggestion: consider linking their website to this post (http://www.artfordorks.com/anytype/).


Prototyping businesses: plan vs. play


Authors: Tim Fife
Posted: Fri, May 10, 2013 - 7:19:27

I’ve recently come back from a two-day workshop on Lean StartUp, hosted by The Pollenizer in Sydney, where I got a crash course in how entrepreneurs take a sketchy idea and develop it into something worth investing in. 

This got me thinking about one of the trickiest parts of the type of design that I do—namely, how do you prototype a new business? As I’ve said before, I think of organizations as a great big set of interactions. All of these interactions can be designed, and if they are designed well, an efficient, profitable business can emerge. I’ve learned that entrepreneurs do this kind of work all the time, starting with the smallest viable set of interactions possible, and setting them loose in the world to see what will happen. 

Saras D. Sarasvathy from the University of Washington School of Business published a paper on how entrepreneurs think. What struck me was the finding that entrepreneurs think more like experimenting scientists (or perhaps like playful children) than they do like strategists, by which I mean those typically responsible for launching new businesses within mature organizations. According to Prof. Sarasvathy, the entrepreneurial mindset takes what’s at hand, surveying the available resources, and plays with them in an attempt to come up with something that could possibly deliver some form of value—they essentially design experiments to see if this configuration of services or that articulation of value will attract attention in the market. The effect itself isn’t necessarily a strategic goal; it’s just an indicator that the interaction was effective. A strategist, on the other hand, typically sets a target or vision, and then spends his or her time designing a way to achieve that specific goal, usually by building big plans with associated metrics. The strategist then puts lots of wheels into motion, monitoring for specific success criteria to see if the idea works the way she or he hoped it would. 

What’s becoming clear to me is the fact that the former approach has far more in common with prototyping than the latter. The entrepreneurial approach, as I saw it playing out at The Pollenizer, is primarily about learning. It is explorative and interested in making discoveries and changes. “What will happen if I do this? What kind of response will I get if I try that?” It does this in an extremely intuitive and lightweight kind of way, and it uses each new finding to change the way it operates, informing the next experiment. 

While the strategic approach is undeniably about learning as well, it is far more concerned about the efficacy of the design—“Is it doing what I intended it to be doing? Am I getting the results I need to be getting? Is it running to plan?” The strategist typically wants the plan to be “right,” and wants the outcome to be what he or she has intended. However, since the strategist isn’t approaching things as an experiment but rather as a plan, she or he is often only looking at the strict metrics and goals predetermined to be worth measuring. Since attention is only being paid to what is being measured, the strategist often ignores or accidentally misses wider lessons or trends. When unexpected things do occur and are noticed, the strategist will often have a great deal of difficulty making changes and adapting.

A colleague recently told me about a conversation she had with a consultant from a traditional strategy and operations firm. The consultant said that after launching a new business, they typically look for confirming data and ignore the rest. It’s not that he and his fellow strategists are inflexible or can’t adjust, it’s just that they only pay attention to what they are measuring and therefore miss unexpected learnings. 

I know that the traditional intent of prototyping is to identify where a design will break and spot how the design can be improved. But perhaps this is too constrictive, particularly when it comes to prototyping new businesses. What about using prototyping to find brand new opportunities? That’s what Lean StartUp and Sarasvathy’s entrepreneurial mindset seem to be all about. 

Of course, this would be a new way for many businesses to operate—experimenting with resources, approaching innovation as play. It may require that new methods be developed, new KPIs put into place, new boundaries of responsibility and domes of discretion articulated so that the traditional immune systems of the organization don’t kill this new function of experimentation. 

But imagine if large, mature organizations set their intent to find new ways to organize their existing resources to deliver new value and took an entrepreneurial approach. They would prototype. They would put little things out into the world to see if their value was clear. They would see if people get it. See if they could get a nibble. Then they would bring it back in and re-jig it. Try something else. They would be nimble, agile, but most of all, they would be experimental—or even, dare I say it, playful.



Tim Fife

Tim Fife is Senior Innovation, Strategy, and Design Consultant at Second Road (Sydney, Australia).


What would it take to be inclusive?


Authors: Jennifer Mankoff
Posted: Tue, May 07, 2013 - 10:03:13

It's been a while since I was asked to join the group of interactions bloggers. I guess I've been waiting for the right inspiration to strike. I've just attended CHI, and that inspiration has finally arrived, but not quite in the way I expected. When I was at CHI, I saw mothers sitting on the floor to nurse their children (due to lack of seating) and a good friend forced into a wheelchair by the inaccessibility of the venue, which I also struggled to get around using a cane. And I realized that it's time we ask ourselves a question: What would it take for our community to be truly inclusive, to make sure that we welcome the same diversity into our community that is so wonderfully present in the work we feature at our conferences? 

So ask yourself: are you willing to educate yourself or others? Those companions this week at CHI who had the experience to understand my situation made me feel at home and supported me without making me feel out of place. But do those of us with the responsibility to act know enough—or should we also be inclusive when it comes to decision making, allowing people in the trenches to represent their own needs and knowledge? What about educating others? For example, student volunteers at CHI this year could not answer simple questions such as where the nearest seating was. Another year, when an unexpected decision was made not to allow children into the conference center (a policy change that was not announced), an ACM staff member compared my wish to have my child with me and attend the conference to a smoker's wish to smoke in the building. I staged a protest in response but another new mother, a first-time student CHI attendee, flew home to Canada that day. 

Or ask yourself: Are you willing to put time or money into inclusivity? Is inclusivity worth the effort of approaching a venue with a list of accessibility guidelines in mind, and posting the information you collect prominently on each conference website? Is it worth SIGCHI paying for extra chairs or creating a webcast of even one session a day for those who cannot attend in person? Would you take the time to convince your university or business to invest in a nursing room so that moms without private offices don't have to knock on the door of sympathetic private office owners to find a secluded space (something I've been asked for more than once)? Would you invest your department's dollars or your time in creating a welcoming space for new moms, or ensuring that staff or faculty who need it can have a parking space close to their office or an office close to their lab or the bathroom or whatever it is they need? 

Ask yourself: are you willing to change rules and norms? Isn't it about time we considered allowing those who have a legitimate need to attend program committees remotely, for example? Is it worth having their perspective even if we sacrifice having everyone present? I’ve been told participation requires presence. I've also been gifted remote presence, and half workloads in the past when program chairs were willing to break the rules, but perhaps we should rethink those rules instead. Or then again, could we benefit by allowing students, faculty, or professionals to work part time, accepting what they can offer within the constraints of a disability, rather than sacrificing their perspective or forcing them to choose between motherhood and participation? Sure, some could take a break and come back, but there can be great difficulty in recapturing the knowledge of who and what matters after a year or two of time away (“getting off the banks”). 

I do not stand alone in asking these questions. Shari Trewin created a document on accessible conference planning for the ASSETS conference, I have heard that SIGCHI has a committee now looking into inclusivity, and Jennifer Rode has been an outspoken and passionate crusader for increased accessibility at Ubicomp, CSCW, and CHI. I am sure there are others that I don't know of. Yet progress is slow and in the meantime those of us on the outside bounce between a struggle to participate and those wonderful moments when things are done right. 

But I must say, at this moment, I am tired of hearing the old saws about who is not attending what conference because of what papers they do or don't accept. My disability is part time, but it still has had a big impact on my working life, as has motherhood, and I am one of those who must constantly ask harder questions about conference participation such as: Should I attend the next conference? What will it do to my health to participate? How absent can I be and still give my children the stability they need? 

So next time you plan an event, go to a colleague's baby shower, or invite someone to join a committee, take a step back and ask yourself: How can I be more inclusive? And then think about how you can make sure that even the people who are walking a fine line between their personal needs and their participation in our field still have a voice and a presence. 

Jennifer Mankoff is an associate professor at Carnegie Mellon University.





Jennifer Mankoff





What is user experience?


Authors: Ashley Karr
Posted: Mon, May 06, 2013 - 9:55:48

Takeaway: User experience (UX) is a type of engineering and design that focuses on creating products and systems that work best for the intended user.

Defining user

I am a UX researcher and designer. What I do for a living is very interesting and fulfilling, but it isn’t free and clear of challenges. One of the greatest professional obstacles I face is explaining to people what I do. Defining UX is tricky, but even trickier is the fact that I’ve never been fond of the word user. It can bring to mind some strange and/or uncomfortable things. To my surprise and delight, I recently watched a video where Donald Norman, the man who is credited with coining the term user experience, shares my sentiment. However, we will most likely have to stick with the term for the time being, and so I will define it in this context. When we say user, we mean people. Specifically, we refer to people who are or will be using whatever it is we are designing. 

UX is not

UX is not programming, graphic design, marketing, or project management. Most UX professionals work with programmers, graphic designers, marketers, and/or project managers at some point in their career. UX professionals do tend to pick up programming, graphic design, marketing, and project management skills, and programmers, graphic designers, marketers, and project managers are probably applying some UX principles without even knowing it. 

UX is

As I mentioned earlier, UX is a type of engineering and design that focuses on creating products and systems that work best for the intended user. This means understanding and involving all aspects of the user throughout the entire design process and product lifecycle. All aspects of the user can include but are not limited to the following: 

  • Knowledge
  • Skills
  • Attitudes
  • Thoughts
  • Emotions
  • Opinions
  • Culture
  • Demographics
  • Cognitive abilities and constraints
  • Physiological abilities and constraints
  • Physical measurements, abilities, and constraints

Although UX professionals use objective methods to gather information about the user, UX is subjective in nature. It highlights the experiential, affective, meaningful, and valuable aspects of human-computer interaction and product ownership. It includes a person’s perceptions about a design's practical aspects, such as:

  • Utility
  • Ease of use
  • Aesthetics
  • Efficiency

Additionally, it is dynamic and constantly modified over time due to changing circumstances, evolving technology, and new innovations.

Questions

Offering a definition of UX usually stirs up more questions, such as:

  • Who invented UX?
  • What are basic UX principles?
  • What methods, measurements, and tools do UX professionals use to do their job?
  • Why should a company hire a UX professional?
  • What value does a UX professional add to a design team and what role do they play?

You’re in luck. I will be writing entertaining, informative, easy-to-understand answers to these questions in my upcoming articles. And, if you have additional questions, comments, or suggestions, please post them below. I am particularly interested in hearing how other UX professionals define UX and what challenges and successes they have experienced while attempting to explain to others what it is they do for a living. I am looking forward to your comments!

Thanks for reading. Until next time, enjoy the experience.



Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


Designing from the sixth sense


Authors: Monica Granfield
Posted: Thu, May 02, 2013 - 9:35:41

The theme of this year’s CHI conference is "Changing Perspectives." It got me thinking about the perspective of design, and about design by intuition in technology. 

Perspective is the state of one’s ideas or views, assumptions and beliefs, the facts known to one. Proposing another way to look at something is often a way to change someone's perspective. Having an experience, good or bad, can change one's perspective. The current industry perspective on design seems to be focused on data-driven design, user-centered design, and standards-based design. The perspective is that design based on these methods is effective and justifiable design. However, what is missing is the intuitive aspect of designing the experiences we are creating. We need to consider how we can change our perspective on design in technology to support intuitive design and that sixth sense, because in that sixth sense lies the source of intuition and innovative design. 

We are always striving to make our designs in user experience more intuitive and innovative. However, if we are not designing from intuition, we are merely recreating what has already been done, using common components and making few strides into innovative interfaces and experiences. So why aren't we designing from intuition? Because it takes time, might cost more, no one else is doing it, it is the unknown, and it is not tangible. 

According to graphic designer David Carson, "Intuition is the most important ingredient in design. Everyone has it, most schools discount intuition as part of the working process because you can't quantify it and can't teach it." So is the issue that most of us aren't familiar with utilizing and leading by intuition? Do we have these same barriers present in the workplace? If we don't have an existing example to reference, data to quantify, or cause with which to justify a new idea or design, do we dismiss it for the sake of risk and time? 

"The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift." –Albert Einstein 

When it comes to intuition, rather than facts, how does one justify intuition as a solution for a user-experience design? 

Most times we design scientifically with a process. Many times we design reactively, piecing together a design based on existing knowledge. According to William Duggan, a professor at the Columbia School of Business, some intuition is derived from knowledge. This is what he has coined as "Expert Intuition," intuition based on knowledge, which allows you to make snap judgments, and only works in situations that are predictable and common. This is how most design is currently conceived today. This is not intuitive design and is not design that will innovate. According to Duggan, "Strategic Intuition" is what will lead us to true innovation. It is slow and works for new ideas. It may take a week to generate a new idea that comes to you in an ah-ha moment from your sixth sense. However, according to Duggan, our Expert Intuition often gets in the way of new thinking. It is easier to solve a problem with the known than wait for an answer from the unknown. This is where our perspective on design in user experience can change to truly support better design and innovation. 

Duggan's theory on intuition explains so much. It explains why sitting in a room brainstorming and having a goal to leave that room with a solution is not always an optimal approach for design. It explains why after working on a product heads down for four months and finally having time to sit back and absorb the work, you are able to suddenly generate a half dozen cutting edge, innovative solutions to some of the most problematic areas in the experience. It explains why design needs time to absorb the UX problem before presenting a solution and why time for a design to change and morph may be better accommodated in an agile environment than in our product schedules. Intuition can drive and define an experience. This is not to say that you don't have to justify ah-ha moments with general processes like testing or feedback. It just tells you that based on experience or knowledge gained along the way, your feeling is guiding you and not the crowd or status quo. This perspective on design brings up how we look at where and how design fits into the overall process of creating a user experience. 

“The intellect has little to do on the road to discovery. There comes a leap in consciousness; call it intuition or what you will, and the solution just comes to you and you don't know from where or why.”
–Albert Einstein

As a design community we may inherently know this, feel this, and understand this. As a technology industry, can we change our perspective toward not only creating intuitive products, but around supporting intuitive design and our sixth sense in creating these products? 



Monica Granfield

Monica Granfield is a user experience designer at Symbotic. The views expressed on this website are her own and do not necessarily reflect the views of Symbotic.


A perfect storm


Authors: Jonathan Grudin
Posted: Wed, May 01, 2013 - 7:03:24

Full disclosure: I generally don’t mention my company’s products in print, but can’t avoid it here. It’s not something I worked on.

The summer after high school I was hired to teach tennis at a local park. It was before Chris Evert and Jimmy Connors turned the sport upside down. There was a “right way” to hit every shot—forehand, backhand, volley— but I favored a then-dismissed two-handed backhand. It went without saying that I would teach the proper one-handed backhand, but they said it anyway, and also asked me to use a one-handed backhand when competing that summer to set an example for my students.

Sports instruction changed. No longer is there a right way. Video is analyzed and an improvement plan constructed around an athlete’s current state. The era of the personal trainer arrived. Teaching was much easier back in my era, but not so good for students.

In K–12 education, the one-size-fits-all approach continues, to a first approximation, in most schools today. But it is finally changing, with technology playing a large role, for better or worse. I believe it is for better, much better.

Since shifting my research focus to K–12 education I have been sitting in classes, talking with students, teachers, administrators, and educational technology developers, and reading literature. Previously, my exposure was through mass media and as the parent of daughters in elementary and middle schools. I had opinions, and now some of them have evolved.

K–12 education is on the cusp of dramatic change in the United States and probably everywhere. How can I see it as positive? Examine plans for making sausage in an era of budget cuts and conclude it will be tasty? Have I become a clueless “oldie,” as my daughters put it? Well maybe, but I see powerful converging forces creating the conditions for a perfect storm that will sweep through our schools over the next two to five years.

Force #1: The Common Core State Standards

For a decade, annual state assessment tests have sought to measure basic math and language arts proficiency. They can identify severe deficiencies, but the result of focusing education on passing multiple-choice, fact-based tests is problematic, especially for higher grades. In response, 45 states banded together to develop the Common Core State Standards, focusing on “21st century skills” such as communication, collaboration, critical thinking, complex problem-solving, and project-based learning. After years of development the curriculum is to be fully adopted in the 2014-2015 school year. The devil is notoriously in the details, but it looks to me to be a great step.

Force #2: Online assessment

A student’s mastery of these 21st century skills cannot be assessed with pen and paper multiple-choice tests. Two consortia formed to develop assessment tools, each comprising about half of the 45 states. Their approaches differ but both require online-only assessment by the spring of 2015. Large-scale pilot tests began this month.

The implications of “online” are huge. Rotating students through computer labs is suboptimal. Scheduling for large schools would be a nightmare and students with less online experience could suffer. In March, the Los Angeles Unified School District issued a Request for Proposals (RFP) to deliver over 600,000 tablets by December 2014. Other RFPs are out. Implementation matters: Throwing technology at a school does not ensure progress. Change will not come overnight. But again I see solid grounds for optimism.

Force #3: Advances in ‘blended learning’

As technology costs decline and old schools get wired, as new schools are built wired, experiments that blend online and traditional learning proliferate. We can identify what works and what doesn’t. One model is “station rotation”: one-third of a class works on exercises on a bank of computers with adaptive software that matches problems to current performance, one-third on collaborative efforts perhaps overseen by a teaching aide, and one-third receive a lecture or the teacher’s attention; then they rotate. Another is the “flipped classroom”: Prior to class students watch a video lecture (Khan Academy, VideoNerd, Discovery Learning, teacher-created, etc.). Class time is then spent solving problems and interacting. This merges a non-massive version of the university MOOC (massive open online course) concept with traditional face-to-face education. Well-done case studies and detailed models describe experiences with these and other innovative approaches.

As online resources proliferate, textbook publishers compete with high-quality interactive digital supplements. Adaptive exercises steer students to easier or more challenging problems based on how they are doing. Online video lectures are provided to back up the instructor’s own.

Force #4: Deployments with stylus and OneNote

One-to-one deployments of networked tablets with keyboards and active digitizing styluses are rare. They have been expensive. Deployments exist, though. When students and teachers have a capable device with them at home, school, on field trips, and so forth, tremendous efficiencies and capabilities arise. Students and teachers develop skills, discover resources, share techniques, and unleash creativity in a fashion that seems without parallel. Videos of deployments at Cincinnati Country Day School (from WIPTTE 2013, the Workshop on the Impact of Pen and Touch Technologies on Education) and Whitfield School in St. Louis are powerful accounts.

Low-resolution capacitive touch display devices such as the iPad can reduce the weight of books lugged around by students, but these lean-back devices are primarily designed for content consumption. High-quality content creation—annotating, highlighting, sketching, and so forth—requires a high-resolution stylus with an active digitizer. I recently realized why I had long underestimated the stylus: In few professions are handwritten notes or sketches part of the final product. Education is one of them. Teachers mark papers; students take notes in class.

Textbooks have company in crying out for digitization. Students in middle and high schools carry large notebooks with sections for each class. Tremendous benefits accrue from putting notebooks online with a tool such as OneNote, which comes with Microsoft Office. Students can easily copy and annotate materials from almost any source into OneNote, including learning management systems such as Moodle and DyKnow. They can insert links to audio and video and share sections with teachers or classmates. Teachers can mark homework or quizzes without collecting them, providing students with immediate feedback. Teachers estimate grading time is cut to one-third, preserving time for interaction.

The Storm

The forces described here are converging and they are converging fast. Schools in the Common Core standards coalitions have strong incentives to move to 1:1 deployments by 2014-2015. Not only will assessment be online; software supporting 21st century skills has appeared and effective uses have been identified. Tablet costs are already dropping quickly and the emerging volume—recall the 600,000 for Los Angeles alone—will drive them down sharply. In May, when Los Angeles announces its decision, a new floor for tablet prices may be found.

Observations of existing deployments convinced me that a golden era of educational achievement looms. Students who carry devices with them, more so than the generation that carries only a phone, will be a truly digital generation. 

It won’t happen in two years. It won’t be smooth. Some predict a train wreck—schools unready in the spring of 2015. Educators who have focused for 10 years on teaching kids to pass multiple choice assessment tests will require time to adjust. Tools like OneNote were not designed for use on this scale, but can rise to meet the challenge. Education is not like other disciplines. For example, I was initially perplexed by the fierce insistence of many teachers that they needed better stylus handling during presentations—I rarely see a speaker use more than a laser pointer. Eventually it dawned on me. In presentations to university students or adults, viewers can generally identify the bullet item or diagram part to which a speaker is referring. Not so in K–12. Teachers didn’t write out lecture notes once and then copy and redistribute them—each year they wrote on a blackboard or whiteboard, underlining and circling, pausing for emphasis, drawing arrows to connect concepts. With a tablet and projector, they can do this equally easily and more colorfully, and without turning their back on the students. (This would have impeded note-passing in my youth, but some things never change—students today develop ingenious digital note-passing strategies.)

What could derail my upbeat projection?

Cuts in education spending in recent years are depressing—if cuts continue, the many enthusiastic and well-trained teachers coming along won’t deliver to their potential. Also, although the Common Core State Standards were endorsed by the Business Roundtable chaired by the CEO of Exxon, the Republican National Committee has come out strongly against them. In Texas, one of five states not adopting the Common Core, the dominant Republican Party platform education section “opposes the teaching of higher order thinking skills, critical thinking skills and similar programs.” An impressive number of Presidents and senior members of Congress come from Texas, so this is not a good omen. And because we have lost the habit of teaching kids to think, the new tests could spawn a wider resistance if not managed well.

Outside the Lone Star State perhaps, childhood education is a protected place, one of few in which commercial interests have not gained free rein. I am optimistic. We have a sense of where education should go. We have tools to help us get there.



Jonathan Grudin

Jonathan Grudin is a principal researcher in the Natural Interaction Group at Microsoft Research.


What designers need to know/do to help transform healthcare


Authors: Richard Anderson
Posted: Mon, April 29, 2013 - 2:39:32

I've been immersing myself in all things focused in some way on dramatically changing the U.S. healthcare system and the patient experience. This has included attending lots of events. Last week, I attended the Health Technology Forum Innovation Conference. Two weeks ago, I attended the Second Annual Great Silicon Valley Oxford Union Debate focused on whether Silicon Valley innovation will solve the healthcare crisis. Near the end of March, I attended both a panel discussion about "Improving the Ethics and Practice of Medicine" and HXD (Healthcare Experience Design) 2013. ... (The list goes on and on.)

I've been writing and speaking about this topic as well. Recent examples include the blog post I wrote for interactions in December entitled, "The Importance of the Social to Achieving the Personal" (in healthcare) and my presentation at HXD 2013 entitled, "Preventing Nightmare Patient Experiences Like Mine" (subtitled, "Avoiding 'Putting Lipstick on a Pig'").

As most agree, the U.S. healthcare system and patient experience are badly in need of disruptive innovation, a transformation, and/or a revolution. Hence, the subtitle of my HXD 2013 presentation implies that there are things (UX) designers need to be aware of or do (or not do) so that they can do more than only contribute to modest improvement of the status quo.

What are those things? The things I addressed in that presentation: 

1. Too many designers are too enthralled with technology and too focused on digital user interfaces to have a great impact on transforming healthcare; 

2. Human-centered design as often practiced is better suited for achieving incremental innovation instead of the disruptive innovation most needed -- Don Norman and Roberto Verganti have written a great essay about this; 

3. Design research too often falls short of revealing the nature and dynamics of the socio-cultural models at play that need to change; 

4. Design research too often focuses on common cases instead of the "edge" cases which can better identify or reveal emergent and needed innovation; 

5. Essential to solving the "wicked problem" of healthcare is reframing it, something not all designers do adequately -- Hugh Dubberly and others addressed this particularly well in an interactions magazine cover story; 

6. Designers need to get picky about the kinds of healthcare projects they work on.

(See the presentation for more on each item in the list.)

What would you add to this list? Is there anything in the list you question? Let's have a conversation. Please comment below or contact me via email at riander(at)well(dot)com.





Richard Anderson

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.


@Lauren Chapman (2013 04 30)

Well-stated! Reframing the problem doesn’t happen nearly enough. In regards to your points about design research, how might you recommend adding more rigor for it to be appropriate enough? Is it spending more time in research, or focusing on the edge cases?


The new Gmail interface: better or worse?


Authors: Elizabeth Churchill
Posted: Fri, April 26, 2013 - 4:12:13

People have been declaring the death of email for decades. It is claimed that email is only used by fuddy-duddies, a dying breed of technologically challenged, uncool dinosaurs. 

That would be me then. 

And most of my friends and colleagues. 

I am told that if I were hip, young, trendy, efficient … I would obviously choose to have all my information in small, digestible, unformatted chunks.

Maybe. 

For me, email is the most versatile of all the channels I have available to me for asynchronous communication. It offers the richest range of options for crafting a message: short, subject-line-only messages; long, carefully formatted messages with headers, font changes for emphasis, and bullet points; and plain text for short, tightly constructed messages. Note: I am staying well away from any discussion of whether email as a channel could or should be given a sociotechnical overhaul in terms of interfaces, shared practices and so on. I will leave that point of view and ensuing discussions to the host of consultants, books, speeches, classes, and motivational posters that are dedicated to helping us “get control of our inboxes!” The problems these talented people discuss tend to be less about email as a channel than about our behavior with email—its very versatility means it can get overused, even abused. I will note, however, that overloading can easily arise in other channels; my text stream can get pretty overwhelming and the stream of content that I see going by on Facebook makes my head spin.

Anyway, back to the point… Why have I been contemplating email so deeply of late? Because of the changes that Google wrought upon its Gmail Compose format about three weeks ago. The changes, we are told, are motivated by a desire to transform email and make it more like messaging:

Co.Design argues that Gmail now competes with SMS, instant messaging, and social networks, so Google had to simplify the interface “to keep up with the times.”

I am not alone in my curiosity regarding these changes. From personal blogs to sites like Information Week, people are chirruping away sharing their views. Kate Crane–journalist, co-editor of Smashpipe (a fabulous new Web magazine), and the copy editor for ACM interactions–chatted with me about this for a story she is writing. Many people are upset, she says. She has some very compelling data to support this assertion. 

Personally, I am somewhat upset. More than upset, though, I am curious about the design process that went behind the changes and with people’s reactions. I shared with Kate that, for me, there are three issues that need to be kept separate: 

  1. Righteous indignation: a lot of people seem to be saying–or implying–that it is not appropriate for Google to change the Gmail interface. I don’t much like change either. I get irritated because change slows me down–at least until I learn the new methods. But I don’t think in this instance I have a right to stamp my feet. A company has a right to change the very useful service it offers us for free in whatever way it pleases. In my opinion, as a user of a free service I am not entitled to complain, although complain I do of course. Sure, I can vote with my feet and stop using the service. I suspect, however, that Google is making a bet that we will not all do that;

  2. Irritation: As usual, a Silicon Valley Internet behemoth demonstrates its epistemological arrogance with unfounded assertions about what is and what is not cool: email is going to go away so why don’t we nudge you dinosaurs into the 21st century where you can join us cool kids who know what modern communication should look like. This kind of attitude is certainly condescending and annoying, but it’s not new and it’s not surprising. And, to be fair, I haven’t seen this attitude stated explicitly. It’s just a general attitude that pervades the Internet industry–it’s just more of the same, business as usual for an environment that often places change and “innovation” higher than usefulness and interactional stability and consistency unless change interferes with the bottom line; and

  3. Bewilderment: The design seems to me to counter some very basic human factors and design principles. Note: I am not even asking for something that is aesthetically pleasing, which it is not, I just feel some basic human factors principles are being violated. I elaborate below.

As a number of others have pointed out, cognitively, the new interface disrupts the learned model, the interface that users have grown accustomed to. However, as has also been pointed out, people can relearn new actions with little difficulty. Fair enough. I am more curious about why routinely used, task-related features are hidden by default. Even after the new configuration has been learned, it remains the case that it simply takes more clicks to get to the formatting palette. It is an interface crime to insist people take extra action(s) to get to the functionality that was previously at their fingertips. (The only exception I make to this assertion is when safety-critical issues come to the fore, requiring intermediate steps and cautionary dialog boxes.) 



Figure 1a. The old Gmail formatting palette, directly below the To, Cc, Bcc and Subject text entry boxes



Figure 1b. Click on the A at the bottom of the window



Figure 1c. After clicking on the A, a formatting palette appears


Let’s look at an example from the new Gmail interface, the formatting palette. Figure 1a shows the old interface; Figures 1b and 1c show the new one. As the screenshots show, I used to be able to, by default, leave the formatting palette visible. Now, whether I am a frequent formatting palette user or not, I have to explicitly open it every single time I open the Compose window. The default cannot be personalized to my preference. The rationale for this? It seems very few people use the formatting palette, so hiding it is the solution:

Gmail's lead designer says that "a very small percentage of emails involve a formatting action," so that's the reason why you need two clicks instead of only one click to make text bold.

Hurrumph! It may be that only 1 percent (I made that number up) of the Gmail-using population really want to use the features offered, but it’s really curious why they are not simply being offered the option to tailor their experience and have the formatting palette routinely available. Why make them work harder when an interface option is cheap? Why hide the palette by default, requiring more actions, when you can build in the flexibility to let the few who want it visible have it so? 

I read on various blogs that, as per the quote above, the decision was rooted in the belief that formatting is no longer needed for email, that formatting resides in the arena of formal documents, and that this move makes email more like messaging, less formal, reflecting the belief that formalities, like formatting, are no longer de rigueur. This strikes me as very odd. You don’t have to be an information designer or a layout geek to know that people use formatting to structure information, to improve information consumption and comprehension, and to signal what is salient—underlining, emboldening, color contrast, and a thousand other beautifully researched, explored, and elaborated forms of textual formatting—and some of us like to use them a lot. Formatting is a basic mechanism for highlighting important content but leaving it in context. 

And, sometimes a little formatting can be a social nicety too–you don’t have to be a proponent of the behavioral strictures of Jane Austen’s Victorian politeness landscape to want a little bit of formatting in your emails. I believe that, aside from visual information design aspects, there continue to exist sociocultural norms surrounding appropriate formats for certain kinds of communication. If you send me a job enquiry entirely in bold capitals in Comic Sans MS size 18 point font, you will certainly get my attention, but probably not in a good way. Similarly, if your job enquiry is telegraphic and in 140 characters or less, I may be similarly bemused. Formatting makes information more easily digestible and also can be used to signal awareness and respect for the social circumstances of the exchange.

So, why not let people tailor their digital environment to their personal tastes? Why make a statement that informal email should be the dominant model, inscribe that philosophy into your email Compose design, and insist that your viewpoint has to be encountered and overridden every time the tool is used? 

Ok. Enough on that topic.



Figure 2a. Compose to address to cc/bcc to subject to writing text in the old Gmail interface



Figure 2b. Compose to address, to cc/bcc, to format palette to writing text in the new Gmail interface



Figure 3. Click on the arrow to release the Compose window so you can reposition it. Notably, this arrow to “pop-out” the Compose window exists in the old interface also.


To my second issue. While the formatting point is irritating, there are other features of the new interface that are possibly physically damaging. There are basic ergonomic issues with the new interface; it requires more traversal across screen real estate and more clicks to achieve the same result as the old interface. I did some direct comparisons: the new interface requires more perceptual and physical movement across my display screen (see the differences between the old interface and the new one in Figures 2a and 2b). One can release the display window from the bottom right by clicking on the arrow (Figure 3), and place it in an ergonomically favorable location. But this does require one to move the mouse, taking the cursor from the top left to the bottom right of the screen, click on the small arrow, and then grab the window and drag it to reposition it. These are the kinds of micro-movements that could lead to repetitive strain, strains that may not become evident for a while. When I first tried the new interface, I had to reconfigure my working setup so that my mouse could traverse the display real estate to achieve what I had previously done in a small jog to the right followed by a straight downward movement. The first few times I tried this, my mouse ran out of mouse-pad and had to be lifted and repositioned. 

In trying the new interface out, I was reminded of that classic of HCI lore/law, Fitts’ law. According to Wikipedia, Fitts' law is 

a model of human movement primarily used in human-computer interaction and ergonomics that predicts that the time required to rapidly move to a target area is a function of the distance to the target and the size of the target. Fitts' law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device. It was proposed by Paul Fitts in 1954.

It’s been a while since I actively used Fitts’ Law, but I relearnt from Wikipedia that one common form of Fitts’ Law, for movement along a single dimension, is:

T = a + b log2(1 + D/W)

where:

  • T is the average time taken to complete the movement. 
  • a represents the start/stop time of the device (intercept) and
  • b stands for the inherent speed of the device (slope). These constants can be determined experimentally by fitting a straight line to measured data.
  • D is the distance from the starting point to the center of the target. 
  • W is the width of the target measured along the axis of motion. W can also be thought of as the allowed error tolerance in the final position, since the final point of the motion must fall within +/- W/2 of the target's center.

The equation tells us that there is a speed-accuracy trade-off associated with pointing, where targets that are smaller and/or further away require more time to acquire. I was always told that Fitts’ Law was initially intended to be applied to new movements, not to highly practiced, skilled motions, but as the Wikipedia article points out, Fitts’ Law has been generally applied to both novel and practiced motions. 
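To make that trade-off concrete, here is a minimal numeric sketch in Python that plugs a large, nearby target and a small, distant one into the Shannon formulation above. The device constants a and b and the pixel distances are made-up, illustrative values, not measurements of any real interface.

    import math

    def fitts_time(a, b, distance, width):
        # Predicted average movement time: T = a + b * log2(1 + D/W)
        return a + b * math.log2(1 + distance / width)

    # Illustrative (made-up) device constants: 0.1 s intercept, 0.15 s/bit slope.
    a, b = 0.1, 0.15

    # A large, nearby target vs. a small, distant one (distances and widths in pixels).
    print(round(fitts_time(a, b, distance=200, width=80), 2))   # ~0.37 s
    print(round(fitts_time(a, b, distance=900, width=20), 2))   # ~0.93 s

Under these assumed constants, the small, far target takes roughly two and a half times as long to acquire, which is the intuition behind the general design rules that follow.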

In any case, there are some very general rules that come from Fitts’ Law when it comes to designing interfaces: having to traverse more screen real estate and hunting and pecking for small GUI elements (remember the small A in Figure 1b that needs clicking on) takes more time and effort. So, as Wikipedia nicely summarizes: “buttons and other GUI controls should be a reasonable size; it is relatively difficult to click on small ones. They should be easy to acquire with whatever input device the user has (mouse, trackpad, trackball).” Increasing these kinds of motion also adds strain, a fact that many of us who spend too much time on a computer know only too well. 

I do think the Google engineers were genuinely trying to make an improvement. But, for me, they have made a number of things worse. In the end, people likely won't abandon Gmail, but they may switch to a mail client with better options, keeping Gmail as their mail server while skipping the Web interface. I’d love to see the data: how many people who used to rely solely on webmail are now starting to use mail clients? Maybe not a lot, but I suspect it is more than one (me).

In any case, if you are a Gmail user but haven’t encountered the joys of the new interface yet, you may soon have to try it out–see the message from Google (Figure 4). People like me who swapped back to the old interface will be allowed to do so–temporarily.

Elizabeth F. Churchill is Director of Human Computer Interaction at eBay Research Labs in San Jose, California. She is also vice president of ACM SIGCHI.


Posted in: on Fri, April 26, 2013 - 4:12:13

Elizabeth Churchill





@james s. (2013 06 03)

i’m using less now, didn’t like the change at all, and currently looking for new options… this new one is annoying

@Andrew Miller (2013 06 07)

Interesting post! For me, the changes have been mostly positive, and I am someone who formats text in email fairly frequently. However as an “expert” user, I’ve been using keyboard shortcuts to do most common tasks like bold, underline and italics. I would guess that’s another reason for the low utilization of point-and-click for these features. As for the compose window, it’s definitely further from the compose button. However, it’s a dramatic improvement when replying to an existing conversation, which I find myself doing much more frequently. Just a counterpoint and a reminder that an interface as complex as GMail involves many design tradeoffs. I wouldn’t automatically suppose Google’s designers have forgotten basic UX principles just because the changes are not universally better.

@saund (2013 06 07)

Thank you for the posting, I agree and I especially dislike the hidden cc: field.

Similarly with Microsoft.  In interviewing 30 people about their document collaboration practices, a large number have still not recovered from the Ribbon interface of MS Office 2007, then 2010.  The disruption goes beyond individuals not being able to find familiar tools buried behind multiple steps.  When colleagues refuse to “upgrade”, document versions proliferate with incompatible application formatting. 

@Alejandro Leone (2013 06 07)

Crazy thing is that gmail is free for general users (including like-chat young people) BUT not for companies who “go to google”.
Google Apps, a non-free service, is also affected for this crap. So I ask myself: BBVA is happy with it?

http://googleenterprise.blogspot.com.ar/2012/01/bbva-banks-on-google-apps.html
http://www.google.com/enterprise/apps/business/customers.html

@steinard (2013 06 07)

Hi!

I am the CTO in a technology company and we are paying customers using Google. Now all employees in our company (except one) have switched to email clients instead (Outlook), and we are looking for options, considering MS services instead. I am pretty sure that within one year, we will not be paying customers of Google, and we will likely not be using any of their services either.

Personally I think that Google has lost it’s innocence. You can see the trend of trying to harvest their market potential. Presumably this might be a consequence of declining income from their main business, advertisement on search, as I guess Facebook has taken a rather big share of the Internet advertisement business.

Google’s main product is information about you, you all who use their systems. They can (I am not saying I know to what extent) track your searches, access more information about you from your Google+ account and cross-link this. YouTube has become integrated and so forth. It is possible, to store large amounts of data about you, what you search for, what sites you visit, what interests you have, your background, your education, where you work, where you live, average housing costs in the area where you live, your likely income, where you want to go on vacation, where you went, etc, and from that deduct what products and services that can be advertised to you when you use any of their services (this is similar to what Facebook does). All it takes as a user to make this information gathering a lot harder is to delete cookies.

If users stop using the GMail web-client then google cookies will not be created that often, the Chrome browser and maybe even Google+, then it would become so much harder for Google to harvest such information. The bet here is on users not doing any of this, but making users unhappy increases the risk that more users do change their behavior, it increases the chances for some other service or program to appear and take a piece of the cake. What hurdles can such applications bring (like automatic deletion of other cookies etc).

Google themselves grew to the size they are today by offering users better services/products then their competitors did. If they don’t keep their edge, then that market share can just as easily be lost (although not in a wink). Therefor I think it is the users perfect right to complain to Google when they change something that they don’t like. After all, the users are feeding Google the information they need to make a profit on advertisement and search.

For all we know, Google might not be a big actor on the Internet in ten years, maybe their main concern is running power plants (they certainly show interest in that industry), who knows, take nothing for granted, make a local copy of your beloved mail!

@Christie (2013 06 08)

“In my opinion, as a user of a free service I am not entitled to complain,”

Whyever not?  People offer “free” services because they get something from doing so.  It might only be a feeling of satisfaction that people like what is being provided.  It might be that they believe you will like whatever is being given away well enough that you will in the future pay for that or another product/service.  Or, there might, as in Google’s case, be a financial reward that comes not from the direct users but from advertisers who wish to reach those users.  In any case, an offer is made and accepted.  Free or not, you are entitled to complain about the product or service, especially if what is offered is something that will have an ongoing role in your life.  Any organization, be it a non-profit or a for profit business, is foolish to not listen when users of its “free” services become unhappy.  We all have alternatives and users will seek out what best suits them.  Google may be betting that only a tiny minority will leave, and they may well be right, at least in the short term.  But I would bet that many people who stay because it is easier to stay than to leave will change from being enthusiastic Gmail users to resigned Gmail users who will no longer encourage others to try it and who will be highly susceptible to being enticed away should a new service come along that is getting some buzz.

Andrew - Since you could already pop out the compose window with the old interface and thus still have access to your inbox and any existing conversations, why do you consider the new, under-sized, poorly located, compose window that requires more clicks to do anything to be an improvement?  Personally, I find having the inbox visible while composing to be a distraction.  Under the old compose, it was available for those who wanted it and more traditionally not visible for those who didn’t want it.  Now we ALL have to deal with it.

@leo wang (2013 08 19)

I cannot agree more. the new gmail interface absolutely sucks. I stopped use it more, try to find a good email client for that.

@ 5267471 (2013 10 01)

To revert to the old interface: http://webapps.stackexchange.com/a/50178/18147

@Pete (2014 04 18)

Great article! I think Google is trying to save money by trying to merge their desktop GUIs with mobile. Try again Google, you failed.. Reminds me of what MS did with 8. TIP: Try Gmail basic HTML to get back some old features, such as the full screen mail editor: https://mail.google.com/mail/?ui=html

@Slade Barker (2014 07 06)

ELIZABETH, could you UPDATE this column or at least add a NEW COMMENT below? I enjoyed this post because I despised the new Gmail interface the moment I tried to use it. I quickly switched back to the old interface. I did not stick around long enough to discover the flaws you cite, but they would have infuriated me, as I too use the features that supposedly nobody uses anymore. The big surprise for me is this idea that the old interface “is going away soon.” It is more than one year after this post, yet Google has still not taken it away from me. DID GOOGLE HONCHOS CHANGE THEIR MINDS? Or SHOULD I BE FRIGHTENED?


Making wearables, umm, bearable


Authors: Uday Gajendar
Posted: Fri, April 26, 2013 - 9:19:33

Wearable devices seem to be all the rage lately, from personal monitoring devices (like Nike FuelBand or FitBit) to smartpens (LiveScribe) to Google Glass, and beyond (medical accessories for the iPhone). I would also include the ubiquitous smartphone as a wearable, since we usually carry it on our body, in a jacket or pants pocket. And there's tremendous buzz on the interwebs pointing to a possible "smart watch" coming from Apple, Google, and/or Microsoft, after the success of the Kickstarter-based project Pebble. Whew!

Having a FuelBand monitor and LiveScribe pen, I can certainly attest to the personal lifestyle benefits of such intimate networked devices on my body, with ties to services (Evernote) and devices (iPhone apps)—visualizing data, tracking goals, enabling productivity anytime and anywhere. However, it's not all digital wonderment. 

There is a string of emerging issues for wearables in terms of UX and tech performance that we're just starting to realize. Here is a sample set:

Information jetlag: This term comes from noted "cyborg anthropologist" Amber Case's 2012 SXSW talk, in which she cited an issue we all run into when running a single service across multiple connected devices: You get an alert on one device, you clear it or respond to it, but that alert persists on other devices. They haven't "gotten up to speed" that you've already replied to the tweet or Facebook notification. So you end up playing "whack-a-mole" amongst your devices and services. An annoying inconvenience that reveals "smart" syncing has a long way to go yet!

Battery power: Being "wearable" and on your body, you'd think such devices would leverage the kinetic energy of your moving around all day, particularly those that actively track your activity! After all, with more wearable devices coming into our possession (watch, pen, phone, glasses), it becomes a (costly) chore to charge everything every night! Meanwhile, my Nike Triax digital watch from 2003 still runs, and I haven't changed the battery since the purchase date. I've forgotten about it, so it fits my lifestyle fluidly. Having to charge something breaks the silent, transparent insinuation of a device into our lives. A great UX and tech opportunity would be to have implicit charging through routine use, via motion or even solar.

Misplacing devices: Now that we've got all these relatively small devices, the likelihood of misplacing one or all of them at a bar (ahem), on the subway, or in an airplane seat pocket increases. Yet another costly situation to be mindful of; coupled with the "jetlag" syncing, you may lose your data along with the device. We need to shape a multi-wearable device UX that recognizes people are forgetful, busy folks running around, lost in trains of thought (if not the train itself). How might such a "body-net" of wearables keep track of each other? A device finder app is a huge market opp!

Modalities of interaction: How exactly does someone interact with a pen or a tracker or a pair of glasses? How is that made known to the user yet implicit in the manner of interaction so as to avoid sticking out awkwardly? When I picked up my new LiveScribe, I had hoped for Siri-style voice interaction, along with hand gestures not locked to the dot-print paper, but I was disappointed. The FuelBand has a single button for tapping with a rich array of colorful LEDs communicating status and data. Glass features a nice choreography of voice, touch, and head tilts to visualize and augment your experience of the world. Shaping that mix of modality will be key to finding the sweet spot of how the wearable and its service smoothly fits into a person's daily routines.

Social perception of self: OK, let's face it—Google Glass makes you look goofy. A giant smartpen in your pocket might cause more snickering. Someone may mistake your FuelBand for a substance-abuse wrist manacle imposed by local authorities. While we've mostly become accustomed to seeing people "talking to themselves" with their barely visible BT headsets/earpieces, wearables in general cause a shift in perception of what's happening, where it's happening, and by whom. What are those people doing, and is it now "normal"? Care must be taken in the early days to not make someone feel awkward, but rather empowered and confident. That comes back to modalities of interaction, visual cues, body and gestural language in public, etc.

These are just a few emerging issues already arising with the spread of wearables. Looking back on them, it occurs to me that they become criteria for what could make a wearable not just smart and useful but actually a helpful buddy, enabling you to get through your day as a knowledge worker, social butterfly, or resourceful family caregiver. This only skims the surface of what it means to truly improve the human condition by virtue of such personal devices. 


Posted in: on Fri, April 26, 2013 - 9:19:33

Uday Gajendar

Uday Gajendar is Director of User Experience at CloudPhysics, focused on bringing beauty and soul to Big Data for virtualized datacenters.


Taking UX to Eleven


Authors: Joe Sokohl
Posted: Wed, April 24, 2013 - 9:44:01

About a hundred years ago, I worked as a road manager. Often, when I tell folks this, they get all misty-eyed, somewhat dewy, and ask, "What was it like?" I think images of Scooter Herring run across their eyes, or strains of Jackson Browne's "Rosie" waft melodically in an Ohrwurm kind of way.

The first time I worked as a road manager was after I'd met the Bill Blue Band when they played an in-studio live concert. I bonded with the band, from their incendiary guitarists to their rocking rhythms and gypsy attitudes. We would pile into an unreliable Ford Econoline van on top of bass cabinets and road cases as we played from Baltimore's No Fish Today to Richmond's Hard Times to Raleigh's Pier to Atlanta's Blind Willie's and points in between.

Ten years later, I went on the road with blues guitarist Jimmy Thackery and his band the Assassins. While it was a higher-level band, the principles of the relationship between a road manager, the band, the venues, and the audiences remained the same.

Recently while working on a major project, I came up with this analogy: The UX team is the band, the client is the venue, and the users are the audience.

* The band's job is to create the base experience. They write the songs, they perform the music, and they create a show. 

* The venue's job is to host an environment in which this experience can occur. They provide the space, the lights, the sound system (in most cases these days), the food, the booze, and the branding.

* The audience's job is to pay their money to the venue so the band can create the basic, core elements of the experience. The goals of the audience include participating in a social event (as opposed to simply listening to the band on their stereo), getting out, getting drunk, whatever...but it is the experience of being there that exists only because a band comes together and performs the music in a hall the venue provides.

In all of this continuum, a road manager (also known as a tour manager or personal manager, depending on the level and structure of the band) enables this experience to occur seamlessly:

* The road manager makes sure the band gets to where they need to be, that they have the skills, attitude, and equipment they need, and that they're up to the task.

* Often, the road manager must ensure the band works well with the venue, guaranteeing the venue understands the needs of the band as well as the needs and expectations of the audience. David Lee Roth (yes, that David Lee Roth) detailed the story of Van Halen's "no brown M&Ms" policy from their rider. Their road manager was responsible for overseeing this seemingly excessive bit of rock & roll self-indulgence; yet by ensuring the venue read the contract closely, the band could worry less about whether the power was sufficient or the stage safe and instead concentrate on creating an experience for its fans.

A road manager is a person who understands the needs and skills of the band, who understands the goals of the venue, but who ultimately focuses efforts on ensuring the audience turns into swarms of loyal fans. That's why I love the movies Almost Famous and The Commitments; though ostensibly about the music, they show the importance of a manager on the road with a group.

By having a road manager, a band can concentrate on their craft, their art, their muse...and their performance. The road manager is at best a person who understands not just logistics but also politics, art, and experience.

What's needed in the UX world is that UX road manager: An experienced yet perspicacious practitioner who enables experiences to occur, shepherding the UX team with a studied, experienced, and enthusiastic hand. I see this person as someone who's done the craft of the UX band but who sees herself as wanting to orchestrate even more. 

The title can change, evolve, and morph. User experience architect, UX strategist, chief UX evangelist, whatever…as long as it's a passionate person who cares about making the experience the best it can be within the milieu of band, venue, and audience.

Somehow, someone needs to balance the needs of the UX team (who want to do a great job), the client (who wants to meet their business goals), and the user (who wants to get stuff done, accomplish something, or feel something). Where possible, having a UX road manager can make the difference… and truly make the band on stage.


Posted in: on Wed, April 24, 2013 - 9:44:01

Joe Sokohl

For 20 years Joe Sokohl has concentrated on crafting excellent user experiences using content strategy, information architecture, interaction design, and user research. He helps companies effectively integrate user experience into product development. Currently he is the principal of Regular Joe Consulting, LLC. He’s been a soldier, cook, radio DJ, blues road manager, and reporter once upon a time. He tweets at @mojoguzzi and blogs at sokohl.com.


Ai Weiwei, names, and memories (The background - foreground playground)


Authors: Deborah Tatar
Posted: Tue, April 23, 2013 - 9:56:02

This is the fourth in a series of postings I’ve been writing about Ai Weiwei’s recently closed retrospective at the Hirshhorn, the significance of Ai Weiwei’s art with respect to its commentary on the children who died in the Sichuan earthquake in 2008, and, implicitly, the relationship between design, representations, and society. This is, of course, a more art-centric view of design than is usually found among a pragmatic ACM audience, but, arguably, this view is sometimes undervalued. Curse you, C.P. Snow!

The various pieces of art I’ve been describing occupy a thin space of commentary between the children’s deaths and the Chinese system, and raise other thoughts vis-à-vis non-Chinese, or at least American, customs, thoughts, practices, and technologies. 

Ai Weiwei has a little book—not red—called Ai Weiweisms. In it, he says, and this was posted on the wall of his show:

"A name is the first and final marker of individual rights, one fixed part of the ever-changing human world. A name is the most basic characteristic of our human rights: no matter how poor or how rich, all living people have a name, and it is endowed with good wishes, the expectant blessings of kindness and virtue."

This is very beautiful. 

But I wonder, is it true? Or, rather, in what ways do we make it be true or not?

I am not sure that Ai Weiwei always makes it be true. In the piece I described last time, Ai Weiwei used the dead children’s names to make an account of their deaths. He attempted to give, himself, the account that the Chinese government should give and will not. But, on my reading, an account is a tidying up, a normalizing, a distancing. An account is not an encounter; it is not grief; it is not confrontation. It is only part of what is needed. So, for me this piece was not really in the end about the children. It was not a memorial of them or their deaths but rather a testament to the failure of the Chinese system. 

In America today, our own memorials are also mostly reduced to accounts. J.B. Jackson [1] describes how the changing notions of our relationship to death led away from pastoral encounters in such places as Mount Auburn cemetery in Cambridge or the plots out back we still have here in Southwestern Virginia, to the long bland arrays of, for example, an Arlington National Cemetery. There is a loss in such a reductive accounting, which is perhaps why so few people in my family are willing to be buried in American cemeteries. They would rather be scattered. They would rather contemplate glorious nothingness than impotent formalisms. 

My own ancestors felt differently, and the Americans are mostly buried in the Jewish cemetery in Gloversville, New York. Gloversville was the thriving center of, yes, the glove trade in America from the time of Sir Henry Fonda (ancestor of Henry, Jane, and Peter) up until the miserable death of the industry documented by Richard Russo in books like Mohawk. (Richard Rubin, the glove expert in the O.J. Simpson trial, grew up catty-corner from my father in Gloversville.) 

My uncle was buried in the orderly part of the Jewish cemetery two years ago, but there is also a more atavistic portion featuring the plunking down of bodies, reminiscent of the 11-fold piling of graves in the extant Jewish cemetery of Prague. Small, book-size, uneven stone markers are now illegible. I couldn’t find my grandmother’s brother, who died of tuberculosis in the 1910s, although he is almost certainly buried there. The plunking down of bodies is accompanied by memorable oddities. As you see in the picture below, by 1918 (or whenever the headstone was put in) there was enough prosperity to afford a substantial gravestone, but enough freedom of action to allow idiosyncrasy. I’m not exactly sure why Ruben Szpacenkopf, who was my grandfather’s grandfather, is memorialized as the “Grandfather of TATARS” [2]. Everyone who could answer this question is dead. But I think I would find the inscription noteworthy even if my name were not Tatar. Clearly there is some kind of expectant blessing of kindness and virtue associated with this use of the name.

Both my mother and my father had connections to Gloversville, and my mother’s uncle Harry used to tell me that he remembered my grandfather’s grandfather (on my father’s side) often stopping by his house for a glass tea with my great-grandmother (on my mother’s side) on his way home from shul. Great-Uncle Harry’s point was that Rebbe Szpacenkopf was born before the Civil War, and that, through him (Harry), I was connected with that remote era by only two degrees of separation. 

My mother’s parents are not buried in the Gloversville Jewish cemetery, for two reasons. The first is that they had a big fight with Great-Uncle Harry and his wife Cecile and moved their burial plots to Fort Lee, New Jersey. But why did they move their plots? One day, my grandmother explained “I don’t want Cecile looking down at me for all of eternity.” This did not correspond with any Jewish theology known to me. I said, “Grandma, do you feel that the spirits of dead people inhabit their graves until the apocalypse?” She muttered something incomprehensible. After a pause, I asked her whether she had ever seen “Our Town,” the play by Thornton Wilder, in which dead characters re-enact small town American life. She had no idea what I was talking about. But, just as she and my grandfather had moved to an apartment in New York City, they chose to be buried in an apartment-like mausoleum amidst strangers. It makes me happy to imagine, just as in life, their neighbors pounding on the walls between the coffins for them to turn the damn TV down. 

In this digital age, I don’t know how I want to be remembered, by whom and for what purposes. However, I am sure of one thing. My father-in-law, dead for 25 years, is still “remembered” by AARP’s databases by the junk mail that we receive and, long after death, evidently moved with us from California to Virginia. At least, on this view, he’s not sitting on a gravestone in California talking with his neighbors about vitamins and Linus Pauling. But … yuch. To live on as a label? Neither AARP nor Ai Weiwei can bring back the dead by using their names. 

I don’t want my name to live on in archives or for these purposes. Curiously, Rebbe Szpacenkopf’s memorial—the one in which there is no attempt to portray him as alive—vivifies a memory of people who are now themselves just memories. It seems the most attractive to me. But, failing that, when push comes to shove(l), I’ll probably prefer glorious nothingness to any of my choices. 

Endnotes:

1. Jackson, J.B. (1997) From Monument to Place, an essay published in Landscape in Sight: Looking at America. New Haven: Yale University Press.

2. Rebbe Szpacenkopf was definitely not the grandfather of the nomadic Tatar peoples of the Crimea, who are descendants of Genghis Khan.


Posted in: on Tue, April 23, 2013 - 9:56:02

Deborah Tatar

Deborah Tatar is an associate professor of computer science and, by courtesy, psychology, at Virginia Tech.


@Hal Tatar (2013 04 25)

Bravo from the author’s dad!

@Meg Dickey-Kurdziolek (2013 05 08)

Great post! Leaves a lot to think about!

It is interesting to me that in technology our names are treated like variables in programming. You can name your variable whatever you want - it is just used as a reference to get to the “meaning” or actual value. When we die, it is the equivalent of just deleting a user account. Just a bit flip. There are no technological tombstones or virtual ashes to scatter. Our “memories” don’t live on with the websites that care about us, they are just cleared. (Well, except for the AARP in your father-in-law’s case.)

@Pardha (2013 07 07)

Thanks Deborah for another interesting post. Before the computer age, I feel one had more control on when, where, and how one’s name shows up. Now with our names and personal information available for anyone to compile and post online things are different. Just try googling your own name and see!

@Nanci fisher (2014 04 12)

I am the great granddaughter of Sarah Gitel Szpacenkop.  My grandfather was Jacob Tatar.  We must be cousins.  I do genaology and have a family tree.  Write.

@Nanci Fisher (2014 07 22)

I am the great granddaughter oh Sarah Gittel Szpacenkopf .  My grand father was Jacob Tatar.  My mom is Miriam Tatar Pritkin.  I am doing family geneology .  Would love to connect with you.


So you learned stuff. Now what?


Authors: Lauren Chapman Ruiz
Posted: Mon, April 22, 2013 - 8:44:47

I recently returned from a trip in which my team and I conducted a series of contextual inquiries with over 75 people on the topic of spinal cord injuries. We spoke with patients, caregivers, doctors, and nurses. It was one of the most humbling experiences of my life. My team’s ability to develop empathy and interview with honesty was critical, with strangers opening up to strangers. But what happens after all this research is completed? This is one of the hardest parts of design—translating needs into tangible direction, taking a leap from insight into making. A designer’s job is to turn research findings and client needs into something, which, ultimately, both serves the person’s needs and is a sustainable and profitable business investment. 

Document, document, document

But let’s take a step back for a moment. You have just completed your research.

Naturally, it will first be synthesized and documented. But one of the worst behaviors I’ve found in organizations is the lack of proper documentation. Thousands of dollars are poured into research, and after it is conducted, it remains only in the heads of the researchers. The researchers become the gatekeepers to the insights. Often there is only a team discussion on the findings and actions are taken based on a light analysis. Due to the lack of proper documentation, the team doesn’t have the opportunity to see how the results could impact other current or future initiatives. And what happens when that researcher leaves? All of that information is gone. 

Analysis and documentation are critical. At MAYA Design, we like to say research is a three-legged stool. You need the three legs of preparation, execution, and analysis—with equal effort and time for each. Without one, the stool will topple.

 


Turn key insights into stories

The process of analysis and documentation usually results in a set of key insights or lessons learned, personas, experience diagrams, labeling of breakdowns and opportunities, mapping of behavior and systems, and so on. Regardless of form, all of these deliverables tell a story about the research and what was discovered. 

The form or type of deliverable can vary depending on the type of research and the purpose for it. I’ve noticed research is often used for three types of functions:

  • Evaluative research of a concept, prototype, and/or a current product or service
  • Need-finding research to derive completely new, innovative ideas for a business
  • Need-finding research to discover where to go or how to improve upon current products or services

Use generative activities to create design concepts

Once you’ve documented your research in the appropriate form, it’s time to make that “So what?” jump. You’ll need to do the hard thinking to translate the information into what it strategically means for your project or client. Often we take our findings and structure them into several generative activities to be completed in collaboration with our client. Examples of these activities might be developing statement starters to use in a creative matrix and then advancing the resulting ideas further with concept posters. These posters will illustrate a refined concept based on real needs. They might include the stakeholders and personas for which they were created, the needs which they fulfill, the details of the design and how the idea works, a development timeline, and/or challenges to producing the idea. 

Build shared understanding

The purpose of these exercises is to collectively leap into what the research means and what next steps the client will need to take. It is important to do this as a group, creating a sense of shared ownership and gathering as many ideas as possible. Everyone will look at the research findings and interpret their meaning a little bit differently. Beware a single loud voice creating groupthink—instead, use activities that invite input from the whole team. This is how many ideas start to coalesce. And then it’s the designer’s job to begin shaping the vision. 

Uncover transformational opportunities

For our project, the research uncovered opportunities to approach the problem in a holistic, systematic way rather than with a single-solution concept. If we hadn’t taken that analytical leap, if we had stayed buried in empathy and data, unable to synthesize our experiences critically, we would never have had the chance to actually improve the lives of others. 

Lauren Chapman is a designer and researcher at MAYA Design in Pittsburgh, PA. She is also an adjunct faculty member at Carnegie Mellon University.


Posted in: on Mon, April 22, 2013 - 8:44:47

Lauren Chapman Ruiz

Lauren Chapman Ruiz is an Interaction Designer at Cooper in San Francisco, CA, and has been an adjunct faculty member at Carnegie Mellon University.


The end of civilization as we knew it


Authors: Aaron Marcus
Posted: Fri, April 19, 2013 - 7:17:34

Is it just me? I don’t think so. I do not think that I am being singled out by sinister forces whose objective is to end usability, usefulness, and enjoyment of user experiences as I knew them and hoped would continue. However, I am becoming increasingly worried about the future of civilization. Consider these cautionary tales.

Apple

In August 2011, I purchased a new Apple MacBook Pro from a local Apple Store. I was warned about jumping two levels in the operating systems to OS 10.7. Apple operating system versions are given charming (?) animal names by Apple but I refuse to learn them because they are impossible to remember in terms of sequence. Is a Leopard before a Lion? I suppose alphabetically it is. I decided: If I have to suffer OS migration terror, I might as well do it just once. In retrospect, I feel I made the wrong decision. The experience was disastrous, and I was crippled to about 50 percent productivity for a period of about four months, from September 2011 through January 2012.

Among other things, the Apple store provided a machine with a track-pad that was dysfunctional. I had to replace the computer. A second computer was delivered, but I discovered it had the incorrect operating system installed. I had to return that computer, but I had made the mistake of tossing away the box in which the computer came. The Apple store would not accept my computer without an Apple box. How many boxes are we customers supposed to store? One for every piece of electronic equipment? I had to purchase a second (actually third) computer, so I could cannibalize the box, return the other computer, and get my money back, hoping that my third Apple MacBook Pro would work perfectly. No such luck, but I am stuck with it. I was so dissatisfied with that Apple store’s representatives that I transferred my registration to another Apple store with the hope that its crew might be better. It was, marginally. Among other Apple MacBook Pro annoyances:

Apple’s own clock/calendar widget has a bug in it that states the wrong date! Today, 11 April 2013, my Apple-provided clock/calendar says 5 April. I have tried numerous times without success to correct/clear/reset this function. I took my computer back to the Apple store, and several “geniuses” attempted to reload the operating system and to perform “magic,” but nothing worked. To this date, the error persists. 

At the time, Apple’s own Mail program, which I considered inferior to Eudora (Eudora was no longer supported and it would not work on the newest OS), had bugs in it, including the fact that if I selected text and wished to change the color to black to make it more legible against a white background, Mail changed the color to white, and the text disappeared! I have noticed in the last week that this bug somehow disappeared over the past months, perhaps with updates of the OS. Nevertheless, it seems to me startling that such a bug should have been there in the first place with a long-standing application.

Apple’s own Address Book application, burdened by its dysfunctional skeuomorphic appearance, which locks in place its layout and size, was a disaster for me. I was never able to find an easy route to port my contacts from Eudora to Mail, except by paying about $1000 to a systems professional (admittedly for that as well as other corrective measures) to pull off the text of all contacts, including many detailed notes, and cleverly recode them so that many, but not all, wound up in the Address Book. However, I have had to individually adjust the contents and layout of each one over time in order to make them functional for me. Another astonishing “design decision” on Apple’s part was to have the search bar constantly interrogate the entire Address Book as I typed characters of the name or company I was seeking. The typing speed slowed to such a glacial pace that I have had to resort to Apple’s Spotlight widget for searching through my Address Book. It works, sort of, but is this the way “insanely great” products are supposed to work?

One other annoying “feature” is that Apple seems intent on forcing all users to treat their desktop/portable computers as if they were iPads, in preparation for the Next Big Thing. I found these gestures inconsistent, unfamiliar, erratic in their behavior, and ill suited to my hand positions on a keyboard (with my two thumbs ready to do movement/navigation with my right thumb and selection with my left). Apple’s previous two-part trackpad was much more usable and useful for this purpose than the single surface of the current computer. I turned all these gestures off, except for two-finger scrolling and enlargement/reduction in size with two fingers. Also, I turned on scroll bars, a control that Apple had hidden away, and discovered they were reduced in size, making them awkwardly smaller targets, without any ability to change their size.

What was most unsettling is this: Apple’s software came full of bugs and dysfunctional enhancements, in addition to forcing significant discontinuation of previously useful applications. I wondered about Steve Jobs, his achievements, his legacy, and Apple’s future. His mastery seemed to be in marketing hype and pretty objects, not in actual software usability. Granted, Apple made changes in user-interface design that were better than Microsoft’s, but my experiences during this period of time have led me to think that anyone who thinks Apple’s products are insanely great seems likely to be greatly insane.

Comcast

Another, similar, set of user experiences occurred with Comcast. After about seven years using DirecTV to view/record satellite television, I decided for cost reasons to switch to Comcast and to combine cable television reception with telephone service and faster Internet than AT&T could provide at the time. I must admit, I was used to the DirecTV program guide, which had improved significantly over the years and was quite legible, readable, usable, useful, and appealing.

Again, as with Apple, the migration was painful. Neither of the television companies, DirecTV or Comcast, provides an easy way to retain/transfer saved movies, even with migration of their own products. Most of my saved programs were lost in the change-over process (I found a slow, tedious workaround to save some of them via VHS copies and then make DVD copies of those VHS copies).

The installation process was painful. I later learned, once things were set up and running, that the initial Comcast installer had removed or reused the cables for my own rooftop TV antenna, which rendered that antenna useless for my other off-air digital TVs and FM radios not connected to Comcast. I had to install quaint rabbit-ear antennas on the other television sets.

The electronic program guide (EPG) was horrible. I was amazed at its poor quality in comparison to DirecTV and wondered at the millions of customers who suffered through it. Did they know about the, well, almost beautiful, version that DirecTV provided?

What astonished me recently was the installation of one new set-top box on the main television screen, the Xfinity X1, which replaced an earlier set-top box. The Comcast installer said that both units he had on his truck were dysfunctional and not booting up properly! He left and said someone else would come a day or two later to attempt a second installation. This second installer eventually replaced my equipment. The installer quickly assured me that everything was working and departed. It seemed in proper order, but, really, how was I to know about all the new functions that this box enabled?

To my pleasant surprise, its EPG was much more useful than the old system. Comcast had finally, after many years, improved its EPG, even though it was still not quite as well designed as DirecTV’s. Selecting Favorites and preparing a list of preferred stations was still a hidden, frustrating process, involving many steps. When I called one Comcast representative, that person did not even know how to set up Favorites, yet it seems like one of the most likely activities of a new customer: to select the 100 really desired stations from the 800+ that are available. Comcast does not make this process easy.

However, the worst was yet to come. Weeks later, I attempted to retrieve and view a favorite television show. When I tried, the screen froze into a gray default Xfinity screen, and eventually an error message box appeared suggesting I call Comcast Customer Service. I did. That person could not solve the problem and said Comcast would call to either solve the problem or send yet another installer to replace the box for the fourth time. No one called. After two days, I called Comcast. The customer-service technician spoke strangely slowly, could not seem to communicate well, could not solve the problem remotely, and dropped the call when she attempted to transfer me to a supervisor. I had to call again, make my way through the phone-messaging system (I was at last getting quite adept at the numerous steps), and speak to a third technician. He could not solve the problem and transferred me to a supervisor. After sending some “strong signals,” as he called them, and my rebooting the device for perhaps the fourth time, the functionality was restored. Hooray! Then, as I was trying to thank him and comment on Comcast’s poor technician behavior and phone system, his call dropped also! Fortunately, he had given me his direct number, and I called back to thank him.

Although Comcast eventually restored this functionality, the episode had cost me significant time (perhaps one to two hours), numerous calls to a vendor, and exposure to what seems to be unstable or faulty equipment and poorly trained technicians provided by a major company in the U.S. market. Is this a trend?

Entries for the journal of inevitable disaster

Well, taken individually, each of these events can be attributed to random fluctuations in Murphy’s Law: Anything that can go wrong, eventually will. Mind you, those of us fortunate to have such challenges live in a world surrounded by vast arrays of equipment and networks connecting them. To be fair: Most things work most of the time. If that were not true, we would be living truly in Hell on Earth, as some unfortunate people do in some countries.

What gives me pause is the seeming increase in the number of things that don’t work at all, that don’t work well, and that weren’t designed well in the first place. This, despite the progress we seem to have made, judging from the dazzling array of appealing, colorful new products/services.

Were the products mentioned above shipped out too soon? Did people forget to do quality control and quality assurance? Have businesses learned so little from 40 years of user-interface and user-experience design professionals that they produce products and services that reach major segments of the U.S. and world markets and still have strange, unsightly, unjustifiable, intolerable bugs?

Are things getting worse? Are we in a gradual decline because we are just losing our grip on producing complex systems that can be maintained and serviced by technicians who have increasingly poor educations, poor communication skills, and poor manners? I fear for my children’s and grandchildren’s lives.

When corporations say naively that they intend to do only good and no evil, are they just being naive or ingenuous? Or are they in their own sophisticated way, just part of the problem?

Let us consider further ... while there is time...


Posted in: on Fri, April 19, 2013 - 7:17:34

Aaron Marcus

Aaron Marcus is president at Aaron Marcus and Associates, Inc. (AM+A) in Berkeley, California.


@ 5633953 (2013 04 24)

But Aaron, you paid a lot of money for your MacBook Pro, doesn’t that make you feel better about the experience?

@susan.e.hensley@wellsfargo.com (2013 07 02)

I’ve been pondering many of these issues lately, through a glass, darkly.

I think we’re going through a “complexity inflection point” where our individual brains, optimized for simpler times, are no longer able to deal with the many problems we face (and not incidentally, created). In response, we’ve come up with systems that allow our minds to meld, to collectively come up with solutions using filters and rankings and whatnot.

But doesn’t that “solution” just add more complexity? A downward spiral into chaos?

Not necessarily. Out of, or on top of, complex systems emerge novel behaviors and patterns. Ants and anthills. Flocking birds. Traffic jams. Wikipedia.

My hope is that we’re on the cusp of a phase transition from a world where our machines have become too complex to effectively service to one one where the machines can service themselves. Yes, I’m invoking AI. Or more accurately:  AGI, Artificial Generalized Intelligence.

That’s one outcome of many, I fully realize. But it’s the one we need.

-Tristan Naramore


Recursion: A thinking utensil in the creativity kitchen


Authors: Tek-Jin Nam
Posted: Wed, April 17, 2013 - 6:48:28

If I had to recommend a book about creative thinking, it would be Robert and Michele Root-Bernstein’s Sparks of Genius: The Thirteen Thinking Tools of the World’s Most Creative People [1]. I read the book on my flight to London, and I was particularly interested in the section on abstraction, which discusses how great artists like Pablo Picasso and Henry Moore used it in their work. Shortly after I arrived in London, I had the opportunity to see one of Moore’s sculptures at the RCA faculty lounge, where I took a photo with his wax figure. From the introduction to the picture poem in the same part of the book, I was able to sense how beauty emerged from the synergy between text and graphic, and from the artwork’s abstraction.

Among the 13 thinking tools, analogy, abstraction, pattern formation, and transformation are the ones that many designers use every day. Through numerous examples, the authors show the similarities between creative thought processes in science, art, the humanities, music, literature, and engineering. The authors creatively and artfully organize creative thinking methods with a systematic investigation of great minds from a wide range of fields.

The Root-Bernsteins argue that a master of thinking, like a cook, mixes and combines various mental materials. One has to practice various recipes with cooking utensils in a creativity kitchen. I like this analogy because I too think of designers as chefs on an interdisciplinary team (as described in a previous post). I was intrigued that many of the tools, if customized, could be cooking utensils and recipes for designers, which makes the mental cooking of the designer unique and outstanding. 

The title of the Korean version of Sparks of Genius is The Birth of Thinking. This implies that the tools can be used for creating thoughts. However, I think of the tools as more useful for evaluating new thoughts. I often use these tools to critique student designs. For practical use in design education and practice, I wish the thinking tools could be customized to both generate and evaluate creative thoughts. I am not sure if such a customization is possible, since creative thoughts are often generated spontaneously. Even if such customized tools existed, it might be difficult to find effective methods for using the tools in design education and practice.

One tool I found suitable for such a purpose is recursion. Recursion is the process of repeating items in a self-similar way. The process of self-reference generates a new solution almost automatically. Two parallel mirrors facing each other are symbolic objects of recursion: the images in the mirrors are repeated infinitely. The process, the result, and the experience often provide feelings of mystery, elegance, beauty, and fun. Recursion is well known in fields such as linguistics, mathematics, and computer science. I first experienced the artistic elegance of recursion when learning computer programming. It was difficult for me to understand at first; the flow was easily twisted and tangled. However, once the repeating cycle of recursion is set up, beautiful solutions are generated from just a few lines of code. I often get similar feelings of beauty and elegance from mathematical fractals and from Escher’s artwork.
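As a small illustration of that few-lines-of-code elegance (my own toy example, not one from the book), here is a recursive fractal tree written with Python's built-in turtle module; each branch simply draws two smaller copies of itself until the branches become too short.

import turtle

def branch(t, length):
    if length < 10:          # base case: stop when branches get too small
        return
    t.forward(length)
    t.left(30)
    branch(t, length * 0.7)  # recursive call: a smaller left branch
    t.right(60)
    branch(t, length * 0.7)  # recursive call: a smaller right branch
    t.left(30)
    t.backward(length)       # return to the branch point

t = turtle.Turtle()
t.speed(0)
t.left(90)                   # point the turtle upward
branch(t, 120)
turtle.done()

The base case is what keeps the self-reference from twisting into an infinite tangle; everything else is the function calling itself.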

Graphics and logic are not the only fields that use recursion. It can be applied in creative storytelling as well. Films that involve time travel are representative examples of recursive storytelling. In The Terminator, John Connor sends Kyle Reese back in time to protect Connor’s mother. The fact that Kyle becomes his father is the highlight and the story’s dramatic reversal. Charlie Kaufman’s Adaptation, which I mentioned in an earlier post, also employs a recursive narrative. The screenwriter who adapts the story is also the protagonist. The film is about its own genesis and shows the process of Charlie Kaufman writing the screenplay of the movie.

Reflexivity is strongly related to recursion and is more popularly used in the social sciences. Reflexivity refers to the circular relationships between cause and effect. Reflexivity focuses more on the relationship while recursion focuses on the process. The difference is that reflexivity starts with two entities, such as the chicken and the egg. Recursion can start from itself. But I think the two notions are similar. In film, a film about making a film is called reflexive. It makes the audience aware of the filmmaking process. It can also be called a recursive film. I think the creative power of recursion and reflexivity is rooted in the resonance and internal symmetry produced by iterative self-reference.

A few years ago at SIGGRAPH’s art and design gallery, Misung Lee and I exhibited an interactive media installation titled Through the Time Tunnel [2]. It was inspired by the concept of recursion. We borrowed the concept of the recursive images created when two mirrors face each other. In that setting, the repeated images were seen in the mirrors at the same time. Strictly speaking, the reflected images were scenes from the past. Hence, the mirrors were a medium for meeting one’s past self. We exaggerated this situation by having a person face a past scene through a mirror that was digitally simulated by a camera and a projector. We created a tunnel effect with the images within images that resulted from the digital cameras. We added more time delay in the reflected and nested images so that they showed scenes from the past. We added simple tunnel navigation buttons that allowed visitors to control and experience time travel in space.
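For readers curious about the mechanism, here is a minimal sketch of the idea, assuming a webcam and the OpenCV library (an illustration only, not the installation's actual code). Each displayed frame embeds a scaled-down frame from a fixed number of frames in the past, and because the composited frame is itself stored, the past images nest recursively into a tunnel.

import collections
import cv2

DELAY_FRAMES = 30   # how far into the past the nested image comes from (assumed value)
SCALE = 0.6         # size of the nested "mirror" relative to the full frame

buffer = collections.deque(maxlen=DELAY_FRAMES)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if len(buffer) == buffer.maxlen:
        past = buffer[0]                        # oldest stored frame = a scene from the past
        h, w = frame.shape[:2]
        nested = cv2.resize(past, (int(w * SCALE), int(h * SCALE)))
        y0 = (h - nested.shape[0]) // 2
        x0 = (w - nested.shape[1]) // 2
        frame[y0:y0 + nested.shape[0], x0:x0 + nested.shape[1]] = nested
    buffer.append(frame.copy())                 # storing the composite makes the nesting recursive
    cv2.imshow("time tunnel", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):       # press q to quit
        break

cap.release()
cv2.destroyAllWindows()

Increasing DELAY_FRAMES pushes the nested scenes further into the past, the same exaggeration of the mirror effect described above.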

Recursion is used in the design of everyday goods. It offers simple but versatile ways to create new products. In our faculty lounge, we have a waste bin that resembles wastepaper. In our faculty restaurant, there is an umbrella container that has the shape of an umbrella. Other products of this kind include a pencil case in the shape of a pencil and a humidifier in the shape of a water drop. The RepRap, an open-source 3-D printer that can print itself, is another product that uses recursive design.

Recursive design can go beyond appearance and style. I recently advised Kyunghyun Kim’s master’s thesis project, which used recursion in the design of a lamp [3]. The design proposal was to create a unique product through self reference and adaptation. The product matures by referencing the functions or fundamental attributes of itself. With a lamp, the fundamental attributes of the product might be light and brightness. We imagined a situation in which the changing brightness of natural light determines the final shape of a lamp. In this scenario, a user purchases an incomplete lamp and places it in a desired spot. Over a certain period of time, the lamp captures the changes in brightness in that spot. The captured data is translated into a 3-D shape of the lamp shade. The 3-D modeling data is sent to an associated 3-D printing system, and the last component of the lamp—the lamp shade—is created and delivered to the user. The completed lamp is a one-of-a-kind product.
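As a toy sketch of that generative step (purely illustrative; this is not the thesis project's actual pipeline), hypothetical hourly brightness readings could be mapped to a simple lathe profile for the shade, with brighter hours producing a wider radius.

BASE_RADIUS_MM = 60     # assumed minimum shade radius
MAX_EXTRA_MM = 40       # assumed additional radius at full brightness

# Hypothetical hourly brightness readings for one day, normalized to 0..1.
brightness = [0.0, 0.0, 0.0, 0.0, 0.1, 0.3, 0.5, 0.7, 0.8, 0.9, 1.0, 1.0,
              0.9, 0.9, 0.8, 0.7, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0, 0.0, 0.0]

# One radius per hour, stacked vertically, gives a revolve/lathe profile that a
# 3-D modeling tool could turn into the printable shade geometry.
profile = [BASE_RADIUS_MM + MAX_EXTRA_MM * b for b in brightness]
for hour, radius in enumerate(profile):
    print(f"hour {hour:2d}: radius {radius:.0f} mm")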

I am not sure if recursion can be used as a thinking utensil for a designer’s mental cooking. It might have to be revised to be more pragmatic. Nevertheless, I wish to have a great creativity kitchen for designers, for which I will collect more thinking utensils and recipes. 

Endnotes

1. Root-Bernstein, R. and Root-Bernstein M. (2001) Sparks of Genius: The Thirteen Thinking Tools of the World's Most Creative People, Mariner Books

2. Lee, M. and Nam, T. (2008) Through the Time Tunnel, Proceedings of SIGGRAPH '08, p98-99. (SIGGRAPH 2008 art gallery)

3. Kim, K., (2013) Designing Unique Product with Self-Morphing Randomness, Unpublished master’s thesis, Department of Industrial Design, KAIST (in Korean)


Posted in: on Wed, April 17, 2013 - 6:48:28

Tek-Jin Nam

Tek-Jin Nam is an associate professor in the Industrial Design Department at KAIST.


Roger Ebert and the social value of criticism


Authors: Jeffrey Bardzell
Posted: Mon, April 15, 2013 - 12:02:47

On Friday, April 5, 2013, I saw something I would never expect to see: the passing of a critic reported as front page news in the New York Times. The critic in question was, of course, Roger Ebert, the celebrity film critic who passed away presumably (the obituaries aren’t clear on this) due to complications relating to his thyroid cancer. 

The purpose of my post is not to lionize Roger Ebert. Anyway, I’m hardly in any position to do so. I may have seen an episode or two of At the Movies with Siskel and Ebert in the 1980s, and I don’t recall ever reading any film criticism that he has written, except maybe by accident at Rotten Tomatoes. About the only thing that sticks in my mind about Ebert’s critical writing is the controversy he stirred up with his half-baked claim that video games can’t be art, though to his credit he did subsequently engage the objections raised, finally concluding, “I was a fool for mentioning video games in the first place” [1].

No, my purpose is to reflect on the social value of criticism on the occasion of the death of one of the world’s most famous critics. 

Professional criticism is in crisis. The decline of newspapers and magazines—hitherto a bastion of critical writing (including Ebert’s 46 years of writing for the Chicago Sun-Times)—has had dire implications for one of the few paid critical professions. But critics (who else?) have suggested that criticism has been in decline for decades, long before print began to founder. Problems identified have included the rise of consumerism, the decline of readers, critics’ own postmodernist refusals to take definite (or even polemical) positions, left- and right-wing influences that make art about politics instead of about art, the replacement of professional criticism with other metrics (e.g., sales figures, Nielsen ratings, download counts, and Amazon customer reviews), and plain old bad writing. 

With such a pessimistic picture, Ebert’s obvious success as a critic offers an optimistic counter-narrative. It occurred to me that it would be a good exercise to identify some clear statements about the potential of criticism to contribute to society, to consider ways that Ebert provided such value, and turn more speculatively to the question of the possible roles of criticism with regard to interaction design. 

What is good criticism? And what is its social value?

The School of Visual Arts (SVA) in New York City offers a relatively new MFA in criticism. Its website has the following to say about the degree: “This program is not involved in ‘discourse production’ or the prevarications of curatorial rhetoric, but rather in the practice of criticism writ large, aspiring to literature” [2]. I like this formulation: “the practice of criticism writ large.” It reminds us that criticism can be more than Macworld reviews of the latest iPad app or hundreds of ranting customers on the latest version of TurboTax in the App Store. The SVA website continues, “The practice of criticism involves making finer and finer distinctions among like things, but it is also a way to ask fundamental questions about art and life.” Thus, criticism is an ambitious and literary practice that is about a) making distinctions and b) taking on the big questions, presumably prompted by works that help us see or frame them in a new light. It is not fundamentally about saying yea or nay to individual works, but rather a practice that stages an encounter between reflective thought and emerging works, to educate our perception and cognition (i.e., making distinctions) and to take on the most important questions in life.

“In the best of circumstances,” cultural historian Maurice Berger writes, “the critic serves as a kind of aesthetic mentor” [3]. Certainly Ebert served as an aesthetic mentor, as the New York Times writes, “Not only did he advise moviegoers about what to see, but also how to think about what they saw” [4]. The last part—how to think about what they saw—suggests the pedagogical (or mentoring) dimension of the critic’s work, specifically in its role of cultivating a sensibility for film appreciation and inculcating a demand for better works. Berger fleshes out his characterization of aesthetic mentoring by characterizing “the strongest criticism” as 

capable of engaging, guiding, directing, and influencing culture, even stimulating new forms of practice and expression. … The strongest criticism uses language … as a means of inspiration, provocation, emotional connection, and experimentation. 

The strongest criticism also helps the reader to move beyond the surface details of the cultural artifact. … By connecting the artifact and its institutions to the bigger picture of culture and society, the critic can, in effect, help readers better to understand the process and implications of art, the importance and problems of its institutions, and their relevance to their lives [5].

I want all that for interaction design. I want a technical vocabulary that lets us make the sorts of distinctions that critics of paintings, lyric poems, and ballet have. In criticism of interaction design, I don’t merely want to know what the “must-have” app of spring 2013 is; I want my thinking to be challenged about what the role of tablets, of apps, of social media is or ought to be or could be. I want to see beneath the surface—to see the ideologies, skilled craft, subtle beauty, technical innovations, creativity, and social consequences—of individual interaction designs and, more fundamentally, of interaction design as a medium. I want disagreement, no-holds-barred critical debates about what a design means, what makes a design good, what makes an innovation creative—precisely the sorts of vigorous and sometimes cantankerous debates that Siskel and Ebert—professional enemies before being given a TV show together—became famous for.

I also want the sort of criticism that takes the role of critical judgment as a part of its rationality; as James Elkins writes, 

I find myself engaged by critics who are serious about judgment, by which I mean that they offer judgments, and—this is what matters most—they then pause to assess those judgments. Why did I write that? such a critic may ask, or: Who first thought of that? Art criticism is a forum for the concept and operation of judgment, not merely a place where judgments are asserted, and certainly not a place where they are evaded [6].

Ebert’s writings were popular, not academic, and presumably did not aspire to the levels of reflexivity that one might expect of an academic critic. Nonetheless, I read in one of his obituaries that, “In 1997, dissatisfied with spending his critical powers ‘locked in the present,’ he began a running feature revisiting classic movies” [7]. This is a reminder that films have, and are shaped by, a history, and that our “critical powers” require an ongoing engagement with that history lest they be limited. Is interaction design any different? Where are our design histories (let alone histories of interaction design criticism)?

Though Ebert is known for “two thumbs up,” there is no question that his reviews provided much more than a final evaluation; he “passionately celebrated and promoted excellence in film while deflating the awful, the derivative or the merely mediocre with an observant eye, a sharp wit and a depth of knowledge that delighted his millions of readers and viewers” [7]. In other words, Ebert provided reasons to justify his judgments. Several obituaries also noted his advocacy of lesser-known films, films that challenge viewers, films that raise our expectations of what films can be. This included Roger Ebert’s Overlooked Film Festival, an annual gathering in which he curated quality films that audiences and the media had overlooked. Philosopher of art Noël Carroll characterizes the social role of criticism as follows:

For me, the primary function of the critic is not to eviscerate artworks. Rather, I hypothesize that the audience typically looks to critics for assistance in discovering the value to be had from the works under review. … [T]he critic also occupies a social role. In that social role, the primary function of criticism is to enable readers to find the value that the critic believes that the work possesses. It is the task of criticism to remove any obstacles that might stand in the way of the reader’s apprehension of that value [8].

For five decades, Ebert helped film audiences appreciate value in film, and his passing is a loss for film culture. But perhaps the greater loss is the decline of serious criticism in general and in particular its relatively weak showing in interaction design, which arguably occupies the same place in the 21st century as film did in the 20th: its dominant cultural form. Interaction design needs a forum for serious aesthetic and evaluative judgments, a forum that incorporates the history of design, that is aware of the history of its own judgments (and theories of judgment), and that supports living and passionate debates about what is good, what design is for, and what it all means.

Endnotes

1. Ebert, R. (2010). Okay, Kids, Play On My Lawn; http://www.rogerebert.com/rogers-journal/okay-kids-play-on-my-lawn

2. http://artcriticism.sva.edu/?page_id=49

3. Berger, M. (1998). Introduction. In Berger, M., ed., The Crisis of Criticism. New York: The New Press, p.8.

4. Martin, D. Roger Ebert, 1942-2013: Film Critic of the Mainstream. The New York Times, 5 April 2013, p. A1.

5. Berger, M. (1998). Introduction. In Berger, M., ed., The Crisis of Criticism. New York: The New Press, p.11.

6. Elkins, J. (2003). What Happened to Art Criticism? Chicago: Prickly Paradigm Press, p. 84.

7. Steinberg, N. Roger Ebert 1942-2013: The Balcony is Closed. Chicago Sun-Times, 5 April 2013, p. 2.

8. Carroll, N. (2009). On Criticism. New York: Routledge, pp. 12-14, 45.



Jeffrey Bardzell

Jeffrey Bardzell is an associate professor of human-computer interaction design and new media in the School of Informatics and Computing at Indiana University, Bloomington.


An “intellectual turn” at CHI’13?—Paris and the philosophy of interaction


Authors: Mikael Wiberg
Posted: Mon, April 15, 2013 - 7:51:56

CHI’13 is just around the corner. In a couple of weeks the global CHI community will come together, not primarily to enjoy spring in Paris, but to meet up, hang out, and continue discussions on where HCI is going and how to push HCI design and research forward. Arguments will be made, results will be presented, and new interactive systems will be demonstrated. This is certainly enjoyable and important, as the arguments, the results, and the prototypes show concrete examples of how we are advancing our field on a practical, concrete level.

Still, and even more importantly, CHI also serves as a hub for the thinking in our field. While we do have practical results to show, I always think the coolest thing is to listen to a conference presentation where a new idea, perspective, concept, or critique is advanced through arguments, results, and demonstrated systems. After all, a scientific conference should be about new ideas, right? And great presentations at CHI typically manage to present new ideas in really cool ways. So, do we have what it takes to advance ideas in HCI to the next level, i.e., to begin linking ideas, presented through technology, into something bigger, more long-term, and more fundamental? I am thinking about the potential of a philosophy of interaction for HCI.

Paris is a city of culture and history. It is also associated with many of the great French philosophers, and we should take this opportunity to let the atmosphere of Paris influence our community and our thinking. This year we even have a great established philosopher giving the closing plenary at CHI’13—Bruno Latour! But how can we prepare ourselves for this unique opportunity of having a philosopher such as Latour as the closing plenary speaker at CHI? Latour has certainly presented novel ideas that describe how our social and technical worlds are entangled, but more importantly, I think we can learn from Latour when it comes to the practice of thinking in HCI. To seize this opportunity, however, we need to prepare ourselves, and we need to formulate questions for the closing plenary.

So, here is my proposal. Let’s take the CHI’13 conference as a starting point for thinking about how we can better and more precisely appreciate, advance, and critique the fundamental ideas and concepts being presented there. This means not only focusing on the studies conducted and the interactive systems presented, but also, more fundamentally, thinking about how those studies and systems advance ideas and ways of thinking in our field.

While the philosophy of science is concerned with the assumptions, foundations, methods, and implications of science, and with the use and merit of science, I see a similar potential in advancing a “philosophy of interaction.” There are already established philosophies of science, language, mathematics, and so on, and some people in our field have already suggested this notion of a philosophy of interaction (e.g., Dag Svanæs), while others have suggested that we should advance our field through explorations of “strong concepts” [1] or through “concept-driven” HCI research [2]. Still, to make these idea-driven agendas happen we need to take this on as a broad challenge and opportunity across the whole CHI community. At CHI’13 we have one such great opportunity. If we reflect on the presentations made at CHI from an ideas perspective, and if we think about critical questions to ask from an “advancing ideas” perspective during the presentations, and to Bruno Latour at the closing plenary, then the discussion will hopefully circulate around two important levels: studies, results, and novel systems on the one hand, and ideas, notions, and concepts on the other.

I see these two levels of discussion as fundamentally intertwined in HCI, not only as an opportunity but as a necessity. Still, HCI calls for its own tradition and its own philosophy of interaction. While we do have well-developed theories in our field related to our object of study, I see the philosophy of interaction as a meta-level foundation for how ideas can be advanced in HCI. Further, I see this as an opportunity for an “intellectual turn” in HCI. A philosophical point of view would give us the opportunity to think about our profession and what we do not only as interaction design but also as “interaction thinking.” From my perspective, one such “thinking through design” practice can be found in Jonas Löwgren and Erik Stolterman’s [3] ideas of “thoughtful interaction design,” and it might help us push our field forward and see our past through new eyes.

Travel well, and I look forward to our discussions and reflections in Paris!

Endnotes

1. Höök, K. and Löwgren, J. (2012). Strong concepts: Intermediate-level knowledge in interaction design research. ACM Transactions on Computer-Human Interaction.

2. Stolterman, E. and Wiberg, M. (2010). Concept-driven interaction design research. Human-Computer Interaction, Vol. 25, Issue 2, pp. 95-118.

3. Löwgren, J. and Stolterman, E. (2004). Thoughtful Interaction Design. The MIT Press.



Mikael Wiberg

Mikael Wiberg is Professor of Informatics in the Department of Informatics at Umeå University, Sweden.


@Dag Svanæs (2013 05 12)

Thanks for an interesting blog entry. “Philosophy of Interaction” was actually a term Mads Soegaard came up with as the title for my encyclopedia entry on phenomenology and interactivity for interaction-design.org. It is a nice and concise term. (I presented a TOCHI paper at CHI 2013 on the same topic: http://dl.acm.org/citation.cfm?doid=2442106.2442114). Yes, we need more theory at CHI! Why are there so many Scandinavians on your reading list?


Designing the cognitive future, part II: memory


Authors: Juan Pablo Hourcade
Posted: Fri, April 05, 2013 - 11:59:01

In these posts I’ve been discussing how interactive technologies are affecting our cognitive processes, and exploring how they may do so in the future. In the previous post I discussed perception. In this post I discuss memory.

Interactive technologies are already changing the things we need to remember. Phone numbers, email addresses, and directions can now be easily retrieved. The vast amount of information on the Internet is also changing what we need to teach. Memorizing content is certainly much less important than it used to be, while knowing where to find it and how to use it is the new difference maker.

We may see even more dramatic changes in what we need to remember about people. Augmented reality systems, such as Google Glass, could make it unnecessary to remember people’s names or other information about them, since that information could pop up for us to read as we meet them. While this sounds a bit extreme, it is certainly possible, and at least in some cases it could lead us down a negative path where we stop genuinely caring to get to know other people.

The biggest change, though, is in the unprecedented amount of information being recorded about our lives. Heavy users of social media can go back and find out what they were thinking about, watching, or reading on a particular date. Electronic calendars can let us know what we were doing in the office on a particular day at a particular time several years ago.

But this is only the tip of the iceberg. Children are growing up with an incredible amount of information being recorded about them. It is not unusual for parents to take dozens of pictures of their children every year, something that was unusual before digital cameras. Some parents have even recorded pictures of their children every day. Deb Roy at MIT went further and recorded everything happening in his apartment using several video cameras and microphones when his baby was born. While this may seem extreme, with the increasing use of smart appliances that include video cameras and microphones, we are not that far from being able to do that. Smartphones are certainly capable of tracking our movements outside our homes, providing the possibility of knowing where we were at any point in time.

Combined with the advances in neuroscience that I mentioned in my last post, it may be possible in the future to re-experience any part of our lives, not just by watching video and listening to audio, but by actually feeling again the whole spectrum of stimuli we felt at the time. This could enable people to relive happy memories, and to share those with others. It could also change the way we understand history. It would make it easier for someone from the future to get a much richer sense of what it was like to live a day in the 21st century.

These technologies would also give us the ability to experience an event from someone else’s perspective, and even to understand what it is like to spend a day in someone else’s shoes. Such an ability could potentially increase empathy and compassion, and help people recognize the humanity of people from other groups. I think selective recording and sharing of this kind would provide some of the most positive outcomes while helping retain privacy.

Related to memories of actual events are memories of dreams. Yukiyasu Kamitani and his colleagues at the Department of Neuroinformatics at ATR in Japan have developed software to analyze brain activity while people sleep. They trained the system by waking people up in the middle of dreams and asking for detailed reports of what they were dreaming. Using this data, they are often able to predict the content of people’s dreams. Further advances could perhaps enable us to remember and replay dreams. Maybe we would wake up in a better mood if we could choose to dream some of our better dreams every night.
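
In machine learning terms, this is a supervised-learning setup: the recorded brain activity supplies the features and the dream reports supply the labels. As a rough sketch only (synthetic data, and not the researchers’ actual features, model, or pipeline), the core idea might look like this in Python:

# Illustrative sketch of the supervised-learning idea behind dream-content
# prediction: features extracted from brain activity, labels from the
# reports people gave when they were woken up. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_awakenings, n_features = 200, 500               # hypothetical: one row per awakening
X = rng.normal(size=(n_awakenings, n_features))   # stand-in for activity patterns
y = rng.integers(0, 2, size=n_awakenings)         # e.g., "a person appeared": yes/no

# A regularized linear classifier, evaluated by cross-validation.
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("mean accuracy:", scores.mean())

On the synthetic data above the score hovers around chance; the point of the actual research is that real recordings do noticeably better than chance.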

As with all other future technological developments, we have choices on what we design and how to design it. In the case of memory, the consequences of what we design could range from a complete loss of privacy (even of our own memories), to providing unprecedented ways of understanding situations from someone else’s point of view, and keeping these memories alive for future generations.

What do you think? How would you like to use technology to enhance your memory?



Juan Pablo Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.


My computer still doesn’t know me: habit vs. intelligence.


Authors: Monica Granfield
Posted: Fri, April 05, 2013 - 7:45:28

What I do out of habit is not necessarily what interests me the most. It’s what I need, what I have to get done, maybe all I am aware of, and so it becomes my habit. Habit is what I do because I am used to doing it. Intelligence is what I learn and how I apply it; it becomes what I know. When it comes to using computers, how can we best use technology to inform, entice, and predict a user’s wants and needs? How can we use technology to personalize the user experience, making it more intelligent rather than merely habit based?

There has been a long, ongoing debate: To gain information about the user, should we ask up front what their interests, skill level, and so on are, or should we gather our own information and determine their likes and interests from usage patterns? Will users know what they are most interested in if we ask, or should we just rely on habit to gain information? Which method will better guide the user and make the most intelligent suggestions?

Today we encounter both methods of trying to understand our users. Windows has, for a while now, displayed the applications you use most often in your Start menu. For your convenience, the Start menu also lists the most recently used files for each recently used application. This is all very helpful and convenient in supporting my day-to-day use of my computer and my work-based habits, which are fairly consistent and therefore fairly predictable. With Windows 8 and Metro this all changes. You now manually configure all of the tiles on the Start page as you see fit for your own use. I think about the opportunities lost here: all the new Windows 8 apps, and not a single suggestion based on my interests or habits, which do not seem to be given any consideration. I’m left wondering why the intelligence of the Start menu did not transfer over to the Windows 8 Start page, and why Windows 8 and Apple products have not gone beyond this to make my experience more intelligent and useful.

For online retailers, predicting what you might like, need, or otherwise not be able to live without is a constant exploration in habit versus intelligence. I like the idea of suggestions based on purchases I’ve made. However, the intelligence here is more habit than intelligence, and my shopping patterns do not always reflect what I am actually interested in. For example, maybe the suggestions should not be based on every purchase. You may want to remind me at some point that I need to buy a new toilet bowl brush (intelligent), but please don’t suggest various toilet bowl brush models as part of my intended online, front-page, Friday-night pleasure browsing. I bought one because I needed it, not because it will drive a future impulse purchase. I also don’t need to see every suggestion based on gifts I have purchased. Perhaps these are a different category of recommendation that I could flag as I purchase the items; that way, when I want to buy a gift, I can access this information solely for gift purchasing. As it stands, my account’s suggestions are based on my purchasing and not my interests. Maybe asking up front is a good and intelligent idea.

When asking me up front, make it quick and obvious—don’t make me scroll through hundreds of items to build my interests; ask me flat out and group my answers by categories that I can make granular or keep general. It’s as simple as “I like: Outdoor activities > Skiing, Hiking, Running; Yoga > Iyengar, Hatha; Cooking > Cajun.” You get the idea. And yes, how about a suggestion based on these once in a while? Terrific! Broaden my horizons, but focus on ME, as I am the customer. And imagine some intelligence around the content. I purchased and liked some Christmas music; however, continued recommendations for it come March, when I am still seeing snow, are not what I need. How about a new or previous album by the same artist that is not holiday related? That would be intelligent.
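
As a thought experiment (none of this reflects any retailer’s actual system, and all names and weights below are invented), the kind of profile described here might combine the explicit interest categories above with purchase history, while keeping purchases flagged as gifts or one-off necessities out of the pleasure-browsing suggestions:

# Hypothetical sketch: blending explicit interests with purchase history,
# while keeping gift and one-off "necessity" purchases out of the
# pleasure-browsing suggestions. All names, tags, and weights are made up.
from dataclasses import dataclass, field

@dataclass
class Purchase:
    item: str
    tags: set                  # e.g., {"outdoor", "skiing"}
    is_gift: bool = False
    is_necessity: bool = False

@dataclass
class Profile:
    # Explicit, hierarchical interests: "outdoor/skiing", "cooking/cajun", ...
    interests: set = field(default_factory=set)
    purchases: list = field(default_factory=list)

def score(profile, candidate_tags):
    """Score a candidate item: declared interests count most, habit counts a little."""
    explicit = len(candidate_tags & profile.interests)
    habit_tags = set()
    for p in profile.purchases:
        if not (p.is_gift or p.is_necessity):   # gifts/necessities don't drive suggestions
            habit_tags |= p.tags
    implicit = len(candidate_tags & habit_tags)
    return 2.0 * explicit + 0.5 * implicit

me = Profile(interests={"outdoor", "skiing", "yoga/iyengar", "cooking/cajun"})
me.purchases += [
    Purchase("toilet bowl brush", {"household"}, is_necessity=True),
    Purchase("ski wax", {"outdoor", "skiing"}),
    Purchase("kids cartoon DVD", {"kids"}, is_gift=True),
]

print(score(me, {"outdoor", "skiing"}))   # boosted: matches declared interests and habit
print(score(me, {"household"}))           # 0.0: the necessity purchase stays out of it

Weighting the declared interests more heavily than the habit signal is the design choice argued for here: habit can fill in gaps, but it should not drown out what the customer actually said they like.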

Netflix seems to have a bit more of a lead in this area. Netflix not only bases recommendations on what you have watched, it also has a “Taste Profile,” which asks what you want to watch based on things such as your mood, movie genres, and story lines. It’s nicely categorized and easy to fill out, and it extracts exactly the information needed to make intelligent suggestions in a quick and easy way. It would be even more intelligent if it could separate my kids’ shows and interests from my own. The idea of asking what I like and then predicting other options based on what I like feels right. However, each user’s experience should be intelligent enough to be their own experience; this way I am not scrolling through endless cartoon suggestions, and my kids are not seeing suggestions based on my love of scary movies. So being intelligent is not just about habit or gathering information; it’s also about gauging how you apply or disperse what you have learned.

I recently learned that my favorite natural beauty product store is opening spas. Based on my love of the products alone, I was ecstatic. Then the saleswomen began to tell me about the new approach to the in-spa experience that will be offered. Typically when you go to a spa you might get a choice of, say, which scent of oil you would like, or which color nail polish you would like to have applied. Maybe they will ask you if you want lemon- or orange-infused water. This is at the best salons, and these are your only choices. This new spa is individualizing not only the treatments but the entire experience, based on the information you provide. The knowledge gathered is used to customize your entire spa experience. Genius! At this new spa the up-front interview is all about you. “What is bothering you?” “How do you feel today?” Not “Which treatment would you like?” Instead, tell us what is going on, and based on this intelligence, possible treatments are presented and reviewed with the customer. Scents, music, and more are customized alongside the treatment. The information is used to appeal to every sense and to evoke emotion. Customers’ reactions include “I am saving my pennies for my next visit!” and “It was like theater meets a massage.”

On repeat visits to most spas, the approach to treatment is habit based: “I see you have visited us before, would you like the stone massage again?” (“or would you like to choose from our other six predetermined massages?”). They are appealing to the masses, hitting the 80 percent case, and most likely you will be satisfied. But will you be left wanting to save your pennies to go back?

This is the difference between intelligence and habit. One day I dream of sitting down at my computer and having it know me—really know me—and be intelligent. Maybe, like a mood ring, it captures my mood and changes the colors on the screen. If I go to play music, being intelligent, it bases its choice of music on my mood, the type of work I am doing, and the time of day. It might have to ask me and give me a chance to redirect, or maybe over time it gets to know me so well that I trust its choices.

For now I will continue to hope for online retailers that differentiate their users, get to know their interests, and base their intelligence on this, and for a computer that uses its technology and power to move its intelligence forward and get to know me better.

Monica Granfield is Principal User Experience Designer at Oracle Corporation. The views expressed on this website are her own and do not necessarily reflect the views of Oracle.




Monica Granfield

Monica Granfield is a user experience designer at Symbotic. The views expressed on this website are her own and do not necessarily reflect the views of Symbotic.