A solution without a problem? Seeking questions to ask and problems to solve within open, civic data

Authors: Caroline Sinders
Posted: Mon, June 01, 2020 - 10:16:57

The city of Amsterdam is using artificial intelligence (AI) to help sort through and triage their version of 311 calls. Chicago is using AI to help analyze and decrease rat infestations and to prevent them in the future. There is value in applying AI to urban challenges, but that value must come with explicit protections for privacy and citizen safety. AI has known issues with bias, from a widely documented inability to recognize different genders and races to its use in predictive policing. Thus, when using AI as infrastructure or in technology that interacts with society, safeguards must be put into place to mitigate harm. When, how, and why we use AI must be analyzed and unpacked. This article will examine a potential use case for using AI in understanding city and civic data, and the potential benefits as well as harms that could arise.

What data is right for civic challenges?

The power of AI is that it can recognize patterns at scale and offer insights and predictions. For civic technology, this could be a powerful way to surface patterns across a city, prompting deeper investigations into recurring mold, faulty pipes, and potholes across neighborhoods and larger districts. These problems should be viewed as data; the reports that citizens file, for example, are data. Data is important—it's the backbone of artificial intelligence. Smart cities, IoT devices, and open government initiatives are all creating astounding amounts of data. Data tells stories about the people who live in the communities and cities where it was gathered, and it must be safeguarded against all kinds of bad actors. Every data point from a software application, from usage metrics to activity on a social network, is made by humans. Data isn't just something cold or quantitative; it's inherently human. Thus, the privacy and transparency of datasets—who collects them and how they're stored—is incredibly important.

The problem, and the linchpin, with using AI will always be data. Data can be messy and unstructured, so even before thinking about using AI, one has to ask: Does the data we need even exist? If so, is our dataset big enough? Is it structured? And there is always the moral quandary: Do we really need AI here, or does a city or community just need more human power behind this problem?

AI in cities...right now

The city of Amsterdam is using AI to help sort through their version of 311 calls (free service-request calls in cities) in a product called Signal (Figure 1). Residents can file requests about issues in public spaces through calls, social media, or a Web forum. Tamas Erkelens, program manager of the Chief Technology Office innovation team that built Signal, described in an interview the process that routes service requests, explaining that users previously had to pick their own categories. But, as he said, “Users think in problems and not in governmental categories. Instead, we started asking users to describe the problem or take a photo, and we use machine learning or computer vision to [help sort the problems].” Signal uses AI to help sort residents’ requests into different categories, following the logic of the user to put residents—and residents’ frustrations—first as a priority for complaint triage. Erkelens also told me that his team has successfully created a model capable of detecting more than 60 categories in the Dutch language, but that the system is also audited by a person. This human auditing is important to ensure safety and understanding. Even when we leverage AI to handle an influx of complaints, humans are still needed to assure quality.
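Signal's actual model is not public, but the triage step Erkelens describes—mapping free-text complaints to service categories—is a standard text-classification task. The sketch below is only illustrative: the complaints, categories, and model choice are all invented, not Amsterdam's.

```python
# Minimal sketch of complaint triage as text classification.
# All training examples and category names are invented for illustration;
# Signal's real model, data, and categories are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: resident complaint -> hypothetical service category
complaints = [
    "trash bags piled on the sidewalk", "overflowing garbage bin",
    "streetlight is broken on the corner", "lamp post flickering at night",
    "loud music from the bar next door", "construction noise all night",
]
categories = ["waste", "waste", "lighting", "lighting", "noise", "noise"]

# Bag-of-words features plus a linear classifier: the simplest
# plausible version of "users describe the problem, the system sorts it."
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(complaints, categories)

print(model.predict(["overflowing garbage bin"])[0])
```

In a production system like Signal's, the interesting work is elsewhere: a Dutch-language corpus, 60-plus categories, and, crucially, the human auditor who reviews the model's routing decisions.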

Figure 1. Screenshot from Amsterdam’s Signal app.

Data: It’s more than just numbers

When working with civic technology and data initiatives from local, state, and federal governments, a variety of problems can pop up. Generally, there may not be similar data standards across cities, or even across the same city’s various departments, and datasets can be of differing sizes and structures. Comparing or combining data from different municipalities can be difficult. As Georgia Bullen, executive director of Simply Secure, a nonprofit focusing on privacy and design, explained, "A lot of the big problems internally for cities are actually that different departments have different datasets, and it’s really hard to combine them...Cities are storing data in different ways, and they have different policies in place that affect what those values even mean.”

When integrating any form of technology into civic and civil services, the technology and the service it is intended to augment must be viewed holistically, as a single service design problem. That is, if an app is trying to reduce the number of residents contacting the city over potholes, the app and the city service together form one integrated user experience of contacting the city. What kind of biases or issues can the app add? What kind of unintentional hierarchies can it create? How much data is it creating, and how is that data stored? These are some of the questions designers and engineers should ask in this situation.

We can look at New York City’s 311 as an example of unintended design consequences. A 311-type system can lead to the police being called, even unnecessarily, either because the system is very easy to use or because of patterns learned by machine learning. As Noel Hidalgo of Beta NYC, a civic technology organization, says, “It’s really easy to file a noise complaint on the 311 app.” The app promotes noise complaints as something you can report using the service. When a user selects “apartment” to provide more information about a noise complaint, the police are immediately notified. So even if a user is just trying to report a noise complaint without contacting the police, they can’t, because of how the app is designed.

The problem with this design for 311 is that it assumes residents trust the government to respond to their needs. For many citizens, government can be something to fear: neighborhoods of color facing over-policing, for example, or gentrification that can lead to clashes between new and old residents. For some citizens, their local and federal governments, including 311, are not systems they can necessarily trust. There are many facets to the way a governmental system can be used, but do citizens understand that, and does the AI model reflect all of these facets? The model and the AI system must be designed for how citizens actually interact with governmental systems, meaning they may not completely understand or fully use 311 (e.g., calling reactively instead of preemptively). Yet the system must also be designed for how the city or government needs it to be used. This means 311 should be designed for both reactive and preemptive reports.

A real-world example

As an example, let’s unpack Million Trees NYC, a citywide, publicly and privately funded program to plant one million trees in New York City from 2007 to 2015. If a researcher wanted to use machine learning to understand why certain trees are planted in certain parts of New York City, that researcher would need to consider a number of factors, from different boroughs’ tree-planting budgets, to soil conditions, to tree prices and availability, to the resources available in different neighborhoods to water and care for the trees, to the specific cost-benefit analyses of planting in certain postal codes. And then other questions come up: How many trees existed before? 

Million Trees NYC could hypothetically use AI to discover which tree types were successful, meaning which trees grew and flourished, and where they grew and flourished, to help determine optimal conditions for tree planting. Even posing this question raises more questions. Can we assume that wealthier neighborhoods get more trees? If not, do trees survive at the same rate across New York City? Across New York State? Exploring this data just to find a problem to solve requires asking many initial questions. A researcher would need the additional expertise of understanding all of the factors listed above and more to find the right dataset and build the right model. The deeper data story isn’t just that rent is high in the New York City borough of Manhattan versus Queens. It’s that even planting trees is complicated by policy, history, and other systems. Using any robust system like AI to unpack large datasets can help call out systemic inequity in cities, but the datasets themselves must also be analyzed. Intention is important.

Data is complex. What is needed to make the data more whole? 

A dataset is so much more than the initial spreadsheet you have. What are the factors that contributed to that dataset? Those factors not captured in your initial dataset are the factors that make it more whole. A first step when working with data is to outline all of the related data pieces, similar to a recipe. Bread isn’t just flour and water; it’s flour that is sifted, to which a certain amount of water is then added. With my trees example, I picked the data apart: Where are trees in New York City currently? Historically? What kinds? More important, how do we know what makes a tree healthy? 

Bullen stresses that we ask specific questions: What makes a tree survive? Can we extrapolate that from data? And what makes the money spent on that tree useful? That is, if planting a tree costs X amount of money, what about the cost of really growing and caring for that tree? Were the tree box filters big enough for the root systems? What were the weather conditions, and who is responsible for caring for the tree?

All of those factors—weather, tree box filter size, kind of tree, history of the neighborhood, environmental history—are related to the Million Trees dataset, even if those data points aren’t captured in that one dataset. All of these types of factors are not just questions, but also data points that need to be interrogated while collecting and analyzing data. 
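The "recipe" idea can be made concrete: treat each contextual factor as its own table and join it onto the core records. The sketch below uses entirely invented records and field names—real city datasets, as Bullen notes, rarely share keys or standards this cleanly.

```python
# Toy illustration of making a dataset "more whole" by joining
# parallel, contextual data onto core records. All values, field
# names, and keys here are invented for illustration.

# Core dataset: the spreadsheet you start with
trees = [
    {"tree_id": 1, "species": "pin oak", "zip": "11101", "healthy": True},
    {"tree_id": 2, "species": "ginkgo", "zip": "10027", "healthy": False},
]

# A parallel dataset, hypothetically held by a different department,
# keyed by neighborhood (here, ZIP code)
neighborhood_context = {
    "11101": {"median_rent": 2400, "tree_budget_per_capita": 3.10},
    "10027": {"median_rent": 1900, "tree_budget_per_capita": 1.25},
}

def enrich(tree):
    """Join a tree record with its neighborhood context, if any exists.

    Missing context is itself a finding: it marks where the
    dataset is incomplete rather than where the city has no story.
    """
    context = neighborhood_context.get(tree["zip"], {})
    return {**tree, **context}

enriched = [enrich(t) for t in trees]
print(enriched[0]["tree_budget_per_capita"])
```

The join is trivial; the hard part is everything around it—whether the two departments even use the same keys, what "healthy" means in each dataset, and which contextual tables (weather, policy history, redlining maps) exist at all.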

Don’t just identify, but question and audit patterns in the data

When a pattern emerges from a dataset, don’t take it at face value. Ask: Why is this occurring? Is it correct? Does it feel right? What would happen if this were wrong? What would the real-world outcomes be? Would it cause any person real-world harm? Whatever question you’re analyzing, try asking it from the opposite point of view. For example, when analyzing what makes a tree healthy and looking at neighborhoods that have healthy trees, look into what kinds of trees you’re looking at, and ask: What is the history of those neighborhoods? Now try analyzing unhealthy trees and look for whether this refers to the same tree types. What season were they planted in? Historically, what trees were in that neighborhood? You might find how a city has changed over many decades. For example, a highway that cuts across a neighborhood or borough can have the downstream effect of radically changing that neighborhood. A highway adds noise and air pollution, which in turn can cause lower property values. This kind of historical analysis needs to be taken into account with city data, and in this case, tree data. Data has so many other kinds of factors that can affect outcomes.

What are the parallel and contextual data related to this dataset? 

From the dataset, can you tell if trees that are dying are dying in neighborhoods that have been systematically underserved? And how do you put that into context? Think about parallel context data: What else has happened in these spaces in the past 20 years? 30 years? Bullen points out that a lot of the datasets for civic technology may be only 10 years old, and that there may not be a lot of available historical data. The patterns coming out of your dataset are made up of so many nuances, with historical roots in policies such as gentrification, segregation, and redlining, which may be reflected in technology. A city is complex, so a dataset, even about trees, will have all of the complexities and biases of a city (such as redlining or segregation) built into it. 

Expanding the idea of bad data

Bad data isn’t necessarily harmful data, or data that seems adversarial at first glance. It can simply be data that is incomplete. Consider the dataset about how well trees grew in different parts of New York: many of the factors that affect tree growth aren’t in that dataset. Viewed through this fuller lens, the Million Trees dataset is incomplete, and it needs to be treated as such.

Bringing intersectionality to data

Designing for cities, even without AI, is already a nuanced, thorny, large, and at times difficult problem. It is a wicked design problem, bound by legislation, bureaucracy, architecture, and then technology, as cities themselves and those who work in civic technology update to keep up with changing technology. Technology affects cities across the board, from the architecture, hardware, and design of cities that are becoming “smarter,” to the software and processes of civil servants updating cities with new kinds of technology. Adding AI into the mix is not easy and shouldn’t be engaged with as a kind of pan-techno-solutionism, even when looking at something as seemingly benign as tree-planting data.

As outlined in this article and in other published examples, AI and technology writ large amplify bias and injustice, regardless of how complete or incomplete a dataset is. But AI could be used effectively if we unpack how deep technology solutions inside of cities work, and where AI would fit on top of preexisting technology. Technology alone doesn’t solve problems and can create unforeseen ones, as we see specifically with 311; those issues must also be scrutinized. What is the line between calling 311 to fix a problem and the solution to that problem being to deploy a police car? Are users aware of what the response will be when placing a 311 call? This kind of response needs to be unpacked, examined, and then fixed, and this same kind of interrogation must be applied to AI. Cities have a diversity of inhabitants, and that means a diversity of responses to problems. So how technology responds to, sorts, and understands those issues and then provides solutions must always be analyzed for potential design flaws, as well as policy flaws when the solutions result in unintended harm.
Moving forward, all technology, including AI, must take an intersectional perspective, especially in cities, where historical racism and injustice were a part of the status quo, and the legacy of that injustice is still affecting data and design in cities around us today. 


Caroline Sinders

Caroline Sinders is a machine learning designer/user researcher, artist, and digital anthropologist examining the intersections of natural language processing, artificial intelligence, abuse, online harassment, and politics in digital conversational spaces. She is the founder of Convocation Design + Research and has worked with organizations including Amnesty International, Intel, IBM Watson, and the Wikimedia Foundation.


Is remote the new normal? Reflections on Covid-19, technology, and humankind

Authors: Yvonne Rogers
Posted: Thu, May 28, 2020 - 1:46:39

Covid-19 forced governments to urge full or partial lockdown measures to slow the progression of the pandemic. By the end of March, more than 100 countries had “locked down” billions of people. During that time, Yvonne Rogers wrote a series of blog posts on the topic of “remote,” structured around the themes of living, working, numbers, and tracking (the full articles and more posts are available on her website). She asks: Is remote the new normal? As we contemplate when we will all meet again face-to-face, Rogers helps us reflect on what remote means now for living and working, while also considering fresh ideas on how we plan to slow the pandemic with technology and save lives.

Remote working: March 19, 2020

Since March 12, 2020, we have been working remotely, as the university instructed us to do because of the escalation of coronavirus. It feels like I have had more videoconferencing meetings than hot dinners! One moment it is Skype, the next Teams, then Zoom—many have been back-to-back. Even though it is great that we can keep in touch in this virtual way, it is frankly exhausting—but in a different way from a usual tiring day at work. While my twice-daily train commute takes a toll, being glued to a screen for hours on end, talking to virtual colleagues and students elevates fatigue to a new dimension. The exhaustion is less physical than it is enervating, like after a long Sunday of too much binge watching.

This phenomenon has since been dubbed Zoom fatigue. Part of the new tiredness stems from meetings that differ greatly from the usual—dealing with so many updates each day on what has been planned, decided, revealed, or mandated by government, university, or university department. Right now, an awful lot of “cascading” is interspersed with checking up on and reassuring each other. There seems to be much less actual work, but one hopes this will shift once routines begin to settle into place.

Then, out of the blue, you might get an email from one of your colleagues letting you know they are not feeling well and have begun self-isolating. It is quite anxiety-inducing, worrying if they have contracted Covid-19. I have heard now from quite a few people that they have developed flu-like symptoms and are self-isolating; some situations seem more serious than others. It is all a bit discombobulating, like Russian roulette. You can but hope they will be better the next day.

Today it was raining, so we put our umbrellas up, making it easier for us to social distance. At one point I entered a shop, and while waiting in line, other customers came in and stood two meters apart from us and each other, abiding by government guidance. It felt a little strange and silly as we carefully navigated the small place. But even though it seemed unnatural, it felt prudent.

Later in the morning, when I peered out of my study window that looks onto a primary school playground, I saw many 5- and 6-year-old children playing together during their break time without a care in the world (as it happens, it was probably the last time for a while; U.K. schools closed soon after). For them, the concept of social distancing must seem alien. As for taking part in social isolation, it must seem even stranger: Why can’t we go out to play? Why must we stay indoors without physical contact with our grandparents or playmates?

This new world order brings out the best and worst in people. There are so many acts of kindness being reported that it makes you feel warm and fuzzy, realizing that, as human beings, we like to look out for each other. However, the flipside is just how many people are looking out for themselves. Many can’t resist the temptation to stock up on tins, bread, toilet paper, and other staples. Panic buying maybe, but it gives them something to do and makes them feel safe. I found myself today, after failing to find any cereal left on the shelves, wondering if I should buy the last remaining cereal bar that I spotted. I would never normally buy such a thing, let alone eat it. But self-restraint is tough when irrational fears enter our psyche. As it turned out, one of my neighbors came to the rescue and brought around a tasty loaf of bread. 

We have all started to sign off on emails with “Stay safe.”

Remote living:  March 25, 2020

So the official lockdown kicked in on Monday, March 23, 2020, for those of us living in the U.K. The rules are a bit more lenient than in some countries, where they have draconian curfew measures in place. Here, we are allowed to leave home for one exercise session a day, and to go shopping to buy necessary food and medicine. We are also allowed to go to work if we absolutely can’t work from home. So for the time being, if you are a construction worker, you can carry on as normal—as long as you social distance. It seems like a good time to be a crane operator, high in the sky looking down on empty streets. I suspect not for long. It must be very difficult for a government to balance its country’s economic needs against how best to flatten the pandemic curve—all while determining how to change human behavior into something so very different from how people normally live their lives.

Yesterday I did my beach walk alone at 8 a.m. to start the day. There were many joggers and dog walkers. There was also plenty of space, so we managed to keep our distance. The sea looked calm and serene, and for that hour I could think of something other than coronavirus. Today I am saving up my permitted outside exercise for later in the day. By midafternoon yesterday, I was getting quite restless. At one point I looked at my watch thinking it was 4:15 p.m.—nearly time for a planned chat with a friend—only to discover it was actually 3:15. Normally, being surprised by finding I have an extra hour would be a joy. I can easily fill it in by catching up on work. This time, I can honestly say my heart sank a little, reminding me of when I was a teenager on a long Sunday when the clock stood still… 

Meanwhile, all around us is a flurry of activity online. I see that lovely Amanda is streaming her yoga class to us at UCL on Friday at work. A couple just got married in Birmingham, before the country banned weddings; over 100 guests watched it being livestreamed on Facebook. There are also sweet videos of grandchildren now doing the rounds, waving at their grandparents through the windows or patio doors in their garden, some even squashing their little noses up to the glass. Lots of people have celebrated their birthdays with their friends and family by holding their birthday cakes up to the camera for others to see.

Eating alone together online has also started to become popular again among families and friends. I remember a few years ago, when Skype was becoming mainstream, some of us tried it as an experiment. For example, when I was in South Africa on sabbatical, I had a Skype dinner with a friend back in the U.K. It was nice to catch up with her while doing something, but the eating part actually felt quite odd. We had both served ourselves something simple—a pasta dish—and started eating at about the same time. But somehow picking up our knives and forks together did not synchronize, and the eating of the meal did not feel natural. The smells, tastes, and noises of eating together were lost in translation.

At the end of a tiring day of back-to-back remote work meetings, I now look forward to a FaceTime chat with a friend or two, glass of wine in hand. Of course, it is no substitute for the real thing, but it can be surprisingly relaxing and enjoyable. We make sure we have a good laugh, crack some jokes, and try to see the funny side of life. And then it’s dinner, Netflix, and the 10 o’clock news before going to bed.

Another Groundhog Day in these strange times.

Remote numbers: April 9, 2020

The day before the coronavirus lockdown started in the U.K., I had a smart meter fitted in my house. After the engineer finished, he walked me through all the various functions shown on the digital display. A dashboard of numbers provides all sorts of stats and data about how much electricity and gas you are using and how much they cost per hour, alongside an easy-to-read traffic-light barometer that moves into red bars if you are using a lot of energy (e.g., when boiling a kettle) while rewarding you with green bars when you are being energy efficient. The idea is that you use the various numbers and bars to change your behavior, and in doing so, reduce your energy usage and save money.

I looked at the display a few times but did nothing to change my own behavior. Quite the opposite, in fact. I started using more electricity and gas, making more cups of tea, cooking more meals, spending more hours in front of my laptop and TV, and doing more washing—all a result of being stuck at home 24/7. The best place for the display? Hidden in a drawer.

Meanwhile, like everyone else, I have been gripped by the numbers that come out each day about coronavirus—uncomfortably so. The tally of new cases and new deaths rises daily. At first, two or three people dying was considered shocking. Now we are up to nearly 1,000 a day in the U.K. It is no longer shocking but expected. We have all become engrossed by the graphs that the scientists generate to help the layperson understand what the numbers mean with respect to where we are in the quest to flatten the curve. They project how steep the curve is each day relative to day zero. The color-coded ones show where the U.K. is relative to other countries we might care about. I catch myself comparing how we are doing against the U.S. or Italy—thinking we are better off or not doing as badly. Why are we being shown this, as if it were a competition? To make us feel better? Comparative graphs are a mechanism commonly used in behavioral change, known as social norms. By seeing how well you are doing relative to others (e.g., peers, other families, neighboring cities or countries), you can relax if you are below the others or worry if you are above—there is a loud and clear indication of whether you are using more or spending more (if it is exercise, the reverse is true).

More and more of these visualizations are appearing, including Sky’s “Coronavirus: How many people have died in your area? Covid-19 deaths in England mapped.” Residents of remote areas like Suffolk can let out a big sigh of relief that there are no big blobs nearby. Those who live in London or other densely populated areas, on the other hand, will notice big blobs splatted over their home turf. No wonder so many Londoners flocked to the countryside when they could—that is, before those who live there full time told them where to go.

For the most part, there is little we can do other than worry when looking at these comparative coronavirus graphs. They are fodder, too, for the media and politicians. For example, this headline: “Singapore Wins Praise For Its COVID-19 Strategy. The U.S. Does Not.” A CNN headline was more in tune with the way science happens, through competing predictions and hypotheses: “New U.S. Model Predicts Much Higher Covid-19 Death Toll in UK. But British Scientists Are Skeptical.” The U.S. team predicts that nearly 70,000 will die in the U.K. The British scientists, on the other hand, predicted only 20,000 to 30,000 would die in the U.K., based on their brand of mathematical modeling. Who do we believe?

It goes without saying that mathematical models need lots of data in order to make accurate predictions. When predicting the weather, a tsunami, or an earthquake, millions of data points are used. The current pandemic, in comparison, has relatively few data points that can be used. It would be hubris not to remember the failure of Google’s Flu Trends program a few years back, when its developers claimed, based on analyzing people’s search terms for flu, that they could produce accurate estimates of flu prevalence two weeks earlier than official data. Sadly, it failed to do this for the peak of the 2013 flu season. Then, they had access to big data—masses of it. The current modelers only have access to small data—very little of it. Let’s hope all the lockdown restrictions that have been put in place in nearly every country, based on current predictions and remote numbers, fare better.

We can but hope.

Remote tracking: April 13, 2020

It is great to see tech companies coming together to help curb the coronavirus. Apple and Google have been collaborating on a platform that could help governments worldwide monitor, track, and manage the pandemic more effectively. Their proposed system works by using Bluetooth and encryption keys, enabling data collection from phones that have been in close proximity with each other. From this data, it can be inferred who else phone owners have been close to for a set period of time (e.g., the quarantine period of 14 days). Users can also alert health authorities if they have been diagnosed with Covid-19; conversely, the system can text users if it detects that their phone, and indirectly they themselves, have been in close contact with someone who has been diagnosed with the virus. The term coined for this new form of remote tracking is contact tracing, as illustrated by Apple and Google’s graphic (Figure 1).

Figure 1. Apple and Google’s contact tracing system.

If everyone opted in to the system and carried their phone at all times, it could prove an efficient way of letting people know to self-isolate before they unwittingly spread the virus to others. Epidemiologists would also be able to analyze massively more data and be able to develop more accurate predictions. Governments could be better informed about the efficacy of introducing different policies and restrictions about human movement. It seems to be a win-win. However, it requires fairly universal buy-in to the philosophy and the practice as the best way to stop the global spread of the virus. There may be some resistance when it comes to privacy concerns. But such worries need to be weighed against the potential gains of having a pervasive tracking system in place whose sole objective is for the greater public good. 

One way to address these concerns is to reassure the public. Much thought has gone into how to avoid unnecessary data collection; Google and Apple’s proposed method of contact tracing is limited in what it tracks and how the data it collects is stored. Compared with GPS, which tracks people’s physical location, their proposed use of Bluetooth picks up signals only from nearby mobile phones, sampled every five minutes. Hence, the data collected won’t know that you were on a bus or in the supermarket at a certain time. It will know only that you were close to a person who has just been diagnosed with Covid-19. It is important to be really clear about how much of what someone is doing is actually being tracked. It also helps to address privacy concerns if the data being collected is encrypted.
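The privacy property described above—phones learning about proximity without learning locations or identities—comes from broadcasting short-lived identifiers derived from a secret key, and only revealing the key itself upon diagnosis. The sketch below is a drastically simplified model of that idea; the real Apple/Google protocol uses different key schedules, derivation functions, and identifier lifetimes.

```python
# Simplified sketch of decentralized Bluetooth contact tracing,
# loosely modeled on the Apple/Google proposal. The key schedule and
# derivation below are invented simplifications, not the real protocol.
import hashlib
import os

def rolling_id(daily_key: bytes, interval: int) -> bytes:
    """Derive a short-lived broadcast identifier from a secret daily key.

    Observers who record this identifier cannot link it back to the
    phone unless the daily key is later published.
    """
    return hashlib.sha256(daily_key + interval.to_bytes(4, "big")).digest()[:16]

# Alice's phone broadcasts derived identifiers; Bob's phone, nearby,
# records the ones it hears. Neither learns the other's identity.
alice_key = os.urandom(16)
heard_by_bob = {rolling_id(alice_key, i) for i in range(3)}

def exposed(reported_key: bytes, heard: set, intervals=range(96)) -> bool:
    """If a diagnosed user publishes a daily key, each phone re-derives
    that key's identifiers locally and checks for an intersection with
    what it heard. The check never leaves the phone."""
    return any(rolling_id(reported_key, i) in heard for i in intervals)

print(exposed(alice_key, heard_by_bob))   # Bob was near Alice
```

The design choice worth noting is what the health authority never sees: no location trails and no contact graphs are uploaded, only the keys of users who choose to report a diagnosis.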

To enable such a tracking system to have widespread uptake, governments can either be authoritarian and imposing (as is the case in several countries in Asia) or democratic and encouraging—through educating, persuading, incentivizing, and nudging people to opt in. However, this takes time, during which dissenting voices in the press and on social media, together with conspiracy theorists, may create a groundswell of worry. Overcoming scaremongering and anxiety requires open debate about what is acceptable and what is not, and how this can change over time and across cultures and circumstances. Consider CCTV: It is now widely accepted in many countries as a technological deterrent against crime, yet when it first became mainstream in countries such as the U.K. and Germany, many people were up in arms over surveillance, much as they later were over the U.K.'s so-called Snoopers' Charter. Since then, however, public opinion has changed. Police authorities found the cameras very useful, both in their investigations and as a deterrent. Nowadays, cameras of every shape and size have become the order of the day, from webcams worn by frontline workers to massive multiplex CCTV security setups in shopping malls.

Part of my research agenda is to investigate public opinion and sentiment about “creepy data.” We carry out studies to see which technologies people find acceptable and which make them feel uncomfortable, compromised, or threatened. In the early days of mobile phones, I worked on a project called Primma, which investigated how to enable people to manage the privacy of their own mobile devices within a framework of acceptable policies. One of our user studies, called ContraVision, explored public reactions to a fictitious future technology called DietMon. The proposed tech enabled people who seriously needed to lose weight to track their calorie consumption by providing them with information on their phones about the number of calories in the food they were contemplating eating. A chip embedded in their arm also sent data about their physiological states to their GP. Participants were shown either negative or positive videos of how people managed their everyday lives when using such monitoring tech. Their reactions were mixed: Some people were grossed out; others saw the potential benefits of the system. Importantly, the study resulted in an open debate in which a diversity of perspectives was explored—in sharp contrast with the scaremongering that the media often presents to the public. In the end, many different opinions and concerns were voiced. 

In another study we conducted, which investigated concerns over the use of tracking in public, one person said, “Privacy is important. But I would like to know if I was sick and this is a good way to do it.” This sentiment is at the heart of the current contact-tracing dilemma.

Closing words 

My next blog is called “Remote nurturing.” I extol the virtues of all the latest crazes that promote being social and feeling human—pub quizzes, making bread, street concerts, growing vegetables. Now that some of the lockdown restrictions are beginning to ease throughout the world, we can begin to establish a new normal, helping each other out while we gradually discover what it means to be together again—albeit at an indefinite social distance. 


Many thanks to Johannes Schöning for editing the blog entries for this article.

Posted in: Covid-19 on Thu, May 28, 2020 - 1:46:39

Yvonne Rogers

Yvonne Rogers is the director of the Interaction Centre at UCL (UCLIC) and a deputy head of the computer science department. She is interested in how technology transforms what it means to be human.


After the iron horse: Covid-19 responses in education

Authors: Jonathan Grudin
Posted: Wed, May 27, 2020 - 10:25:09

Research from the distant past regains relevance! From 1998 to 2000 and 2017 to 2020, I focused on higher education. Suddenly, some of the early work is again relevant. I’ll describe it here, along with some thoughts arising from conversations with educators, administrators, and my daughters.

Forced by Covid-19 into functioning remotely, the education field quite sensibly seized upon familiar and reliable “digital iron horse” substitutions. Zoom was widely adopted for class meetings in primary and secondary schools that had access to technology, and more broadly for university lectures and final exams in TA-monitored breakout rooms. The limitations of the webcam wave, however, soon steered attention to asynchronous approaches. Mark Guzdial reviewed their uses for remote learning but said that he hopes not to need them again [1].

Twenty years ago, a group I was part of at Microsoft Research deployed streaming-media prototypes to explore real-time and asynchronous remote education. Students and faculty had little time to prepare. Our systems were used in lecture courses at the University of Washington and MIT, and in multisession internal Microsoft training courses, with the results published in CHI, CSCW, and beyond. This was pre-wireless, running on PCs less powerful than your phone. The human side, though, has not changed as much.

Remote live lectures and audience feedback

Studio audiences were devised for early radio comedies and dramas because performers feed off audience energy (studio audiences were also used to record laugh tracks). In March, late-night television hosts had to adjust to the disappearance of live studio audiences. Classroom instructors faced the same challenge.

In “Evolving Use of a System for Education at a Distance” [2], my colleagues and I describe Flatland, a system to explore numerous tools for student feedback. It included video of the presenter, slides including interactive forms, student questions that they could vote to prioritize, a chat panel, Too Fast/Too Slow and Clear/Confusing buttons, hand-raising functionality, and a list of attendees (Figure 1). We reported on feature use and non-use, but high-level observations are more significant today.

Figure 1. Flatland, presenter view. A reminder of what interfaces looked like in 1999!

Here are some key findings:

  • Teaching style matters. The most effective classroom lecturers can have difficulty online. One great instructor’s classroom approach was to race through material and monitor students closely to see where to slow down. Unfortunately, students did not use the Too Fast button. With no feedback telling him when to brake, he finished lectures in half the allotted time. He knew that few students had kept up but could not adjust. Less extroverted lecturers who prepare methodically can fare better.

  • Video is great for connecting but not for feedback. Flatland did not have student webcams. Today, seeing everyone at the beginning of a class can be wonderful, although once class starts, video must be managed carefully to avoid distraction. Surprisingly, video is not good for passive feedback. In a separate controlled study, four-person groups carried out engaging tasks either face to face or connected by high-resolution video. Task performance was similar, but when connected by video, people’s faces were far less expressive. Deadpan images won’t help instructors gauge student reactions.

  • Establish social conventions for student feedback and interaction. Feedback tools have a learning curve to reach consistent use. For example, an instructor who uses the chat channel to greet early class arrivals creates an often unmet expectation that chat will be watched during the lecture. Will a hand icon be used in voting initiated by the instructor, or initiated by students to signal a desire to comment or speak? Can students verbally jump in? Appropriate social conventions must be designed and communicated.

Recorded lectures and flipped classes

We also built asynchronous video systems to support a “flipped classroom” approach, in which students watch a lecture before the class meets and spend class time discussing or working with the material presented. We went further, enabling students to interact before the class (Figure 2). A student watching a prerecorded lecture and slide presentation could contribute to a time-indexed discussion: Questions and comments by previous viewers scroll by in sync with the lecture. Versions of the Microsoft Research Annotation System were used at MIT and the University of Washington, and in internal training courses [3].

Figure 2. Viewers could access the table of contents, discussion, or private notes (lower left). The topic of the comment by a previous viewer that is closest to the current point in the video replay is highlighted, with the full comment text below the slide.
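The central data structure here, comments anchored to points in a video timeline, can be sketched in a few lines of Python. This is my illustrative reconstruction, not the Microsoft Research Annotation System's actual implementation; the class and method names are hypothetical. Keeping the anchors sorted lets the player find the comment nearest the current playback position with a binary search, which is what makes the synchronized scrolling described above cheap to do during playback.

```python
import bisect

class TimeIndexedDiscussion:
    """Comments anchored to timestamps in a lecture video, kept sorted by time."""

    def __init__(self):
        self.times = []     # anchor points, in seconds into the video (sorted)
        self.comments = []  # comment text, parallel to self.times

    def add(self, t: float, text: str):
        # Insert while preserving sort order, so lookups stay O(log n).
        i = bisect.bisect(self.times, t)
        self.times.insert(i, t)
        self.comments.insert(i, text)

    def nearest(self, playback_pos: float) -> str:
        """Return the comment whose anchor is closest to the playback position."""
        i = bisect.bisect(self.times, playback_pos)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self.times)]
        best = min(candidates, key=lambda j: abs(self.times[j] - playback_pos))
        return self.comments[best]

d = TimeIndexedDiscussion()
d.add(120.0, "Why does the proof need this lemma?")
d.add(305.0, "See also chapter 3.")
print(d.nearest(290.0))  # "See also chapter 3."
```

A real system would also carry author, thread, and reply structure per comment, but the time index is the piece that distinguishes this design from an ordinary forum.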

Some key findings from this project:

  • On-demand viewing with pre-class discussion created work for faculty. Instructors had to record lectures without an audience (though today some instructors reuse last year’s lectures without raising eyebrows). When the annotation system succeeded in fostering extensive discussion, faculty had to read it and could struggle to use the subsequent class period effectively: The discussion was over. One professor complained that to use it again, he would have to design different uses for class time.
  • Students generally liked having discussion and personal notes indexed to the video. We also devised advanced approaches for varying playback speed (faster or slower) while maintaining comprehensibility. Faster playback could increase learning by focusing attention. Other advantages: Students can choose when to watch a lecture, and with remote classes they save commuting and walking time.

  • Procrastination. Prerecorded lectures benefit students who watch the video but adversely impact those who do not and thus can’t follow the class discussion. This effect was amplified by our Discussion feature, which also penalized the many who watched the lecture at the last hour, leaving no time to participate in the online discussion. To address this, we modified the system to support the insertion of group tasks [4] and exercises/quizzes [5] during playback, which could be assigned for completion days before the class. Most students preferred questions that popped up unannounced during the lecture.

  • One size does not fit all. A crucial discovery was that each instructor wanted custom features based on their course content and use. Our out-of-the-box system was not flexible enough. An MIT film professor needed two video windows. We were asked to create an annotation playlist capability to group comments together. Terminology was tripped over: After a programming language course had minimal Discussion activity, a student explained, “I had questions, but I didn’t want to discuss C.”

  • Shorter lectures? Long lectures are motivated by the cost of converging an instructor and students in a room, but long lectures are usually less appealing and effective than short lectures. Remote education makes the breaking of long lectures into segments more practical.

Next steps

Here I described video-based systems that most students (but few faculty) said they would sign up for again. Having to respond to Covid-19 is terrible, but it is also an opportunity to reflect on pedagogy and technology. From chalkboard to overhead projector to projected slides—what’s next? A visionary lecture system [6] built by David Kellermann at the University of New South Wales creates a learning community in large courses that include remote students. This crisis is also forcing a close look at assessment. Can we assure integrity in high-stakes remote exams without intrusive video systems? Should we move more rapidly to other means of achieving and measuring learning outcomes?

Education is more than learning. Engagement and motivation, key concerns today, are socially constructed. Whether motivated by competition or collaboration with peers, by discussions between classes, or by a smile from a teacher, students lose such incentives and drivers when the school bubble is replaced by the bubble of home. Seeing my daughters’ athletic awards and graduation ceremonies via video rather than a banquet or assembly is a reminder of the creative challenge in designing effective technology to supplement the natural real-time and face-to-face interactions that people have relied on for millions of years. 


1. Guzdial, M. How I’m lecturing during emergency remote teaching. Blog post, Apr. 6, 2020. 

2. White, S.A., Gupta. A., Grudin, J., Chesley, H., Kimberly, G., and Sanocki, E. Evolving use of a system for education at a distance. Proc. of HICSS 2000. 

3. Bargeron, D. and Grudin, J. As users grow more savvy: Experiences with a multimedia annotation tool. Proc. of HICSS 2004. 

4. LeeTiernan, S. and Grudin, J. Fostering engagement in asynchronous learning through collaborative multimedia annotation. Proc. INTERACT 2001, 472–479.

5. LeeTiernan, S. and Grudin, J. Supporting engagement in asynchronous education. CHI '03 Extended Abstracts on Human Factors in Computing Systems. 2003, 888–889. 

6. Kellermann, D. Transforming the learning experience (video). Campus Connection Summit 2020.

Posted in: Covid-19 on Wed, May 27, 2020 - 10:25:09

Jonathan Grudin

Jonathan Grudin works on support for education at Microsoft. Access these and related papers at under Prototype Systems.


A crash course in online learning

Authors: David Youngmeyer
Posted: Fri, May 22, 2020 - 5:58:53

As in many countries, when New Zealand went into lockdown in response to the Covid-19 pandemic in late March, tertiary education providers had to suddenly move from face-to-face teaching to online learning. Although providers already offered some online courses and had considered the need to expand online learning, the requirement for an immediate about-face while a trimester was in progress was a jolt for teachers and students alike. 

In New Zealand, universities and other tertiary providers responded by halting face-to-face classes for a short period before resuming teaching activities completely online. 

The limited response time frame, along with differing levels of expertise in online teaching, varying expectations among learners, and the shared context of a pandemic lockdown created a unique learning environment. This has required adaptation and flexibility from both teachers and students. Participating in online learning during the pandemic has provided a window into our interaction with technology and each other in a unique social setting. 

While not without issues [1], the online video platform Zoom has shown itself to be a helpful tool that can effectively re-create multiple educational activities, including lectures, tutorials, and lecturer office hours. While Zoom was used pre-pandemic particularly in the business world, many of today’s users have either been introduced to it recently or been required to use the tool in new ways and in a different context [2]. As such, new social behaviors are being produced.

A tutorial via Zoom, for example, mimics an in-person tutorial by allowing for real-time interaction, along with visual, auditory, and written communication. Physical proximity is missing, but users gain the ability to see all other participants on their screen (or at least their names when their video is turned off or unavailable). 

Talking to a classmate while the tutor is addressing the class is mimicked via the private chat function. This is less disruptive when done online, but it is still a distraction. When added to other distractions, such as managing a mobile phone and dealing with people in the same physical space, it has the potential to be a kind of “phubbing” [3] on steroids. It may be that multitasking while participating in an online tutorial is as disruptive and unwelcome as researchers have found the use or mere presence of mobile phones to be during face-to-face interactions [4,5]. 

The ability of users to control their video input—when that functionality is available—adds a new dimension to the traditional tutorial, where a student is either definitively present or absent. Online, a student may be able to see and hear other users yet keep their own appearance private. Why would they do this? Possibly for security reasons, but also because of appearance-related issues (no haircuts under lockdown) or because they want to maintain the privacy of the home environment. Some users add a still photo instead of video, while others seek to control their physical background with a virtual background. Turning off the video creates a new social situation with communication issues related to a lack of visual cues. It is akin to a student joining an in-person tutorial by telephone, while being able to see the other participants.

The online tutorial can also support the sharing of work—such as visual arts projects—where feedback can be given by the tutor and other students. This is achieved by a student sharing their screen and the tutor pointing digitally to elements of the work on-screen. This scenario effectively replicates its counterpart in a physical classroom, with a good amount of real-time interactivity.

It is important to acknowledge that online learning may not be best suited for certain types of courses, such as those requiring hands-on activities and access to specialized equipment. Additionally, it may be problematic for those on the other side of the digital divide [6]. The sudden move online in response to the pandemic, however, has allowed a good deal of learning to continue, as opposed to the alternative of closing shop completely. At the same time, new user behaviors and online social situations have been created that are deserving of further study. 


1. Doyle, P., Mortensen, J., and Clifford, D. The trouble with Zoom. Australian Financial Review. Mar. 24, 2020. 

2. Neate, R. Zoom booms as demand for video-conferencing tech grows. The Guardian. Mar. 31, 2020.

3. Chotpitayasunondh, V. and Douglas, K.M. How “phubbing” becomes the norm: The antecedents and consequences of snubbing via smartphone. Computers in Human Behaviour 63 (2016), 9–18. 

4. Kadylak, T., Makki, T.W., Francis, J., Cotton, S.R., Rikard, R.V., and Sah, Y.J. Disrupted copresence: Older adults’ views on mobile phone use during face-to-face interactions. Mobile Media & Communication 6, 3 (2018), 331–349.

5. Przybylski, A.K. and Weinstein, N. Can you connect with me now? How the presence of mobile communication technology influences face-to-face conversation quality. Journal of Social and Personal Relationships 30, 3 (2012), 237–246.

6. Vishkaie, R. Hit by the pandemic, war, and sanctions: Building resilience to face the digital divide in education. ACM Interactions ‘Covid-19 blog’, Apr. 10, 2020.

Posted in: Covid-19 on Fri, May 22, 2020 - 5:58:53

David Youngmeyer

David Youngmeyer studies design at the University of Waikato. He plans to start Ph.D. research in 2021 in the field of human-computer interaction. He holds an M.A. in communication from the University of Maryland.


Design-led innovation: Lessons from the Scientific Revolution

Authors: Ron Gabay
Posted: Thu, May 21, 2020 - 2:58:42

In recent years, design has established itself as a strategic business asset that helps organizations innovate and grow. Once associated primarily with aesthetics, design has become a measure of user centricity and an engine for creating meaningful experiences and improved business performance. An increasing number of organizations recognize the value of design and seek to interweave design capabilities into their operations. With the ongoing market pressure to stay relevant and competitive through innovation, organizations face the multifaceted challenge of integrating business and design, while also experiencing an unprecedented rate of change. 

By examining other periods of disruption and innovation in human history, we learn that this kind of challenge is not completely new. The Scientific Revolution, much like today’s market, was a period of disruption that brought with it an unprecedented rate of change. Different disciplines and groups, similar to business and design professionals today, started collaborating on topics that at that time were considered distinct and isolated. Throughout the years, research about the Scientific Revolution has produced insights about interdisciplinary work and its role in generating breakthroughs. This article examines these insights in order to provide perspectives on how to interweave business and design in the current disruptive era. In addition, to put these perspectives in context, a critical review of design sprints is offered.

The business value of design

As organizations become data driven and seek to quantify their decision-making processes, collecting data about the business value of design is key. Today, we not only feel that good design is good business, but we also have the data to prove it. This makes design compatible with the modern, data-driven business world. Consultancy firm McKinsey conducted one of the most extensive studies on the financial value of design in 2018, collecting more than 2 million pieces of financial data and recording more than 100,000 design actions [1]. By correlating design actions (e.g., appointing a chief design officer or establishing a user satisfaction matrix) and financial performance (e.g., revenue or shareholder returns), McKinsey found correlations between investments in design and improved financial performance (Figure 1). 

Figure 1. Source: McKinsey Design Index, The Business Value of Design Report, 2018 [1].

Another study conducted by the Design Management Institute (DMI) tracked the share value of design-centric companies. The DMI monitored investments in design and their impact on companies’ performance relative to the S&P 500 Index. Over a 10-year period, the Design Value Index (DVI) showed that design-centric companies have outperformed the S&P 500 Index by more than 200 percent [2] (Figure 2). 

Figure 2. Source: DMI Design Value Index, Design Management Institute [2].

The data-driven understanding of the business value of design amplifies the question of how best to integrate business and design. To answer this question, one must first gain a deeper understanding of business and design, in particular their differences. 

How business and design are different

The first dimension in the multifaceted challenge of integrating business and design in today’s world is their unique and distinct nature. Table 1 highlights some of the key differences between business and design. By examining various criteria, it is clear that business and design differ in many aspects. 

Table 1. Business and design comparison [3].

While the comparison may simplify the complexity and interconnectedness of business and design, its aim is to provide a viewpoint, a viewpoint that emphasizes that business and design are opposing yet complementary in nature. Moreover, the table reveals that business and design are also incommensurable (do not share a common measure). This incommensurability presents itself through different terms, methods, and processes. Thus, any attempt to lead business and design integration must address their unique characteristics and incommensurability. The failure to recognize and properly address the lack of common measures may lead to tension, misunderstanding, and even hostility between business and design professionals. To learn how to deal with incommensurability and prevent it from hindering innovation, it is worthwhile to investigate the Scientific Revolution.

The Scientific Revolution and incommensurability 

The Scientific Revolution, which took place during the 16th and 17th centuries, was a disruptive period that generated many breakthroughs. During this period, new knowledge about physics, biology, mathematics, astronomy, and chemistry emerged, transforming how people view the world. In his influential book The Structure of Scientific Revolutions [4], Thomas Kuhn, an American philosopher of science, reviews the history of science and shares his observations about different scientific theories and their relationship to one another. He identifies the presence of incommensurability where he cannot compare one scientific theory to another or trace developments between them. 

The inability to compare or see how one theory relates to the next is rooted in the fact that different scientific groups worked independently and used different terms and methods. The lack of common measures created a situation where groups were neither able to communicate nor exchange valuable information. Moreover, because of this incommensurability, relevant information, and even solutions that could have helped other groups create breakthroughs, were simply overlooked.

Kuhn’s Loss—a significant barrier for design-led innovation 

According to Kuhn, incommensurability became a significant barrier to breakthroughs and led to scientific stagnation. The main reason for this stagnation was that different scientific groups were simply not able to communicate. Groups could not understand and benefit from previous knowledge or perspectives due to inconsistency and unfamiliarity with one another’s terms, methods, or contexts. This notion of not being able to funnel and leverage existing knowledge toward a new paradigm is referred to as Kuhn’s Loss.

Business and design professionals, much like different scientific groups during the Scientific Revolution, have their unique terminologies and methods. For example, as the table above shows, business perceives an individual as a customer who is defined by terms such as age, gender, or income. Design, on the other hand, views customers as humans and studies their personality, needs, and desires. Overall, as design-led innovation becomes a new paradigm founded on business and design integration, facing incommensurability is inevitable. Therefore, organizations that wish to maximize the powers of design-led innovation must address the concept of Kuhn’s Loss.

No-Overlap Principle 

To avoid or minimize the negative effect of incommensurability, Kuhn developed the No-Overlap Principle. The principle suggests that terms must not have an overlapping meaning or reference, and that when one term is used, it must be explained with its relevant context. For instance, “to learn the term liquid, one must also master the terms solid and gas” [5]. In the world of innovation, a No-Overlap Principle could be valuable for various reasons.

First, today’s world presents complex challenges that require companies to adopt interdisciplinary approaches. For business and design teams to work effectively, they need to be able to communicate and understand each other. For example, exchange value (revenue, costs, and profits) needs to be explained alongside use value (experience, emotions, and desires). Alternatively, customer or market analysis ought to be accompanied by a qualitative evaluation that includes personality types, cultural backgrounds, and traditions. Second, in a world that constantly changes and in which technology is easily accessible, human capital becomes the most significant source of innovation. The No-Overlap Principle supports some of the basic human needs from Maslow’s hierarchy: recognition and respect [6]. In essence, the principle expresses to business and design professionals that they are both essential contributors to innovation and success. 

A critical review: Design sprints

With the increasing attention that the business value of design receives, companies seek to interweave design into their operations. One way that design is being introduced to companies is through the design sprint, a five-day framework for solving challenges by applying design tools and methodologies [7]. A design sprint uses design activities such as research, user journey creation, sketching, and prototyping to efficiently generate meaningful concepts. Design sprints are becoming a popular multidisciplinary method for design-driven innovation. Since the goal of a design sprint is to leverage design methods and processes in the business environment, we must ask ourselves whether design sprints satisfy the No-Overlap Principle. 

The good. Design sprints introduce design to organizations and non-designers in a bite-size, five-day process. A sprint is usually a controlled environment where design activities and methodologies can be introduced to various stakeholders. The fact that sprints take place in a neutral environment (usually not where daily business meetings take place) creates the opportunity to generate the right physical and psychological setting for learning, experimenting, and engaging with design for the first time. The physical and psychological distance from daily routines can help loosen some of the participants’ deeply ingrained business mindsets and defuse tensions or preexisting hostility.

Design sprints also provide organizations and non-designers a degree of certainty about the design processes, steps, and expected outcomes. The following is an example from the website of Google Ventures (creators of the design sprint):

On Monday, you’ll map out the problem and pick an important place to focus. On Tuesday, you’ll sketch competing solutions on paper. On Wednesday, you’ll make difficult decisions and turn your ideas into a testable hypothesis. On Thursday, you’ll hammer out a high-fidelity prototype. And on Friday, you’ll test it with real live humans [8].

The five-day, step-by-step framework corresponds with how businesses operate and assess processes. Overall, design sprints can be an effective way to introduce design to non-designers because they simplify design in a manner to which non-trained participants can relate. In other words, a design sprint does satisfy, to some extent, the No-Overlap Principle through familiarity with terms and methods. 

The bad. Design sprints have a very clear set of rules and milestones that make design relatable and comprehensible to non-designers. However, design sprints also erode some core qualities of professional design, which, in turn, minimizes design’s potential impact. First, design sprints portray design as a linear process. Much like a recipe, a sprint suggests that if only the predefined steps are followed, an innovative Michelin-star dish awaits at the end of day five. Design sprints give participants the false impression that design is a clean, short process when in reality it is often iterative, messy, and long. 

Moreover, the design sprint’s gate-like process pushes participants to explore and arrive at meaningful directions within a short, predefined period. This future-oriented exploration is complex and involves many unknown territories where “mistakes” are inevitable. Yet in the search for innovation, mistakes are actually experiments. This notion, coupled with the usual business pressure to deliver, could harm and limit the potential impact and quality of design. Essentially, during the sprint, non-designers are asked to produce quality design work by engaging in activities that are often new and counterintuitive to them. We must question whether participants will truly immerse in exploration and will be able to refrain from the old business habit of associating potential mistakes (experiments) with failure.

Overall, a design sprint may satisfy the No-Overlap Principle by introducing design terms and methodologies to non-designers. Yet it fails to provide the broader context and deep understanding of design. If an organization is not design-centric, it will not be able to develop or scale the outcomes of a design sprint, as exceptional as they might be, in a meaningful manner. Essentially, running design sprints without a broader integration of design into business will limit an organization’s ability to innovate, because doing so alters and reduces design to its lowest common denominator.

Conclusion: The future of business and design integration

Design-led innovation has established itself as a catalyst for growth and differentiation. Across the globe, organizations seek to build and scale design capabilities to tackle current problems and to shape the future. To unleash and interweave the powers of business and design as a source for innovation, it is important to recognize the extent to which they differ and that they do not share a common measure. Interdisciplinary frameworks such as design sprints do help in democratizing design by reducing design to a level that non-designers can relate to. Nevertheless, reducing design may actually limit its puzzle-solving powers and long-term integration within the business context. I believe that organizations that wish to become design-centric and reap the fruits of interdisciplinary work must understand that the lack of common measure between business and design does not imply shifting toward finding the lowest common denominator. Instead, organizations should hire and invest in specialized professionals who can continue to perfect their skills while addressing the concept of Kuhn’s Loss. 

The key is finding T-shaped individuals—experts in their own domain who are able to collaborate across disciplines or functions. These individuals will help organizations avoid the trap of overspecialization, which leads to the loss of flexibility, openness, and curiosity—all cornerstones of innovation. Ultimately, within the organizational structure, organizations must find the right people and incentivize them to work together because, as C.P. Snow noted in his famous 1959 book The Two Cultures, “The clashing point of two subjects, two disciplines, two cultures—of two galaxies, so far as that goes—ought to produce creative chances. In the history of mental activity, that has been where some of the breakthroughs came” [9].


1. The business value of design. McKinsey report, 2018.

2. Rae, J. Design value index exemplars outperform the S&P 500 index (again) and a new crop of design leaders emerge. dmi: Review - Design Management and Innovation 27, 4 (2017), 4–11.

3. Gabay, R. Breaking the wall between business and design—Becoming a hedgefox. Design Management Journal 13, 1 (2018), 30–39.

4. Kuhn, T.S. The Structure of Scientific Revolutions (Second Edition, Enlarged ed.). The Univ. of Chicago Press, 1962.

5. Oberheim, E. and Hoyningen-Huene, P. The incommensurability of scientific theories. The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), E.N. Zalta, ed.

6. Maslow, A.H. A theory of human motivation. Psychological Review 50, 4 (1943), 370–396.

7. Knapp, J., Zeratsky, J., and Kowitz, B. Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days. Bantam Press, London, 2016.

8. The design sprint. Google Ventures (GV) official website.

9. Snow, C.P. The Two Cultures and the Scientific Revolution. Cambridge Univ. Press, 1959.

Posted: Thu, May 21, 2020 - 2:58:42

Ron Gabay

Ron Gabay is the head of innovation and venture design at JoyVentures. He builds and manages the foundation for venture creation by interweaving technology, science, and design. His passion is to enable cross-disciplinary collaborations that foster curiosity, human-centricity, and creativity while applying a commercial lens. He holds a BA in industrial design and an MBA.


@Yoav Pridor (2020-06-02)

Great stuff Ron. Very interesting.

Evaluating immersive experiences during Covid-19 and beyond

Authors: Anthony Steed, Francisco Ortega, Adam Williams, Ernst Kruijff, Wolfgang Stuerzlinger, Anil Ufuk Batmaz, Andrea Won, Evan Suma Rosenberg, Adalberto Simeone, Aleshia Hayes
Posted: Tue, May 19, 2020 - 4:52:13

The Covid-19 pandemic has disrupted life as we once knew it. The safety and well-being of people are paramount, and the human-computer interaction (HCI) field is no exception. Most universities have closed non-critical research labs. With that closure and the student populations having left campus, in-person user studies have been suspended for the foreseeable future. Experiments that involve specialized technology, such as virtual and augmented reality headsets, create additional challenges. While some head-mounted displays (HMDs) have become more affordable for consumers (e.g., Oculus Quest), there are still multiple constraints for researchers, including the expense of high-end HMDs (e.g., Microsoft HoloLens), high-end graphics hardware, and specialized sensors, as well as ethical concerns around reusing equipment that comes in close contact with each participant and may be difficult to sterilize. These difficulties have led the extended reality (XR) community (which includes the virtual reality (VR) and augmented reality (AR) research communities) to ask how we can continue to practically and ethically run experiments under these circumstances. Here, we summarize the status of a community discussion of short-term, medium-term, and long-term measures to deal with the current Covid-19 situation and its potential longer-term impacts. In particular, we outline steps we are taking toward community support of distributed experiments. There are a number of reasons to look at a more distributed model of participant recruitment, including the generalizability of the work and potential access to target-specific, hard-to-reach user groups. We hope that this article will inform the first steps toward addressing the practical and ethical concerns for such studies [1].

There are currently no strong ethical guidelines for designing and running experiments in VR and AR for HCI. Most VR and AR studies in HCI are conducted in research institutions, where researchers must follow local laws and the directions of the local institution’s ethics board. VR and AR systems allow researchers to control the virtual environment and collect detailed user data in ways that might not be familiar to participants, so careful consideration of participant privacy is especially important. Further, some experiments might require direct supervision by an experimenter while the user interacts with the virtual environment, for example, to watch for behaviors that circumvent the objectives of the experiment. The rules and laws for remote data collection and direct supervision of experiments, which can vary between countries and regions, thus become an issue.

Short-term solution: Use lab personnel and infrastructure

The most immediate solution for performing remote experiments is for labs to collaborate and provide participants for each other’s experiments. The subjects are likely to be lab members, or people associated with the labs in some manner, who have the correct equipment at hand.

A well-known concern for most work with human subjects is the issue of working with populations of convenience. This problem can be particularly acute in this case. Groups of lab members may have too much knowledge about the field to react “naturally.” They may guess the experimenter’s aims and intentionally or unintentionally behave in accordance with or in opposition to them. They may also have strong existing opinions about interaction or visualization techniques, which can bias the outcomes. Finally, their experience with XR—either AR or VR—may make it difficult to generalize their data to the general population. Specifically, their expertise in the use of these platforms can be a confound to the outcomes of usability testing new tools and experiences.

However, there are also circumstances in which distributed studies across labs could be better than the usual population of convenience. Rather than a mix of participants who are familiar with XR and new to XR, a population of lab members would all be familiar with the equipment. This might be more generalizable to populations who might actually engage with the research. Assuming that lab members are in general less susceptible to simulator sickness (e.g., through self-selection), it could also reduce the risk of losing participants to this affliction. In short, it’s important to carefully consider which tasks can be best run with experienced participants.

Medium-term solution: Recruit external users who have the necessary hardware

In order to develop a more sustainable participant pool, a more organized effort is needed to start recruiting outside of the research labs. This phase is still limited to participants who have the equipment required for a given user study. However, given that six million people currently own a VR headset, there is clearly the potential to reach out to these individuals. Unfortunately there are no easy-to-use tools to run VR experiments online, and there are various technical issues with implementing and distributing experiments to consumer devices. A few early works, though, have demonstrated the possibility of controlling enough aspects of the design to produce usable results (e.g., [2,3]).

An initial step would be a website that allowed participants to register for recruitment and research labs to advertise their experiments. This system could use crowdsourcing websites to recruit participants, who would then be redirected to the site. This in itself brings many challenges. Simultaneous efforts by different regions (e.g., the EU, the U.K., the U.S., Canada, Japan, and Brazil) would be required to grow the effort by collaborating and seeking regional funds. For the site to be successful, different regional needs would need to be considered. For example, a study approved by an ethics board in the U.S. might not be acceptable to a panel in another country. This involves not only ethics but also local laws, such as the European GDPR.

Long-term solutions: Generate pools of users through funded hardware distribution

While the medium-term solution will improve the way we do remote distributed experiments, one suggestion for the long term is to provide equipment to a pool of subjects. Based on the experiences learned in the previous phase, this solution will continue to improve the tools and methods created, but it needs to identify ways of finding participants who can be lent equipment, in the hope that they then use it to participate in multiple experiments. This in itself is an expensive goal, but we believe it is possible because the equipment might become cheaper. Some governmental scientific funding bodies (e.g., the National Science Foundation in the U.S.) could provide funds to acquire the required infrastructure to expand the pool. This would offer an opportunity beyond Covid-19. First, it would allow us to have expert users test a VR application while also having access to naive users when needed. It would also enable us to validate research results with different subject pools from multiple regions, and would remove the need to bring participants into the lab—unless this were required due to specialized hardware needs or other restrictions imposed by an experiment’s design.

In other words, we could seed a community of participants with specialized technologies to allow for a more diverse subject pool. This would ideally look like the distribution of HMDs or similar technologies to volunteers around the world, who in exchange would agree to participate in experiments. These seeded participants would be registered through a citizen science crowdsourcing site. The benefit of having these seeded users would be a new level of diversity among participants. The distributed HMDs will be new technologies (at least for some time) and these participants would represent novice users. The research lab would still be able to provide incentives (e.g., additional compensation) to maintain interest among these users. This style of crowdsourcing has some precedents. Some people make a portion of their income from crowdsourcing sites such as Amazon Mechanical Turk and Prolific or by participating in paid medical trials. In some cases, these participants are not ideal test subjects, because they may be very experienced with common experimental tasks or situations (e.g., overfamiliarity with the “trolley problem” in psychological experiments). However, having an official pool of representative, well-compensated participants could also address issues of undercompensation and unrepresentative samples.

Ethical considerations

While ethical (and legal) considerations may vary depending on the country, review board, and institution, the following are some points to consider:

  • Pooling students to run each other’s experiments. While this seems to be an attractive idea, there is the problem that faculty members might induce students to take part in the pool. Thus, it is essential to have strong requirements about there not being an inducement to take part; for example, it cannot affect grades, funding, or progress toward degrees. A possible solution is to add a “non-inducement” clause while checking that it is enforced.

  • Desktop sharing. If desktop screen sharing is used (e.g., to make it easier for the experimenter to control the remote apparatus), this poses potential ethical risks for the participants. For example, the experimenter may then see personal notifications. One potential solution is to use only window sharing for the VR application, if this is viable. Still, it is vital that these risks are assessed and correctly managed by the experimenter and are mentioned in the ethics review process.

  • Running studies on social VR platforms. This poses several data-processing problems, as the confidentiality and security of emerging platforms are not assured. The platforms may require personal information to sign up, and as users we can’t be sure what data is being collected. While these risks may be alleviated through careful design (e.g., recruiting participants within the platform), they pose new concerns compared to, say, collecting data on social media platforms.

  • To that end, open platforms do exist and are gaining ground. Platforms that can be hosted on a server secured by the operator, such as Mozilla Hubs, or custom-made solutions solve many data-protection problems. We expect exemplar or template systems to emerge in the next couple of months.

  • While using videoconferencing and screen sharing to assist with remotely operating equipment is attractive, it presents new challenges. In particular, sessions may be hosted or relayed by servers in different countries and may not be secure. This is one area where institutions may have policies driven by contract agreements with existing providers. Screen sharing presents specific problems because of the potential risks of personal-data disclosure. Therefore, it is important to be aware of such limitations before designing an experiment.

Health considerations

Safety issues may constrain some types of experiments. For example, while labs are often wide open spaces, domestic environments used for VR might be small and/or cluttered. While applications can be coded to fall within the “guardian” space that the user configures for the system, this might change. Thus, while games that encourage exaggerated movements are commonplace, we suggest not involving dynamic expansive gestures. Further, we suggest making sure that the experiment operates within a modest amount of space, and if, say, locomotion is important, that this is a key filter in the recruitment of participants.
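As a rough illustration of the recruitment filter suggested above, the sketch below screens participants by whether their reported play area meets a minimum footprint. The dimensions, threshold, and function names are hypothetical; a real study would query the headset’s configured guardian/boundary through its SDK rather than ask participants for measurements.

```python
# Assumed minimum footprint for a standing task; adjust per experiment.
MIN_WIDTH_M, MIN_DEPTH_M = 1.5, 1.5

def play_area_sufficient(width_m, depth_m,
                         min_width=MIN_WIDTH_M, min_depth=MIN_DEPTH_M):
    """Return True if the reported play area fits the required rectangle.

    Both orientations of the room are accepted: we compare the sorted
    room dimensions against the sorted requirements.
    """
    a, b = sorted((width_m, depth_m))
    ma, mb = sorted((min_width, min_depth))
    return a >= ma and b >= mb

play_area_sufficient(2.0, 1.6)  # meets the assumed 1.5 m x 1.5 m requirement
play_area_sufficient(1.0, 3.0)  # too narrow in one dimension
```

A check like this would run during recruitment or at application startup, so that participants with cramped or cluttered spaces are filtered out before any movement-heavy task begins.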

Another issue, especially if hardware is shared among participants, is “hardware quarantine.” If a headset is used by only a single person, hygiene may be less of an issue. However, once hardware is shared, obviously hygiene has to be taken into account. Hardware should always be cleaned thoroughly between participants, but extra precautions will need to be taken to limit the possibility of, for example, spreading an infection. Using disinfectant wipes and additional masks that the user can wear underneath an HMD can be valuable, but they only offer limited protection. Recently, decontamination systems have become available that make use of ultraviolet light and nano coatings that offer additional benefits, yet they will also not cover every nook and cranny. Current research suggests that viable SARS-CoV-2 may no longer be detectable on some surfaces after 72 hours [4]. As such, cycling through HMDs that have been put away for some time may be an additional precaution. Currently, a combination of hygienic measures seems most appropriate, and users should always be informed about potential risks (e.g., in informed consent forms). Not doing so seems unethical.

Validity of the results

Running remote experiments makes keeping a uniform apparatus difficult. In a lab study, typically there is only a single apparatus. Yet remote participants might have varying hardware and, if not explicitly requested by the experimenters, even different headsets. Clearly, controlling for the uniformity of setups helps in isolating other factors that could affect the results. If, in order to reach a sufficiently large number of participants, it becomes necessary to relax the conditions for exclusion (e.g., by allowing users with different headsets to participate), it remains an open question how to consider the validity of results obtained in such a manner.

It can be argued that one of the goals of this type of research is to devise novel methods that can then be applied to a wide range of setups, which could then be expected to continue providing comparable performance, as indicated by the empirical results. However, some types of experiments can be too dependent on the specific combination of headset and controllers. Thus, if remote experiments with heterogeneous hardware become an acceptable platform to run experiments, how much divergence in hardware is acceptable? For example, an interaction technique designed to be operated with the now-standard trigger button of a specific VR controller might still work the same way with a controller designed by another maker, even though the ergonomics of the device will differ. Promoting a culture of study replication might provide the solution to these challenges.

Remote experiment design guidelines

Participants can be recorded completing the experiments over Zoom or any videoconferencing platform. However, these platforms have associated security risks. Another approach is for researchers to observe participants through a videoconferencing platform, without recording. This would also provide more control of the consistency of the procedure between participants. Because recording a participant and their home adds additional concerns for privacy, researchers should weigh the benefits of recording versus observing the participant in real time as they participate in the research study. Labs can still use research assistants to run consistent studies remotely without recording a participant’s home, personal space, and/or unwilling family members in their environment.

Some general remote experiment design recommendations can be found in [5]. Researchers should remove or minimize as many accessibility barriers as possible. This can be achieved by adding feedback systems such as text, voice, and interface prompts. Researchers should also make sure that the language used is accessible to their intended audience. Without a live audio or video connection, it might not be possible to further explain instructions after an experiment has started. Reminders can help to ensure that participants complete remote experiments. We also suggest that experimenters take cultural and regional differences into consideration. For instance, for a VR/AR driving simulator designed for left-hand driving countries, user performance and experience might vary in right-side driving contexts. Similarly, for VR text-entry studies, authors might consider the many different keyboard layouts around the world.

The required mental workload of the participant should be reduced to a minimum. This can be achieved by removing set-up steps and automating parts of the experiment. Screen or input switching during the experiment should be limited. If possible, all surveys should be displayed within the HMD, or at least in an application that is automatically started on the desktop. Since current text-entry UI solutions in VR are not as efficient as keyboards, experimenters might want to choose drop-down menus, sliders, radio buttons, or even voice-recording options to collect survey data in virtual environments. In a perfect setup, the entire experiment would be run from one executable, with consent, instructions, task, and post surveys all completed while wearing the HMD.

Experimenters should collect data on the device used, the specs of that device, the computer used, and the frame rate at which the experiment ran (ideally not just the mean, but also the standard deviation or other meaningful statistical information). The instructions of the experiment could be recorded in advance as a video or an animation in VR and shown to the participants. This could also include any training or context necessary to complete the experiment.
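The frame-rate logging suggested above can be sketched as follows. The frame-time values here are hypothetical stand-ins; a real experiment would record them from the rendering loop and store the summary alongside the device and system specs.

```python
import statistics

def frame_rate_stats(frame_times_ms):
    """Summarize per-frame render times (milliseconds) into frame-rate
    statistics worth logging: not just the mean, but also the spread,
    so reviewers can see whether performance was stable."""
    fps = [1000.0 / t for t in frame_times_ms if t > 0]
    return {
        "mean_fps": statistics.mean(fps),
        "stdev_fps": statistics.pstdev(fps),
        "min_fps": min(fps),
        "max_fps": max(fps),
    }

# Example: a run that mostly holds ~90 fps, with one long frame (22.3 ms)
stats = frame_rate_stats([11.1, 11.2, 11.1, 22.3, 11.0])
```

Reporting the standard deviation and minimum alongside the mean makes it visible when a participant’s hardware only intermittently kept up, which a mean alone would hide.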

Before launching the experiment, researchers should solicit feedback from their own (and potentially other) labs. Ideally, experiments should be piloted with participants from outside the research group. Such feedback will help solve any setup challenges or other potential sticking points during the experiment and highlight potential safety concerns. This will also allow for a more accurate prediction of experiment completion times, which can then be communicated to participants. During such pilot studies, any applicable screen-capturing protocol can also be tested.

If the experiment is to be run over a video call, the connection speed for both parties should be tested. This will inform whether it is possible to record the video on the participant’s computer or the experimenter’s computer, as needed.

Any data logged by the application should ideally then be automatically uploaded over the network to avoid using methods that could de-anonymize data, such as asking participants to send log files to the experimenter via email. For this, the system needs to be able to detect whether or not the upload of the data has happened successfully. Should it fail, the application would indicate where these files are stored and how to upload them anonymously, for example, via a file-upload web form that does not collect any data besides the log file. This, however, opens the potential for “fake data” to be uploaded maliciously. Thus, experimenters should consider solutions to verify that the data being uploaded is genuine. Experimenters should also consider different end-to-end encryption methods to protect the participants’ data.
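One minimal, illustrative way to support the integrity check described above is to bundle each anonymized log with a cryptographic digest that the receiving server recomputes on arrival. The helper names and session token below are hypothetical, and a simple digest only detects accidental corruption; it does not replace end-to-end encryption or stronger authenticity measures (e.g., signed uploads) against deliberately faked data.

```python
import hashlib
import json

def package_log(trial_records, session_id):
    """Bundle anonymized trial data with a SHA-256 digest so the
    receiving server can verify the upload arrived intact.
    session_id is assumed to be a random token, not a participant
    identifier, preserving anonymity."""
    payload = json.dumps(
        {"session": session_id, "trials": trial_records},
        sort_keys=True,  # stable serialization, so the digest is reproducible
    ).encode("utf-8")
    return {"payload": payload,
            "sha256": hashlib.sha256(payload).hexdigest()}

def verify_upload(received):
    """Server-side check: recompute the digest over the received bytes
    and compare with the digest sent alongside them."""
    return hashlib.sha256(received["payload"]).hexdigest() == received["sha256"]

bundle = package_log([{"trial": 1, "time_ms": 742}], session_id="a1b2c3")
```

If `verify_upload` fails, the application can fall back to the anonymous file-upload form described above, with the stored file and its digest.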

With this new style of remote experimentation, it is also advisable to ask participants questions about their entire experience of the experiment after completion, to allow for continuous improvement. These questions could include inquiring about their overall satisfaction, levels of immersion, the ease of use of the system, and how intuitive or clear the process was. More suggestions for testing for remote experiments can be found in [6].

While VR and AR HMDs are more popular than ever, they are currently not as widely used as TVs, monitors, or smartphones. As a result, most research on such systems is being conducted in research laboratories within specially designed environments. For instance, tracking base stations may need to be positioned according to the purpose of the experiment to achieve the needed accuracy. Windowless laboratories may be necessary to avoid incoming sunlight and increase tracking reliability. Meeting such environmental constraints might not be possible in participants’ homes, so design decisions may be different in distributed experiments. Experimenters have to consider these differences and design their experiments accordingly.

Remote experiment open questions

  • What is the best protocol for the transmission of projects and experimental data?

  • How can payment be sent to participants?

  • How much of a limiting factor is participants’ bandwidth for streaming video and results?

  • What are the ethical considerations needed to ensure the privacy of participants’ data?

  • What is the trade-off between acceptable quality of a recording versus its size?

  • Should we expect that participants have space on their computers to record to?

  • Is having experimental results streamed to computers outside of a lab a potential ethics issue?

  • What is the best way to monitor for simulator sickness when the experimenter is not present?

Advice to reviewers

Regardless of how this new wave of human-subjects experiments is handled, reviewers must be aware of the changing nature of article submissions. In response to the concerns around user studies during Covid-19, many conferences in HCI and VR/AR have sent out calls for participation highlighting the appropriateness of contributions focused on systems, design, methodology, or literature reviews, rather than on user studies. It falls upon the program chairs to communicate these new criteria down the line to program committee members and reviewers. This change in mindset will be a collective process across all of HCI and AR/VR. Authors must clearly describe in their submissions the exact way the experiment was administered and also discuss the pool from which the participants were recruited. For example, if an experiment predominantly uses members of VR/AR labs or enthusiasts, this has the potential to distort the outcomes, due to existing biases or perceptions that come from frequent exposure to VR/AR systems, relative to participants who have rarely or never experienced VR/AR. Such a biased participant pool could thus represent a limitation for a study. Yet, given Covid-19, these limitations may not invalidate the work being presented, as long as the authors are clear in the description of the participant pool and the reviewers are encouraged to work with the understanding that this is currently one of the very few options for running VR/AR studies. In other words, transparency of the process is one of the best ways that we can usher in this new wave of publications.


In situations such as the current pandemic, the use of the short-, medium-, and long-term solutions discussed earlier enables the fields of HCI and XR to continue to forge ahead with experimental work. A secondary benefit of using members of other labs in the community is that it increases the amount of transparency in the field by making people more aware of the exact nature of each other’s experiments. It could potentially improve the external validity of experiments by increasing the diversity of platforms and participants used for a given task. At the very least, Covid-19 has strengthened this community and inspired new collaborations between researchers. While there are both ethical and practical concerns for distributed user studies, solutions for XR will likely be useful for other areas of HCI and, indeed, any field that relies on human experimentation. This article provides a starting point. We hope other articles will follow with more specific information, either expanding topics presented here or offering new ideas.


1. We invite people to join the discussion. The current community is formed of researchers from the IEEE VR community, specifically a discussion launched at the online conference in March 2020. Email to be added to the discussion.

2. Steed, A., Friston, S., Lopez, M.M., Drummond, J., Pan, Y., and Swapp, D. An ‘in the wild’ experiment on presence and embodiment using consumer virtual reality equipment. IEEE Transactions on Visualization and Computer Graphics 22, 4 (2016), 1406–1414.

3. Ma, X., Cackett, M., Park, L., Chien, E., and Naaman, M. Web-based VR experiments powered by the crowd. Proc. of the 2018 World Wide Web Conference. ACM, 2018, 33–43.

4. van Doremalen, N., Bushmaker, T., Morris, D.H., Holbrook, M.G., Gamble, A., Williamson, B.N., Tamin, A., Harcourt, J.L., Thornburg, N.J., Gerber, S.I., Lloyd-Smith, J.O., de Wit, E., and Munster, V.J. Aerosol and surface stability of SARS-CoV-2 as compared with SARS-CoV-1. New England Journal of Medicine 382, 16 (2020), 1564–1567.

5. Cooper, M. and Ferreira, J.M. Remote laboratories extending access to science and engineering curricula. IEEE Transactions on Learning Technologies 2, 4 (2009), 342–353.

6. Nickerson, J.V., Corter, J.E., Esche, S.K., and Chassapis, C. A model for evaluating the effectiveness of remote engineering laboratories and simulations in education. Computers & Education 49, 3 (2007), 708–725.

Posted in: Covid-19 on Tue, May 19, 2020 - 4:52:13

Anthony Steed

Anthony Steed is head of the Virtual Environments and Computer Graphics group in the Department of Computer Science at University College London. He has over 25 years’ experience in developing virtual reality and other forms of novel user interface. He received the IEEE VGTC’s 2016 Virtual Reality Technical Achievement Award.

Francisco Ortega

Francisco R. Ortega is an assistant professor at Colorado State University and director of the Natural User Interaction Lab (NUILAB). His main research area focuses on improving user interaction in 3D user interfaces by eliciting (hand and full-body) gesture and multimodal interactions, developing techniques for multimodal interaction, and developing interactive multimodal recognition systems. His secondary research aims to discover how to increase interest for CS in non-CS entry-level college students via virtual and augmented reality games.

Adam Williams

Adam Williams is a Ph.D. student in computer science at Colorado State University. His research is on multimodal inputs for augmented reality, specifically, user-elicited gesture and speech interactions. His research goal is to create novice-friendly interactions for 3D learning.

Ernst Kruijff

Ernst Kruijff is a professor of human-computer interaction at the Institute of Visual Computing at Bonn-Rhein-Sieg University of Applied Sciences and adjunct professor at SFU-SIAT in Canada. His research looks at the usage of audio-tactile feedback methods to enhance interaction and perception within the frame of AR view management, VR navigation, and hybrid 2D/3D mobile systems.

Wolfgang Stuerzlinger

Wolfgang Stuerzlinger is a full professor at the School of Interactive Arts + Technology at Simon Fraser University in Vancouver, Canada. His work aims to gain a deeper understanding of and to find innovative solutions for real-world problems. Current research projects include better 3D interaction techniques for virtual and augmented reality applications, new human-in-the-loop systems for big-data analysis, and the characterization of the effects of technology limitations on human performance.

Anil Ufuk Batmaz

Anil Ufuk Batmaz has a Ph.D. in biomedical engineering from the University of Strasbourg. He is currently affiliated with Simon Fraser University as a postdoctoral fellow working on human-computer interaction and virtual and augmented reality.

Andrea Won

Andrea Stevenson Won is an assistant professor in the Department of Communication at Cornell University. She directs the Virtual Embodiment Lab, which focuses on how mediated experiences change people’s perceptions, especially in immersive media. Research areas include the therapeutic applications of virtual reality, and how nonverbal behavior as rendered in virtual environments affects collaboration and teamwork.

Evan Suma Rosenberg

Evan Suma Rosenberg is an assistant professor in the Department of Computer Science and Engineering at the University of Minnesota. His research interests are situated at the intersection of virtual/augmented reality and HCI, encompassing immersive technologies, 3D user interfaces, and spatial interaction techniques.

Adalberto Simeone

Adalberto L. Simeone is an assistant professor in the Department of Computer Science at KU Leuven in Belgium. His research lies at the intersection of 3D interaction and virtual reality with human-computer interaction. He is motivated by a deep interest in making immersive experiences more accessible to everyone.

Aleshia Hayes

Aleshia Hayes is an assistant professor at the University of North Texas. She is passionate about developing, evaluating, and iterating on technology used for learning in formal and informal environments. She runs the SURGE XR Lab where she has led interdisciplinary research with partners from manufacturing, defense, psychology, and education.

On physical and social distancing: Reflections on moving just about everything online amid Covid-19

Authors: Mikael Wiberg
Posted: Mon, May 18, 2020 - 10:55:04

On March 11, the World Health Organization (WHO) declared that the spread of Covid-19 constituted a pandemic. They stated that the virus was not just a threat to public health, but also a crisis that would affect every sector of public life: “All countries must strike a fine balance between protecting health, minimizing economic and social disruption, and respecting human rights” [1].

In response to the pandemic, the WHO recommended physical distancing [2], maintaining a physical distance between people and reducing the number of times people come into close contact with each other. The WHO proposed the term physical distancing as opposed to social distancing, due to the fact that it is a physical distance that prevents transmission. With people practicing physical distancing, they proposed that people could remain socially connected via technology. 

Despite this push for physical distancing, social distancing has become the more common term, and people are now trying to maintain social connectivity via digital technology. As a result of rapidly moving most of our social relations—work, education, family—online, videoconferencing systems are now being used more than ever.

But what does it mean to move so much of our social contact online? Is an online meeting the same as a face-to-face meeting? Probably not. Is videoconferencing with your friends and family the same as being together physically? Probably not. And what about the term social distancing when we are in fact only recommended to practice physical distancing—while maintaining our social connections over the Internet? What are we to make of physical and social distance here? Though the differences between the two may at first glance appear obvious, something more complicated seems to be unfolding during the spread of Covid-19 and the lockdowns that most of us are having to come to terms with.

HCI, I want to suggest, offers a way to think about the complexities that the two forms of distance provoke. We have more than three decades of research on face-to-face interaction, and have conducted research on the difference between face-to-face and online interaction. In addition, there is a whole strand of research in sociology, social psychology, and environmental psychology on the important role of physical closeness—from basic communication aspects and the role of physical places for being together, to more complex questions involving matters of being together, belonging to groups, having special bonds, and feeling closeness to others (i.e., connections, belonging, attachment, and coupling). In short, there are multiple reasons why physical closeness is fundamental to us as humans—and maybe also why physical distancing feels so hard for many people to practice, for a number of socially rooted reasons. On the other hand, there are also many people who have always experienced life at a physical distance, and those who have no option to socially distance (e.g., due to incarceration). Here, the Covid-19 pandemic might serve as an eye opener to show what this means on a personal and social level.

Still—and amid the ongoing coronavirus pandemic we do not really have any alternatives—many of us need to practice physical distancing. So can we say something more specific about why videoconferencing cannot fully compensate for face-to-face interactions? Do we have any established theories that can help us understand the difference between these two forms of interaction? And more fundamentally, what happens when we remove face-to-face interaction as an available mode of interaction—when we need to abandon face-to-face interaction and move online? In this article, I will reflect on these questions and also pinpoint a few things that we as an HCI research community might need to reflect on as we move forward.

HCI on the role of face-to-face interactions

In my own research over the past 20 years, I have been interested in various aspects of “the local.” When I did my Ph.D. in the late 1990s, it was concerned with collocated groups of people and how to support mobile and collocated users with digital meeting technologies [3]. After that, I continued to focus on the role of physical places. I have looked at architecture, proxemics, and material interactions—all aspects of being together, with each other, and in close relation to materials, things, and places that represent a large part of our everyday lives—at least until recently. 

“The local” has also served as a generative concept for lots of work in HCI. In fact, face-to-face interaction, or technology support for “same time, same place” interactions, was one of the four basic modes of interaction proposed by Ellis et al. back in 1991 [4] (Figure 1). 

Figure 1. Interaction time/location matrix by Ellis et al. [4].

In HCI, face-to-face interaction has been assumed to always be available as a mode of interaction. Still, most technologies have been designed to bridge distances (e.g., email, the telephone, and the Internet). But what happens when we no longer have face-to-face interaction as an available form of interaction? Just a few months ago, that would have been a far-fetched and hypothetical question, but all of a sudden this pandemic has drastically removed this mode of interaction for many of us. From being the most natural and taken-for-granted form of interaction, we suddenly cannot get together physically and find ourselves having to follow the WHO’s advice and stay in touch through the use of digital technologies.

In recent years, HCI has been even more clear about the importance of being together and the critical questions that need to be addressed, as we are increasingly acknowledging this as a fundamental part of being human. Our field has also explored issues of being separated, isolated, and apart, and to what extent we can design technologies that counteract loneliness and support togetherness. If being together is fundamental to us as humans, then we also need to examine questions concerning how we are coming together. We see this in recent HCI research on the role of our bodies, on gender, and on inequalities. Being together is a very complex matter, and these questions also illustrate the richness and complexity of our social relations. Now, if we cannot be physically together, then it is not merely a matter of lacking communication tools—it’s a fundamental dimension of our societies, from the small-scale context of individual relations to large-scale matters of humans coming together to form groups, communities, and cities.

So what does this imply as we are trying to move just about everything online? Could it be that physical distancing also prevents us from being together? Probably so. In the next section, I turn to media richness theory (MRT) to attempt to shed some light on why videoconferencing probably cannot fully compensate for face-to-face interactions.

Physical distancing also implies social distancing!

So why is it the case that people still refer to social distancing when it’s actually about physical distancing? And why do we have this boom in videoconferencing? Why haven’t emails, messages, and phone calls satisfied us? 

Well, if we turn to media richness theory we might see a pattern. In short, media richness theory (MRT) states that all communication media vary in their ability to enable users to communicate, which, in turn, depends on a medium’s richness. Further, MRT places all communication media on a continuous scale based on their ability to support communication, from simple information exchange to more complex forms of communication (e.g., negotiations, body language, emotions) [5]. For example, a simple message could be communicated in a short email, whereas a more complex message would be better supported via face-to-face interaction. 

If MRT is correct, it makes sense that people feel socially distant from each other even though they might still stay in contact via the use of videoconferencing systems. No matter which technology we use to mediate our interactions, it cannot compensate for the richest form of interaction: face-to-face interaction. Further, MRT might explain the current videoconferencing boom, and why we have shifted not only from face-to-face interactions to online meetings, but also toward an increased use of videoconferencing technologies—the second-best alternative to face-to-face meetings. 

While we try to use video to compensate for the lack of face-to-face interactions, we are probably also experiencing the difference between these two modes of interaction. It is hard to feel close to others over video, as it’s harder to communicate body language, gestures, and emotions. Further, video confines our everyday interactions to particular sessions, and with that comes the risk of breaking the continuous flow of informal, spontaneous, and everyday encounters—the glue that keeps us together. When we practice physical distancing, we can no longer just bump into each other or maintain a shared common ground as we spontaneously meet. Instead, we need to actively seek and establish interactions—typically through planning and invitations—for a meeting, or just to hang out for a while. Physical distancing means that interactions more than ever demand an active decision to seek contact with others. Many of us might struggle to maintain the everyday connections we have from seeing each other in workplaces and around the neighborhood, or from just seeing familiar strangers at the bus stop.

In short, physical distancing also implies social distancing, even if we try to compensate for some parts of it through the use of digital technologies. 

So what can be done?

The classic saying is “you can run, but you can’t hide.” At the current moment, it is actually the other way around. We can practice physical distancing, we can go into self-quarantine, and some countries are even practicing complete lockdowns. That is, we are trying to keep ourselves separated, to hide from the virus, and to make it harder for the virus to spread. But we cannot run. In some countries we cannot even take a walk in the park, and in most countries travel restrictions have been implemented in an attempt to slow the spread of the virus. In short, we need to stay put and hide—maybe for a long time.

But there are things we can do. In fact, people are already doing a lot—from their homes and over the Internet. We see lots of creative examples online of how people try to do meaningful, and even funny, things while staying at home.

In relation to the move from face-to-face to online interactions, there are also a number of additional things we can do. If we cannot meet face-to-face at the current moment, we can increase the frequency of our interactions. Interactions depend not only on the richness of the medium, but also on how well we know each other. If we increase the frequency of our interactions, we can still share our everyday experiences with each other. While this might at first be seen as a strictly Covid-19-related recommendation, it is also crucial as we move beyond the pandemic. For people who are old, sick, or disabled, or for those who have friends and relatives who live far away, an increase in the frequency of interactions can mean the world.

There are also things technology designers can do. We can improve the technology. We can design better systems with better functions, and better video and audio quality. We can also make these tools lightweight, to make it easier to connect and have spontaneous interactions. In fact, HCI has a whole strand of research on how to design for lightweight interactions.

And we can explore modes of interaction beyond being there. Beyond any approach that mimics or compensates for face-to-face interactions, we can follow the suggestions made by Jim Hollan and Scott Stornetta [6] and explore the things we can do online that might actually be harder to do in a face-to-face setting. In short, we should stop treating face-to-face interaction as the model for online interaction, and instead explore interactions “beyond being there.” A good example is documented in Barry Brown’s article in this issue, where he describes how parts of this year’s CHI conference were held online, how it was arranged across different media channels, and how it was fundamentally organized as an online event rather than a poor copy of a face-to-face conference. In fact, it was a good example of how we can reimagine a conference, a socially well-established practice, amid the pandemic we’re all facing.

And what can HCI learn from this?

So what can we learn from this? Well, probably that we cannot take anything for granted—not even face-to-face as a mode of interaction. Over the past 30 years, we have relied on face-to-face meetings both in our professions and as a model for the development of other modes of interaction. This might have prevented us from really exploring alternatives, and it put us in a vulnerable position when we suddenly had to move everything online.

Another lesson we can learn is that we should probably not use one mode of interaction as a model for another. This is an implication for designers as well as for HCI researchers. Instead of developing technologies that at best mimic face-to-face interactions, technologies should provide functionality “beyond being there.” And for HCI researchers, this means that it is less interesting to compare different modes of interaction and more interesting to explore what the whole palette of modes means for us in terms of being together. This is also an important path for HCI to take after the pandemic—to learn more about what we can do for people who will continue to struggle with physical and social distance even when the recommendations for physical distancing are removed.

Further, while we have many methods for carefully introducing new technologies in social settings, we also need more knowledge about rapid technology deployment (as the current situation has so drastically and brutally shown), and about how such deployment changes practices and our everyday lives.

Finally, maybe the most important takeaway from this reflection is that interactions matter. Amid the ravaging pandemic we do not only need food and a place to hide away—we also need each other. Physical distancing might be what many of us need to practice at the current moment, but the fact that people refer to this as social distancing stresses that we cannot truly live if we lack ways of being together. As we move online, we need to make sure that we establish new social practices that not only compensate for what we are missing, but also add new forms of connectedness. As formulated by the WHO, “We’re in this together.” That should also be the case as we’re moving just about everything online.



2. Harris, M., Adhanom Ghebreyesus, T., Ryan, M.J., Van Kerkhove, M.D., et al. COVID-19. World Health Organization. Mar. 25, 2020;

3. Wiberg, M. In between mobile meetings – Exploring seamless ongoing interaction support for mobile CSCW, PhD thesis, Umeå University, Sweden, 2001.

4. Ellis, C.A., Gibbs, S.J., and Rein, G.L. Groupware: Some issues and experiences. Communications of the ACM 34, 1 (1991), 39–58.

5. Daft, R.L. and Lengel, R.H. Information richness: a new approach to managerial behavior and organizational design. Research in Organizational Behavior 6 (1984), 191–233.

6. Hollan, J. and Stornetta, S. Beyond being there. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 1992, 119–125;

Posted in: Covid-19 on Mon, May 18, 2020 - 10:55:04

Mikael Wiberg

Mikael Wiberg is a full professor in informatics at Umeå University, Sweden. Wiberg's main work is in the areas of interactivity, mobility, materiality, and architecture. He is a co-editor-in-chief of ACM Interactions, and his most recently published book is The Materiality of Interaction: Notes on the Materials of Interaction Design (MIT Press, 2018).

This is not the new normal: Studying during a pandemic

Authors: Ana-Catalina Sabie, Katharina Brunnmayr, Kristina Weinberger, Renée Sophie Singer, Rafael Vrecar, Katta Spiel
Posted: Wed, May 13, 2020 - 2:31:59

Illustration of an overwhelmed student sitting in front of laptop, her face in her hands. A broken piggy bank, a large stack of papers and a long TODO-list are scattered around the laptop on the desk. In the background above the student, semi-abstract icons symbolize data influx related to COVID-19, financial problems, loss of casual interaction, cancellation of lectures, connectivity problems, loss of productivity, and subsequent demands through emails.
Illustration by Renée Sophie Singer.

Austria began its lockdown on March 10, 2020. Just three days later, the government issued stay-at-home orders, quarantined entire villages, decreed that non-emergency medical procedures be moved to a later date, and closed all but essential shops, as well as hospitals, retirement homes, schools, and day care facilities. As university classes had started only the week before, lecturers and students had just one session before everyone’s life was turned upside down. I teach Critical Theory of Media and Informatics and noticed that the students were struggling. Well-meaning advice for considering their situation was available online, but I had seen very few accounts from students themselves. So I asked.

Even though two weeks in the middle of April were planned to be lecture-free, the students asked to meet at our regular time. Not all of them came, but those who did appreciated just sitting together, if apart, and discussing whatever was on their mind. I suggested they might want to write down their experiences and share them to create community with other students and allow lecturers to better understand students’ perspectives. The following paragraphs offer an account of their experiences. They want you to know that as Austrian master’s students, they are aware of the many privileges they have: healthcare is generally available to most, if not all; Austria—despite being the source of several Covid-19 clusters in Europe—is itself not very affected; and study costs are generally low, though about 60 percent of students work alongside their studies [1]. Additionally, their perspective is limited to a computer science curriculum, including some design and humanities-related courses. Hence, they cannot report on issues like lab availability or the lack of technical aptitude that some students might experience. Nonetheless, they are finding this time difficult to navigate. Given their accounts, we invite readers to consider and potentially extrapolate how their own students might feel. Because the students are not alright… and we should not expect them to be.

— Katta Spiel

Material constraints 

We are five students in a master’s degree program in media and human-centered computing at TU Wien in Austria. As students of computer science, many of us came to remote learning from a very privileged position. All of the authors are used to working on their laptops, and students as well as lecturers have above average technology skills and could be expected to move their courses online in a timely manner. Despite this advantageous starting position, we still encounter infrastructure-related problems regularly. 

Student housing often cannot offer a stable Internet connection or a quiet and undisturbed working space to all occupants. Previously, this issue was mitigated due to the availability of learning spaces and Internet access at currently inaccessible university buildings. In addition to this lack of space, transferring all courses from attended lectures to remote teaching has not always been a smooth process. Lesson plans dependent on personal interaction or hardware availability had to be changed, scheduling restrictions regarding collisions were thrown to the wind, and everyone had to work with a fundamental lack of information. With the abrupt shutdown of universities, there was a lot of confusion around which courses would take place at all, and how as well as where students could get the necessary information. Our university was not prepared to shift to remote learning on a large scale, so while the necessary infrastructure to stream lectures was technically available, other courses had to individually identify ways of meeting up online.

One of the main effects of this lack of preparation is that students have to navigate a litany of different services across our courses. In Table 1, we list 16 different tools that we need to use for communication or collaboration in our learning, including at least one VoIP (voice over IP) service per course. All of these come with their own account requirements and notification ecology, as well as a frequent need for updates, little to no opportunity for personalization, and vastly different workflows. Next to the lack of choice or consent our educators gave us in the matter, privacy and data protection are questionable in some of these tools. 

As an example: Critical Theory of Informatics decided to use Mumble, which is an open source tool hosted on external servers that requires the creation of an account. However, other lectures use the university-wide Zoom installation, leaving students with the choice of either installing and using the tool (accepting potential breaches of privacy) or dropping the lecture. In some cases, lecturers are free to look for alternatives together to find a tool that is suitable for everyone in the course. Finding a solution that fits all requirements is virtually impossible due to limited material and cognitive resources. 

Platform/Technology | Modality          | Availability                     | Hosting
Slack               | Text, audio chat* | Limited free plan, closed source | External
Email               | Text              |                                  | Internal
Zoom                | Audio/video chat* | Limited free plan, closed source | External
GoToMeeting         | Audio/video chat* | Limited free plan, closed source | External
Jitsi               | Audio/video chat* | Open source                      | External^
Mumble              | Audio chat        | Open source                      | External^
Skype               | Audio/video chat* | Limited free plan, closed source | External
Etherpad§           | Text              | Open source                      | External
YouTube             | Video and stream  | Advertising, closed source       | External
LectureTube         | Video and stream  | In-house development             | Internal
Moodle              | Learning platform | Open source                      | Internal
SAP                 | Text              | B2B, closed source               | Internal
Xodo                | PDF annotation    | Free for teaching, closed source | External
GoogleDocs          | Text editing      | Closed source                    | External
GitHub              | Programming       | Open source                      | External^
GitLab              | Programming       | Open source                      | Internal
Table 1 (* participant or time limit in free plan, ^ could be self-hosted but is not, § two different services).

In addition to setting up a variety of different tools, we also need to keep up with information provided on various university websites, via third-party services, over email, or on the lecturers’ personal homepages. Of the courses that have sent out any updates, some have been canceled altogether while others have adapted their mode to provide as good a learning environment as possible; still others switched to purely independent study courses after providing all the necessary reading materials. However, most continued in some way, though often lacking any specific support for coping with the situation.

We have found that listening to a lecture or participating in discussion via video chat is entirely doable, despite the seemingly unavoidable technical problems. However, we have found that remote discussions require considerably more energy (due to the monomodality and issues like noise or lag) than their in-person equivalent, while leading to more shallow and unsatisfying conversations. 

Mourning casual sociality

One of the biggest impacts on our learning has been the loss of personal contact with lecturers and peers. Conference calls and lecture streams can be a substitute for lectures and in-class discussion, but they fail to support the informal social interaction that is not only part of each course but also fundamental to peer-driven learning. We miss the casual discussions through which we can clarify open questions and exchange our thoughts on a given topic. It also becomes more difficult to identify and coordinate suitable groups for assignments, which in turn leads to difficulties in submitting on time.

This lack of casual interaction removes a layer of support between students and lecturers. Making new contacts and talking to other students provides reassurance, additional explanations, new ideas, and valuable future connections. While it is technically possible to do this in video calls, current technology comes with severe limitations for casual group conversations, as subtle social cues and body movements have to be largely inferred and there is no spatial separation between private spaces and learning spaces. Additionally, most of us are already fatigued from having several multi-hour calls to attend each week. 

Too much to do 

As mentioned in the introduction, most students finance their studies by working at least part time. Some of us are in essential jobs and have to cover more shifts to protect our more vulnerable coworkers, while others have lost their jobs, leading to financial distress. In addition, while by and large we are not in a risk group ourselves, many of those close to us are. In this situation, lecturers replacing their lectures with reading assignments poses an additional problem. As computer science students, we do not have the experience and skill to work effectively through papers on our own with little to no guidance. Instead of attending a 90-minute lecture, we have to spend hours reading several papers each week while missing out on the perspective and interpretation of our lecturers. We are still able to learn by just reading the material, but the presentation, interpretation, and framing matter to our understanding. With changing lecture times and with information on assignments dispersed and irregular, it’s easy to miss a class session or a task. In regular in-person meetings, by contrast, lecturers make sure to answer any questions we might have and explain tasks in more detail, instead of silently adding them somewhere while expecting us to adhere to strict deadlines.

There seems to be the general assumption that we have tons of free time available because the university is closed. However, we are struggling to keep up with our jobs and potential loss of work, reorienting our entire learning routines, and, well, dealing with the baseline anxiety of experiencing a pandemic. 

Teaching versus learning

Dealing calmly with the current situation is certainly important. However, it would be reassuring for us if lecturers admitted that they are as overwhelmed as we are. While some acknowledge that this difficult situation leads to inefficiency, there is also a lack of guidance on their part. In addition, tasks like maintaining forums, communicating changes, and looking at course participants’ submissions are often handled by student teaching assistants, who have to negotiate their peers’ needs with their employers’ requests and expectations. 

We see a fundamental problem in the differing views on what the university understands as teaching and what students actually need for our learning. It seems to us that there is a widespread misconception that uploaded videos are an appropriate substitute for in-person lessons in a classroom. However, they are only marginally sufficient for our learning experience. We are aware that concessions are currently being made, and we do not expect our teachers to go above and beyond in these difficult times. But we need to not hear that this is some kind of new normal, or about how this could be an opportunity for creating new digital teaching curricula. While this is all we have right now, we value what in-person lessons bring to us. For example, a student might have questions during a lecture that could lead to a more in-depth discussion on a certain topic. With the current situation, students might not even have the option of posing questions; if they do, the online environments make their questions far more permanent. Learning is a process that includes vulnerability, as learning also means learning through mistakes and reflecting on misconceptions. Posting questions in a forum or text context means we have to expose our vulnerability more fundamentally, and we might still receive no answer, or only short, unsatisfactory ones, because the labor of going in depth might not feel warranted.

Our intent here is not to complain and add to the pile of already overwhelmed educators. We reach out a hand in solidarity, one that asks you to check in with your students instead of focusing on the formal content. We are struggling as well. This is not normal, so let’s not pretend that it could be. 

— Catalina Sabie, Katharina Brunnmayr, Kristina Weinberger, Renée Sophie Singer, and Rafael Vrecar


When I initially suggested writing this piece, I thought it was something to do, a somewhat productive way to engage students critically with the current situation; during the writing process, I realized how cathartic it was for them. What struck me even more though is that upon hearing their stories (some of which have not even made it in here), I was humbled and ashamed, but also in awe. Humbled because even though I thought I was aware of their perspective and tried accommodating them, I had not understood their experience nearly enough. Ashamed because I realized that in moving my lectures and seminars online, I only ever thought about my classes individually, never in the context of a curriculum. In awe because of their honesty and bravery to share their stories with us. I hope you can have similar conversations with your students about their experiences during this pandemic, though it should not require a shared experience to attend to those who struggle. And maybe we can take that with us once we return to in-person classes again. 

— Katta Spiel


1. Hauschildt, K., Vögtle, E.M., and Gwosć, C. Social and Economic Conditions of Student Life in Europe: EUROSTUDENT VI 2016-2018: Synopsis of Indicators. wbv, 2018.

Posted in: Covid-19 on Wed, May 13, 2020 - 2:31:59

Ana-Catalina Sabie

Ana-Catalina Sabie is a Romanian-born computer science student at TU Wien who is passionate about photography and editing.

Katharina Brunnmayr

Katharina Brunnmayr is a student at TU Wien. She is interested in human-centered computing, gameful design, and human-computer interaction.

Kristina Weinberger

Kristina Weinberger is currently pursuing a master's degree in media and human-centered computing in Vienna. She is interested in the intersection of gender and inclusive software design.

Renée Sophie Singer

Renée Sophie Singer is a user researcher, UX designer, web developer, and non-professional photographer, graphic designer, and illustrator. She is currently studying in the master’s program in media and human-centered computing at TU Wien.

Rafael Vrecar

Rafael Vrecar is a student and tutor at TU Wien.

Katta Spiel

Katta Spiel is a postdoctoral researcher in the HCI Group at TU Wien. They currently research exceptional norms by focusing on marginalized bodies in interaction design. Their research combines critical theories with transformative designs focusing largely on aspects of gender and disability.


Notes on running an online academic conference, or how we got zoombombed and lived to tell the tale

Authors: Barry Brown
Posted: Tue, May 12, 2020 - 10:11:09

Due to the coronavirus, the biggest conference in our research field (the ACM CHI Conference on Human Factors in Computing Systems) was canceled in April, just a few weeks before it was scheduled to be held in Hawaii. As the Nordics have a strong research community in this area, it was logical to put together an online event for local paper authors. We asked all Nordic authors with accepted CHI 2020 papers if they would like to present; the authors of 51 of the 69 papers accepted our invitation. To make a manageable event, we needed to set up parallel tracks, giving authors sufficient time to present their work but also enough time for questions and discussion. Online events are much more exhausting to attend than face-to-face meetings, so we made a number of decisions to create an event where participants could practically maintain their attention and energy:

  • We ran with 10-minute talks and 10-minute discussion sections. This gave us enough time to deal with technical problems and kept content to a single bite, but also allowed authors to get feedback and engage in discussion. Having 10 minutes for questions turned out to be important because it appears to take an online audience a little longer to digest the talk and formulate questions. Moreover, given the constraints on informal interaction at an online conference like this, we wanted to emphasize and foster interaction among speakers and participants. Having good session chairs was important, as they could start the discussion by asking the first two questions before the audience jumped in. There are other ways of doing this that might be better, but we felt our model kept things closest to the feel of a “real” conference. 

  • We limited each session to one hour, with at least 15 minutes break between sessions.

  • The event ran “from lunch to lunch” over two days. This maximized participants’ energy levels, and allowed us to run an afterwork event. It also improved the odds that community members from other continents across the world could have a chance to join at least part of the event at a sensible time in their time zones. We added a closing keynote, which also worked well to close the event by bringing everyone together into the same meeting space at the end of the conference. 

What technology to use

If you are going to run a virtual conference, deciding on what applications to use isn’t easy—clearly tools for virtual meetings are evolving quickly and users will have differing preferences. We chose to use Zoom because we had seen it work well before in a few earlier events and because it supports screen sharing and audience participation. While technically the platform worked for us, clearly it is a tool lacking in user-centered design. In particular, the preferences are a minefield, laid with hundreds of poorly worded options across a collection of confusing webpages and dialogue boxes. Moreover, some of these options you really need to get right—or things can go wrong quickly. Feature-wise Zoom is excellent, but if you use it to run an event, you will come to passionately hate its interface—it’s full of idiocies. For example, whenever a presenter shares their screen, by default everyone’s Zoom client also goes to full screen—even if you are just watching the presentation in the background. What other app do you know of that goes full screen without asking you first? But, for better or worse, Zoom is a communication app that most people have installed, and for a public event where you want to encourage participation, choosing another less-familiar system would come at some cost. Zoom also has a “webinar” mode, but we really wanted to have the audience share their video to support participation and interaction as much as possible, so we ran the event as a meeting.

Zoom as a company has also made many deeply dubious decisions, so much so that a lot of people refuse to install the app. To provide a second way to access the conference, we also streamed the event to YouTube Live. Again, we chose this because most people are familiar with YouTube. The Zoom integration with YouTube mostly works, except it would occasionally drop the connection and generate a new live feed with a new URL, so we had to frantically change the links on our website to the new URLs. YouTube users could also not ask questions or participate in the meeting beyond spectating. From the audience side of things, though, this Zoom+YouTube combination seemed to work well; the audience could participate and ask questions, and if they had problems with Zoom—or were just shy—they could watch on YouTube while remaining anonymous. Some relied on the combination to enable smoother session-hopping. Needless to say, you need some sort of backchannel for real-time communication among organizers as you run an event like this (e.g., Slack). This helps maintain awareness across tracks, troubleshoot quickly if problems arise, and foster a sense of being in it together.

How we set up Zoom and YouTube

We had three parallel tracks, and we had to manage these three tracks while homebound and not actually meeting one another. So, we had three of the organizers each use a dedicated machine to run each of the three tracks. On each machine we created a public Zoom meeting, with no password and no waiting room, that allowed access to those who had registered accounts with Zoom—we wanted to make the event as open as possible. And while this was probably a mistake, it did mean that users didn’t have to register in advance; they could just click the link. As the event was free of charge for participants, we wanted to take the opportunity to welcome everyone—something not possible at our traditional face-to-face conferences. Adding a password wouldn’t have improved security, since we would have had to share the password openly too. As the chair, I also had another machine on the side so that I could move between tracks to monitor how things were going and monitor the Slack channel for organizers. We created new Zoom meetings for each track, keeping them running for the entire day. We didn’t use our own personal meeting room IDs; this meant that each day had a different meeting URL, which caused some problems with participants trying to visit old links. To lock down Zoom, we also made it so participants were muted upon entry to the room. We also restricted screen sharing to just the meeting hosts, and made all the speakers meeting hosts before each session. This was slightly clumsy, because it meant that speakers had to arrive before the session started (which sometimes didn’t happen) and that someone behind the scenes had to watch for when they arrived and quickly make them into co-hosts. After our troubles with zoombombing (see below), we also changed the mute setting so that only the hosts could unmute. We then monitored the event and unmuted people ourselves when they were asking questions. 
We used the chat feature to let people indicate that they wanted to ask a question, switching chat settings on and off so as to make sure there wouldn’t be any disturbance during the talks. We forgot to turn off the whiteboard and screen-annotation features, which caused us all sorts of problems, so make sure you turn them off.
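For anyone setting tracks up programmatically, the lockdown described above can also be applied when creating meetings through Zoom's REST API. Below is a minimal sketch in Python; the endpoint and field names are taken from Zoom's API v2 documentation as I understand it and may have changed, and the topic and token handling are purely illustrative:

```python
import json

# Illustrative meeting-creation payload mirroring our setup: everyone
# joins muted, the room stays open across sessions, and (initially)
# no waiting room. Host-only screen sharing is a separate in-meeting
# and account-level setting.
meeting_request = {
    "topic": "Nordic CHI Online - Track 1",  # hypothetical track name
    "type": 2,  # a scheduled meeting
    "settings": {
        "mute_upon_entry": True,    # participants are muted on entry
        "join_before_host": True,   # room stays usable all day
        "waiting_room": False,      # we only enabled this after the attack
    },
}

# With an OAuth bearer token, this payload would be POSTed to
# https://api.zoom.us/v2/users/me/meetings
print(json.dumps(meeting_request, indent=2))
```

The same settings can, of course, be toggled by hand in the web preferences; the point of scripting them is that three parallel tracks get identical, reviewable configurations.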

For each track, the meeting was streamed onto YouTube. YouTube asks for an individual account for each live stream, and also has a waiting period between creating a new account and being able to live stream. To deal with this, in the end we just used our personal YouTube accounts for each live stream. Zoom also seemed to randomly drop the connection to YouTube, which caused the stream to move to a new URL every now and then, leading to the problem of “URL roulette.” 

URL roulette

Before the event, I diligently created redirection URLs, so that users could just go to one short, stable URL and be connected to the relevant Zoom or YouTube feed. Unfortunately, due to the problems with Zoom and YouTube the valid URLs changed quite a bit throughout the day. And with Web redirects you can never be sure if someone will load a cached copy and be sent to an old, invalid URL. We discovered this problem just a few hours before the event started, leading us to frantically change the website and program with the actual links rather than the redirects [1]. The main program for the event was held in a Google doc. This worked great because we could change it and it updated instantly, and as it is a Web app it doesn’t get cached. Unfortunately, in an attempt to be helpful I had also put the URLs onto the website. This was on Google Sites, and while it was relatively easy to update, caches meant that we still got a stream of people complaining they were being sent to the wrong URL.
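Had we run the redirect service ourselves, the fix would have been a temporary (302) redirect with explicit no-cache headers, so the destination could be swapped mid-event without clients reusing a stale target. A minimal sketch in Python's standard library follows; the paths and meeting URLs are made up for illustration:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Mutable mapping: edit these whenever Zoom or YouTube hands out a new URL.
TARGETS = {
    "/track1": "https://zoom.us/j/0000000001",  # placeholder meeting links
    "/track2": "https://zoom.us/j/0000000002",
}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = TARGETS.get(self.path)
        if target is None:
            self.send_error(404)
            return
        # 302 ("Found", i.e., temporary), not 301 ("Moved Permanently"):
        # clients may cache a 301 indefinitely, which is exactly the
        # stale-link problem we hit. Belt and braces: forbid caching too.
        self.send_response(302)
        self.send_header("Location", target)
        self.send_header("Cache-Control", "no-store")
        self.end_headers()
```

Serving this with `HTTPServer(("", 8080), RedirectHandler).serve_forever()` means that updating `TARGETS` repoints every shared link at once, instead of chasing down each copy of a dead URL.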


After the introduction talk, we started the main proceedings in each of our three tracks. It was at this point that our third track was zoombombed. The URL for this track was posted to some sort of forum and we were flooded by kids trying to disrupt the event. They played music from “Tiger King,” which was kind of funny; showed pornography on their camera—much less funny; and drew swastikas on our first speaker’s presentation—not funny at all. Warning: if this is the content you expect at an academic conference, you are going to the wrong sort of academic conferences.

While we probably should have locked down our meeting more beforehand, we also wanted to keep the event open to as many people as possible. We could have set up pre-registration using something like Eventbrite, but sometimes people discover an event on the day it starts and want to drop in. We could have tried to restrict attendance to those from universities, but we also wanted to run an inclusive event. What we did in the end was ask each participant to put in their full name, and kick off those with fake names (the kid who called themself Oswald Mosley at least had some historical knowledge). Any noise or weird videos also got a user kicked off, and we had (thankfully) selected that a user could not rejoin the meeting if they were kicked out. When we were getting attacked, we also enabled the waiting room, and messaged users who were waiting individually to get them to enter a full name before we would let them into the meeting. While this might not be the best way to lock down your meeting (and certainly took a lot of work, with three extra people working behind the scenes when the attack was going on, in addition to the session chair and the organizer hosting the meeting on their machine), it was the best we could come up with quickly. 

Zoom has a particularly stupid feature that allows anyone in a meeting to annotate the screen share—so our zoombombers started drawing onto presenters’ slides as they were presenting. There is an option to turn this off somewhere on the huge list of Zoom preferences, but if you don’t turn it off in advance, each presenter needs to go in and turn it off individually. Therefore we had to ask each of our presenters to do this before they shared their screen, until we could restart the meeting for the next day. This is one of the many points where it is obvious that Zoom is not really set up for the sort of large event we were running—the annotate feature is so basic it can really only be used for vandalizing others’ slides (note to Zoom: a shared notepad feature would actually be useful; drawing on slides, not so much). Congratulations to Zoom for building a feature that’s really only useful for its zoombombing users!

In the end, thanks to the amazing teamwork and quick thinking of my co-organizers and others who stepped up to help, the track was delayed by only 20 minutes, and all presenters managed to give their talks and get questions. Congratulations to the amazing team! Zoombombing is pretty horrible, though, particularly for the speakers and chairs who had to deal with it while running the event. Presenting your work is a tough enough experience as it is, so I can only imagine how terrible it is to have offensive content thrown at you while trying to do it. It is part of the long, depressing history of online violence against women and marginalized minority groups. That those involved did such a good job of reconfiguring the event to deal with it does not take away from the bigger problem.

How to run online talks

Presenting online is challenging, and watching online talks can be, frankly, a little boring. Academic paper presentations are themselves also (surprise!) hit or miss. For this reason, we originally intended to get as many speakers as possible to prerecord their talks, thinking this would get us more polished presentations. Our original plan was to get copies (or links) of videos from our presenters, and then release these to audience members, who could then stream them locally. Thinking this through, though, we couldn’t make sure that everyone would be watching the presentations at the same time, not to mention the challenge of getting everything to work on each attendee’s own setup. Attendees might also want to watch the event in the background, so asking them to click on links for each talk could disrupt this. We decided instead to ask presenters to play the videos over Zoom, using the shared-screen feature. When we tested this, it worked rather poorly, particularly for videos within presentations. Interestingly, when we actually did this at the conference, the videos worked much more smoothly, suggesting either that something had gone wrong with our testing or that Zoom increases the bandwidth for large events.

Streaming the videos over Zoom, though, makes it impossible to also play the video locally, since there is no way to turn the audio off on just one app—the audio would come from Zoom, and from your video player too. So in the end we just had presenters stream their presentations through Zoom, or present live, and didn’t bother collecting anyone’s videos. This worked well, with some quirks. Presentations are better if you can see the presenter alongside the slides. Prerecording a talk with a tool that captures your webcam does a good job of recording your face and displaying it alongside the slides. But if you then play this through Zoom, Zoom also displays a live video of the presenter waiting to answer questions. So you see the presenter twice—once recorded, once live (as they are watching their own talk, something many speakers commented on not enjoying particularly much!). Recorded presentations, although they can potentially be much more professionally done, are also a bit flat compared to live presentations. There is something energizing about having a presenter actually doing the presentation live, with the audience present, even if the audience can’t technically interact much with the presenter. If I were to run the event again, I think I would ask presenters to just present live, as it gave the event a much better feel.

Audience participation

We really tried to encourage audience members to stream their video. In Zoom this means that you can get a “gallery” view of those attending. There are lots of reasons for this—it makes the event more sociable, encourages audience members to pay more attention (because you’re being watched yourself), and gives you a chance to see who else is attending what sessions (and a chance to see friends from “across the room”). We kept this optional, since there are lots of good reasons why people cannot or might not want to keep their video on. But the value of having audience members with their cameras on was also a little limited by Zoom’s UI—at maximum you can see only 25 other audience members, and when someone is presenting you can see only five audience members. This takes away from the chance to get audience reactions as you are presenting.

We are used to having a round of applause at the end of each talk (and usually after the questions too). We had directed our chairs to be as explicit as possible and ask for the applause at the end of each talk. If the presenter is still presenting their last slide, taking up the screen, most audience members don’t see each other clapping; and because the microphones are muted you sometimes only hear the single applause of the chair—or no clapping at all. Even when you can see some of the audience clapping, hearing almost nothing makes for a slightly discouraging experience. Not having it work well makes you realize that applause is important. Having muted microphones also means that you don’t hear the audience laughing at jokes, heckles, or the like, which again undermines the experience somewhat (living in Glasgow for seven years, I learned what a good heckle is).


Since this was a Nordic event, we wanted to value and encourage participation and discussion. Actually, without discussion what point is there in having an event at all? You could just put a link to prerecorded videos on a website—clearly the sort of dead, lifeless land occupied by webinars. To avoid that feel, we increased the discussion time for each paper to 10 minutes, shortening the paper presentation time from 15 to 10 minutes. This was partially because we thought we might lose some time to technical problems and delays (of which we actually had very few in the end), but also to make for a more interesting event than just sitting down and reading the papers.

It is not that paper discussions at conferences are to be held up as the gold standard. Questions are often terrible, and many academics go all “word salad” in their answers. Some great, high-impact papers get no questions when presented because they are just too far ahead of where the audience is thinking. Other papers get acclaim and active discussion because they spark interest among a passionate but confused section of the audience. But questions clearly do give conferences the live feel—there is always the chance of dispute and argument, new ideas emerging through interaction, or authors being called to account for their work.

Doing this online is obviously not going to be as smooth as when everyone is in the same physical room (at least not while we are all still new to online events like this), but it’s crucially important that you get some sort of interaction going—at least a bit of discussion. At our event we made it a rule that to ask a question you had to ask it in the text chat (like standing at the mic), and that the chair would then ask you to unmute and ask your question (or we would unmute the person asking the question and remute them at the end). Originally we had asked people to type out their full question, and although I had hoped this might let the chair select the best questions, in the end many people just typed “I have a question.” We had also been warned that getting questions in an online event can be difficult, so we had prepared the chairs to expect to ask one or two questions themselves. In all cases, papers got questions from the audience, but for some it took three or four minutes for the questions to get going. In one or two cases, the chair asked a question and was about to give up, but then four or five questions came in from the audience and a good discussion ensued. Conversation even got heated at times—one questioner got overexcited and muted the session chair so they could ask their question! So while the medium certainly has some limitations, I think we managed to run a participatory event where the audience was actually involved in what was going on. Key lessons here, though, are 1) be patient when waiting for audience questions or interaction, 2) do not worry about silence, and 3) expect session chairs to ask questions, rather than seeing them as the question asker of last resort.

Session chairing

Clearly session chairs were pretty essential to our event. One helpful piece of advice I had been given was the need for session chairs to be more explicit about everything that is going on in the event. Since not everyone sees everyone else (something the tools don’t help with), participants often have no clue what is going on and where we are in terms of the ceremonies. This means the chair has to be explicit and tell everyone when to clap, when to stop asking questions, when to ask a question, and so on. This can feel like the opposite of a well-run social event where you “don’t see the strings” and can make everything feel rather clumsy, but it is pretty essential for having the event work at all.

In fact, for our event, session chairs might have been better called discussants. The chair had to really know the work enough to throw in a few questions, but also be confident enough to allow the audience to think for a little bit (sometimes as long as a minute), sometimes providing a little filler conversation to give everyone time to catch up. It was also important that alongside the chair there was an organizer running the technical aspects of the meeting and helping coordinate the track across sessions, guiding participants regarding breaks and events in parallel tracks. In the case of our zoombombing, it became clear that it’s better to have two or three such people at the ready to handle and monitor different issues. Running the event is one thing; making the technology work so that the session chair and speakers can focus on their tasks is another. So you need to plan on having at least two people per track to make things run smoothly.


I remember being told at the first conference I went to that “it wasn’t important what talks you went to see but who you didn’t go and see the talks with.” Clearly, we can’t emulate that kind of experience online. Yet, we can at least create some “stuff to talk about” that people can refer to when they eventually do meet up after the event. We scheduled a one-hour afterwork event where we made use of Zoom’s breakout rooms to put people into smaller groups of seven or so people, who chatted for 10 minutes, then went back to the main room and talked a little bit about what had been said in the breakout rooms. To be honest, this was more a structured informal discussion than an “afterwork,” although we did manage to do some singing and dancing at the end (an impromptu performance of Abba, if you must ask). Hardly the full conference experience, but still a chance to catch up a little with others. I am sure there are much better ways of doing this and that we’ll get better at these interactions with more experience.

Why bother?

After reading all this, you might wonder whether running an online event is worth the effort. Instead of having an online conference you could just put videos up on a website, and even attempt to have some sort of offline discussion around the papers. I am sure that would be a good way of communicating research results, and it would be much less effort than holding a specific event at a specific time. There are two important things you would miss with that setup, however. First, having an event at a particular time and place gives you the “liveness” of a real event. While you can go back later, there are advantages to actually watching it live, such as taking part in the discussion. Just as most sports fans choose to watch events live, this liveness is something to be valued. Second, there are lots of effort/encouragement calculations involved in having a specific event—it motivates participation (“I can just go and watch this thing to get an overview”), and offers the sense of a shared experience with others. With all the content that is available online, having an event also makes it more likely that people will actually make the time to attend the talks and discussions. Clearly the tools we have right now aren’t really designed to support a good online shared experience, but we hope they will get better over time.


An event like this needs a lot of different people to be involved. In the end we needed five chairs running things behind the scenes: Marianela Ciolfi Felice and Kristina Höök came up with the original idea and shaped the event, then Donald McMillan and Airi Lampinen came on board to help with getting the technology into shape. On the day itself Mareike Glöss, Ville Sundberg, and Asreen Rostami stepped in to deal with the zoombombing. Sara Eriksson, Riyaj Shaikh, and Kasper Karlgren helped with testing the event format and setting up the website. The event relied upon the smooth chairing skills of Mikael Wiberg, Hans Gellersen, Eva Eriksson, Sarah Homewood, Aske Mottelson, Antti Salovaara, Simo Hosio, Juho Pääkkönen, Jessica Cauchard, Marie Louise Juul Sondergaard, Susanne Bodker, Kashyap Todi, Harko Verhagen, Alexandra Weilenmann, and Mikael B. Skov.


1. I later found that my mistake was setting the redirect up as “permanent” (HTTP 301) rather than “temporary” (HTTP 302)—browsers and caches may hold on to a permanent redirect indefinitely, while a temporary one should be re-fetched, supporting rapid changes.

Posted in: Covid-19 on Tue, May 12, 2020 - 10:11:09

Barry Brown

Barry Brown is a research professor at the University of Stockholm, where he helps to run the STIR research group. His two most recent books, published by Sage and MIT Press, focus on how to research the use of digital technology and on the study and design of leisure technologies. He previously worked as the research director of the Mobile Life research centre (2011–2017) and as an associate professor in the Department of Communication at UCSD (2007–2011).


Designs on solidarity

Authors: Daniela Rosner, Nicole Rosner
Posted: Fri, May 08, 2020 - 10:47:48

If you have come here to help me you are wasting your time, but if you have come because your liberation is bound up with mine, then let us work together. — Lilla Watson

Over the past weeks we find ourselves pausing before signing off on email messages. We used to include a habitual "How are you?" or "I hope this email finds you well." But these days such sentiments carry new weight and urgency. As academics with the privilege to find this moment and its precarity novel, we feel those words of care break from their former function as social conventions: They might be the only words that matter. We feel compelled to acknowledge what we do not know about the situation of the person we contact, to reach out with concern. Might they have fallen ill? Lost a loved one? Been fired from a job? Sometimes the thought occurs to us that they might not be there at all. Independent of the image, we seek words of strength. After coming across a Twitter thread started on the subject [1], I (Daniela) sometimes use the phrase “in solidarity.” 

Solidarity has a short but potent history in the fields of design and HCI. Some who use the term invoke its feminist and activist affiliations (e.g., [2]). Others speak to ideas of equity and allyship, a relationship forged across hierarchies of difference (e.g., [3]). Even given a pervasive sense of doom, solidarity suggests that we can work through it together. Rather than hide behind our own individual problems, we can reckon with ongoing social upheaval through expressions and acts of mutual support.

In our current moment, solidarity gains heightened currency as Covid attacks people along existing lines of inequality. As Ruha Benjamin stresses in a recent talk, “The virus is not simply a biological entity, but a biopolitical reality which travels along well-worn patterns of inequity...It may not set out [to] discriminate, but the structures in which it circulates certainly do” [4]. The same entrenched injustices that maintain institutionalized forms of racism such as incarceration, policing, and housing policy as well as disparities in income, education, and life expectancy, just to name a few, are exacerbated during the Covid-19 pandemic. Some of the highest rates of Covid-19 have emerged in prisons, where people—disproportionately people of color—are trapped in dehumanizing conditions that overwhelmingly conflict with nearly every health and safety guideline; for many, this amounts to a death sentence [5]. Across the world, poor, disabled, and racialized groups are more likely to suffer severe effects on their lives and livelihoods due to the virus. Yet such struggles are also increasingly hidden from sight as many people continue to shelter in place. Speaking of such violences as academic blindspots, Veena Das [6] surmises, “The only question is how we might learn to see what is happening before our eyes.” This pandemic has pushed us to “see” and understand what urban scholars have long critiqued: the tendency for spatial interventions into social life to perpetuate rather than ameliorate existing inequalities, creating new excuses for dispossession and forms of segregation. Today, those processes are increasingly taking place online, where designers, those of us who develop technology, imagine virtual spaces to solve evolving social problems. A sense of togetherness, a feeling that our lives are tied up in one another’s, may feel increasingly impossible. But this connectedness is also increasingly vital to our individual and collective survival. 

Bringing these concerns to tech design, we see that technology, such as apps developed to track Covid-19, perpetuates the very same inequalities [7]. Take the example of developing new tools for contact tracing. To scope the challenge and inform a design process, we might choose to run a remote study with as many people as we can find who might participate. The more people reached the better, we might think. Yet, as we know from prior work [8], prioritizing the most likely to be reachable (as in, with flexible schedules and reliable internet, email, and videoconferencing access) tends to benefit well-educated white people who have already long benefited from the healthcare system. Correspondingly, ignoring people less reachable or treating reachability as a universal good will tend to deny basic rights, such as rights to privacy, and erase the lived experience of continually disadvantaged groups such as communities of color, those living in poverty, and those with prior health conditions. Unless those researchers and designers take seriously the conditions that produce systemic inequalities, such as the danger of surveillance among particular populations, this early work will effectively contribute to reinforcing disparities. The same could be said of design projects in a wide range of areas, whether aimed at virtual learning or religious life.

With solidarity in mind, perhaps we have been thinking in the wrong direction. When it comes to Covid-19, maybe it matters less what we in HCI have to offer those affected [9]. Instead, maybe it’s how the virus is affecting what we should have been doing already. The inequalities we see and experience are often socio-spatial—segregation, confinement, lack of services or resources. Today, those socio-spatial inequalities that urban interventions continually reinscribe are playing out in the digital tools that we design. We need to avoid the pitfalls of urban designers. We need to learn from their mistakes as well as our own. We need to bring new habits of being to our worlds of design. Concerns for elegance or novelty cannot override basic needs, equal access, and participatory channels for users to take part in making these worlds work for them and their rights.

There’s also a problem with talking about being "in solidarity" as privileged individuals. While our designs may address inequality, many of us have not suffered its consequences. We work on promoting equity, yet we also understand that existing inequalities perversely benefit our careers. We’re experiencing upheaval as radical and uneven. We’re experimenting with ways of working, coping, and maintaining a sense of responsibility in partial response. We’re trying to grapple with the momentousness of the situation. We’re also trying to survive as individuals in order to aid the survival of our families, friends, colleagues, and communities. It takes much more than a word or two—it takes rethinking what we do from the start. 

Returning to the email sign-offs that began our reflection, the novel coronavirus has taught that the need for solidarity has no bounds. Within our professional worlds, email is not usually the realm where we encounter suffering. But today it has increasingly become another space where we reach out with gestures of care. How can this new sense of uncertainty in realms we imagined as stable and secure push us to be more conscious designers, developers, academics, citizens, neighbors, friends, and family members? How can phrases such as “in solidarity” truly activate an ethics of care in all aspects of our lives? 

We need to do better. To pay more attention. To engage more deeply. To grapple with our own entanglement in everyday inequalities in order to stymie their reproduction and actively promote equity. Inspired by ongoing calls for mutual aid [4,6,10], we need to ask with greater urgency: Where does responsibility lie? As Edna Bonhomme [10] warns: “This is a time for solidarity and to fight back—to figure out a cure for this and to avoid the scapegoating of migrants or ethnic minorities.” Engaging a legacy of solidarity within UX and HCI does not “solve” the range of challenges presented by and within our current moment. Instead it offers one of several sites for opening a conversation across dynamic and uneven geographies of difference. What it means to do ethical design now involves reassessing our collective accountabilities. It means rethinking the worlds we should have been building all along.


Both coauthors contributed equally to this work.


1. Hamraie, A. Twitter. Apr. 3, 2020.

2. Kumar, N., Karusala, N., Ismail, A., Wong-Villacres, M., and Vishwanath, A. Engaging feminist solidarity for comparative research, design, and practice. Proc. of the ACM on Human-Computer Interaction 3, CSCW (2019), 1–24.

3. Dye, M., Kumar, N., Schlesinger, A., Wong-Villacres, M., Ames, M.G., Veeraraghavan, R., O'Neill, J., Pal, J., and Gray, M.L. Solidarity across borders: Navigating intersections towards equity and inclusion. Companion of the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing. 2018, 487–494.

4. Benjamin, R. Black skin, white masks: Racism, vulnerability, and refuting black pathology. Talk given Apr. 15, 2020.

5. Taylor, K-Y. The black plague. The New Yorker. Apr. 16, 2020.

6. Das, V. Facing Covid-19: My land of neither hope nor despair. American Ethnologist. May 1, 2020.

7. Alkhatib, A. We need to talk about digital contact tracing. Personal weblog. May 1, 2020.

8. Costanza-Chock, S. Design justice: Towards an intersectional feminist framework for design theory and practice. Proc. of the Design Research Society. 2018.

9. Bourdeaux, M., Gray, M.L., and Grosz, B. How human-centered tech can beat COVID-19 through contact tracing. The Hill. Apr. 21, 2020.

10. Bonhomme, E. Covid-19 denialism and xenophobia. Spectre Journal. Apr. 20, 2020.

Posted in: Covid-19 on Fri, May 08, 2020 - 10:47:48

Daniela Rosner

Daniela Rosner is an associate professor in human centered design & engineering (HCDE) at the University of Washington and co-director of the Tactile and Tactical Design (TAT) Lab. Her research critically investigates the ethical and participatory dimensions of design methods, particularly within sites historically marginalized within engineering cultures such as electronics maintenance and needlecraft. She is the author of Critical Fabulations: Reworking the Methods and Margins of Design (MIT Press). During the 2019–20 academic year she is working in Berlin as an artist-in-residence at MPIWG and a visiting scholar at Humboldt University.

Nicole Rosner

Nicole Rosner is a postdoctoral fellow in the Mansueto Institute for Urban Innovation and a postdoctoral scholar affiliated with the UChicago Department of Anthropology. Her research concerns the everyday politics of city-making and the violent reproduction of social, spatial, and racial inequality. Her regional interests lie in Latin America, particularly Brazil. She received her Ph.D. in sociocultural anthropology from the University of California, Berkeley with a designated emphasis in global metropolitan studies, her M.Sc. in city design and social sciences from the London School of Economics, and her B.A. from Harvard University with honors. She is currently working on a book project tentatively titled: Remaking the City, Unmaking Democracy: The Afterlives of Urban Renewal in Rio de Janeiro.


The emerging need for touchless interaction technologies

Authors: Muhammad Zahid Iqbal, Abraham Campbell
Posted: Wed, May 06, 2020 - 1:47:42

With the spread of Covid-19, the world of interaction technology research has completely changed: The pandemic has created a higher demand for technologies that allow us to avoid touching shared devices. Before the pandemic, the importance of touchless technology was harder for the world to grasp, and it was certainly not imagined in this context. The gesture-based technologies and hand interactions adopted in research have thus far not been popular outside of research labs. Several issues in the design, development, and adoption of such technologies should be addressed in the near future.

This is a time when the average person can understand why there is a need for touchless interaction, something that was not so easy to explain in the past. The technology is important not only for healthcare workers interacting with medical equipment, but also for the use of ATMs, vending machines, and learning devices—all great examples of where we need touchless interaction.

Touchless interaction is possible with augmented reality technology, which uses gesture and interaction-controller sensors to create a bridge between virtual and real environments. Touchless interaction has also been explored across several research areas: gesture-based technology in surgery [1], inertial sensors for gesture-based interaction with medical images [2], and Kinect [3] and Leap Motion devices for touchless interaction in surgery [4]. It has likewise been explored in education, in motion-based touchless games for learning using Kinect [5], in medical education [6], and in anatomy-learning applications using Leap Motion controllers [7]. In education these technologies were developed mainly to allow interaction with virtual objects, but they are also viable for avoiding direct hand contact with digital devices.

When taking an elevator, you should not have to worry about whether the buttons have been pressed by a Covid-19 patient. Replacing button-based interaction with a gesture or interactive hand controller can handle such cases and move the world forward. This particular case could be addressed using a gesture-based sensing system [8] that receives gesture data, allowing the user to interact with the elevator's operating system without touching it.
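To make the idea concrete, here is a toy sketch of how gesture data might be mapped to elevator commands. All names and thresholds are hypothetical illustrations, not taken from the patented system cited as [8]; a real implementation would consume landmark streams from a hand-tracking sensor rather than a hardcoded list.

```python
# Toy sketch: classifying a vertical swipe from tracked fingertip positions
# and mapping it to a hypothetical elevator call command.

def classify_swipe(y_positions, min_travel=0.15):
    """Classify a swipe from normalized fingertip y-coordinates
    (0.0 = top of frame, 1.0 = bottom). Returns 'up', 'down', or None."""
    if len(y_positions) < 2:
        return None
    travel = y_positions[-1] - y_positions[0]
    if travel <= -min_travel:
        return "up"      # fingertip moved toward the top of the frame
    if travel >= min_travel:
        return "down"
    return None          # too little movement to count as a gesture

def gesture_to_elevator_command(gesture):
    """Map a recognized gesture to a hypothetical elevator control signal."""
    return {"up": "CALL_UP", "down": "CALL_DOWN"}.get(gesture)

# Example: a fingertip rising across four frames produces an 'up' call.
frames = [0.80, 0.65, 0.50, 0.35]
print(gesture_to_elevator_command(classify_swipe(frames)))  # prints CALL_UP
```

The sketch separates gesture recognition from command mapping, so the same recognizer could drive other touchless targets mentioned in this article, such as vending machines or ATMs.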

The rapid adoption of biometric systems to monitor workplace attendance, to serve as official identification, to secure digital devices, and now at ATMs has created a need for touchless fingerprint detection in these areas. Touchless ATMs are a pressing need of our time. On mobile devices, touchless fingerprint payment systems have already addressed this issue in the form of touchless biometric payment.

Currently, tracking devices such as Kinect and Leap Motion, along with Google's recently developed MediaPipe, are great resources for integrating touchless interaction into digital devices. If their design challenges are taken seriously, issues of stability and accuracy can be addressed, helping the world move toward better touchless interfaces.


1. O'Hara, K., Gonzalez, G., Sellen, A., Penney, G., Varnavas, A., Mentis, H., Criminisi, A., Corish, R., Rouncefield, M., Dastur, N., and  Carrell, T. Touchless interaction in surgery. Communications of the ACM 57, 1 (2014), 70–77.

2. Jalaliniya, S., Smith, J., Sousa, M., Büthe, L., and Pederson, T. Touch-less interaction with medical images using hand & foot gestures. Proc. of the 2013 ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication. 2013, 1265–1274.

3. Campbell, M. Kinect imaging lets surgeons keep their focus. New Scientist. May 16, 2012.

4. Manolova, A. System for touchless interaction with medical images in surgery using Leap Motion. ICCC 2014.

5. Bartoli, L., Corradi, C., Garzotto, F., and Valoriani, M. Exploring motion-based touchless games for autistic children's learning. Proc. of the 12th International Conference on Interaction Design and Children. 2013, 102–111.

6. Nicola, S., Stoicu-Tivadar, L., Virag, I., and Crişan-Vida, M. Leap motion supporting medical education. Proc. of 2016 12th IEEE International Symposium on Electronics and Telecommunications. IEEE, 2016, 153–156.

7. Al-Razooq, A., Boreggah, B., Al-Qahtani, L., and Jafri, R. Usability evaluation of a leap motion-based educational application. Advances in Human Factors, Business Management, Training and Education. Springer, Cham, 2017, 171–185.

8. Scoville, B.A., Simcik, P.A., and Peterson, E.C. U.S. Patent No. 10,023,427. U.S. Patent and Trademark Office, Washington, DC, 2018.

Posted in: Covid-19 on Wed, May 06, 2020 - 1:47:42

Muhammad Zahid Iqbal

Muhammad Zahid Iqbal is a Ph.D. researcher in the School of Computer Science, University College Dublin, Ireland. His research interests are human-computer interaction, augmented reality in education, touchless interaction technologies, artificial intelligence, and e-learning. He is an alumnus of the Heidelberg Laureate Forum.

Abraham Campbell

Abraham Campbell is an assistant professor at University College Dublin (UCD), Ireland, who is currently teaching as part of Beijing-Dublin International College (BJUT), a joint initiative between UCD and BJUT.


‘Together, we dance alone’: Building a collective toolkit for creatives in a pandemic

Authors: Kat Braybrooke
Posted: Tue, May 05, 2020 - 11:04:21

It seems to me that, if we can talk about such a thing as the tasks of resilience, then today these tasks will share that quality of taking responsibility: not an impossible, meaningless responsibility for the world in general, but one that is specific and practical, and may be different for each of us. — Dougald Hine, 2012

As the twin towers smoldered on September 11, 2001, the electronic music composer William Basinski produced The Disintegration Loops, a recording of decades-old tapes crumbling into decay alongside live footage of that fateful day’s sunset in ruins [1]. In doing so, the collective paralysis of a historic disaster became something timeless—and nearly 20 years later Basinski’s piece continues to compel people to come together and share it in spaces of eulogy and renewal. Creative responses to crises like this, which ask us not only to consume but also to reflect and rebuild, remind us just how interconnected we all are, our lives made up of recursive loops of cause and effect. Such encounters “rewire our imaginations,” the science fiction author Kim Stanley Robinson argues [2]. As such, they light fires of possibility inside us—the kind of collective sonic booms that can enable, ever so briefly, alternative ways of living-with to emerge. The theorists Stuart Hall and Doreen Massey have described these as the “cracks” that dismantle and transform the systems of unequal power that structure our society [3]. How can design thinking respond to such moments of crisis and opportunity with sensitivity, in ways that transcend disciplinary and cultural divides? How can collective paralysis foster collective action?

These were the kinds of questions that consumed my attention as the Covid-19 pandemic descended upon us. I work on projects such as CreaTures, the Mozilla Festival, and Superrr that bring people together to imagine new socio-ecological futures across the arts and cultural industries. The more I spoke to the creative practitioners with whom I have collaborated on these projects, however—from artists, makers, and designers to hackers, curators, and educators—the more I started to realize just how hard-hit many of them would be by the persistent uncertainties of the virus and its impacts.

Those with precarious livelihoods are faced with two emergencies at once: The first, a health crisis; the second, economic instability. The data on this is already striking: Museums and archaeological sites from Egypt to Asia have had to close their doors and furlough staff for months, and a report by the Fairwork Foundation indicates that half of the world’s estimated 50 million gig workers have already lost their jobs. In the U.K. and Europe, a majority of arts, heritage, and culture charities are under significant threat, and over 60 percent of makers surveyed by the Crafts Council report a loss of income of over £5,000 in the next six months. In the U.S., a census of freelance art workers by Art Handler found that 90 percent do not have paid leave, and 80 percent are worried about rent. An open letter to museums and galleries, meanwhile, is making the rounds to express concern about increasing layoffs targeting precarious staff at cultural institutions like MoMA and LA MOCA. Most worryingly, we are seeing creatives across all sectors state that they lack access to the support and help they need.

The Covid Creatives Toolkit emerged from these uncertainties as a mutual-aid effort aimed at offering some of that much-needed support, by helping creative practitioners who found themselves needing to quickly migrate their practice onto digital places and spaces as a result of the virus. My collaborators and I noticed that many of the kits, guides, and other resources then being populated with creatives in mind remained geographically skewed toward North American perspectives, or did not allow external contributions. For these reasons, we wanted to offer an open resource focused on free offerings with a global reach, one that could be maintained by creatives themselves. Starting from an open call and a tweet asking for help, the kit’s contents were compiled by 30 curators and countless unnamed contributors worldwide, who came to it from across the arts, technology, community, academia, and gig work.

As such, the toolkit has become a living archive that articulates what co-creation as a form of care-making can look like in a crisis. Public contributions to the kit have varied widely, from mutual education and collaborative digital gatherings aimed at challenging social isolation, such as the Uroboros Festival, ANTIUNIVERSITY, and Disruptive Fridays, to film lists, meditative browser extensions for BAME communities, and digital dance parties to promote well-being. The eight featured chapters of the toolkit, from “Digital Gathering Spaces” to “Digital Tools for Creation and Support” to “Digital Well-being,” are wide in scope and offer ongoing documentation of the resources creatives are most in need of as the pandemic progresses. Chapter 8, for example, features much-needed data points on how Covid-19 is impacting creatives. Initially suggested by an anonymous contributor, it has evolved into one of the kit’s most valuable offerings. Another primary focus of the kit emerged from public requests for leads on organizing and bargaining for collective rights, with contributions from organizations that take action against exploitative practices such as the UVW Designers & Cultural Workers Union and FrenaLaCurva.

The Covid Creatives Toolkit has also benefited from the efforts of a decentralized curatorial team, made up of creatives around the world who have volunteered their time to help compile it. Here are some reflections on the process from five of those curators, in their own words.

From design fusionist Kasia Molga in Margate, U.K.: “It is such a strange time for everyone, and some people might be able to cope with this lockdown and anxiety better than others. Many digital creatives are fluent in using network tech to feel connected, but in many cases there is a need to have an anchor or a new routine to keep grounded (while the ground is shifting) and to have a strong base to continue to be creative. I believe that this is what the Covid Creatives Toolkit is providing—a resource for everyone to feel anchored.”

From the writer, artist and film producer Tiffany Sia in Hong Kong: “Digital communication is insufficient in many ways, but to envision new forms of community, we must practice care and mutual aid—in its multitudinal forms—across long distance. Not just in these times, but in this next long century. Resistance against a virus is a global effort. Activism bends around these circuits, and at best, manifests as transnational efforts, sharing digital resources, methods, and tools. Creative practices must follow suit. And digital life expands not just as modes for where production happens; we have to trust in the new ways of togetherness. For the Covid Creatives Toolkit, my contributions were mostly focused on cinema. How can we watch together across timezones? How can we share work, experimental films, which were otherwise kept in the annals of email archives or shown at festivals? Through cinema upon these new channels, we must form a different kind of being together, of sharing dreams. How can we continue to forge a popular consciousness while being physically apart?” 

From Jaz Hee-jeong Choi, director of the Care-full Design Lab at RMIT’s School of Design in Melbourne, Australia: “As a Korean on an extended journey back to Australia just as the borders were closing around the world, I was acutely aware of the rapid changes in people’s perceptions and behaviors in response to Covid-19. Many, including myself, find themselves in between the need to do more of everything, like talking about and making or designing things … and do less of everything we had been doing collectively to date, and perhaps try to listen and/or reflect instead. For this reason, I refused to participate in or share calls for immediate actions, especially those calling for ‘doing more’ from creative practitioners, many of whom were facing imminent threat to their livelihoods … but when the invitation came to contribute to curating the toolkit, I immediately agreed, with thanks. This was because … it clearly embodied openness and humility; it was not asking people to do things for particular outcomes. Rather, it held the space to share, but with care. This means acknowledgment of the diverse … needs of many different creative practitioners in vastly different situations; calling out to those who may not be thinking about what a toolkit might offer to them as a reader or contributor; and ensuring that if they want to, their voices are explicitly heard.”

From Eirini Malliaraki, who works on new AI projects at the Alan Turing Institute in London, U.K.: “I really enjoyed the process of curating the Creatives Toolkit, and particularly the speed and ease of collaboration with the other curators. Information (and misinformation) about Covid has been spreading at an unprecedented scale, and communities need support to make sense of it. The curators of the various response kits and guides are fundamentally sensemakers: They scan, filter, and organize the informational landscape as an act of collective care … emphasiz[ing] elements that resonate with the lived experience and needs of a community.”

From Storytellers United initiator Philo van Kemenade in Hilversum, Netherlands: “I see a unique role for online forms of gathering that are structurally multidisciplinary. Like a bazaar, they are organized around the realization that people coming together from different backgrounds have more to offer to and consequently more to gain from each other … Compared to their offline equivalents, online community efforts need to work extra hard to establish and maintain trust. What makes people dedicate time and attention to an online space? How does a communal space embody trust and where does it come from? Can it be earned or transferred? I feel that through the personal approach in curation and sharing, the Covid Creatives Toolkit did a great job at building on existing networks of trust … and is an excellent example of a framework for collective value creation via distributed contributions.”

Decentralized co-creation also has its limitations, however. The Covid Creatives Toolkit has required the dedicated attention of its volunteer curators to manage its contributions and to disseminate it widely enough to include diverse perspectives. It has been a challenge to gather content and suggestions outside of Europe and North America—and the limited translation capacities of the toolkit’s platform mean it is less replicable than I had hoped. Free resources like the Creatives Toolkit are also left with far too few options for hosting their content on easily accessible digital spaces. As a result, projects of this kind must use free tools provided by proprietary digital platforms, which gather revenue from the data traces of their own users. We also currently lack the social infrastructure to collaborate with others creating similar toolkits elsewhere—and curators like Eirini Malliaraki have rightly asked why we cannot foster resilience not only within the many different communities affected by Covid-19, but also between them. 

These experiences illustrate how the process of taking care, as defined by María Puig de la Bellacasa as “those doings needed to create, hold together, and sustain life's essential heterogeneity by creating relation, in ways that recognize interdependence” [4], can emerge through co-creation in times of crisis in ways that build solidarity—and also how that process can be both messy and complicated. Like Basinski’s Disintegration Loops, the Covid Creatives Toolkit is a product of its time, a reminder of how Covid-19 has rewired our imaginations. It is a reflection of the mutual-aid networks built around it, and the challenges they face. In the words of the Zapatistas, it is a “world where many worlds fit.” By coming together in a time when so many of those involved are isolated and vulnerable to new forms of precarity, mutual-aid toolkits teach us that the claim of “knowing” something is inconceivable without acknowledging the multitude of interdependencies that have made that knowledge possible. As the anthropologist Arturo Escobar puts it, “All creation is collective, emergent, and relational; it involves historically and epistemically situated persons—[and] never autonomous individuals” [5]. I believe it is in these collective worlds upon worlds in all their messiness that the real work of design thinking as a viable form of future-making begins. For it is in such spaces of collective co-creation that we learn who we really are as a species and as a biosphere, and who we really want to become.


I would like to thank the following people in particular for volunteering their time to co-create the Covid Creatives Toolkit as its curators and allies: Marc Barto, Katy Beale, Andrea Botero, Tanya Boyarkina, Jaz Hee-jeong Choi, Hanna Cho, Sophie Dixon, Tracy Gagnon, Janet Gunter, Lara Houston, Sophie Huckfield, Philo van Kemenade, Jamilla Knight, Helen Leigh, Ann Light, Thor Magnusson, Eirini Malliaraki, Mauree Aki Matsusaka, Kasia Molga, Dina Ntziora, Mirena Papadimitriou, Annika Richterich, Anika Saigal, Anouska Samms, Tiffany Sia, Andrew Sleigh, Alex Taylor, and the CreaTures network of researchers and practitioners, who are developing creative practices for transformational futures across Europe, for their support and inspiration. I would also like to thank the many who continue to make suggestions, share, and maintain the toolkit. As Innervisions puts it: “Together, we dance alone.”


1. A recording of The Disintegration Loops is available here.

2. See: “The Coronavirus Is Rewriting Our Imaginations,” New Yorker, May 1, 2020.

3. Hall, S., Massey, D., and Rustin, M., eds. After Neoliberalism? The Kilburn Manifesto. Vol. 53. Lawrence & Wishart, London, 2013. Free to read online here.

4. Bellacasa, M.P. de la. ‘Nothing comes without its world’: Thinking with care. The Sociological Review 60, 2 (2012), 197–216.

5. Escobar, A. Designs for the Pluriverse. Duke Univ. Press, Durham, NC, 2018.

Posted in: Covid-19 on Tue, May 05, 2020 - 11:04:21

Kat Braybrooke

Kat Braybrooke is a spatial anthropologist and designer whose work explores the critical implications of creative communities and spaces in places like Europe and China, with a focus on issues of social and environmental justice. She is currently a research fellow on the CreaTures project at the University of Sussex, and visiting researcher at the King’s College London Department of Digital Humanities.


Life less normal

Authors: Alex Taylor
Posted: Thu, April 30, 2020 - 8:41:00

Is that how we lived then? But we lived as usual. Everyone does, most of the time. Whatever is going on is as usual. Even this is as usual, now. We lived, as usual, by ignoring. Ignoring isn't the same as ignorance, you have to work at it [1]. — Margaret Atwood, The Handmaid’s Tale

We’ve heard plenty of chatter about normal life in the last few weeks. Lots has been said about a departure from the normal, and questions are repeatedly being asked about what disruptions we must endure to normal life to reduce the spread of the novel coronavirus, and to eventually help find a way to return to normal.

Through critical thinking in feminist, race, and intersectional scholarship, we know, though, that this “normal”—ordinary life before Covid-19—is suffused with complications and surfaces acute problems for many across society. For people often assigned to the margins—people of color, the homeless, the colonized, the disabled, the low-waged, the unemployed, the displaced, and so on—normalcy relies on long histories of prejudice and continued exploitation. For many millions, globally, “the normal” is a life in precarity that demands continued endurance.

As we live through the Covid-19 pandemic, these inequalities are becoming increasingly apparent. Coverage in the popular press shows just how widespread and deeply rooted the effects of the imbalances are, and how lethal their consequences can be. From hardships felt by low-paid key workers and those on the front lines, to the disproportionate numbers of deaths among ethnic populations in ostensibly wealthy, modern enclaves (most strikingly among health workers in the Global North), the brutal inequities and injustices of late capitalism are being felt [2].

In HCI, and through parallel research in science and technology studies, we also know that technological systems and scientific programs [3] serve to sustain many of these injustices. Technoscientific systems and infrastructures that seek to monitor and optimize human behavior and productivity, or that manage the functioning and health of bodies, enforce an idea of normal that obscures the brutal realities and erases those at the margins, sometimes violently [4].

At this moment of worldwide disruption from “the normal,” then, it seems another question we could be asking is whether we want to reimagine what, exactly, we want to “return to.” And, for HCI, we might ask what versions of technology we might imagine to disrupt the troubling normalcy that marks our times. The question I want to think with here is: What worlds are we making possible?

Let’s start then with this idea that will be familiar to many readers—that is, how the status quo—what we think of as normal—masks and erases those at the margins of society. From our experiences with Covid-19, we know that crises can make visible those who are usually out of sight. Such disruptions to the normal also bring into sharp relief the technoscientific systems that the few profit from and how they are reliant on discrimination and exploitation [5]. So the exploitation of gig workers and Wetherspoons staff, but also of cleaners, migrants, carers, and people involved in mass food production and supply chains, is a necessary part of sustaining the normal. Crises, like the one we are in, surface the dependencies intrinsic in “ordinary” society and who is exploited to maintain normalcy.

For me, the critical point here is that the challenges we’re facing are deeply structural [6] and are deeply entangled with the sociotechnical systems we work on in HCI [7].

Think about this with respect to the spread of Covid-19. The efforts to limit its impact have, of course, been varied and uneven. There have been reports of the virus and its technologies of mitigation and containment being used to reassert the balance(s) of power and wealth in society, and to exert control over the already marginalized and exploited—a biopolitics of our time. 

This impact is set alongside concentrated incidences of job losses, as well as fraud and crime. For us, I think, questions must be asked of how technologies and versions of technoscience are being mobilized. Everything from access to testing and ventilation equipment, to the machinery for “rebooting the economy,” to distributing state-backed welfare, needs to be examined to understand how the sociotechnical, the sociopolitical, and healthcare are being entangled. And how these entanglements are amplifying already deeply set injustices and discrimination. 

The point I want to make here is not just that the technologies we envision and work on play an active role in these conditions. Nor do I want to make any exaggerated claims about the impact HCI has had on the technology sector. Rather, my claim is that we (in our urge to design interactive systems that appeal to the many) are inexorably intertwined in worlds that furnish and sustain the conditions for exploitation and discrimination. We are not innocent bystanders serving up neutral technologies or indeed fixes [8]; we are integral and complicit in worlds that make many lives a lot less like the normal we are accustomed to and, to be frank, a lot less bearable.

I’ve struggled here to choose an example to illustrate this point, not because there are too few, but because the examples are everywhere when we choose to notice. Let me illustrate my argument, then, by first touching briefly on a realm of work that has been central to HCI pretty much from its inception, remote collaboration and videoconferencing. I then want to turn to what might seem an unrelated area, the technoscientific capacities that enable exploitative, global, animal farming and food supply chains. Placed together, spanning varied realms and scales, we’ll see that the ideas and logics in HCI intertwine with many of the inequities that are surfacing during the coronavirus crisis.

Videoconferencing, for many of us, has become a regular feature of work during the pandemic. With daily calls via Skype, Zoom, Microsoft Teams, etc., those in HCI will be reminiscing about its seminal research covering the interactional challenges of remote working via video, and the human work involved in coping with dropouts and partial views of interlocutors and the spaces they are working in. We will also remember that videoconferencing was seen as one way of creating a more accessible workplace for those with disabilities or who need to work flexibly. Who could have imagined videoconferencing and the troubles of remote talk would have come into their own in the time of a global pandemic?

Yet what many in HCI will have also overlooked, myself included, is just how socially divisive remote, computer-based work would prove to be in 2020. Covid-19 has made it strikingly clear that a significant proportion of undervalued and low-wage work must by and large be performed in person. Those most at risk in society—careworkers, cleaners, bus and delivery drivers, packaging and factory workers, and so on—are at risk because they simply have to be “in place” to work, and at the same time don’t have the privilege or choice not to work.

The turn to knowledge work in HCI was then a turn away from the less privileged and a corresponding investment in a very narrow and distinctive class in society, the wealthy and educated. And in turning its attention away from those who have to be at work, HCI also turned away from large swathes of ethnic populations and race groups. The shocking statistics of Covid-19’s disproportionate impact on Black, Asian, and other minority ethnic groups will take some time to fully explain. However, among other important determinants, I’m confident a need to be physically at work will be a critical factor.

Again, the point to take away here is not that HCI and its research into remote work and videoconferencing are the direct cause of the inequities that surround us today. Nor is it to suggest we’ve not contributed to programs that prioritize fair and equal access to ICT. It’s that we have played—arguably unwittingly—a part in furnishing a world in which the wealthy and privileged have the choice to work remotely, to isolate and socially distance, and to stay safe. HCI is part of a rationalizing of work and labor that makes a version of normal possible, perhaps even probable. In responding to the current crisis, I believe it is then incumbent on us first to notice how we are implicated in these worlds and then to think how we might use our design methods and outputs to create the conditions for many more potential worlds, and alternatives that might just offer better ways of living and dying together.

We turn now to the seemingly distant world of animal farming and food supply chains. Though understandable attention is being given to wet markets in China—those that sell live animals and often exotic species—the dangers we must acknowledge are a good deal closer. Consider the results of an article published in 2018 by epidemiologist Madhur Saharan Dhingra and her colleagues [9]. The authors use a survey of avian flu viruses to show that highly pathogenic cases are far more likely to emerge through commercial poultry farming and intensive production systems, and correspondingly their occurrence is more likely in high-income countries. It’s also conditions like these that accelerate the spread of zoonotic diseases, diseases that make the jump between species. Avian flu and coronaviruses are thus more likely to move between species and to humans in factory farming conditions, where animals are kept tightly packed and huge quantities of effluent have the opportunity to flow between systems of food production [10]. 

Of course, we know that the scale of this farming and scope for the spread of diseases relies on technologies that sense physiological functions, monitor activity, and track the mass transportation of bodies. Although we might argue the concerns of HCI are a long way from animal farming, a very particular logic of bodies is being applied that feels not unfamiliar. Bodies, here, are reduced to quantitative measures and optimal metrics for maximum productivity yields. Moreover, value is assigned and generated through the production and proliferation of data, and the transactional potential it affords. HCI might not be directly involved in designing and building technology for factory farming, but it is deeply entangled in a logic that enables it and allows it to perpetuate.

Consider this further down the supply chain. The human labor of food production, so often hidden from us when normalcy prevails, is, in this crisis, attracting attention [11]. The pandemic is revealing the precarity of low-wage immigrant populations who ordinarily work thanklessly to supply us with food. With these workers routinely classified as unskilled and easily replaceable, we see at one and the same time how undervalued people’s lives can be, but also how critical they are to normality. Again, a technoscientific logic operates here, one of extraction where systems of monitoring and surveillance are deployed to extract maximal labor from people working across global supply chains. Far more sophisticated than the Taylorism applied to the factory floor at the turn of the 20th century, algorithmic technologies manage and optimize globally distributed supply chains against demand, locating human labor among the flows of just-in-time production. The remarkable achievement is that maximum extraction and productivity operates across scales and locations, from the factory farm, to laborers along the supply chain, to the infrastructures of circulation. It’s hardly surprising that human bodies, and indeed other living bodies, appear marginal, if not expendable.

It is no coincidence that this parallels the remarkable work of Lily Irani and Noopur Raval. They show how the piecemeal tasks of Turkers and the monitored activities of gig workers slot into interlocking technoscientific and capitalist logics. Our medical imaging software [12] and takeaway orders, for instance, so much a part of the everyday and in different ways recognized as critical in the pandemic, at once depend on a normally invisible labor that sustains flows of capital and wealth worldwide.

It should then be clear that the technologies we are preoccupied with in HCI—technologies that count, monitor, calculate, identify, etc., all across geographically dispersed networks of fiber and wireless communication channels—are implicated in a version of normal that is exploitative and unjust. The intensive farming of animals and our food supply chains are just two examples of where computing and computational technologies afford and sustain logics in which inequity and exploitation are prerequisites. Although this structural machinery undergirds our dependence on an injustice that feels removed from us, it aligns with the same axes of power and wealth, and amplifies the conditions in which nonhuman-born viruses can establish themselves and thrive in humans.

In HCI, I believe we need ways of understanding how technology and technoscientific infrastructures create very particular conditions for sociotechnical relations and indeed multispecies relations. For example, how technoscience is implicated in deforestation and the massive depletion of wildlife habitats; how it affords a machinic logic in the transportation and slaughter of animals; how it persists in reducing human labor to counts and metrics; and how it creates the conditions for microbes and what emerge as human pathogens to flourish literally in our backyards.

I also believe HCI and design must face the challenge of imagining how life might be otherwise, in and after the pandemic. Perhaps it is about more than asking what worlds we are making possible. The question might be better put: What technoscientific interventions might make other worlds possible?

Finding ways to mitigate the spread of Covid-19 by supporting, for example, contact tracing, symptom tracking, and immunity certification is undoubtedly an important goal. However, the longer-term challenge for those of us invested in design and technology’s proliferation is to look beyond these immediate fixes. We need to be asking what multiscalar modes and practices might be reimagined to be responsive to and responsible for the seemingly separate technoscientific realms of managing human pandemics and caring for our sociotechnical and multispecies relations. We need to be imagining worlds that resist singular or monolithic ways of valuing life, that question the logics of extraction and transaction, and that make possible a multiplicity of ways of living together. As Justin Smith writes in his article “It’s All Just Beginning”: “These are not end times...What this is, rather, is a critical shift in the way we [need to] think about the human, the natural, and the overlap between these.”


1. Thanks to Constantine Sandis for reminding me of the resonances of The Handmaid’s Tale to our contemporary moment.

2. See: “Miami’s flawed Covid-19 testing system exposes city’s rich-poor divide”

3. Yusoff, K. A Billion Black Anthropocenes or None. Univ. of Minnesota Press, 2018.

4. Star, S.L. and Strauss, A. Layers of silence, arenas of voice: The ecology of visible and invisible work. Computer Supported Cooperative Work (CSCW) 8, 1–2 (1999), 9–30.

5. Klein, N. and Peet, R. Book Review: The Shock Doctrine: The Rise of Disaster Capitalism. Human Geography 1, 2 (2008), 130–133.

6. Zoe Williams makes this point forcefully in her Guardian piece: “We say we value key workers, but their low pay is systematic, not accidental.”

7. Katrin Fritsch makes a similar point by raising the specter of the hyperobject in her Medium article: “Back to Normal?! Data and Technology in Times of Crises”

8. Tamar Sharon writes about the complications of companies like Apple and Google building global contact tracing infrastructures in “When Google and Apple get privacy right, is there still something wrong?”

9. Dhingra, M.S., Artois, J., Dellicour, S., Lemey, P., Dauphin, G., Von Dobschuetz, S., Van Boeckel, T.P., Castellan, D.M., Morzaria, S., and Gilbert, M. Geographical and historical patterns in the emergences of novel highly pathogenic avian influenza (HPAI) H5 and H7 viruses in poultry. Frontiers in Veterinary Science 5 (2018), 84.

10. Science journalist Sonia Shah details this in The Nation: “Think Exotic Animals Are to Blame for the Coronavirus? Think Again”

11. A member of Angry Workers who works at a Bakkavor factory, April 16: “‘Don’t call us heroes’: Life on a Production Line”

12. Wang, S., Kang, B., Ma, J., Zeng, X., Xiao, M., Guo, J., Cai, M., Yang, J., Li, Y., Meng, X., and Xu, B. A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19). medRxiv, Apr. 4, 2020; DOI: 10.1101/2020.02.14.20023028

Posted in: Covid-19 on Thu, April 30, 2020 - 8:41:00

Alex Taylor

Alex Taylor is a sociologist at the Centre for Human Centred Interaction Design at City, University of London. With a fascination for the entanglements between social life and machines, his research ranges from empirical studies of technology in everyday life to speculative design interventions.


Design thinking around Covid-19: Focusing on the garment workers of Bangladesh

Authors: Nova Ahmed, Rahat Jahangir Rony, Kimia Tuz Zaman
Posted: Tue, April 28, 2020 - 9:57:47

What’s going to happen to these garment workers? It was a question from my young colleagues Rahat and Kimia. We were working with garment workers in Bangladesh, where the garment sector is one of the leading economic sectors, with around 4 million workers involved in over 5,800 factories [1]. But it was more than work. During our qualitative studies in January and February 2020, we spent weekends with them in their houses. We heard about their dreams, hopes, and aspirations; their mundane days and their frustrations. If you are a qualitative researcher, you will know what this is like; for others, I want to say it is like we have brought parts of them—their feelings—back here with us. When the first Covid-19 case was confirmed in Bangladesh in March 2020, all we could think about was their congested houses, dense workplaces, and lack of savings for healthcare and emergency support.

Before going into their current concerns and design-related possibilities, we wanted to take you to them and into their houses, as shown in Figures 1 and 2. We talked to 55 garment workers in the urban areas of Mirpur within Dhaka city and the suburbs of Ashulia and Gazipur during January and February 2020. These workers are not living in the slums, but their houses were in areas with congested multistoried buildings, one very close to the other. Many of these buildings are not fully complete, often lacking paint and railings on staircases. Each floor of the building holds three to four rooms—sometimes five to six rooms—with a family living in each room. All of the houses we visited, however, had a very complete and elaborate kitchen and washroom, shared across families. Families live together with individual dreams and concerns, but with shared support for each other.

Figure 1. The surrounding area in Gazipur where many garment workers reside; our interview took place in the rightmost corner space. February 2020.

Figure 2. A discussion in Mirpur, Dhaka. February 2020.

Having had the critical experience of working with women before, we were expecting the struggles of female garment workers that are common in our region [2]. But the baseline employment scenario is different here—job security is higher for women in the garment industry, where women make up 90 percent of the workforce [1]. The priority given to women shows up at home, where their stable jobs are accepted and their spouses take on other responsibilities, many struggling to maintain a continuous flow of income. There were signs of blossoming equality starting in these homes.

It is a positive insight that women are more empowered in this sector and can play a significant role in their families, but the picture is not so rosy when looking at the Covid-19 pandemic. This community saves nothing for healthcare, using regular income to support daily life and investing what remains in their children or, at most, in buying a cow in the village. Women work together in the shared kitchen, which is why the social distancing required for safety during Covid-19 does not make sense here under the current living conditions.

When the government-imposed lockdown started in Bangladesh in early April, all garment factories were closed immediately without providing wages. Though it was promised by the garment industry associations that all the workers would be paid their wages in a timely fashion, the reality is that very few factories paid their workers their full salary. Some of the workers we talked to previously reached out to us during the lockdown in Bangladesh, sharing their daily anxieties. They were staying in their residences, still waiting for the garment factories to reopen. We also have seen in the media that many garment workers gathered and protested for their wages during this lockdown period, without maintaining any kind of social distance. 

Our ongoing support systems in Bangladesh are designed with a top-down approach—the solutions, helplines, and risk maps are generated by the authorities. There are also volunteers, foundations, and NGOs who work together or separately on support systems and fundraising, advertising heavily on web platforms and social media. However, the garment worker community is not exposed to the technology world, as they are busy with their laborious day jobs [2]. Listening to the workers, following up during the pandemic, it was clear that they wanted to speak, to share how they have been feeling. But there is no such platform to share their voices and feelings. If the garment workers need emergency support, it will be challenging for them because they do not know what support systems are available. There is support for people in extreme poverty as well as support for middle- and upper-class people over technology platforms, but this community falls in between.

Singapore tried to reduce infections by tracking the cellphones of infected residents and quarantining the cluster of contacts around each infected person. This technology is used by all citizens for healthcare purposes. That is how they can separate the infected community and provide better support. But the context in developing nations is different. Most people here are poor and require financial support and measures to ensure food security. Nationwide lockdowns cause scarcity in low-income communities. Additionally, these communities have less access to technology. The Ehsaas scheme (an SMS-based system) is a cash-transfer system for the poor of Pakistan, but it is not feasible during this pandemic due to the lockdown. Thus, benefiting from any deployed technology is a challenge.

But a problem can open up design opportunities—eventually leading us to the day when we have solutions that are inclusive, open, and supportive. We need a design that incorporates workers’ voices to generate a support system. There will be varying requirements, where one person just wants to share their feelings with someone while others are looking for a way to secure food for the next month. There will be requirements for emotional support as well as support in finding a healthcare provider. Though all the workers we spoke with owned mobile phones, they have a distant relationship with mobile technology. The support elements are present but the connectors are missing. The current connections are one-directional—flowing from authorities toward the community. Most of the decisions depend on the authorities, which is why all communities are not treated equally. We need an easy-to-use interface that doesn’t invade one’s privacy, and that requires minimal technology access. It can be a phone number to call and share how one is feeling, or it can be a virtual contact online.

We understand that the aspirations of this community have been deeply affected by the uncertainties stemming from the lack of proper support. There is a burning requirement to incorporate a communication link from the garment worker community to the supporting authorities. We believe that the post-Covid-19 days should be our days of hope. 


1. Islam, M.S. Ready-made garments exports earning and its contribution to economic growth in Bangladesh. GeoJournal (2020), 1–9.

2. Sambasivan, N. and Holbrook, J. Toward responsible AI for the next billion users. Interactions 26, 1 (2018), 68–71.

Posted in: Covid-19 on Tue, April 28, 2020 - 9:57:47

Nova Ahmed

Nova Ahmed is a computer scientist in Bangladesh. Her focus is on feminist HCI and social justice.

Rahat Jahangir Rony

Rahat Jahangir Rony is a passionate researcher working on the various problems of Bangladesh.

Kimia Tuz Zaman

Kimia Tuz Zaman is an emerging researcher working to solve the problems of Bangladesh.



Authors: Christopher Frauenberger
Posted: Mon, April 27, 2020 - 5:27:14

As people around the world try to make sense of a new normal, many commentators are saying that this global pandemic will change us permanently. Indeed, the shifts in the ways that we run our societies are seismic and could hardly be imagined only a few months ago—social distancing, travel restrictions, and the shutdown of large parts of our economy, all implemented within a few weeks. Even if this may, and hopefully will, be only temporary, the experience of living through coronavirus will stay with us. We will have seen what is possible, both because it was necessary and because we chose to do so. Aspects of our existence that seemed to be set in stone will have turned out to be up for debate. The virus will have changed who we are.

Recently, I made an argument for a similar form of entanglement with the nonhuman world, that of technology. In [1], I argued that our intimate relationship with digital technology has become equally existential, in that the digital things we create fundamentally change who we are. Consequently, the key for guiding our creation of technological futures should be the political question of who we want to be as part of the world we happen to share with other things and beings—a holistic ethico-onto-epistemological perspective [2] that treats questions of being (ontology), knowledge creation (epistemology), and responsibility and purpose in the world (ethics) as inseparable from each other. While man-made technology is of course different from the virus in important ways, I find this relational, posthuman perspective also to be a very effective lens for making sense of our response to the pandemic, as it plays out in the context of technology as well as more generally.

As we find ourselves in a messy situation that is hard to assess, we debate what is the right or wrong response to this pandemic. In many ways, it is like a crash course on ethical dilemmas: Whose lives do we save—literally as well as in the sense of livelihoods—and at what cost? Who gets left behind? Who exerts that power and by what authority? And what will happen afterward? The main political arenas in this debate include public health, the economy, and technology, and we currently reconfigure these arenas by redefining some central relations between human and nonhuman actors [3]. This mattering [2], these discursive material practices, distribute agency and power in new ways. It makes certain things possible, while making other things very difficult, and it enacts new lines of differences and othering. While there are certainly many different ways in which this plays out, I want to pick out two examples that struck me as particularly relevant for the field of HCI.

Like in many other countries, schools were officially closed in Austria in mid-March. The teachers of my two children scrambled to find ways to implement “eLearning” in a matter of days. They sent PDF worksheets as large attachments to emails with 30-odd recipients, answered questions in WhatsApp groups, distributed links to online content all over the Internet, and organized the occasional video conference, asking “So, how is everybody?” Like all of us, they have been caught by surprise and find themselves on steep learning curves. Some parents are distraught—they share one laptop between three children and need to do their home office work at night, when none of them is eLearning. Others do not have access to a printer, or are running low on toner or paper. Some children have taken on the roles of translators and IT consultants for their parents, while self-organizing their own education—knowledge and skills that they will not be credited for. Communication with some children has just dropped out entirely. 

There has been a lot of work on the (new) digital divide [4,5], but the virus has laid it bare in the midst of our society. Further, in our response to the pandemic we witness firsthand a reconfiguration of that divide, an implicit (and explicit) othering, facilitated through technology. Hard-to-reach children from difficult socioeconomic backgrounds, who we should have the highest interest in lifting up, have just been dealt a(nother) bad hand. In the coming surge of efforts to design roles of technology in education, as no doubt will happen, we will need to negotiate these entanglements between improving learning experiences and creating equal opportunities. There will be no best decisions, only choices and trade-offs in the political arena that is innovation. And, I argue, one of the most productive questions that can guide this innovation is: Who do we want to become through the (educational) tools we bring into this world? 

The second example is, at its core, a struggle as old as humanity: between the common good and individual freedom. And, as one would expect, it implicates digital technology at its center. Surveillance capitalism has co-evolved with technology to produce an infrastructure that runs on unprecedented levels of knowledge about the masses, with mechanisms for behavior prediction and manipulation at scale [6]. Now this infrastructure can potentially inform our response to the pandemic. In Austria, the former state-owned telecommunications provider produced aggregated data about people’s mobility for the government, after their first lockdown measures. While the public largely supported the lockdown, this use of information was perceived as suspicious. Of course, Google provides a similar analysis for all the countries of the world and uses movement data to chart the least busy times to shop in supermarkets. Helpful now, no doubt, but also a simple repurposing of information that the company collected for different reasons. It is interesting to note that, at least in Austria, the public seems to be wary of the state using that information, while private companies seem to seize the opportunity for whitewashing their practices.
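The “least busy times” charts described above are, at bottom, a simple aggregation of individual visits into coarse time bins. The sketch below is a hypothetical illustration of that repurposing, not Google’s actual pipeline; the hourly binning and the minimum-count suppression threshold are assumptions standing in for real aggregation and anonymization machinery.

```python
from collections import Counter
from datetime import datetime

def busyness_by_hour(visit_timestamps, min_count=5):
    """Aggregate individual visit timestamps into an hourly histogram,
    suppressing hours with fewer than min_count visits as a crude
    gesture toward anonymity."""
    counts = Counter(datetime.fromisoformat(ts).hour for ts in visit_timestamps)
    return {hour: n for hour, n in counts.items() if n >= min_count}

visits = (["2020-04-20T08:15:00"] * 6 +   # morning rush
          ["2020-04-20T14:30:00"] * 9 +   # afternoon peak
          ["2020-04-20T21:05:00"] * 2)    # quiet evening: suppressed below threshold
print(busyness_by_hour(visits))           # the 21:00 bin is dropped
```

Even this toy makes the essay’s point concrete: the same individual-level location data collected for one purpose can be trivially repurposed into another product, with the privacy guarantee resting entirely on choices like `min_count` that the data holder makes unilaterally.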

A related function is contact tracing. In Austria, the Red Cross teamed up with Accenture and a private insurer to produce an app, Stopp Corona, that uses Bluetooth and ultrasound to estimate the distance between two mobile phones, performing a digital handshake if they are close enough. IDs are exchanged, and if someone tests positive, the logs allow the tracing of contacts to contain the spread. The Austrian data-rights NGO analyzed the app and came to the conclusion that it does many things right—it was developed with privacy and security in mind. But no independent audits have yet been conducted and questions around the involvement of private companies are being asked. There are also fears that, while officially denied, use of the app will become quasi-compulsory to be able to participate in the slow reopening of public life. Meanwhile, a broad European alliance of research institutes and technology providers has teamed up in the Pan-European, Privacy-Preserving Proximity Tracing (PEPP-PT) project. And, in an unusual alliance, Google and Apple are collaborating to build contact tracing into their mobile operating systems in similar ways.
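The digital handshake and matching logic that such apps rely on can be sketched in miniature. This is an illustrative toy, not the actual Stopp Corona, PEPP-PT, or Apple/Google protocol: the random ephemeral IDs, the in-memory logs, and the publish-and-match step after a positive test are all assumptions chosen to show the decentralized idea in its simplest form.

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class Device:
    """A phone running a simplified decentralized tracing app."""
    seen: set = field(default_factory=set)       # ephemeral IDs observed nearby
    own_ids: list = field(default_factory=list)  # ephemeral IDs we broadcast

    def current_id(self) -> str:
        # Broadcast a fresh random ephemeral ID (rotated to hinder tracking).
        eph = secrets.token_hex(16)
        self.own_ids.append(eph)
        return eph

def handshake(a: Device, b: Device) -> None:
    """Digital handshake: each device logs the other's ephemeral ID locally."""
    a.seen.add(b.current_id())
    b.seen.add(a.current_id())

def exposed(device: Device, published_ids: set) -> bool:
    """After a positive test, the patient's broadcast IDs are published;
    each device checks its own local log for a match."""
    return bool(device.seen & published_ids)

alice, bob, carol = Device(), Device(), Device()
handshake(alice, bob)  # Alice and Bob were in proximity; Carol was not
```

Notice that in this decentralized sketch the matching happens on each phone against its own log, so no central party ever learns the contact graph; the political questions the post raises then concentrate in who controls the publication step and whether participation is genuinely voluntary.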

Next to medical testing, such data may become indispensable for making informed decisions around the far-reaching changes in our societies. The nature of the sociotechnical infrastructure that we build to produce this data determines what we can know, who it will discriminate against, and what we become through it—again, an ethico-onto-epistemological question [2]. This entanglement is being acknowledged, as we witness a new quality of debate that recognizes that technology is deeply political. While the paradigm of surveillance capitalism has rampaged through our societies largely unchecked, with far-reaching consequences for our democratic structures, questions about whose interests are being served with tracing apps and how this is reconfiguring power are starting to be asked. As in the educational context above, there will be no objectively “correct” design decisions in using big data for keeping pandemics at bay. And as with the notorious trolley problem, there is no correct answer to how much privacy we may want to give up for saving how many lives. These will be choices that, I argue, need to be negotiated. We also need to find appropriate formats for people to participate in this process of agonistic struggle for desirable (technological) futures [7]. And we should be guided by the question of what the technology we bring into this world will make us and if this is who we want to become.

It may well be that public health becomes the next national security—an inherently elusive, yet indisputable desire of people that is being misused to justify technological surveillance. Like 9/11, we might see the coronavirus serving as the scapegoat to implement modes of mass behavior manipulation by private companies. However, current public discourse offers glimpses of hope that society might have come to realize something in this pandemic: that digital technology is not just a tool; that innovation is a political arena in which we can participate; that technology creators are political actors who cannot be allowed to be above democratic accountability; and that we can have a voice in shaping technological futures—as they shape who we become through them. In a very posthuman, relational way, the virus may have shifted our relationship with technology.


1. Frauenberger, C. Entanglement HCI, the next wave? ACM Trans. Comput.-Hum. Interact. 27, 1 (2019), 2:1–2:27. DOI: 10.1145/3364998

2. Barad, K. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Second Printing edition. Duke Univ. Press Books, Durham, NC, 2007.

3. Latour, B. Reassembling the Social: An Introduction to Actor-Network-Theory. Clarendon Lectures in Management Studies. Oxford Univ. Press, Oxford, UK, 2005. 

4. Warschauer, M. Technology and Social Inclusion: Rethinking the Digital Divide. MIT Press, 2004. 

5. Brandtzæg, P.B., Heim, J., and Karahasanović, A. Understanding the new digital divide—A typology of Internet users in Europe. International Journal of Human-Computer Studies 69, 3 (2011), 123–138. DOI: 10.1016/j.ijhcs.2010.11.004

6. Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, New York, 2019.

7. Mouffe, C. Agonistics: Thinking the World Politically. Verso, 2013.

Posted in: Covid-19 on Mon, April 27, 2020 - 5:27:14

Christopher Frauenberger

Christopher Frauenberger is a senior researcher at the Human-Computer Interaction Group, TU Wien (Vienna University of Technology). His research focuses on designing technology with and for marginalized user groups, such as those with disabilities. He is committed to participatory design approaches and builds on theories and methods from diverse fields such as action research, disability studies, philosophy of science, and research ethics.


Sustainable practices for the academic business sector: Publish in journals such as ToCHI

Authors: K. Höök, Robert Comber
Posted: Fri, April 24, 2020 - 3:15:06

ACM Transactions on Computer–Human Interaction (ToCHI) is the premier ACM journal for HCI research. Founded in 1994, the journal has been publishing research at the cutting edge of HCI every second month since 2013. Since taking over the helm at ToCHI in November 2018, we have started to see a number of changes, from the number and types of papers submitted to the journal to trends and practices in our field—and now, amid a global pandemic, in the role and nature of research and academic publishing. We want to use this blog to tell you a little about ToCHI and to encourage you to consider it, and other journals, for publishing your research. 

We realize that this comes at a time of difficulty for many people. Many of us now sit at home, trying to get teaching done over the Internet, rescheduling trips and events, and, most of all, worrying about those near and dear, our friends, our colleagues, our countries, and the financial situation. Publishing journal articles may be the least of our worries. 

It may be hard to see any positive effects of the pandemic—aside from the drastic drops in air pollution, allowing mother earth to take a deep breath. Yet some of the challenges and opportunities we now face preceded the current crisis. Many have long argued that we need to stop traveling and instead conduct meetings and conferences over the Internet. Innovative solutions and full-scale tests of how to host conferences are suddenly being devised without much prior experience, pushing us into solutions and behaviors that we have been longing for. At the same time, for those of us spending our days videoconferencing, it has become clear how much more research on CSCW, UIST, and user practices is needed before this will become the better option. In short, it is clear that HCI work is badly needed!—as we start working more and more remotely in order to reduce carbon emissions; as we integrate digital interactions into every area of life in the Internet of Things era; as we shift toward whole-body interactions through novel interactive materials; and as we engage with AI and big data as a path to controlling and managing whole infrastructures efficiently. In our offices, we are trying to move everything online, from traditional lectures to co-design, from supervision meetings to after-work social events. 

And we are learning. New practices are arising. 

And so a window of opportunity for debating our own profession, including our publication strategies, opens. HCI is a conference-driven field, as is the whole field of computer science. This has had many positive effects on our young field: By meeting regularly, we have fostered a global understanding of what we expect in terms of quality, research methods, understanding of different cultures, and interdisciplinarity. But maybe we are now at a point in time where we can—and perhaps must—shift more of our work toward journal publications on the one end, and popular accounts of our insights in magazines, such as Interactions, on the other end? 

While we can see many negative consequences of not meeting as often to share ideas, we might ask what, apart from reducing the environmental impact of our work, could be the benefits of such a shift? As editors of ToCHI, we see every day how journal articles give us an opportunity to provide more—more substance, a better grounding in the literature, a shift toward deeper reflection. This addresses a lack that HCI has struggled with for a long time. As many have argued before, our field needs to start building more coherence. The different subfields of HCI need to start building on one another’s work, developing research programs or even whole paradigms. We need to shift beyond importing theories and start forming our own. We need to probe the interaction design concepts and technologies we develop, testing their validity and reach, developing some coherence in our research methods and the knowledge we create. 

How much substance are we talking about when we shift from conference to journal articles? In our experience, a good ToCHI paper often builds on work done over a longer time period. Instead of one short user study, there will be several studies, or a study done over a longer term. Instead of one exploration of a novel interaction technology, there will be several iterations. Instead of a limited literature review, there will be a substantial review providing an analytical framework that shifts a whole research topic forward. ToCHI publishes work that describes user studies done over months to properly observe behavior shifts or design work done over several years, building a whole new interaction model or a design program. 

How do we know that there is substance in a submission? As ToCHI aims to be the flagship journal of the HCI field, we put high demands on the quality of papers we publish. The associate editors of ToCHI are leaders in our field with extensive knowledge of their respective subfields. We rely on their expertise to a larger degree than in, for example, the CHI conference review process. An associate editor of ToCHI will engage at least three reviewers, but will then, based on their input, make an independent decision on whether to accept, ask for revisions on, or reject an article. 

It has often been argued that journals are too slow to cater to the needs of a rapidly changing field. Like many other journals, ToCHI has worked really hard on changing this. Our aim is to provide authors with a decision within eight to nine weeks from the time an article is submitted (average is currently 48.2 days). In most cases, revisions are needed, so how fast the next step progresses depends on the author. Papers require a median of two rounds of revision, though over 40 percent are completed with one or no revision cycle (see Table 1). Each revision cycle adds about four months to the process, while the average time from first submission to publication is 11 months. These days, as soon as the final, camera-ready version is submitted, it will immediately be published in the ACM Digital Library. With the exception of special issues, there is no longer any need to wait for the whole issue to be completed before individual articles become available. In effect, this means that the time from submission to seeing your paper in the ACM Digital Library might be as short as 80 days. 
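As a back-of-the-envelope check on these figures, the timeline can be sketched as a simple additive model. The function and constants below are my own illustration of the averages quoted above, not a description of ToCHI's actual workflow, and the remaining gap to the 11-month average is presumably production time:

```python
# Rough model of ToCHI time-to-publication, based on the figures quoted above.
FIRST_DECISION_DAYS = 48.2    # current average time to a first decision
REVISION_CYCLE_DAYS = 4 * 30  # each revision cycle adds about four months

def days_to_publication(revision_cycles: int) -> float:
    """Estimate days from first submission to appearing in the ACM DL."""
    return FIRST_DECISION_DAYS + revision_cycles * REVISION_CYCLE_DAYS

print(round(days_to_publication(0)))  # best case, no revisions: ~48 days
print(round(days_to_publication(2)))  # median case, two cycles: ~288 days
```

Even under the median two-cycle case, this sketch suggests that how quickly authors turn around revisions dominates the overall timeline.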

Table 1. Average number of days per revision cycle. 

ToCHI has been increasing in size during the past few years. The growth in the number of articles submitted per year is shown in Table 2, and the trend line shows continued growth of 14 percent year on year.

Table 2. The number of ToCHI submissions per year.

We are currently aiming to broaden ToCHI's scope, from a focus on the technical side of HCI to encompassing all of the HCI subdisciplines. This is reflected in the composition of associate editors. It is also reflected in the papers that have recently been submitted and published. 

Finally, ToCHI authors can choose to present their work at one of the ACM SIGCHI conferences. Most have in the past picked the CHI conference. This has given authors the best of both worlds, as CHI lets them meet their community and debate their results. But to address the sustainability concerns outlined above, we now aim to find new ways of engaging authors and readers in debate. We are discussing including commentaries on published papers, more special issues, and perhaps a blog post in Interactions on the impact of certain ToCHI papers. 

If you are considering submitting to ToCHI or in general want to know more, please feel free to contact us!

Finally, take care everyone! Put yourself and your family first!

Kristina Höök, editor in chief of ToCHI, and Rob Comber, associate editor and information director of ToCHI 

Posted in: Covid-19 on Fri, April 24, 2020 - 3:15:06

K. Höök

Kristina Höök is a professor in interaction design at the Royal Institute of Technology (KTH), Stockholm, Sweden. Höök is known for her work on soma design, first-person perspectives on design, and epistemology for interaction design.

Robert Comber

Rob Comber is an HCI researcher and associate professor in communication at KTH Royal Institute of Technology. His research focuses on issues related to the democracy of technology, including social and environmental sustainability, social justice, and feminism, and to specific applications of computing technology, including civic society, food, and social media.


Workshops are now required to be conducted remotely—is this a bad thing?

Authors: Vikram Singh
Posted: Thu, April 23, 2020 - 2:22:34

There's a palpable sense of anxiety in the recent global shift to home working. Both bosses and workers are feeling the loss of the benefits of in-person gatherings, such as meetings, workshops, and casual chats. In the UX industry, the loss seems all the more urgent. Some have argued that design and research workshops are irreparably hobbled by the requirement to conduct them remotely. Workshop facilitators are forced to use “whiteboarding” tools such as Miro or Figma, which provide online collaboration with real-time edits.

There's no question that by conducting workshops remotely, facilitators lose out on paralanguage—the emotive and gestural communication we take for granted in person. But these disadvantages are more than made up for by the computer-mediated benefits that workshops conducted via computer intrinsically bring.

It isn't that these benefits fill in for paralanguage; rather, they enhance remote workshops in ways that are lacking in in-person workshops. They also correct for challenges that appear endemic to how we have traditionally run workshops. 

First, online workshops have very low barriers to entry. Donald Schon argued that design is a conversation with materials [1]. In design workshops, facilitators will often enact this idea by getting users to sketch, group Post-its, or play with a flexible material such as Lego or clay. The tangibility of these materials can be very effective at embodying thought and catalyzing new ideas, yet it can be difficult for non-designers—or those who are anxious or shy—to participate.

But with collaborative whiteboarding tools such as Miro, Figma, or even Google Draw, users can copy and paste text, group items, and even sketch out wireframes using very basic point, click, and drag interactions. The barrier to entry is extremely low for anyone with basic computer literacy. While these interactions lack the haptics of in-person workshops, they offer simplicity, flexibility, and scalability: Copying, resizing, and modifying are all mere clicks away. What's more, participants who may be shy about drawing may find that the constraints imposed by the tools (e.g., all boxes look the same for every user) are beneficial, as they iron out differences in skill levels among participants—at least for basic sketches.

A second point is that online tools allow participants to be embodied in the design practice in an immediate way that isn’t possible with in-person workshops. Design researcher Nigel Cross discusses [2] three different phases of design—gathering information, sketching, and reflecting—as part of Schon's conversation with materials. This conversation with materials becomes highly sharpened in online tools, as participants can switch between these phases extremely quickly. Participants group, annotate, and sketch while allowing everyone else to see what they are doing, and immediately reflect on activities completed. Their actions are embodied within the tools in a way that makes it little effort to engage and disengage in different activities. I recently ran a persona workshop where each participant was able to place notes against each persona, then immediately openly reflect with the group on what they were doing, without mandated turn-taking or phases.

This is a more laborious activity in in-person workshops, where the doing and thinking phases are sharply distinguished. There are often separate note-taking, sketching, grouping, and analyzing phases, meaning it is more challenging for participants to individually act on personal and collective feedback. 

A third point is related to the above—the tools themselves are structured with a high degree of freedom. In other words, the structure of the tools does not enforce or imply a workflow or conceptual structure for organizing information. In his book The Stuff of Bits, Paul Dourish describes how the properties of software constrain, enable, limit, and shape the way those representations can be created, transmitted, stored, and manipulated [3]. He discusses how Excel (which, in my experience, is extensively used for research and design tasks) defines what workers are committing to, how the information must be structured, and the level of granularity of the information. However, with new tools such as Miro and Figma, there is no structure—the workspaces constitute an infinite, empty canvas on which users can add anything: screenshots, text, shapes, etc. While constraints can be helpful for creativity, here they need not be mandated by the software’s materiality, as happened so commonly in older tools, but rather are imposed by a facilitator to induce creativity. As such, workshops that use these tools can oftentimes have as much creative freedom as in-person workshops.

Finally, remote workshops afford much richer sensemaking than in-person workshops. Sensemaking, loosely defined here as finding conceptual frameworks for information, is expedited in online tools by allowing participants to perform any number of actions on workshop artifacts to increase their understanding. A participant may zoom in and out on communal sketches or diagrams, or may even copy artifacts elsewhere to play around with. In other words, the sensemaking is subjective. While the objective of workshops is to form a consensus around information, facilitating individualized sensemaking can help users contribute to a group-level sensemaking consensus.

In some ways, it can be argued that we are trapped by COVID-19. And in many ways we are. We simply can’t be physically next to each other, and it is impossible to compensate for the interpersonal benefits that are intrinsic to in-person workshops. Any attempt to replicate the feeling of sitting next to someone in a workshop is bound to be a poor imitation.

But intellectually and creatively, it’s important to examine how the digital tools that we have developed for design and research open new avenues for creation and analysis, and don’t just act as a lesser replacement for being able to sit next to another person. 


1. Schon, D.A. Designing as reflective conversation with the materials of a design situation. Research in Engineering Design 3, 3 (1992), 131–147.

2. Cross, N. Design Thinking: Understanding How Designers Think and Work. Berg, 2011.

3. Dourish, P. The Stuff of Bits: An Essay on the Materialities of Information. MIT Press, 2017.

Posted in: Covid-19 on Thu, April 23, 2020 - 2:22:34

Vikram Singh

Vikram Singh is the head of UX at Lightful in London. He has an M.Sc. in human-centred systems from City, University of London and regularly writes about technology and how we interact with it. @wordsandsuch


Insights from videochat research in the context of Covid-19

Authors: Danielle Lottridge
Posted: Mon, April 20, 2020 - 4:16:25

Covid-19 has made videochat more important than ever, with usage of Zoom and apps like Houseparty soaring. While at Yahoo Inc., my colleagues and I spent a few years studying how teenagers interact online. In this post, I’ll share insights from our research that are relevant to today's reality of online interaction under lockdown. 

You can influence others’ videochat attentiveness with your own behavior 

Our research shows teens respond to each other’s signals for attentiveness. Looking away from the screen or pausing the video signaled a reduction in attention, while asking questions and calling out signaled an increase in attention; videochat partners matched that signaled attention level [1]. What this means is that your behavior during videochat can influence others. You can signal to others that the videochat will be for co-presence (while you each do other things), or you can rally others to pay attention by signaling your full attention visually and aurally (and perhaps even suggest ending the meeting early if you get through your agenda!). 

Have empathy for context switches—they may be more disruptive for others than for you 

Many people are working from home and juggling multiple roles, both professional and personal. Sometimes context “bursts” into videochat, especially in the form of sudden noise like dogs barking or a family member calling out, which we found was embarrassing for some teens caught within context overlaps [1]. You can use these bursts as reminders to be understanding and supportive of your colleagues’ other commitments. This may be especially relevant for parents, but anybody’s life can spill in front of the camera! 

Videochat can support shifts from focused interaction to social co-presence, and there’s something great about that 

Our research found that teens regularly engaged in videochat that included long periods of silence or focus elsewhere, such as scrolling social media feeds or playing games, amidst spontaneous bouts of conversation [1]. If you are vulnerable to social isolation, we encourage you to try idly hanging out on videochat without actively talking or trying to entertain.

Feeling lonely? Consider livestreaming for a new form of social co-presence 

Unlike videochat, which tends to be intimate and by invitation, livestreaming has more of an open-door policy, welcoming anybody to watch. Our project on teens’ livestreaming found that they used it for social co-presence [2]. They chose livestreaming because they could start it at any time without having to rely on or wait for others to join the call. There are privacy concerns, but teens dealt with these in two ways: 1) vetting incoming participants, or 2) aiming the camera at an activity, such as cooking, rather than at themselves.

Wishing everyone health and safety during this difficult time. We hope that these tips are useful in thinking about new ways to use technology to interact with your communities and colleagues.


1. Suh, M., Bentley, F., and Lottridge, D. "It's Kind of Boring Looking at Just the Face": How Teens Multitask During Mobile Videochat. Proc. of the ACM on Human-Computer Interaction 2 (CSCW), (2018), 1–23.

2. Lottridge, D., Bentley, F., Wheeler, M., Lee, J., Cheung, J., Ong, K., and Rowley, C. Third-wave livestreaming: Teens' long form selfie. Proc. of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services. ACM, New York, 2017, 1–12.

Posted in: Covid-19 on Mon, April 20, 2020 - 4:16:25

Danielle Lottridge

Danielle Lottridge completed a Ph.D. in human factors engineering at the University of Toronto, then migrated south to do a postdoc at Stanford University in California. She worked in the Silicon Valley tech industry for several years before moving to New Zealand and into computer science at the University of Auckland.


Coronavirus and the carnivalesque: What speculative methods can tell us about Covid-19

Authors: Cally Gatehouse
Posted: Mon, April 20, 2020 - 10:31:01

Like many HCI researchers, I’ve found that the coronavirus pandemic and its consequences have made it almost impossible to continue doing research as normal. Not only am I suddenly remote from the resources and communities that I work within, but I find myself newly cautious about the speculative methods that I use. It seems irresponsible to speculate at a time like this when we’re not able to predict what the end of each week might bring. Asking questions of what shape our shared futures might take feels fraught when so many lives are on hold and at risk during lockdowns. However, as the pandemic has unfolded, I have started to recognize uncanny similarities to my experiences of doing speculative research. Much like speculative research, these events seem to ask us to renegotiate our sense of what is possible, probable, plausible, and preferable on an almost daily basis. And while the events that I am normally concerned with involve myself and a handful of other actors rather than the global scale of the coronavirus, I have started to wonder if speculative research might offer some concepts that can help us to make sense of this current moment in which the world feels turned upside down.

In my research, I often use a variety of material, visual, and performative devices in pursuit of a carnivalesque unsettling of the authority of what-is to dictate what-may-be. This draws on Bakhtin’s description of the carnival as a time and place for “working out, in a concretely sensuous, half-real and half-playacted form, a new mode of interrelationship between individuals [1].” Carnivals, by standing in contrast to normal life, are spaces in which social hierarchies can be questioned and reconfigured. Previously, I have written about how participatory speculative workshops took on exactly such a carnivalesque atmosphere to create a space and time in which we could collectively reconsider what it would mean to take young LGBT people’s experience of hate crime seriously [2]. This upending of the normal world enables us to imbue parodic and unconventional propositions with a sense of provisional seriousness. While these events do not banish existing social hierarchies, they give a heightened sense that these hierarchies are open to revision.

Coronavirus and the measures taken to slow its spread have created a similar sense that normal rules have been suspended temporarily. However, the new rules of these events are in some senses an inversion of the carnivalesque. Bakhtin’s account of the carnivalesque has four distinct elements: free and familiar contact amongst people, eccentric behaviour, profanity, and carnivalistic mésalliance (the bringing together of opposites such as light and dark, serious and silly, life and death, etc.) [3]. The temporary regime that coronavirus necessitates with limited physical contact between people, strict policing of deviance from the new rules of social distancing, and the careful control of all things bodily is a darker mirror image of these characteristics. Most important, rather than a carnivalesque inversion of existing hierarchies, we are seeing inequalities like those that result from the digital divide, precarious and low paid work, and unequal access to housing and healthcare play out in ever more striking terms. 

But even in this inverted carnivalesque, there remains some of the same inventive potential to produce newly improvised forms of social relationships. Pubs, exercise classes, and religious services are moving online. We are also finding new ways to enjoy the limited public spaces we still have access to, from spectacular mass performances like singing from apartment balconies to clapping to show support for healthcare workers. On a personal scale, we are all learning how to negotiate public spaces while keeping a safe distance. Normally trivial or mundane activities like going to the supermarket bring a new sense of risk, but people are also finding new ways to derive pleasure or comfort from whatever access to outdoor space is available to them. In my neighborhood, I’ve seen sunbathers sit on flat roofs, garages turned into gyms, and badminton games played across garden fences. I saw a woman visiting what I took to be her daughter and grandson, trapped at the garden gate by social distancing. The visitor was throwing a ball the length of the short garden path for the little boy to kick back to her. This seemed a little risky, right on the edge of physical contact, but the boy was laughing with delight.

These kinds of deeply ambivalent experiences are perhaps the strongest affinity between what we’re currently experiencing and the carnivalesque. Both are experiences in which we are forced to acknowledge the ways that horror and joy coexist. The experience of lockdown is streets that feel like a peaceful Sunday afternoon all week long and the dread of daily reports of a rising death toll. It is a sudden expansion of police powers and the invention of new social safety nets. It is being painfully cut off from one another while finding new ways to connect to those physically distant. It is the creation of these new ambivalences that gives what we’re currently experiencing a speculative power. Things that seemed impossible are suddenly a reality: Air pollution drops as we reconsider what economic activity is really “essential,” homeless people are housed overnight, and universal basic income has gone from a fringe proposal to a national policy in Spain. Perhaps troublingly for HCI, we are also seeing a growing acceptance that the necessity of intrusive surveillance technology outweighs privacy concerns. 

As HCI has increasingly engaged with the broader social, political, and environmental implications of our work, we are increasingly confronted with problems that far exceed our individual or collective capacity to respond as HCI researchers. Coronavirus is arguably one of these problems. Many HCI researchers, including myself, have turned to Donna Haraway’s rallying call to “stay with the trouble” [4] as a way to navigate these new territories. How do we stay with this particular moment of trouble? At some point this crisis will end and we will find ourselves in a post-corona world, one that will look and feel very different to the world we knew before. What will persist of this moment in which “business as usual” became impossible is hard to predict. However, looking back through the lens of carnivalesque ambiguity, we can begin to account for the ways in which this time brought together elements that are both painful and pleasurable, ugly and beautiful, isolating and uniting, in a host of unexpected ways. Understanding how these discordant aspects have come together in new ways can help us design for not just the world we have, but for the other possible worlds that these strange times allow us to glimpse.

Perhaps this is why I am cautious about speculative methods, particularly in moments like this. Speculation produces novelty but offers no guarantee that we will like the novelty that it produces. Indeed, the hope and the risk of speculative methods is that they will produce something that requires entirely new values with which to judge it. In a similar way, these times, as troubling as they are, sharpen our senses to the way in which Karen Barad reminds us that what is “out of sight may be out of reach but not necessarily out of touch” [5]. While we may be physically distant at the moment, the ways in which we are inextricably interdependent have never been more obvious. While it is far too soon to fully account for what new values coronavirus will produce, speculative methods offer some insights into how these risks can be shared and how we can work to develop our collective capacity to keep responding to these events even when solutions remain beyond our grasp.


1. Folch-Serra, M. Place, voice, space: Mikhail Bakhtin’s dialogical landscape. Environment and Planning D: Society and Space 8, 3 (1990), 255–274.

2. Gatehouse, C. A hauntology of participatory speculation. Proc. of the 16th Participatory Design Conference 2020 - Participation(s) Otherwise - Vol. 1.

3. Bakhtin, M. Problems of Dostoevsky’s Poetics. University of Minnesota Press, 2013.

4. Haraway, D.J. Staying with the Trouble: Making Kin in the Chthulucene. Duke University Press, 2016.

5. Barad, K. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Duke University Press, 2007.

Posted in: Covid-19 on Mon, April 20, 2020 - 10:31:01

Cally Gatehouse

Cally Gatehouse is a lecturer in communication design at Northumbria University. She is a design-researcher, with a background in graphic and communication design. Her research uses feminist STS to frame and develop an understanding of critical and speculative design research as a means of "staying with the trouble."


Hit by the pandemic, war, and sanctions: Building resilience to face the digital divide in education

Authors: Rojin Vishkaie
Posted: Fri, April 10, 2020 - 5:15:00

Globally, education is being impacted by the effects of the COVID-19 (coronavirus) pandemic. Many countries are issuing executive orders regarding the physical closure of schools—some for the entire year—to prevent the introduction and spread of COVID-19 into their respective communities. In response to the rapidly changing educational climate, the development of distance-learning programs, in which students and instructors connect via ICT in different locations, appears to be a work in progress—even for highly digital and developed countries. As a U.S.-based secondary school teacher recently told me, “Students’ education can't stop simply because they’re impacted by the pandemic, but also online education is new for many families and schools right now, so it is going to take some work to prepare for this rapidly changing situation.”

As a result of this somewhat forced and rapid changing of the educational climate, the use of cloud-based online learning platforms, 5G technology, mixed reality, interactive apps, synchronous face-to-face video, and live radio and television broadcasts has quickly become the prevalent choice for educational delivery in the U.S., China, and Japan. Other countries in Asia and Europe use standard asynchronous online learning tools, such as reading material via Google Classroom and email. These technologies have enabled “ubiquitous learning” experiences for learners across these regions, particularly in the growing trend of decentralized homeschooling during the pandemic. 

While the U.S. and China have created and own the vast majority of the wealth in today's digital economy, the rest of the world, particularly countries impacted by sanctions and war, is trailing considerably behind. “This trajectory is likely to continue, further contributing to rising inequality,” writes UN Secretary-General António Guterres in the preface to a report by the United Nations Conference on Trade and Development (UNCTAD). “We must work to close the digital divide, where more than half the world has limited or no access to the Internet.”

The term digital divide, which describes a gap in access to and use of ICT, came into formal usage in the early 21st century [1], but its underlying issues had been studied previously in the late 20th century. The notion of “Do Artifacts Have Politics?” put forth by Langdon Winner in 1980 [2] argues that technology can embody specific forms of power and authority in socioeconomic contexts. And Rob Kling’s “social informatics” [3] (1996) looks at the socioeconomics of the technological artifacts and human social context constituted by ICT. Particularly during the unprecedented COVID-19 pandemic, school closures have magnified already-existing socioeconomic and political disparities within the education system, especially for the most vulnerable and marginalized, revealing inequities in access to resources and issues related to privilege, power, and control in certain regions of the world.

In this context, countries such as Iran, affected by economic sanctions and war in addition to experiencing one of the world’s largest outbreaks of the virus, have seen a significant impact on their education systems. One example: A teacher from the city of Hamidiyeh, in the Ahvaz province, uses the fridge at her house instead of a whiteboard to teach math, and then sends the videos to her students.

Similarly, Syria, another region impacted by war and sanctions for nearly a decade, has a large segment of the population at high risk of novel coronavirus, while also facing the immense challenges of its decimated infrastructure. 

The unfortunate side effects of continued sanctions and wars in countries such as Iran, Syria, and Sudan have slowed their ability to keep pace with the evolving classroom-based teaching approaches and the transition to online-based classrooms. Coupled with COVID-19, there is the potential for a negative impact on the educational landscape and a widening of the digital divide for these countries and others around the world. 

Considering the wide scope of the still-expanding digital divide, future trajectories can be envisioned for transforming the learning experiences of students across the world to mitigate its impact. In particular, it is important to support initiatives aimed at narrowing the gap for more vulnerable and disadvantaged countries, which are socioeconomically impacted both by the global obstacle of pandemic disease and by extreme sanctions and war. To reduce the enormous consequences of these overwhelming challenges on global education, the following goals must be accomplished: 

i. Providing inclusive, universal access   

Governments as well as nonprofit and for-profit organizations must work hand in hand to give more students across the globe universal access to digital devices and the Internet, with subsidized rates for low-income households. This also includes providing equal levels of service and networks to rural and underserved communities so that all students, regardless of socioeconomic status, can participate in remote learning. Inclusiveness also means ensuring that young girls are trained with the necessary digital skills, including the knowledge and skills they need to stay safe online. 

ii. Developing digital literacy 

Having access to computers and the Internet is a crucial necessity for education globally, but that alone is not enough. Embracing the power of digital technology for education also means training and retaining additional, more qualified staff alongside new technologies, to promote the best application of these resources. In addition, digital literacy, as a doorway to socioeconomic and political literacy, should prepare students for a digital future that is inclusive, sustainable, and collaborative. 

iii. Building resilience into education 

Building internal resilience assets such as problem-solving, self-efficacy, empathy, inclusion, and self-awareness into digital tools and systems has the potential to enhance student outcomes. Actively engaging with the material they are learning gives students an opportunity to perform better. 


1. Bates, C. Digital divide. In Global Social Issues: An Encyclopedia. C.G. Bates and J. Ciment, eds. Routledge. London, UK, 2013.

2. Winner, L. Do artifacts have politics? Daedalus 109, 1, Modern Technology: Problem or Opportunity? (Winter 1980), 121–136.

3. Kling, R. Social informatics: A new perspective on social research about information and communication technologies. Prometheus 18, 3 (2000), 245–264.

Posted in: Covid-19 on Fri, April 10, 2020 - 5:15:00

Rojin Vishkaie

Rojin Vishkaie (Ph.D.) is currently a user researcher at Microsoft Corporation, focused on mixed reality systems.

Mobile tracking and privacy in the coronavirus pandemic

Authors: Montathar Faraon
Posted: Thu, April 09, 2020 - 10:08:59

The worldwide pandemic of coronavirus disease (COVID-19) is putting pressure on and testing societies, companies, and citizens. Life, health, and jobs are at stake. In this critical moment, the HCI community is working together to design, test, and execute ideas, prototypes, and solutions with the aim to strengthen our resistance in the fight against the ongoing pandemic. HCI is contributing with abilities, expertise, and methods to design intuitive solutions that are accessible, understandable, and usable. 

The coronavirus disease has had far-reaching consequences for public health, financial markets, and everyday life. With over 1.5 million people infected and 90,000 dead so far, the disease has left few people unaffected by its impact. Technological developments in transportation over the past decades have contributed to the rapid movement of people and, in turn, to the worldwide spread of the disease. In response, researchers, designers, and developers around the world are racing to understand, design, and develop digital technology to trace and mitigate the spread.

Contact tracing, in combination with social, or more appropriately, physical distancing, has been viewed by governments as a useful asset in controlling the spread of the disease. A video produced through a collaboration between Tectonix and X-Mode Social, which utilized anonymized location data from mobile devices, showed how far the disease could spread if physical distancing is ignored. Novel ideas for contact tracing are considered essential because, unlike other diseases, COVID-19 appears to spread more easily before any symptoms become apparent.

One of the early stories concerning contact tracing came out of Singapore. The government managed to curb the spread of the disease through mobile tracking by using a smartphone app called TraceTogether. The app uses Bluetooth technology to track users’ proximity to other people and alert those who come in contact with someone who has tested positive. The government also sent daily updates to its citizens that included the current number of cases, suspected areas of outbreaks, and hygiene information to avert infection.

In Russia, the local government of Moscow rolled out a QR code system to allow residents to leave their homes and a smartphone app called Social Monitoring to track patients’ movements. Residents need to register on the government’s online portal, which requires both phone number and photo verification. The online portal can then be used to generate a new QR code every time a resident needs to leave their apartment. If stopped by the police, residents are required to show the QR code.

In the U.S., companies such as Google and Facebook are responding to the pandemic by using mobile location data to aid public health officials in understanding people’s movement and to what degree people are following physical distancing recommendations set forth by the government. Google released Community Mobility Reports, which provide information about mobility trends in places such as retail, grocery, parks, transit stations, workplaces, and residences. Likewise, Facebook shared mobile location data with academics, as part of its Disease Prevention Maps program, to better understand the success of measures aimed to decrease mobility. Similar to previous initiatives, the Massachusetts Institute of Technology (MIT) has published an app called Private Kit: Safe Paths to notify people if they have likely come in contact with a person who has tested positive for COVID-19.

Because privacy standards differ around the world, it is difficult to transfer and adopt mobile tracking technology between countries. Using the tools offered by the Singapore government would, for example, violate privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA) in the U.S. and the General Data Protection Regulation (GDPR) in Europe.

To address privacy and data-protection concerns, a mobile tracking initiative has been launched in Europe, which consists of 130 members across eight countries. The initiative is called Pan-European Privacy-Preserving Proximity Tracing, or PEPP-PT. It aims to create an app that takes advantage of Bluetooth signals to detect within certain limits if users may be close enough to infect each other (e.g., closer than 2 meters for more than 15 minutes). Unlike the invasive surveillance technology now being used by governments, the initiative adheres to European privacy and data-protection laws by encrypting and anonymizing personal information.
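PEPP-PT's example threshold ("closer than 2 meters for more than 15 minutes") amounts to a simple rule applied to a stream of Bluetooth proximity sightings. The sketch below is not PEPP-PT's actual implementation; it assumes a hypothetical `Sighting` record and distances already estimated from signal strength, and shows only the thresholding logic:

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    """One Bluetooth observation of another device (hypothetical record)."""
    other_id: str      # anonymized, rotating identifier of the other device
    timestamp: float   # seconds since some epoch
    distance_m: float  # distance estimated from signal strength (RSSI)

def risky_contacts(sightings, max_distance_m=2.0, min_duration_s=15 * 60):
    """Flag devices seen continuously within max_distance_m for at least
    min_duration_s, mirroring the 2-meter/15-minute example threshold."""
    contacts = set()
    spans = {}  # other_id -> (start of current close-range span, last seen)
    for s in sorted(sightings, key=lambda s: s.timestamp):
        if s.distance_m <= max_distance_m:
            start, _ = spans.get(s.other_id, (s.timestamp, s.timestamp))
            spans[s.other_id] = (start, s.timestamp)
            if s.timestamp - start >= min_duration_s:
                contacts.add(s.other_id)
        else:
            # The device moved out of range: reset its close-range span.
            spans.pop(s.other_id, None)
    return contacts
```

The rotating anonymized identifiers and the distance estimation are where the real privacy and accuracy work lies; this sketch only illustrates how a qualifying contact could be detected.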

Mobile-tracking apps could provide more accurate data during a pandemic and thereby help reduce transmission. They are, however, not without challenges. One challenge is persuading enough people to install them for the apps to be useful. Even if only 30 to 50 percent of a population installs a mobile-tracking app, it could have a long-term impact. Continuous contact tracing through mobile tracking, together with physical distancing, may also need to be combined with scaled-up testing and hygiene measures to avoid a resurgence of the disease. 

The global HCI community has united to work passionately on initiatives that make use of our devices to combat this pandemic. Across the world, HCI has played an essential role in a series of virtual hackathons arranged to design novel tools to address the challenges of COVID-19 (HackTheCrisis in Sweden, the MIT COVID-19 Challenge in the U.S., and the pan-EU EUvsVirus). In the end, defeating the disease requires collective efforts in order to protect ourselves and our loved ones.

Posted in: Covid-19 on Thu, April 09, 2020 - 10:08:59

Montathar Faraon

Montathar Faraon is an interaction design researcher with an interest in co-creation, participatory processes, ICTs for democracy, and concept-driven design. He is an assistant professor in the Department of Design of Kristianstad University, Sweden.

Reflections on helping research partners who support those made vulnerable during the crisis

Authors: Angelika Strohmayer
Posted: Tue, April 07, 2020 - 12:04:11

Over the past few days, I have had many conversations with friends and colleagues who work with charities, libraries, and other organizations who support some of the most vulnerable people in society. As you might well be aware, these organizations are grappling with appropriate and timely responses to the developing global COVID-19 pandemic. Many of us are thinking and talking about ways in which we, as researchers, practitioners, and people, can support our research partners in these trying times. This is while keeping in mind that staff in these services are working on overdrive at the moment; trying to keep up with ongoing changes to all of our lives while also maintaining contact with those who they support. 

Like many others, staff who support victim-survivors of domestic violence, those experiencing homelessness, families with children in care, people who are in addiction recovery, or others who may have various intersecting complex needs are working from home. Many will have had to suspend the in-person drop-in services and pause ongoing group work programs they usually facilitate, and instead are now working hard to figure out how to best use technologies to carry on with this work remotely. As technology researchers and designers, we may be uniquely placed to support organizations at a time when many are using digital technologies to carry out their work, with many adopting any such technologies for the first time. 

The research partners I have worked with are developing strategies in small teams and then sharing the learning across their wider organizations. They are making use of what's available to them: mobile phones, email, video calls, and free platforms that allow them to communicate with those they support. However, many of the people they support don't have access to reliable internet connections, and they may not have laptops, tablets, or smartphones; 7 percent of households in the U.K. have no internet access at all. Some of the people supported by these organizations are now at greater risk of experiencing violence from their partners, and others may be at risk of perpetrating more violence toward their partners as anxieties rise, compounded by the social distancing measures adopted in response to the crisis; we know that natural disasters and diseases often lead to increased numbers of domestic violence reports. Others may have no homes in which to self-isolate and are instead sleeping rough or staying in overcrowded emergency accommodation. Despite these barriers, many people are working hard to stay in touch with those most vulnerable and at greatest risk, and they're interested in finding creative ways of doing this. 

From the conversations I've had with colleagues who work in this space on a regular basis, and this may come as no surprise, I've learned that many of us want to help out. But I've also learned that we're not sure about how to best lend our support. 

We design technologies for a living, and may have spent years working with partners to do research and develop innovative tools: We have some skills to share. But at the same time, we must be conscious that even when we have good and longstanding relationships with these organizations, we are not party to their crisis meetings and strategies, nor are we knowledgeable about their existing contingency planning, and we only have partial views of their full service delivery. 

One way of helping is very simple but may be difficult to fully come to terms with: We should pause our in-person fieldwork. Many of us have already done this. Just like us researchers, our partners will be upset about the projects that have had to be paused. Especially when we work closely with partners in participatory ways, the projects are often as valuable to them as they are to us; they care about them as much as we do. At least in my case, this meant learning how to balance conversations between managing immediate crisis needs and looking toward the time after the crisis. 

Earlier this week, I talked to one of my collaborators who is in a managerial role at a national charity that supports women and children. This meeting had been organized to discuss an ongoing project before the crisis hit the U.K., but we decided to keep the time in our calendars even though the project was now on hold. As we were about to end the meeting, I repeated that she should simply send anything that needed doing my way. Unexpectedly, she said that conversations like the one we had just had were very helpful. She also thanked me for an invitation I had tentatively sent her on Twitter for a 20-minute session to find our inner calm. She had joined the facilitated meditation session that morning, giving her a less anxious start to the day. 

The video chat she had had with me and the 20 minutes of guided meditation were a starkly different kind of meeting to the other daily crisis updates, calls, and chats she has with the staff she supports. They were a relief from the anxiety that is caused by this ongoing pandemic. So maybe it is our role as researchers and designers not to “do” too much, but to acknowledge our partners' human needs and to support them in unconventional ways.

So what can we actually do to help out? We can offer our help. But when we do this, we have to acknowledge that this is an unprecedented situation and that our partners may already be overwhelmed by offers of support, or may not know how our skills could be useful. As researchers, we have to understand this and weigh up how helpful our offers of support actually are. This may also be our opportunity to pause our “research” thoughts and instead enact solidarity and help out in other suitable ways. 

We can offer up our expertise without overpromising. And we can offer to help by talking through potential digital responses to the developing needs and concerns they are facing. But simultaneously, we must not overburden staff who are already working beyond their normal workloads while assimilating to working from home, caring for their own families and loved ones, and coming to terms with the anxiety and grief due to the pandemic. 

When offering support, providing examples of what we could do to help out can be immensely useful. This might include things such as writing up and designing materials, documenting practices to share as models of good practice, or being a sounding board for staff to figure out how to work with the technologies that are available to them. Before doing this, it's important to read any public response to the crisis the partner may have put out to tailor our offers. We should be looking to reduce pressures on services and think of ways we can use our research skills and provide clear avenues for our support. 

At the same time, though, we also have to acknowledge and appreciate that our own workplaces will have certain expectations of us, and that our funders have relatively strict timelines. While some universities are supporting their staff during this transition, many seem to be expecting the impossible of their staff and students: to continue as normal. Sadly, as is so often the case in academia, the research disruptions I talk about here and that COVID-19 is causing will have the greatest effects on early career researchers, precariously employed staff, and Ph.D. students' job and career prospects. We must find ways to support those most vulnerable in our workplaces informally but also through institutional support. 

To work toward building understanding, we can build community. HCI is an interdisciplinary field, one that lends itself greatly to building community across disciplines, forms of expertise, and types of organizations. We must remember that nobody knows the “right” way of working in this crisis, because nobody has been through such a situation before. But we can still learn from previous experiences. We can ask one another for help, and we should share resources and information on things we can do to help. Some starting points might be the Rapid Response Research Toolkit, this Google Doc on how to do fieldwork during a pandemic, or this more specific document for HCI researchers.

Having conversations among ourselves as researchers to share experiences and ideas has also helped me figure out how best to get in touch with those who have shared so much time with me over our years of doing research together. Formalized formats, such as the weekly webinars NORTHLab at Northumbria University in the U.K. has been hosting, are incredibly useful for learning about all the different ways in which research can be useful in this time of uncertainty. But the much more informal conversations I have had during weekly digital tea breaks on Thursdays, or the scheduled chats I've had with colleagues, have also been immensely helpful for getting to grips with what is going on, which has ultimately helped me make decisions about how I am able to lend a hand. 

Coming back to my conversations with partners, perhaps another way of helping out might be the simple act of inviting our collaborators to social virtual hangouts. To encourage, promote, and enact compassionate and proactive solidarity; to offer a virtual cup of tea to them, just like they have offered so many physical cups of tea to ourselves and those whom they support. 

Posted in: Covid-19 on Tue, April 07, 2020 - 12:04:11

Angelika Strohmayer

Angelika Strohmayer is an interdisciplinary technology researcher, working closely with third sector organizations and other stakeholders to creatively integrate digital technologies in service delivery and advocacy work. She aims to work collaboratively on in-the-world projects that engage people at all stages of the research process to engender change toward a more just world.

Feeding the futures of human-food interaction

Authors: Marketa Dolejsova, Hilary Davis, Ferran Altarriba Bertran, Danielle Wilde
Posted: Fri, April 03, 2020 - 3:59:59

Human-food interaction (HFI) is a burgeoning research area that traverses multiple HCI disciplines and draws on diverse methods and approaches to bring focus to the interplay between humans, food, and technology. Recent years have seen an increase in technology products and services designed for human-food practices. Examples include June, an oven with integrated HD cameras and a WiFi connection to enable remote-controlled cooking; PlantJammer, an AI-based recipe recommender that suggests “surprising” vegan dishes from leftovers; HAPIfork, which vibrates and blinks when you eat too quickly; and DNAfit, a service that uses consumer genome data to suggest a personalized diet. Such technologies turn everyday food practices into data-driven events that can be tracked, quantified, and managed online. Often wrapped in techno-optimism, they propose what seem like straightforward solutions for diverse food problems: from everyday challenges with cooking, shopping, and dieting to systemic issues of malnutrition and unsustainable food production. While on the one hand offering visions of more efficient food futures, this techno-deterministic approach to human-food practices presents risks to individual consumers as well as food systems at large. Such risks include the uncertain safety of food products “created” by algorithms, the limited security of personal, often sensitive, health-related data shared via food-tech services, and the possible negative impacts of automation on social food practices and traditions. Smart recipe recommenders, for example, can suggest ingredient combinations that are surprising but unsafe to eat. Personal genomic data shared via DNA-diet personalization services might get misused by third parties such as healthcare insurance companies. Smart utensils might improve one’s diet but they can also disturb shared mealtimes. 

Every innovation brings the potential for new problems. Yet these concerns receive only peripheral attention in HFI literature. To date, HFI projects that propose to fix, speed up, ease, or otherwise make interactions with food more efficient far outweigh those reflecting upon the broader, and often challenging, social circumstances of food-tech innovation [1]. We believe the field of HFI would benefit from more critical engagement with the social, cultural, environmental, and political implications of augmenting food practices with technology. Motivated by concerns about the opportunities and challenges in food-technology innovation, we formed an HFI community network, Feeding Food Futures, to focus on this issue. Within the network, we undertake research, develop theory, and organize events and conference workshops to consider desirable future directions of the field. In this article, we focus on three selected workshops to discuss the opportunities and challenges that technology brings to the table, and propose possible responses in HFI design and research.   

HFI Workshops: A Conduit for Critical Debate

We report our observations from three HFI workshops, undertaken at CHI, Montreal; DIS, Hong Kong; and DIS, San Diego [2,3,4,5]. These workshops brought together diverse participants to reflect on HFI issues through hands-on, performative activities and group discussion. At all workshops, food was both primary theme and edible material, enabling practice-based, sensory-rich reflection. Activities involved collaborative crafting and tasting as well as foraging and speculating, supported by various food design props and kits including: Food Tarot cards to provoke future food imaginaries and the HFI Lit Review App to search and categorize the corpus of existing HFI research publications. We also brought, bought, and found various food-related boundary objects and made artifacts in-situ, including a DIY paper constructed from foraged food waste and personalized menus customized to DNA test results. Each workshop gathered participants of diverse backgrounds and used a variety of HFI methods and techniques: ethnographic research in food communities, experimental food design, food-oriented performance art, DIY food biohacking, as well as the prototyping of practical food-tech products. These diverse, at times contrasting, approaches to HFI supported polarized, friendly debates about the desired role of digital technology in food cultures and the contributions we might expect from HFI research. We highlight six debates that stood out in our workshops, and whose importance was recognized by participants. These debates highlight important issues within HFI and can help us to springboard desirable future developments of the field. 

Debate #1: Human agency vs. technological efficiency

At CHI 2018, participants introduced an AI-based recipe recommender that suggests ingredient combinations using deep learning and a recipe/image embedding technique [6]. The recommender aims to enhance human gastronomic creativity by suggesting surprising and spectacular recipes. It can suggest extravagant flavor combinations—supposedly beyond the limit of human imagination—as a provocative starting point for food experimentation. For some participants, such data-driven automation presents an opportunity for exciting culinary practices. Others suggested a fine line between enhancing and reducing, even removing, users’ creative engagement with food preparation. They suggested that using the recommender puts people on autopilot, as the algorithm becomes the chef. While gains are made in efficiency, the playful, creative, educational, and sensory elements of cooking may be lost. The debate foregrounded the need to maintain a careful balance between human agency and technological efficiency in AI-based food experiences.
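The embedding approach behind such recommenders can be illustrated generically: recipes and ingredient sets are mapped to vectors, and the system suggests the recipes whose vectors lie closest to the user's. The sketch below is not the system from [6]; it substitutes hypothetical hand-made three-dimensional vectors for learned embeddings and uses cosine similarity as the ranking measure:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def suggest(pantry_vec, recipe_vecs, k=2):
    """Rank recipes by embedding similarity to the user's ingredient vector.
    A real recommender would learn these vectors with a neural network;
    here they are toy stand-ins."""
    ranked = sorted(recipe_vecs.items(),
                    key=lambda item: cosine(pantry_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]
```

The "surprising" suggestions such systems make come from exactly this geometry: a recipe can sit close to the user's vector in the learned space even when its ingredient list looks unfamiliar.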

At DIS 2018, participants cooked pancakes in two ways: 1) using PancakeBot, a machine that prints pancakes based on drawings made in custom software; and 2) using a stove and a frypan (Figure 1). With PancakeBot, the cooks’ engagement was mediated and constrained largely to preparation: They imagined, then drew a shape on rudimentary software, positioned a plunger containing the batter, and hit a virtual button to begin the “fully automated” pancake-making process; then they monitored the performance of the robotic chef. In contrast, using the frypan required embodied, in-the-moment, improvisational engagement with food materials and cooking equipment. The cooks often overlooked the fact that their activities were mediated through tools; rather, they seemed guided by their senses. Reflecting on these differences, we determined that future digital cooking technologies should include mechanisms to enable human intervention in emergent, improvisational, and embodied ways that go beyond simply instructing a machine to prepare food.

Figure 1. PancakeBot.

Debate #2: Digital technology vs. the materiality of food

With PancakeBot, the machine’s affordances shifted the way we engaged with the organoleptic qualities of the pancakes—the taste, color, odor, texture and other qualities inherent to the materiality of food and its ability to stimulate our senses. The machine’s functionality oriented our attention toward the visual aesthetics of our anticipated pancakes, to the detriment of all else. Excited by what the technology seemed to promise, we adapted the pancake-batter recipe to achieve better flow through the machine’s extrusion mechanisms, disregarding the impact of this adjustment on taste. Further, the spectacular PancakeBot influenced our activities when cooking pancakes using the frypan: After using PancakeBot, we began to prioritize the shape of our pancakes in the pan, irrespective of the impact on taste. This shift to privilege visual aesthetics over other organoleptic qualities demonstrates a key risk of inserting digital technology into material practice. Depending on how technologies are designed, they can cause users to disregard important sensual qualities that sit at the core of material cultures. On recognizing this tendency, we determined that digital enhancements should add to, rather than substitute for, the material richness of food experiences and privilege the complex, sensual nature of food and eating over technological capabilities. 

Debate #3: New and clean vs. old and messy

At CHI 2018, one participant presented “DIY Kit for Supermarket ‘Chateaus’ of Hybrid Wines” [7], a wine-fermentation kit that enables users to make wine from table grapes as an alternative to standardized off-the-shelf wines (Figure 2). The kit provoked discussion about the contrast between “new” food technologies aiming for “clean” food practices and “old” traditional food techniques, such as wild fermentation, that support “messy” practices and experimental human-food entanglements. While we agreed that food-tech designers need to ensure the safety of their designs, we also recognized the need to support and revive messy, experimental, and playful approaches to food and wine making. We felt new food technologies should nurture consumers’ curiosity for experimentation and prompt rediscovery of traditional and diverse food knowledge, rather than merely supporting efficiency and safety through standardization. 

Figure 2. DIY wine kit.

At DIS 2018, a rich discussion ensued about how one’s grandmother’s pasta recipe is much more than a combination of ingredients—it has cultural, social, and emotional meaning. As demonstrated by Mother’s Hand Taste (Son-Mat) by Jiwon Woo—an artwork that examines how the bacteria on a person’s hands impacts the flavor of their cooking—human-food practices can also be a collaboration with bacteria. Such collaboration is expressed in commonly recognized ways (e.g., making cheese, yogurt, wine, kimchi, and other fermented foods) and in perhaps more subtle ways (e.g., recognizing the cook’s microbiota as an impactful ingredient—what Koreans call the “hand taste” of the food). At DIS 2018, we used Steven Reiss’s “16 desires” [8] as a roadmap to investigate the values that shape our food choices and experiences. We recognized that desires are complex and nuanced, whereas engagement with desires through food-tech is often simplistic. This realization exposed a critical limitation of food technologies: It is not only the material messiness of food that should be taken into consideration when designing food technologies, but also the conceptual messiness and complex idiosyncratic nature of desire that must be considered.

Debate #4: Techno-determinism vs. contextual sensitivities 

Food standardization and simplification was further unpacked at DIS 2019, through a DNA-based menu—a dinner for two—introduced as a boundary object. Gene-based personalized nutrition (PN) is a growing trend that involves diet customization using personal genotypic data. The dinner for two was based on DNA test results obtained via a commercial genetic testing service. The meals contained the same ingredients in different proportions—adjusted according to DNA predispositions. Diner 1’s DNA test suggested an increased risk for type-2 diabetes. Therefore, she had a smaller portion of potatoes and plain blueberries instead of blueberry cake for dessert, although she said she would much prefer eating the cake. Her test also indicated a higher susceptibility to alcohol addiction. Her wine glass was thus served half-full. In contrast, Diner 2 had an open wine tap, as her DNA showed low risk of alcoholism. This provocative menu initiated a debate about the determinism of PN, wherein supposedly precise biodata is given precedence over taste preferences, or social and cultural background. Participants highlighted potential negative impacts on social food traditions, asking: Will we still eat together when our food practices are personalized and mutually incompatible? Further, personal and sensitive data exposed by the menu raised questions about privacy and misuse of data by third parties, such as health insurers or pharmaceutical companies. This example highlights the need for HFI to carefully consider the impact of technological interventions on the varied contextual sensitivities embedded in food practices.

Debate #5: Human-food interactions are socioculturally situated

At CHI 2018, we mapped food-tech trends onto the Montreal foodscape using our Food Tarot tool: a card deck presenting 22 imagined diet tribes to illustrate emerging food-tech practices (Figure 3). For instance, the tribe of Petri Dishers only eat in-vitro grown meat; Datavores are Quantified Self aficionados who eat according to self-tracking data. 

Figure 3. Food Tarot cards.

Inspired by the Food Tarot, local participants matched the cards to local venues: for example, Gut Gardeners with a local fermentation workshop and Food NeoPunks with the Loop Juice shop, which makes juice from leftover fruits [9]. They visited the selected venues, foraged for paper-based food waste, and crafted a locavore version of the Food Tarot cards; they then asked the other participants to forage paper waste at CHI and craft a collective CHI food-waste card. While crafting with the foraged trash (Figure 4), we discussed the situated nature of human-food practices, highlighting that such practices cannot be interpreted from a research lab or a design studio but rather should be explored through fieldwork “in the wild.” 

Figure 4. Making a Food Tarot card from CHI paper waste.

Debate #6: Innovation and inequity

At DIS 2019, we conducted a “walkshop” to forage for local food items and dining experiences. The conference location, in the resort area of San Diego SeaWorld, provided a controversial context for this exercise: Local dining options involved expensive hotel restaurants or fast-food chains and pizza parlors. While eating pizza and sipping soda from gigantic plastic cups, we talked about unequal socioeconomic access to “good” (healthy, sustainable) food products and food technologies designed to support “good” food practices (Figure 5). We acknowledged our privilege, having “the luxury of choice” in this extreme food landscape. Many food-tech products are available only to people from privileged socioeconomic backgrounds. Non-access due to income, education, or digital illiteracy can marginalize individuals and social groups. Food-tech innovation thus risks extending existing, and creating new, socioeconomic inequalities.

Figure 5. HFI walkshop in San Diego.

Thoughts on the Future of HFI Design and Research

During the DIS 2019 walkshop, participants documented their experiences through written notes, sketches, photos, and found or bought food items, and we collaboratively crafted an HFI Zine (Figure 6). The zine has been extended by the workshop organizers to serve as a condensed summary of discussions and debates drawn from all three workshops. Not a manifesto or a fixed set of guidelines but rather a humble set of ideas, the zine offers suggestions on what HFI design and research could do in the future. For example:

To leverage and support food as a creative, material, equitable, and situated sociocultural practice, HFI should:

  • Open spaces for experimentation and learning, rather than deliver quick-fix solutions catering to consumers’ convenience
  • Support broad socioeconomic access and help reduce inequalities in the global food market, rather than expanding them
  • Take place “in the wild” to reflect contextual sensitivities of local food cultures and practices
  • Reflect the socioculturally diverse, complex, sensory-rich nature of human-food practices, rather than prioritizing standardization by default
  • Be culturally diverse
  • Complement existing food traditions, rather than replace them
  • Keep consumers’ active and creative engagements central to food-tech practices
  • Be mindful of consumer privacy
  • Be concerned with local human-food-tech interactions, as well as systemic, larger-scale implications of food-tech innovation

Figure 6. HFI Zine.

Feeding Food Futures: Community and Critical Discussion

The workshops discussed here illustrate how new food technologies can have ambivalent impacts on food cultures. Food-tech products and services designed to improve practices and solve problems often create new problems of their own. This is not to say that we should strive for technology-free food futures. Food-tech innovation can make important and necessary contributions to food culture; technology can be incorporated meaningfully into existing social practices. With environmental uncertainty and public health crises, food practices need to change and technological innovation can facilitate the needed changes. However, we must be mindful of the challenges, and carefully consider the sociocultural impacts that food-tech innovation may have on food-related practices.

The debates we highlight here reflect important issues that we believe HFI researchers should consider as food-tech innovation moves forward. We recognize continuity in the conversations we initiated within the workshops, and acknowledge the importance of increasing the diversity of stakeholders at the table. To that end, we will continue to promote events where such discussions can take place. Our long-term aim is to nurture a multidisciplinary HFI community that considers perspectives of food-oriented researchers, designers, practitioners, and (human and nonhuman) eaters of diverse backgrounds. With these goals in mind, we initiated the Feeding Food Futures (FFF) network to support critical, experimental, in-the-wild HFI research and discussion. Our intention is for FFF to encompass a broad spectrum of HFI-focused initiatives, such as community workshops, an HFI summer school, and further co-authored publications. Critically, we do not see ourselves as having a monopoly on FFF. The success of FFF hinges on involving diverse scholars and practitioners. We will only feed the future of food in sustainable ways if we encompass a broad set of perspectives. We thus extend an invitation to interested others to join us on this journey. We look forward to future collaborations to critically reflect on the future of human-food-technology ecosystems and to collectively shape a vibrant, rich, and diverse foundation for HFI.


We thank our workshop co-authors and participants, past, present, and future, who are key to shaping FFF and HFI activities and discussions.


1. Altarriba Bertran, F.*, Wilde, D.*, Segura, E.M., Pañella, O.G., León, L.B., Duval, J., and Isbister, K. Chasing play potentials in food culture to inspire technology design. Proc. of the 2019 Annual Symposium on Computer-Human Interaction in Play. (*joint first authorship)

2. Altarriba Bertran, F.*, Jhaveri, S., Lutz, R., Isbister, K., and Wilde, D. Making sense of human-food interaction. Proc. of the 2019 CHI Conference on Human Factors in Computing Systems. (*joint first authorship)

3. Dolejšová, M., Khot, R.A., Davis, H., Ferdous, H.S., and Quitmeyer, A. Designing recipes for digital food futures. Extended Abstracts of the 2018 Conference on Human Factors in Computing Systems.

4. Dolejšová, M., Altarriba Bertran, F., Wilde, D., and Davis, H. Crafting and tasting issues in everyday human-food interactions. 2019 ACM Conference Companion Publication on Designing Interactive Systems. 

5. Vannucci, E., Altarriba Bertran, F., Marshall, J., and Wilde, D. Handmaking food ideals: Crafting the design of future food-related technologies. 2018 ACM Conference Companion Publication on Designing Interactive Systems. 

6. Biswas, A., Mawhorter, P., Ofli, F., Marin, J., Weber, I., and Torralba, A. Human-recipe interaction via learned semantic embeddings. Presented at the 2018 CHI Conference Workshop on Designing Recipes for Digital Food Futures.

7. Špačková, I. DIY kit for supermarket “chateaus” of hybrid wines. Presented at the 2018 CHI Conference Workshop on Designing Recipes for Digital Food Futures.

8. Reiss, S. Multifaceted nature of intrinsic motivation: The theory of 16 basic desires. Review of General Psychology 8, 3 (2004), 179–193.

9. Doonan, N., Tudge, P., and Szanto, D. Digital food cards: YUL forage. Presented at the 2018 CHI Conference Workshop on Designing Recipes for Digital Food Futures.

Posted in: on Fri, April 03, 2020 - 3:59:59

Marketa Dolejsova

Markéta Dolejšová is a design researcher with a background in HCI and food studies. Her work combines experimental and participatory methods to investigate the role of digital technology in food practices and cultures. She is currently a research fellow in New Media Studies at Charles University in Prague.
View All Marketa Dolejsova's Posts

Hilary Davis

Hilary Davis is a senior research fellow in HCI based in Melbourne, Australia. Her work investigates the role digital technology plays in people’s work, social activities, and home lives. She is interested in digital storytelling, digital cookbooks, and how digital technologies can positively impact familial relationships at mealtimes.
View All Hilary Davis's Posts

Ferran Altarriba Bertran

Ferran Altarriba Bertran is a playful interaction designer and researcher, and currently a Ph.D. candidate at the University of California Santa Cruz. His research explores how technology could afford increasingly playful ways of engaging in mundane activities, with a focus on food practices.
View All Ferran Altarriba Bertran's Posts

Danielle Wilde

Danielle Wilde is associate professor of embodied design at the University of Southern Denmark. She leads critical participatory design research in food and climate futures and social and ecological sustainability, engaging with wicked problems that cut across disciplines and cultures to rethink practices, policies, and technologies through a bottom-up approach.
View All Danielle Wilde's Posts


HCI and interaction design versus Covid-19

Authors: Peter Dalsgaard
Posted: Fri, April 03, 2020 - 10:07:36

The Covid-19 pandemic has spread across the globe at unprecedented speed, with massive consequences. Our lives and societies have been suddenly transformed, and many have a sense that when the first wave of the pandemic has passed, many things will be different.

As my country was going into lockdown in an effort to limit the spread of the virus, I initiated a shared document, inviting researchers and practitioners in the HCI and interaction design community to propose how we might use our resources and competences to contribute, both in the immediate crisis and in the phases that follow. Within a matter of hours, peers from all parts of the globe were contributing constructive and critical proposals and perspectives. I was asked to reflect on how these proposals offer perspectives on what we can do and make right now, and how the crisis may prompt or force us to break with how we have done things in the past. Broadly speaking, the proposals fall into three categories:

  • What can we do right now to help? The immediate question that pops to mind for many people is whether we can do something right now to push the needle in the right direction. The most-discussed topic in the document is fabrication, for example, 3D printing protective gear, medical components, and even makeshift ventilators at a time when these supplies are in massive demand. Moreover, contributors offer ways in which current and novel initiatives can ensure crisis coordination, support communication and combat misinformation, mitigate the social and psychological isolation that results from countrywide quarantines, and help create interfaces and infrastructures for tracing and limiting the spread of the virus.

  • How can we contribute to shaping the "new normal" in the wake of the pandemic? While the situation is dire in many countries, this, too, shall pass, and societies will reopen. But what will the new normal look like, and how can we help shape it? Contributors to the document point to a range of ways. Some deal with the particular toolsets and mindsets we have for imagining and designing for the near future, exemplified in methods such as scenarios, design fictions, and participatory design involving the people with whom we will share this future world. Other proposals are more concrete, such as supporting and developing new forms of remote work and remote social communication, reshaping our cities through urban infrastructures and smart city initiatives, and, in a more direct response to the threat of epidemics, increasing our efforts to develop better systems for science labs and sharing scientific findings.

  • How are our lives changing as a result of the crisis? A third set of contributions revolve around understanding and reflecting on how the crisis impacts our lives and societies. These span from studying how existing technologies are employed, hacked, and reappropriated to cope with the suddenly different circumstances under which we live and work, to developing art and culture for the crisis, both in terms of rethinking how art can be shared without physical co-presence and how it can help us reflect upon the crisis. This set also includes broader studies of what appears to be a radical shift in our world, such as ethnographic studies of the shift from the pre-pandemic to the post-pandemic world, and inquiries into the wider politics of technoscience in the pandemic and its aftermath. These prompt us to consider, among other things, how technology is intertwined with politics and power relations. We are quick to jump to technological solutions, but ought to also consider if and when technology is the answer, or if it might even exacerbate crises.

Reflecting on the discussions that unfold in and around the multitude of proposals, it is evident that there is in our community an urgent desire to help change things for the better. It is also clear that we should uphold our traditions and fora for balancing constructive efforts with critical considerations of what "better" means, to whom and in which context.

Posted in: Covid-19 on Fri, April 03, 2020 - 10:07:36

Peter Dalsgaard

Peter Dalsgaard is a professor of interaction design at Aarhus University and director of the Center for Digital Creativity. His work explores the design and use of digital systems from a humanistic perspective, with a focus on collaborative design and creativity.
View All Peter Dalsgaard's Posts


Drawn together – the role of sketching in product design

Authors: Jess Phoa
Posted: Wed, December 11, 2019 - 3:28:51

I’ve loved expressing myself through visual means for as long as I can remember. When I was a kid, I created art with reckless abandon. I didn’t care what others thought about what I made—I drew simply because I wanted to. Never would my younger self have imagined that my love for drawing would eventually help me with my future career.

I’ve found that sketching—when put to use in the workplace—can be an integral part of the design process, grounding an entire team in the user problems they aim to solve. 

I’ve seen this process succeed firsthand in my work at Chartbeat, a technology company that provides data and analytics to global publishers. But the takeaways can be applied to any workplace or academic setting where people collaborate to solve user problems.

Creating a memorable end user via sketching 

Shortly after I joined Chartbeat, I was assigned to the admin tools product team consisting of myself along with several front-end and back-end engineers and a product manager. We were tasked with making our product’s user management workflow scale better to accommodate organizations of all sizes. At the time, admins’ workflows for managing new and current users were manual and time-consuming, and certain processes even required assistance from our in-house support team. 

Our goal was to reduce friction in the onboarding experience for our end users and free up time spent by our internal support teams.

Christina and Hugo, our imaginary users.

I wanted to make sure we remained mindful of the users we were serving, so I drew up Christina and Hugo, fictional, prototypical users. Christina and Hugo have since become key members of our team and are often referenced in product meetings and discussions.

Building empathy for user problems

To help better understand the problems at hand that real users encountered, I also got the whole team involved in drawing. 

I use the term “drawing” loosely here because “knowing how to draw” can sound intimidating. The goal for these sketching sessions was not to produce beautiful illustrations, but rather to build empathy for the people we were designing for. Sketching is a hands-on, approachable way to ground a team in a problem, let everyone contribute to the ideation process, and take a break from our screens.

Often, our sketches are more rudimentary—focused on expressing the emotions and mentality of users that we are serving. 

For this session, I provided participants with paper templates containing blank squares and gave them five minutes to draw a comic about the then-current state of the user management workflow. What I love about this exercise is how the open-endedness of the prompt lends itself to various interpretations and thus different perspectives.

Drawings created by my coworkers for a participatory sketch exercise I facilitated.

These sorts of sketches prompt team members to put themselves in the end user’s shoes and evaluate the larger context of where and how these problems emerge.

Generating ideas from the whole team

Sometimes, our sketching sessions were focused on ideas for how our user experience could solve those problems.

No matter their background, every member of our team could take a few minutes to sketch an idea for a potential user experience. And each sketch brought a valuable idea to the table. In fact, our team’s front-end engineer drew the first sketch that eventually became our revamped user interface for granting user permissions within our product suite. 

“Sketching is an invaluable part of how we work through problems as a team,” says Kris Harbold, a product manager. “By having everyone, including engineers, put their ideas on paper, we open up space for conversations on not just how to best address our users’ problems, but also how they might feel about the experience and potential trade-offs. It encourages team alignment and gives people a sense of ownership of the final release.”

Permissions provisioning sketch by my teammate created during a group brainstorm.

Of course, sketches can only take a team so far. Once we had a sketch that seemed promising, I wanted to collect user feedback, so I collaborated with the same front-end engineer on a rough, clickable prototype. I conducted rapid internal user testing with a handful of my colleagues, including those who regularly fielded questions from clients around user management.

Their feedback, along with beta feedback from users and eventual results from our general release, validated that our updated user permissions provisioning workflow was a drastic improvement. The real-life versions of our fictitious admins Christina and Hugo could handle more tasks on their own, and with less effort.

Screenshot of the redesigned user permissions widget.

Sketching has undoubtedly become one of the most accessible design research methods in my toolkit, and I’ve seen firsthand how it can energize a team around a user problem. Next time you find yourself starting a project and you’re not sure where to start, see where drawing takes you! 

Further reading

Learn how to run a sketch session
The potential of participatory design
How to run quick usability tests

Posted in: on Wed, December 11, 2019 - 3:28:51

Jess Phoa

Jess Phoa is a product designer at Chartbeat in New York, NY.
View All Jess Phoa's Posts


Sitting with Hakken

Authors: Gopinaath Kannabiran
Posted: Tue, December 03, 2019 - 2:02:38

The last time I met David Hakken was at his office. We both knew it would be the last time we would meet. He was very calm. It felt very comforting sitting with him in his office. He told me he did not know how long he had to live, and that the next course of treatment for his cancer was going to be very aggressive.

Me: Are you afraid David?
David: Of dying? Yes. Do you believe in life after death?
Me: No. But I believe in love. 
David: *smiles* People have been very kind to me. It feels nice to know that you are loved.
Me: David... You have touched so many lives. And inspired so many. You are sitting here laughing and making jokes when you're waiting for death. I am not sure what else one can ask of a human being.
David: *smiles* I am not sure either.
Me: David... I am so grateful to have met you in my life. Thank you for being my teacher. I love you David. I want a good long hug from you.
[We both got up and hugged each other for a few minutes.]
Me: I love you David.
David: I love you too Gopi.

David Hakken was many things to many people—ethnographer, educator, advisor, critic, storyteller, activist, debate-stirrer, idea-nourisher, mentor, writer, human rights advocate, visionary leader, pioneering researcher, inspiring colleague, caring academic, snazzy dresser, and reliable friend, to name a few. I was fortunate to take classes with him and work with him on my doctoral dissertation research. David passed away on May 3, 2016, from cancer. I did not have anything to say at David’s memorial, but it was nice to see all the love he had in his life and hear people’s memories of him. It was a sad and beautiful event to witness the mourning and meaning of a life well lived.

David made several important contributions to research and teaching in STS and informatics. His legacy is not just his towering intellectual contributions but also, and more importantly, the tribe he helped nourish. As a graduate student, I was enchanted by David in class. He spoke with insight but he also listened with curiosity. When I get engrossed in a subject, I sometimes overstep or forget social conventions such as taking turns to talk. David and I got into a debate during class once and I kept arguing with him. During class break, I realized that I had spoken for too long and felt that I might have come across as aggressive toward David. I approached him after class and apologized for my behavior. He responded: “I am a little bit more secure in myself than that, Gopi.” I had a newfound respect for him that day and remember thinking to myself: I want to be like David someday. 

A photo of me holding my job offer as ‘Visiting Lecturer’ in front of David’s office. 
You can see some of the notes students and colleagues left on his door.

When I decided to do my Ph.D., I braced myself mentally for the hard intellectual work ahead. But I was unprepared for the amount of emotional work that doing research and being a researcher demands. As good researchers, we are required to question our own assumptions and understanding of the world. They say rejection is a part of the game. And even though we rationally know not to take rejections personally, it can still hurt because we are human. We may feel discouraged when projects close to our heart face setbacks. Research requires emotional work, on both a personal and social level; it is not just an intellectual activity completely devoid of personal investment. During my Ph.D., I have had several conversations with students and researchers from different countries about their work and found a common theme: cynicism.

Bluntly put, if everything is bad, why try anyway? I used to feel a debilitating dread of being stuck—aware of the problems around me and unable to do anything about them because… what is the point anyway? Kathleen Norris talks about acedia as an “absence of care [when] life becomes too challenging and engagement with others too demanding.” Too burnt out to care; too exhausted to hope. During my Ph.D. studies, I saw a few brilliant, passionate, and kind people, of different ages and backgrounds, either personally unravel (mental health issues, substance abuse, relationship problems, etc.) or leave academia to avoid doing so. I got cynical. And I hated being cynical. David was an experienced researcher who tackled social justice issues throughout his academic career. I never heard him be cynical. Every time I feel like quitting because I think something is impossible or pointless, I remember David.

Walter Lippmann wrote: “The final test of a leader is that he leaves behind him in other[s] the conviction and the will to carry on.” What I learned from David is not merely research skills but life wisdom that I will continue practicing for the rest of my life. It makes a world of difference to see a life well lived on a day-to-day basis in conversations over tea, inside jokes, and shared silences. There are lots of horror stories about abuse of power and toxic attitudes in academia. As a feminist, I am very glad that such incidents are discussed in public so that we can collectively address the issues that plague the well-being of our research communities. At the same time, I feel compelled to document and share the life-affirming experiences that happen in academia and their impact on research knowledge production in service of social justice issues.

Behind the h-indices and awards, there are lovely human beings who have inspired others to live their best lives and dedicated themselves in service to the betterment of life for all. And it is their stories that I want to bring to light and share through this blog series, titled “Sitting with…” My goal is to document and share with a wider audience some of the personal, life-affirming experiences my guests have had in their full careers as researchers. Since I am asking others to share something very personal, it felt fair that I start the blog series by sharing my personal relationship with David, or as I call him, “my Gandalf.”

It feels befitting to memorialize David by creating a blog series focused on telling people’s life-affirming stories. I miss him. And that makes me want to live by what I learned from David. That is the only way I know how to keep him alive in my universe. I flail and fail but I refuse to give up because I have had teachers like David. He listened in a way that made the other person feel heard. He showed up for events to support students and colleagues, come rain or shine. These are life lessons that I learned from David, not by preaching dogma but through example. 

For this blog series, I would like to interview senior researchers from diverse backgrounds who have either retired or are close to retirement. By diverse backgrounds, I mean people across disciplines, with varied work experiences in academia, industry, NGOs, and so on, from various countries. I would like to hear, document, and share significant personal growth moments and challenges faced in the course of their careers as researchers with diverse backgrounds. If you are a senior researcher (retired or close to retirement) and willing to share your experiences through this blog series, I would like to interview you about some of the life-affirming personal growth moments in the span of your career. Kindly contact me through email.

Posted in: on Tue, December 03, 2019 - 2:02:38

Gopinaath Kannabiran

Gopinaath Kannabiran is currently working as a postdoctoral researcher at the Department of Computer Science, Aarhus University, Denmark.
View All Gopinaath Kannabiran's Posts


Reflections on mental health assessment and ethics for machine learning applications

Authors: Anja Thieme, Danielle Belgrave, Akane Sano, Gavin Doherty
Posted: Fri, November 01, 2019 - 11:33:18

As part of the ACII 2019 conference in Cambridge (U.K.), we ran a workshop on “Machine Learning for Affective Disorders” (ML4AD). The workshop was well attended and had an extensive program, from an opening keynote by UC Irvine assistant professor of psychological science Stephen Schueller, to presentations by authors of accepted workshop papers, to invited talks by established researchers in the field. Topics and application areas included: detection of depression from body movements; online suicide-risk prediction on Reddit; various approaches to assist stress recognition; a study of an impulse suppression task to help detect people suffering from ADHD; and strategies for generating better “well-being features” for end-to-end prediction of future well-being.

Keynote by Stephen Schueller.

Discussions at the workshop touched on many common ML challenges regarding data processing, feature extraction, and the need for interpretable systems. Most conversations, however, centered on: 1) difficulties surrounding mental health assessment, and 2) ethical issues when developing or deploying ML applications. Here, we want to share a synthesis of these conversations and current questions that were raised by researchers working in this area.

Mental Health Assessment
Workshop attendees described a range of assessment challenges, including data labeling and establishing “ground truth,” definitions of mental health targets, and what measures were considered “safe” to administer to study participants or to people who are perhaps self-managing their condition in everyday life. Two areas of debate received particular attention:

What healthcare need(s) to target and how to conceptualize mental health states or symptoms. In the types of ML tools or applications that are being developed, we noticed a predominant focus on the detection and diagnosis of mental health symptoms or states. This may partly be explained by the availability of data and clinically validated tools in this space, which inform how research targets are shaped. Currently, much of the existing ML work tries to match the data that is available about a person to a diagnosis category (e.g., depression). Here, attendees mentioned concerns that looking at a mental illness, like depression, as one broad category may not take into account the variability of depression symptoms and how the illness manifests, and could mean building models for monitoring depression that are less useful as a result. Further, they raised the question of whether mapping a person to a “relevant treatment” might present a more important ML task than diagnosis.

Related to this discussion, attendees raised some other key questions:

  • What is the health/medical problem that we are trying to address? Are we asking the right questions?
  • How do we ensure that the (often complex) models/solutions we develop in computing science really meet a clinical need? What are the “right” use cases for ML?
  • How can we define/select/develop good quality measures?

The value of objective vs. subjective assessments. Do they need to compete with each other? Excitement about passively and continuously captured data about people’s behaviors through sensors or content created online has shaped perceptions of ML approaches as providing “more objective” insights, especially when compared to other “more subjective” methods such as self-reports. It was pointed out that we cannot strictly define what is subjective or objective. Thus, instead of looking at these approaches in competition, perhaps a more promising route would be to look at interesting relations that surface through the combination of different data methods, and what each may say about the person. Rather than looking at ML insights, clinical expertise, and traditional health assessment tools in competition, how can they complement each other? This leads us to ask how ML outputs can serve as a useful information resource to assist, and help empower, clinicians. When discussing examples such as mobile phone-based schizophrenia monitoring, it was apparent that providing clinicians with a wealth of automatically collected patient data was likely to be overwhelming and of little use unless the data was presented in ways that provided meaningful insights to clinicians, and effectively complemented their work practices.

Thus, key questions included:

  • How can we empower clinicians through data tools?
  • How can we help clinicians to appropriately trust data and related generated insights?
  • How can the results of ML help make concrete actions/interventions for clinicians/patients?

Workshop participants discussing assessment challenges.

Ethical Challenges
Inevitably, when discussing the role of ML and possibilities of ML-enabled interventions for use as part of real-world mental health services, our conversations turned to ethical issues, specifically the following two themes:

(How) should we communicate ML-detected/diagnosed mental health disorders or risks? A key conversation topic was if, and how, we should communicate to people that an ML application has diagnosed them with a mental health disorder or detected a risk. This was a particular concern in contexts where people are perhaps unaware of a mental health problem and of the processing of their data (e.g., from social media) for diagnostic purposes. On the one hand, being able to detect problems (early) can help raise awareness, validate the person’s experience, encourage help-seeking, and allow for better management of a condition. On the other hand, people may not want to be “screened” or “diagnosed” with a psychiatric condition due to the associated stigma and its implications for their personal or work life. For example, a diagnosis of a mental disorder can have severe consequences for professionals in the police force or firefighting. Thus, how do we balance both people’s “right to be left alone” and “right to be helped”?

Related questions were:

  • How do we sensibly communicate the detection/diagnosis of a mental health problem or disorder?
  • Should only passive data be collected and used for self-reflection and self-care of the person?
  • How do we show risk factors to people in ways that are actionable (e.g., a diagnosis alone may not be helpful unless the person knows what they can do about it)?
  • What kinds of interventions should not be developed or tested with people in the wild?

What are the broader implications of ML interventions and how can we reduce risks of misuse? It is hard to predict what unanticipated consequences a new ML intervention might have on a person, their life, or society at large. This is partly due to the way we tend to study well-defined problems whose solutions may not transfer to contexts outside of those for which they’ve been designed or trained. For example, in the context of developing an emotion recognizer based on a person’s facial expressions, we discussed what the implications might be if someone repurposed this technology, for example, to identify children who are not working enough at school, or employees who appear less productive at work. Additional ethical concerns included difficulties in preventing the (mis)use of developed tools with low clinical accuracy in clinical practice, and challenges related to user consent and data control.

Key questions included:

  • How do we responsibly design and develop ML systems?
  • How can we help reduce the risk of misuse for the technologies we develop? 
  • How do we rethink consent processes and support user control over their data?

We thank all organizers, keynote and invited speakers, paper authors, and attendees for their invaluable contributions to the workshop.

Posted in: on Fri, November 01, 2019 - 11:33:18

Anja Thieme

Anja Thieme is a senior researcher in the Healthcare Intelligence group at Microsoft Research, designing and studying mental health technologies.
View All Anja Thieme's Posts

Danielle Belgrave

Danielle Belgrave is a principal researcher in the Healthcare Intelligence group at Microsoft Research. Her research focuses on ML for healthcare.
View All Danielle Belgrave's Posts

Akane Sano

Akane Sano is an assistant professor at Rice University, developing technologies for detecting, predicting, and supporting mental health.
View All Akane Sano's Posts

Gavin Doherty

Gavin Doherty is an associate professor at Trinity College Dublin, and co-founder of SilverCloud Health, developing engaging mental health technology.
View All Gavin Doherty's Posts


Synthesizing family perspectives on health: Using fun activities to stimulate health conversations

Authors: Jomara Sandbulte, Jordan Beck, Janice Whitaker, John Carroll
Posted: Mon, October 07, 2019 - 3:02:05

Many families have difficulty carrying out healthy behavior practices in their household. Since conversations about health and well-being are infrequent within families, it becomes more challenging to cultivate those practices [1]. The Center for Human-Computer Interaction at Penn State University (PSU) sees this as an opportunity for experience design and is looking into how and what family members come to know about each other’s health [1], including ways they can coordinate, incentivize, and follow through in health management [2]. These ongoing efforts have created opportunities for collaboration on community-wide interventions, such as the Intergenerational Friends Fair, a day-long, family-friendly event promoted by the Intergenerational Leadership Institute at PSU. This event aimed to bring together participants of all ages for activities and interactive exhibits, to expand opportunities for intergenerational communication and learning. Together with the Center of Geriatric Nursing Excellence at PSU, we prepared an exhibit for this event. In this blog post, we reflect on our experience at this event and outline two lessons learned for researchers and designers working on family-centered projects.

Since families may struggle to find time to sit together and talk about certain topics, we decided to ask family members to sit down and talk about their current healthy living practices. However, we wanted these conversations to be organic rather than forced, so we created a space that might connote the closeness and intimacy of home. Additionally, we provided materials (e.g., Health Bingo) as chat triggers to encourage free talking and the sharing of thoughts on healthy living practices. First, we asked participants to craft ideas in response to the question “What does healthy living mean to you?” in terms of what matters to them and their family, ways to promote positive health, and ways to be active. We provided colored markers and pens and A1-size white cardboard paper. Participants were encouraged to express their ideas creatively using these materials. Second, we encouraged family members to play Health Bingo—adapted from Bridges Together to include options such as drinking water, flossing, and meditation. We briefly explained the rules of the game: Each participant should circle the health activities and behaviors they engage in, compare their answers with their family members, and finally mark the ones they have in common. As they compared answers, we talked about which activities they might like to start and which they might like to do more often.

Many intergenerational families came by our exhibit and engaged in our activities. Kids loved drawing posters. They grabbed markers and got right to work drawing their ideas about healthy living. Parents were satisfied seeing kids’ ideas about health and seemed motivated to reinforce healthy behaviors. We were intrigued that children frequently drew or talked about healthy eating when prompted to reflect on healthy living. For example, one child said he loved eating lemons. His mother then reinforced the importance of healthy eating and mentioned other food options, including fruits and vegetables, that could be considered healthy, as shown below. They made creative suggestions of images and words that the child could incorporate into the poster. In the end, the poster had drawings of superheroes fighting bad eating habits (e.g., too much sugar) and giving healthy gifts (lemons) to kids if they were good. As a team, we felt that this family’s and others’ participation reinforced how a simple and direct activity could promote valuable conversation about healthy living between family members. Throughout the day, several families were able to identify healthy living practices and promote healthy behaviors by creating posters.

Mother and child working together on their poster, using materials offered
during our event.

Health Bingo also encouraged family members to talk about their healthy living habits. For example, while an eight-year-old boy and his grandmother were playing, the grandmother watched with a big smile while the boy circled his answers (below). She asked several questions to get him to share more details. “What kind of healthy food do you like to eat?” she asked. The boy responded, “Broccoli! They’re like little trees!” We were surprised to see that he circled almost all the activities on the bingo card. When he was almost done, he noticed the only healthy habit left was flossing and said, “I need to improve my flossing!” We all smiled, and his grandmother took the opportunity to reaffirm the importance of flossing every day. We all felt pleased that he arrived at this insight on his own, with minimal guidance from us.

Grandmother and grandson playing Health Bingo and engaging in fun
conversations about healthy living.

We want to emphasize two key takeaways from our experience:

Use fun, creative activities as conversation stimuli. From previous work and this event experience, we learned that families have a latent interest in discussing health-related topics and, once stimulated to do so, their discussions can be interesting and beneficial. How can we encourage more conversations like these? Our experience suggests that giving individuals time and space, along with fun, creative ways to frame conversations, produced dialogues about healthy behavior that were engaging and helpful to everyone involved. We watched a mom and child illustrate a story about superheroes fighting bad eating habits and witnessed a young boy realize that he needs to improve his flossing habit, a realization that pleased his grandmother, who took the opportunity to reaffirm the importance of flossing. Given the increased interest in health informatics and family-centered design, there is a terrific opportunity for researchers and designers to create fun, creative artifacts and systems that promote conversations like the ones we observed. The activities we chose, poster creation and Health Bingo, allowed for fluid movement between individual and collaborative activities. Drawing could be an individual activity, periodically paused so that participants could discuss their drawings with one another. Alternatively, family members could draw together if they chose to do so—like the mother and child who ended up illustrating a superhero fighting bad eating habits.

Synthesize different family members’ perspectives when designing. During the event, we talked with people in different stages of life about healthy living. We learned that from a child’s perspective, eating habits are an important aspect of health; eating vegetables and fruits was often mentioned as a healthy living practice. Considering the views of older individuals, such as parents and grandparents, we observed that they are interested in reinforcing healthy behaviors in younger generations, but it seemed they also wanted to expand healthy living practices within their families. During the conversations, parents and grandparents would often mention physical and mental activities as part of health, which may involve different activities such as participating in a family community event. How can we incorporate all these different perspectives when designing for families? Our experience suggests that having positive conversations around each individual’s view of healthy living is useful for identifying differences and similarities in family members’ perspectives. We see here an interesting opportunity to innovate in how a family member presents his or her point of view on healthy living to the family; as each member shares, it is important to synthesize that information to effectively promote collaboration on healthy living within families. Making use of artifacts (e.g., a bingo game) to encourage conversations was one valuable approach toward fostering family collaboration in health.

Posted in: on Mon, October 07, 2019 - 3:02:05

Jomara Sandbulte

Jomara Sandbulte is a Ph.D. student at Pennsylvania State University’s College of Information Sciences and Technology. Her research interests include health informatics and design research. jmb89@psu.edu
View All Jomara Sandbulte's Posts

Jordan Beck

Jordan Beck is an assistant professor of user experience design at The Milwaukee School of Engineering. His research interests include scholarly communication and the tools that shape and facilitate it.
View All Jordan Beck's Posts

Janice Whitaker

Janice Whitaker is the Administrator and Community Liaison for the Center of Geriatric Nursing Excellence, at Pennsylvania State University’s College of Nursing.
View All Janice Whitaker's Posts

John Carroll

John M. Carroll is a distinguished professor at Pennsylvania State University’s College of Information Sciences and Technology. His research interests include methods and theory for human-centered design.
View All John Carroll's Posts

Post Comment

No Comments Found

Finsterworlds: Bringing light into technological forests through user empowerment

Authors: Johannes Schöning
Posted: Tue, August 13, 2019 - 2:12:03

I am really fascinated by novel technology. This fascination began back in kindergarten, but it was when I studied computer science that I read Mark Weiser’s seminal paper “The Computer for the 21st Century.” That paper awoke my fascination for HCI and Ubicomp, so I decided, like many other fellow researchers, to work in this field. Even though Weiser’s paper was far ahead of its time, and many of his predictions and visions came true, which I admire, I was disappointed that the last sentence never actually turned from vision to reality: “Machines that fit the human environment, instead of forcing humans to enter theirs, will make using a computer as refreshing as taking a walk in the woods.” Today, technologies are often still developed in such a way that users are not empowered, but instead are subject to a restricted user experience that is limited to a single platform. By empowered I mean, “in its strongest sense, that the users of the technology are empowered to solve their own (accessibility) problems.”

I am a passionate hiker and love to hike through the woods, but I personally believe that it is getting harder and harder to ignore the negative impacts of many novel technologies: Machines are entering our lives—not vice versa—and unfortunately most are not “as refreshing as taking a walk in the woods.” While they shed light on some things, they also can make us blind to the important things in life. We see more and more technology that creates “dark forests,” or “Finsterworlds” (by the way a great movie), that restrict users in their possibilities and force them to stick to closed platforms.

I believe there are two key challenges in bringing light to Finsterworlds—our technological forests of today, which are often as dark as the Black Forest. First, I think that we need a broad understanding of computer technologies. Therefore, we need a solid computer science education from early on. We highlighted this recently in our article on “education in a digitized world.” Second, we build too many “dumb smart” technologies. We plant too many “bad trees.” On a daily basis I see new dumb smart technology presented at conferences and tech shows. The solutions look cool, but solve problems that do not exist. Those technologies are often developed in such a way that users are not empowered, but instead are subject to a restricted user experience that is limited to a single platform. The business models around those technologies are designed to collect as much personal data as possible. The “sweet technological porridge,” a reference to the fairy tale by the Brothers Grimm, of the Silicon Valley giants has made us “full and lazy” and prevents the critical and creative use of novel technology in a way that empowers users. For sure, most developers have good intentions when designing those technologies, as they want to:

  • Improve efficiency and ease of use
  • Reduce human error
  • Make the user experience more pleasant
  • Help users make more informed and better choices.

But in my opinion, many of these developments are making people lazy, as they don’t have to think so much for themselves or remember what to do. That said, we also see in contrast a recent trend toward developing “happy apps” and devices for mindfulness, sports, and brain exercises. I have been discussing this a lot with my colleagues. Diana Beirl, Nicola Yuill, and Yvonne Rogers nicely captured this trend in their recent CSCL paper on Amazon’s Alexa. But those happy apps, those little flowers, cannot grow if they do not have light. 

We as computer scientists also need to remove our rose-colored glasses and stop looking at all these technologies through overly optimistic eyes. With the ACM Future of Computing Academy (FCA), we recently published a proposal arguing that the computing research community needs to confront much more seriously the negative impacts of our innovations. To ensure that this more serious reckoning occurs, our proposal argued for incremental changes to incentive structures in computing research, focusing on how we evaluate the quality of research reports (e.g., papers) and research proposals (e.g., grant proposals) to cut down all the “dark trees.” I hope that if we tackle these two challenges, we will make using a computer as refreshing as taking a walk in the woods, while empowering users. 

Posted in: on Tue, August 13, 2019 - 2:12:03

Johannes Schöning

Johannes Schöning is a Lichtenberg Professor and Professor of Human-Computer Interaction (HCI) at the University of Bremen in Germany. His research interests lie at the intersection between (HCI), geographic information science and ubiquitous interface technologies.
View All Johannes Schöning's Posts

Post Comment

No Comments Found

The purpose of visualization is insight, not pictures: An interview with Ben Shneiderman

Authors: Jessica Hullman
Posted: Mon, August 05, 2019 - 3:10:09

Few people in visualization research have had careers as long and as impactful as Ben Shneiderman. I caught up with Ben over email in between his travels to get his take on visualization research, what’s worked in his career, and his advice for practitioners and researchers.

Jessica Hullman:  How would you answer the question: What is visualization research?

Ben Shneiderman: First let me define information visualization and its goals; then I can describe visualization research.

Information visualization is a powerful interactive strategy for exploring data, especially when combined with statistical methods. Analysts in every field can use interactive information visualization tools for:

  • more effective detection of faulty data, missing data, unusual distributions, and anomalies;
  • deeper and more thorough data analyses that produce more profound insights; and
  • richer understandings that enable researchers to ask bolder questions.

Like a telescope or microscope that increases your perceptual abilities, information visualization amplifies your cognitive abilities to understand complex processes so as to support better decisions. In our best moments, information visualization users work on problems that address the grand challenges of our time, such as the UN Sustainable Development Goals.

Visualization research seeks new visual displays, control panels, features, and workflows that improve the capabilities of users. To accomplish these goals, visualization researchers develop perceptual and cognitive theories that guide design, in concert with developing new tools. Visualization researchers also develop quantitative and qualitative evaluation methods to validate their hypotheses and refine their theories.

My strong way of promoting information visualization is to declare that it is such a powerful amplifier of human abilities that it should be illegal, unprofessional, and unethical to do data analysis using only statistical and algorithmic processes. (However, as with all visualizations, the accompanying design has to account for users with visual disabilities by using sonification or other methods. Visualization’s potency in revealing unusual distributions and interesting clusters may productively encourage statistics and algorithm designers to extend their methods to detect these patterns as well as incorrect, anomalous, inconsistent, and missing data.)

And finally, remember that the purpose of visualization is insight, not pictures. By addressing meaningful problems and difficult decisions, we can help leaders and managers to be more effective.

Hullman: You’ve obviously made some major contributions to human-computer interaction and visualization research. Take, for example, direct manipulation and dynamic queries, two foundational concepts. Can you give us a sense of what it was like to be “pioneering” back when visualization was still a nascent field?

Shneiderman: I see direct manipulation as providing key principles which still guide design of most new technologies:
  • Continuous representations of the objects and actions of interest with meaningful visual metaphors
  • Physical actions or presses of labeled interface objects (e.g., buttons, sliders, etc.) instead of complex syntax
  • Rapid, incremental, reversible actions whose effects on the objects of interest are visible immediately.

It was thrilling to be working on user interfaces, just as graphical user interfaces were emerging. The 1982 Gaithersburg, MD, conference was an historic event that triggered the founding of the ACM SIGCHI group. We hoped to draw 200 to 300 people from computing, psychology, and human factors to that conference, so it was electrifying when 906 attendees from even more diverse disciplines showed up.

My recently published book Encounters with HCI Pioneers: A Personal Journal and Photo History (Morgan & Claypool Publishers, 2019) tries to tell the story of how the field of human-computer interaction was formed and the bold, creative, and generous personalities who were involved.

Hullman: How do you perceive the work that you and your collaborators have done in these fields? For instance, how do you describe the contributions of your work when talking to people who don’t know these fields very well?

Shneiderman: The easiest stories to tell at airport encounters and dinner parties are about the creation of the 1) web link: a form of direct manipulation selection by clicking on highlighted links and 2) the high-precision touchscreen strategy for small keyboards called lift-off, which is still used on the Apple iPhones and many other devices. These two small innovations have had a huge impact, enabling billions of users to have easier access to information, be in communication with family, get medical care, and much more. These innovations and their evaluations were published openly with no patent protection, allowing widespread refinement and application. I thought they were both important and valuable, but I did not anticipate the immense impact they would have, nor that they would become so common and obvious to so many users.

Treemaps in use, 2019.

My airport and dinner discussion partners often take a moment to realize that these innovations had to be invented, but then they want to know more, so I tell the story of Steve Jobs’s visit to our lab in October 1988 and how he went from demo to demo saying “That’s great! That’s great! That sucks! That’s great! That sucks!” He wasn’t always right, but he had strong opinions. If they want to know more, I talk about how direct manipulation ideas led to dynamic queries, information visualization, and treemaps. The most fun is to show all these innovations through commercial applications, such as the treemaps, on my mobile phone.

Early implementation of the treemap for hard drive overview.
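As a sketch of the underlying idea, the original treemap algorithm recursively partitions a rectangle among a tree’s nodes in proportion to their sizes, alternating horizontal and vertical splits at each level (the “slice-and-dice” layout). The Python below is a minimal illustration of that scheme, not Shneiderman’s implementation; the data format and function names are invented for this example.

```python
def node_size(node):
    # Leaf nodes carry an explicit "size"; internal nodes sum their children.
    if "children" in node:
        return sum(node_size(c) for c in node["children"])
    return node["size"]

def treemap(node, x, y, w, h, depth=0, rects=None):
    """Lay out `node` inside the rectangle (x, y, w, h), alternating the
    split direction by depth; returns a list of leaf rectangles."""
    if rects is None:
        rects = []
    if "children" not in node:
        rects.append((node["name"], x, y, w, h))
        return rects
    total = node_size(node)
    offset = 0.0
    for child in node["children"]:
        frac = node_size(child) / total  # child's share of this rectangle
        if depth % 2 == 0:  # even depth: slice left-to-right
            treemap(child, x + offset * w, y, w * frac, h, depth + 1, rects)
        else:               # odd depth: slice top-to-bottom
            treemap(child, x, y + offset * h, w, h * frac, depth + 1, rects)
        offset += frac
    return rects

# Example: a toy file-system tree laid out in a 100 x 100 canvas.
tree = {"name": "drive", "children": [
    {"name": "docs", "size": 30},
    {"name": "media", "children": [
        {"name": "photos", "size": 50},
        {"name": "music", "size": 20},
    ]},
]}
rects = treemap(tree, 0, 0, 100, 100)  # [(name, x, y, w, h), ...]
```

Each leaf ends up with an area proportional to its size, which is what makes a single screen show an entire hard drive (or any hierarchy) at once.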

While treemaps have had great success, being used on thousands of websites, added to Excel in 2014, and used in most commercial information visualization tools, I’m also proud of the ideas that went into creating the commercial success story of Spotfire. It was a great experience to help take the academic ideas of direct manipulation and dynamic queries into a widely used product that revolutionized pharmaceutical drug discovery, oil/gas exploration, manufacturing, and many other application areas. Spotfire enabled rapid exploration of large datasets with many variables to discover the distributions, anomalies, outliers, and relationships among variables. Early successes included discovery of a new compound for a life-saving drug, a hidden performance failure in a large oilfield, and a manufacturing flaw that had caused more than a billion-dollar loss. It also paved the way for the hugely successful Tableau, which also came from academic colleagues, including Pat Hanrahan and Jock Mackinlay, a few years later. Tableau is now a $10 billion company with millions of users and many competitors.
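The core of dynamic queries, as described above, is that filter widgets (typically range sliders) map directly onto predicates over the data, and every adjustment immediately re-derives the visible subset, so there is no separate “run query” step. The following is a minimal sketch of that idea with invented names and no UI, not Spotfire’s actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class DynamicQuery:
    """Holds a dataset and the current slider ranges; any range change
    immediately re-derives the visible subset."""
    rows: list                                  # list of dicts, one per record
    ranges: dict = field(default_factory=dict)  # column -> (lo, hi)

    def set_range(self, column, lo, hi):
        # In a real UI this would be bound to a slider's drag events,
        # so the display updates continuously as the slider moves.
        self.ranges[column] = (lo, hi)
        return self.visible()

    def visible(self):
        # A row is visible only if it satisfies every active range.
        return [r for r in self.rows
                if all(lo <= r[c] <= hi
                       for c, (lo, hi) in self.ranges.items())]

# Example: filtering homes by price and bedroom count.
homes = [
    {"id": 1, "price": 250, "beds": 3},
    {"id": 2, "price": 480, "beds": 4},
    {"id": 3, "price": 320, "beds": 2},
]
dq = DynamicQuery(homes)
dq.set_range("price", 200, 400)  # homes 1 and 3 remain visible
dq.set_range("beds", 3, 5)       # narrows further to home 1
```

Because each slider tweak is a cheap, reversible refinement rather than a query to compose and submit, users can sweep through the parameter space and watch distributions, outliers, and gaps appear and disappear.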

Other satisfying stories are the widespread use of our network analysis and visualization tool, NodeXL, which pioneered new layout strategies that more clearly showed meaningful clusters and communities. The six patterns of Twitter discussions were an important insight. NodeXL is the most widely used tool for education about network analysis; our book on its use will appear in a second edition in 2019. The striking example below shows the polarized voting pattern in the U.S. Senate. The tightly linked red Republican Senators are at the top, while the tightly linked blue Democrats are at the bottom. The two green Independent Senators typically vote with the Democrats.

A visualization of voting patterns in the U.S. Senate created using NodeXL.

Finally, the work on event analytics, especially electronic health records, to show aggregated patient histories opened up new possibilities for visual analytics projects that revealed successful or failed treatment patterns. The EventFlow tool has been used by 50 to 60 projects, opening new directions for information visualization research.

A visualization in EventFlow.

Hullman: Is there a single contribution you’ve made to visualization research, either working alone or in collaboration, that you are most proud of?

Shneiderman: Yes, my students! My greatest satisfaction was to work with Catherine Plaisant for 30-plus years to train a new generation of students who did projects, earned degrees, and went on to do great work on their own. We engaged with students in ways that promoted their self-efficacy to build their capacity and confidence as researchers and practitioners. The University of Maryland Human-Computer Interaction Lab (HCIL) community encouraged collaborations to improve the research, writing, speaking, and social skills of each member. Paper clinics to review each other’s writing and practice talks to improve slide presentations became common formats in which faculty, staff, and students worked hard to improve the products and skills of every HCIL member.

Hullman: If readers were going to read one thing you’ve written, what would you recommend and why?

Shneiderman: If they are serious, they should read the 6th edition of Designing the User Interface: Strategies for Effective Human-Computer Interaction, but that is asking a lot. An easier starting point would be the 1996 paper “The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations” (Proc. IEEE Symposium on Visual Languages ’96). This keynote paper was hastily written close to the publication deadline, but the ideas were fresh, and my informal phrasing of what I called the Information Visualization Mantra (overview first, zoom and filter, then details-on-demand) has attracted many readers. The simple ideas in this paper, presented at a second-tier conference, have earned it more than 5,000 citations.

Hullman: At a few times in its history the visualization community has reflected on its value to the world, sometimes using terms like the “death of visualization” to refer to the way the research community has evolved from existing to help other scientists solve problems to existing more independently. What’s your take on the state of visualization research today? Are we past any risk of being irrelevant? Are we focused on the right things?

Shneiderman: I have strong confidence in the importance of information visualization, but it will take a generation for it to gain acceptance by diverse audiences. Our challenge is to show how algorithmic and automatic approaches based on statistical methods can be improved by interactive information visualization. Just as telescopes, microscopes, cameras, x-ray machines, and sonograms extend our perceptual abilities, information visualization amplifies our cognitive abilities. We will need to shift education curricula in elementary and high schools so that they teach data visualization literacy and fluency with interactive information visualization tools.

Hullman: You have always advocated for visualization research that has a real and practical impact in the world. How do we make visualization useful and relevant for practitioners? And what is the best way to bridge gaps between people who do research in visualization and those who use visualization in their profession (designers, analysts, product developers, etc.)?

Shneiderman: By working with practitioners, information visualization researchers can accelerate their research by testing their tools and methods on real problems. The evidence is clear that the payoff for working with practitioners is strong. It is also very satisfying to help others and I find that our work has immense value in dealing with real problems whose solution promotes understanding and improves lives. Several of us make the case in the paper “Apply or Die: On the Role and Assessment of Application Papers in Visualization.”

Hullman: Is there anything (else) you think visualization researchers should do more of?

Shneiderman: Have fun in your work, do your best to save the world, and tip your drivers generously.

Hullman: A few years ago there was some discussion among visualization researchers about whether visualization research had “grand challenges” in the same way that other fields have (e.g., NP-hard problems in more formally theoretical subfields of CS). Do you have any thoughts on this question? Are our problems as well defined and all encompassing as those in other branches of computer science? Or are visualization and human-computer interaction research fundamentally different, and if so, what would you say is the important difference?

Shneiderman: The information visualization community is working on some of the grandest challenges of our time: amplifying human cognition in the exploration of data. Data science and explainable artificial intelligence are clearly among the biggest opportunities of our time—information visualization is vital to successful outcomes for both topics. We’ve made important contributions and had high impact already, but our influence will grow as visualizations become more important parts of products and data visualization literacy grows. Other grand challenges for information visualization researchers are to improve storytelling capacity for the general public, to engage users to explore on their own, and to support researchers in understanding causality. A larger challenge is to encourage the shift from rationalism, which assumes that algorithms are the answer, to empiricism, which assumes that continuous exploration, persistent questioning, and vigorous dialog will promote a deeper understanding of our world.

Hullman: These days many people are drawn to work with visualization but may not know how to best contribute. What advice would you give a new Ph.D. student who wants to have a real impact on visualization use in the world?

Shneiderman: Start by working on a real problem—one that you have or that you get from someone else. Working on real problems leads to better theories and better tools.

Hullman: What about a visualization practitioner, like a designer or developer?

Shneiderman: Build something, test it, improve it, and then do it again on another problem. Keep producing, keep working to serve the needs of real users working on real problems with real data.

Hullman: What about someone who is already doing visualization research, say in their early to mid career (like my colleagues and me!)? Sometimes it's hard to decide which problems to devote one's time to, and to know how much one should care about increasing one's metrics (citations, etc.) to achieve greater "visibility" versus working on the problems that seem most important, even if they are not considered "core InfoVis topics" (uncertainty visualization being one example that comes to mind). How did you choose your problems? Did you ever experience any tension between the problems you wanted to work on and those that the research community thought were important?

Shneiderman: While I did have topics that I was attracted to, I invested greater time and energy where there was a person who cared about the result. Working to serve the needs of a real person raises the value of your work and focuses your attention on real problems, not the imaginary ones you invent. My 2018 paper “The Twin-Win Model: A Human-centered Approach to Research Success” showed that papers written with a non-academic co-author produced 2 to 10 times as many citations as papers that had only academic authors.

Hullman: Do you have any personal heroes, or people you look up to?

Shneiderman: Heroes in our field: Stu Card was always a wonderful colleague whose work was deep and important. Current leaders that really impress me are Maneesh Agrawala, Katy Börner, Jeff Heer, and Tamara Munzner, who are remarkably productive, creative, and influential. Also the amazing team of Fernanda Viegas and Martin Wattenberg—constant sources of brilliant and beautiful projects.

Larger figures: Marshall McLuhan, Buckminster Fuller, Lewis Mumford, Rachel Carson, Donald Knuth, Grace Hopper, Mihaly Csikszentmihalyi, Rita Colwell, and Walter Isaacson. A few years ago Robert Kosara invited me to write a longer list of those who influenced me.

Hullman: Can you tell us a bit about what you’re thinking about now? What problems have most motivated you recently?

Shneiderman: I still see big opportunities in medical applications:
  • Analyses of electronic health records to support clinical research that detects successful treatment patterns,
  • Personal data from patients who are tracking their disease and health, and
  • Wellness data, such as exercise, diet, stress, sleep, etc., from quantified self users.
Visualization could do much to improve the presentation of even basic information such as blood pressure, sleeping patterns, or mental health over time. Visualization is also an important component for deeper understanding of key issues in climate change, sustainability, environmental protection, cybersecurity, community safety, and many other domains.

I’ve also been addressing policy issues in talks such as "Algorithmic Accountability: Design for Safety," and writing about human-centered design philosophies to guide new technology. My 2016 book The New ABCs of Research: Achieving Breakthrough Collaborations continues to draw invitations to speak, while suggesting new ways to raise research impact by choosing the right problems and forming the right teams.

Hullman: Any parting thoughts you’d like to leave us with?

Shneiderman: Visualization is a powerful approach to understanding the world, so let's enjoy the work of making it more central to research, education, journalism, and public policy. It will take time and energy, but embrace the struggles, take on the challenges, and celebrate your successes. These are worthy efforts.

Ben Shneiderman in Torres del Paine National Park in Chile, January 2019.

Ben Shneiderman is a Distinguished University Professor in the Department of Computer Science, Founding Director (1983–2000) of the Human-Computer Interaction Laboratory (HCIL), and a member of the UM Institute for Advanced Computer Studies (UMIACS) at the University of Maryland. He is a Fellow of the AAAS, ACM, IEEE, and NAI, and a Member of the National Academy of Engineering, in recognition of his pioneering contributions to human-computer interaction and information visualization. His widely used contributions include the clickable highlighted web-links, high-precision touchscreen keyboards for mobile devices, and tagging for photos. Shneiderman’s information visualization innovations include dynamic query sliders for Spotfire, development of treemaps for viewing hierarchical data, novel network visualizations for NodeXL, and event sequence analysis with EventFlow for electronic health records.

Ben is the co-author with Catherine Plaisant of Designing the User Interface: Strategies for Effective Human-Computer Interaction (6th ed., 2016). With Stu Card and Jock Mackinlay, he co-authored Readings in Information Visualization: Using Vision to Think (1999). His book Leonardo’s Laptop (MIT Press) won the IEEE book award for Distinguished Literary Contribution. He co-authored Analyzing Social Media Networks with NodeXL (Elsevier, Second Edition 2019) with Derek Hansen and Marc Smith. The New ABCs of Research: Achieving Breakthrough Collaborations (Oxford, April 2016) has an accompanying short book, Twin-Win Research: Breakthrough Theories and Validated Solutions for Societal Benefit (Second Edition, 2019). His reflections on the growth of human-computer interaction will appear in 2019: Encounters with HCI Pioneers: A Personal Journal and Photo History (Morgan & Claypool Publishers).
Posted on Mon, August 05, 2019 - 3:10:09

Jessica Hullman

Jessica Hullman is Assistant Professor of Computer Science and Journalism at Northwestern University, where she researches visualization and uncertainty communication.

Comment (October 1, 2019)

Ben Shneiderman's work has influenced the research of many young researchers around the globe. It is great to have someone like Ben in our world.

Thank you for the great work, inspiration, and great advice over all the years.

Kawa Nazemi (Darmstadt University, Germany)

Creativity limited by technology: Lessons from the past for virtual reality as a re-trending topic

Authors: Oguzhan Ozcan
Posted: Thu, August 01, 2019 - 2:07:25

I wrote this blog to remind young researchers who are surrounded by technology trends that, in the future, these technologies will change their form or may no longer even be mentioned. However, the holistic ideas behind them might be implemented by re-reading their past to create novel futures.

Re-reading is based on inspiration for generating ideas and their analogies. Sources of inspiration can include art, nature, urban life, people, digital content, and obsolete artifacts. Analogy establishes the relationship between these known sources and your unknown ideas. In this process, by re-reading, the essence of the existing approach or mechanism is determined and then adapted to the new one.

Re-reading has often been used in literature, music, history, and philosophy. Although the value of the re-reading method in idea creation has not been fully proven, awareness of it has been increasing. A very recent example is the Walking Assembly Project, which re-read the construction of Stonehenge (3,100 B.C.), investigating how the building stones could be moved by human hands rather than machines. New forms were then designed to provide similar opportunities.

The Walking Assembly Project reintroduces the potentials of ancient knowledge to better inform the transportation and assembly of future architectures. (photo: Matter Design)

Because re-reading extracts the essence of an existing approach or mechanism without being bound by existing technologies, it can be an effective way to generate ideas: first determine what the idea should be, and only then ask how it applies to the technology of today and tomorrow. Without being under the spell of the limitations of available technologies, it is possible to create pioneering, utilitarian, and attainable ideas that can be questioned far from the popular view and considered outside the box.

From this perspective, there are many lessons to be taken from the past for virtual reality (VR), which has again become a trend. However, this trend follows a technology-limited perspective, in which the definition of VR is limited to the current technologies. On the other hand, throughout history VR has been shaped by non-realism, expressions of space-time, and optical illusions.

In ontological analyses extending from the earliest philosophers to Plato, especially in architecture, reality is considered a sharp view, even a bad picture, that prevents us from seeing what we actually want to imagine. Since then, a wide variety of illusions (or virtual realities) have been experimented with as alternatives. Those efforts reflect our dreams of alternative realities in design, especially architecture.

The earliest known examples of the art of animation are thought to date to the Paleolithic period: cave paintings in which an animal's movements are expressed. This discovery, made in France in the late '90s, reveals a time much earlier than the 19th-century zoetrope, which had been accepted as the official starting point of cinema technology. In this ancient period, early humans depicted animal figures drawn on top of each other; when a torch is moved back and forth, it is possible to see an animation that gives the impression the animal is moving. From this we can understand that the curiosity to express space-time has been with us since the dawn of humankind.

The Grand Panneau of the Salle du Fond at Chauvet Cave, an example of Palaeolithic graphic narration with, on both sides, two successive hunting sequences displaying cave lions. (photo: J. Clottes, Chauvet Science)

The expression of depth in space and of space-time is also a common element in ancient Egyptian paintings. However, the sense of depth that we understand today was first achieved with the discovery of linear perspective in the early 15th century. This was developed thoroughly in Renaissance painting as a two-dimensional medium, and later in the Baroque period. In the 16th century, the discovery that images could be projected onto a wall by optical techniques eventually led to the innovation of photography. Then, as we all know, beginning with the printed photograph, a journey of development started that extends to today's VR technology. Over the course of history, many studies have aimed to give a sense of near-real depth, including stereography and the random-dot stereogram technique.

Returning to architecture, the design of spaces like those we might imagine Plato had in mind was an endeavor of the ancient Greeks. Perhaps the Parthenon is the most important early example. Using the optical refinement known as entasis, the columns of the building facade were made to swell slightly at the midpoint relative to their tops and bases, so that they appear straight, without perspectival distortion, when viewed from the ground.

On the other hand, in some old Roman or Baroque-period monuments, architecture was designed to create the illusion that short corridors were actually long, or that a flat church wall or ceiling was actually a dome.

Today, “realism versus non-realism” is also discussed by some designers, ignoring the limitations of current and possible future technologies. One of the current approaches is the manifesto called “transparent drawing.” According to this manifesto, a combination of visual expression, proliferating and complex information, perspective view, a desire to build, and a desire to be realistic has become a “gangrene.” This prevents the designer from holistic thinking; therefore, as humans, we must return to our “factory settings” as soon as possible. Otherwise, we cannot find out what we really want within the dizzying vortex of technological development. The authors warn us that our thinking will surrender to the flow of technology that we do not know. Transparent drawing aims to reveal the virtual dimension of our imaginary universe in a more transparent way by expressing what must be seen behind the realistic images in a drawing.

In a sense, the idea behind transparent drawing confirms the Wagnerian approach to holistic thinking in art. The concept of Gesamtkunstwerk (“total work of art” or “synthesis of the arts”), proposed in 1827 by the philosopher Eusebius Trahndorff, was later questioned by Wagner, who wondered what the ideal relationship between music, text, and dance should be in a holistic structure. Wagner, in this thought, was asking what kind of illusion must be created to force the human brain to look far beyond the opera. Even today, Wagner's ideas present a universal point of view that is still debated, and his arguments are triggers to explore the boundaries of creativity beyond technology, existing conditions, and established molds. Such approaches run counter to current VR trends, which are based on current technological limitations. We have yet to fully respond to this holistic thought in current VR research and applications.

After this brief summary of thousands of years of know-how, how meaningful is it for today's technology-based VR research to ask questions such as “How do we produce content?”, “How should we do marketing?”, “How can we manage this environment?”, and “What do we do according to the user's expectations?”

Instead, isn’t it better to re-read the curiosity and the ideas produced long ago?

One way of overcoming the technology-limited perspective on VR is to design a 101-level university course, taken at the beginning of one's academic education, to promote this awareness. My next effort will be to prepare and implement course content according to this vision of re-reading the past, and to see the results.


Oguzhan Ozcan

Oğuzhan Özcan is a full professor and director of Koç University-Arçelik Research Center of Creative Industries, specializing in user-centered design and practice. He is supervising a number of research projects, publications, and book contributions relating to interactivity and design art.


CHI-ldren: Two kids’ impressions from CHI 2019

Authors: Julianna Kun, Nicholas Kun
Posted: Tue, June 04, 2019 - 1:55:36

Hello! Our names are Julianna and Nicholas Kun. We went to CHI 2019 with our dad. We are 11 and 8 years old and would like to present to you a few impressions we got from the conference!

We really enjoyed the VR demos. One of them was a VR swing, where you were on a physical swing while wearing a VR headset. One of the VR worlds was called jellyfish. You swam in an ocean, and the faster your swing went, the faster you’d move in the simulated water. We thought that it would be cool if in the future you could use this technology to test an amusement park ride, to see if the ride was too extreme or too boring. Another VR demo we tried was a simulated museum. The simulation included physical objects we could touch, and a VR overlay we could see.  We thought it was very realistic, and even looked like Glasgow’s Kelvingrove museum that we visited just before seeing this demo. Will we be able to use this technology in a few years to bring museums to us, instead of having to go all the way to some place to see something? That would be exciting.

Figure 1. The two CHI-ldren, Julianna and Nicholas, swinging with VR headsets (top left), viewing museum artifacts in VR (right), and taking in the views of Glasgow at the TU Eindhoven CHI party (bottom left). Nicholas testing the VR fire experience (top middle).

Nicholas experienced yet another VR demo: Here your wand-controlled character had to escape from a burning building. This demo combined visuals with a simulation of heat from fires. This was done using heating elements and a fan. Nicholas’s suggestion for improving realism in the next version: How about adding moving floor tiles to allow the user to walk or run?

After the demos, we went to a party. The food tasted very good, and the atmosphere was very upbeat. Though the party still had to do with work, it was also a place for friendship and fun.

Overall, CHI was an awesome experience—we submit to you some pictures as additional evidence (Figure 1). We’re glad that we went and cannot wait ‘til next year!


Julianna Kun

Julianna Kun is a rising 6th grader at the Oyster River Middle School in Durham, NH. She likes books, travel, The Beatles, and attending CHI.

Nicholas Kun

Nicholas Kun is a rising 4th grader in the Mast Way Elementary School in Lee, NH. He likes books, travel, The Beatles, and attending CHI.


Reflections on the first ACM SIGCHI-Sponsored Summer School in Saudi Arabia

Authors: Shiroq Al-Megren, Ragad Allwihan, Khalid Majrashi, Naelah Alageel, Areej Al-Wabil
Posted: Wed, May 29, 2019 - 11:30:14

Between August 27 and August 30, 2018, the ACM SIGCHI-Sponsored Summer School on Research Methods took place in the Saudi capital of Riyadh. The school received ACM sponsorship through the SIGCHI Summer/Winter Schools Program (2018), which helped generate additional funding from King Abdulaziz City for Science and Technology (KACST). It was a highly successful event that created a great deal of enthusiasm. In this report, we share our experiences of arranging the event in a region that has very limited access to accredited human-computer interaction (HCI) programs.

As is the case in many countries, Saudi Arabia has become the focus of increasing local attention from those interested in HCI research and practice. This increase is simultaneous with an increasing demand for HCI training and exposure that is not currently included on the national student curriculum in Saudi Arabia. It is essential that educational institutions can offer additional HCI support. This means that their research priorities must be widened, since there is a heightened importance of user engagement and user-centered design in this field. In order to give local students and young professionals extra opportunities and a more enriched curriculum, many HCI research groups provide HCI training. This exposure in the HCI field is crucial, and ultimately led to the establishment of the local KACST ACM SIGCHI chapter, made up of researchers, professionals, and students from all over the country. Members taking part in this can attend seminars, whereby they have the chance to share their experiences and engage in ongoing research, as well as being afforded a number of opportunities to network. It was due to the community’s positive response to these initial events that the Summer School was proposed and subsequently established. 

Figure 1. A live web lecture from a colleague from Al-Zaytoonah University of Jordan (Fuad Ali Qirem, Ph.D.) to showcase work in other Arab states.

HCI research methods were at the heart of the Summer School. The program was created to cover a wide variety of topics related to gathering and analyzing HCI data, methods that extend well beyond experimental design and surveys to include ethnography, diary studies, and other key HCI elements. A number of sessions focused on interdisciplinary HCI practices to make sure that current topics and practice areas were covered. In these sessions, students were taught about the intersection of HCI and machine learning, voice user interfaces, modeling and simulation, and semiotics. It was fairly easy to recruit speakers, who came from across the country and the Arabian Gulf region to take part. A total of 20 speakers (13 female and 7 male) from both academia and industry contributed to the final program, which included lectures, case studies, workshops, and a field visit to Dopravo, a local user experience design agency.

Figure 2. Site visit with a user experience design group at Dopravo, a local agency.

Altogether, 27 students enrolled in the school, most of whom were female, which clearly points to a gap. Those who took part had different levels of HCI expertise, spanning from rudimentary knowledge to heavy active engagement in HCI research. We were thrilled at the community's involvement. College clubs gave up their free time to organize and record the event, and in return were given the chance to attend some seminars and workshops. Sponsorship from various research hubs and local agencies further enabled the school's success.

Figure 3. Prince Sattam Bin Abdulaziz University Innovation Club.

A comprehensive academic and industrial setting was created at the school, allowing the students to gain a solid understanding of HCI. Furthermore, there was a high level of interaction among participants, with excellent discussions held, which led to effective HCI community building. For example, networking sessions and a Postgraduate Consortium were included in the schedule to solidify connections and nurture budding interest in HCI. Another major highlight was that the school shed light on a variety of accessibility needs in order to demonstrate the importance of considering user needs. This was made possible through lectures on current accessibility research and by spotlighting users with disabilities. For instance, a session took place with a disability advocate who is also a blind application developer. He presented a variety of applications that help him find his way locally, nationally, and even internationally, and talked about his echolocation skill, which greatly impressed the audience.

This experience has taught us that such events allow strong bridges to be created, allowing researchers and practitioners to share knowledge and experiences. The positive response to the HCI summer school has largely encouraged us to persevere with our attempts to arrange HCI events, since we believe it solidifies, supports, and improves the development of the HCI community in Riyadh. 


Shiroq Al-Megren

Shiroq Al-Megren is a postdoctoral fellow at the Massachusetts Institute of Technology and a faculty member at King Saud University.

Ragad Allwihan

Ragad Allwihan is an assistant professor of computer science at King Saud Bin Abdulaziz University for Health Sciences and the founder of HCI Arabia (a voluntary initiative dedicated to translating HCI content to Arabic).

Khalid Majrashi

Khalid Majrashi is an assistant professor of computer science at the Institute of Public Administration (IPA) and founder of UXArabs (a voluntary initiative dedicated to introducing user experience concepts in the Arab world).

Naelah Alageel

Naelah Alageel is an academic researcher at the Decision Support Center (DSC) for King Abdulaziz City for Science and Technology (KACST) and Boeing. She contributed to developing usability-engineering frameworks that incorporate usability-testing tools to help professionals across disciplines identify usability problems and measure user satisfaction.

Areej Al-Wabil

Areej Al-Wabil is a Principal Investigator at the Center for Complex Engineering Systems (CCES) at King Abdulaziz City for Science and Technology (KACST) and MIT. She is also a research affiliate in MIT’s Institute of Data, Systems and Society (IDSS).


Should curriculum and assessment authors be certified? A response to Rolf Molich

Authors: Gilbert Cockton
Posted: Mon, May 20, 2019 - 2:31:39

Rolf's blog begins with a question: “Should usability testers be certified?”—answering “Yes” and then, more emphatically, “YES!” I will focus on his closing question: “Is certification worth the effort?” His comparisons with certified doctors and pilots (or domestic gas engineers, psychotherapists, etc.) are mistaken, however. These individuals can cause considerable harm through incompetence; usability testers need extensive help from others to cause harm beyond wasted resources. Furthermore, we are nowhere near making a political case for mandatory certification for usability and user experience professionals. Only legislation or industry-wide action would make certification mandatory, and it is politically naive to think this would be realistic within the next decade, and probably longer. Rolf's opening question is clearly rhetorical. It makes for a good headline and opening to his blog, so let that, rather than its credibility, be its worth.

Worth is a very clever word in English (few other languages distinguish it from value). It can be a predicative adjective, taking an object as verbs do, as in the “because you’re worth it” strapline. Used this way, worth indicates that positives (feminine appeal, etc.) justify negatives (cost, time, poor unadorned body image, etc.). So, what are the positives of the CPUX accreditation, and what are the negatives?

The positives relate to Advanced Level Usability Testing being one of a family of three qualifications that are underpinned by impressive curriculum documents authored and edited by experts. These cover much ground and include some up-to-date material and valuable wisdom. For UX workers with no higher education or extensive industry courses, I would expect all three qualifications to be a good place to start. However, only Advanced Level User Requirements Engineering is adequate as a standalone qualification. This leads us into the negatives.

The foundation and user testing curricula are too narrow and overly dogmatic, and would not be approved in any well-run higher education institution. I have a degree in education, wrote my undergraduate dissertation on curriculum design, have a postgraduate certificate, and am on the UK national register of qualified teachers. Despite the expertise behind the CPUX curricula, they lack sufficient alternative perspectives and critical positions to equip anyone in interaction design (IxD), whatever their specialism, to develop an adequate playbook for working strategically in contemporary innovation. There is no springboard to professional growth in these curricula. They are currently closed-terminal qualifications with no indication that they are not the only word on the topics covered (and still less the last word).

I have taught high school curricula with criterion-based assessment and carried this expertise over into university teaching. I never have, and never will, use multiple-choice questions (MCQs). They are not learner-centered. Too often they attempt to trip students up (try Questions 1 and 3 in Too often the “correct” answers are debatable, possibly to the point of being wrong. They make marking cheap and easy, and thus have no place in genuine human-centered practice. They also discriminate, rewarding language skills as much as domain knowledge, when many individuals will lack the required (second) language skills. Their use for decades in UK medical qualifications has spawned a whole industry of courses and guidance on how to handle MCQs. Yes, that’s a help desk, a fat user manual, and near mandatory courses on how to pass exams. If we can fight for users, then surely we can fight for our own learners too. Are MCQs really necessary? What are the pass mark and first-time pass rate for these exams, and the demographics of who has passed so far?

Is there evidence of the curricula and assessment authors having any formal certification in the relevant education practices? Given Rolf’s position on certification, he must logically step down with his collaborators if any are not properly qualified educationally. Their extensive domain expertise in user-centered IxD, developed over decades, does not alone qualify them as educators who can professionally set a syllabus and assess learners (sadly, this is true of almost every academic in higher education, even when they have had some training and accreditation).

So, is certification worth the effort? You may be surprised at my answer: Yes! Study of content that is more up to date and professionally informed than many current academic HCI textbooks is very valuable (although fewer professional books suffer here). I would urge everyone teaching HCI and IxD to look at the curricula at and see how their syllabi match up. The only negative that could make certification not worth the effort is the MCQs. There are other negatives above, but none immediately outweighs the benefits of a “good enough for getting started” curriculum and the non-MCQ-assessed elements. I cannot imagine anyone engaging with the certification process and not benefiting (myself included, although I’d expect to fail many MCQs, as I know far too much to correctly guess the examiners’ “correct” answers).

However, a Master’s level above Foundation and Advanced is needed. As Rolf notes: “Certification certainly will not make you a usability test expert in 10 days” (unlike certification of doctors and pilots). However, no route toward such expertise is offered and could well be severely obstructed by a foundation curriculum that sticks too uncritically to ISO 9241-210, a very dated idealized engineering design subprocess model with no empirical basis in typical design innovation work. It lacks links to strategic business considerations and vital creative practices. Indeed, it does not even acknowledge or respect either. It cannot be followed after a product or service is released. The Advanced Level User Requirements Engineering curriculum does much to compensate with its “model-based context of use analysis,” however idiosyncratic its naming and contrast to the “classic” version (plus a useful section on cooperation with others). 

The current curricula are no basis for a terminal qualification. For me, they will get you to advanced novice at best, without preparation for developing further. The user-testing curriculum is overly focused on fixed-task user testing, overlooking other practices that can equally be viewed as standard activity models (e.g., learnability testing, free use in a lab or workplace). As far as evaluation for IxD is concerned, this comes too close to turning out one-trick ponies. Even so, they may be better one-trick ponies than some minimal HCI education turns out, and this will be even more true for the Advanced Level User Requirements Engineering curriculum. In this sense, CPUX is worthwhile (with reservations, potentially calamitous for some) not only for an individual’s professional development, but also for formal HCI and IxD education, as well as for well-informed employers. I would expect graduates with an HCI education to know what is in the three curricula, be able to follow their “standard activity models,” know when and why these need to be adapted, improved, or replaced, and know where the gaps in the curricula are and how to fill them. I must thus end with a proper acknowledgment of Rolf and colleagues’ valuable work here: Tusind tak (“a thousand thanks”)! (but please spot the Kool-Aid and don’t drink it).

Posted in: on Mon, May 20, 2019 - 2:31:39

Gilbert Cockton

An Editor-in-Chief of Interactions, Gilbert Cockton has retired from permanent employment but still works part-time as a professor.
View All Gilbert Cockton's Posts

Post Comment

No Comments Found

Leonardo da Vinci – the great procrastinator

Authors: Guillaume Couche, Markus Lahtinen
Posted: Wed, April 17, 2019 - 5:15:47

Alongside being organized, possessing strong communication skills, being able to interface with people, and being creative under pressure, the ability to meet a deadline with a satisfactory outcome is a cornerstone of current work life. Continuous procrastination is the opposite of keeping deadlines. A consistent ability to meet deadlines signals something profound about a person’s professional capacity and ability to function in contemporary work settings. If deadlines are repeatedly missed, things will get worse for you. The popular stories of modern geniuses support the general grand narrative of the high-energy entrepreneur who manages networks of people and travels like a well-lubricated machine. In that machine there is little room for missed deadlines. We believe this holds true for many interaction design professionals (and designers alike). Much value has been generated with that skill set, and will continue to be delivered with it.

Design for a flying machine by Leonardo da Vinci

In this short text, we will take a closer look at the life of a pioneering designer, artist, and polymath: Leonardo da Vinci. Not only is it timely to remember da Vinci, given that it has been 500 years since his passing in May 1519, but his life and career may also serve as an illustration of more timely questions regarding contemporary design work. We ask ourselves: How would the output of Leonardo da Vinci be valued in today’s work environment? Evidently, this is a speculative and anachronistic question, but it forces us to systematize da Vinci’s artistic portfolio and take a closer look at the available records of his life.

An obvious approach to get a glimpse of da Vinci’s production is by means of an online image search. Seen across a lifetime, viewing his artistic work may generate the impression of a highly prolific professional and artist. Looking beyond self-portraits and the iconic Mona Lisa, the faded paper sketch miniatures on Google Images may create an impression of a consistent production in an equally consistent artistic medium: paper. Turning to alternative records of da Vinci’s life, the available sources are fewer than one would expect. One of the more important and recognized biographies of da Vinci’s life is the work by Giorgio Vasari (1511–1574), often referred to as the first art historian. From Vasari we learn that “Leonardo started so many things without finishing them.”

Directing attention to the contents of those faded papers, da Vinci’s creative production spans a wide range of outputs: paintings, sketches for aerial vehicles, weapons, sheet music, and explorations of human anatomy. Da Vinci’s production as a painter reveals an alternative narrative to the one of being highly prolific. Adding up the individual finished paintings of da Vinci’s lifetime lands in the range of 15 to 20. A telling story dates back to 1480, when Augustinian monks in Florence commissioned da Vinci to paint the Adoration of the Magi. Da Vinci took off to Milan and never finished the commission. Consequently, it was handed over to another painter to complete.

In current parlance, da Vinci fits well into the description of an apparent procrastinator: someone avoiding finishing a task that needs to be completed. It also raises an unnerving doubt as regards his ability to fit into today’s work environment. Given these shortcomings in his creative production, are there alternative frames to deconstruct and understand da Vinci’s output?

Unpacking and reframing procrastination

An alternative to describing da Vinci as a procrastinator is to view his artistic career as “a series of preparatory studies” crossing into seemingly unrelated disciplines and domains. For example, we also know da Vinci as an inventor of aerial devices, a bicycle, ingenious gadgets, and even war machines. His interest in the human body and anatomy has not passed unnoticed, nor has his interest in conducting scientific studies and experiments.

This experimental attitude of da Vinci resonates well with the mindset among contemporary designers that experimentation across artistic genres should be the modus operandi for any designer. But there’s a wider point to be made here, one that stretches beyond experimentation across artistic domains. Conventional understanding of skills and abilities hinges on conventions, or a set of underlying assumptions, held within a specific domain. What may come across as an unrelated idea or practice from the standards upheld in one domain comes into a different light when projected along the periods of da Vinci’s life. The ideas and skills generated from da Vinci’s wide range of interests and curiosity developed meta-abilities and opportunities for cross-fertilization that are historically unrivaled. Da Vinci informs us about the possible bounty of moving into more uncharted territories, ones that are not necessarily considered artistic or design-focused.

Da Vinci’s vitae as a broken curriculum

Even though it may come across as a stretch to our anachronistic thought experiment, we couldn’t resist the temptation to think about a possible answer to the initial question: How would the output of da Vinci be valued in today’s work environment? In school, he might have come across as absent-minded. Upon graduation, his peers might have considered him somewhat of an outlier. As he faced a professional recruiter later in his life, da Vinci’s portfolio and career choices might have come across as a broken curriculum. Most likely, da Vinci wouldn’t even be called to an interview in the first place. There is also a chance that da Vinci would respond more opportunistically, maybe by role-playing his way into convincing the recruiter of his worthiness, but that doesn’t really fit the character of da Vinci. He would already be on his way to Milan before even receiving his well-articulated rejection for a job interview. Da Vinci probably never even considered himself a polymath or inventor, nor was he around to be acknowledged as a global genius.

It could be the case that current educational and professional structures would have absorbed and further developed da Vinci’s ongoing “studies,” but that’s not the point of thinking about da Vinci’s life in terms of a broken curriculum. The point is his seemingly relentless adherence to the principle of “experimentation and curiosity first,” even though it compromised commissioned obligations. 

Final note

For most of us, active procrastination is not advisable. Systematic procrastination will not develop our abilities as designers or further our professional ambitions. But—and even though the iconic status of da Vinci as a genius is reserved for a handful of individuals over centuries—we still think that da Vinci's life raises some important questions on current design work. While structures and institutions—be they in the form of design agencies, corporate settings, research communities, or professional associations—create fairly stable operating zones with a set of “rules of the game” that the members can use to their benefit, the same structures may confine and inhibit the same members from further leveraging their skills and abilities. The story of da Vinci tells us, as designers and leaders in a wider sense, to consider and challenge those taken-for-granted institutional boundaries.

Posted in: on Wed, April 17, 2019 - 5:15:47

Guillaume Couche

Guillaume Couche is the co-founder of the Wolf in Motion creative agency in London, UK.
View All Guillaume Couche's Posts

Markus Lahtinen

Markus Lahtinen holds the position as Lecturer at the School of Economics and Management at Lund University in Sweden.
View All Markus Lahtinen's Posts

Post Comment

@gabriel (2020 05 29)

Nice one, my lifestyle is like Davinci, curiosity first, always learning.
and i dont really fit on the corporate game. so i dont work. constantly learning and experimenting! cheers!

Should usability testers be certified?

Authors: Rolf Molich
Posted: Wed, April 03, 2019 - 11:01:26

Should usability testers be certified?
Or rather: YES!

Let me ask two counter questions: Would you trust a doctor who was not certified? Would you want to fly with a pilot who was not properly certified?

Just like me, you hopefully want usability work, especially usability testing, to be taken seriously by your stakeholders, in particular your development team and management.

To be taken seriously, usability testers must insist on being measured. Usability testers must establish and follow generally accepted rules. Usability testing has matured to the extent where it should be regarded as a standard process, not a piece of art created by unruly artists. If you deviate from the standard process, you introduce unnecessary risks. Most likely, your management does not like unnecessary risks.

Deviations from established practice

Here are 10 deviations from established practice in usability testing that I often see:

  1. Test tasks are too simple
  2. Test tasks contain unintended clues
  3. Moderator helps the test participant too early
  4. Moderator explores the product together with the test participant
  5. Moderator manages the available time for the usability test session badly, for example by allowing the test participant to stray from the given task or by exceeding the time limit agreed with the test participant
  6. Moderator pays attention to test participants’ opinions rather than focusing on what they are actually able to accomplish
  7. Usability test reports include findings that are based on inspection rather than what the test participant did
  8. Usability test reports are unusable, because they are too long
  9. Usability test reports are unusable, because the most important findings are hard to find
  10. Usability test reports are unusable, because they are inconsistent, for example two reports written by the same person or by the same company that have widely differing formats.

Real-world data

My knowledge about these deviations is based on real-world data. The data comes from three sources:

  • CUE studies
  • Reviews of usability tests
  • Usability test certification

CUE studies. Since 1998, I have conducted 10 Comparative Usability Evaluation studies, CUE [1], with more than 140 participating professional teams and a few motivated students. These studies have produced unique insights into how experienced UX professionals do usability testing.

In a CUE study, teams simultaneously and independently usability test the same product, most often a state-of-the-art website. All of the teams are given the same test scenario and objectives for the test. Each team then conducts a usability test using their preferred procedures and techniques. After each team has completed its study, it submits the results in the form of an anonymous report. In a subsequent one-day workshop, all participants meet and discuss the reports, the differences, the reasons for the differences, and how to improve the test process. The differences are often stunning.

Most of the anonymous CUE reports are freely available.

Reviews of usability tests. A few mature companies are relying so heavily on usability testing that they hire neutral experts to check the quality of usability tests carried out by their employees and subcontractors.

I have carried out such reviews. In many cases, the people that I reviewed did an excellent job. In some cases, I even learned important lessons from them. In other cases, subcontractors performed so badly that I recommended that they should either improve considerably or my client should find a more competent subcontractor.

Usability test certification. Here’s the approach to usability test certification that a candidate must follow if they want a CPUX-UT usability test certificate from our not-for-profit organization, the UXQB – User Experience Qualification Board:

  • Obtain a foundation-level certificate (CPUX-F), where you prove your understanding of about 120 basic UX concepts like usability, user experience, contextual interview, persona, quantitative user requirement, prototyping, affordance, usability test plan, and usability testing.

  • Study usability evaluation—that is, usability testing, inspection, and user surveys. The corresponding CPUX-UT curriculum is publicly available. Formal training is recommended but not required.

  • Theoretical examination: Prove your knowledge of usability testing by answering 40 multiple-choice questions, each with six possible answers, within 90 minutes. To pass, you must score at least 70%. Sample questions are available.

  • Practical examination: Prove that you are able to conduct a simple usability test of a given website. You must recruit three usability test participants and write four usability test tasks for a simple website, for example, a weather website. You have seven days to carry out the usability test. The seven-day period can be placed whenever you have time. After the test has been completed, you submit raw videos of the three usability test sessions and your usability test report. The videos must show the screen and a picture of both the test participant and the moderator.

    To pass, you must score at least 70%. The checklist used by the examiners for evaluating the practical examination is publicly available.

Pros and cons 

Certification is not a shortcut to fame, wealth, and honor. Certification certainly will not make you a usability test expert in 10 days, no matter how hard you study and no matter how highly you score in the certification test.

Certification is cumbersome. Obtaining the CPUX-UT certificate takes up to 10 days: two to three days for the CPUX-F certificate, three days for the CPUX-UT training course including the theoretical certification test, and two to four days for the practical test.

On the pro side, curricula and certification are important steps toward a common UX language and a mature profession. Curricula define helpful activity models for usability testing, for example, Prepare utest, Conduct utest session, and Communicate findings; and Preparation of utest session, Briefing, Pre-session interview, Moderation, and Post-session interview. My research shows that these activity models are not known to all professionals. Such activity models help you understand when you deviate from established standards. They help the lone UX professional to understand: Am I doing it right? Another reason for certification is to let people who want to make usability testing their speciality demonstrate to themselves and to others that they are familiar with common knowledge in usability testing. Finally, certification shows that you care about keeping sharp in your profession.

Although many experienced professionals score highly on our certification test, we have seen interesting examples of experienced practitioners who had problems being certified. The most important deviations from established practice that caused problems for experienced practitioners are 1, 3, 5, 6, and 8 in the list above.

Is certification worth the effort? The real-world data I have shows that the practices of many professional usability testers need review, formalization, and a general tightening up. Since the teams I studied were professional, I suggest that everyone can benefit from having their practices reviewed.

Posted in: on Wed, April 03, 2019 - 11:01:26

Rolf Molich

Rolf Molich is vice president of the UXQB, which develops the CPUX certification. In 2014, Rolf received the UXPA Lifetime Achievement Award for his work on the CUE studies.
View All Rolf Molich's Posts

Post Comment

@Bernard Rummel (2019 04 04)

Couldn’t agree more Rolf (how could I, since we’re working together on this!)
As for your opening statement - bad doctors and pilots can kill you. The consequences of bad usability tests are fortunately not quite as dire and direct. Nevertheless, I’d add three items to your list of blunders which can have a direct and severe negative impact:
1) data privacy violations,
2) violation of the participant’s dignity,
3) “burning” the participant base by inappropriate moderation behavior and tasks.
In our practical certification examinations, I have seen examples for all three, for instance: a moderator leaving the video recording on after the session, recording smalltalk about the participant’s wife’s sickness (1); failing to debrief a clearly distressed participant after task failure and explaining the solution instead (2); tasks unnecessarily asking the participant to adopt a political opinion (3).
Certification cannot prevent such issues, but we have at least an opportunity to test the testers. What’s even better, we have an opportunity to collect a body of best practices and common difficulties, and thereby gradually raise the professional standards of our trade. So it’s definitely worth the effort - not only to get a certificate, but also to build certification programs.
Looking forward to further working with you on this,

Bernard Rummel

@David Travis (2019 04 05)

Rolf knows more about usability testing than most people in UX. So I think we should take seriously his suggestion that usability testing needs to be standardised: practitioners “must establish and follow generally accepted rules”. Certification is certainly one way to achieve this and I agree it makes sense to insist that usability testing consulting firms should be certified. This will help clients appreciate the professionalism that’s needed and ensure they get a standard service.

But certification will have a revenge effect. Usability testing is a user-centred design gateway drug for organisations with low UX maturity. Insisting that everyone who runs a usability test must be certified will put a barrier in their way. This could discourage these firms from ever trying out usability testing. At the very least, it will lead to these firms delegating their usability testing to consulting firms with the appropriate certification. And that means the (uncertified) development team are less likely to plan the test, less likely to observe the test, and less likely to act on the results.

Certification will establish a minimum *quality* level for usability testing. But it will inevitably reduce the *quantity* of usability tests that are carried out. So the question then becomes: is a low quality usability test harmful? A low quality usability test at least has the benefit of getting the development team exposed to users. My experience is that exposure to users leads to empathy, which leads to more user research, which leads to firms growing in UX maturity. If we allow only certified people to run usability tests, I suspect that the real revenge effect will be that certification creates a world with a lower net usability than if we let anyone do it.

David Travis @userfocus

CSCW Research @ Latin America

Authors: Claudia Lopez, Cleidson de Souza, Laura S. Gaytan-Lugo, Francisco Gutierrez
Posted: Thu, March 28, 2019 - 2:20:17

After participating in SIGCHI Across Borders events, we proposed and ran a workshop in CSCW 2018 to (i) identify where CSCW and social computing research was being conducted in and about Latin America (LatAm), (ii) characterize its common themes and methods, and (iii) envision a shared agenda that could make LatAm-centered CSCW research more visible and impactful to the international community. 

The workshop accepted 12 position papers from 40 authors coming from six countries. Thanks to SIGCHI funding, 12 authors were able to participate in the workshop, present their work, and collaborate in describing CSCW research in LatAm. Pernille Bjorn (University of Copenhagen) and David Redmiles (University of California, Irvine) volunteered to serve as senior mentors, who introduced key issues in doing CSCW research, advised the workshop participants’ projects, and worked with all of us to pinpoint opportunities and challenges of doing CSCW research in LatAm. What follows is our summary of what we were able to collectively produce in the workshop.

Workshop participants at the end of the event.

CSCW research is being conducted in, and about, several LatAm countries; however, there are only a handful of small research groups in each country. Our ever-evolving map of LatAm-centered CSCW research (below) also shows a larger number of CSCW research groups in Brazil than in other countries of the region due to Brazil’s larger geographical dimensions. There are currently only five ACM SIGCHI local chapters in LatAm: Brazil, Mexico, Guatemala City, Quito, and Santiago-Chile. Across countries, the research groups have few members. In most cases, they are sustained by a single principal investigator working in a computer science or informatics department. Their research teams are often composed of a majority of undergraduate students and a few graduate students, if any. We have not yet identified CSCW research groups with multiple researchers working in a single center or institution, although there were some local and regional efforts in the past. These aspects make us conclude that CSCW research is still scattered across isolated small research groups in LatAm. 

To strengthen these groups’ work and impact, we believe that it is crucial to establish various collaboration channels among them to create a more robust research infrastructure than the current one. We imagine an infrastructure that leads to richer, shorter feedback cycles during the research process. An infrastructure that can emulate the dynamics of a large research lab with collocated researchers and apprentices of diverse backgrounds and levels of experience. Given the strong focus on undergraduate education in LatAm, this infrastructure should also encompass an effective involvement of undergraduate students in the research process as part of their education. 

Map of CSCW research that is centered on Latin America, as of October 2018. Most researchers who focus on Latin America as a place for CSCW research are currently working in Latin American countries. A few researchers who work in Europe or North America also center their investigations on Latin America or Latino communities. We used markers to represent all of them in the map. The most current version of this map is available at

Unsurprisingly, there is a wide variety of themes and methods in LatAm-centered CSCW research. Rather than listing them here, we chose to start identifying their key differentiating characteristics that make it especially valuable to focus CSCW research efforts on this region. The following themes emerged from our conversations in the workshop and subsequent reflections: 

  • Designing for contexts with characteristics that are rare in countries where HCI research already thrives and, therefore, whose influences are insufficiently understood. For example, some CSCW research about LatAm is situated in young or unstable democracies, or in countries with high-income inequality, diverse levels of literacy and technology access, strong socioeconomic class segregation, and high crime rates. Other projects involve technologies in settings where close-knit extended families or long-distance family relationships due to immigration are common. 

  • Designing “against the system” and “beyond data access” to propose alternative configurations for current work and data practices. This includes initiatives that leverage technology affordances to engage users in countering corruption in particular public institutions, strengthening civic participation, or gathering reliable data where official data is not collected or is not accurate enough. It also involves understanding data practices and concerns in such contexts, including various aspects such as trust, privacy, and safety. 

  • Looking for insights on and beyond cultural differences to develop nuanced interpretations of the results of studies centered on LatAm. While cross-cultural frameworks are a common focus, other foci such as stigma and inclusion are being explored as well. Interpretative research methods are often used as a means to reach a deeper understanding of the phenomena under study. While both qualitative and quantitative methods are used, there is a tendency to lean toward mixed methods when quantitative approaches are used. There is also a trend to use participatory design and critical design perspectives in order to seek a deep level of understanding. 

We believe that these aspects warrant a call for more attention to LatAm as a place for CSCW research. While LatAm research is not yet widely published in ACM SIGCHI venues, it is tackling relevant issues that need to be further investigated. 

Based on the current state of CSCW research in LatAm, we see opportunities to foster a research community centered on LatAm that can create new opportunities for mentoring researchers to disseminate their results, train more people to expand the depth and scope of current research, build networks to share lessons learned, and leverage local efforts to a regional level. 

These opportunities do not come without challenges, including well-known financial shortages in our region, which we believe are structural and can hardly be addressed by researchers’ coordinated efforts alone. There is also a language barrier. As English is a second language for most researchers in LatAm, writing compelling stories that best convey the richness of their findings continues to be a challenge that is often not taken into consideration in the review processes. Additionally, national research centers in most LatAm countries continue to value only publications indexed by Web of Science. Unfortunately, the majority of ACM SIGCHI conferences are not indexed by it. Thus, currently, CSCW researchers in the region have to consider trade-offs between what is recognized by their peers nationally and having a worldwide presence in their discipline. Finally, there is also a trade-off between our research being locally impactful versus globally relevant. For many of us, our nearby contexts provide plenty of opportunities to make a significant change by means of our research and work. Finding the right balance between making these kinds of contributions to our communities and being part of cutting-edge research in the discipline has become an additional concern.

Overall, we are very much hopeful that characterizing CSCW research in and about LatAm, its opportunities, and its challenges is a key starting step to engage in more effective efforts to strengthen it and make it more visible to the international community. We are currently working on a five-year agenda to increase mentorship and networking opportunities within the region, which we believe will be instrumental to increase the presence of our region in future ACM SIGCHI venues and the dissemination of CSCW LatAm research.

Posted in: on Thu, March 28, 2019 - 2:20:17

Claudia Lopez

Claudia López, Assistant Professor at the Universidad Técnica Federico Santa María, Chile. She serves as vice-chair of the Santiago-Chile ACM SIGCHI Chapter.
View All Claudia Lopez's Posts

Cleidson de Souza

Cleidson R. B. de Souza, Associate Faculty at the Federal University of Pará, Brazil. He is one of the leaders of the ACM SIGCHI Geographic Inclusion Team.
View All Cleidson de Souza's Posts

Laura S. Gaytan-Lugo

Laura S. Gaytán-Lugo, Assistant Professor at the University of Colima, Mexico. She is the research vice-chair of the ACM SIGCHI Latin American HCI Community.
View All Laura S. Gaytan-Lugo's Posts

Francisco Gutierrez

Francisco J. Gutierrez, Assistant Professor at the University of Chile, Chile. He chairs the recently formed Santiago-Chile ACM SIGCHI Chapter.
View All Francisco Gutierrez's Posts

Post Comment

No Comments Found

The perils of next-gen surveillance technology

Authors: Juan Hourcade
Posted: Fri, February 22, 2019 - 4:27:20

Low-cost, high-performance data capture, storage, and processing capabilities are changing our world. We are in the era of big data. Most large organizations are storing large amounts of data about every aspect of their operations, just in case they need it at some point, turning it over to data scientists hoping to gain insights.

This data revolution has the potential for positive outcomes. For example, it can help model important phenomena such as climate change. In medicine, it can help prevent the spread of infectious disease more efficiently and effectively.

However, in spaces where information about people has few if any legal protections, these developments can have grave consequences. A particular emerging phenomenon is the enormous imbalance between individuals and powerful organizations in terms of access to data and the ability to analyze and act on it. This kind of information inequality inevitably leads to power imbalances, with a high likelihood of negative effects on individual freedoms and democratic values, in particular for the most vulnerable.

There is an increasingly broad awareness about large companies that gather vast amounts of information about people in exchange for free services, thanks in part to Shoshana Zuboff’s recent book Surveillance Capitalism. My concern in this post is with research on surveillance tools and methods that could make the situation even worse by tracking every little thing we do without proper consent (i.e., full awareness of the extent of surveillance, how it is used, and having a meaningful choice to not participate) and most often with little or no benefit to those being surveilled.

Such surveillance research tends to carry a concerning way of thinking. Its first component reminds me of early 20th-century Italian Futurism, with its emphasis on speed, aggressive action, and risk-taking. In this incarnation, it’s a thirst for data about people and quick action to build highly invasive tools without much concern for how they may be repurposed by organizations, without an ethics board. Italian Futurism glorified the cleansing power of war and rejected the weak. The surveillance tech that worries me aims to identify people that do not fit a particular norm, potentially leading to cleansing or manipulation through data.

A second characteristic is a highly paternalistic approach. The idea is that the organization knows better than the people with whom it interacts and therefore has the right and moral authority to conduct surveillance.

A third characteristic is an Ayn Rand–like approach to ethics, which assumes that the self-interest of the organization leads to societal benefit.

Why am I concerned? For employees, in particular those who are in professions and trades where they can easily be replaced, surveillance technologies could prove and are already proving to be quite damaging. For example, they could help companies understand how to squeeze the most work for the least pay, fire overworked employees as they burn out, obtain data to automate tasks and eliminate jobs, use social network analysis to eliminate possibilities of union organizing, and lay off people who do not fit (most likely those from vulnerable groups). For people in a customer relationship with a company, the consequences may not be as severe but may include customized efforts to keep them from leaving, whether it is in their best interest or not, and appeals to their passions and emotions in order to manipulate their behavior.

The greatest concern I have is how these technologies could be used and are already being used by police states where there are few legal guarantees for civil liberties. In such cases, these technologies could be used to track dissidents, listening in on every conversation, knowing about every personal connection, every purchase, every move, every heartbeat and breath, and lead to a variety of punishments for people who do not fit the norm and those connected to them.

So what should we do? We need to redouble our commitment to values that have traditionally been at the core of our discipline. First, when working on tools or methods that involve data about large numbers of people, we need to abandon the approach of aggressive, risky, speedy action taken to grow fast, win a grant, or beat the competition, no matter what breaks. Instead, let’s be guided by societal needs and think about the possible consequences of our research. How do we do this? We put those who are likely to be affected by technology at the center of our processes. We can then also avoid paternalistic approaches and self-serving ethical frameworks.

I feel that part of our community has slowly been abandoning our early commitment to putting those affected by the technologies we design at the center of our design processes. I hope this blog post works as a wake-up call to colleagues, but also to our community, to re-engage with these values and do our part to steer technology toward benefiting society at large and away from augmenting existing power imbalances that are likely to damage individual freedoms and civil rights.

Posted in: on Fri, February 22, 2019 - 4:27:20

Juan Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.


Morning greetings in a Bangladeshi way: Walking, greeting, and technology

Authors: Nova Ahmed
Posted: Tue, February 05, 2019 - 11:27:01

Some colleagues and I once had a big debate about whether technology separates us or connects us. At one point, someone argued that people do not even greet each other these days in Dhaka city.

I was not ready to give it up—I wanted to look into it myself. I had two motivations: the first one was to find out whether people are willing to greet you regardless of what they are busy with; the second one was a personal goal to make myself go for walks (with a bigger goal to lose weight!).

Over a period of six months, twice a week I walked a long path of five kilometers from my home in Banani to my workplace in Bashundhara, in Dhaka. I greeted everyone I met on my way to find out whether I would be greeted back. I also wanted to see how many people were busy with their phones in the morning. My final goal was to let other curious researchers know if a fun project can motivate you to go out, walk, and lose weight. Here’s what I found out: People love greetings, even when they are busy with other tasks and even when they are busy talking on their phones. The bad news is there is no way to lose weight during interesting projects … based on my walking results.

Banani, a high-income region.

The journey
My journey would start early in the morning, between 7 and 9 AM, beginning in summer and ending in winter (June to December 2018). It covered walkways, intersections, residential areas, and bazaar (streetside shop) areas of Dhaka. I took the same route two days a week. I would greet almost anyone I came across with a smile, even when not making eye contact. I was known to many of them by the end of my walking journey. That’s when I took a few photos with their permission, letting them know that I would use them with this post. All of the people I approached posed; some asked me to post their photos on social media along with this blog post!

Nodda, a low-income region.

Here is how my journey worked out: I would start in Banani, a residential area. Early in the morning, there were guards in front of the buildings I passed. On the road there were some shopkeepers doing their morning rituals as they got ready to open up, and women walking toward the garment workshops in that area, most of the time in groups. They all seemed extremely happy, returning my greetings with a smile. I did not see anyone in this area using technology as I passed them in the morning.

I passed the beggars at the Banani-Gulshan intersection. The beggars normally put on a sad face. However, they responded to my greeting with a smile, then immediately returned to their sad facial expressions. This group does not use technology while going about their business.

There were about 30 people on the route from Gulshan to Bashundhara, some busy setting up their businesses by the road and some walking to work. A few of them (around four or five) would be busy talking on the phone as they walked by. Some were extremely busy setting up shops or getting themselves ready. Multitasking is common: I once saw a person brushing his teeth, talking on the phone, and trying to open his shop at the same time. I saw people busy with technology and without technology. In both cases, my morning greetings were returned. Some of the people talking on the phone just nodded to me. The best response I received was from a person who came running after me, putting his call on hold, just to return my greeting.

Toward Bashundhara, a mid-income region.

There were two specific incidents of harassment where I heard bad comments as I walked by. One was from a rickshaw puller making a comment about how women should not walk that fast, followed by a degrading word. The second was from an unknown passerby, who made an abusive comment.

One day in November, I fell down on the road after being unintentionally hit by a cyclist; there was a blind spot where we could not see each other. All the people around came running toward me. One kind rickshaw puller told me to take rest, addressing me as amma (mother). These greetings were not just random interactions; they had something more to them than that.

My roadside experiment came to an end around December, when it emerged that the roads might not be safe during the election period.

My first quest, to find out whether people have forgotten to greet each other in the presence of technology, revealed that interaction in any form brings about new interactions. Only a lack of interaction creates distance. Human interactions are still prioritized over technology, or at least not ignored in the presence of other distractions.

The second goal, losing weight while conducting an interesting experiment, did not meet my expectations. Because the project was so interesting, I was busy observing people, stopping to greet them, and smiling, which did not result in losing any weight.

It has been my pleasure to walk you through the streets of Dhaka. Anyone who has visited the city would agree that we love human interaction, even when we are occasionally busy with mobile phones or other technology artifacts. 

Thank you!


Nova Ahmed

Nova Ahmed is a computer scientist in Bangladesh. Her focus is on feminist HCI and social justice.

Comment (2019-12-24):

Nova, this was such a pleasant read! I took up walking for a while and really enjoyed walking along St Kilda Road up to Albert Lake in Melbourne. Just like you, I smiled at everyone, and people usually smiled back. At times all we need to do is be the first one to extend a hand, or smile… and the connection resonates.

Automatizing the power grid

Authors: Catherine Bischofberger
Posted: Tue, January 29, 2019 - 4:16:36

In this day and age, relations between humans and machines have become rather fraught. A growing number of anxieties crystallize around the use of robots and automation in various industries, not to mention our homes. Things were quite different in the late 19th century, when the introduction of the first machines was expected to relieve people from toiling away for long hours in exhausting circumstances. Families, in particular, reaped the benefits of time-saving appliances. Washing machines, dishwashers, and microwaves gradually became mass-market consumer goods throughout the 20th century.

Nowadays, we worry about robots taking our jobs and becoming smarter than us. But whether we like it or not, the future spells an increasing interaction with machines in one form or another. As this trend intensifies, human-machine interfaces (HMIs) will become an ever more important technology for us to master, as they will enable us to control and interact with machines. While these three letters, HMI, might seem like just another acronym, they are one of the keys to our future world. And one of the areas where HMIs are already ubiquitous is in electricity generation and transmission. They are a key feature of grid modernization.

HMIs and the power grid

You can find HMIs in power plants and substations as well as in wind and solar farms. According to the glossary of the global standard-setting organization for the electrotechnical sector, the International Electrotechnical Commission (IEC), it is a “display screen, either as part of an intelligent electronic device (IED) or as a stand-alone device, presenting relevant data in a logical format, with which the user interacts. An HMI typically presents windows, icons, menus, and pointers, and may also include a keypad to enable user access and interaction.”

Power grids are getting smarter, which allows them to operate in a more energy-efficient and effective manner; HMIs are typically “the face” of this process. The HMI application plays a key role in the visualization and control of substation automation systems or the monitoring of the real-time status of a solar or wind farm, for example. Engineers, technicians, and operators depend on the information collected and relayed by IEDs to get a clear picture of the state of the substation and the distributed energy resources (DER). These DERs could be wind turbines, a solar farm, or a microgrid, for example. As the power grid continues to modernize, the dependency on HMI applications will therefore increase and operators will require help to monitor and control multi-vendor systems.

We will increasingly interact with machines in the future.

Need for vendor-agnostic systems

HMI applications are built from graphical building blocks, including basic shapes, colors, text, forms, and pages, to communicate and exchange information. Utilities increasingly want HMIs to work with any vendor's IED, with minimal manual configuration. A vendor-agnostic solution would simplify installation, reduce maintenance costs, and diminish the complexity of power-automation systems. It would facilitate interoperability with multi-vendor IEDs and support data-driven configurations that place the work burden on tools instead of human beings.

Unfortunately, all the graphical components and building blocks that go into an HMI are assembled in a proprietary fashion by HMI software manufacturers. To date, there aren’t any standardized means of specifying, designing, and commissioning HMI applications. 

New international standard in the works

But this is about to change. The IEC is working on a new document that aims to define the configuration languages required to achieve digital substations, including the HMI application. The planned standard, which is currently being drafted, will be part of the IEC 61850 series of publications, which includes some of the core international standards used for integrating digital communication processes into the existing electrical grid.

One of the objectives of the new publication is to automatically generate the HMI application, including all the associated data mappings and graphical renderings. This effectively frees operators, engineers, and technicians from carrying out a manual configuration of the substation system, saving utilities time and cost by using resources more efficiently. It also removes the risk of human error. “You could call it ‘magical engineering’: Instead of taking weeks, sometimes even months, to configure the HMI applications, it literally will take minutes and even seconds for smaller substations,” says Dustin Tessier, who leads the task force responsible for the new standard project inside the IEC.
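To make the idea of data-driven generation concrete, here is a minimal, purely illustrative sketch in Python. None of the names below (`DataPoint`, `build_hmi_page`, the widget vocabulary) come from IEC 61850; the point is only that once device data is described in a vendor-neutral way, the corresponding HMI widgets can be derived mechanically from that description rather than configured by hand:

```python
# Hypothetical sketch of data-driven HMI generation: widget layouts are
# derived from a vendor-neutral device description instead of being
# hand-configured per vendor. All names here are illustrative only and
# are not part of the IEC 61850 series.
from dataclasses import dataclass

@dataclass
class DataPoint:
    name: str        # e.g. "XCBR1.Pos" (breaker position, IEC 61850 naming style)
    kind: str        # "status", "measurement", or "control"
    unit: str = ""   # e.g. "kV", "A"

def build_hmi_page(substation: str, points: list) -> dict:
    """Map each data point to a generic graphical building block."""
    widget_for_kind = {
        "status": "indicator-lamp",
        "measurement": "numeric-readout",
        "control": "two-state-switch",
    }
    return {
        "title": f"Substation: {substation}",
        "widgets": [
            {"label": p.name, "widget": widget_for_kind[p.kind], "unit": p.unit}
            for p in points
        ],
    }

page = build_hmi_page("Demo-SS1", [
    DataPoint("XCBR1.Pos", "status"),
    DataPoint("MMXU1.PhV", "measurement", "kV"),
])
print(page["widgets"][0]["widget"])  # indicator-lamp
```

In a real substation, the role of the hand-written list of data points would be played by a standardized configuration file, and the generated structure would be rendered by the HMI software.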

California dreamin’

The HMI document is based on a proof-of-concept technology developed by Southern California Edison (SCE), the primary electricity supply company for most of Southern California. 

For many in the electricity transmission industry, SCE is viewed as a compass: Other utilities follow the company’s technology roadmaps and its data-driven HMI application is just another example of its technological savviness. The HMI is part of a 3rd-generation substation automation architecture based on IEC 61850 standards, developed by the company.

Mehrdad Vahabi is one of the engineers who worked on the HMI prototype. “Southern California Edison has always been a forward-thinking utility. In 2010–11, the company decided to modernize the grid. While HMIs were already used, they were proprietary, which created a number of problems including cost, the amount of manual work, and the time required to make changes to the systems. These legacy problems with HMI were one of the major reasons for moving to 3rd-generation substation automation,” Vahabi explains.

During their research, SCE engineers came into contact with the IEC 61850 standards and their applications for substation automation. “They are a very useful toolset but the HMI part was not yet standardized. We got involved with the IEC experts working on these aspects. We proceeded to implement our prototype in the field and give them information, which was fed into the drafting of the new IEC document,” Vahabi adds.

SCE has already started implementing the new HMI in its substations. “The plan is to automate 400 substations with this SA-3 technology by 2028,” Vahabi indicates. Further down the line, the company plans to prototype a totally virtualized substation automation system in the lab.

It may be a brave new and increasingly complex world out there but it would seem that, with HMIs, we have some of the tools to overcome many of these complexities. And the power grid is a great place to start.

About the IEC: The IEC is the international standards and conformity assessment body for all fields of electrotechnology. The IEC enables global trade in electrical and electronic goods. Via the IEC worldwide platform, countries are able to participate in global value chains, and companies can develop the standards and conformity assessment systems they need so that safe, efficient products work anywhere in the world. The emergence of smart grids is changing the way electricity is transmitted and distributed. IEC International Standards are paving the way for this transition by helping to integrate new technologies such as HMIs into the existing network.


Catherine Bischofberger

Catherine Bischofberger is a writer and technical communications officer at the IEC. Previously she worked as a journalist and editor on the IBC daily and wrote for many B to B publications, in a number of technology fields (IoT, cloud storage, technology for sports broadcasting, etc.) Before coming to Switzerland and joining the IEC, she worked for 15 years in Paris for The Film Français, a French B to B film industry publication. Prior to that she edited a number of broadcast technology magazines in London.


Making the child-computer interaction field grow up?

Authors: Olof Torgersson, Tilde Bekker, Wolmet Barendregt, Eva Eriksson, Christopher Frauenberger
Posted: Fri, October 05, 2018 - 2:49:57

Child-computer interaction (CCI) as a specialized field within human-computer interaction (HCI) has developed gradually, from the early works of Seymour Papert and Mitchel Resnick at MIT to the more recent and substantial work by key people such as Allison Druin, Yvonne Rogers, and Mike Scaife. However, a major milestone for the field was the establishment of the annual conference series “Interaction Design and Children” (IDC) in Eindhoven in 2002. 

Five years after the conference’s instantiation, several of the founders looked back at the development of the field. They concluded that “CCI is still finding its way. Relating to sociology, education and educational technology, connected to art and design, and with links to storytelling and literature, as well as psychology and computing … this new field borrows methods of inquiry from many different disciplines. This disparity in methods of enquiry makes it difficult for researchers to gain an overview of research, to compare across studies and to gain a clear view of cumulative progress in the field.” 

In 2016, several of us visited the CCI conference in Manchester. While attending the presentations, we jokingly started to use the term “fun-paper” to refer to presentations of the design of a cool (or not-so-cool) technology for children that include a short description of an evaluation showing that the children liked the technology. While some of the technologies presented were indeed novel, well done, or exciting, we felt it would be very hard to build further upon the papers. Coincidentally, Panos Markopoulos, as a keynote speaker at the conference, discussed the development of the CCI field since its inception. One of the observations in his talk was that the IDC conference tends to mirror the latest technological developments: When a new technology is introduced, it tends to spur a flow of papers exploring its application to the domain of interaction design and children, while the previous ones fade away. He argued that the wish to explore the possibilities offered by the latest technological advancements is natural, but that the recurrent shifts of focus raise a question: Can the exploration of one technological platform lead to any knowledge specific to the field of CCI that can be applied to the next trend that comes around, or does the field risk falling into a constant exploration of possibilities, where only a small part of the knowledge gained from exploring one trend carries over to the next?

Based on both his and our observations of the relative immaturity of the field, we wrote a paper in which we explored whether our gut feeling was true: that many papers mainly describe a technology but offer few pointers to knowledge applicable or usable in other design projects. Our conclusion, based on coding all papers from the previous IDC conferences as either “fun” (artifact-centered) or “not fun” papers, showed that our feeling was not ungrounded. Furthermore, a short inspection of the number of citations for these categories showed that artifact-centered papers were cited significantly less often.

However, we also felt the need to not just complain, but also suggest some alternatives or ways to help mature the CCI field. As CCI is a sub-field of HCI, we of course turned to the literature within the field to see if there were some solutions ready at hand. The idea of creating intermediate-level knowledge (ILK) residing somewhere between theories and instances, as proposed by Kristina Höök and Jonas Löwgren, appealed to us, so we continued with a first attempt to create a specific form of intermediate-level knowledge, strong concepts, based on the artifact-centered papers we had found. Seeing this mainly as an exercise, we were aware that ILK may not be the best way forward for the field, but we concluded that there may be a potential to create this kind of knowledge in the form of strong concepts. In our paper we thus invited the CCI community to participate in efforts to make the field more mature by organizing forums for this kind of analytical work, for example in the form of workshops, dedicated paper sessions, and/or a wiki. Since we felt the need to at least accept this invitation ourselves, we organized a workshop at IDC 2018 in Trondheim. We called our workshop “Intermediate-Level Knowledge in Child-Computer Interaction” and invited 1) researchers and designers who position themselves as producing intermediate-level knowledge and 2) people in the field of design research who have not necessarily thought about their work as producing intermediate-level knowledge. 

Fifteen colleagues met with us at the workshop, confirming, provoking, and challenging the maturity of CCI and the concept of intermediate-level knowledge while struggling to find ways forward for the field. Although the initial intention was to share practices for creating intermediate-level knowledge, the discussions quickly turned to the suitability of the concept itself. In its original form, it suggests a continuum in which knowledge matures from being strictly situated in a case (an artifact) to a generally applicable theory. This is problematic for two reasons: First, it suggests a strict ordering of knowledge quality that privileges positivism over pragmatic or hermeneutic perspectives on knowledge creation. Second, it suggests that the two theoretical frames, in which the extremes of the continuum live, are in any way compatible. As a result, intermediate-level knowledge is vulnerable to being perceived as weak by the positivists (the bad-science argument) and as missing the point by the hermeneutic tradition (the irrelevant-outside-your-lab argument). Without a theoretical basis that offers a more equal perspective on the different ways of knowing, embracing the view from everywhere rather than upholding the illusion of a view from nowhere, it will not be possible to find effective representations of knowledge that span these different perspectives. Rather, we run the risk of reliving the science wars. Another consequence of this conception of intermediate-level knowledge is that it privileges abstraction over transfer, which may still be a sign of loyalty to the scientific roots of HCI. For instance, is the way to instantiate a strong concept not as valuable as the concept itself? Yet we locate the contribution in the abstract concept, not in the transfer, including the knowledge of its situated dependencies. Have we focused too much on static knowledge representations and too little on the process of transfer?

It was concluded in the workshop that CCI is still in its infancy and that one of the central means to mature the field is engaging in the debate about knowledge representations and internal rigor. This requires that we appreciate the many different ways in which CCI produces insights and find ways to connect them to build a holistic, rather than generalizable, body of knowledge. The outcome of the workshop does not suggest immediate answers, but rather raises questions and calls for developing appropriate knowledge forms. Although the workshop, and thus this blog post, focuses on CCI, we argue that the same problem of growing up applies to many young sub-fields in HCI.


Olof Torgersson

Olof Torgersson is an Associate Professor at the Department of Computer Science and Engineering at the University of Gothenburg in Sweden. His research focuses on user-centered and participatory design of (mobile) technologies for children.

Tilde Bekker

Tilde Bekker is an Associate Professor at the Department of Industrial Design at Eindhoven University of Technology in the Netherlands and an Honorary Professor at the Design School Kolding in Denmark. Her research focuses on user-centered and participatory design of (educational) technologies for children.

Wolmet Barendregt

Wolmet Barendregt is an Associate Professor at the Department of Applied Information Technology at the University of Gothenburg in Sweden. Her research focuses on user-centered and participatory design of (educational) technologies for children, such as games, reading technologies, and social robots.

Eva Eriksson

Eva Eriksson is an Assistant Professor at the School of Communication and Culture, Department of Information Science at Aarhus University in Denmark, and a senior lecturer at Chalmers University of Technology in Sweden. Her research focuses on participatory design with children, and interaction design in public knowledge institutions.

Christopher Frauenberger

Christopher Frauenberger is a senior researcher at the Human-Computer Interaction Group, TU Wien (Vienna University of Technology). His research focuses on designing technology with and for marginalized user groups, in particular autistic children. He is committed to participatory design approaches and builds on theories and methods from diverse fields such as action research, disability studies, philosophy of science, and research ethics.


SketchBlog #1: The rise and rise of the sketchnote

Authors: Miriam Sturdee
Posted: Tue, August 14, 2018 - 10:36:38

This blog post was co-authored by:

Miriam Sturdee, postdoctoral fellow in sketching and visualization, University of Calgary

Makayla Lewis, research fellow, Brunel University London

Nicolai Marquardt, senior lecturer in physical computing, University College London

If you’ve been to an HCI conference, workshop, or event recently, chances are you may have seen people sketchnoting—either as part of the main conference organization in the form of visual facilitation, or simply as part of personal practice. 

Some of you may be wondering: What is a sketchnote? When we take notes, we are saving interesting points and ideas from a talk, panel, workshop, experiment, or participant to return to later. When we take SKETCHnotes, we are adding sketched visual elements to those points and ideas, whether it is as simple as emphasizing text, or adding icons and thematic references to the item being recorded (see Figure 1). Anyone can sketchnote; you don’t have to be an artist, illustrator, or even be able to draw—the important part is the ideas and thoughts you capture and developing the style that works best for you.

Figure 1. Sketchnote summarizing the closing keynote by Geraldine Fitzpatrick at ACM ISS 2017 by Makayla Lewis.

Sketchnote practitioners often believe recording information in a visual manner helps encode data for better recall, and the act of recording helps them to concentrate on complex topics or within often lengthy knowledge-sharing or idea-generation sessions. There is also joy in sharing these sketchnotes after a conference, workshop, or event (even with friends—sketchnoting goes beyond simply recording talks) to promote or reach out to the speaker, attendees, and non-attendees, or simply to share them as part of your personal or professional practice. The sketchnote has a life after a conference, workshop, or event, unlike the vague scribbles we may all be familiar with that languish at the back of a little-used notebook in our desk drawer. Spending time on your notes means you are more likely to return to them for reference, simply to reflect, or to use them as a visual prompt to support discussion with others.

With this new series of blog articles for Interactions, we will share many examples of sketchnotes from CHI and other related events and conferences, but also reflect on the practice of sketchnoting and share techniques of how to integrate sketchnoting as an everyday practice.

On Discovering Sketchnoting

Figure 2. Alan Borning at ACM Limits 2018 by Miriam Sturdee.

Miriam: During my M.Res. year I was introduced to the sketchnote by a management science lecturer who was interested in alternative formats for recording and disseminating information. Eager to try out this visual form of notetaking, I read up about sketchnotes, took two pens, and attempted the practice at the next research talk I attended. It met with varied success: The Sharpie I was using had too fat a nib, and the ink bled through the cheap cartridge paper I had dug up from my art box. The resulting images were chunky, and somehow in my haste I had chosen a brown color to contrast my line drawings. Nevertheless, I had two pages of passable notes, and I still remember the talk by Emmanuel Tsekleves. For my second attempt, I used a sketchbook with thicker pages and a lighter green color to contrast the imagery, but also tried to record every element of that talk (by Chris Speed); the imagery was rushed, the words almost unreadable. I am happy to report, however, that I got steadily better, not least due to my acquaintance with, and constant encouragement from, Makayla Lewis, my now long-time collaborator. I share my sketchnotes on my Twitter account @asmirry.

Figure 3. Sketchnote of the workshop on the Future of Computing & Food by Makayla Lewis.

Makayla: Recently, I was interviewed by the Adobe Blog on “The Power of Sketchnoting in UX Design,” where I talked about how I use sketchnotes to brainstorm ideas, explore and understand new concepts, share user scenarios with colleagues, draw interviews to ensure a shared viewpoint with users, visualize interactions, and design futures to aid discussions and co-creation, helping users clearly and simply express and share their experiences. Since 2012, I have put pen to paper (to be specific, a UniPin 4.0 black pen and Copic markers, a pastel shade and C3 grey, to a large Moleskine sketchbook, and sometimes a Surface Pen to a Microsoft Surface) at events, conferences, interviews, workshops, and at my desk. As an HCI researcher with an interest in user experience, human factors of cybersecurity, and accessibility, I have created a back catalog of 500+ sketchnotes, of which 343 can be viewed publicly. Low-fidelity sketches often help to express experiences and complex content. They allow HCI researchers to better communicate their ideas and share designs with colleagues, users, and stakeholders. Sketchnotes can enhance these sketches; the inclusion of simple connectors, containers, and separators, with consideration of structure and style, can better support the thinking process and the communication of these thoughts to others. Thus, I think awareness of and competency in sketchnoting could be beneficial for HCI researchers. I enjoy sharing my sketchnotes and process on my Twitter @maccymacx.

Figure 4. Sketchnote of Brad A. Myers Lifetime Achievement Award presentation at CHI ’17 by Nicolai Marquardt.

Nicolai: I started sketching during my M.Sc. and Ph.D. studies and quickly found that sketched ideas and concepts can very effectively support my research and design process in HCI. Later, while studying for my Ph.D. in Calgary, I joined as co-author to create the Sketching User Experiences Workbook (together with Saul Greenberg, Sheelagh Carpendale, and Bill Buxton), which summarizes many of the possible methods of using sketching techniques in HCI. During that time, I also began reading about using sketches for visual notetaking and came across Mike Rohde’s blog and books (see links below). I began creating visual notes of talks, meetings, and many other events I attended. An example from last year is my visual summary of Brad Myers’ inspiring Lifetime Achievement Award talk at CHI ’17 (Figure 4). For me, sketched notes help me stay active and engaged during talks and make it easier to synthesize what is most important and how aspects of the presented work relate to my own research. Many of my sketchnotes are shared on my Twitter account @nicmarquardt.

Sketchnote Tips and Tricks

To help anyone who is interested in taking up sketchnoting, and perhaps broadening their horizons in drawn imagery in general, we have put together some advice so that you can avoid the pitfalls some of us encountered along the way. Remember, the sketchnote mantra is IDEAS NOT ART: Nobody expects a beautiful drawing as an outcome. Instead, it is important to keep practicing creating visual notes; you will see progress and find a style that works for you.

Choose your tools wisely. We’ll return to tools in a later blog, but for now, find a good pen that flows well and is consistent, and some paper or a sketchbook. It may also help to find a stiff board (or other material) to lean on. You can also work straight into your tablet or touchscreen laptop, but it might be best to start with simple pen and paper.

Use color sparingly. For now, start with black and one pastel or light color to emphasize important points, plus a grey marker for shading.

Practice icons you think you will use regularly. For example, if you study smartwatches, a watch icon that you can draw quickly and in different orientations will be invaluable. Likewise, generic terms such as “AI” might be represented by a robot head, a computer with a face, etc.

Arrive early (if you can). It will give you time to find a good seat, preferably near the front. When at the front, you can draw a quick portrait sketch of the speaker(s), read all the slides without squinting, and are less likely to be disturbed by people coming and going. While you wait for the talk to begin, check the talk title, prepare your page, and get all your pens in order.

Don’t try to capture everything. Instead, try to capture the salient points, interesting quotes, and other items that jump out. You’ll be surprised by the depth of your sketchnote.

Use the Q&A time to verify and finish up. Fill in the gaps (if you are unsure about something, ask the speaker a question), complete areas you may have missed, and color or shade your sketchnote. You may not have time to come back to your notes for a while, so it is better to complete them during the session. Let’s be honest, sketchnotes are ultimately notes—how often do you return to complete your notes?

There are some great sketchnote resources on the web; here are a few to get you started:

Marquardt, Nicolai, and Saul Greenberg. “Sketchnotes for Visual Thinking in HCI.” Workshop paper at the ACM CHI ’12 Workshop on Visual Thinking and Digital Imagery.

The Sketchnote Handbook by Mike Rohde

The Sketchnote Workbook by Mike Rohde

Sketchnote Hangout by Makayla Lewis

Visual Notetaking for Educators by Sylvia Duckworth

19 Sketchnote Style Sheets by Makayla Lewis

100 + 1 Drawing Ideas: 100 + 1 Drawing Ideas for Sketchnoters and Doodlers by Mauro Toselli

Visual Thinking: Empowering People & Organizations through Visual Collaboration by Willemien Brand

365 Sketchnote Challenge by Makayla Lewis and Nidhi Narula

Posted: Tue, August 14, 2018 - 10:36:38

Miriam Sturdee

Miriam Sturdee is Postdoctoral Fellow in Sketching and Visualization at the University of Calgary.

Being an HCI researcher working with refugees

Authors: Reem Talhouk
Posted: Thu, August 09, 2018 - 9:47:44

In our July–August 2018 Interactions article, “HCI and Refugees: Experiences and Reflections,” my co-authors and I really wanted to document all the discussions we have been having about what it means to be HCI researchers working intimately with refugee communities. In the article, we aimed to bring forth the challenges experienced while conducting fieldwork and how our research is influenced by our own principles as well as the agendas of other stakeholders. We often find that, as interesting as these conversations are, they unfortunately don’t make their way into our publications. So what is the value of documenting such reflections? Upon the release of the article, I received an email from a fellow Ph.D. student saying that the article made her feel that she is not the only one experiencing challenges in conducting this type of research. That in itself gives value to articles such as this.

Using mock-ups with refugees in a settlement in Lebanon

On several occasions when working in refugee settlements in Lebanon, I have found myself witnessing great injustices and hardships that have made me question my role and what my research can possibly do to support refugee communities. My co-authors and other researchers have discussed having the same concerns. We found ourselves reflecting on how, more and more, we find ourselves embracing our activist selves and aligning our research with the agenda of refugee communities. However, immersing ourselves in our research so that we can even begin to understand refugee experiences and what the communities we work with expect from us may come at the cost of our own emotional well-being. Indeed, working within such contexts places you face-to-face with individuals and families who are recounting their overwhelming experiences. Such encounters make you as an individual feel helpless and as a researcher feel minuscule, as you realize that there is not much one research project can do. Such feelings are further exacerbated when you are back in the comfort of your own home and realize that you are living a completely different reality than the communities at the heart of your work. Such reflections take an emotional toll on researchers as they attempt to reconcile their experiences with refugee communities and their own lives. It is because of these emotions, expressed during the Communities & Technologies 2017 workshop, that we dedicated a whole section of the article to researcher health and well-being. As such, we encourage researchers in this field to seek out peers with whom to share their experiences and to reflect on how the work is influencing their health and well-being.

As discussed in the article, such reflexive processes should be inherent to our work. What we find is that, given the highly political nature of the refugee crisis, the reflexive process brings to the forefront our own political views and values. However, we often found ourselves in meetings attempting to quiet the screams of frustration in our heads as we diplomatically smiled at stakeholders expressing political views we disagree with. Quite frequently we need to engage with such stakeholders to access refugee communities, and this puts us in a precarious position, where I keep questioning: “Where do I draw the line?” “What things that stakeholders say should I shrug off, and what things should I argue with?” Unfortunately there is no simple answer. I once had to sit through a meeting in which a gatekeeper talked negatively about refugees throughout, and I had to diplomatically navigate the conversation so that I neither opposed him nor agreed with him. I must say, it is very difficult to remain neutral on a topic that is so intimately tied to your beliefs and political views. However, in cases such as this, neutrality is essential when considering the larger objective of my research, which is to support refugee communities through technological innovations.

In the article we highlight the types of conversations we should be having as HCI researchers working in this field. Additionally, we provide guidelines based on our experiences that we hope would benefit other researchers in the field. As a group we are very open to having these conversations and would be more than happy to have chats with others in the field, even if it is just part of their reflective process.


Reem Talhouk

Reem Talhouk is a doctoral trainee in digital civics at Open Lab, Newcastle University. Her research encompasses the use of technology to build refugee community resilience in Lebanon.

‘What people see’ versus ‘what people do’: Some thoughts on the cover story on visualizations

Authors: Nikiforos Karamanis
Posted: Mon, August 06, 2018 - 11:24:53

I read with a lot of interest the latest cover story on data visualizations by Danielle Albers Szafir, particularly since I recently gave an introductory seminar on this topic to Ph.D. students attending the Bioinformatics Summer School at the European Bioinformatics Institute (EMBL-EBI).

The cover story made some very good points that I'll refer to in the next version of my seminar. However, I think it would have been even stronger if it:

  • Cited some additional seminal background work in relation to “what people see.”

  • Mentioned (even in passing) the importance of studying “what people do” (which has been fairly firmly established within the UX and HCI communities).

The cover story focuses on “understanding what people see when they look at a visualization” to design visualizations “that support more accurate data analysis and avoid unnecessary biases.” This is very valuable, particularly within the context of a “how to” article which needs to be brief and practically applicable.

Nonetheless, I think that it would have been useful to mention the hierarchy of visual channels (see Cleveland & McGill 1985, Mackinlay 1986, Heer & Bostock 2010) especially given that position is considered to be an even better way to encode quantitative data than sequential or divergent color maps. Figure 2 in the recent review by O'Donoghue et al. (2018) provides an excellent visual overview of the hierarchy combined with succinct practical advice on the use of color maps.

Additionally, given that the article is about "graphical integrity," I was expecting it to refer to Edward Tufte, to whom this principle is attributed. I was also a bit surprised not to see a reference to Tamara Munzner's textbook for those who are new to the field but want to study interactive visualizations in more depth.

My own audience is early career life scientists so I based my seminar on the Points of View columns on data visualization in Nature Methods, which is a familiar and inspirational journal for them. Given that some of the examples in the cover story came from biology, citing this resource may have been useful too for that readership.

In my role as a UX practitioner, I rely on particular methods to understand the needs of life scientists (such as interviews and contextual observations), to capture these needs (typically as user personas and task models) and to formulate the question that we are trying to answer with a visualization (for example, as a problem statement or a job to be done). In other words, I focus as much on “what people do” as on “what people see,” by applying methods that are fairly firmly established within the HCI and UX communities.

When I tell the story of how we designed a visualization or a whole web application for a particular service in EMBL-EBI, these methods stand center stage. Although the importance of qualitative field work and analysis has been highlighted, for example, in Munzner’s design study methodology framework, my impression is that these popular UX methods are still not routinely embedded in the everyday process of data visualization researchers and practitioners. The emphasis on “what people see” in the cover story reinforced this impression.

At EMBL-EBI we bring together experts from industry and academia to address current challenges in data visualization faced by our industry partners, in an attempt to bridge the gap between data visualization researchers, the HCI community, UX practitioners, and domain experts (especially from the pharmaceutical and agro-food industries).

A photo from EMBL-EBI's recent workshop on “Innovations in data visualization for drug discovery.”

I hope that this blog post will help all of us who are part of these diverse and active communities focus on “what people do” in addition to “what people see.”

In closing, I’d like to thank Danielle Albers Szafir for writing the cover story and the editors of Interactions for publishing it.

I welcome your feedback on these thoughts.


Nikiforos Karamanis

As a Senior User Experience Designer at EMBL-EBI, I enjoy spending time with life scientists, developers and other stakeholders and helping them work together to achieve their goals using Lean User Experience methods.

Values tensions in academia: An exploration within the HCI community

Authors: Team ViC
Posted: Mon, June 25, 2018 - 11:31:49

Wish you were here - by @_JPhelps

February and March 2018 saw the largest ever industrial action in the U.K.’s higher-education sector. While the cause of the strike was changes to the USS pension scheme, the picket lines were sites for conversations about many other issues within academia. Whether it was dissatisfaction with the corporatization of universities, the precarious working conditions of early career researchers, or over-work, there was a clear sense that the values held by those striking were in sharp contrast with the realities of university life. The “depth of feeling” was often bitter and angry, and the frustration with today’s higher-education system was palpable. 

While many reported a loss of trust in the system and in their own institutions, fresh hope and renewed energy came from activities such as teach-outs: open teaching and discussion sessions held off campus. These initiatives offered concrete examples of different ways of engaging with learning and research across disciplines and roles; ideas were proliferating like a “thousand butterflies.” Many now feel that very broad bridges are needed to start filling the values gap that manifested itself so clearly during the strikes. As Prof. Stephen Toope, vice chancellor of Cambridge University, puts it, “The focus should be on what values our society expects to see reflected in our universities, not just value for money.”

Figure 1. Schwartz's values model. Adapted from (Schwartz 2012).

From the HCI community standpoint, a similar value tension was captured by a survey carried out as part of the “Values in Computing” (ViC) workshop at CHI 2017. With just over 150 respondents, the survey explored views about the values driving HCI research at a personal and institutional level. The survey was designed around Schwartz’s values model (Figure 1) and tried to capture relationships (i.e., lines of friction) within and between the personal and institutional values held by the HCI community. Although the survey was exploratory and the sample may not be representative of the whole HCI community, the numbers did show tensions within the community.

Overall, most respondents (57%) felt their values matched their institution’s to some extent. However, almost a third reported that their values either did not match (26.5%) or did not match at all (6%). The survey also asked respondents to rank a list of options according to which were most highly valued in their work; this is where the values tensions became manifest. As the bar charts in Figure 2 show, the top three most highly ranked options were “making the world a better place”; “competence and intellectual independence”; and “relationships with colleagues, students, and partners.” These statements were designed to represent Universalism, Self-Direction, and Benevolence in Schwartz’s values model and followed previous research guidelines. Positive societal impact, autonomy of thought, and meaningful relationships were thus the things that these computing professionals most valued about their work. By contrast, financial recognition (Power) was the least valued.

Figure 2. Personal and organizational values ranking.

Respondents were then asked to rank a similar series of options according to what they thought their institution or organization most valued. They felt that their institution most highly valued “financial success,” “international prestige,” and “league tables/rankings,” all three of which belong to the Power values group. By contrast, the bottom three options were “making the world a better place through work, research and teaching,” “staff relationships with colleagues, students, and research/work partners,” and “supporting the well-being of staff, students, and partners.” Thus, the things that the respondents most valued—with the exception of intellectual autonomy—were seen as not highly valued by their institutions.

The implications of this friction between personal and perceived institutional values cannot be ignored and deserve further attention. Even if this tension may be, to a certain extent, “perceived” or “inevitable” or both, the widening of the values gap may have problematic consequences. For example, recent research and extensive media coverage worldwide suggest high levels of stress and mental health problems within academia. However, the emphasis of these studies is often on the temporal and mental burdens created by the demands of the workplace, and on the need for raising awareness and promoting self-care (e.g., through apps and physical activity).

Something that isn’t often talked about is whether the values tensions may have health and well-being implications, and the need to dig into the root causes of these tensions before defaulting to self-care coping mechanisms. This may be particularly the case in the HCI community, as many of us grapple not only with personal challenges, but also with the challenges of a much deeper and broader “existential crisis.” This is especially important because much of HCI research focuses on designing and developing digital technologies that can change people’s lives rather than examining how digital technologies come to life. We need to look into values tensions not only for end users and broader stakeholders, but for us—researchers, educators, designers, and developers.

To this end, we argue that a better understanding of values is needed, especially when it comes to computing technologies. From a research and practice perspective, this means to build on, but also go beyond, the substantial corpus of research in ethics and the well-established research field of value sensitive design (VSD). 

Our question for the HCI and broader computing community is how to bring into the open the personal, institutional, and political values tensions manifesting in our workplaces (i.e., academia, research). In other words, how can we equip the next generation of computing professionals with the deliberative, technical, and critical skills necessary to distinguish what is worth pursuing from what is potentially harmful to self and society? And how can we create and support institutions where this civic purpose can flourish?

Thank you!

This work is part-funded by the Engineering and Physical Sciences Research Council UK (Grant number: EP/R009600/1). Warm thanks go to our project partners and to the CHI 2017 ViC workshop participants, who have jointly shaped the vision and direction of this research. A special mention also goes to the thousands of conversations had with colleagues and students within our school and across campus. More information about ViC and related work can be found at


Team ViC

Team ViC is a research team with expertise in rapid prototyping, agile development, and participatory action research. The ViC core team is flexible and can quickly reconfigure to bring in extra expertise and support from its research and industry partners. We have many years’ experience of working together and in partnership with communities, practitioners, and businesses in projects such as Catalyst and Clasp.

Reflecting on the design-culture connection in HCI and HCI4D apropos of Interact 2017 field trips

Posted: Tue, May 01, 2018 - 12:15:45

Note: This blog post was co-authored by Nimmi Rangaswamy, associate professor at the Kohli Centre on Intelligent Systems, Indian Institute of Information Technology (IIIT) Hyderabad, where she brings an anthropological lens to understanding the impacts of AI research and praxis. She is also adjunct professor at the Indian Institute of Technology (IIT) Hyderabad, where she teaches courses at the intersection of society and technology.

In this blog post we provide our own personal reflections as a consequence of being asked to organize the “field trips” track for the INTERACT conference in Mumbai. We knew field trips would lead those engaged with them on a necessary journey to look at the multiple, often contested, connections between culture and the process and product of designing technology for people. Sidestepping postcolonial pitfalls, we hoped the field trips track would facilitate the translation of local knowledge into valid and useful design insights, redefining and renegotiating boundaries and relations between product and user. After all, engaging with indigenous awareness in the course of field trips should lead to interesting realizations about the ontological and epistemological assumptions of what constitutes useful, usable, and, importantly, meaningful design. These realizations from the field are also configured by the different worlds and traditions we have grown up in. Rather than feeling, drawing from a positivist epistemology, that we are not being truthful or valid by allowing ourselves to “contaminate” our experience of the other, we should instead embrace this, capitalizing on the rich phenomenological encounters afforded by field trips. It goes without saying that a good chunk of our job as designers and researchers is to empathize and find new meanings and connections in existing things, objects, and practices to innovate and make life better in whichever material and experiential ways possible. The best way to do this, to our knowledge, is to merge and collide viewpoints, traditions, and ways of thinking; to provoke situations of breakdown in a Heideggerian sense, where established and often tacit values and knowledge become “present at hand,” coming to light so that we can do something with or about them.

Field trips in India were a unique opportunity for these breakdowns to occur in the crossing of traditions spawned by the inter-meshing of the diverse external delegates and researchers with local communities. In the words of Professor R.K. Mukherjee: “India is a museum of cults and customs, creeds and cultures, faiths and tongues, racial types and social systems.” Field trips opened up the opportunity to discover and reflect on cultural spaces without having to rely on ready-made Hofstedian national cultural models, out of which so many research and design projects emerged, ran their course, and failed. More importantly, cultural spaces reflected upon and discovered were not only those of the other but also those of ourselves, emerging as a necessary consequence of mutual reflection and recognition. This is why in this blog post we develop a brief reflection on this design-culture connection in the context of the agenda for HCI in the developing world.

Culture continues to be a contested construct for humanists and social science scholars. Likewise, its value for design-driven academics and professionals regularly comes into question. However, the concept of culture focuses us on the semiotics that allow us to reflect on our condition of being symbolic beings shaped by beliefs and emotions. This in turn enables us to see the need for technologies to be more human, and to be able to do something about it.

The focus on making technologies for humans while taking into account diverse cultural and contextual positions should then be part of the default agenda HCI for development (HCI4D) as a research domain. HCI4D researchers and practitioners have documented how decisions in technology design influence technology usage, adoption, and the resulting impacts on a multiplicity of use scenarios and users with social consequences. Recognizing that technology is neither culturally neutral, static, nor deterministic reinstates “context” as a harbinger not only for new design choices but for a more immersive and usable HCI product.

HCI4D as a domain and a community of researchers is engaged in the play of technology in quotidian and unusual domains such as diasporic space, conflict zones, low-literacy, reproductive health, and communities on the urban edge. A focus on such topics leads to a discussion on technology for development and a focus on marginalized populations in both developing countries and industrialized nations. In short, HCI4D operates at the intersection of HCI and socioeconomic development with an evolving sensitivity to technology design and use in diverse geographic regions. The field has steadily required increasing receptivity to involve and fuse varied academic and research domains/backgrounds, from sociocultural anthropology to the engineering sciences, with an array of disciplines such as behavioral and development economics, the cognitive sciences, and, not least, the spectrum of design disciplines. Being inclusive, HCI4D presses into service engagements with seemingly disparate sciences and initiates a dialogue in the production of an inclusive design community—one that draws from collective and assorted technology experiences and shapes evidence-based research to impact and strengthen multiple interactive technology scenarios for hitherto invisible yet contemporaneous populations.

Dell and Kumar summarize the HCI4D research area drawing upon four seminal references that set the context, precursors, and current engagements for the domain. Chetty and Grinter, who coined the term HCI4D, argue that entrenched HCI techniques and pedagogy must stay tuned to the shifting technology landscapes of use if they are to function effectively in designing impactful computing products for an array of contexts, especially the Global South. Burrell and Toyama offer a set of definitional pointers to carve out methodological trajectories constituting good research methods and analysis for a multidisciplinary and inclusive field such as ICTD and HCI4D. The value of field immersion for a context-driven HCI was demonstrated by Anokwa et al., who reflect on “stories from the field,” highlighting cultural, linguistic, and social challenges in the research endeavor with technology users from cultural contexts far removed from those of the researcher. These authors were instrumental in grounding the methodological practices of HCI4D firmly in context.

Over the course of time, as Dell and Kumar point out, what the “D” in HCI4D refers to remains a topic of intense debate among the many interdisciplinary scholars of the HCI4D ilk. There is general agreement about what is described as a focus on development in low-resource settings and/or marginalized communities; low-resource and marginalized are pretty broad terms to suggest recipients of development initiatives. HCI4D research for its part has maintained a focus on design for better access and usability qualified by low-resource settings. Issues of constraints—infrastructural more than cultural—were a running theme, as well as concerns for social justice and a variety of eco-political agendas. Dell and Kumar bring to the fore that “…varied perspectives show HCI4D is an amorphous amalgam of interests that brings together a community of people from varying perspectives…” It seems that HCI4D faces a set of thorny methodological issues and the challenge of granting the domain a definitive identity. The field trips we pioneered demonstrate these issues but with a positive twist, giving the HCI4D field the excitement of an emergent research ground—and we are becoming a part of it! We invite you to read the Special Topic in Interactions where the experiences of the different INTERACT 2017 field trips are reported:

Debjani Roy, José Abdelnour-Nocera, Nimmi Rangaswamy

A Mobile App for Supporting Sustainable Fishing Practices in Alibaug
Morten Hertzum, Veerendra Veer Singh, Torkil Clemmensen, Dineshkumar Singh, Stefano Valtolina, José Abdelnour-Nocera, Xiangang Qin

Engaging Different Worlds, One Field Trip at a Time
Sumita Sharma, Andreea I. Niculescu, Grace Eden, Gavin Sim, Dhvani Toprani, Biju Thankachan, Janet C. Read, Markku Turunen, Pekka Kallioniemi

Privacy and Personalization: The Story of a Cross-Cultural Field Study
Hanna Schneider, Florian Lachner, Malin Eiband, Ceenu George, Purvish Shah, Chinmay Parab, Anjali Kukreja, Heinrich Hussmann, Andreas Butz

Where the Streets Have No Name—A Field Trip in the Wild
Biju Thankachan, Sumita Sharma, Tom Gross, Deepak Akkil, Markku Turunen, Shruti Mehrotra, Mangesh Ashrit

A Personal Perspective on the Value of Cross-Cultural Fieldwork
Arne Berger, Dhaval Vyas



Fair technology

Authors: Juan Hourcade
Posted: Fri, April 27, 2018 - 12:10:30

The means of destruction have developed pari passu with the technology of production, while creative imagination has not kept pace with either. The creative imagination I am talking of works on two levels. The first is the level of social engineering, the second is the level of vision. In my view both have lagged behind technology, especially in the highly advanced Western countries, and both constitute dangers.

The future cannot be predicted, but futures can be invented. It was man's ability to invent which has made human society what it is.

—Dennis Gabor, Inventing the Future, 1963

In his book Inventing the Future, Dennis Gabor captured his impression of the impact of technology, based mostly on his experience living in the 20th century. Technological changes were as radically productive as they were destructive, but they generally lacked direction from the perspective of constructing more fair and just societies, or any vision other than the insatiable longing for wealth, status, or power of a few.

Fast forward to 2018 and we are facing a similar situation with information and communication technologies (ICTs). We have had unprecedented production, with large amounts of information quickly available to most people in high-income countries, and increasingly throughout the world. ICT companies have focused primarily on growth, with little attention paid to the destructive uses of their technology, which now appear to have at least caught up with productive uses. Just as in Gabor’s 1963 diagnosis, the problem is still the lack of a serious vision for the use of technology for a more just and fair world, a vision that translates into action on the part of the major players, and that has at least equal standing with the goals of growth and profit.

Back in 2011, together with Natasha Bullock-Rest, I presented a vision for technologies to reduce armed conflict around the world by helping build a more just and fair world, with the following goals: reducing social distance between enemies, exposing war and celebrating peace, de-incentivizing private motivation for conflict, preventing failures of the social contract, promoting democracy and education, and aiding operational prevention of conflicts. It is difficult to think of any major ICT company that has taken any of these goals seriously, at the same level at which they pay attention to growth and profit.

Perhaps the most disappointing development is the negative effect ICTs have had on democracy, arguably providing the greatest challenge to democratic institutions in decades. These challenges have come in at least two related forms: increasing political radicalization, and diminished trust in facts and expertise. A third challenge is the massive accumulation of personal data that could be used in very damaging ways by authoritarian governments.

James Madison, in Federalist Paper #10, warned of the dangers of factions on the well-being of countries and democracies, saying “A zeal for different opinions concerning religion, concerning government, and many other points, as well of speculation as of practice; an attachment to different leaders ambitiously contending for pre-eminence and power; or to persons of other descriptions whose fortunes have been interesting to the human passions, have, in turn, divided mankind into parties, inflamed them with mutual animosity, and rendered them much more disposed to vex and oppress each other than to co-operate for their common good.” Yet, ICTs have every incentive to provide us with information and views that conform to our own inclinations, failing to provide counterpoints to undemocratic ideas, thus helping polarize society, and making responsible venues that provide balanced views less popular. In addition, increased automation is making it less necessary to interact with people who may be from a different walk of life and could provide an alternative point of view.

Factionalization has come hand-in-hand with diminished trust in facts and expertise. This is another threat to democracy, as it leads to ignorance. As Thomas Jefferson stated in an 1816 letter to Charles Yancey, “If a nation expects to be ignorant and free, in a state of civilisation, it expects what never was and never will be.” It is difficult to have a truly democratic society if groups of people have widely different understandings of reality.

The massive collection of personal data becomes a weapon once democratic protections are lifted. The rich data that companies like Facebook and Google hold on billions of people, combined with widespread cameras and face-recognition technology, would have been beyond the wildest imagination of most secret-police bosses in 20th-century authoritarian regimes. The ability to go after political enemies would be unprecedented.

I am thankful that within the HCI community we have researchers exposing the dangers I outlined above and presenting visions of the future that include Gabor’s creative imagination. However, our generation of ideas and projects that may impact political topics such as supporting democracy or preventing armed conflict have arguably not had an eager audience at the top levels of large ICT companies. 

The challenge is significant and the stakes are high. I think it’s time to discuss creative ideas and I am happy to propose one so we can begin the discussion. My sense is that our challenge is in some ways similar to that of the food industry, where unhealthy food, environmentally unsustainable practices, and worker exploitation are beginning to be addressed, in part, through organic and fair trade certifications. These have been far from perfect solutions and are mostly available to people who are well-off, but we don’t even have an equivalent in the ICT world for uses that involve large amounts of data (e.g., social media). The closest we have is free, libre, and open source software and services provided by groups such as the Mozilla Foundation and the Open Source Initiative. Having widely recognizable certification for ICTs could provide a way forward, but the certification should be concerned not only with cost or source code, but with the ethical track record of an organization/product. What would it likely involve? Periodic assessments of societal outcomes, with a focus on user empowerment, individual and community well-being, and basic democratic principles.

What do you think? What solution do you propose?

Posted in: on Fri, April 27, 2018 - 12:10:30

Juan Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.


Material interaction design

Authors: Mikael Wiberg
Posted: Tue, April 10, 2018 - 11:28:01

In his book Designing Interactions (2006), Bill Moggridge focuses on how to design interactions with digital technologies. That makes sense if you think about interaction design as the practice of designing interactions. However, interactions cannot be fully designed, determined, restrained to a particular form, or fully predicted in the same way that a service can never be fully designed. At best we can design enabling preconditions that might enable or ease a particular form of interaction. In other words, we can design the material preconditions for a particular form of interaction—but we can never completely predict and design the interaction that unfolds. 

As we now move into the era of more physical forms of computing—including the development of the Internet of Things, smart objects, and embedded systems—it is quite easy to see how interaction design is increasingly about arranging material preconditions for interaction. However, that is actually true for any interaction design project. As pointed out by Dourish (2017), computing and information is always a material concern. No matter how abstract we think computing, information, and representations are, they all rely on material infrastructures, ranging from the server halls, to the fiber networks, to the electronics that enable computing in the first place.

From that perspective, interaction design becomes a design practice of imagining new forms of interaction and then designing the best possible preconditions to enable those particular forms of interaction to unfold. In my recent book The Materiality of Interaction (2018), I discuss these imagined forms of interaction and how to manifest them across physical and digital materials. I talk about this under the notion of a “material-centered approach to interaction design.” Here, one might wonder whether this is actually a turn away from our user-centered approach. My answer to this question is a clear no. In the book I propose that a material-centered approach to interaction design follows something I call “the interaction 1st principle”: the imagined form of interaction is the focus of the interaction design project. I then suggest that in order to manifest that imagined form of interaction in computational materials, it is necessary to have a good understanding of what materials are available (ranging from electronics, sensors, and analog materials to hardware and software) and to know about material properties and how different materials can be reimagined and reactivated in a computational moment. Further, I suggest that a design challenge is how to bring those different materials into composition so as to enable a particular form of interaction. Accordingly, a third component is the compositional skill to work across a whole range of materials in interaction design projects.

As we now move forward with AI as our next design material, we also need to think about what should be a matter of interaction, and what interactive systems can do for us, autonomously or semi-autonomously. I like to think about this in terms of the design of “scripted materialities”—again with a focus on how to manifest interaction design through material configuration, but here with a focus on how the materiality might change and reconfigure itself over time, in relation to different needs and usage. While a traditional understanding of materials might lead our thinking towards things that are stable and inflexible—just think about the expression “set in stone”—a huge challenge for moving forward will be to not abandon a material-centered approach to interaction design, but rather to reimagine what materials can do—to move from thinking about materials from the viewpoint of enabling structures to thinking about how dynamic material compositions can provide the necessary means for new forms of dynamics and new forms of interactions!

Mikael Wiberg

Mikael Wiberg is a full professor in informatics at Umeå University, Sweden. Wiberg's main work is within the areas of interactivity, mobility, materiality, and architecture. He is a co-editor in chief of ACM Interactions, and his most recently published book is The Materiality of Interaction: Notes on the Materials of Interaction Design (MIT Press, 2018).


Leveraging Afrofuturism in human-centered design: A way forward

Authors: Woodrow Winchester
Posted: Wed, April 04, 2018 - 10:29:27

Foresee: Process dictates product. To design for equity, we must design equitably. The practice of equitable design requires that we be mindful of how we achieve equity. Inclusive design practices raise the voices of the marginalized, strengthen relationships across differences, shift positions, and recharge our democracy.

It is the aim of my March + April 2018 Interactions article “Afrofuturism, Inclusion, and the Design Imagination” to architect a case for Afrofuturism as a speculative design lens in human-centered design (HCD), articulating the whats and whys of engaging Afrofuturism in conceiving and developing inclusive and equitable future technological solutions. The natural next question is how? How can the human-centered designer engage—now—with Afrofuturism?

A proposed taxonomy (Figure 1), developed in collaboration with graphic and interaction designer Zane Sporrer, begins to frame this how. The taxonomy, depicted with a specific focus on connecting Afrofuturism with the equityXdesign framework, situates Afrofuturism as a design lens for executing liberatory design frameworks: similar framings (e.g., humanity-centered design, inclusive design, and the anti-oppressive design framework) that incorporate equity work within HCD. In enacting the construct of “foresee” within the equityXdesign framework, Afrofuturism functions as a mechanism for focusing and substantiating discursive design tactics like speculative design in imagining concepts, with the intent of offering design artifacts that provoke and trigger more inclusive conversations about both the user and the context of use.

Figure 1. Proposed taxonomy in engaging Afrofuturism within human-centered design.

As detailed in my Interactions piece, this mode of engaging Afrofuturism in speculative design is reflected in my efforts around more inclusive connected fitness technologies (devices). Figure 2, conceptualized in collaboration with artist Marcel L. Walker, reflects an exemplary speculative design artifact. Inspired by a core tenet of Afrofuturism, collectivism, and the Afrofuturistic imagery of the warriors of the Dora Milaje—Wakanda’s Special Forces, as depicted and featured in Marvel’s film Black Panther— this artifact conveys the importance of community and the value of connecting visually to a greater Black collective in contextualizing and motivating increased individual physical activity levels. While it is not the intent that this concept be implemented as imagined, this artifact, as a speculative probe, fosters design conversations that enrich the plausible solution space.

Figure 2. Global pulse speculative design artifact.

From an interaction design perspective in particular, the concept spurs deeper discussions about how the data and information offered by connected fitness devices can be better synthesized, situated, and visualized. As the insights traditionally offered by these devices are quantitative in nature (e.g., number of steps taken), the concept, as probe, inspires thinking about more qualitative representations of insights for motivating increased physical activity levels. This has been demonstrated in the successful efforts of GirlTrek, an organization that inspires black women to change their lives and communities by walking, by situating one’s step count within a historical context. Efforts such as GirlTrek’s recent #HarrietsGreatEscape initiative (where participants are challenged to walk 100 miles between March 10, 2018 and May 10, 2018, symbolic of Harriet Tubman’s journey along the Underground Railroad) could better motivate increased physical activity levels, particularly among marginalized groups such as black/African-American women, for whom considerable health disparities exist.

As is indicative of speculative design, immediate outcomes are not typically commercially viable or usable; further grounding is necessary. Hence, in continuing the example of more inclusive connected fitness devices where the voice of black/African-American women is placed central in the design narrative, look to the conceptual and empirical works of Andrea Grimes Parker, director of the Wellness Technology Lab at Northeastern University. Works such as her March + April 2013 Interactions article “Designing for Health Activism” help in evolving and maturing the design conversation. This ultimately seeds more inclusive and novel plausible solutions for further iteration and eventual refinement.

And, to hopefully state the obvious, inclusion matters in technology design. Sara Wachter-Boettcher, in her book Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech, states that: 

exposure to difference changes perspective, and increases tolerance. That’s why it matters so much that marginalized groups are validated within our interfaces. Because if technology has the power to connect the world, as technologists so often proclaim, then it also has the power to make the world a more inclusive place, simply by building interfaces that reflect all its users. 

Thus, the need for human-centered designers to both develop and engage with tools, methods, and practices that support this premise is paramount. Afrofuturism represents such a tool—a design lens—through which the requisite intentionality and actions can be both catalyzed and implemented.

I am both excited and encouraged by recent feedback on my Interactions article, and I intend to continue this conversation. In particular, I invite the use of my thoughts concerning the engagement of Afrofuturism in HCD as a probe for advancing the methodological rigor needed to increase inclusivity, and thus equity, within the culture, processes, and outcomes of HCD. The stakes are high, especially as technology becomes more deeply engaged in our daily lives and activities. For, as we are now witnessing, design patterns, behaviors, and norms are being embedded and reinforced within HCD that, however unintentionally, may lead to future technological solutions that do more harm than good.

Woodrow Winchester


Science fiction in HCI – a nuanced view

Authors: Philipp Jordan
Posted: Wed, March 28, 2018 - 4:09:16

Note: This blog post is a critical response and extension to the March/April Interactions Special Topic article on science fiction for innovation in HCI, written by Daniel M. Russell and Svetlana Yaros. 

I was pleased to read the most recent Interactions Special Topic on science fiction and HCI—the motivation for my first IX blog post. In the following, I provide a broader view of the complex symbiosis of science fiction and HCI research. 

Science fiction literature, cinema, and games 

To begin with, the authors conflate science fiction literature, cinema, and interactive media throughout the article. While the amalgamation of the different artistic expressions of science fiction is an object of continuous debate, it warrants more precision if we are to derive heuristics and recommendations for the utility of science fiction in HCI. 

Though there are exceptions to the rule, it is safe to assume that science fiction visualizations, such as movies, shows, or product visions, can mostly be traced back to a science fiction novel, short story, or simply a written idea. On this subject, Ward Shelley’s “The History of Science Fiction” helps in visualizing both the contemporary complexity and the original roots of the genre.

Technovelgy, a science fiction web repository, lists more than 2,500 ideas initially formulated in written visions. For example, the videophone is described in Jules Verne’s 1889 short story In the Year 2889 as a phonotelephote:

The first thing that Mr. Smith does is to connect his phonotelephote, the wires of which communicate with his Paris mansion. The telephote! Here is another of the great triumphs of science in our time. The transmission of speech is an old story; the transmission of images by means of sensitive mirrors connected by wires is a thing but of yesterday.

Nearly four decades later, in 1927, the German science fiction dystopia Metropolis visualized a videophone (Figure 1). Metropolis depicts the videophone in a plausible manner through an interface, an interaction, and a visual context, whereas Jules Verne’s elusive, literary expression of the phonotelephote leaves more room for our imagination.

Figure 1. Screencaps from Metropolis (1927) showing the videophone. Copyright held by UFA GmbH, Fritz Lang, Thea von Harbou, and Gottfried Huppertz.

We can also conduct a philosophical inquiry into robot ethics, as in Noel Sharkey’s work on killer robots from the future; watch RoboCop’s ED-209 malfunction; or play Horizon Zero Dawn, a video game set in a world overrun by machines. Media differences are very important. Accordingly, we must observe in each case the trade-offs between affordances and constraints among the different media formats for HCI design, inspiration, and innovation.

The fundamental distinction between science fiction writing and cinema was recognized in film studies almost 50 years ago by William Johnson in his 1972 book Focus on the Science Fiction Film (page 98): “The filmmaker is less fortunate: he must show the invention, and he must show it working.” 

On one hand, science fiction cinema might limit or mislead the imagination of the viewer due to the constraints of the media format as well as the depicted technological and metaphysical assumptions within the movie narrative, or diegesis. On the other hand, the made-up, explicit visualizations of these elements can serve as powerful showcases of future devices, interactions, and information and communication technologies to not only the general public, but researchers as well. 

David Kirby has written extensively on collaboration schemes between researchers and moviemakers. His diegetic prototypes can not only demonstrate design ideas as design fictions, but also demonstrate to a larger public audience the benevolence, need, or threat of a future technology. Film theory and the concepts of dramatic and veritable truth, or the notion of hard and soft science fiction, can help both to identify “serious” science fiction and to broaden our understanding of the different expressions of the genre. 

My research on science fiction and HCI 

In lieu of repeatedly linking well-known science fiction movies and shows to HCI research, such as Minority Report, Star Trek, or Blade Runner, I analyze the evolutionary uses of science fiction in HCI and computer science. Specifically, I investigate science communication to find out when, how, and why scientists use science fiction in HCI research and computer science. 

Last year, I conducted a three-hour interview at the Science and Entertainment Exchange in Los Angeles, a National Academy of Sciences–endorsed program to connect researchers with film-industry professionals. I learned that “the exchange” has facilitated more than 2000 consultations—or matches—of scientists and filmmakers with the goal of presenting science more accurately on screen.

My research on Star Trek referrals in the ACM Digital Library found that the franchise was associated with the proliferation of the BASIC programming language [1] and early notions of hacking. My co-authors and I proposed a science-fiction-inspired HCI research agenda, extending beyond diegetic prototypes and design fictions toward computer science education, human-robot interaction, and AI ethics. 

Later this year, at the HCI International conference in Las Vegas, I will present my research on the evolutionary uses of science fiction in CHI conference proceedings. In that study, I identify five themes where science fiction and HCI research interact; in addition, I highlight a focus on seminal, popular Western science fiction in CHI research.

In another forthcoming article, my co-authors and I review how 20 science fiction robots have been used and characterized in computer science literature. We found in this study that science fiction robots are inspirational for researchers in the field of human-robot interaction. For example, the robot Baymax from the movie Big Hero 6 has inspired scientists to create Puffy.

At present, I am analyzing science fiction referrals in 1500 peer-reviewed computer science papers, among them an IEEE paper by Hereford (page 133):

Practically speaking, literary artists could be employed as consultants and given the task of imagining as concretely as possible the lives of individual people in various social situations that are defined in terms of a given system design. Ultimately, the test of whether the system is coherent will be whether one can feel the system to be working out for concrete individuals as imagined in the drama of their particular lives. 

The required level of quality of such scenarios will vary in accordance with clients' and engineers' tastes. A quality comparable to that found in most science fiction will satisfy most engineers and probably most others too. Would-be artists with sane literary training would probably be able to produce suitable scenarios.

If you ask me, that sounds like pragmatic design fiction from 1975. What do you think? 

Non-trivializing science fiction 

There is nothing wrong with envisioning “future user interfaces” that are “all walk-up-and-use.” However, there is more to science fiction and HCI than John Underkoffler’s gestural interfaces from Minority Report, the utopian interfaces from Star Trek, and the dystopian visions of society and technology seen in Black Mirror and Westworld. Science fiction in HCI research encompasses more than pragmatic and speculative design research. 

As soon as we are mindful of our own cultural preconceptions, science fiction does not constrain inspiration, nor limit imagination; we simply cultivate a consciousness of the larger potentials of science fiction for core HCI research. I think that we have long underestimated the important, invisible relationship between science fiction and HCI and computer science research, industry, education, and, to some extent, ourselves. 


1. In fact, my dissertation chair, Scott Robertson, ACM Senior Member, told me he picked up programming due to the Star Trek BASIC game four decades ago.

Philipp Jordan


My users taught me to read with my ears

Authors: Mario Romero
Posted: Fri, February 16, 2018 - 1:02:48

This blog started as a response to the column What Are You Reading? in Interactions. While I have a number of books I wanted to discuss, the topic made me reflect on how I read. My reading mechanisms have evolved from working with blind people. I continue to read novels mostly with my eyes—but for research papers, I use my ears.

For many years now I have mostly read thesis drafts. In my role as supervisor and thesis examiner, I read approximately 30 master’s theses and five Ph.D. theses every year. I know what you may be thinking, but, yes, I actually read and review them. Furthermore, as a paper reviewer, I read approximately six papers every year. At an average of 10 thousand words per master’s thesis, 30 thousand per Ph.D. thesis, and 7 thousand per paper, that is about half a million words, or a paperback novel of roughly 1700 pages.

Now, mind you, I am a slow reader. Most of the theses I read are drafts that typically require several rounds of careful reading and editing. When the text is clear, I can read about 250 words per minute. At that rate, a single read-through of my annual review material takes more than 30 hours; with the several rounds of reading and editing that drafts demand, the work comes to about 70 hours a year.
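For the curious, a few lines of Python make the back-of-the-envelope arithmetic explicit (the words-per-page figure is my own assumption for a typical paperback):

```python
# Annual review load, using the per-document estimates from the text.
total_words = 30 * 10_000 + 5 * 30_000 + 6 * 7_000
print(f"{total_words:,} words")  # 492,000 -- roughly half a million

# Assuming ~290 words per paperback page.
print(f"~{total_words / 290:.0f} pages")  # ~1697 pages

# One uninterrupted read-through at 250 words per minute.
print(f"~{total_words / 250 / 60:.0f} hours")  # ~33 hours
```

A single silent read-through, then, is on the order of 33 hours; the careful, repeated reading and editing that drafts require is what pushes the real total far higher.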

My goal here is to share my experience in switching from reading with my eyes to reading with my ears, and from editing with my fingers to editing with my voice. This switch has meant not just a significant boost to my throughput, but also an improvement in my focus and comprehension, and has allowed me to work in many more contexts. Paradoxically, perhaps the greatest benefit for me has been the ability to stay focused on the reading material while performing other physical activities, such as walking home from work. This blog outlines the methods that I have developed over the years and reflects on their pros and cons.

I begin with a short story. A few years ago, I ran a stop sign at the Georgia Tech campus and a policeman stopped me and summoned me to court. In court, I had the option to pay a $100 fine or perform community service. As a graduate student, I chose service. I signed up to volunteer at the Center for the Visually Impaired in Atlanta. During my week of service, I observed two blind teenagers in class not paying attention to their teacher. Rather, they were giggling at a mysterious device. It had a shoulder strap and it rested on the hip of the girl. She pressed combinations of six buttons and ran her index finger across a stripe at the bottom. She giggled and shared the experience with her friend, who also giggled as he ran his fingers across the device. I was bedazzled. What was producing this strong emotional response? 

A typical portable braille writer and reader, the Freedom Scientific PACmate, which we used in our user studies. Note the buttons on top allow for chorded input and the pins at the bottom push up and down for a dynamic “refreshable” braille text output. (Image CC BY Mario Romero 2011)

I discovered they were using a portable braille notetaker, which includes a six-key chorded keyboard and a refreshable braille display, the stripe they were reading with their fingertips. After that encounter, my colleagues and I started researching how blind people enter and read text on mobile devices. We discovered that the device the teens were using was called BrailleNote. It cost a sizeable $6,000 USD. 
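To give a sense of how chorded braille entry works, here is a sketch of the general encoding (not the BrailleNote's or BrailleTouch's actual implementation): a braille cell has six dots, each chord of keys selects a subset of them, and Unicode happens to encode every dot pattern directly above U+2800.

```python
BRAILLE_BASE = 0x2800  # start of the Unicode braille-patterns block

def chord_to_cell(pressed_dots):
    """Map a set of pressed keys (dots 1-6) to a Unicode braille cell.

    Unicode encodes dot i as bit (i - 1) above U+2800, so a chord is
    simply a bitmask added to the base code point.
    """
    mask = 0
    for dot in pressed_dots:
        mask |= 1 << (dot - 1)
    return chr(BRAILLE_BASE + mask)

# Pressing keys 1 and 2 together produces the cell for the letter 'b'.
print(chord_to_cell({1, 2}))     # ⠃
# Dots 1, 4, and 5 produce the cell for 'd'.
print(chord_to_cell({1, 4, 5}))  # ⠙
```

The elegance of chorded entry is that one simultaneous key press yields one character, which is why practiced users can type remarkably fast on just six keys.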

We also determined that smartphones were, ironically, a much cheaper alternative. Although smartphones do not have braille displays, they can use voice synthesis to read out screen content. The user places a finger on the screen and the reader voices the target, a very efficient method to read out information. Unfortunately, inputting text is not as simple. While many users speak to their phones using speech-recognition technology, they cannot do so under many conditions, particularly in public environments where privacy concerns and ambient noise render the task impractical. In our research, we developed a two-hand method for entering text using braille code called BrailleTouch [1]. Apple’s iOS has since incorporated a similar method of text input.

A participant in our study using BrailleTouch. (Image CC BY Mario Romero 2011)

What we did not expect was the discovery of the extraordinary ability of people with experience using screen readers to listen to text at tremendous speeds. We observed participants using screen readers at speeds which we could not comprehend whatsoever. Participants reassured us they understood and that it was a skill they acquired through simple practice—nothing superhuman about it.

That realization sent me down the path of using my hearing for reading. I started slow, first learning to enable the accessibility apps on my iPhone. Then I learned to control the speed. I started at about 50%, or 250 wpm. I could understand at that rate, which happened to be my eye-reading rate. I downloaded PDF versions of my students’ theses into my phone and tablet and read using VoiceOver [2]. Unfortunately it did not work as well as I expected. The reader did not flow on its own, stopping over links, paragraphs, and pages. I had to push it along by swiping.

After some research, I discovered an app called vBookz PDF Voice Reader. I uploaded my students’ theses in PDF format and started to speed things along. After a few months I was reading at 400 wpm. The app also shows a visual marker of where you are on the text, so you can follow it with your eyes. After about a year, I was reading at full speed, 500 wpm with full comprehension. More importantly, I no longer had to follow with my eyes. I could, for example, walk during reading and see where I was going and remain fully focused on the text. Today there are myriad text-to-speech reading apps and I encourage you to explore them to find the one that is most fitting.

This newfound ability has meant a dramatic change in my reviewing practices. Yet it is not the whole picture. There are more practices I have explored and there are important limitations as well. I started dictating feedback on my phone. I stopped typing and started using Siri to provide comments and corrections. The upside is that I can continue to stay focused on both reading and writing while remaining physically active. The downside is that I need to verbally state the context of the feedback, as in “paragraph 3, sentence 2.” Nevertheless, I find it liberating and efficient. 

What are these other important limitations? First, the text-to-speech reader voices everything: page numbers, URLs, footnotes, image names, citations. It even states “bullet point” for the dot initiating bullet points. Most of these landmarks are distracting, and I have had to learn to disregard them when reading with my ears, just as I automatically do when reading with my eyes. Second, tables and graphs are a nightmare to read with my ears; I have to stop the reader and use my eyes when I reach text with a non-linear structure. Third, figures are only legible as far as their meta-descriptions allow, and even then it is better to use my eyes. Fourth, and last, I have to use headphones; otherwise, the echo from the device’s speakers coupled with ambient noise renders high-speed reading impossible for me.

Despite these limitations, which are current research topics in eyes-free reading, I find myself fortunate enough to have recruited participants who had the generosity and sense of pride to share their expertise. I am a better research supervisor for that. Yet, for all the boost in speed and comprehension I get from ear reading, I still enjoy reading novels with my eyes and listening to audiobooks read by human actors at normal speed. My mind’s voice and that of other humans remains my preferred method for pleasure reading. Perhaps in a few years voice synthesizers will become so human that they pass this version of the Turing test.


1. Southern, Caleb, James Clawson, Brian Frey, Gregory Abowd, and Mario Romero. "An evaluation of BrailleTouch: mobile touchscreen text entry for the visually impaired." In Proceedings of the 14th international conference on Human-computer interaction with mobile devices and services, pp. 317-326. ACM, 2012.


Mario Romero

Mario Romero studies immersive visualization environments and how people use them.


What we mean by interactive form

Authors: Mattias Arvola
Posted: Tue, November 28, 2017 - 12:31:55

The following blog post is nothing more and nothing less than an email conversation between Mattias Arvola, Jeffrey Bardzell, Stefan Holmlid, and Jonas Löwgren about the concept of interactive form, which incidentally is the name of a course given at Linköping University. If you teach a course, it is probably good to understand the meaning of its name.

Mattias: I thought that I would ask you what associations come to mind when you hear the term interactive form. We have a course with that name for the students in graphic design and communication, but I have never really been comfortable with the course name. I have taught it for many years, but never appropriated the term. Stefan Holmlid was the one who decided that the course should have that name almost 10 years ago. So here it goes—this is my initial understanding of what interactive form is:

Interactive form is the totality of a design's interactive elements and the way they are united, without consideration of their meaning. The non-interactive formal elements are things like color, dimension, lines, mass, and shape, while the interactive formal elements are interactivity attributes like concurrency, continuity, expectedness, movement range, movement speed, proximity, and response speed. The user experiences of mystery and intrigue that a piece evokes are informal effects of the user’s response.

We can contrast this definition of interactive form to the related concepts of interaction style and interaction design patterns. Interaction style is how people interact. This is a question of what steps and means they employ in the interaction (quibus auxiliis), and with what attitude or manner they interact (quo modo). Design patterns are schematically described compositions of elements that are used in response to recurrent problems. Since I’ve never felt completely at home with the term interactive form, I avoided it and focused the course on interaction style and interaction design patterns. So, Jeff, Jonas, and Stefan, what are your takes on the notion of interactive form?

Jeff: I think the problem is partly that form has a lot of meanings in English, and when you put interactive in front of it, it becomes easy to misread form altogether. The deeper issue is that form in the traditional aesthetic sense typically characterizes features of an object—the formal elements of a poem, sculpture, or fugue and their composition. But with interaction, this becomes a problem, since the form that matters isn’t in and of the artifact, but of the human-artifact interaction. Jonas’s work on concepts like pliability helps reveal the difference and its significance. I think if you use form to qualify interaction, we’re sort of hard-wired to go to the object, the artifact—so you’re starting the game with a negative score. You might not be able to rehabilitate that word from that usage. I wonder if formal qualities of interaction gets at what you want? 

Jonas: I agree that form in the context of design is tightly bound to the object and its features, and the construct you propose (formal qualities of interaction) might actually do the trick. It sets the right object (which is interaction), yet still uses the word formal which pulls in the direction of appraisal. Aesthetic appraisal, that is, in a suitably wide sense; connoisseurship and criticism rather than user testing.

Mattias: I'm onboard with interaction qualities, but if we say formal qualities we don’t necessarily imply the experiential qualities of interaction. Formal qualities of interaction would relate more closely to the sensory fabrics of the interaction. Then we add an interpretative level to get to the meaning, i.e., what an interaction might mean to someone. Perhaps we should think of the formal qualities of interaction as how a designer conceives the designed elements and their composition to contribute to certain experiences and responses. This is basically what Jonas said: aesthetic appraisal, connoisseurship, and criticism.

Jeff: You might just use the word poetics, which I understand to mean how formal qualities of aesthetic objects contribute to, cause, or shape human experiences (e.g., how hamartia in a tragedy leads to feelings of pity and fear in the audience). So, the question is where you want to situate this: in the elements and compositions of interactive objects or in interactions. It sounds like you have at least ruled out situating it in the phenomenal experience of the subject (which makes sense to me, too). 

Mattias: Now we're getting somewhere. If we speak of the formal elements and compositions of interactions, then I would speak of the entry, the body, and the exit of joint action. That would allow us to take a closer look at the composition and elements of the entry, the body, and the exit. This could help the students to appraise the details of the interaction in different designs.

Stefan: I would say that interactive form is about the experiential, aesthetic qualities of (or in) interacting. It is then important to articulate, discuss, and critique how these phenomena are formed, and how a designer can approach an understanding of these phenomena. A jumble of questions that can be used as a reflexive sounding board:

  • Are the phenomena situated in the experiencing subject alone? 
  • In what way can they be viewed as cultural constructs? 
  • How are the material manifestations of interactivity correlated to the phenomena of the experiencing subject? 
  • How are ideas about such “qualities” manifested or fulfilled through the creation of new contexts and possibilities for our situated cognition extended through the material manifestations, or mediated acting in and on the world by proxy?
  • How are the articulations formed by conventions, expectations, and individual and joint instrumental goals? 
  • How can a designer explore designs to not only form an understanding of an “interactive form,” but also extend his/her experiences of different ways of experiencing, understanding, and articulating interactive form? 

To me, that last point goes beyond the ordinary understanding of the concept of repertoire. However, interaction gestalt is also a related term that we could use in this context.

Mattias: I think it relates well. It seems that when we speak of interactive form, it relates to the constituents and constellation of the designed artifact, i.e., primary qualities, but also the subjective experiences it gives rise to, i.e., secondary qualities. Interaction is, however, about the relation between the artifact and the subjects interacting with it, and qualities of interaction can hence be said to be tertiary qualities. As you note, Stefan, the qualities of interaction do not take place in a vacuum, nor do the experiences they give rise to. This means that the topic of interactive form must be understood as inherently cultural, historical, and social, and not only subjective experience or objective materiality. Interactive form can also give rise to an interaction gestalt, i.e., a composition that gives rise to an expression in a unified concept or pattern that is more than the sum of its parts.

This will indeed prove to be an interesting course both for the students in graphic design and communication and for the teachers. It also highlights an important issue for the interaction design community: What do we actually mean when we speak of form in interaction design?

Mattias Arvola is an associate professor of cognitive science, especially interaction design and user experience, at Linköping University.

Jeffrey Bardzell is a professor of informatics, especially design theory and emerging social computing practices, at Indiana University—Bloomington.

Stefan Holmlid is a professor of design, especially design in service development and service innovation, at Linköping University.

Jonas Löwgren is a professor of interaction and information design, especially visualization and collaborative media, at Linköping University.

Posted in: on Tue, November 28, 2017 - 12:31:55

Mattias Arvola

Mattias Arvola is an associate professor of cognitive science, especially interaction design and user experience, at Linköping University.

A tent, a pigeon house, and a pomegranate tree

Authors: Shaimaa Lazem
Posted: Fri, September 29, 2017 - 10:43:03

Note: This blog post was coauthored by Danilo Giglitto, research associate, and Anne Preston, senior lecturer in technology-enhanced learning, based at the Learning and Teaching Enhancement Centre (LTEC), Kingston University, London.

After the 2011 revolution, Egypt faced a challenging socioeconomic transition. Since then, the ICT sector has become one of the promising contributors to Egypt’s economic growth. In 2014, the Ministry of Communications and Information Technology announced the Social Responsibility Strategy in ICT, with an inclusive vision for using technology to integrate different societal groups to achieve equality, prosperity, and social stability. Such goals demand that technology professionals be equipped with user-centered skills to design for groups with various socioeconomic backgrounds.

As a response to these goals, in August 2017 we ran an eight-day HCI summer school for designing technologies to document intangible cultural heritage (ICH) in the northwest of Egypt. The school was part of a UK-Egypt institutional link, the Hilali Network, a Newton-Mosharafa project between the City for Scientific Research and Technology Applications (SRTA-City) and Kingston University London. The link aimed at advancing HCI education in Egypt by training 18 engineering students from Alexandria University to engage in technology design activities with members from the Bedouin community of Borg El-Arab. 

The Bedouins in Egypt are an important tribal nomadic community who migrated to Egypt from the Arabian Peninsula hundreds of years ago, inhabiting the northern and western deserts and the Sinai Peninsula. With increased urbanization in those areas, however, they have become a mostly settled community, at risk of losing the social practices, oral traditions, customs, language, and identity associated with intangible cultural heritage (ICH). Digital technology has often played a major role in documenting ICH at risk of loss as Web-based material, increasing its accessibility and dissemination. Our proposal was that such an approach could be harnessed to its full potential, and made more sustainable, by supporting the participation of community members. This remains a challenge, since ICH should be researched within each specific social, cultural, and technological setting. We therefore argued that a bottom-up approach to ICH could benefit from HCI participatory methods to engage communities with technologies.

The challenges we faced were numerous and complex, including that our students had technically oriented mindsets: they were less appreciative of the topics they classified as “humanities” and were reluctant to engage with community members in participatory activities. The latter challenge has surfaced in the ArabHCI network as a common issue across the Arab world.

Before the school started, we discussed the project with community members who worked at SRTA-City. Some of them were familiar with scholars who had come to study some of their traditions, but the participatory approach we intended to adopt was new to them. They were open to participating: they were proud of their Bedouin heritage and recognized the risk of it fading away, as many of them now attend modern schools and have moved to cities to study and work.

We designed the summer school curriculum so that students would gradually build a partnership with the chosen community, while the instructors remained facilitators, scaffolding and advising students throughout. The curriculum used interactive material emphasizing hands-on practice and learning by doing.

We used the Double Diamond design process model by the UK Design Council to structure the school activity. It is a four-stage model: Discover, Define, Develop, and Deliver, with every two phases forming a diamond shape. The first and third phases were exploratory, while the second and fourth were for narrowing the scope and defining focus. Every stage took roughly a couple of days in our curriculum. Lectures were used mostly in the first exploratory stage. In each phase, we had a “participatory moment,” where students worked closely with community members. 

In the first stage, Discover, we encouraged students to take a conceptual leap from being the engineering student—who receives a well-defined problem to solve—to becoming a design-thinker—who is co-responsible for framing the design and sociocultural challenges. We introduced basic HCI concepts such as usability and user experience, and bottom-up approaches to ICH documentation. 

The participatory moment in this phase was a trip we asked community members to organize for the students to learn more about Bedouin culture. We visited a “Nagae,” a group of houses belonging to the same family, “El-Sanakra.” They set up a special Arabian tent for us, something they normally do only for their festive events. Bedouin culture prohibits young women from interacting with unknown males; thus, the women visitors met the Bedouin women inside the house, while the men were hosted in the tent. The house itself was modern on the inside, with a flat-screen TV and a WiFi connection. Everyone, including the oldest low-literate women, had mobile phones. The house featured the traditional “burj,” or pigeon house, which they keep for food and for their hunting falcons. The house also had fig and pomegranate trees, both crops that thrive in the desert climate, from which they harvested fruit. We were surprised by their modern lifestyle, which sparked interesting discussions about fading traditions.

The pigeons' house, or burj.

The Bedouin tent in Nagae El-Sanakra.

In the second stage, Define, the students were divided into teams. Each team had to define the scope for their projects (what traditions they would document, who would be their users, what the technical challenges would be). Some of the students had ideas based on the reports they collected during the field trip. We trained students in methods to help them understand their participants’ needs and perspectives (e.g., conducting interviews, ethnographic observations, culture probes). We asked the teams to design a two-hour workshop with one or two Bedouin participants to gather the information that would help them define their focus. Every team prepared a semi-structured interview and designed a probe as a family gift for their participant. 

For instance, one team designed a family tree, inviting the participant to color its leaves according to family members’ knowledge of, and interest in documenting, a Bedouin poem. Another probe was a tent with a box inside containing colored cards (colors varied according to gender and age); the participant was invited to ask members of his household to write something about what makes them proud Bedouins.

In the Develop phase, the students used personas to describe their target users as they defined them in the previous stage. They analyzed the data they gathered from the interviews to find insights, identify opportunity areas, and brainstorm to generate ideas about potential solutions. Further, they conducted a second workshop to test their ideas, in which they handed over low-fidelity prototypes to one or two community members, who contributed to the design process. 

Design ideas and prototyping.

In the Deliver stage, students designed four prototypes for mobile applications. These included technology for documenting improvised Bedouin poems and assessing the documenter’s knowledge of traditions; other applications used games to educate children about old Bedouin traditions or supported e-marketing of Bedouin crafts. The prototypes were presented to community members, who gave feedback on the designs.

The experience was very positive for students and community members, as we learned in the follow-up focus groups. The double diamond model was a good framework to teach a user-centered approach because it guided the students on when they should adopt divergent or convergent thinking. The ICH case study proved to be invaluable in teaching the students to drop their assumptions about a typical computer user, which was quite a challenge for students immersed in 21st-century technologies. Probe design and persona tasks helped them think deeply about their participants. Further, they had to be attentive to user interface details, as the Bedouin community is fastidious about their culture. Overall, students tended to struggle with the design tasks that required data abstraction and synthesis (e.g., generating insights and themes from field notes and interviews) as these can take a long time.

More than half of the school activities were led by the students, so we prepared the assignments to help us reflect on students’ progress. We thus had to check their responses every day, which was very demanding. Watching them develop their sense of design agency, however, was our reward. We plan to revise our curriculum and intend to offer it as a resource for instructors interested in adopting our approach to student-led learning and sensitization to HCI as a tool for community-driven learning and teaching. 

During the school the students maintained an independent blog reporting about their experience, which you can check out here.

Posted in: on Fri, September 29, 2017 - 10:43:03

Shaimaa Lazem

Shaimaa Lazem is a researcher at the City for Scientific Research and Technology Applications, Egypt. She is interested in HCI education and technology-enhanced learning. @ShaimaaLazem

@Nermeen Saeed Ahmed (2017 10 29)

CS students’ engagement with the Bedouin community was a real challenge, because developers and technicians have a mindset that differs from that of the Bedouin community members, whether culturally, technically, or in their way of thinking about things. The CS students also weren’t aware of how urbanized the Bedouin community members are, so they were afraid of facing the whole community, with its culture, traditions, and technicalities.
In my opinion, involving the users (Bedouin community members) as co-designers will greatly benefit the design process and will help designers follow a human-centered design approach.

Design4Arabs Workshop @DIS 2017

Authors: Ebtisam Alabdulqader
Posted: Mon, August 07, 2017 - 1:50:27

Note: This blog post was co-authored by Zohal Azimi, a student at SUNY Farmingdale State College studying visual communications, and Shaimaa Lazem, a researcher at the City for Scientific Research and Technology Applications, Egypt.

HCI research plays an important role in designing interactive systems and development worldwide. There is an increased awareness of and interest in designing for Arabic regions and cultures within HCI, yet current studies can often address “Arabs” as a single entity. There is therefore a need to reflect the diversity of the Arabic regions and the uniqueness of Arabic cultures that affect technology design. As researchers with in-depth knowledge of the Arabic regions, who are, or have experience of, studying and working within the U.S. and the U.K., we want to focus on design research that reflects our interests and concerns in designing for the Arab world, including countries where Arabic culture and language is prevalent and regions where Arabic communities are growing. With this in mind, we established the ArabHCI initiative in 2016 to empower, bridge, and connect HCI researchers and practitioners from the Arab world with those who are already working in or interested in this context. Our aim is to establish a community of HCI researchers interested in the Arab context to leverage their “insider” understanding and explore the challenges and unique opportunities in technology design.

We organized several activities to start working on this aim: A SIG meeting was held at CHI’17 to discuss key points on current HCI research across the Arab world, followed by a networking dinner. As the initiative leader, I (Ebtisam) was invited to share my vision and give talks about our mission to increase awareness of the status and opportunities of HCI education and research in the Arab world. This took place at the Diversity Lunch at CHI’17 and will also take place at the forthcoming INTERACT’17. Moreover, to promote ArabHCI discussions, a series of “Tweeting Hours” was scheduled to share ideas and experiences regarding specific aspects of HCI research in the Arab context. The outcomes of these events emphasized the demand to establish a community and encouraged us to plan future events.

Informed by this feedback, we eagerly brainstormed a precise agenda for a Design4Arabs workshop at DIS 2017 in Edinburgh. Seventeen individuals, ranging from students to researchers and professionals, registered to attend. Our diverse group of participants was experienced and interested in working within the Arab context and included a 10:7 Arab to non-Arab ratio. The diverse backgrounds and experiences of the attendees enriched the discussions around technology design. Participants came from all over the world, including the U.K., Germany, and the U.S., and had submitted a wide range of position papers (available online). Some papers highlighted the challenges and experiences we face as a community when it comes to designing interactive systems in Arab regions, such as the lack of participatory design. Other papers shed light on design-focused opportunities; for instance, how to design marital matchmaking technologies in Saudi Arabia, or how to design for disabled people in Jordan.

Throughout our one-day workshop we had intriguing discussions around designing interactive systems and how to confront the emerging design challenges in Arabic regions. We started with lightning talks presented by the participants based on their experiences that helped us foster a better understanding of HCI research and identify the most pressing needs in the Arab context. Anne Webber from the University of Siegen called on designers to develop digital platforms for refugees and “work with the people, rather than design for them.” This goes hand in hand with our initiative. We should be designing for diversity while designing interactive systems to connect refugees, migrants, and the local volunteers and professionals aiding them. Franziska Tachtler from the Vienna University of Technology introduced an interesting perspective, saying, “I wonder if there are any experiences of using participatory design in the Arab world, as this has been used in the West.” We need to join together as a community to adopt some new, practical, and suitable methods of research for designing interactive technology with the Arab world. Consequently, this will help us to address unique needs, such as understanding different cultural backgrounds and involving the user sensitively in the design process. This will also promote more participatory design adaptation and reduce top-down design processes related to politics and governance within these regions.

The remaining sessions focused on exploring challenges and opportunities in collaborations between participants. We split into four groups of approximately five to have open discussions about the contextual and methodological challenges facing HCI studies in the Arab world. Each group was given six thematic cards with a probe list relevant to the card topic to facilitate their discussions. We noted that when designing for the Arab world, we should delve into the design process and learn how to design technology that successfully reflects Arab culture and values, the Arabic language, and its various dialects. We pointed out that when designing, we should not start from technology; rather, we should start with understanding the specific Arab users first. We need to ask ourselves: Who are the specific people we seek to design with and for? How do we engage Arab users in ways that show respect for their cultures? How do we design sensitively, in ways that respect differences in gender and age? How do we account for the variety of concerns associated with security and privacy? The groups shared other ideas and created visuals to develop an insider understanding of these problems. We identified that HCI researchers need to engage more Arab communities in research practices in order to gain a deeper understanding of these issues.

Mapping experiences of being an insider within the Arab context.

After going around the room discussing the challenges, it was time to explore what design opportunities there are. Putting our heads together was enlightening, as it helped us dig deep into the problems that current HCI researchers might overlook while designing technologies for the Arab world. It sparked ideas and specific design opportunities on how to serve specific needs. One group’s goal was to increase autonomy in kids to express their design ideas with one another and help them present their ideas to their peers. The result was a social platform to bring kids from the local neighborhood together to build on their design ideas and proposals, producing inclusive design projects for kids in different Arab countries. Another group suggested designing a game to help young girls understand the struggles of teenage girls transitioning to young women. We also had another group propose ideas on how to improve the education system for toddlers, with the designs differing for each specific Arab country.  

Design4Arabs opportunities in action!

It was interesting to work with HCI researchers who are not from the Arab region, as they do not always fully understand the diversity and differences within the Arab context. It was fascinating to see everyone courageously discuss their thoughts and ideas, even when they didn’t agree. As we were discussing the concept of supporting teenage girls, the groups suggested involving parents so they could become educated and have open conversations with their daughters. Some of the Arab participants recommended that the proposal should be directed toward all “mothers,” instead of all “parents.” They stated that it is very unlikely for a father to be involved. It was not only that fathers would not want to be involved but also that daughters would more than likely avoid involving their fathers in such embarrassing discussions. A non-Arab participant in the original group responded that any design should involve education for Arab fathers just as much as for Arab mothers. This friendly disagreement showed us how important it is to conduct extensive research in the Arab context, to reflect on the wider social expectations when designing, and to consider how technology might be used to intervene and the impact it might have.

This whole experience has truly given us unforgettable and noteworthy outcomes. It was important for everyone to hear different voices from all over the world openly express their perspectives on Arab HCI. We were fortunate enough to have interested participants who showed us the importance of collaboration between HCI researchers from the East and the West. By going in depth into current and possible challenges, it also helped us analyze Arab HCI research and break it down into what it is now and what it could be in the future. Our community is still growing, and we are striving to move forward on this journey together. For more details about the ArabHCI initiative and to follow the upcoming events, please visit the ArabHCI website.

Posted in: on Mon, August 07, 2017 - 1:50:27

Ebtisam Alabdulqader

Ebtisam Alabdulqader (@Ebtisam) is a Ph.D. candidate at Open Lab, Newcastle University, and a lecturer at King Saud University. Her research focuses on HCI aspects of social computing, health informatics, accessibility, and mHealth.

(2017 10 16)

If anything, I wished I was able to attend the workshop. Well done

Women’s Health @CHI

Authors: Madeline Balaam
Posted: Mon, June 19, 2017 - 4:58:24

Note: This blog post was co-authored by Lone Koefoed Hansen from Aarhus University. She works with feminist design, critical computing, and participatory IT.

At CHI 2017 we ran a workshop to reimagine how technology intersects with women’s health. We brought together designers, engineers, programmers, and experts in women’s health over two days in an attempt to radically re-engineer the ways women receive healthcare. We made use of the excellent public maker spaces in Denver’s Central Public Library to build exemplary digital interactions that demonstrate the kinds of innovations we consider necessary to improve women’s health on a global scale.

Throughout our two-day event, 25 participants developed many responses—participants built an inclusive parenting digital campaign, hacked sex vibrators, experimented with personal visualisations of menstrual cycles, discussed technologies for menopausal women, talked about what it means to be inclusive and exclusive of gender norms, and cultivated various yeasts in a cheap incubator. We put the needs and hopes of people who identify as women front and center for two days and through this, we realized just how rare it is to do so without having to explain why. 

The “hacking a sex vibrator” group start to take the technology apart.

Our workshop could not have come at a better time. One of President Trump’s first acts was to cut funding to international organizations that promote women’s health if they also offer access to abortion services, or even provide advice and information to women about these abortion services. The NGOs affected have highlighted how such an act will decrease marginalized people’s access to vital sexual and reproductive health services, and of course increase illegal abortions and ultimately mortality. Times are hard, but there are ripples of action being taken globally, from the #Repealthe8th campaign using humor to draw attention to the lack of abortion services in Ireland, to the Pussy Hats project, which creates a visual symbol for those advocating for women’s rights in a context of everyday sexual harassment, to the “reproductive justice” hack-a-thon that took place in March this year. At an institutional level, some countries’ leaders readily identify as feminist (hello, Sweden and Canada) and some countries have begun to officially recognize non-binary genders in passports. Even CHI 2017 had a few gender-neutral restrooms.

However, it’s also not easy to do research in an area when the majority of the field thinks it is relevant only to a minority. We are acutely attuned to how the current political and social climate impacts our work. We notice how a workshop on “hacking women’s health” makes it/us counter- or anti- just by naming it “women’s health,” even though half the world’s population identifies as female. We have been innovating digital technologies for women’s health over a number of years, from the relatively innocuous FeedFinder (supporting women in breastfeeding in public) to perhaps the more radical and confident Labella (developing awareness of intimate anatomy). So, by hosting this workshop at CHI we hoped we could not only contribute to these global ripples of action and resistance, but also increase the community, profile, and voice of researchers working in this area. Because, right now—we’ll be honest—it can be hard to be working on this topic. Reviewers have told us that our research is “not science,” is “not ethical,” is “unseemly,” is “not feminist,” or is “too feminist.” In contrast to research we have undertaken in other areas, it seems to us that this work is often held to a higher standard; more is required of us to prove it is worth publishing, or that it is even research at all.

A portable bio lab that one group used to practice sterilization and culture
yeast whilst discussing internet memes, zine culture, and other feminist tactics.

And this has been our experience of trying to investigate women’s health at SIGCHI venues since running the “Motherhood and HCI” workshop at CHI 2013. There, we had conversations with other attendees who wanted to spend their research time contributing to agendas around women’s health and motherhood, but who couldn’t quite find the right way to persuade their research groups and organizations to allow them to research such topics. Or those who had to invest in long discussions as to why a topic that “only” affected women was worthy of more than a smile. Somehow thinking about women’s health and the female body is considered daring and brave, and somehow designing for the intersection between technology and women’s bodies is taboo. It sits on the fringe of CHI, and it absolutely shouldn’t. There are untold opportunities for technical and interactional innovations through focusing on women’s health. Those thinking about on-body and wearable computing might find their “killer apps” exactly within this space. There are rich opportunities for designing for advocacy and activism, alongside extremely complex and sensitive settings that are entirely uncharted within the community. We need to find ways of making this type of research mainstream, because the potential for HCI to create positive impacts for women globally is great.

But why is it so hard to legitimize work in this area? Well, this year we think we’re starting to understand. We are passionate about women’s health and women’s access to healthcare services, and angry that this access is being limited and that women’s lives are at risk as a result. We’re angry enough that we wanted to use our hacking women’s health workshop as an opportunity to not only talk about our research, exchange ideas, and be creative, but to generate impact outside our community. We thought it would be fun and useful to use activities in our workshop as a way of fundraising for Planned Parenthood, and plans were hatching for all sorts of crafts that we could sell at the conference to generate some small donations to this essential women’s and sexual health NGO. However, we were unable to do this since fundraising for any particular cause is not permitted at SIGCHI conferences. 

In the inclusive parenting group, participants work on the (digital) campaign
material that came out of discussions beginning with breastfeeding.

In the last couple of years, the CHI conference has run CHI4Good days of service. The thrust of these days has not been to raise funds, but to offer time and support (technical, design, research, and otherwise) to selected charities as part of the conference. This is a great way of ensuring meaningful impact for the wider global community. But we wondered why it was that CHI could support charities in these ways, and not others. The answer we received was that charities had applied and been chosen in advance of the conference, and that organizations would not have been selected for CHI4Good if considered to be of a “polarizing political nature.” So is Planned Parenthood too politically polarizing? Can CHI and its attendees contribute to charities and organizations as long as they are considered not too divisive? Animals, children, cycling, yes? Reproductive rights and providing healthcare to women with limited recourse to funds, no? By limiting the organizations we can work with during SIGCHI events, do we limit the impact of the community and potentially marginalize the organizations and charities that need support, in favor of those which are potentially perceived to be more agreeable? And please don’t misinterpret our frustration here. We are not arguing that children, animals, and cycling are not important and do not deserve support. We are simply questioning the biases that this raises not only in who we support as an association, but also who we decide we can do research with, which topics are OK to research, and which are not.

The workshop "hacked" the exhibition space at the opening reception by
installing the results from the workshop on a table next to an ice-cream stand.

Posted: Mon, June 19, 2017 - 4:58:24

Madeline Balaam

Madeline Balaam designs interactions for women’s health, digital health, and wellbeing.


HCI Across Borders @CHI 2017

Authors: Neha Kumar
Posted: Thu, May 25, 2017 - 5:08:49

We are living in uncertain times where some borders are more visible than others. Even in our increasingly globalized cultures, as people and goods move from one place to another, across socioeconomic strata where multiple forms of translation take place between languages and disciplines, there can still be many barriers and dead ends to communication. Yet the field of human-computer interaction (HCI) is continuing to develop a more mature understanding of what a “user” looks like, where users live, and the sociotechnical contexts in which their interactions with computing technologies are situated. The time is therefore ripe to draw attention to these barriers and dead ends, physical and otherwise, hopefully enriching the field of HCI by highlighting diversity and representativeness, while also strengthening ties that transcend boundaries. 

Over a year ago, we were scrambling to put together the HCI Across Borders (HCIxB) workshop at CHI 2016 in San Jose. This was to be attended by over 70 HCI researchers and practitioners from all over the world whose work attempted to cross “borders” of different kinds. Some of their home countries had never been represented at CHI before, and we nervously and excitedly brainstormed about how to welcome these participants to a conference that was both exhilarating and overwhelming to attend. Multiple hectic days and nights later, and with a lot of help from SIGCHI and Facebook, it all worked out. And in three more months, we were ready to do it again! This time, it was with a team that had come together at HCIxB in San Jose, though several of us were meeting each other for the first time. This incredible, high-functioning team included Nova Ahmed from Bangladesh, Christian Sturm from Germany, Anicia Peters from Namibia, Sane Gaytan from Mexico, Leonel Morales from Guatemala, and Nithya Sambasivan, Susan Dray, Negin Dahya, and myself, who are based in the U.S. but who have spent significant portions of our lives crossing borders for HCI research. Naveena Karusala, our beloved student volunteer who helped us with much of the planning, also deserves special mention here. 

Our second year turned out to be bigger, even more successful and rewarding than the first, as we organized the first ever ACM CHI Symposium on HCI Across Borders on May 6–7, 2017. Before we could really get a handle on what was happening, 90 individuals from 22 countries were registered to attend, with 65 accepted papers. Many frazzled emails were exchanged with the space management team in the weeks leading up to CHI. The range of research topics was vast and included domains that are rarely encountered in mainstream (or primarily Western) HCI research. We had asked all submissions to highlight how their work aimed to cross borders and which borders these were. The connections drawn were illuminating and groundbreaking. While one paper aimed to translate video-creation processes from a maternal health deployment to provide instruction on financial services in rural communities, another took a meta approach to unpack the area of overlap between the fields of social computing and HCI for development (HCI4D). Many submissions made gender a focus, and mobile technologies (smart and otherwise) were widely targeted. These papers are all available for reading online.

The HCI Across Borders family at CHI 2017

The symposium began with introductions done madness-style as each participant took up to 45 seconds to tell everyone who they were, where they came from, and why they were attending, also sharing a fun fact about themselves (always the hardest!). This was followed by a poster session that lasted 90 minutes. Each workshop paper was represented by a poster and participants walked around the room with Post-its, leaving feedback as they deemed appropriate. Purva Yardi from the University of Michigan won Best Poster, chosen by the SIGCHI Executive Committee’s Vice President for Conferences Aaron Quigley. Purva’s paper was titled “Differences in STEM Gender Disparity between India and the United States.” Lunch followed the poster session, and then it was time for some levity. We played the silent birthday game, which required all 90 participants to arrange themselves in the order of their birthdays (month and date) without talking. There were more games organized across both days, including a round of musical chairs, which was nothing short of chaotic. We had short debrief sessions after every game, when we discussed the challenges that arise when we try to communicate across languages and other cultural norms; for example, the month-first date format that the U.S. follows is different from the one followed by most other countries.

Much of the weekend was devoted to working in teams on potential collaborations. These teams were formed based on topics of interest that emerged from the poster session. Clusters from poster topics were created by a few volunteers and teams were formed according to these clusters, also leaving room for participants to change teams as they pleased. Some of the topics that these teams worked on included education, health and gender, social computing, and displaced communities. The final deliverables for these teams included a short presentation on Sunday afternoon and a timeline for how the team planned to take their ideas forward over the following year, before we all came together (hopefully again) at the next HCIxB. Some examples included research studies that would span multiple geographies, while others were more community-focused, such as the development of a “co-design across borders” community. 

A major component of the weekend included four conversations or dialogs that we tried to have as a group. The first—“Then and Now”—was brief, entailing an introduction of HCIxB and how the community had formed over the years, dating back to CHI 2007 when a few of the people in the room had organized the Development Consortium to bring together researchers working in the “developing” world. The second was a dialog on mentorship and what it meant for those in the room to seek or offer mentorship. This conversation has led to the launch of our “Paper xChange” program for the community, whereby we will match those seeking help on a CHI 2018 submission with those willing to offer the kind of help being sought. On day 2, we had our third dialog to discuss the steps needed (along with potential challenges and opportunities) with regard to setting up SIGCHI chapters in cities across the world. Tuomo Kujala, the SIGCHI Vice President for Chapters, was generous enough to offer an overview. Our final dialog took place just before closing, as we wrapped up and brainstormed about how we could keep ourselves busy and growing through the year. We also had the honor of quite a few visits from the SIGCHI leadership team, who not only supported many participants so they could attend the symposium but also made us all feel welcome throughout the weekend.  

In addition to the above, there were several memorable moments and special conversations. I know I speak for many when I say that the weekend was unforgettable in so many ways. In a nutshell, it presented a window into what CHI could be if it included more voices—different voices—from across the world, and what HCI looks like when the H represents humans across countries, cultures, socioeconomic strata, genders, and ideologies, among other points of difference. Together and stronger, we move forward on our journey, as a community that is still small but growing fast, toward a truly global and representative field of human-computer interaction. Through this blog post, and other associated posts, as well as Nithya Sambasivan’s brand new forum on “The Next Billion,” we are committed to sharing more stories, research, and dreams from the HCIxB community. Stay tuned!

Posted: Thu, May 25, 2017 - 5:08:49

Neha Kumar

Neha Kumar is an assistant professor at Georgia Tech, working at the intersection of human-centered computing and global development.


Turning people into workbooks

Authors: Tobie Kerridge
Posted: Wed, May 17, 2017 - 3:19:55

With its third biennial conference, RTD 2017 continued to mix intimacy and ambition with lively, informal discussion of research through design, enabled by a focus on the artifacts that come about through research projects. The National Museum of Scotland in Edinburgh hosted the program, which complemented the organizers’ ambition to see design outcomes set against the material archive. Outside of the sessions I took part in, I joined other panelists for a fairly wide-ranging discussion on what the future of research through design meant for them. One question that proved particularly productive was about the articulation of research in commercial settings.

To keep our opening statements concise, panelists were asked for a single image. I broke the brief by taking a spread from a book, soon to be published by Mattering Press, depicting the pages of a workbook that was put together for ECDC (Energy and Co-Designing Communities), a three-year Research Councils UK-funded project. Our group at the Goldsmiths Interaction Research Studio was one of seven research clusters looking at the effects of a government scheme in which the Department of Energy and Climate Change funded around 20 communities across the UK to undertake energy-demand-reduction measures. The research groups took a range of approaches to interpret what these communities were doing. Ours was to design a technical platform, developing the methodology of the Studio, which has variously been described as ludic, speculative, or inventive.

How we used workbooks

In the Studio we use workbooks as a method for synthesizing material generated during fieldwork. We had spent a fair chunk of time with different energy-reduction groups across the UK, including tours of infrastructure such as wind turbines, photovoltaics, and ground-source heat pumps. We heard about the ambitions of these groups and about environmental predictions, the politics of delivering these projects, and the relationship between different constituencies in each setting. Workbooks allowed us to bring together documentation of these different encounters, and also other kinds of material from reviews of literature and practice. Through the synthesis of disparate material into a workbook, as researchers we share and reflect on perspectives. This in turn supports the materialization of topics and we begin to understand design options and possibilities. In this way, making and sharing workbooks is an internal process, supporting activity in the studio.

Workbooks capture insights and ideas that emerged from the initial engagement, often leading to evocative proposals.

The issue with workbooks

My aim was to use the image of the workbook pages to bring to RTD an issue that I haven’t yet resolved. I had taken the workbook to a meeting with one of our demand-reduction groups. It was unusual for us to show a workbook to respondents, but given the extended timescales of our research in relation to the schedule of this group, I thought it would be interesting to take along a document showing research activity. However, looking through the book together was a slightly odd situation, like taking an early draft of fiction to people who recognize themselves in the characters or dialogue. For example, a page titled “the champions,” depicting a rosette, was a reflection on competition and reward as a recurring feature of sustainability programs. Elsewhere, Futures Tourism reimagined wind-turbine farms, with their associated planning issues, as a future tourist destination, represented here by electricity-pylon spotters. The situation was odd in several ways: partly tied to an anxiety about assuming authorship of others’ accounts, partly due to the seemingly insubstantial way in which the images depicted deeply held beliefs, but above all brought about by anticipating responses from these unplanned readers. While I would argue, with conviction, for the rigor and earnestness of our research approaches, the depictions in workbooks necessarily reduce the thickness of our data to support the accretion of detail into design elements.

So why did I hope a retelling of this awkward episode would be useful? While taking studio process into the settings on which they were based was somewhat uncomfortable, it was also productive. First and foremost it supported me as a researcher in thinking through the tensions of treating the commitments of others speculatively. This episode also helped me consider how our practices have rhythms where the action of practice shifts, between letting others provide accounts of their situations and transformations of those accounts into artifacts that speak for others. And I wonder, is there scope within our analytical accounts of practice to attend to these moments when our articulations come under pressure?

Posted: Wed, May 17, 2017 - 3:19:55

Tobie Kerridge

Tobie Kerridge is a lecturer and researcher based in the Department of Design at Goldsmiths, University of London.


Robots, aesthetics, and heritage contexts

Authors: Maria Luce Lupetti
Posted: Fri, April 21, 2017 - 12:11:41

Most people today have not yet had the opportunity to interact directly with a robot in their everyday lives, except perhaps with children’s toys or those charming robotic vacuum cleaners. While there are ongoing experiments with robots in healthcare, many more are employed in high-tech, efficiency-driven environments such as factories. But robots have also long played a large part in our cultural and historical imagination. This is celebrated in current exhibitions such as “Robot” at the Science Museum in London, exploring 500 years of humanoid robots from pop culture to products, and “Hello, Robot. Design between Human and Machine” at the Vitra Design Museum in Germany. Beyond thematic exhibitions, and despite their association with future automation, robots are starting to be used in heritage contexts to provide services, from remote exploration of museum sites to robot-based guided tours. But what happens when robots move from being an attraction to being an agent that shares the same physical space with people?

Beyond the laboratory, everyday contexts require robots to be incredibly stable and reliable over time. This is particularly important for historical sites: robots’ use should not, under any circumstances, damage the heritage, endanger the safety of staff or visitors, or interfere with preservation or visitor enjoyment and appreciation. The challenges emerging from the use of robots in heritage contexts, however, relate not only to functional aspects but also to perceptual ones.

The role of aesthetics

Among the many factors that can determine a robot’s acceptability, such as purpose or safety, physical appearance can instantly affect how the robot is interpreted by the users who might interact with it. It is therefore of primary importance that the robot be designed to communicate its skills and function quickly and easily, since this determines whether people will understand its role and respond appropriately.

However, effective communication of a function doesn’t ensure success. In service environments, for instance, robots share physical contexts with people through activities and interactions, social relationships, and artifacts. Through all these aspects, a robot automatically establishes implicit relationships to which people refer for their interpretations and judgments. Cleaning robots, for instance, introduce novel technologies for cleaning activities that people already perform with vacuum cleaners; beyond being employed for a clear function, their appearance and aesthetics remind users of existing products. This approach helps avoid ambiguity and concern. The employment of humanoid robots for activities usually entrusted to people, such as museum guides, customer care, or home assistance, however, automatically invites comparison between robot and human abilities. As a consequence, this comparison can surface uncanny feelings and concerns. Furthermore, even when people are not replaced, robots are evaluated on their ability to fit into people’s sociocultural context. The design of a robot, then, should take into account the particularities of the specific context in which it will be located.

Given these considerations, a good practice for the design of a robot is a formal synthesis combining a morphology that makes its function explicit with aesthetics familiar to the context. 


In 2015, I was part of a team at Politecnico di Torino, where we designed Virgil, a telepresence robot, in response to these considerations on morphology and familiar aesthetics. The design, consisting of a robot that allows remote exploration of inaccessible areas, was employed at the Racconigi Castle in Italy. It resulted from a broad reflection on the dual nature of cultural heritage as a specific context, one that must support not only preservation but also enjoyment and accessibility for visitors. The castle, one of the royal residences of the Savoy family, is a context rich in artworks, artifacts from daily life, and architecture that preserves a history of almost a thousand years. As a design team from Politecnico di Torino, guided by Prof. Claudio Germak, we designed this application within the context of a project promoted by Telecom Italia Mobile (TIM, the main mobile telecoms provider in Italy) in collaboration with the Terre dei Savoia association.

Our aim was to provide visitors remote access to inaccessible areas of the heritage site, such as the nurses’ rooms, the tepidarium (thermal bath), and other areas currently excluded from the exhibit for safety, restoration, or cataloging reasons. To do so, the robot was managed by a museum guide via a web application. Beyond extending and enhancing the heritage visit, we devoted great attention to possible perceptual drawbacks. We focused on how to communicate the particular role and function of the robot and how to define an aesthetics appropriate for that specific context. Regarding the first point, we chose to keep the main elements, such as the wheels, laser scanner, and camera, visible so that the function of the robot was explicit. Regarding the familiar aesthetics, we analyzed the heritage context from both the physical and cultural points of view. This led to a robot design that, in both form and decoration, recalls existing elements of the heritage site.

Figure 1. The robot Virgil inside an area of the Racconigi Castle. The illustrations
show the two main characteristics of the robot's appearance: the shape and decoration.

The cover, made of PMMA (poly-methyl-methacrylate), has the form of a truncated pyramid. This morphology was chosen to recall similar shapes widespread in the Savoy tradition, used in obelisks, bollards, and other architectural elements and furniture. The choice of a transparent material was determined by the need to ensure maximum lightness from both the structural and visual points of view.

Furthermore, the body of the robot was decorated to customize it in relation to the context. The décor represents the Palagiana palm, an existing motif that can be found in many artifacts of the castle. The decoration was thus a way both to achieve aesthetic coherence and to pay tribute to the architect and to the history of a place characterized by continuous evolution over the centuries.

This design approach not only increased the acceptability of providing visitors remote access to inaccessible areas of the heritage site, but also revealed broader design opportunities for considering the aesthetics of robots beyond their functionality and acceptability.

Posted: Fri, April 21, 2017 - 12:11:41

Maria Luce Lupetti

Maria Luce Lupetti is a Ph.D. candidate in Design for Service Robotics at Politecnico di Torino, Italy. Her research focuses on human-robot interaction and play, and is supported by Telecom Italia Mobile (TIM). She was Publicity Chair for the HRI Pioneers Workshop 2017 and a visiting scholar at X-Studio, Academy of Art and Design, of Tsinghua University, China.


Design research and social media

Authors: Fatema Qaed
Posted: Thu, April 13, 2017 - 11:37:54

My fieldwork on learning spaces in Bahrain took place during the Arab Spring of 2011. This initially created significant cultural and political difficulties, not just for me but for many others attempting to collect data on any topic in many Arabic-speaking countries. At the same time, it created a great opportunity. People’s voices were raised on social media, as they were no longer able to present issues through other types of media such as newspapers and TV. As a designer, this was interesting, since it showed the power of users when they decide to create change. I started to search through hashtags related to my research (such as #classroom) and discovered a different way of looking at learning spaces. 

Teachers using social media constantly share their problems with the design of classrooms, such as issues of overcrowding. Others kindly reciprocate by sharing how they have overcome these issues through the redesign of space. These everyday solutions are not discussed in academic literature, yet responding to problems, creating solutions, and sharing these on social media feels more authentic and reliable. Teachers are taking action on their own terms, without someone monitoring their behavior, as can often happen in more traditional data-collection fieldwork. So social media offered insights into rich virtual communication between people (e.g., with the same hashtag interests) from many cultural backgrounds. I would argue that the diversity of these shared resources and the problem-solution dialogue between users can greatly enrich the way designers understand who it is they are designing with and for. 

During my research, I tried to understand the value of learning-space design for its users and why classrooms have stayed the same for a long time. I then wanted to use this to introduce different design concepts for classrooms of the future. The three overlapping phases—contextual review, users’ perceptions of learning spaces, and participatory design—were enriched with particular knowledge from multiple practices of redesign shared on social media resources. 

The contextual review phase started by understanding space from my own research perspective, by reviewing the literature about learning theories, learning-space design elements, and the importance of learning spaces for both teachers and students. Online social media resources at this phase played an important role in adding further examples of published theories, but also rich examples of practice. Because many learning theories were implicit in these examples, user perceptions of learning spaces were not always connected to academic literature. Online examples therefore offered a rich clarification of the theoretical knowledge about learning-space design; for example, Figure 1 shows Palatre and Leclère’s color use for nursery school learning spaces. The absence of practical examples in more theoretical accounts encouraged me to look online for further examples that went beyond design as a mediator between theory and practice.

Figure 1. École Maternelle Pajol.

In the second phase of my research, extending my understanding of users’ perceptions, blogs were used to accompany initial fieldwork in Bahrain. I looked at what teachers were doing, saying, and making, in particular the way teachers and students were adapting and redesigning space. This helped to provide insights and raised awareness of users’ voices in learning-space design. The specific research value of social media for both phases was often platform specific.

Facebook: This huge, reachable community allowed me to communicate directly with a diverse group of classroom users across, for example, age range, gender, occupation, country, and culture. On Facebook I posted two questions publicly on my timeline: one asking for reflections and memories about people’s own classrooms, and one asking what the classroom of their dreams would look like.

Twitter: Hashtags in messages (# prefix) enabled search and invited responses on particular topics. This encouraged me to search for tags (e.g., #classroom, #learning_space) to see what users had shared. These searches linked through to wider social networks via hashtags on Pinterest and blogs.

Blogs: As reported in The Guardian newspaper in the U.K., “If you want the truth about school life, read the teachers’ blogs.” 

Blogs enabled me to follow teachers’ daily activities, as blog data supported close interaction between the teachers and myself. The data collected therefore went beyond problem and solution examples and revealed valuable ways of communicating and interacting with teachers.

Flickr: This is a rich image platform where teachers constantly share images of their classroom displays. These images revealed teachers’ competence in everyday design and how they use different design elements to support their teaching methods. 

Pinterest: This platform can be understood as a visual montage of photos shared by multiple users, amounting to a “catalogue of ideas.” Pinterest’s search tool enabled a quick visual scan of relevant topics, with links through to related websites and blogs. Here, I found a huge, varied, and active teacher network sharing rich teaching and learning experiences, such as classroom activities, classroom makeover examples, teaching tools, and teaching strategies. Pinterest was an especially rich visual data source on how teachers adapt their learning spaces and how they respond to space and objects to support their teaching methods. Additionally, this rich data went some way toward explaining how relationships are built between users and design elements within classrooms. 

In the third phase, participatory design, I designed a tool called Classroom Design Recipe, inspired by a Convivial Tool approach. This was developed as a type of teaching assistant to build on both theoretical knowledge and findings on users’ understanding of spatial practice. The box contained different sets of cards (Figure 2) designed to empower teachers’ use of and redesign of learning spaces and facilitate different teaching and learning methods using spatial design elements (wall, flooring, and ceiling).  

Figure 2. Classroom Design Recipe cards.

Social media can enrich each phase of design research. If considered early, it can help researchers reflect on academic literature and see how theory is put into practice from users’ varied perspectives. It can also help researchers understand problems in context and how people share them publicly to communicate connections and interests. In my own study, it helped build a better understanding of users, offering examples of diverse solutions that users shared regardless of cultural and geographical boundaries, and suggesting potential needs and previously unarticulated ideas. One example of this was how teachers shared images of classroom space, inadvertently showing how they used windows as display boards. This inspired me to suggest different design opportunities, combining social media visuals with students’ ways of learning and teachers’ ways of teaching (Figures 3 and 4).

Figure 3. Interactive window design.


Figure 4. Lego window design.

Social media includes an ever-increasing variety of free platforms that can empower designers to understand users’ diverse needs, communicate with them closely, and potentially design better solutions. Much of the current literature highlights the value of “big data” constructed by users in social media. However, from a design perspective the real value, in my own experience, was in creatively capitalizing on these multiple varieties of data to gain different perspectives from interconnected communication resources.

Posted: Thu, April 13, 2017 - 11:37:54

Fatema Qaed

Fatema Qaed is an assistant professor at the University of Bahrain. Her research interests include design thinking, interior design, virtual reality exhibitions, and using the "convivial tools" concept to make tools that empower users.


Why HCI history matters!

Authors: Daniel Rosenberg
Posted: Thu, April 06, 2017 - 3:04:33

I want to use this blog entry to publicly thank Jonathan Grudin, both personally and on behalf of the larger HCI community, for authoring From Tool to Partner: The Evolution of Human-Computer Interaction. His recently published book fills a long-standing gap in our professional narrative by providing a written history documenting how the multidisciplinary nature of our profession coevolved with the technology megatrends of the last 60 years.

For me personally, reading From Tool to Partner felt like fondly flipping through decades of family photo albums, while at the same time finding connections and patterns that I never realized existed or never understood the origin of before. 

My entry into the field of Human Factors began in 1981, and I remain an active participant as an adjunct professor in San Jose State’s graduate program in HF/E following a 35-year-long HCI-focused industry career. I first met Jonathan at MCC, the American 5th-generation project mentioned in chapter 7. My employer at the time, Eastman Kodak, was one of the sponsoring organizations, and I was their representative to the HCI program track where Jonathan was on staff. I have belonged to many of the professional organizations discussed in this book and attended many of the early conferences mentioned, the most seminal of which for me was the first Interact conference (1984) in London. In addition, I am a chapter co-author of the first Handbook of Human-Computer Interaction (Helander Ed., 1988), mentioned in chapter 8.

While I could lend my own perspective on the evolution of various HCI organizations, their politics and conferences, academic refocusing, and the technical milestones that Jonathan meticulously chronicles, for this blog that is neither needed nor appropriate. Suffice it to say that, based on my own journey, this book appears to be quite accurate, and it was clearly painstakingly researched.

Jonathan covers the progression of HCI from expert use on early mainframes to the discretionary use of personal apps, powered by the cumulative effect of Moore’s law. To date, this has resulted in the mobile experience now dominating the market and in the foreseeable future to be replaced by ubiquitous embedded computing. He pays tribute to our founders J.C.R. Licklider, Douglas Engelbart, Ivan Sutherland, and others, documenting their amazing vision while articulating both when, decades later, we have finally achieved that vision, and where we still have a significant distance to traverse. Like the author, I offer no target date for the singularity. This is still the realm of science fiction in my opinion.

One of the most illuminating aspects of the chronological narrative presented is the timing when the various disciplines of Human Factors, Computer Science, Library Science, and Design entered the mainstream of HCI practice and how they affected our shared trajectory. 

A particularly salient recurring theme in the book is the decades-long tension between the fields of AI and HCI. As noted, funding ebbed and flowed for AI research through numerous periods of “summer” followed by the “winter” of unfulfilled, overhyped promises resulting in huge budget cuts. When AI was down in the dumps, money and talent flowed into HCI research and vice versa. We are now in the middle of the 3rd AI summer, with autonomous vehicles playing the poster-child role in the media. 

For those newer to HCI professionally, From Tool to Partner will obviously not invoke the same nostalgic reaction it had for me. But most importantly, the words of the philosopher George Santayana apply. To wit: “Those who cannot remember the past are condemned to repeat it.”

History matters!

For skeptics who might think the Santayana quote is not applicable to HCI (and just another nostalgic detour on my part) let me provide a recent concrete example. 

Last year I undertook a consulting engagement to help a Silicon Valley startup productize an innovative chatbot technology. Chat is “the new paradigm,” they told me. The history of HCI tells us this is not so! There is a significant HCI literature on command language interaction that was directly applicable to the usability problems caused by the lack of a consistent grammar, both in the design tool for scripting conversations and in the executable result when users encounter a bot instead of a human to assist them in finding their lost luggage or replacing a missing ticket. It turned into a short consulting engagement because the boost provided by knowledge of HCI history was a major benefit to this startup. If they had taken a trial-and-error approach to refinement (another topic whose flaws Jonathan points out) instead of leveraging prior HCI knowledge, they would have run out of money long before bringing a viable product to market.

One of my goals at SJSU has been to introduce a course on HCI history into our program. I believe this will support our graduates in becoming more efficient and effective user researchers and designers. And it will provide a calming counterbalance to the turbulence they will surely encounter over their careers as the ever-accelerating pace of technological change continues.

And now, thanks to Jonathan I have a textbook I can submit to the university curriculum committee that proves that the field of HCI history exists!

Thanks again, 


Posted in: on Thu, April 06, 2017 - 3:04:33

Daniel Rosenberg

Daniel Rosenberg is Chief Design Officer at rCDOUX LLC.
View All Daniel Rosenberg's Posts


A history of human-computer interaction

Authors: Jonathan Grudin
Posted: Fri, March 24, 2017 - 10:46:31

A journey ended with the publication in January of my book, From Tool to Partner: The Evolution of Human-Computer Interaction.

The beginning

Ron Baecker’s 1987 Readings in Human-Computer Interaction quoted from prescient 1960s essays by Vannevar Bush, J. C. R. Licklider, Douglas Engelbart, Ivan Sutherland, Ted Nelson, and others. I wondered, “How did I work for years in HCI without hearing about them?”

When Ron invited me to work on a revised edition, more questions piled up. Our discussion of the field’s origin oscillated between human factors and computer science as though they were different worlds, yet the Human Factors Society co-sponsored and participated in the first two CHI conferences. Then they left. Why? Similarly, management information systems researchers presented interesting work at early CSCW conferences, then vanished. Another mystery: NSF and DARPA program managers who funded HCI research never attended our conferences. Readings in HCI devoted a large section to Stu Card and Tom Moran’s keystroke-level and GOMS models, the most praised work in CHI. Why did almost no one, including Card and Moran, build on those models? While working in AI groups at MIT, Wang Laboratories, and MCC, I wondered why two fields about technology and thought processes seemed more often at loggerheads than partners. 

There were other mysteries. For example, in 1984, my fellow software engineers loved the new Macintosh, even though we worked for an Apple competitor. We agonized as the Mac flopped for a year and a half. Apple slid toward bankruptcy. Then Steve Jobs was fired and the Mac succeeded. What was that about?

I located people who were involved in these matters. Eventually, I found convincing answers to every question that I had begun with and to other questions that arose along the way (convincing to me, anyway). I wrote a short encyclopedia entry on HCI history, an article for IEEE Annals of the History of Computing, handbook chapters, and other essays. I edited and sometimes authored history columns for Interactions magazine. Finally, the book.

Could you benefit from reading the book?

The book is primarily about groups of researchers and developers who contributed to HCI, coming from computer science, human factors, management information systems, library and information science, design, communications, and other fields. By stepping back to get a larger picture, you might see how your work fits in, where you might find relevant insights, and where you are less likely to.

A better name for our field might be “human-computers interaction.” Machines changed radically every decade. We all know Moore’s law, but when I read a paper that describes the object of study as “a computer,” I unconsciously picture my current computer. This makes it easy to fail to see when an advance was primarily due to new hardware; for example, the Mac succeeded when “the Fat Mac” arrived with four times as much memory, soon followed by the faster Mac II. Human-computer interaction in 1970 and 2000 were both human-computer interaction, but no one would confuse human-mainframe interaction and human-smartphone interaction.

Patterns emerge. Some topics wax and wane, and then wax again. Artificial intelligence had summers and winters that affected HCI. Interest in virtual reality surged and receded: Alphaworld in the mid-90s, Second Life in the mid-2000s, and now Oculus Rift, HTC Vive, and Hololens. Another recurring focus of research and development was desktop video-conferencing. Was it inspired more by a pressing need or by telecoms interested in selling bandwidth?

However, there was steady progress. It took longer than many expected, but we collectively built the world imagined by Vannevar Bush, J. C. R. Licklider, Douglas Engelbart, Ivan Sutherland, Ted Nelson, Alan Kay, and others. In the 1960s, a few engineers and computer scientists used computers. Yet a common thread in their writing was of a time when people in diverse occupations would use computers routinely. We’re there. Ivan Sutherland wrote a program that changed a display to create a brief illusion of movement and speculated that someday, Hollywood might take notice. Thirty years later, Toy Story was playing in theatres. Some might quibble over whether every detail has been realized, but we are effectively there.

The concept “from tool to partner” is owed to Licklider, who in 1963 forecast that a period devoted to human-computer interaction would be followed by a period of human-computer partnership, which in turn would end when machines no longer needed us. For many years, we interacted with computers by feeding in a program, typing a command, or pressing a key, after which the computer produced an output or response and then waited for the next program, command, or key. Today, while we sleep, software acts on our behalf. It accepts incoming email, checks it for spam, and filters it in response to prior instructions. Software considers our history and location in selecting search query responses and advertisements. My favorite application of recent years eliminates the need to type a password: When I start up my tablet, it turns on the camera and sees that it is me! A dull sentry (“I know your caps lock key is on and you typed the caps lock version of your password, but turn off caps lock and try again”) is transformed into a cheerful colleague (“Jonathan—Hello! Welcome.”) as I quickly go on my way.

Of course, issues arise. The book identifies challenges that will long be wrestled with. Like Samantha in the film Her, my software is interacting not only with me—it is also interacting with the owners of the sites I visit and the advertisers who buy space. Software can mislead us by not being consistent in ways that people usually are. It can be an expert at chess, abysmal at checkers, and unable to play dominos at all. A sensor can be highly expert at differentiating gas odors while neglecting to suggest that I avoid lighting a match to look. And we steadily move into a goldfish bowl where all activity is visible, with positive and negative repercussions.

Keep it impersonal?

How do you approach writing about a time much of which you lived through? The “new journalism” of the 1960s showed that personal experiences can reveal hidden complexities and the emotional impact of events. But I wanted to write a history, not a memoir. My ideal is Thucydides, who, more than halfway through History of the Peloponnesian War, stunned me long ago: “Meanwhile the party opposed to the traitors proved numerous enough to prevent the gates being immediately thrown open, and in concert with Eucles, the general, who had come from Athens to defend the place, sent to the other commander in Thrace, Thucydides, son of Olorus, the author of this history, who was at the isle of Thasos, a Parian colony, half a day's sail from Amphipolis, to tell him to come to their relief. On receipt of this message he at once set sail with seven ships which he had with him, in order, if possible, to reach Amphipolis in time to prevent its capitulation, or in any case to save Eion.” Thucydides then briefly describes reaching Eion in time but how his opponents out-maneuvered his allies and convinced Amphipolis to join the other side. His small part played, he disappears from his history without another word about his experiences.

I compromised by including some personal experiences in an appendix. For some, they could convey the significance of events that otherwise can seem remote. That said, a history is a perspective; it is never wholly objective. An author decides what to omit and what to include. For example, I do not cover the evolution of key concepts, theories, and applications to which so many people contributed. When I cite specific work, it is usually where the boundaries of the fields working on HCI were defined or transcended. Prior to the appendix, my technical contributions get less attention than Thucydides’ campaign.

A second short appendix lists resources that you could consult in writing another history of HCI, perhaps a conceptual history. Please! While writing, I shifted from hoping to wrap everything up to hoping to live to see other HCI histories. We have been privileged to participate in an exceptional period of human history. Technology evolves so rapidly that those not yet active will struggle to understand what we were doing, yet what we are doing will shape their lives in subtle ways. We owe it to them to leave accounts.

Posted in: on Fri, March 24, 2017 - 10:46:31

Jonathan Grudin

Jonathan Grudin works on support for education at Microsoft. Access these and related papers at under Prototype Systems.
View All Jonathan Grudin's Posts


Toward affective social interaction in VR

Authors: Giulio Jacucci
Posted: Mon, March 20, 2017 - 3:47:26

I first encountered VR in the late 90s, as a researcher looking at how it provided engineers and designers an environment for prototyping. After that I became more interested in how to augment reality and our surrounding environment; still, although VR had been around for many decades by that point, many aspects, particularly from an interaction point of view, seemed to deserve further investigation. VR has gained renewed interest thanks to the recent proliferation of consumer products, content, and applications. This is accompanied by unprecedented interest and knowledge among consumers, and by a maturity that means VR is considered less and less to be hype and more and more to be market ready. However, important challenges remain, associated with dizziness, the limits of current wearable displays, and interaction techniques. Despite these limitations, application fields are flourishing in training, therapy, and well-being, beyond the more traditional VR fields of games and military applications.

One of the most ambitious research goals for interactive systems is to be able to recognize and influence emotions. Affect plays an important role in all we do and is an essential aspect of social interaction. Studying affective social interaction in VR can be important to the above-mentioned fields to support mediated communication. For example, in mental or psychological disorders VR can be used for interventions and training to monitor patient engagement and emotional responses to stimuli, providing feedback and correction on particular behaviors. Moreover, VR is increasingly accepted as a research platform in psychology, social science, and neuroscience as an environment that helps introduce contextual and dynamic factors in a controllable way. In such disciplines, affect recognition and synthesis are important aspects of numerous investigated phenomena.

Multimodal synthetic affect in VR

In social interaction, emotions can be expressed in a variety of ways, including gestures, posture, facial expressions, speech and its acoustic features, and touch. Our sense of touch plays a large role in establishing how we feel and act toward another person, considering, for example, hostility, nurturance, dependence, and affiliation. 

Having done work on physiological and affective computing and on haptics separately, I saw a unique opportunity to combine these techniques, across modalities, to develop synthetic affect in VR. For example, the emotional interpretation of touch can vary depending on cues from other modalities that provide a social context. Facial expressions have been found to modulate touch perception and the post-touch orienting response. Such multimodal affect stimuli are perceived differently according to individual differences of gender and behavioral inhibition. For example, behavioral inhibition sensitivity in males was associated with stronger affective touch perception.

Taking facial expressions and touch as modalities for affective interaction, we can uncover different issues in their production. Currently, emotional expressions on avatars can be produced using off-the-shelf software that analyzes the facial movements of an actor modeling basic expressions, head orientation, and eye gaze; the resulting descriptions are then used to animate virtual characters. Emotional expressions in avatars are often the result of a multi-step process that ensures they relate to the intended emotions. The expressions are first recorded by capturing a live presentation from a professional actor using facial-tracking software that also animates a virtual character. They can then be manually adjusted to last exactly the same time and end with a neutral expression, with a different animation created for each distinct emotion type. Finally, the expressions can be validated by measuring the recognition accuracy of participants who watch and classify the animations. This process works well for customizing facial expressions to the intended use in replicable experiments. But it is resource intensive and does not scale well to uses where facial expressions must be generated in greater variation (to express nuance) or generalized, since every expression is unique. While mediated touch has been shown to affect emotion and behavior, research into the deployability, resolution, and fidelity of haptics is ongoing. In our recent studies, we compared several techniques for simulating a character’s virtual hand touching the hand of a participant.

Emotion tracking is more challenging with a wearable VR headset, as facial expressions cannot be easily tracked by recent computer vision software. Physiological sensors can be used to recognize changes in psychophysiological states or to assess emotional responses to particular stimuli, and they are being integrated into more and more off-the-shelf consumer products, as in the case of EDA (electrodermal activity) and EEG. EDA provides a way to track arousal, among other states, and is easy to use unobtrusively, but it is suited to changes on the order of minutes, not to time-sensitive events on the order of seconds or milliseconds. EEG, on the other hand, increasingly available in commercial devices, is better suited to temporally precise measurements. For example, how emotional expressions modulate the processing of touch can be studied through the event-related potential (ERP) in EEG resulting from touch. Studies show that the use of EEG is compatible with commonly available HMDs.
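Concretely, a touch-locked ERP is just the average of baseline-corrected EEG epochs time-locked to each touch event; averaging cancels activity that is not synchronized with the touch. The sketch below is illustrative only (the function, parameters, and synthetic signal are my own, not taken from the studies described):

```python
import numpy as np

def touch_erp(eeg, events, fs, pre=0.2, post=0.8):
    """Average EEG epochs around touch-event samples to estimate the ERP.

    eeg      : 1-D array of samples from one electrode
    events   : sample indices at which the virtual touch occurred
    fs       : sampling rate in Hz
    pre/post : window before/after each event, in seconds
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for ev in events:
        if ev - n_pre < 0 or ev + n_post > len(eeg):
            continue  # skip events too close to the recording edges
        epoch = eeg[ev - n_pre : ev + n_post].astype(float)
        epoch -= epoch[:n_pre].mean()  # baseline-correct on the pre-touch window
        epochs.append(epoch)
    # Non-phase-locked activity averages toward zero across trials
    return np.mean(epochs, axis=0)

# Synthetic demo: 10 Hz background activity plus noise, one "touch" every 2 s
fs = 250
t = np.arange(fs * 60) / fs
eeg = 1e-6 * np.sin(2 * np.pi * 10 * t) + 1e-7 * np.random.randn(len(t))
events = np.arange(fs * 2, fs * 58, fs * 2)
erp = touch_erp(eeg, events, fs)
print(erp.shape)  # one averaged 1-second window: (250,)
```

Real pipelines (e.g., with a library such as MNE-Python) add filtering, artifact rejection, and multi-channel handling, but the epoch-and-average core is the same.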

Eye tracking, which has recently become commercially available inside HMDs, can be used both to identify whether users attend to a particular stimulus, in order to track their emotional response to it, and to track psychophysiological phenomena such as cognitive load and arousal. As an example, the setup in Figure 1 includes VR, haptics, and physiological sensors. It can be used to simulate a social interaction at a table, where mediated multimodal affect can be studied as an avatar touches the user’s hand while delivering a facial expression. The user recognizes the virtual hand in Figure 1A as her own hand, as it is synchronized in real time.

Figure 1. Bringing it all together: hand tracking of the user through a glass. Wearable haptics, an EEG cap, and an HMD for VR allow the simulation of a situation in which a person sitting in front of the user touches her hand while showing different facial expressions. 

This setup can be used for a number of training, entertainment, or therapy purposes. For example, a recent product applies VR to treating anxiety patients, and recent studies have evaluated the impact of training patients with autism spectrum disorder to apply this to dealing with anxiety. In our own recent study we used the same setup for an air hockey game. The haptics simulated the hitting of the puck, and the emotional expression of the avatar allowed us to study effects on players’ performance and experience of the game.

Future steps: From research challenges to applications

VR devices, applications, and content are emerging fast. An important feature in the future will be the affective capability of the environment, including the recognition and synthesis of emotions.

A variety of research challenges exist for affective interaction:

  • Techniques for recognizing users’ emotions from easily deployable sensors, including the fusion of signals. Physiological computing is advancing fast in research and in commercial products; the recent success of vision-based solutions that track facial expression might soon be replicated by physiology-based sensors.

  • Synthesis of affect utilizing multiple modalities, as exemplified here: combining touch and facial expression, for example, but also considering speech, its acoustic features, and other nonverbal cues. A key challenge is ensuring that these multimodal expressions are generally valid while still allowing for unique variation in their generation.

Figure 2. RelaWorld using VR and physiological sensors (EEG) for a neuroadaptive meditation environment.

While these challenges are hard, the potential application fields are numerous and replete with emerging evidence of their relevance:

  • Training in, for example, emergency or disaster situations but in principle in any setting where a learner needs to simulate a task in an environment where she needs to attend to numerous features and social interaction. 

  • In certain training situations, affective capabilities are essential to carrying out the task, such as in therapy, which can be more physical, as in limb injuries, or more mental, as in autism disorder and social phobias, or both, as in cases such as stroke rehabilitation. In several of these situations—for example, mental disorders such as autism, anxiety, and social phobias—the patient practices social interaction while monitoring how they recognize or respond to emotional situations.

  • Wellbeing examples such as physical exercise and meditation (Figure 2). Affective interaction here can motivate physical exercise or monitor psychophysiological states such as engagement or relaxation.

Posted in: on Mon, March 20, 2017 - 3:47:26

Giulio Jacucci

Giulio Jacucci is a professor in computer science at the University of Helsinki and founder of MultiTaction Ltd. His research interests include multimodal interaction; physiological, tangible, and ubiquitous computing; search and information discovery; and behavioral change.
View All Giulio Jacucci's Posts


What kinds of users are there? Identity and identity descriptions

Authors: Aaron Marcus
Posted: Mon, March 06, 2017 - 11:37:54

User-centered design of user-friendly products and services is based on the assumption that designers are aware of who the users are. Their understanding is embodied in typical user identities, or personas. There is much discussion about whether these are typical, stereotypical, or archetypical, and how one can account for, or not lose track of, important outliers that might have a valuable influence under unforeseen conditions of use.

Many years ago we published a diagram of user-experience spaces:

Note that Identity is central to the other five spaces in which new products/services can emerge, in the sense that each of the others requires an awareness of the user/customer. There are many interrelationships among all of these spaces. The other five are equally as important as this central one, and some would argue that these other five determine or influence what constitutes identity; the diagram merely means to indicate that there can be products/services that primarily focus on the gathering/availability/sharing/use of identity data. 

Many have analyzed identity and claimed positions, including more contemporary thinkers like Foucault, Hall, Butler, and Baumann, along with older sources such as Erikson. One survey of thinking along these lines is an essay by David Buckingham, “Introducing Identity” [1].

The challenge for product/service developers/designers is determining the identity of the targeted or future users. One can also ask whether users have a single identity or multiple identities, and whether or not their identity/identities cause, or are affected by, the preferred social milieu. There are important questions to ask: Who determines that identity? Marketers? The users themselves? Design teams? Governmental, educational, or professional organizations? Other influential groups? Today in commerce, in technology, as well as in politics and in governance, there is much consternation and hand-wringing about identity-politics, about identifying the identity of groups of people, about deciding which are “targeted” (literally and metaphorically), and what actions to take once the identity or identities of certain people are determined.

Once one starts down the road of identifying identities, one enters a labyrinth of complex social, cultural, historical, and ethical issues. In addition, for any one person or group, it is surprising to discover how complex and manifold the identity characteristics are. Again, who decides which are appropriate, legal, ethical, useful, etc.? Commentaries on contemporary selfhood, otherness, and other identity issues and definitions might question a single identity and favor viewing contemporary individuals in “advanced” societies as having multiple, composite identities and exhibiting multiple cultural competencies. I mean to avoid a simple position about an individual or group in favor of a more complex position that can account for contemporary social phenomena.

I started to think about this a year ago when I learned of a group trying to contact and incorporate various groups of people into the mainstream of Judaism. I shall use this topic of Judaism as an example for several reasons: because I am simply talking about myself and am not trying to describe, judge, or evaluate other kinds of people, and because Judaism is a particularly complex amalgam of religion, people, nation, culture, and other demographic categories.

I ask myself: What kind of Jews have I encountered? What kind of a Jew am I? Established by philosophy or values? Or, by actions? Who decides what kind of Jews there are and who belongs to each of these groups? A book I’ve encountered has prompted me to consider these questions more deeply than ever before.

Every Tongue: The Racial and Ethnic Diversity of the Jewish People, by Diane Tobin, Gary A. Tobin, and Scott Rubin [2], analyzes diversity in ethnic, racial, religious, and other dimensions of the Jewish people, especially in the USA. The book is very stimulating and thoughtful, fact-filled and polemical, challenging and endearing. The data presented in the first chapter, for example on pp. 20ff, and especially the terminology used throughout the book, prompted me to think about how one would categorize or describe one’s “Jewish identity.” I started to make a list on the pages of the book, but then ran out of space. I realized I had never thought to try to look at and think about What kind of Jews are there? (A more recent book has appeared by Aaron Tapper: Judaisms: A Twenty-First-Century Introduction to Jews and Jewish Identities [3].)

Consider that task for any other kind of identity, your own, or that of someone else in another group. Especially a “target group.”

It is fascinating, as I study the weekly Torah portion, or parshah, during the past few weeks in my synagogue’s (Congregation Beth Israel, Berkeley) study session or Beit Midrash, headed by Maharat Victoria Sutton, that we are witnessing and discussing a small group of people (originally 70 coming to Egypt) who are now leaving Egypt after centuries there, part of the time in slavery, and being instructed, forged, created, commanded, and enlightened into a Religion/People/Nation of Israel, with all the complexities of identity, social structures, religious structures, and more, while they are yet wandering in the desert. How timely. How timeless. What makes for one’s identity? What kind of a “card-carrying Jew” am I? What attributes are listed on that card? “What’s in my wallet?” to paraphrase a popular commercial phrase on television these days.

For those of you who identify yourselves as Jewish, what kind of Jew are you? For those of you who think about or have befriended Jews, what kind of Jews are they? 

I wonder if, out there on the Internet, there is a Glossary of All Glossaries, or Taxonomy of all Taxonomies, of Jewish Identity. Probably. I have encountered a few partial solutions.

It would be a fascinating conceptual-verbal-visual-communication challenge to define all of these terms and ask people to build Venn/Euler diagrams of the terms and also to visualize their own identity cluster as a Venn/Euler diagram.

Consider this activity for any identity, for any group, for any target market... Identity is far more complex than one usually considers. Then begin to ask yourself how you would design with that knowledge, and design ethically. How can a mind-map of identity/identities be a tool for better design, if only as a reality check, warning, or immunization against inadvertent limitation or bias.

If you have suggestions for additional categories, please send them to Thanks in advance.

I am indebted to Gilbert Cockton for the Buckingham reference and for his astute critique of an earlier version of this text, which led to many improvements.


1. Buckingham, David. “Introducing Identity.” In Youth, Identity, and Digital Media, edited by David Buckingham, 1–24. The John D. and Catherine T. MacArthur Foundation Series on Digital Media and Learning. Cambridge, MA: The MIT Press, 2008. doi: 10.1162/dmal.9780262524834.001

2. Tobin, Diane, Gary A. Tobin, and Scott Rubin. Every Tongue: The Racial and Ethnic Diversity of the Jewish People. San Francisco: Institute for Jewish Community Research, 2005.

3. Tapper, Aaron J. Hahn. Judaisms: A Twenty-First-Century Introduction to Jews and Jewish Identities. Oakland: University of California Press, 2017.

Posted in: on Mon, March 06, 2017 - 11:37:54

Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.

From stereoscopes to holograms: The latest interaction design platform

Posted: Tue, January 10, 2017 - 11:38:04

Shortly before the widespread availability of photographs in the first third of the 19th century, stereoscopes appeared. We’ve been flirting with portable, on-demand three-dimensionality in experience ever since.

From Sir Charles Wheatstone’s invention in 1838 of a dual display to approximate binocular depth perception through the View-Master fad of the 1960s, people have been captivated by the promise of a portable world. Viewing two-dimensional images never seemed enough; yet to view a three-dimensional image of Confederate dead at Gettysburg or of a college campus helped bring people into connection with something more real.

This image of the original John Harvard statue shows what a typical stereoscope card looked like.

Lately I’ve been experimenting with Microsoft’s HoloLens, an augmented reality (AR) headset that connects to the Internet and hosts standalone content. 

Similar to other wearables such as Google’s now-defunct Glass, HoloLens requires the user to place a device on their head that uses lenses to display content. As an AR device, it also allows the user to maintain environmental awareness—something that virtual reality (VR) devices don’t. 

Many other folks have written reviews of these devices; that’s not my purpose here. Instead, I’ve been thinking about the interaction promises and pitfalls that accompany AR devices. I’ll use the HoloLens as my platform of discussion.

The key issues for interaction in AR center around discoverability, context, and human factors. You have to know what to do before you can do it, which means having a conceptual model that matches your mental model. You need an environment that takes your context into consideration. And you need to have human factors such as visual acuity, motor functions, speech, and motion taken into account.


Discoverability

We talk of affordances, those psychological triggers that impart enablement of action. Within an augmented world, however, we need to indicate more than just a mimicked world.

Once the device sits on a user's head, the few hardware buttons on the band enable on/off and audio up/down. Because the device is quick to turn on or wake from sleep, it provides quick feedback to the user. This critical moment between invocation of an action and the action itself becomes hypercritical in the augmented world. Enablement needs to appear rapidly.

For HoloLens users, the tutorial appears the first time a user turns the unit on. It's easy to invoke it afterwards as well. Yet there's still a need for an in-view image that enables discoverability.


Context

Knowing where you are within the context of the world is critical to meaning. As we understand from J.J. Gibson, cognition is more than brain stimulation: It's also a result of physical, meatspace engagement.

So a designer of an AR system must understand the context in which the user engages with that system. Mapping the environment helps from a digital standpoint, but awareness of objects in the space could be clearer. Users need to know the proximity of objects or dangers, so they don't fall down stairs or knock glasses of water onto the floor.

Human Factors

Within AR systems, the ability to engage physically, aurally, and vocally entails a key understanding of human factors. Designers need to show concern for how much physicality the user needs to engage with. 

In the HoloLens, users manipulate objects with a pinch, a grab, and a "bloom"—an upraised closed palm that then becomes an open hand. A short session with HoloLens is comfortable; too much interaction in the air can get tiring.

Voice interfaces litter our landscape, from Siri to Alexa to Cortana. HoloLens users can invoke actions through voice by initiating a command with  "Hey, Cortana." They can also engage with an interface object by saying "Select." 

Yet too often these designed interfaces smack of marketing- and development-led initiatives that don't take context, discoverability, or human factors into account. The recent movie Passengers mocks the voice interfaces a little, in scenes where the disembodied intelligence doesn't understand the query. 

We've all had similar issues with interactive voice response (IVR) systems. I'm not going to delve into IVR issues, but AR systems that rely on voice response need to take human factors of voice (or lack thereof) into account. A dedicated device such as the HoloLens seems to be more forgiving of background noise than other devices, but UX designers need to understand how the strain of accurate voicing can impact the user's experience.

Back to Basics

User experience designers working on AR need to understand these core basic tenets: 

  • Factor discoverability of actions into a new interaction space.
  • Understand the context of use, to include safety and potential annoyance of neighbors to the user.
  • Account for basic human factors such as motor functions, eye strain, and vocal fatigue.


Observations on finishing a book

Authors: Jonathan Grudin
Posted: Tue, January 03, 2017 - 11:39:03

I’ve only posted twice to the Interactions blog in 8 months, but I’ve been writing, and frequently thought “this would be a good blog essay.” Minutes ago, I emailed in the last proof edit for a book. This post covers things I learned about writing and the English language after a brief, relevant description of the book.

From Tool to Partner: The Evolution of HCI is being published by Morgan & Claypool. Twenty-five years ago I agreed to update the “intellectual history” that Ron Baecker wrote for the first edition of Readings in HCI. He had cited work in different fields; the connections and failures to connect among those fields mystified me. I didn’t have enough time to resolve the mysteries, and other questions surfaced later. I tracked down people to interview and eventually answered, to my satisfaction, each of my questions. When I was invited in 2002 to write an HCI encyclopedia article on social software, I asked to write about HCI history instead. An article in IEEE Annals of the History of Computing and several handbook chapters followed. In early 2016, I decided to update and extend that work, looking across HCI as it is practiced in human factors, management information systems, library and information science, and computer science, with a tip of the hat to design and communication. What do these fields have in common? What has often kept them apart? How did each evolve? A good way to learn about your own field is to understand how it resembles and differs from others.

Surprising things I learned about the English language while working on this book and the handbook chapters are general insights, but they came into focus because writing about history is different from other writing. Here too, contrast brought clarity.

An author may interact with an editor, copy editor, proofreader, compositor, and external reviewers. The focus on language is greatest with copy editors, who clean up punctuation, word choice, grammar, sentence structure, citations, and references. Some handbook publishers outsource copy editing to fluent but not necessarily native English speakers who contract to apply The Chicago Manual of Style. This book’s editors were native speakers who did an excellent job, yet issues arose. Because the editors were good, I realized that inherent ambiguities in English exist and may be irresolvable, although some could be addressed by applying more sophisticated rules.

Years ago, I broached this with someone at The Chicago Manual of Style, who objected strongly to publishers mandating adherence to their guidelines. The guidelines do not always apply, I was told. “The Chicago Manual of Style itself does not always follow The Chicago Manual of Style!” Nevertheless, copy editors need a reliable process, and if adhering to the extensive CMS is insufficient, what can be done? Authors: Work with a copy editor sympathetically but be ready to push back in a friendly fashion. Copy editors: Let authors know you are applying guidelines and consider additional processes. 

Context matters

We know that goals differ among readers of blogs, professional and mass media magazines and websites, conference proceedings, journal articles, and books, though some lines are blurring. The following examples show that even within one venue, a book, goals can differ in significant ways.

Writing about history for scientists is different from scientific writing to inform colleagues. The treatment of citations and dates provides two strong examples. Science aims for factual objectivity, whereas a history writer selects what to include, emphasize, and omit. In scientific writing, the number of supporting references and their authors’ identities can be highly significant. Engaged readers may track down specific references; they may frequently pause and think while reading. The reader of a history (unless it is another historian) is looking for the flow of events. Lists of citations interrupt the flow, whether they are (name, year) or just [N] style. Rarely do readers of a history consult the references. To smooth the flow, histories often move secondary material as well as citations to footnotes or endnotes, whereas scientific writing usually keeps material in the text. Popular histories, including some by outstanding historians, now go so far as to omit citations, footnotes, and endnotes from the text while collecting them in appendices with sections such as “Notes for pages 17–25.”

Dates take on greater significance in a history. When something happened can be more important than what happened. Often, I wrote something like “In 1992, such and such was written by Lee,” and a copy editor changed it to “Such and such was written by Lee (1992).” The details of such and such were not important, only that Lee worked on it way back in 1992. The emphasis shifted; the key point was buried. Similarly, in discussing a period of time, a work written in that period is very different from a work written later about the period. One is evidence of what was happening; the other points to a description of what was happening. The date is crucial for the first, not for the second, which could be relegated to an endnote.

History contrasted with science is a special case, but subtle distinctions of the same nature could affect established versus emerging topics, or slow-moving versus rapidly evolving fields. In a dynamic research area, the year or even the month a study was conducted can be important, a fact that we often overlook by not reporting when data were collected.

The strange case of acronyms

I will work through the most puzzling example. We all deal with acronyms and blends (technically called portmanteaus, such as FORTRAN from FORmula TRANslation). HCI history is rife with universities, government agencies, professional organizations, and applications that have associated acronyms. A rule of copy editing is that the first time such a compound noun is encountered, the expansion followed by the acronym appears, such as National Science Foundation (NSF) or Organizational Information Systems (OIS). After that, only the acronym appears. Copy editors apply this global replace. Unfortunately, in many circumstances it is not ideal or even acceptable. Some are context-dependent, so a copy editor can’t know what is best. In some situations, almost everyone in the author’s field would agree; in others, perhaps not.

One important contextual issue is the familiarity of the acronym to typical readers. Many systems and applications were known only by their acronym. Discussions of Engelbart’s famous NLS system rarely mention that it was derived from oNLine System, or that RAND’s successful JOSS programming language stood for JOHNNIAC Open Shop System. To introduce the expansion is disruptive and unnecessary. I was familiar with the pioneering PLATO system developed at UIUC (aka University of Illinois at Urbana-Champaign : ), but did not know the expanded acronym until last week, when I looked it up for an acronym glossary that reviewers requested I include with my book. The rule of including an expansion only once is fine for a short paper or for an acronym that is familiar or heavily used, such as NSF and CHI in the case of my book. But when a relatively unfamiliar acronym such as OIS is introduced early and reappears after a gap of 50 pages, why not remind the reader (or inform a person browsing) of the expansion? Override the rule! Similarly, an acronym that is heavily used in one field, such as IS for Information Systems, could be expanded a few times when it might be confused by people in other fields (e.g., for information science). These judgment calls require knowledge that a copy editor won’t have.

Other examples are numerous and sometimes subtle. CS is a fine acronym for Computer Science, and CS department is fine, but “department of CS” is awkward and “the Stanford Department of CS” is even worse. Similarly, it seems shabby to say that someone was elected “president of the HFS” rather than “president of the Human Factors Society.” We would not call Obama “President of the US” in a formal essay.

I will conclude with one that I haven’t been able to solve. Consider the sentence, “The University of California, Los Angeles won the game.” That sounds good. But “The UCLA won the game” sounds worse than “UCLA won the game.” Global acronym replacement fails.

Try this experiment: In the sentence “X awarded me a grant,” replace X with each of the following: Defense Advanced Research Projects Agency, Federal Bureau of Investigation, Internal Revenue Service, Indiana University, University of California Los Angeles, National Science Foundation, DARPA, FBI, IRS, IU, UCLA, and NSF. Which sentences should start with the word “The”? I would do so for all of the expanded versions except Indiana University. For the acronyms, I would only do it for FBI and IRS. Global replace often fails, and I cannot find an algorithm that explains all of these. My conclusion is that English is more mysterious than I realized, and authors are well-advised to pay close attention and collaborate sympathetically with editors.
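The point that global replace often fails is easy to see once the copy editor's expand-on-first-use rule is written down as a procedure. The sketch below is a hypothetical, context-blind implementation (the function name and example text are mine, not drawn from any real editing tool); it applies the rule mechanically and, by construction, cannot make the judgment calls described above:

```python
import re

def apply_acronym_rule(text, expansions):
    """Naive copy-editing rule: at an acronym's first occurrence,
    emit 'Expansion (ACRONYM)'; afterwards, emit the bare acronym.
    Context-blind by design, which is exactly the problem."""
    for acro, expansion in expansions.items():
        seen = False
        out = []
        # Split with a capturing group so separators are kept and
        # the text reassembles exactly.
        for token in re.split(r"(\W+)", text):
            if token == acro:
                out.append(acro if seen else f"{expansion} ({acro})")
                seen = True
            else:
                out.append(token)
        text = "".join(out)
    return text

print(apply_acronym_rule(
    "NSF funded the study. Researchers thanked NSF for the NSF grant.",
    {"NSF": "National Science Foundation"}))
# Prints: National Science Foundation (NSF) funded the study.
#         Researchers thanked NSF for the NSF grant.
```

The procedure runs fine mechanically, but it has no way to know that "The NSF awarded me a grant" reads naturally while "The UCLA won the game" does not, or that a reader 50 pages past the first expansion of OIS could use a reminder. Those decisions need exactly the contextual knowledge a global replace lacks.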


Jonathan Grudin

Jonathan Grudin works on support for education at Microsoft. Access these and related papers at under Prototype Systems.

Escape the room: A girl on a drip, a pizza slice, and a smartphone set to stun

Authors: David Fore
Posted: Fri, December 23, 2016 - 10:56:39

One late autumn afternoon I found myself in an Ohio hospital room sitting with a teenage girl I'll call Orleans Jackson. She was spending her 15th birthday “getting poked full of holes,” as she put it. 

Rail-thin with almond eyes, Orleans had her hair in springy dreadlocks. She apologized for “being in total moonface mode,” describing the signature look of those on a regular and robust diet of steroids.

Orleans' visit today was a big deal. She was transiting from a daily cocktail of oral medications (featuring the steroid prednisone) to quarterly infusions of infliximab, a “biologic” drug derived from human genes that would be delivered directly into a vein in her left arm.

From that hand hung a sloppy slice of pizza courtesy of a pair of nurses who felt for any girl forced to spend her birthday at the hospital. In Orleans' other hand she grasped a smartphone for dear life.

Orleans was one of about 30 patients, family members, nurses, doctors, and researchers I followed around that year. This was part of a nationwide ethnographic study into a healthcare ecology where kids were undergoing treatment for—and learning to cope with—symptoms associated with inflammatory bowel disease (IBD). What I learned in the field would inform the design of a “learning health system” called C3N. Funded by the National Institutes of Health (NIH), C3N’s aim was to improve care and drive down costs by providing participants better online and offline means to make and share new knowledge in real time. 

Once our study was complete, we would go on to synthesize our data in hopes of identifying salient psychographic, demographic, and behavioral patterns (personas, usage situations, scenarios, and system requirements) that would help us frame the design work ahead of us.

Orleans was a teenage girl from one of the impoverished and predominantly African-American neighborhoods near the hospital. I also learned from a prior conversation with her father (an auto mechanic from Jamaica I’ll call Floyd) that she was “a closed-mouth girl by nature” who fell into a deep depression two years before when he realized she was bleeding from her gut. This would be the first of many times she would visit the emergency room before finally getting a diagnosis.

My demographics, opportunities, and personality traits, meanwhile, were very nearly the mirror opposite of hers. With all our differences, I needed to figure out how to connect.

Where to start when questioning the Sphinx? How about the end.

Let's chat about poop!

Controlling one’s bowels is a fundamental developmental and social benchmark among humans and other mammals. Cultural taboos—not to mention a widespread physiological aversion to fecal matter—enforce embargoes on serious contemplation or discussion of errant gastrointestinal activity.

The painful and awkward chronic illnesses that fall under the IBD rubric—chiefly Crohn’s disease and ulcerative colitis—can have life-changing physical, social, and psychological implications. In addition to the everyday indignities that define the common American school experience, kids like Orleans must develop clever ways to undertake—without undue notice—dozens of painful, unpleasant, and bloody diarrhea episodes each day.

“This thing isn’t going to kill a kid outright,” one Kentucky mother told me as we sat with her afflicted son in their living room the week before. “But it’ll kill you a little bit every day if you don’t watch out.”

One thing I noticed straight off about this population: Given half a chance, those who deal with IBD are blazingly direct if only to prime the pump of knowledge. But what chance was I giving Orleans? That’s the question I turned over in my head that afternoon as minutes turned to hours and a birthday cake replaced the pizza.

I did what I often do in such situations. I was friendly but hung back. I took a note here and there but kept my head up and eyes open. I observed clinicians and family members who entered and left the room and greeted them when appropriate. I took some snapshots.

Every now and then I glanced at my topic list for inspiration. I asked Orleans about her visitors. She offered cursory responses. I asked her to tell me a story about school. “Boring.” About home? “Boring.” What makes for a no-good-very-bad-day? “The hospital.” What makes for a pretty-good-day-all-things-considered? “Pizza and cake.”

More curt answers. More notes. More hanging out.

I pulled out a deck of brainstorming cards. Nuthin’ doin’. I invited her to draw pictures. Not today. I took a final glance at my topic list and saw I had actually managed to get what I had set out to get. But I also knew enough to know that I knew nothing about how this girl might benefit from this otherwise abstract thing called C3N.

She remained a crystal mystery to me.

When at first you don't succeed, play, play again

Time was getting short. I studied her messing around with her phone and I posed my final question.

“What do you play?”

“Escape the Room.”

That’s when conversation flowed. She showed me the ins and outs of the game, what was “addictive” about it and what was “stupid but fun anyway.” She described the people she encountered online while playing. Nobody with IBD, she said, but at least they knew her name (even if she used a pseudonym) and treated her nice (even if they were competitors). She told me she had never met anybody else with the disease, and that she could trust only one friend with the bloody details.

“The only thing more difficult than a messy flareup at school,” she said, “was being absent all the time.”

She understood what the doctors and nurses were saying about her disease, she explained, but what she really wanted was to meet a girl her age “who could share her truth.” She said that sometimes she could play the game for hours as the time flowed by, which helped calm her nerves and permit her to return to "real life" a little lighter of heart.

Soon the nurses were arriving to bring the treatment to an end. Floyd also showed up to gather his daughter. While we said our good-byes in the moment, we would invite them and others to contribute insight as part of the C3N advisory group.

Let's meet where you are

When I think about Orleans I feel gratitude for reminding me how successful designs form themselves around the lives of those who will depend on them. In the case of this girl, she was looking for opportunities for social contact and a few healthy doses of fun. She wanted to leaven her isolation and reduce stress; she also wanted to increase the likelihood of trustworthy bonds that hold potential for sharing knowledge and feelings.

The thing is this: play is a near-universal interactive mode that permits us to work through all sorts of ambitions and fears, ideas and strategies. Play works on our bodies and minds because it is a metaphoric—and therefore relatively safe—environment. Researchers know that playing games with respondents can yield insight. What this story suggests is that simply asking a respondent about her gameplay can offer productive ways to navigate challenging personal terrain.

The best designs meet people where they are. For kids like Orleans, that place is sometimes a small room where they are tethered to an IV as they play with the locks on the door.

Applying design thinking and practices to a medical study requires careful data collection and synthesis. Accordingly, our work was overseen by an institutional research board. I describe our research and synthesis methods in greater detail in this article, published in a leading peer-reviewed medical journal. My wise and patient co-authors were Peter Margolis, Laura Peterson, and Michael Seid.


David Fore

David Fore cut his teeth at Cooper, where he led the interaction design practice for many years. Then he went on to run Lybba, a healthcare nonprofit. Now he leads Catabolic, a product strategy and design consultancy. His aim with this blog is to share tools and ideas designers can use to make a difference in the world.

A design by any other name would be so delightful

Authors: Monica Granfield
Posted: Thu, November 10, 2016 - 12:46:08

Three or four years ago I attended the UXPA conference. The theme was "We’re not there yet." After 26 years as an Interaction, UI, and UX designer, my first reaction was joy, at the validation that I wasn't alone in my perception of my role on a product. My next reaction was despair: I am not alone in my perception of UX and my role, and this issue is much larger than just myself. The message was "We have a way to go." I began thinking about the perception of UX and design and how it could be better promoted to move UX along, to get us "there," starting within my own organization and considering my past experiences.

There is still plenty of confusion over exactly what UX is and a great misperception that it is visual design. Although the word "design" means "to plan and make decisions about (something that is being built or created)," somehow it is still very much perceived as creating only visual solutions. I still hear things like "Well, I am the PM" or "Oh, I thought you were the visual designer" or "When will we see pictures?" and "This LOOKS great."

Viewing the final product simply as images of what you see on the screen, in a visually pleasing way, is a misperception of what UX design is and how it contributes to the product. The fact is that these "screens" are actually sets of complex workflows, technology, and interactions that are the result of months of research, including customer visits, interviews, cross-group work with services or finance teams, fleshing out complex workflows, and resolving implementation issues with development, all in order to negotiate the way to the best end-user experience possible—aka, what you are seeing on these screens.

I took a step back and started observing what transpires in the typical product-creation process. I observed something like the following: Marketing and product have an idea or have received a customer request, or customer support or design has unearthed some pain points in the product. A developer, a designer, maybe a product manager, get together to whiteboard and exchange ideas. Once this small dynamic group feels solid on a concept, the designer goes off and plans out the layout and interaction for each of the screens and the workflows that build the relationships between them, to assure that a user is able to accomplish their tasks. Some customer interviews or visits may take place, and some usability testing, to gain feedback on the ideas, may occur. Therein lies the problem. The concept that this entire process IS the design is what is being missed. The final step of engaging a visual designer is where the focus sits.

The premise of design thinking is to address exactly this: All team members, regardless of their role, are responsible for the design because the process is the design; the screens and the experience are the end result. All of those involved in creating the product seem to understand that they are designing the product, but still consider the designer mainly responsible for making the pictures of the decisions that have been made along the way.

As I thought about this, I wondered how we can change this perception of design and move design to be seen as the process, not just the end result. I began by considering the semantics of how I refer to and use the word "design" in my everyday work.

Semantics count and influence the way in which design is perceived. When the research phase for a feature or an experience is happening, I don't refer to it as "the research phase of the design," I refer to it as "conducting design research." Why? If I say, "I am working on the research phase of the design," what is heard and thought of as the design, as an end result, is a picture. The emphasis is on "design" and the idea that a picture is being tested to gain insight and see if it works or how to make it better. And what is communicated back might sound something like, "OK, that's great but when will you have the pictures ready?" because the focus is still on the end result, which is what people are going to see, and not the research and process that helps drive the end result. Getting the team to focus on the journey of the process, and to value this, is the goal. To shift the emphasis to the research portion of the process, saying, "I am in the design research phase," may be subtle, but is now a reference to design with the focus on a portion of the process, the research.

I also looked at how I've worked with teams to create product designs. One thing I noticed was that we tend to plan around the stages of design, but talk about the final artifacts—the pictures—as the design goal, not the design as the process and the experience it creates as the goal. For example, when I begin generating wireframes, I no longer refer to the wireframes as "early designs" or "wireframes of the designs," as again what we say matters. Instead, I refer to these as "early product/experience concepts" or simply just "concept wireframes," excluding the word "design." Why? Because the "wireframes" are often viewed as final and then a push to have them LOOK better occurs. The words and the semantics of the words count, in order to not make design inconsequential and to educate teams that design is the process, the journey to the solution.

I have begun trying this approach by making a conscious effort to talk about the various stages of product creation, in terms of how the process contributes to the final outcome of the product and the experience users have with the product, rather than referring to any part of the process as "the design," as if the design is this ultimate end product that magically appears. An example: When referring to the design research, I explain how research is a way to understand what the users are expecting out of the product or how workflows need to be created. Because these are the backbone of identifying how the users move through the product to accomplish tasks, or how the technology and data architecture contribute to when and how users see information and interact with it. 

As creators, collectors of ideas + possibilities, the words we choose to use, and how we use them, can elevate our conversations away from the end results and toward the process and the journey. That is the heart of the design.

It is perhaps a statement of the obvious, but worth emphasizing, that the forms or structures of the immediate world we inhabit are overwhelmingly the outcome of human design. They are not inevitable or immutable and are open to examination and discussion. Whether executed well or badly (on whatever basis this is judged), designs are not determined by technological processes, social structures, or economic systems, or any other objective source. They result from the decisions and choices of human beings. While the influence of context and circumstance may be considerable, the human factor is present in decisions taken at all levels in design practice.—John Heskett


Monica Granfield

Monica Granfield is a user experience strategist at Go Design LLC.

Same as it ever was: Constitutional design and the Orange One

Authors: David Fore
Posted: Tue, October 11, 2016 - 4:07:05

In politics, as in music, one person’s stairway to heaven is another’s highway to hell. Proof is in the polls: Americans across the political spectrum believe the country is headed in the wrong direction. And here’s the thing: Rather than dispute one another over the facts, we acknowledge only the facts that suit our present viewpoint and values.

Then there’s the fact that none of us believe in facts all the time.

As the unwritten rules of the 140-character news cycle render our political aspirations into a lurid muddle, our national conversation circles the drain into threats of bloodshed in the streets. Many are asking how we got here. The answer is this: our constitution, which was designed by—and for—people who hold opposing viewpoints about a common vision. The current election cycle demonstrates how this document still generates a governmental system that works pretty much as planned. It does so by satisfying most of Dieter Rams’s famous principles for timeless design most of the time. What’s more, I believe, this bodes well for the outcome of the current election. Read on...

So you want to frame a constitution...

The public has seized upon the vision of the American dream crashing into the bottom of a ravine, wheels-up, spinning without purpose. Rather than giving in to panic, however, I recommend taking a deep breath and listening closely. If you do, you might soon hear the dulcet chords and hip-hop beats of the century’s most successful Broadway musical, Hamilton, rising above the hubbub.

Design scrum, circa 1787.

Here’s the scene: The Americans just vanquished the greatest military on the planet. The fate of the world’s newest nation now dances upon the knife-edge of history. The new leaders wear funky wigs while declaiming, cutting deals, and making eyes at one another’s wives. They are predicting the future by designing their best possible version of it.

Alexander Hamilton stars as the libertine Federalist who believes the Constitution should gather under its wing most of the bureaucratic and legal functions of a central government, complete with a standing army and a central bank. James Madison is the pedantic anti-Federalist who argues that rights and privileges must be reserved to individuals and states to guard against tyranny.

These two partisans hold opposing viewpoints, vendettas, and virtues. Their mutual disdain shines through every encounter and missive as they get busy framing a new form of government that balances each side against the other.

Back in the day, they called it framing. Today, we call it design thinking. Same difference.

In "Wicked Problems in Design Thinking," Richard Buchanan posits that intractable human problems can be addressed with the mindset, methods, and tools associated with the design profession. This means sustaining, developing, and integrating people “into broader ecological and cultural environments” by means of “shaping these environments when desirable and possible, or adapting to them when necessary.” Sounds a lot like the task of forming a more perfect union that establishes justice, ensures domestic tranquility, and all the rest.

Design is constructive idealism. It happens when designers set out to create coherence out of chaos by resolving tensions into a pleasing and functional whole that realizes a vision of the future.

Designers might balance content against whitespace, for instance, in order to resolve tensions in a layout that would otherwise interfere with comprehension. Designers are good at tricking-out flat displays of pixelated lights to deliver deeply immersive experiences. We craft compelling brands by employing the thinky as well as the heartfelt. And because the most resilient designs are systems­-aware, we craft workflows and pattern libraries that institutionalize creativity while generating new efficiencies.

The Framers, for their part, were designing a system of governance that would have to balance different kinds of forces. The allure of power against the value of the status quo. The inherent sway of the elite versus the voice of the citizen. The efficiencies of a central government and the wisdom that can come with local knowledge. They resolved these and similar tensions with a high input/low output system designed for governance, and delineated by a written Constitution.

From concept to prototype to Version 1

The Framers knew their Constitution would be a perpetual work in progress. It would possess mechanisms for ensuring differences could always be sorted out, that no single political faction or individual could rule the day, and that changes to the structure of government—while inevitable with time—would have to survive the gauntlet before being realized.

The Framers arrived at this vision while struggling against the British, first as their colonial rulers then as military adversaries. They also had the benefit of a prototype constitution: the Articles of Confederation. Nobody much liked it when they drafted it and even fewer liked governing the country under its authority. It was a hot mess that nevertheless gave the Framers a felt sense of what would work and what would not. This prototype lent focus and urgency to the task of creating a successor design that would be more resilient and useful.

Hamilton and Madison represented just two ends of an exceedingly unruly spectrum of ageless ideals, momentary grievances, political calculations, and professional ambitions. Still, they were the ones who did the heavy lifting during the drafting process, while boldface names such as Jefferson, Franklin, Adams, and Washington kibitzed from the wings before rushing forward to take credit for the outcome. Sound familiar?

Also familiar might be the fact that the process took far longer and created much more strife than anybody had anticipated. Still, against these headwinds they shaped a Constitution (call it Version 1.0) that delineated the powers of three branches of government without including the explicit civil protections sought by Madison and the anti-Federalists.

Not wanting the perfect to be the enemy of the good, the Framers decided to push the personal-liberty features to the next release... assuming there would be a next release.

And there was. After the Framers shipped the half-baked Constitution to the public, it was ratified.

Now Madison was free to dust off his list of thirty-nine amendments that would limit power through checks and balances of the government just constituted. This wish list was whittled down to the ten that comprise the Bill of Rights, ratified in 1791.

The resulting full-featured Constitution boasts plentiful examples of design thinking that address current and future tensions. It also contains florid prejudices, flaws, and quirks that hog-tie us to this day.

But still, it breathes.

First, the bug report

Let’s look at the current slow-motion spectacle over the refusal of Senate leaders to hold hearings on the President’s choice for filling a vacancy on the Supreme Court. It’s not that they refused to support the choice… they refused to consider his candidacy.

How could this be? It is owing to a failure of imagination on the part of the Framers, who did not anticipate that leaders would simply decide not to do their jobs. And so the Constitution is silent on how swiftly the Senate must confirm a presidential appointee, making it conceivable that this bug could lead to a court that withers away with each new vacancy.

And while this is unlikely—at some point even losers concede defeat if only to play another day—this is something that needs fixing if we want our third branch of government to function as constituted.

Other breakdowns are the result of sloppy code. Parts of the Constitution are so poorly written that the fundamental intent is obscured. The hazy, lazy language of the Second Amendment serves as Exhibit A:

A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.

Umm, come again? Somebody must have been really rather drunk when that tongue twister was composed; it has since opened up a new divide between the two ends of the American spectrum.

Strategy is execution.

When in doubt, blame politics

More troubling to me are the results of two compromises that mar the original design. High on that list is the 3/5 compromise, which racialized citizenship and ensured that slaveholders would hold political sway for another century. This compromise ultimately sprang from a political consideration, at variance with the design, meant to ensure buy-in by Southern states. The prevailing view was that in the long run this kind of compromise was necessary if there was to be a long run for the country. The losing side held that a country built on subjugation was not worth constituting in the first place. Our bloody civil war and our current racial divides are good indicators that this was a near-fatal flaw in the Constitutional program, now fixed. Mostly.

Another compromise at variance with the original design creates an ongoing disenfranchisement on a massive scale. Article 1 says that each state shall be represented by two senators, regardless of that state’s population. The result? Today’s California senators represent 19 million citizens each, while Connecticut’s senators each represent fewer than 2 million people.

It’s important to note that this was a feature, not a bug. It was consciously introduced into the program in order to ensure that those representing smaller states would vote to support ratification.

And it works! We know that because every organization is perfectly designed to get the results it gets. This near-tautological axiom is attributed to followers of W. Edwards Deming. In some ways the father of lean manufacturing, Deming showed how attention to outcomes sheds light on where design choices are made. In this case, small states have outsized influence, which keeps them in the game.

The Senate: undemocratic by design. See: Washington Post.

Still, not everything intended is desirable. Even Hamilton—notwithstanding his own skepticism about unbridled democracy—opposed this Senatorial scheme. Rather than dampening “the excesses of democracy,” which he saw as the greatest threat to our fledgling nation, this clause resolves nothing save for a short-term political problem that could have been addressed through means other than a fundamentally arbitrary scheme that would perpetually deform the will of the people.

Easy to use… but not too easy

Still, Hamilton got much of his design vision into the final product. The document boasts a wide range of process-related choke points, for instance, that permit plentiful consideration of ideas while ensuring few ever see the light of day.

Consider how a bill becomes a law. It must (almost always) pass through both the House and the Senate, then survive the possibility of a presidential veto, then avoid being struck down by the courts. By making it easy to strangle both bad bills and good ones in their cribs, this janky workflow is a hedge against Hamilton’s concern about an “excess of lawmaking.”

Does it succeed? Yes, if we judge it by the measure of fidelity between intent and outcome. Partisans shake their fists when their own legislation fails but they appreciate the cold comfort that comes from the knowledge that their friends across the aisle wind up getting stuck in the same mud as they do. (See Kahneman et al. on the subject of loss aversion.)

The killer app

If the Constitution has an indispensable feature, I cast my vote for the First Amendment. The Framers felt the same, so they gave it pride of place, designating it as the driving mechanism for all that follows. In so doing they confirmed what we all know: that speech springs from core beliefs and passions whose untold varieties are impervious to governance anyway… so why even try?

Equally important, by making it safe to air grievances about the government, this amendment guarantees that subsequent generations will enjoy the freedom to identify and resolve the tensions that arise in their lives, just as the Framers had in theirs. They also well understood the temptation of abandoning speech in favor of violence, a specter that always hovers above political conflict. Better to allow folks to vent so they don’t feel they need to act.

A success? Yes, if we evaluate the First Amendment by the plain fact that, centuries later, Americans do enjoy popping off quite a lot. Even if what we say is divisive and noxious. With a press [1] free to call B.S. on candidates, though, we have a good shot at exposing the charlatans, racists, and con men among us. This amendment ensures that voters have the information they need to see that the keys to 1600 Pennsylvania Avenue fall into the right hands. I, for one, am confident that we will.

What would Dieter do?

The Constitution is an imperfect blueprint created by manifestly imperfect men. In lasting 230 years, though, it has become history’s longest-surviving written national charter, doing so according to Rams’s principles of timeless design:

  • It is durable, which also indicates a certain thoroughness.
  • It is innovative, in that no country had established a democracy so constituted.
  • It is useful, in that we depend upon it to this day.
  • While not particularly aesthetic—and also often obtuse—the document manages to be unobtrusive in the way Rams means: it ensures our ability to express ourselves.
  • It is for the most part honest in that it does what it sets out to do. Still, like other governments of its time, ours devalued and/or demeaned most inhabitants, including blacks, native Americans, women, and those without property. Still does, in many ways.
  • By leaving many decisions to state governments and individual citizens, it has as little design as possible.
  • Where the Constitution falls short, utterly, is Rams’s principle of environmental sustainability. And while this is forgivable given the historical context, we are now compelled to consider how to bend this instrument to the exigencies of our chance behind the wheel.

I admire Rams’s canonical principles, but I’ve always felt there is something missing. That’s magic.

Great design thrives in our hearts, and balances in our hands, just so. Great design is transcendent, and I have come to believe this is so owing to its ensoulment.

You sense the ensouled design because somehow it reflects your essence and furthers your goals, regardless of whether you understand how it does so.

In the case of our Constitution it is we, the people, who supply the magic. This is what the Framers intended. And so it is up to all of us to redeem the oft­-broken promises of the past, and to realize a more inclusive, fair, and resilient vision of America, generation after generation.

Same as it ever was?


1. Here, “the press” includes the Internet and its cousins. I’ve always wondered whether the First Amendment is chief among the reasons the United States has been a wellspring for the emergence of these sociotechnical mechanisms and their inherent potential for liberating speech for everyone everywhere. Further research, anyone?


David Fore

David Fore cut his teeth at Cooper, where he led the interaction design practice for many years. Then he went on to run Lybba, a healthcare nonprofit. Now he leads Catabolic, a product strategy and design consultancy. His aim with this blog is to share tools and ideas designers can use to make a difference in the world.


Thailand: Augmented immersion

Authors: Jonathan Grudin
Posted: Thu, September 08, 2016 - 5:29:47

Our first day in Thailand, we visited the Museum of Regalia, Royal Decorations and Coins. Case after case of exquisite sets of finely crafted gold and silver objects from successive reigns: How had Thailand managed to hold on to these priceless objects for centuries?

That night, Wikipedia provided an answer: Thailand is the only Southeast Asian country that avoided colonization and the inevitable siphoning of treasures to European museums and country houses. Thailand’s kings strategically yielded territory and served as a buffer between British and French colonies. Thailand has cultural resemblances to Japan, which was also self-governing during the colonial era. In each, uninterrupted royal succession progressed from absolute monarchs to ceremonial rulers who remain integral to the national identity. Both countries exhibit a deep animism with a Buddhist overlay. In neither country were ambitious citizens forced to learn a Western language and culture.

In the past, such insights were found in guide books. Today, overall, digital technology is an impressive boon for a curious tourist. It shaped our two-week visit just as, for better and worse, it is shaping Thai society. Consider making use of this golden era for cultural exploration in a compressed timeframe before the window of opportunity closes, before technologies blend us all into a global village.

GoEco and The Green Lion

For over ten years, my wife has planned our travel holidays on the Web. Online tools and resources steadily improve. The planner’s dilemma is the sacrifice of blind adventure for a vacation experienced in advance online, seeing the marvelous views and reading accounts of others’ experiences, and learning and later worrying about possible pitfalls—where nut allergies were triggered, smokers ruined the ambience, and so forth.

This year, TripAdvisor and GoEco were instrumental. GoEco aggregates programs that are designed to be humanitarian and environmentally responsible, creating some and also booking programs created by other organizations. We spent a week in one managed by The Green Lion, which began (as Greenway) in Thailand in 1998 and is now in 15 countries in Asia, Africa, Latin America, and Fiji. Travelers do not book directly with The Green Lion—it works with 75 agencies, one of which is GoEco.

The Green Lion “experience culture through immersion” programs, many in relatively rural Thailand, include voluntary English teaching in schools, construction work in an orphanage, Buddhism instruction in a temple, an intense survey of Thai culture, elephant rehabilitation, and Thai boxing. A few dozen participants in different programs stayed in the same lodging compound in Sing Buri Province north of Bangkok. Most volunteers were European or Chinese students on gap years or summer breaks, about two-thirds women slightly older than my teenage daughters; my wife and I may have been the oldest. We shared experiences over communal dinners and breakfasts.

The compound was a miracle of lean organization. The manager appeared on Sunday to convey the basics to new arrivals—distribute room keys, review the schedule on a whiteboard, tell us to wash our dishes and not to bring alcohol back across the rural road from shops established by enterprising Thais to serve the year-round flow of volunteers and explorers, and so on. Every day, a cheerful Thai woman cooked and placed wonderful vegetarian food on a central table. (Thai herbs and spices have reached markets near us at home, but some great Thai vegetables, not yet.)  Our conversations were self-organizing. While we were out for the day, there was light housekeeping, gardening, and food delivery.

A volunteer week

We volunteered to help in a school. At some schools, volunteers were handed a class to instruct in English or entertain with no assistance, but our government-funded rural K-12 school had a smart English teacher who had worked her way through a Thai university. Virtually all students managed to acquire a phone, although many came from poor families. The classroom had one good PC and a printer. The teacher maintains a Facebook page, and would like some day to get formal instruction in English from a native speaker.

Our week was unusual, with all-school presentations and displays on Tuesday for a government ministerial inspection, and again on Friday to commemorate the anniversary of the ten-nation Association of South East Asian Nations (ASEAN). On Monday and Thursday, we helped our class prepare booths and presentations. Our daughters were given roles. The events were held in a large, covered open-air space ideal for a tropical country (daytime temperatures were around 90 F). On display was the full range of school activities. Many had a vocational focus: preparing elegant foods and traditional medicines, and making decorations and other objects valued in daily life. In a carefully choreographed reenactment of an historical event or legend, pairs of students representing Burmese invaders and Thai defenders engaged in stunningly fierce combat with blunt but full-length swords. On Wednesday we went to an orphanage with other volunteers and painted walls and helped dig a septic pit. One small, silent boy dropped into the pit with an unused shovel and for 20 minutes tossed earth out above his head, outlasting us volunteers in the scorching sun.

In the school, we saw a hand-made poster on global warming, contrasting the advantages (better for drying clothes and fish) with the disadvantages (dead animals and floods). Drought in Thailand has grown more severe each of the past four years. While changing planes in Taipei airport, we had noticed a large government-sponsored student-constructed public exhibit on global warming. It felt eerie to see the topic embraced in Asian schools more openly than is possible in United States schools.

A tourism week

We spent three days with a great guide and driver recommended online. The guide learned English partly in school from a non-native speaker, then while working. She was always available for discussion, but when we were swimming, kayaking, or otherwise engaged, if not taking a photo or video of us on one of our devices, she was texting on her phone. Mobile access was available except for stretches of the drive to Kanchanaburi Province, west of Bangkok. We hiked and climbed to see waterfalls in a national park and kayaked for hours down a river, spending a night in a cabin on the water. We helped bathe some rescue elephants and hiked to see an “Underwater Temple” that is no longer under water; the drought-stricken reservoir behind Vajiralongkorn Dam, which inundated the area thirty years ago, has receded. We walked along the “Death Railway” featured in The Bridge on the River Kwai. The Hellfire Pass museum details the harrowing story of forced construction of the Thailand-Burma Railway by World War II prisoners of war. The museum, an Australian-Thai effort, gathered recollections from Australian, British, and American survivors, as well as contemporaneous written accounts, drawings, and images (POWs sneaked in a few cameras and some local Thais helped them at great risk).

Three Pagodas Pass on the border with Myanmar (often still called Burma), once a quiet outpost, has become one of the better tourist markets we saw—our guide bought goods to take back to colleagues. The immigration and customs building at the crossing proved to be security theatre—from a shop in the market we stepped out a back door and found ourselves on a shop street in Myanmar. As co-members of ASEAN, the neighbors’ relations have relaxed; Burmese raiders are history, although workers slip across from Myanmar in search of higher-paying employment.


Our third focus was Bangkok, which defies easy description. High-rises of often original design and décor are proliferating. On traffic-congested city streets one is often in range of huge LCD displays that flash images or run video ads, reminiscent of Tokyo. We took a public ferry down and back up the wide Chao Phraya River alongside the city, getting off to spend an afternoon exploring the Grand Palace complex, two dozen ornate buildings constructed over a century and a half and spread over 200,000 square meters. “Beware of wily thieves” warned a sign, but instead of larceny in Bangkok, apart from occasional minor errors in making change or setting a taxi or tuk-tuk fare, we encountered dramatically honest and helpful locals, often delighting in selflessly advising an obvious tourist, in a city with many tourists. We researched online and visited the house-museum of Jim Thompson, a benevolent 20th-century American with a fascinating biography. Twice we traversed Khao San Road, once a backpackers’ convergence point and now a tourist stop. We found a bit more of the old ambience one street over.

Phone maps were useful in the countryside and wonderful in Bangkok—and not only for navigating. A map report of an accident ahead exonerated our taxi driver of blame for being stuck in gridlocked traffic. We could relax, observe the ubiquitous street life, and know that wherever we were headed, we would be there when we got there.

Politics and tragedy

On our second Sunday, Thailand voted on a constitutional revision proposed by the military junta that deposed a democratically elected government two years earlier. The change strengthened the junta’s grip; campaigning against it was not allowed. After its passage was announced and a few hours after our plane took off, bombs began exploding. One of our daughters had friended several of our Green Lion cohort; through Facebook we learned that eight who had progressed south were injured, seven temporarily hospitalized. A shocking contrast to the remarkably peaceful, friendly, and safe Thailand we experienced—another mystery that Wikipedia helped resolve. A map on the “Demographics of Thailand” page shows that 5% of the population near the southern border with Malaysia, a Sunni Islamic state, are ethnic Malays; they had voted overwhelmingly against the constitutional change. Some are separatists battling a Buddhist government that is capable of harsh repression.

The future of a unique culture

Apart from the 5%, Thailand appears to have a homogeneous, friendly culture, replete with animism, Buddhism, low crime, and sincere respect for the royal family. The 88-year-old King is the world’s longest-serving head of state. He is the only living monarch who was born in the United States; his father, a prince, studied public health and earned an MD at Harvard. His program for a “sufficiency economy” of moderation, responsibility, and resilience recognizes environmental concerns and is generally respected, although it condones a class system. Like other ASEAN countries, Thailand has professionals who lead upper middle-class lives alongside a subsistence-level less-educated population, plus a few super-wealthy families. Thailand has a relatively high standard of living in Southeast Asia; the poor appear to get food and basic medical attention.

Thailand is not without problems. The highly erratic dictatorship is worrisome. Even in better times, a sense of entitlement at higher rungs of the hierarchy leads to corruption. Some Thai women expressed the view that “Thai men are useless, they don’t work, they drink and expect women to do the work.” We did indeed see exceptionally industrious women, a phenomenon Jim Thompson channeled in a non-exploitative way to create the modern Thai silk industry. Basic education extends to boys and girls, but higher education favors males, for whom it is free if they ordain temporarily as monks, a common practice among Thai men. Omnipresent male monks may contribute more philosophically than productively, but we saw men everywhere driving taxis, tuk-tuks, and working in construction. We saw industrious male students—and the orphanage shoveler.

The King and Queen are called Father and Mother. This familial aura may contribute to the country’s smooth functioning. Artisans produce goods of the kinds that we saw students making, and people appreciate them. They could no doubt be mass-produced at low cost. Without discounting the efficacy of traditional medicines used by both classes, additional modern medicine could bring benefits. And Thais could acquire more technology than they do.

Is that the path they should take? Will they be happier shifting from animism and Buddhism to consumerism? Do they have a choice? They have phones. They can see alternative ways of living. Bangkok is a stunning city of glamor, where Ronald McDonald stands expressionlessly with fingers tented in a Thai greeting of respect.

Thanks to Gayna Williams for planning the trip and suggesting material to include in this post. Isobel advocated that we help children on our vacation. Eleanor brought her high engagement to interactions with volunteers and students. Phil McGovern provided background on The Green Lion and Fred Callaway shared observations on the flight out of Bangkok.


Jonathan Grudin

Jonathan Grudin works on support for education at Microsoft. Access these and related papers at under Prototype Systems.


The map is the territory: A review of The Stack

Authors: David Fore
Posted: Wed, August 03, 2016 - 4:18:39

From that phantom vibration to that reflex to grab your own rear, you are responding to the call of The Stack…

From the virtual caliphate of ISIS to the first Sino-Google War of 2009 to the perpetually pending Marketplace Fairness Act, The Stack gives birth to new sovereignties even as it strangles others in their sleep.

From YouTube’s content guidelines to Facebook’s news algorithm to Amazon’s invisible hand, The Stack promulgates de facto cultural, legal, and economic norms, transforming conventional borders into well-worn cheesecloth.

From @RealDonaldTrump’s hostile takeover of the GOP to all those hand-wavy pretexts for Brexit, The Stack scrambles party politics by offering peer-to-peer loyalties, fungible citizenship, royalty-free political franchises, and 24/7 global platforms… all for the price of a moment of your time.

From mining coal to rare earth to bitcoins, The Stack wrangles the resources to fuel itself—and hence ourselves—even if at the price of planetary autophagia.

From the Internet that got us [1] this far to the neo-internets coming online to splinter and consolidate perceptions, fiefdoms, and freedoms, The Stack is reaching a new inflection point.

From earth to Cloud to City to Address to Interface to User, Your Permanent Record is enacted, assembled, stored, distributed, accessed, and made into meaning by means of The Stack.

The Stack had no name, not until Benjamin Bratton borrowed the term from computational architecture, capitalized it, then applied it to this techno-geo-social architecture we call home. It’s also the title of his new book published by MIT Press: The Stack: On Software and Sovereignty.

The book delineates the hockey-stick ascent of digital networking from its early identity as a narrowly defined instrument made and used by government and military types to its central role in creating a planetary organ of cognition, composition, dissolution, and transformation. In a startlingly short time we see digital systems of computation, communication, feedback, and control emerging as our species’ system of systems. 

Fleshy and rocky, pixelated and political, The Stack is every bit as consequential as climate, geology, society, and other sister systems of planetary activity. Promiscuous by nature, The Stack invites most anything and excludes virtually nothing. It connects us and completes us, rendering the world visible even as it pins us to our place like butterflies to a board. Acknowledging absence as much as presence, The Stack serves as an automagically accommodating host that anticipates, invents, and activates urges, ideas, and outcomes. We leverage The Stack not just because we want to, but because by signing the Terms of Use we are feeding The Stack’s own capacious memory and so informing its actions. 

Convenience and benefit is where the line blurs between user and used

Not simply a host, The Stack is also a parasite that renders us into its host. The physics of exchange are simple; the terms of human exchange less so. But a deal is a deal.

So immense as to hide out in plain sight, The Stack was designed and built by nobody—not knowingly, at any rate, and certainly not all of it. Still, nobody is in charge of its uses, its reach, or its fate; except maybe each of us as we use The Stack to extend our reach to realize our fates. The Stack will oblige by responding in kind.

But really, what is The Stack? To start with, this is not your father’s Internet. Instead, The Stack is a consolidated six-layer meta-platform we can use as an “engine for thinking and building.” It is also a “conceptual model” with which we might apprehend the “coherent totality” of this “technical arrangement of planetary computation.” Bratton’s book lays out the former in service of the latter.

For some readers, The Stack will stand shoulder-to-shoulder with Christopher Alexander’s pattern language tomes, particularly A Timeless Way of Building. Like Alexander, Bratton is deeply committed to identifying patterns in human-made structures and how they support, subsume, and define our humanity and the viability of the world itself. Both writers view citizens as potential architect/builders with an inherent right to program the spaces they inhabit. They both take a fundamentally ecosystemic view of the project of design, construction, and use.

Both writers are stylists, though of very distinct stripes. Alexander can wax poetic, composing his prose to a faintly mystical beat; he rarely cites sources, and there is a messianic slant to his historiography. For all of that, perhaps, Alexander is much loved by readers, whether laypeople or professionals. Bratton, by contrast, crafts intricate prose, employs diction as concrete as his subject is abstract, and deeply sources his inspirations. His worldview, meanwhile, is assiduously non-deterministic. For all that, we shall see how broadly Bratton’s work will be embraced by intellectual, practitioner, and civilian readerships.

Each sentence of The Stack is packed as tight as a Tokyo subway train, forcing the willing reader to double back to pick up what Bratton lays down. The payoff can be thrilling, much like watching the brain-bending eloquence of Cirque du Soleil. Bratton’s wit shines through most every page, in forms both dry and waggish, offering the dyspeptic reader means to metabolize the book’s quarter-million words [2].

But why a “stack”? 

Traditionally, a technology stack comprises hardware and software that runs stuff, with storage servers situated toward the bottom and end-user applications toward the top. Companies, however, tend to overvalue the layer of the stack in which they operate, and they undervalue those above. Bratton’s schema takes air out of the tires of blinkered business models, blanched political projections, and pell-mell individual attempts to make change. The Stack he sees in his mind’s eye runs this gamut and much more. He abstracts application and data elements, replacing them with Cloud, City, and Address, thereby emphasizing the technical, jurisdictional, and identity functions at the heart of The Stack’s raison d’être. Bratton goes on to cap The Stack top and bottom with a pair of layers — User and Earth — typically ignored by technical stacks. The Interface, meanwhile, abides… even if Bratton’s idea of interface is less digital and more architectural than most writers attach to that word [3].

So I’m, like, a designer, right? I’d expect that such a designery book would employ scads of visualized designery designs to sell its suds. Well, I have a warning for readers panting for designporn: The Stack will leave you high and dry. Bratton is crafting an argument here, something never won without words. Indeed, there’s just a single image, found on p. 66.

In eschewing complex graphical explications, Bratton’s overt bookmanship serves his project. The Stack hexagram lends structural clarity, metaphoric utility, and conceptual coherence that might fertilize bountiful discussions about this most systematic of unsystems whose workings we sense but whose totality we never quite glimpse. 

To name The Stack is to make it visible and therefore subject to interpretation, critique, and use [4]. In any event, Bratton’s six-layer hexagram [5] is not simply a simplification, some kind of user-friendly rhetorical strategy for making his ideas more digestible. Rather (or in addition to) digestibility, it is a concrete logos that we — makers, theorists, political disrupters, web monkeys, app-slingers, and armchair academics alike — can use to better focus our meditations and actions. There is a portability to it that I quite like. I can easily hoist these ideas up on a whiteboard for my comrades and me to grok its gestalt and locate opportunities to investigate our own particular professional, personal, aesthetic, ethical, and political interests.

In delaminating and delineating each layer’s position, constituents, relationships, and purpose, Bratton’s hexagram helps us see beyond the shibboleths of the-Internet-as-cash-machine, Internet-as-thought-control-machine, Internet-as-Leviathan, and Internet-as-Heaven’s Gate. And that’s a good thing. After all, while the Internet (or now, according to a recent update to The New York Times’ style guide, “the internet”) has provided many of the platforms on which The Stack depends, this accidental megastructure is something far more complex, powerful, and new than all that. Between its great potential and uncertain future, The Stack is still a bit of an ingénue. Put in motion, The Stack is manifesting emergent orders that amount to an asymptotic identity: always approaching but never arriving at a perceptible resolution. We can help it come clean and grow up. 

By considering The Stack as a whole, this book better equips us to contend with this “accidental geodesign that demands from us further, better deliberative geodesign.” When the world itself is seen as information, in other words, the task of organizing all the information is the same as organizing all the world. The map becomes the territory, which renders the converse true, too.

While many aspects of The Stack were the result of deliberate planning, The Stack as a whole was neither conceived nor constructed to envelop and subsume the planet. The notion that the Stack is a planetary sensing/cognition/connection/control system that we unintentionally midwifed is repeatedly emphasized by Bratton. But does that imply that something like mindful action might make a difference here? Indeed, The Stack can seem as resistant to comprehension and design as a hurricane is resistant to science and prayer. A palpable urgency infuses Bratton’s breakneck prose, perhaps reflecting his view that The Stack is already crafting politics, geography, and territory in its own image, with modes of governance that enforce themselves. Indeed, as the press is atwitter with visions of robot armies cresting the hill, we would do well to ensure that the substantive contributions we make to The Stack now will constitute kernels of its ultimate operations. Falling short of that goal will leave us watching our ideas and ideals slip away into quaint and ultimately orphan code, commented out by other better, synthetic minds than our own. 

Should we then just buy sandbags, hunker down, and sweep up after? The prospect of us being stalked by Skynet strongmen seems no more likely to Bratton than the arrival of machines of loving grace foretold by the poet [6]. The flipping toast might just as well land butter-side up as down; we really won’t know until the final instant. In the meantime, we might seize the opportunity and means to do what we can to flip it our way. Will we measure up to the challenge? By positing a coherent totality for The Stack, Bratton offers a perch from which we might exercise influence over the profound and protean implications of our reliance upon, and responsibilities for, The Stack.

Finally, what comes to mind is the musical question [7] posed by Jacob Bronowski, in his book Science and Human Values, as he arrived at Nagasaki in November 1945:

Is you is or is you ain’t my baby
Maybe baby’s found somebody new
Or is my baby still my baby true.


1. I use “us” in that writerly way (you and me) as well as in reference to our fellow Stackmates, including but not limited to users animal, vegetable, and mineral; connections and desertions tacit and explicit; recherchant industrial processes and ethically confused robots; spam sent by the supply chains in hopes of inspiring our refrigerators to place an order in time for dinner.

2. Thumbnail heuristic evaluation: Even with tiny type and slim margins, the hardbound version of The Stack weighs in at a nose-breaking 2.5 pounds. And so while this reader still consumes books composed of sheets of reconstituted trees bound together by glue and pigmented with ink, I urge safety-minded readers to download the electronic version of this weighty tome. The User does so by activating an Interface at a convenient Address in the City to stream Bratton’s colorful disquisitions from the Cloud, which is on, sometimes below, but always of the Earth.

3. While the conception and significance of interfaces will remain non-negligible in my practice, I see the interplay of voice and algorithm gradually rendering interfaces immaterial.

4. The wordfulness of this book also reminds us that the machines comprising The Stack are history’s most inveterate readers; IBM Watson, just to name one, can read 800 million pages per second.

5. For readers following along in their I Ching: we see here hexagram #1: Ch’ien/The Creative.

6. I like to think
(it has to be!)

of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

—Richard Brautigan

7. Quoting the much-loved Renee Olstead lyric.

Posted in: on Wed, August 03, 2016 - 4:18:39

David Fore

David Fore cut his teeth at Cooper, where he led the interaction design practice for many years. Then he went on to run Lybba, a healthcare nonprofit. Now he leads Catabolic, a product strategy and design consultancy. His aim with this blog is to share tools and ideas designers can use to make a difference in the world.

The joy of procrastination

Authors: Jonathan Grudin
Posted: Mon, July 11, 2016 - 11:37:43

I have long meant to write an essay on procrastination. Having just been sent a link to a TED talk on a virtue of procrastination, this seems a good time to move it to the front burner [1].

An alarming stream of research papers describe interventions to get chronic procrastinators like myself on the ball: wearable devices, displays mounted in kitchens, email alerts, project schedule sheets, and community discussion groups (think “Procrastinators Anonymous”). Papers on multitasking and fragmented attention suggest that procrastination contributes to problems with stress, health, career, and life in general.

Virtually everyone confesses to occasionally delaying the start or completion of a task. About a fifth of us are classified as chronic procrastinators. If you are with me in the chronic ward, cheer up: I am here to call out the virtues of procrastination.

Procrastination and creativity

The TED talk describes lab studies that support the hypothesis that people who are given a task benefit by incubation—by putting it on a back burner for a while, rather than plunging in. This is consistent with reports that after working for a time on a problem, an insight came out of the blue or in a dream. Obviously, immediately completing a task—no procrastination—leaves no time for incubation. (Waiting until the last minute to engage could also leave no chance for incubation.) The speaker concludes, “Procrastinating is a vice when it comes to productivity, but it can be a virtue for creativity.” Mmmm. Let’s consider productivity virtues that arise from procrastinating.

Procrastination and productivity

My team, once upon a time, was planning a workflow management system. Through daily on-site interviews, we studied the work practices of people with different roles in a manufacturing plant. One was a senior CAD designer. At any point in time, he had been assigned several parts. Each had a due date. When a part was finished, it was checked in. We thought that by tracking check-ins, our system could automatically assign the next part to the designer with the fewest parts not yet checked in.

We made a surprising discovery. Instead of working on a part until finishing it, the outstanding designer procrastinated on every one. He stopped when the remaining work could be completed in about half a day and waited until he was asked for it. This was often a little after the original due date—why did he choose to be late? He did so because parts had dependencies on other parts assigned to other designers; changes in theirs could force changes in his. It was more efficient to accumulate forced changes, then at the end reopen his task, ramp up, and resolve all remaining work. Finishing early would mean that subsequent change requests would force him to reopen the task and possibly undo work. Because final requests for delivery came with a day to spare, having a manageable effort left was fine. Procrastination enhanced productivity. (Unfortunately, our automatic workflow-assignment tool was a non-starter: with every task left open, the system could not detect whether a part was 5% or 95% done.)

On any group project, you must decide when it is most effective to jump in and when it is better to delay until others have had a turn. Sometimes a request for input is withdrawn before the work is due, benefitting a procrastinator. I know people who ignore email requests, saying “If it is important, they will ask again.” I don’t do that, but if I sense that a request is not firm, I may wait and confirm that it is needed when the requested response date is close but comfortable. If the work remains green-flagged, I may get a benefit, such as incubation or evolving requirements. For example, this essay profited from being put off until the TED talk appeared: I could include the creativity section. 

Another productivity benefit will be understood by anyone with perfectionist tendencies: As long as you know how much time it will take to complete a job, by waiting until just enough time remains, you can eliminate the temptation to waste time tinkering with inconsequential details. The scare literature often links the source of procrastination to a feared or stressful task, which grows more feared and stressful when left to the last minute. In the meantime, it hovers overhead as a source of dread. Yes, it happens. I spent some childhood Sunday afternoons, when homework loomed, watching televised football that didn’t really mean anything to me. I also remember aversive procrastination in college, and 30 years later still have dreams in which I realize that I’m enrolled in a class with an exam approaching that I had forgotten to study for. Perhaps this symbolizes procrastination-induced anxieties? I detect less avoidance-based procrastination now, but tasks do slip that I wish would get done—cleaning out the garage, reading a book, writing a blog post. 

What is easy to overlook, though, is that at least for me, procrastination can be positively wonderful.

The joy of procrastination

When my primary task is a deliverable due at hour H, I estimate the time T that it will take to complete it without undue stress. Then I put off working on it until H-T, leaving just enough time to do it comfortably. In that time, I clear away short tasks such as reviews and reference letters, and then some that are not necessary but are very appealing, such as writing to friends, seeing a movie, starting a book, or writing a blog post: tasks that are so much fun that accomplishing one produces an endorphin wave. The euphoria carries over to the big task, helping me breeze through it. In this procrastination period I may benefit from incubation and the avoidance of endless tinkering, but the exhilaration is the key benefit.

You may wonder, doesn’t this conflict with the advice to provide rewards after making progress on a big task? With kids we say, “Give them candy after they finish the assignment.” I get that, but sometimes the sugar rush gets you through the assignment faster, and in better spirits. 

The skill of procrastinating 

Experience is required to make the judgment calls. The key to realizing positive outcomes of delaying action is the accurate estimation of the time needed to complete a deferred task to your satisfaction without incurring unpleasant stress. Especially when young, it is easy to be overly optimistic about how quickly a task will be completed. Objective self-awareness is required to develop forecasting skill, but with experience and attention we can do it.

Especially when young, I experienced procrastination-amplified dread. I have regrets about tasks postponed until time ran out. When I misjudge, the task usually gets done, but with more stress than I would have preferred. The residue may haunt a future dream. Nevertheless, procrastination has enabled me to accomplish many of the things that I have most loved doing. Having finished this post, I have a big job to get back to.


1. After writing this I learned of this article by Adam Grant, the TED speaker. It has more examples. Like my essay, it starts with a confession of procrastination.

Thanks to Gayna Williams, John King, and Audrey Desjardins for discussions and pointers.

Posted in: on Mon, July 11, 2016 - 11:37:43

Jonathan Grudin

Jonathan Grudin works on support for education at Microsoft. Access these and related papers under Prototype Systems.

Comment (2016-08-22)

I can really relate to the CAD designer you mentioned. My work is somewhat related. As a UI designer for an evolving system to manage mortgages and property taxes, I get change requests from every stakeholder. Everyone wants it done now. I find delaying actually working on a small bit saves me time as I can finish a bigger piece with everyone’s 2 cents incorporated. Nearly 90% of such “top priority” CRs are redundant or can be done away with. Meaning the same output is available via another simpler method.

But at least in my case I have to keep everyone informed that so and so will get done. Just, not now. I send regular emails mentioning the due action items which keeps the suits happy. Most people just want an acknowledgment that their brain wave will be considered.

In the end I do whatever I feel is right. With least effort from me. Everyone is happy.

The Receptionist

Authors: Deborah Tatar
Posted: Wed, July 06, 2016 - 2:47:55

My mother is dying. She had a rare gastric cancer in her pyloric valve. The amazing doctors at UCSF took out 70% of her stomach and a bunch of lymph nodes, stitched her up, and gained her another 13 months of good life. Unhappily, she was not among the 50% who make it to five years without recurrence. A few insidious cells escaped annihilation and now she has an inoperable recurrence at the head of the pancreas. She’s done with chemo. Her body cannot tolerate more, and she doesn’t want to spend the rest of her life in pain. Atul Gawande’s “Being Mortal” has become our bible, and she is very clear about what she wants: to live as she has done—independently, swimming every morning, meditating—without pain as long as she can and then to die quickly. She’ll have radiation because it might reduce her pain and probably won’t hurt. 

Her parlous state gives the small events of her life more meaning.

Because she lives pretty far away from UCSF, another oncologist—we’ll call him Simpson—closer to home, had administered the chemotherapy before and after her big operation, under the direction of her main oncologist at UCSF. 

There are a lot of chores associated with being ill. Part of avoiding pain is that she has a port, a line to a vein in her chest cavity that can be used to administer medicine or draw blood without pain. It results in an odd protrusion under the tight skin of her breastbone, as if she had implanted a plastic bottle cap. The port has to be flushed once a month and since she actually does not want to die, especially from infection, she is assiduous about making sure that this is done. 

Last week, she thought to spare herself a lengthy trip into San Francisco so she called Dr. Simpson’s office to see if she could get the port flushed at his office. After the initial call, she called me in high dudgeon:

I called and after I waited and hit all the buttons, eventually I left a message and then when the receptionist finally called me back, I explained the situation and she said that if I waited ten days, I could have an appointment during my swimming time, which was bad enough, but I had to have an appointment with the doctor first. I asked why I had to see Dr. Simpson if all I needed was my port flushed, and she said, “It’s the rule.” I asked what kind of rule that was—the doctor knows me, he’s in contact with my oncologist (at UCSF) and he knows that I’m not having any more chemo—but the receptionist just kept saying that it was the rule. I was so mad!

It wasn’t the effort of seeing the doctor that enraged her. It was what she perceived as waste. “They’re just money-grubbers! They’re getting rich off of Medicare!” She had resolved that, instead of seeing Simpson to get the port flushed, she would haul herself two hours each way by car, BART (Bay Area Rapid Transit), and shuttle bus to San Francisco. 

After arranging the flush with UCSF (“They were so nice!”), she called Dr. Simpson’s office back to cancel. Then, she called me in a state of enhanced indignation. She reported that after the receptionist canceled the flush, she said, “But what about your appointment with the doctor?” Mom replied, “I don’t need an appointment with the doctor.” The receptionist said, “But there’s a note here that says that you are supposed to have a check up with him every four to six months. You need to make an appointment.” Mom reasonably explained, “He’s not my main oncologist. He just administered the chemo that I had last year. I’m not having any more chemo. I don’t need to see him.” This is where things began to be weird. The receptionist replied, with some anxiety, “But there’s a note here that says that he wants to see you every four to six months.” “But I’m not having any more chemotherapy.” “But there’s a note here. The doctor wants to see you.” Evidently, they went around the block a couple more times in increasingly emotional tones until Mom finally replied, “I’m not having any more chemo. I’m not coming back to see Dr. Simpson. I’m having radiation and then I’m going to die. Good-bye.” 

If we were to argue about this interaction, Mom would see the receptionist as autonomous and responsible for her own behavior. I’m a softer woman than my mother and I would have felt sorry for the receptionist. From my point of view, Mom was arguing with someone who was in bondage to an information system.

Yet, Mom’s ground truth was rooted not only in her lifelong habit of expecting and demanding dignity from herself and others, but also in the test of this approach in the face of impending death. 

The system that dominated the receptionist’s behavior had no place for Mom’s actual condition, although Mom’s care was its putative object. It attempted to erase her pain, her fatigue, her agency—her identity. She resisted. Dealing with such systems is not part of living a good life. Yet resisting such systems might be—by resisting, she was asserting herself and also asking the receptionist to be better than the system. And the receptionist might have seen her power to behave differently in similar situations. Something might have been learned.

There are two views about how socio-technical systems should be designed. One is that the computer system itself should have been designed to encourage negotiation between the receptionist and my mother. The other is that the receptionist should be trained to take more power over the system. But the need for either or both of these is invisible. It is rare for the death of a thousand cuts suffered by countless system users to be visible to even one other person. 

I admire my mother’s insistence on resistance to the machine, writ large. I would like to die, and to live, as she has. I cannot do anything about her cancer or her impending death, but we designers can strive to perceive and advocate for issues of dignity and compassion even when these issues are not widely acknowledged.

Posted in: on Wed, July 06, 2016 - 2:47:55

Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.

Brigitte (Gitti) Jordan: An obituary

Authors: Elizabeth Churchill
Posted: Thu, June 23, 2016 - 3:43:25

It is with great sadness that we report on Brigitte (Gitti) Jordan’s death; Gitti died on May 24, 2016, at her home in La Honda, California, surrounded by loved ones. She was 78.

While the influence of anthropology and the practice of ethnography may seem very familiar to us in the HCI and technology design world today, it was not always so. During the 1980s and 1990s, a number of trailblazing researchers laid the groundwork for what we now take for granted when it comes to methods for understanding human activities with and around technologies. Gitti was one of those trailblazers. Gitti’s worldview was broad, but she always put people and their interests at the center, whether she was studying childbirth, use of productivity tools, or autonomous vehicles. Her field sites included village huts, corporate research labs, and virtual worlds. 

One of the pioneers of business and corporate ethnography and what we now call design ethnography, Gitti always emphasized grounding design, and especially technology design, by understanding people's everyday landscapes, their needs, values, behaviors, and settings. Although SIGCHI and Interactions were not among Gitti’s intellectual and publishing “homes,” she certainly had influence on a number of people and projects that are part of the SIGCHI canon. Gitti coined the term “lifescapes of the future” and consistently used research on what is happening now to speculate about the changes that are likely to be wrought by the introduction of new technologies.

Gitti’s curiosity about technologies in use by consumers, but also by technologists, went back a long way, and throughout her long career she sustained an engaged yet highly critical relationship with computer modelers, cognitive scientists, and artificial intelligence researchers. Her 1971 master’s thesis, Diffusion, Models and Computer Analysis: A Simulation of the Diffusion of Innovations, explored how computer simulations might be better exploited by anthropologists and earned her an M.A. from Sacramento State College.

Gitti’s Ph.D. was based at the University of California, Irvine, where she engaged deeply with developments in ethnomethodology and conversation analysis, emerging thinking in cognition (now known as situated and distributed cognition), learning theory, and more. She also furthered her interest in computer science, taking a course with the young professor John Seely Brown, who went on to lead Xerox PARC (Palo Alto Research Center) and who, when there, invited Gitti to join PARC. At PARC, Gitti worked with Lucy Suchman, Jeanette Blomberg, Julian Orr, and other pioneers to advance the contributions of anthropological and ethnographic study of complex technology. At the same time, she became a senior researcher at the Institute for Research on Learning (IRL), where she played a central role in establishing IRL’s depth of focus and understanding of processes of social learning wherever it is found. Gitti led numerous teams through rich and challenging projects in corporate workplace settings to examine and help support meaningful knowledge economies. Gitti had a keen interest in methodology, leading regular interaction analysis labs at both PARC and IRL.

In the last years of her life, she consulted to the Nissan Research Center in Silicon Valley, a lab led by artificial intelligence scientists and roboticists aiming to develop autonomous vehicles. In this environment, Gitti insisted on the need to dedicate equal, if not more, attention to the human implications of this emerging technology.

Gitti was characterized by her boundless curiosity, personal warmth, and encouraging style. Her standards regarding ideas and empirical realities and their interactions were high and exacting. Again and again people point to her as the reason they are doing what they are doing—and more importantly, for finding and rekindling their sense of excitement and importance in the work that they do. She was respected, admired, and loved by her colleagues, family, and friends, and her multiple legacies will live on as others continue to carry forward her work in the major fields she helped to found. 

Written with gracious assistance and contributions from Melissa Cefkin, Bob Irwin, Robbie Davis Floyd, Lucy Suchman, Jeanette Blomberg, and Susan Stucky.

Posted in: on Thu, June 23, 2016 - 3:43:25

Elizabeth Churchill

Elizabeth Churchill is a director of user experience at Google. She has been a scholar and research manager focused on human-computer interaction for over 20 years. A Distinguished Scientist of the ACM, her current work focuses on HCI aspects of the social web and the emerging Internet of Things.

Technology and liberty

Authors: Jonathan Grudin
Posted: Tue, May 03, 2016 - 10:22:39

The absence of plastic microbeads in the soap led to a shower spent reflecting on how technologies can constrain liberties, such as those of microbead producers and consumers who are yearning to be clean.

Technologies that bring tremendous benefits also bring new challenges. Sometimes they create conditions conducive to oppression: oppression of the weak by the strong, the poor by the rich, or the ignorant by the clever. Efforts on behalf of the weak, poor, and ignorant often infringe on the liberty of the strong, rich, and clever. As powerful technologies proliferate, our survival may require us to get the balance right. Further constraints on liberty will balance new liberating opportunities.

Let’s start back a few million years, before beads were micro and technologies changed everything.

Fission-fusion and freedom

For millions of years our ancestors hunted and gathered in fission-fusion bands. A group grew when food was plentiful; in times of scarcity, it split into smaller groups that foraged independently. When food was again plentiful, small groups might merge… or might not. A fission-fusion pattern made it relatively easy for individuals, couples, or small groups to separate and obtain greater independence. This was common: Homo sapiens spread across the planet with extraordinary rapidity, adapting to deserts, jungles, mountaintops and the arctic tundra. That freedom led to the invention of diverse cultural arrangements.

1. Agriculture and the concentration of power

It is a law of nature, common to all men, which time shall neither annul nor destroy, that those who have more strength and power shall rule over those who have less. – Dionysius of Halicarnassus

Agriculture was a transformative technology. Food sufficiency turned roaming hunter-gatherers into farmers and gave rise to large-scale social organization and an explosion in occupations. Everyone could enjoy the arts, crafts, diverse foods, sports, medicine, security, and potential religious salvation, but with it came implicit contracts: Artists, craftspeople, farmers, distributors, athletes, healers, warriors, and priests were guaranteed subsistence. People were collectively responsible for each other, including many they would never meet, individuals outside their immediate kinship groups. People who wanted freedom might slip away into the wilderness, but those who reaped the benefits of civilization were expected to conform to cultural norms that often encroached on personal liberty.

The leader of a hunter-gatherer band had limited power, but agriculture repeatedly spawned empires ruled by despots—pharaohs in Egypt, Julio-Claudian emperors in Rome, and equally problematic rulers in Peru, Mesoamerica, and elsewhere. The Greek historian Dionysius lived when Rome was strong and powerful.

Why this pattern? Governments were needed for security and order: to protect against invasion and to control the violence between kinship groups that was common in hunter-gatherer settings but which interfered with large-scale social organization.

The response to oppression of the weak by the powerful? Gradually, more democratic forms of government constrained emperors, kings, and other powerful figures. Today, violence control is the rule; strong individuals or groups can’t ignore social norms. Even libertarians acknowledge a role for military and police in safeguarding security and enforcing contracts that the strong might violate if they could.

2. The second industrial revolution and the concentration of wealth

Another technological revolution yielded a new problem: oppression of the poor by the wealthy. In the late 19th and early 20th centuries, monopolistic robber barons in control of railroads and mines turned workers into indentured servants. Producers could make fortunes by using railroads to distribute unhealthy or shoddy goods quickly and widely; detection and redress had been much easier when all customers were local.

The response to the oppression of the poor by the wealthy? Perhaps to offset the rise of populist or socialist movements, the United States passed anti-trust legislation in the early 20th century, giving the government a stronger hand in regulating business. Also, the interstate commerce clause of the Constitution was applied more broadly, encroaching on the liberty of monopolists and others who might use manufacturing and transportation technologies exploitatively. It was a steady process. Ralph Nader’s 1965 book Unsafe at Any Speed identified patterns in automobile defects that had gone unnoticed and triggered additional consumer protection legislation. Later, after a loosening of regulations enabled wealthy financiers to wreck the world economy a decade ago, the 2010 Dodd–Frank Wall Street Reform and Consumer Protection Act constrained the liberty of the wealthy, an effort to head off a recurrence that may or may not prove sufficient. Some libertarians on the political right, such as the Koch brothers, are vehemently anti-regulation, but for a century most people have accepted constraints [1].

3. Information technology and the concentration of knowledge

My libertarian friends in the tech industry believe they desire the freedom of the cave-dweller. Sort of. Not strong and powerful themselves, they support our collective endeavors to maintain security and enforce signed contracts. They are not among the 1%, either, and they favor preventing the very wealthy from reducing the rest of us to indentured servitude in the manner of the robber baron monopolists.

However, my libertarian tech friends are clever, and they oppose limiting the ability of the intelligent to oppress the less intelligent through contracts with implications or downstream effects that the less clever cannot figure out: “The market rules, and a contract is a contract.” Technology that provides unencumbered information access gives an edge to sharp individuals. The Big Short illustrated this; banks outsmarted less astute homeowners and investors, then a few very clever fellows beat the bankers, who succeeded in passing on most of their losses to customers and taxpayers.

The response to the oppression of the slow by the quick-witted? A clear example is the 1974 U.S. Federal Trade Commission rule that designates a three-day “cooling-off period” during which anyone can undo a contract signed with a fast-talking door-to-door salesman. Europe has also instituted cooling-off periods. The U.S. law applies to any sale for over $25 made in a place other than the seller’s usual place of business. How this will be applied to online transactions is an interesting question. More generally, though, information technology provides ever more opportunities for the quick to outwit the slow. We must decide, as we did with the strong and the rich, what is equitable.

Butterfly effects

Technology has accelerated the erosion of liberty by accelerating the ability of an individual to have powerful effects on other individuals. Twenty thousand years ago, a bad actor could only affect his band and perhaps a few neighboring groups. In agrarian societies, a despot’s reach could extend hundreds of miles. Today, those affected can be nearby or in distant places, with an impact that is immediate and evident, or delayed and with an obscure causal link. The impact can potentially extend to the entire planet. It is not only those with a finger on a nuclear button who can do irreparable damage. Harmful manufactured goods can spread more quickly than a virus or a parasite. A carcinogen in a popular product can soon be in most homes.

We who do not live alone in a cave are all in this together, signatories to an implicit social contract that may be stronger than some prefer, which limits our freedom to do as we please. Constraining liberty is not an effort to deprive others of the rewards of their efforts. It is done to protect people from those who might intentionally or unintentionally, through negligence, malfeasance, oppression, or simply lack of awareness, violate the loose social contract that for thousands of years has provided our species with the invaluable freedom to experiment, innovate, and trust one another—or leave their society to build something different. If the powerful, wealthy, or clever press their advantage too hard, we risk becoming a distrustful, less productive, and less peaceful society.

Plastic microbeads in cosmetics and soaps spread quickly, accumulating by the billions in lakes and oceans, attracting toxins and adhering to fish, reminiscent of the chlorofluorocarbon buildup that once devastated the ozone layer. In 2013 the UN Environment Programme discouraged microbead use. Regional bans followed. Even an anti-regulatory U.S. Congress passed the Microbead-Free Waters Act of 2015. It only applies to rinse-off cosmetics, but some states went further. The most stringent, in California of course, overcame opposition from Procter & Gamble and Johnson & Johnson. Our creativity has burdened us with the responsibility for eternal vigilance in detecting and addressing potential catastrophes.


1. Politicians who favor freedom for themselves but would, for example, deny women reproductive choices might not seem to fit the definition of libertarian, but some claim that mantle.

Thanks to John King and Clayton Lewis for discussions and comments, and to my libertarian friends for arguing over these issues and helping me sort out my thoughts, even if we have not bridged the gap.

Posted: Tue, May 03, 2016 - 10:22:39

Jonathan Grudin

Jonathan Grudin works on support for education at Microsoft. Access these and related papers at under Prototype Systems.

Oh, the places I will go

Posted: Fri, April 29, 2016 - 1:26:09

Over the decades, conferences, symposia, webinars, and summits have formed critical portions of my professional development. I learned about controlled vocabularies and usability testing and the viscosity of information and personas and information visualization and so much more from attending two- or three-day events—and even local evening presentations from peers and leaders alike, all centered on this thing of user experience.

So along with dogwood blossoms and motorcycling weather, the conference season blooms anew. This year finds me focusing on three events, with the anticipation of meeting up with friends both met and as-yet unknown.

The Information Architecture Summit
May 4–8, 2016, Atlanta, GA

I’ve attended all but two of the summits since its inception at the Boston Logan Hilton in 2000. That year, the summit saw itself as a needed discussion and intersection point between the information-design-oriented (and predominantly West Coast) information architects and the library-science-oriented East Coasters.

When it began, at the ebbing of the wave of dotcom headiness, the American Society for Information Science (and, later, Technology) decided to hold the summit only over the weekend and in an airport hotel, to reduce the impact of folks having to miss work. We had so much to do back then, didn’t we?

As it grew, the IA Summit expanded the days of the core conference and added several days of workshops before the summit itself.

Because I’ve attended all but two summits (2003 and 2004), I know a lot of the folks who have woven in and out of its tapestry. So, for me, this event is as important for inspiration from seeing who’s doing what and catching up with people as it is in learning from sessions.

But learning is a core component; last year’s summit in Minneapolis reignited an excitement for IA through Marsha Haverty’s “What We Mean by Meaning” and Andrew Hinton’s work on context and embodied cognition. 

So expect heady bouts with science, technology, and philosophy, alongside practical work in IA. Oh, and come see me speak on Sunday, if ya’d like.

Then there’s the Hallway, where the conference really takes place. From conversations to karaoke, from game night to the jam, the IA Summit creates a community outside the confines of the mere conference itself.

Enterprise User Experience
June 8–10, 2016, in San Antonio, TX

Last year’s inaugural Enterprise UX conference took me and much of the UX world by storm: a much-needed conference focused on the complexity of enterprise approaches to user experience. Two days of a single-tracked session event followed by a day of optional workshops provided great opportunities for learning, discussion, and debate. 

A highlight was Dan Willis’s wonderfully unique session showcasing eight storytellers rapidly recounting their personal experiences in ways at once humorous and poignant.

This year, luminaries such as Steve Baty from Meld Studios in Sydney, MJ Broadbent from GE Digital, and Maria Giudice from Autodesk will be among a plethora of great speakers.

These themes guide the conference this year:

  • How to Succeed when Everyone is Your User
  • Growing UX Talent and Teams
  • Designing Design Systems
  • The Politics of Innovation

Plus, the organization of the conference is simply stellar. Props to Rosenfeld Media for spearheading this topic!

edUi
October 24–26, 2016, in Charlottesville, VA

This conference is as much of a labor of love and devotion to the field of UX in EDU as anything. Also, I’ve been involved since its inception: The first year I was an attendee, the second year a speaker, and ever since I’ve been involved in programming and planning. So, yeah, I’ve a vested interest in this conference.

Virginia Foundation for the Humanities’ (VFH) Web Communications Officer Trey Mitchell and former UVA Library programmer Jon Loy were sitting around one day, thinking about conferences such as the IA Summit, the Interaction Design Association’s conference, Higher Ed Web, and other cool UX-y conferences and thought, “Why don’t we create a conference here in Virginia that we’d wanna go to?”

Well, there’s a bit more to the story, but they created a unique event focused on the .edu crowd—museums, universities, colleges, libraries, institutes, and foundations—while also providing great content for anyone in the UX space.

As the website says, “edUi is a concatenation of ‘edu’ (as in .edu) and ‘UI’ (as in user interface). You can pronounce it any way you like. Some people spell it out like ‘eee dee you eye’ but most commonly we say it like ‘ed you eye.’”

Molly Holzschlag, Jared Spool, and Nick Gould stepped onto the podia in 2009, among many others. Since then, Trey and company have brought an amazing roster of folks. 

For the first two years, the conference was in Charlottesville. Then it moved to Richmond for four years. Last year it returned to Charlottesville and Trey led a redesign of the conference. From moving out of the hotel meeting rooms and into inspiring spaces along the downtown Charlottesville pedestrian mall to sudden surprises of street performers during the breaks, the conference became almost a mini festival where an informative conference broke out.

This year promises to continue in that vein. So if a 250-ish-person conference focused on issues of UX that lean toward (but aren’t exclusively) .edu-y sounds interesting…meet me in Charlottesville.


Violent groups, social psychology, and computing

Authors: Juan Hourcade
Posted: Mon, April 25, 2016 - 2:55:08

About two years ago, I participated in the first Build Peace conference, a meeting of practitioners and researchers from a wide range of backgrounds with a common interest in using technologies to promote peace around the world. During one session, the presenter asked members of the audience to raise their hands if they had lived in multiple countries for an extended period of time. Most hands in the audience went up, which was at the same time a surprise and a revelation. Perhaps there is something about learning to see the world from another perspective, as long as we are receptive to it, that can lead us to see our common humanity as more binding than group allegiances.

It’s not that group allegiances are necessarily negative. They can be very useful for working together toward common goals. Moreover, most groups use peaceful methods toward constructive goals. The problems come when strong group allegiances intersect with ideologies where violence is a widely accepted method, and dominion over (or elimination of) other groups is a goal.

A recent Scientific American Mind magazine issue with several articles on terrorism highlights risk factors associated with participation in groups supporting the use of violence against other groups. A consistent theme is the strong sense of belonging to a particular group, to the exclusion of other groups, in some cases including family and childhood friends, together with viewing those from other groups as outsiders to be ignored or worse.

Information filters or bubbles can play a role in isolating people so they mostly have a deep engagement with the viewpoints of only one group, and can validate extreme views with people outside of their physical community. These filters and bubbles are not new to the world of social media, but they are easily realized within it as competing services attempt to capture our attention by providing us with content we are more likely to enjoy.

At the same time, interactive technology and social media can be the remedy that breaks us out of these filters and bubbles. To think about what some of these remedies might be, I discuss findings from several articles, all cited in the previously mentioned Scientific American Mind issue.

The first area in which interactive technologies could help is in making us realize that our views are not always broadly accepted. This addresses a challenge referred to as the “false consensus effect,” through which we often believe that our personal judgments are common among others. Perhaps providing a sense of the relative commonality (or rarity) of certain beliefs could be useful.

Sometimes we may not have strong feelings about something, and it seems that if that is the case, we tend to copy the decisions of others we feel resemble us most, while disregarding those who are different. It’s important in this case, then, to highlight experts from outside someone’s group, as well as helping us realize that people from other groups often make decisions that would work for us too.

Allegiances to groups can get to the point of expressing willingness to die for one’s group when people feel that their identity is fused with that of the group. Interactive technologies could help in this regard by making it easier to identify with multiple groups, so that we don’t feel solely associated with one.

As I mentioned earlier, being part of a tight group most of the time does not lead to problems, and can often be useful. But what if the group widely accepts the use of violence to achieve dominance over others? One way to bring people back from these groups is to reconnect them with memories and emotions of their earlier life, helping them reunite with family and old friends. Social media already does a good job of this, but perhaps there could be a way of highlighting the positives from the past in order to help. With a bit of content analysis, it would be possible to focus on the positive highlights.
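The kind of content analysis suggested above could start as something very simple, such as a lexicon-based sentiment scorer that ranks a person's old posts and surfaces the most positive ones. The sketch below is a toy model; the word lists (and the whole bag-of-words approach) are illustrative stand-ins for a real sentiment classifier:

```python
# Toy lexicon-based sentiment scorer for resurfacing positive memories.
# The word lists are illustrative assumptions, not a real sentiment lexicon.
POSITIVE = {"happy", "love", "friend", "fun", "wonderful", "family"}
NEGATIVE = {"hate", "angry", "enemy", "fight", "awful"}

def sentiment_score(text: str) -> int:
    """Count positive words minus negative words in a post."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def positive_highlights(posts, top_n=2):
    """Return the most positively scored posts, e.g. to reconnect
    someone with good memories of family and old friends."""
    ranked = sorted(posts, key=sentiment_score, reverse=True)
    return [p for p in ranked[:top_n] if sentiment_score(p) > 0]

posts = [
    "wonderful day at the lake with family",
    "so angry about the fight yesterday",
    "fun afternoon with an old friend",
]
print(positive_highlights(posts))  # the two positive posts, not the angry one
```

A production system would need tokenization, negation handling, and a trained model rather than hand-built word lists, but the ranking structure would be the same.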

There is obviously much more to consider and discuss within this topic. I encourage you to continue this discussion in person during the Conflict & HCI SIG at CHI 2016, on Thursday, May 12, at 11:30am in room 112. See you there!


Juan Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.

Collateral damage

Authors: Jonathan Grudin
Posted: Tue, April 05, 2016 - 1:06:30

Researchers are rewarded for publishing, but this time, my heart wasn’t in it.

It was 2006. IBM software let an employer specify an interval—two months, six months, a year—after which an email message would disappear. This was a relatively new concept. Digital storage had long been too expensive to retain much, but prices had dropped and capacity had increased. People no longer filled their hard drives. Many saved email.

When IBM put automatic email deletion into practice, a research manager asked her IT guy to disable it. “That would be against policy,” he pointed out. She replied, “Disable it.” Another IBM acquaintance avoided upgrading to the version of the email system that included the feature. When she returned from a sick leave, she found that a helpful colleague had updated her system. Her entire email archive was irretrievably gone.

“We call it email retention, but it’s really email deletion.”

Word got around that Microsoft would deploy a new “managed email” tool to all North American employees, deleting most messages after six months (extended grudgingly to 12 when some argued email was needed in preparing for annual reviews). Because of exceptions—for example, patent-related documents must be preserved for 10 years—employees would have to file email appropriately.

Many researchers, myself included, prefer to hang onto stuff indefinitely. I paused another project to inquire and learned that a former student of mine was working on it. A pilot test with 1,000 employees was underway, he said. In a company of 100,000, it is easy not to hear about such things. He added that it was not his favorite project, and soon left the team.

Our legal division had assembled a team of about 10 to oversee the deployment. Two-thirds were women, including the group manager and her manager. They were enthusiastic. Many had voluntarily transferred from positions in records management or IT to work on it. My assumption that people embracing email annihilation were authoritarian types quickly proved wrong; it was a friendly group with bohemian streaks. They just didn’t like large piles of email.

I had assumed that the goal of deleting messages was to avoid embarrassing revelations, such as an admission that smoking is unhealthy or a threat to cut off a competitor’s air supply. Wrong again. True, some customers clamoring for this capability had figured prominently in questionable government contracting and environmental abuse. But it is a crime to intentionally delete inculpatory evidence and, I was told, litigation outcomes are based on patterns of behavior, not the odd colorful remark that draws press notice.

Why then delete email? Not everyone realized that storage costs had plummeted, but for large organizations, the primary motive was to reduce the cost of “ediscovery,” not hardware expenditures.

Major companies are involved in more litigation than you might think. Each party subpoenas the other’s correspondence, which is read by high-priced attorneys. They read their side’s documents to avoid being surprised and to identify any that need not be turned over, such as personal email, clearly irrelevant email, and any correspondence with an attorney, which as we know from film and television falls under attorney-client privilege. A large company can spend tens of millions of dollars a year reading its employees’ email. Reduce the email lying around to be discovered, the thinking went, and you reduce ediscovery expenses.

Word of researcher unhappiness over the approaching email massacre reached the ears of the company’s Chief Software Architect, Bill Gates. We were granted an exemption: A “research” category was created, similar to that for patent-related communication.

Nevertheless, I pursued the matter. I asked the team about the 1,000-employee pilot deployment. The response was, “The software works.” Great, but what was the user experience? They had no idea. The purpose of the pilot was to see that the software deleted what it should—and only what it should. The most important exception to automatic deletion is “litigation hold”: Documents of an employee involved in litigation must be preserved. Accidental deletion of email sent or received by someone on litigation hold could be catastrophic.
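The deletion scheme just described is, at bottom, a small set of precedence rules: litigation hold beats everything, exempt categories beat age, and age beats the default. A minimal sketch in Python (the category names and retention periods here are illustrative assumptions, not the actual policy of IBM or Microsoft):

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative retention periods; a real policy would be far more detailed.
RETENTION = {
    "default": timedelta(days=183),   # roughly six months
    "review": timedelta(days=365),    # kept 12 months for annual reviews
    "patent": timedelta(days=3650),   # patent-related: preserved 10 years
    "research": None,                 # exempt category: never auto-deleted
}

@dataclass
class Message:
    received: date
    category: str = "default"

def should_delete(msg: Message, today: date, on_litigation_hold: bool) -> bool:
    """Return True if auto-deletion may remove this message.

    Litigation hold trumps everything: mail of an employee involved in
    litigation must be preserved regardless of age or category.
    """
    if on_litigation_hold:
        return False
    limit = RETENTION.get(msg.category, RETENTION["default"])
    if limit is None:  # exempt category
        return False
    return today - msg.received > limit

# A two-year-old message in the default category is eligible for deletion...
old = Message(received=date(2004, 1, 1))
print(should_delete(old, date(2006, 1, 2), on_litigation_hold=False))  # True
# ...unless its owner is on litigation hold.
print(should_delete(old, date(2006, 1, 2), on_litigation_hold=True))   # False
```

Note that even a policy this simple imposes a classification burden: every message must land in the correct category for the rules to work, which is exactly the filing effort employees were being asked to take on.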

The deployment team was intrigued by the idea of asking the early participants about their experiences. Maybe we would find and fix problems, and identify best practices to promote. This willingness to seek out complaints was to the team’s credit, although I was realizing that they and I had very different views of the probable outcome. They believed that most employees would want to reduce ediscovery costs and storage space requirements, and about that they were right. But they also believed that saving less email would increase day-to-day operational efficiency, whereas my intuition was that it would reduce efficiency, and not by a small amount. But I had been wrong about a lot so far, a not uncommon result of venturing beyond the ivory tower walls of a research laboratory, so I was open-minded.

“Doesn’t all that email make you feel grubby?”

My new collaborators often invoked the term “business value.” The discipline of records management matured at a time when secretaries maintained rolodexes and filing cabinets organized to facilitate information retrieval. Despite such efforts, records often proved difficult to locate. A large chemical company manager told me that it was less expensive to run new tests of the properties of a chemical compound than to find the results of identical tests carried out years earlier.

To keep things manageable back then, only records that had business value were retained. To save everything would be painful and make retrieval a nightmare. Raised in this tradition, my easygoing colleagues were uncompromising in their determination to expunge my treasured email. They equated sparsity with healthy efficiency. When I revealed that I saved everything, they regarded me sadly, as though I had a disease.

I have no assistant to file documents and maintain rolodexes. I may never again wish to contact this participant in a brief email exchange—but what if five years from now I do? Adding everyone to my contact list is too much trouble, so I keep the email, and a quick search based on the topic or approximate date can retrieve her in seconds. It happens often enough.

I distributed to the pilot participants an email survey comprising multiple-choice and open-ended questions. The next step was to dig deeper via interviews. Fascinated, the deployment team asked to help. Working with inexperienced interviewers does not reduce the load, but the benefits of having the team engage with their users outweighed that consideration. I put together a short course on interview methods.

“Each informant represents a thousand other employees, a million potential customers—we want to understand the informant, not convert them to our way of thinking. For example, someone conducting a survey of voter preferences has opinions, but doesn’t argue with a voter who differs. If someone reports an intention to vote for Ralph Nader, the interviewer doesn’t shout, ‘What? Throw away your vote?’”

Everyone nodded.

“In exchange for the informant trusting us with their information, our duty is to protect them.” After they nodded again, I continued with a challenging example drawn from the email survey: “For example, if an employee says that he or she gets around the system by using gmail for work-related communication—”

White-faced, a team member interrupted me through clenched teeth, “That would be a firing offense!”

At the end of the training session, the team manager said, “I don’t think I’ll be able to keep myself from arguing with people.” Everyone laughed.

The white-faced team member dropped out. Each interview save one was conducted by one team member and myself, so I could keep it on track. One interview I couldn’t attend. The team manager and another went. When I later asked where the data were, they looked embarrassed. “We argued with him,” the team manager reported. “We converted him.”

My intuition batting average jumps to one for three

The survey and interviews established that auto-deletion of email was disastrously inefficient. The cost of the time that employees spent categorizing email as required by the system outweighed ediscovery costs. Time was also lost reconstructing information that had only been retained in email. “I spent four hours rebuilding a spreadsheet that was deleted.”

Workarounds contrived to hide email in other places took time and made reviewing messages more difficult. Such workarounds would also create huge problems if litigants’ attorneys became aware they existed, as the company would be responsible for ferreting them out and turning everything over.

Most damning of all, I discovered that managed email would not reduce ediscovery costs much. The executives and senior managers whose email was most often subpoenaed were always on litigation hold for one case or another, so their email was never deleted and would have to be read. The 90% of employees who were never subpoenaed would bear virtually all of the inconvenience.

Finally, ediscovery costs were declining. Software firms were developing tools to pre-process and categorize documents, enabling attorneys to review them an order of magnitude more efficiently. At one such firm I saw attorneys in front of large displays, viewing clusters of documents that had been automatically categorized on several dimensions and arranged so that an attorney could dismiss a batch with one click—all email about planning lunch or discussing performance reviews—or drill down and redirect items. That firm had experimented with attorneys using an Xbox handset rather than a keyboard and mouse to manipulate clusters of documents. They obtained an additional 10% increase in efficiency. However, they feared that customers who saw attorneys using Xbox handsets would conclude that these were not the professionals they wanted to hire for a couple hundred dollars an hour, so the idea was