Blogs

A tent, a pigeon house, and a pomegranate tree


Authors: Shaimaa Lazem
Posted: Fri, September 29, 2017 - 9:43:51

Note: This blog post was coauthored by Danilo Giglitto, research associate, and Anne Preston, senior lecturer in technology-enhanced learning, based at the Learning and Teaching Enhancement Centre (LTEC), Kingston University, London.

After the 2011 revolution, Egypt faced a challenging socioeconomic transition. Since then, the ICT sector has become one of the promising contributors to Egypt’s economic growth. In 2014, the Ministry of Communications and Information Technology announced the Social Responsibility Strategy in ICT, with an inclusive vision for using technology to integrate different societal groups to achieve equality, prosperity, and social stability. Such goals demand that technology professionals be equipped with user-centered skills to design for groups with various socioeconomic backgrounds.

As a response to these goals, in August 2017 we ran an eight-day HCI summer school for designing technologies to document intangible cultural heritage (ICH) in the northwest of Egypt. The school was part of a UK-Egypt institutional link, the Hilali Network, a Newton-Mosharafa project between the City for Scientific Research and Technology Applications (SRTA-City) and Kingston University London. The link aimed at advancing HCI education in Egypt by training 18 engineering students from Alexandria University to engage in technology design activities with members from the Bedouin community of Borg El-Arab. 

The Bedouins in Egypt are an important tribal nomadic community who migrated to Egypt from the Arabian Peninsula hundreds of years ago, inhabiting the northern and western deserts and the Sinai Peninsula. With increased urbanization in those areas, however, they have become a mostly settled community, at risk of losing the social practices, oral traditions, customs, language, and identity associated with their intangible cultural heritage. Digital technology has often played a major role in documenting ICH at risk of loss, with Web-based material increasing its access and dissemination. Our proposal was that the sustainability of such an approach could be harnessed to its full potential by supporting the participation of community members. This remains a challenge, since ICH should be researched within each specific social, cultural, and technological setting. We therefore argued that a bottom-up approach to ICH could benefit from HCI participatory methods to engage communities with technologies.

The challenges we had to face were numerous and complex, including that our students had technically oriented mindsets: they were less appreciative of topics they classified as “humanities” and were reluctant to engage with community members in participatory activities. The latter challenge has surfaced in the ArabHCI network as a common issue across the Arab world. 

Before the school started, we discussed the project with community members who worked at SRTA-City. Some of them were familiar with scholars who had come to study their traditions, but the participatory approach we intended to adopt was new to them. They shared the news of their participation with others; they were proud of their Bedouin heritage and recognized the risk of it fading away, as many of them now attend modern schools and have moved to cities to study and work.

We designed the summer school curriculum so that students would gradually build a partnership with the chosen community, while the instructors acted as facilitators, scaffolding and advising students throughout. The curriculum used interactive material emphasizing hands-on practice and learning by doing. 

We used the Double Diamond design process model by the UK Design Council to structure the school's activities. It is a four-stage model: Discover, Define, Develop, and Deliver, with every two stages forming a diamond shape. The first and third stages are exploratory, while the second and fourth narrow the scope and define focus. Each stage took roughly a couple of days in our curriculum. Lectures were used mostly in the first, exploratory stage. In each stage, we had a “participatory moment,” where students worked closely with community members. 

In the first stage, Discover, we encouraged students to take a conceptual leap from being the engineering student—who receives a well-defined problem to solve—to becoming a design-thinker—who is co-responsible for framing the design and sociocultural challenges. We introduced basic HCI concepts such as usability and user experience, and bottom-up approaches to ICH documentation. 

The participatory moment in this phase was a trip we asked community members to organize so that the students could learn more about Bedouin culture. We visited a “Nagae,” a group of houses belonging to the same family, “El-Sanakra.” They set up a special Arabian tent for us, something they normally do only for their festive events. Bedouin culture prohibits young women from interacting with unknown males, so the women visitors met the Bedouin women inside the house, while the men were hosted in the tent. The house itself was modern on the inside, with a flat-screen TV and WiFi connection. Everyone, including the oldest low-literate women, had mobile phones. The house featured the traditional “burj,” or pigeon house, which they use for food and for their hunting falcons. The house also had fig and pomegranate trees, from which they harvested fruit, as both crops thrive in the desert climate. We were surprised by their modern lifestyle, which unearthed interesting discussions about fading traditions.


The pigeons' house, or burj.


The Bedouin tent in Nagae El-Sanakra.

In the second stage, Define, the students were divided into teams. Each team had to define the scope for their projects (what traditions they would document, who would be their users, what the technical challenges would be). Some of the students had ideas based on the reports they collected during the field trip. We trained students in methods to help them understand their participants’ needs and perspectives (e.g., conducting interviews, ethnographic observations, culture probes). We asked the teams to design a two-hour workshop with one or two Bedouin participants to gather the information that would help them define their focus. Every team prepared a semi-structured interview and designed a probe as a family gift for their participant. 

For instance, students designed a family tree, where the participant was invited to color its leaves according to their knowledge of and interest in documenting a Bedouin poem. Another probe was a tent with a box inside containing colored cards (colors varied according to gender and age). The participant was invited to ask members of his household to write something about what makes them proud Bedouins. 

In the Develop phase, the students used personas to describe their target users as they defined them in the previous stage. They analyzed the data they gathered from the interviews to find insights, identify opportunity areas, and brainstorm to generate ideas about potential solutions. Further, they conducted a second workshop to test their ideas, in which they handed over low-fidelity prototypes to one or two community members, who contributed to the design process. 


Design ideas and prototyping.

In the Deliver stage, students designed four prototypes for mobile applications. These included technology for documenting improvised Bedouin poems and assessing the documenter’s knowledge of traditions. Other applications used games to educate children about old Bedouin traditions or supported e-marketing of Bedouin crafts. The prototypes were presented to community members, who gave feedback on the designs. 

The experience was very positive for students and community members, as we learned in the follow-up focus groups. The Double Diamond model was a good framework for teaching a user-centered approach because it guided the students on when they should adopt divergent or convergent thinking. The ICH case study proved invaluable in teaching the students to drop their assumptions about a typical computer user, which was quite a challenge for students immersed in 21st-century technologies. Probe design and persona tasks helped them think deeply about their participants. Further, they had to be attentive to user interface details, as the Bedouin community is particular about its culture. Overall, students tended to struggle with the design tasks that required data abstraction and synthesis (e.g., generating insights and themes from field notes and interviews), as these can take a long time.

More than half of the school activities were led by the students, so we prepared the assignments to help us reflect on students’ progress. We thus had to check their responses every day, which was very demanding. Watching them develop their sense of design agency, however, was our reward. We plan to revise our curriculum and intend to offer it as a resource for instructors interested in adopting our approach to student-led learning and sensitization to HCI as a tool for community-driven learning and teaching. 

During the school the students maintained an independent blog reporting about their experience, which you can check out here.




Shaimaa Lazem

Shaimaa Lazem is a researcher at the City for Scientific Research and Technology Applications, Egypt. She is interested in HCI education and technology-enhanced learning. @ShaimaaLazem


@Nermeen Saeed Ahmed (2017 10 29)

CS students’ engagement with the Bedouin community was a real challenge because developers and technicians have a mindset that differs from that of Bedouin community members, whether culturally, technically, or in their way of thinking about things. CS students also weren’t aware of how urbanized the Bedouin community members are, so they were afraid of facing the whole community with its culture, traditions, and technicalities.
In my opinion, involving the users (Bedouin community members) as co-designers will greatly benefit the design process and help designers follow a human-centered design approach.



Design4Arabs Workshop @DIS 2017


Authors: Ebtisam Alabdulqader
Posted: Mon, August 07, 2017 - 12:50:31

Note: This blog post was co-authored by Zohal Azimi, a student at SUNY Farmingdale State College studying visual communications, and Shaimaa Lazem, a researcher at the City for Scientific Research and Technology Applications, Egypt.

HCI research plays an important role in the design and development of interactive systems worldwide. There is increased awareness of and interest in designing for Arabic regions and cultures within HCI, yet current studies often address “Arabs” as a single entity. There is therefore a need to reflect the diversity of the Arabic regions and the uniqueness of the Arabic cultures that affect technology design. As researchers with in-depth knowledge of the Arabic regions who are studying and working, or have experience of studying and working, within the U.S. and the U.K., we want to focus on design research that reflects our interests and concerns in designing for the Arab world, including countries where Arabic culture and language are prevalent and regions where Arabic communities are growing. With this in mind, we established the ArabHCI initiative in 2016 to empower, bridge, and connect HCI researchers and practitioners from the Arab world with those who are already working in or interested in this context. Our aim is to establish a community of HCI researchers interested in the Arab context to leverage their “insider” understanding and explore the challenges and unique opportunities in technology design.

We organized several activities to start working on this aim: A SIG meeting was held at CHI’17 to discuss key points on current HCI research across the Arab world, followed by a networking dinner. As the initiative leader, I (Ebtisam) was invited to share my vision and give talks about our mission to increase awareness of the status and opportunities of HCI education and research in the Arab world. This took place at the Diversity Lunch at CHI’17 and will also take place at the forthcoming INTERACT’17. Moreover, to promote ArabHCI discussions, a series of “Tweeting Hours” was scheduled to share ideas and experiences regarding specific aspects of HCI research in the Arab context. The outcomes of these events emphasized the demand to establish a community and encouraged us to plan future events.

Informed by this feedback, we eagerly brainstormed a precise agenda for the Design4Arabs workshop at DIS 2017 in Edinburgh. Seventeen individuals, including students, researchers, and professionals, registered to attend. Our diverse group of participants was experienced in and interested in working within the Arab context, with a 10:7 ratio of Arab to non-Arab attendees. The diverse backgrounds and experiences of the attendees enriched the discussions around technology design. Participants came from all over the world, including the U.K., Germany, and the U.S., and had submitted a wide range of position papers (available on ArabHCI.org). Some papers highlighted the challenges and experiences we face as a community when it comes to designing interactive systems in Arab regions, such as the lack of participatory design. Other papers shed light on design-focused opportunities; for instance, how to design marital matchmaking technologies in Saudi Arabia, or how to design for disabled people in Jordan. 

Throughout our one-day workshop we had intriguing discussions around designing interactive systems and how to confront the emerging design challenges in Arabic regions. We started with lightning talks, presented by the participants based on their experiences, that helped us foster a better understanding of HCI research and identify the most pressing needs in the Arab context. Anne Webber from the University of Siegen called on designers to develop digital platforms for refugees and “work with the people, rather than design for them.” This goes hand in hand with our initiative: we should be designing for diversity while designing interactive systems to connect refugees, migrants, and the local volunteers and professionals aiding them. Franziska Tachtler from the Vienna University of Technology introduced an interesting perspective, saying, “I wonder if there are any experiences of using participatory design in the Arab world, as this has been used in the West.” We need to join together as a community to adopt new, practical, and suitable research methods for designing interactive technology with the Arab world. Consequently, this will help us address unique needs, such as understanding different cultural backgrounds and involving the user sensitively in the design process. It will also promote wider adoption of participatory design and reduce top-down design processes related to politics and governance within these regions.

The remaining sessions focused on exploring challenges and opportunities in collaborations between participants. We split into four groups of approximately five to have open discussions about the contextual and methodological challenges facing HCI studies in the Arab world. Each group was given six thematic cards, each with a probe list relevant to the card's topic, to facilitate their discussions. We noted that when designing for the Arab world, we should delve into the design process and learn how to design technology that reflects Arab culture and values, the Arabic language, and its various dialects. We pointed out that when designing, we should not start from technology; rather, we should start by understanding the specific Arab users first. We need to ask ourselves: Who are the specific people we seek to design with and for? How do we engage Arabic users in ways that show respect for their cultures? How do we design sensitively in ways that respect differences in gender and age? How do we account for a variety of concerns associated with security and privacy? The groups shared other ideas and created visuals to develop an insider understanding of these problems. We identified that HCI researchers need to engage more Arab communities in research practices in order to gain a deeper understanding of these issues.


Mapping experiences of being an insider within the Arab context.

After going around the room discussing the challenges, it was time to explore what design opportunities there are. Putting our heads together was enlightening, as it helped us dig deep into the problems that current HCI researchers might overlook while designing technologies for the Arab world. It sparked ideas and specific design opportunities for serving particular needs. One group's goal was to increase kids' autonomy in expressing their design ideas to one another and help them present those ideas to their peers. The result was a social platform to bring kids from the local neighborhood together to build on their design ideas and proposals, producing inclusive design projects for kids in different Arab countries. Another group suggested designing a game to help young girls understand the struggles of teenage girls transitioning into young women. A third group proposed ideas on how to improve the education system for toddlers, with designs differing for each specific Arab country.


Design4Arabs opportunities in action!

It was interesting to work with HCI researchers who are not from the Arab region, as they do not always fully understand the diversity and differences within the Arab context. It was fascinating to see everyone courageously discuss their thoughts and ideas, even when they didn't agree. As we were discussing the concept of supporting teenage girls, the groups suggested involving parents so they could become educated and have open conversations with their daughters. Some of the Arab participants recommended that the proposal be directed toward “mothers” instead of “parents.” They stated that it is very unlikely for a father to be involved: not only would fathers not want to be involved, but daughters would more than likely avoid involving their fathers in such embarrassing discussions. A non-Arab participant from the original group responded that any design should involve education for Arab fathers just as much as for Arab mothers. This friendly disagreement showed us how important it is to conduct extensive research in the Arab context, to reflect on wider social expectations when designing, and to consider how technology might be used to intervene and the impact it might have.

This whole experience has truly given us unforgettable and noteworthy outcomes. It was important for everyone to hear different voices from all over the world openly express their perspectives on Arab HCI. We were fortunate enough to have interested participants who showed us the importance of collaboration between HCI researchers from the East and the West. By going in depth into current and possible challenges, it also helped us analyze Arab HCI research and break it down into what it is now and what it could be in the future. Our community is still growing, and we are striving to move forward on this journey together. For more details about the ArabHCI initiative and to follow the upcoming events, please visit https://arabhci.org.




Ebtisam Alabdulqader

Ebtisam Alabdulqader (@Ebtisam) is a Ph.D. candidate at Open Lab, Newcastle University, and a lecturer at King Saud University. Her research focuses on HCI aspects of social computing, health informatics, accessibility, and mHealth. https://ebtisamaq.com


@soud.a.nassir@student.uts.edu.au (2017 10 16)

If anything, I wish I had been able to attend the workshop. Well done


Women’s Health @CHI


Authors: Madeline Balaam
Posted: Mon, June 19, 2017 - 3:58:35

Note: This blog post was co-authored by Lone Koefoed Hansen from Aarhus University. She works with feminist design, critical computing, and participatory IT.

At CHI 2017 we ran a workshop to reimagine how technology intersects with women’s health. We brought together designers, engineers, programmers, and experts in women’s health over two days in an attempt to radically re-engineer the ways women receive healthcare. We made use of the excellent public maker spaces in Denver’s Central Public Library (https://www.denverlibrary.org/idealab) to build exemplary digital interactions that demonstrate the kinds of innovations we consider necessary to improve women’s health on a global scale. 

Throughout our two-day event, 25 participants developed many responses—participants built an inclusive parenting digital campaign, hacked sex vibrators, experimented with personal visualisations of menstrual cycles, discussed technologies for menopausal women, talked about what it means to be inclusive and exclusive of gender norms, and cultivated various yeasts in a cheap incubator. We put the needs and hopes of people who identify as women front and center for two days and through this, we realized just how rare it is to do so without having to explain why. 


The "hacking a sex vibrator" group start to take the technology apart.

Our workshop could not have come at a better time. One of President Trump’s first acts was to cut funding to international organizations that promote women’s health if they also offer access to abortion services, or even provide advice and information to women about these services. The NGOs affected have highlighted how such an act will decrease marginalized people’s access to vital sexual and reproductive health services, and of course increase illegal abortions and ultimately mortality. Times are hard, but there are ripples of action being taken globally, from the #Repealthe8th campaign using humor to draw attention to the lack of abortion services in Ireland, to the Pussy Hats project, which creates a visual symbol for those advocating for women’s rights in a context of everyday sexual harassment, to the “reproductive justice” hack-a-thon that took place in March this year. At an institutional level, some countries’ leaders readily identify as feminist (hello, Sweden and Canada) and some countries have begun to officially recognize non-binary genders in passports. Even CHI 2017 had a few gender-neutral restrooms.

However, it’s also not easy to do research in an area when the majority of the field thinks it is only relevant to a minority. We are acutely attuned to how the current political and social climate impacts our work. We notice how a workshop on “hacking women’s health” makes it/us counter- or anti- just by naming it “women’s health,” even though half the world’s population identifies as female. We have been innovating digital technologies for women’s health over a number of years, from the relatively innocuous FeedFinder (supporting women in breastfeeding in public) to perhaps the more radical and confident Labella (developing awareness of intimate anatomy). So, by hosting this workshop at CHI we hoped we could not only contribute to these global ripples of action and resistance, but also increase the community, profile, and voice of researchers working in this area. Because, right now—we’ll be honest—it can be hard to be working on this topic. Reviewers have told us that our research is “not science,” is “not ethical,” is “unseemly,” is “not feminist,” or is “too feminist.” In contrast to research we have undertaken in other areas, it seems to us that this work is often held to a higher standard; more is required of us to prove it is worth publishing, or that it is even research at all. 


A portable bio lab that one group used to practice sterilization and culture
yeast whilst discussing internet memes, zine culture, and other feminist tactics.

And this has been our experience of trying to investigate women’s health at SIGCHI venues since running the “Motherhood and HCI” workshop at CHI 2013. There, we had conversations with other attendees who wanted to spend their research time contributing to agendas around women’s health and motherhood, but who couldn’t quite find the right way to persuade their research groups and organizations to allow them to research such topics, or who had to invest in long discussions as to why a topic that “only” affected women was worthy of more than a smile. Somehow thinking about women’s health and the female body is considered daring and brave, and somehow designing for the intersection between technology and women’s bodies is taboo. It sits on the fringe of CHI, and it absolutely shouldn’t. There are untold opportunities for technical and interactional innovations in focusing on women’s health. Those thinking about on-body and wearable computing might find their “killer apps” exactly within this space. There are rich opportunities for designing for advocacy and activism, alongside extremely complex and sensitive settings that are entirely uncharted within the community. We need to find ways of making this type of research mainstream, because the potential for HCI to create positive impacts for women globally is great. 

But why is it so hard to legitimize work in this area? Well, this year we think we’re starting to understand. We are passionate about women’s health and women’s access to healthcare services, and angry that this access is being limited and that women’s lives are at risk as a result. We’re angry enough that we wanted to use our hacking women’s health workshop as an opportunity to not only talk about our research, exchange ideas, and be creative, but to generate impact outside our community. We thought it would be fun and useful to use activities in our workshop as a way of fundraising for Planned Parenthood, and plans were hatching for all sorts of crafts that we could sell at the conference to generate some small donations to this essential women’s and sexual health NGO. However, we were unable to do this since fundraising for any particular cause is not permitted at SIGCHI conferences. 


In the inclusive parenting group, participants are working on the (digital)
campaign material that grew out of discussions that began with breastfeeding.

In the last couple of years, the CHI conference has run CHI4Good days of service. The thrust of these days has not been to raise funds, but to offer time and support (technical, design, research, and otherwise) to selected charities as part of the conference. This is a great way of ensuring meaningful impact for the wider global community. But we wondered why it was that CHI could support charities in these ways, and not others. The answer we received was that charities had applied and been chosen in advance of the conference, and that organizations would not have been selected for CHI4Good if considered to be of a “polarizing political nature.” So is Planned Parenthood too politically polarizing? Can CHI and its attendees contribute to charities and organizations as long as they are considered not too divisive? Animals, children, cycling, yes? Reproductive rights and providing healthcare to women with limited recourse to funds, no? By limiting the organizations we can work with during SIGCHI events, do we limit the impact of the community and potentially marginalize the organizations and charities that need support, in favor of those which are potentially perceived to be more agreeable? And please don’t misinterpret our frustration here. We are not arguing that children, animals, and cycling are not important and do not deserve support. We are simply questioning the biases that this raises not only in who we support as an association, but also who we decide we can do research with, which topics are OK to research, and which are not.


The workshop “hacked” the exhibition space at the opening reception by
installing the results from the workshop on a table next to an ice-cream stand.



Madeline Balaam

Madeline Balaam designs interactions for women’s health, digital health, and wellbeing.


HCI Across Borders @CHI 2017


Authors: Neha Kumar
Posted: Thu, May 25, 2017 - 4:08:49

We are living in uncertain times, where some borders are more visible than others. Even in our increasingly globalized cultures, as people and goods move from one place to another and across socioeconomic strata, and as multiple forms of translation take place between languages and disciplines, there can still be many barriers and dead ends to communication. Yet the field of human-computer interaction (HCI) is continuing to develop a more mature understanding of what a “user” looks like, where users live, and the sociotechnical contexts in which their interactions with computing technologies are situated. The time is therefore ripe to draw attention to these barriers and dead ends, physical and otherwise, hopefully enriching the field of HCI by highlighting diversity and representativeness, while also strengthening ties that transcend boundaries. 

Over a year ago, we were scrambling to put together the HCI Across Borders (HCIxB) workshop at CHI 2016 in San Jose. It was to be attended by over 70 HCI researchers and practitioners from all over the world whose work attempted to cross “borders” of different kinds. Some of their countries had never been represented at CHI before, and we nervously and excitedly brainstormed about how to welcome these participants to a conference that was both exhilarating and overwhelming to attend. Multiple hectic days and nights later, and with a lot of help from SIGCHI and Facebook, it all worked out. And in three more months, we were ready to do it again! This time, it was with a team that had come together at HCIxB in San Jose, though several of us were meeting each other for the first time. This incredible, high-functioning team included Nova Ahmed from Bangladesh, Christian Sturm from Germany, Anicia Peters from Namibia, Sane Gaytan from Mexico, Leonel Morales from Guatemala, and Nithya Sambasivan, Susan Dray, Negin Dahya, and myself, who are based in the U.S. but who have spent significant portions of our lives crossing borders for HCI research. Naveena Karusala, our beloved student volunteer who helped us with much of the planning, also deserves special mention here. 

Our second year turned out to be bigger, even more successful and rewarding than the first, as we organized the first-ever ACM CHI Symposium on HCI Across Borders on May 6–7, 2017. Before we could really get a handle on what was happening, 90 individuals from 22 countries had registered to attend, with 65 accepted papers. Many frazzled emails were exchanged with the space management team in the weeks leading up to CHI. The range of research topics was vast and included domains that are rarely encountered in mainstream (or primarily Western) HCI research. We had asked all submissions to highlight how their work aimed to cross borders and which borders these were. The connections drawn were illuminating and groundbreaking. While one paper aimed to translate video-creation processes from a maternal health deployment to provide instruction on financial services in rural communities, another took a meta approach to unpack the area of overlap between the fields of social computing and HCI for development (HCI4D). Many submissions made gender a focus, and mobile technologies (smart and otherwise) were widely targeted. These papers are all available for reading at www.hcixb.org.


The HCI Across Borders family at CHI 2017

The symposium began with introductions done madness-style as each participant took up to 45 seconds to tell everyone who they were, where they came from, and why they were attending, also sharing a fun fact about themselves (always the hardest!). This was followed by a poster session that lasted 90 minutes. Each workshop paper was represented by a poster and participants walked around the room with Post-its, leaving feedback as they deemed appropriate. Purva Yardi from the University of Michigan won Best Poster, chosen by the SIGCHI Executive Committee’s Vice President for Conferences Aaron Quigley. Purva’s paper was titled “Differences in STEM Gender Disparity between India and the United States.” Lunch followed the poster session, and then it was time for some levity. We played the silent birthday game, which required all 90 participants to arrange themselves in the order of their birthdays (month and date) without talking. There were more games organized across both days, including a round of musical chairs, which was nothing short of chaotic. We had short debrief sessions after every game, when we discussed the challenges that arise when we try to communicate across languages and other cultural norms; for example, the month-first date format that the U.S. follows is different from the one followed by most other countries.

Much of the weekend was devoted to working in teams on potential collaborations. These teams were formed based on topics of interest that emerged from the poster session. Clusters from poster topics were created by a few volunteers and teams were formed according to these clusters, also leaving room for participants to change teams as they pleased. Some of the topics that these teams worked on included education, health and gender, social computing, and displaced communities. The final deliverables for these teams included a short presentation on Sunday afternoon and a timeline for how the team planned to take their ideas forward over the following year, before we all came together (hopefully again) at the next HCIxB. Some examples included research studies that would span multiple geographies, while others were more community-focused, such as the development of a “co-design across borders” community. 

A major component of the weekend was the four conversations or dialogs that we tried to have as a group. The first—“Then and Now”—was brief, entailing an introduction of HCIxB and how the community had formed over the years, dating back to CHI 2007 when a few of the people in the room had organized the Development Consortium to bring together researchers working in the “developing” world. The second was a dialog on mentorship and what it meant for those in the room to seek or offer mentorship. This conversation has led to the launch of our “Paper xChange” program for the community, whereby we will match those seeking help on a CHI 2018 submission and those willing to offer the kind of help being sought (also on www.hcixb.org). On day 2, we had our third dialog to discuss the steps needed (along with potential challenges and opportunities) with regard to setting up SIGCHI chapters in cities across the world. Tuomo Kujala, the SIGCHI Vice President for Chapters, was generous enough to offer an overview. Our final dialog took place just before closing, as we wrapped up and brainstormed about how we could keep ourselves busy and growing through the year. We also had the honor of quite a few visits from the SIGCHI leadership team, who not only supported many participants so they could attend the symposium but also made us all feel welcome throughout the weekend.  

In addition to the above, there were several memorable moments and special conversations. I know I speak for many when I say that the weekend was unforgettable in so many ways. In a nutshell, it presented a window into what CHI could be if it included more voices—different voices—from across the world, and what HCI looks like when the H represents humans across countries, cultures, socioeconomic strata, genders, and ideologies, among other points of difference. Together and stronger, we move forward on our journey, as a community that is still small but growing fast, toward a truly global and representative field of human-computer interaction. Through this blog post, and other associated posts, as well as Nithya Sambasivan’s brand new forum on “The Next Billion,” we are committed to sharing more stories, research, and dreams from the HCIxB community. Stay tuned!




Neha Kumar

Neha Kumar is an assistant professor at Georgia Tech, working at the intersection of human-centered computing and global development. neha.kumar@gatech.edu


Turning people into workbooks


Authors: Tobie Kerridge
Posted: Wed, May 17, 2017 - 2:19:55

With its third biennial conference, RTD 2017 continued to mix intimacy and ambition with lively and informal discussion of research through design, enabled by a focus on the artifacts that come about through research projects. The National Museum of Scotland in Edinburgh acted as the hosting venue for the program, which complemented the organizers’ ambition to see design outcomes set against the material archive. Outside of the sessions I took part in, I joined other panelists for a fairly wide-ranging discussion on what the future of research through design meant for them. One question that proved particularly productive was about the articulation of research in commercial settings.

To keep our opening statements concise, panelists were asked for a single image. I broke the brief by taking a spread from a book, soon to be published by Mattering Press, depicting the pages of a workbook that was put together for ECDC (Energy and Co-Designing Communities), a three-year Research Councils UK-funded project. Our group at Goldsmiths’ Interaction Research Studio was one of seven research clusters looking at the effects of a government scheme in which the Department of Energy and Climate Change funded around 20 communities across the UK to undertake energy-demand-reduction measures. The research groups took a range of approaches to interpreting what these communities were doing. Ours was to design a technical platform, developing the Studio’s methodology, which has variously been described as ludic, speculative, or inventive.

How we used workbooks

In the Studio we use workbooks as a method for synthesizing material generated during fieldwork. We had spent a fair chunk of time with different energy-reduction groups across the UK, including tours of infrastructure such as wind turbines, photovoltaics, and ground-source heat pumps. We heard about the ambitions of these groups and about environmental predictions, the politics of delivering these projects, and the relationship between different constituencies in each setting. Workbooks allowed us to bring together documentation of these different encounters, and also other kinds of material from reviews of literature and practice. Through the synthesis of disparate material into a workbook, as researchers we share and reflect on perspectives. This in turn supports the materialization of topics and we begin to understand design options and possibilities. In this way, making and sharing workbooks is an internal process, supporting activity in the studio.


Workbooks capture insights and ideas that emerged from the initial engagement, often leading to evocative proposals.

The issue with workbooks

My aim was to use the image of the workbook pages to bring to RTD an issue that I haven’t yet resolved. I had taken the workbook to a meeting with one of our demand-reduction groups. It was unusual for us to show a workbook to respondents, though given the extended timescales of our research in relation to the schedule of this group, I thought it would be interesting to take along a document showing research activity. However, looking through the book together was a slightly odd situation, like taking an early draft of fiction to people who recognize themselves in the characters or dialogue. For example, a page titled “the champions,” depicting a rosette, was a reflection on competition and reward as a recurring feature of sustainability programs. Elsewhere, Futures Tourism reimagined wind-turbine farms, with their associated planning issues, as a future tourist destination, represented here by electricity-pylon spotters. The situation was odd in several ways, partly tied to anxiety about assuming authorship of others’ accounts, partly due to the seemingly insubstantial way in which the images depicted deeply held beliefs, though overall the oddness came from anticipating responses from these unplanned readers. While I would argue, with conviction, for the rigor and earnestness of our research approaches, the depictions in workbooks necessarily reduce the thickness of our data to support the accretion of detail into design elements.

So why did I hope a retelling of this awkward episode would be useful? While taking studio processes into the settings on which they were based was somewhat uncomfortable, it was also productive. First and foremost, it supported me as a researcher in thinking through the tensions of treating the commitments of others speculatively. This episode also helped me consider how our practices have rhythms in which the action of practice shifts between letting others provide accounts of their situations and transforming those accounts into artifacts that speak for others. And I wonder, is there scope within our analytical accounts of practice to attend to these moments when our articulations come under pressure?




Tobie Kerridge

Tobie Kerridge is a lecturer and researcher based in the Department of Design at Goldsmiths, University of London.


Robots, aesthetics, and heritage contexts


Authors: Maria Luce Lupetti
Posted: Fri, April 21, 2017 - 11:11:41

Most people today have not yet had the opportunity to interact directly with a robot in their everyday lives, except maybe with children’s toys or those charming robotic vacuum cleaners. While there are ongoing experiments with robots in healthcare, many more are employed in high-tech, efficiency-driven environments such as factories. But robots have also often played a large part in our cultural and historical imagination. This is celebrated in current exhibitions such as “Robot” at the Science Museum in London, exploring 500 years of humanoid robots, from pop culture to products, and “Hello, Robot. Design between Human and Machine” at the Vitra Design Museum in Germany. Apart from thematic exhibitions, and despite robots pointing toward future automation, robots are starting to be used in heritage contexts to provide services, from remote exploration of museum sites to robot-based guided tours. But what happens when robots move from being an attraction to being agents that share the same physical space with people?

Beyond the laboratory, everyday contexts require robots to be incredibly stable and reliable over time. This is particularly important for historical sites; their use should not, under any circumstances, damage the heritage, endanger the safety of staff or visitors, or interfere with preservation or visitor enjoyment and appreciation. The challenges emerging from the use of robots in heritage contexts, however, relate not only to functional aspects but also to perceptual ones.

The role of aesthetics

Among the many factors that can determine a robot's acceptability, such as purpose or safety, physical appearance can instantly affect how a robot is interpreted by all kinds of users who might interact with it. It is therefore of primary importance for the robot to be designed in a way that quickly and easily communicates its skills and function, since this determines whether people will understand its role and respond appropriately.

However, effective communication of a function doesn’t ensure success. In service environments, for instance, robots share physical contexts with people through activities and interactions, social relationships, and artifacts. Through all these aspects, a robot automatically establishes implicit relationships to which people refer for their interpretations and judgments. Cleaning robots, for instance, introduce novel technologies for cleaning activities that people are already doing with vacuum cleaners. Beyond being employed for a clear function, their appearance and aesthetics remind people of existing products. This approach attempts to avoid ambiguities and concerns. The employment of humanoid robots for activities that are usually entrusted to people, such as museum guides, customer care, or home assistance, however, will automatically invite comparison between robot and human abilities. This comparison can surface uncanny feelings and concerns. Furthermore, even when people are not replaced, robots are evaluated on their ability to fit into people’s sociocultural context. The design of a robot, then, should take into account the particularities of the specific context in which it has to be located.

Given these considerations, a good practice for the design of a robot is a formal synthesis resulting from a morphology with an explicit function and aesthetics familiar to the context. 

Virgil

In 2015, I was part of a team at Politecnico di Torino that designed Virgil, a telepresence robot, in response to these considerations on morphology and familiar aesthetics. The design, consisting of a robot that allows remote exploration of inaccessible areas, was employed at the Racconigi Castle in Italy. It resulted from a broad reflection on the dual nature of cultural heritage as a specific context, focusing not only on preservation but also on enjoyment and accessibility for visitors. The castle, one of the royal residences of the Savoy family, is a context rich in artworks, artifacts from daily life, and architecture that preserves a history of almost a thousand years. As a design team from Politecnico di Torino, guided by Prof. Claudio Germak, we designed this application within the context of a project promoted by Telecom Italia Mobile (TIM, the main mobile telecoms provider in Italy) in collaboration with the Terre dei Savoia association.

Our aim was to provide visitors remote access to inaccessible areas of the heritage site, such as the nurses' rooms, the tepidarium (thermal bath), and other areas currently excluded from the exhibit for safety, restoration, or cataloging reasons. To do so, we built a robot managed by a museum guide via a web application. Beyond the wish to extend and enhance the heritage visit, we devoted great attention to possible perceptual drawbacks. We focused on how to communicate the particular role and function of the robot and how to define aesthetics appropriate for that specific context. Regarding the first point, we chose to keep the main elements, such as the wheels, laser scanner, and camera, visible so that the function of the robot was explicit. Regarding the familiar aesthetics, we analyzed the heritage context from both the physical and cultural points of view. This led to a robot design that, both in form and decoration, recalls existing elements of the heritage site.

Figure 1. The robot Virgil inside an area of the Racconigi Castle. The illustrations
show the two main characteristics of the robot's appearance: the shape and decoration.

The cover, made of PMMA (poly-methyl-methacrylate), has the form of a truncated pyramid. This morphology was chosen to recall similar shapes widely diffused in the Savoy tradition, used in obelisks, bollards, and other architectural elements and furniture. The choice of a transparent material was determined by the need to ensure maximum lightness from both the structural and the visual points of view.

Furthermore, the body of the robot was decorated with the aim of customizing it to the context. The décor represents the Palagiana palm, an existing decoration found in many artifacts of the castle. The decoration, then, was a way both to achieve aesthetic coherence and to pay tribute to the architect and to the history of a place characterized by continuous evolution over centuries.

This design approach not only increased the acceptability of providing visitors remote access to inaccessible areas of the heritage site, but also revealed greater design opportunities for considering the aesthetics of robots beyond their functionality and acceptability.




Maria Luce Lupetti

Maria Luce Lupetti is a Ph.D. candidate in Design for Service Robotics at Politecnico di Torino, Italy. Her research focuses on human-robot interaction and play and is supported by Telecom Italia Mobile (TIM). She was Publicity Chair for the HRI Pioneers Workshop 2017 and a visiting scholar at X-Studio, Academy of Art and Design, Tsinghua University, China.


Design research and social media


Authors: Fatema Qaed
Posted: Thu, April 13, 2017 - 10:37:54

My fieldwork on learning spaces in Bahrain took place during the Arab Spring of 2011. This initially created significant cultural and political difficulties, not just for me but for many others attempting to collect data about any topic in many Arabic-speaking countries. At the same time, it created a great opportunity. People’s voices were raised on social media, as they were no longer able to present issues through other types of media such as newspapers and TV. As a designer, this was interesting, since it showed the power of users when they decide to create change. I started to search through hashtags related to my research (such as #classroom) and discovered a different way of looking at learning spaces. 

Teachers using social media constantly share their problems with the design of classrooms, such as issues of overcrowding. Others kindly reciprocate by sharing how they have overcome these issues through the redesign of space. These everyday solutions are not discussed in academic literature, yet responding to problems, creating solutions, and sharing these on social media feels more authentic and reliable. Teachers are taking action on their own terms, without someone monitoring their behavior, as can often happen in more traditional data-collection fieldwork. So social media offered insights into rich virtual communication between people (e.g., with the same hashtag interests) from many cultural backgrounds. I would argue that the diversity of these shared resources and the problem-solution dialogue between users can greatly enrich the way designers understand who it is they are designing with and for. 

During my research, I tried to understand the value of learning-space design for its users and why classrooms have stayed the same for a long time. I then wanted to use this to introduce different design concepts for classrooms of the future. The three overlapping phases—contextual review, users’ perceptions of learning spaces, and participatory design—were enriched with particular knowledge from multiple practices of redesign shared on social media resources. 

The contextual review phase started by understanding space from my own research perspective, by reviewing the literature about learning theories, learning-space design elements, and the importance of learning spaces for both teachers and students. Online social media resources at this phase played an important role in adding further examples of published theories, but also rich examples of practice. Because many learning theories were implicit in these examples, user perceptions of learning spaces were not always connected to academic literature. Online examples therefore offered a rich clarification of the theoretical knowledge about learning-space design; for example, Figure 1 shows Palatre and Leclère’s color use for nursery school learning spaces. The absence of practical examples in more theoretical accounts encouraged me to look online for further examples that went beyond design as a mediator between theory and practice.


Figure 1. École Maternelle Pajol.

In the second phase of my research, extending my understanding of users’ perceptions, I used blogs to accompany initial fieldwork in Bahrain. I looked at what teachers were doing, saying, and making, in particular the way teachers and students were adapting and redesigning space. This helped to provide insights and raised awareness of users’ voices in learning-space design. The specific research value of social media for both phases was often platform-specific.

Facebook: This huge, reachable community allowed me to communicate directly with a diverse group of classroom users across, for example, age range, gender, occupation, country, and culture. On Facebook I posted two questions publicly on my timeline: one asking for reflections and memories about people’s own classrooms, and one asking what the classroom of their dreams would look like.

Twitter: Hashtags in messages (the # prefix) enable search and invite responses on particular topics. This encouraged me to search for tags (e.g., #classroom, #learning_space) to see what users had shared. These searches linked through to wider social networks via hashtags on Pinterest and blogs.

Blogs: As reported in The Guardian newspaper in the U.K., “If you want the truth about school life, read the teachers’ blogs.” 

Blogs enabled me to share in teachers’ daily activities, as blog data supported close interaction between teachers and myself. The data collected therefore went beyond providing me with problem and solution examples and revealed valuable ways to communicate and interact with teachers.

Flickr: This is a rich image platform, and teachers constantly share images of their classroom displays. These images revealed teachers’ competence in everyday design and how they use different design elements to support their teaching methods. 

Pinterest: This platform can be understood as a visual montage of photos shared by multiple users to form a “catalogue of ideas.” Pinterest’s search tool enabled a quick visual scan of relevant topics, with attached links to related websites or blogs. Here, I found a huge, varied, and active teacher network sharing rich teaching and learning experiences, such as classroom activities, classroom makeover examples, teaching tools, and teaching strategies. Pinterest was an especially rich visual data source on how teachers adapt their learning spaces and how they respond to space and objects to support their teaching methods. Additionally, this rich data went some way toward explaining how relationships are built between users and design elements within classrooms. 

In the third phase, participatory design, I designed a tool called Classroom Design Recipe, inspired by a Convivial Tool approach. This was developed as a type of teaching assistant to build on both theoretical knowledge and findings on users’ understanding of spatial practice. The box contained different sets of cards (Figure 2) designed to empower teachers’ use of and redesign of learning spaces and facilitate different teaching and learning methods using spatial design elements (wall, flooring, and ceiling).  


Figure 2. Classroom Design Recipe cards.

Social media can enrich each phase of design research. If considered early, it can help researchers reflect on academic literature and see how something is put into practice from users’ varied perspectives. It can also help researchers contextually understand problems and how people publicly share them through social media to communicate connections and interests. In my own study, it helped build a better understanding of users, giving examples of diverse solutions that users shared regardless of cultural and geographical boundaries and suggesting potential needs and previously unarticulated ideas. One example of this was how teachers shared images of classroom space, inadvertently showing how they used windows as display boards. This inspired me to suggest different design opportunities, combining social media visuals with students’ ways of learning and teachers’ ways of teaching (Figures 3 and 4).


Figure 3. Interactive window design.

   


Figure 4. Lego window design.

Social media includes an ever-increasing variety of free platforms that can empower designers to understand users’ diverse needs, communicate with them closely, and potentially design better solutions. Much of the current literature highlights the value of “big data” constructed by users in social media. However, from a design perspective the real value, in my own experience, was in creatively capitalizing on these multiple varieties of data to gain different perspectives from interconnected communication resources.



Fatema Qaed

Fatema Qaed is an assistant professor at the University of Bahrain. Her research interests include design thinking, interior design, virtual reality exhibitions, and using the "convivial tools" concept to make tools that empower users.


Why HCI history matters!


Authors: Daniel Rosenberg
Posted: Thu, April 06, 2017 - 2:04:33

I want to use this blog entry to publicly thank Jonathan Grudin, both personally and on behalf of the larger HCI community, for authoring From Tool to Partner: The Evolution of Human-Computer Interaction. His recently published book fills a long-standing gap in our professional narrative by providing a written history of how the multidisciplinary nature of our profession coevolved with the technology mega-trends of the last 60 years.

For me personally, reading From Tool to Partner felt like fondly flipping through decades of family photo albums, while at the same time finding connections and patterns that I never realized existed or never understood the origin of before. 

My entry into the field of Human Factors began in 1981, and I remain an active participant as an adjunct professor in San Jose State's graduate program in HF/E following a 35-year HCI-focused industry career. I first met Jonathan at MCC, the American fifth-generation project mentioned in chapter 7. My employer at the time, Eastman Kodak, was one of the sponsoring organizations, and I was their representative to the HCI program track where Jonathan was on staff. I have belonged to many of the professional organizations discussed in this book and attended many of the early conferences mentioned, the most seminal of which for me was the first Interact conference (1984) in London. In addition, I am a chapter co-author of the first Handbook of Human-Computer Interaction (Helander, Ed., 1988) mentioned in chapter 8.

While I could lend my own perspective on the evolution of various HCI organizations, their politics and conferences, academic refocusing, and the technical milestones that Jonathan meticulously chronicles, for this blog that is neither needed nor appropriate. Suffice it to say, based on my own journey, this book appears to be quite accurate, and it was clearly painstakingly researched.

Jonathan covers the progression of HCI from expert use on early mainframes to the discretionary use of personal apps, powered by the cumulative effect of Moore's law. To date, this has resulted in the mobile experience dominating the market; in the foreseeable future it will likely be supplanted by ubiquitous embedded computing. He pays tribute to our founders J.C.R. Licklider, Douglas Engelbart, Ivan Sutherland, and others, documenting their amazing vision while articulating both where, decades later, we have finally achieved that vision and where we still have a significant distance to traverse. Like the author, I offer no target date for the singularity. That is still the realm of science fiction, in my opinion.

One of the most illuminating aspects of the chronological narrative presented is the timing when the various disciplines of Human Factors, Computer Science, Library Science, and Design entered the mainstream of HCI practice and how they affected our shared trajectory. 

A particularly salient recurring theme in the book is the decades-long tension between the fields of AI and HCI. As noted, funding ebbed and flowed for AI research through numerous periods of “summer” followed by the “winter” of unfulfilled, overhyped promises resulting in huge budget cuts. When AI was down in the dumps, money and talent flowed into HCI research and vice versa. We are now in the middle of the 3rd AI summer, with autonomous vehicles playing the poster-child role in the media. 

For those newer to HCI professionally, From Tool to Partner will obviously not evoke the same nostalgic reaction it had on me. But most importantly, the words of the philosopher George Santayana apply: "Those who cannot remember the past are condemned to repeat it."

History matters!

For skeptics who might think the Santayana quote is not applicable to HCI (and just another nostalgic detour on my part) let me provide a recent concrete example. 

Last year I undertook a consulting engagement to help a Silicon Valley startup productize an innovative chatbot technology. Chat is "the new paradigm," they told me. The history of HCI tells us this is not so! There is a significant HCI literature on command-language interaction that was directly applicable to the usability problems caused by the lack of a consistent grammar, both in the design tool for scripting conversations and in the executable result, when users encounter a bot instead of a human to help them find their lost luggage or replace a missing ticket. It turned out to be a short consulting engagement, because the boost provided by knowledge of HCI history was a major benefit to this startup. If they had taken a trial-and-error approach to refinement (another topic whose flaws Jonathan points out) instead of leveraging prior HCI knowledge, they would have run out of money long before bringing a viable product to market.

One of my goals at SJSU has been to introduce a course on HCI history into our program. I believe this will support our graduates in becoming more efficient and effective user researchers and designers. And it will provide a calming counterbalance to the turbulence they will surely encounter over their careers as the ever-accelerating pace of technological change continues.

And now, thanks to Jonathan, I have a textbook I can submit to the university curriculum committee that proves the field of HCI history exists!

Thanks again, 

Dan…



Daniel Rosenberg

Daniel Rosenberg is Chief Design Officer at rCDOUX LLC.


A history of human-computer interaction


Authors: Jonathan Grudin
Posted: Fri, March 24, 2017 - 9:46:31

A journey ended with the publication in January of my book, From Tool to Partner: The Evolution of Human-Computer Interaction.

The beginning

Ron Baecker's 1987 Readings in Human-Computer Interaction quoted from prescient essays of the 1940s and 1960s by Vannevar Bush, J. C. R. Licklider, Douglas Engelbart, Ivan Sutherland, Ted Nelson, and others. I wondered, "How did I work for years in HCI without hearing about them?"

When Ron invited me to work on a revised edition, more questions piled up. Our discussion of the field’s origin oscillated between human factors and computer science as though they were different worlds, yet the Human Factors Society co-sponsored and participated in the first two CHI conferences. Then they left. Why? Similarly, management information systems researchers presented interesting work at early CSCW conferences, then vanished. Another mystery: NSF and DARPA program managers who funded HCI research never attended our conferences. Readings in HCI devoted a large section to Stu Card and Tom Moran’s keystroke-level and GOMS models, the most praised work in CHI. Why did almost no one, including Card and Moran, build on those models? While working in AI groups at MIT, Wang Laboratories, and MCC, I wondered why two fields about technology and thought processes seemed more often at loggerheads than partners. 

There were other mysteries. For example, in 1984, my fellow software engineers loved the new Macintosh, even though we worked for an Apple competitor. We agonized as the Mac flopped for a year and a half. Apple slid toward bankruptcy. Then Steve Jobs was fired and the Mac succeeded. What was that about?

I located people who were involved in these matters. Eventually, I found convincing answers to every question I had begun with and to other questions that arose along the way (convincing to me, anyway). I wrote a short encyclopedia entry on HCI history, an article for IEEE Annals of the History of Computing, handbook chapters, and other essays. I edited and sometimes authored history columns for Interactions magazine. Finally, the book.

Could you benefit from reading the book?

The book is primarily about groups of researchers and developers who contributed to HCI, coming from computer science, human factors, management information systems, library and information science, design, communications, and other fields. By stepping back to get a larger picture, you might see how your work fits in, where you might find relevant insights, and where you are less likely to.

A better name for our field might be “human-computers interaction.” Machines changed radically every decade. We all know Moore’s law, but when I read a paper that describes the object of study as “a computer,” I unconsciously picture my current computer. This makes it easy to fail to see when an advance was primarily due to new hardware; for example, the Mac succeeded when “the Fat Mac” arrived with four times as much memory, soon followed by the faster Mac II. Human-computer interaction in 1970 and 2000 were both human-computer interaction, but no one would confuse human-mainframe interaction and human-smartphone interaction.

Patterns emerge. Some topics wax and wane, and then wax again. Artificial intelligence had summers and winters that affected HCI. Interest in virtual reality surged and receded: AlphaWorld in the mid-90s, Second Life in the mid-2000s, and now Oculus Rift, HTC Vive, and HoloLens. Another recurring focus of research and development was desktop videoconferencing. Was it inspired more by a pressing need or by telecoms interested in selling bandwidth?

However, there was steady progress. It took longer than many expected, but we collectively built the world imagined by Vannevar Bush, J. C. R. Licklider, Douglas Engelbart, Ivan Sutherland, Ted Nelson, Alan Kay, and others. In the 1960s, a few engineers and computer scientists used computers. Yet a common thread in their writing was of a time when people in diverse occupations would use computers routinely. We’re there. Ivan Sutherland wrote a program that changed a display to create a brief illusion of movement and speculated that someday, Hollywood might take notice. Thirty years later, Toy Story was playing in theatres. Some might quibble over whether every detail has been realized, but we are effectively there.

The concept “from tool to partner” is owed to Licklider, who in 1963 forecast that a period devoted to human-computer interaction would be followed by a period of human-computer partnership, which in turn would end when machines no longer needed us. For many years, we interacted with computers by feeding in a program, typing a command, or pressing a key, after which the computer produced an output or response and then waited for the next program, command, or key. Today, while we sleep, software acts on our behalf. It accepts incoming email, checks it for spam, and filters it in response to prior instructions. Software considers our history and location in selecting search query responses and advertisements. My favorite application of recent years eliminates the need to type a password: When I start up my tablet, it turns on the camera and sees that it is me! A dull sentry (“I know your caps lock key is on and you typed the caps lock version of your password, but turn off caps lock and try again”) is transformed into a cheerful colleague (“Jonathan—Hello! Welcome.”) as I quickly go on my way.

Of course, issues arise. The book identifies challenges that will long be wrestled with. Like Samantha in the film Her, my software is interacting not only with me—it is also interacting with the owners of the sites I visit and the advertisers who buy space. Software can mislead us by not being consistent in ways that people usually are. It can be an expert at chess, abysmal at checkers, and unable to play dominos at all. A sensor can be highly expert at differentiating gas odors while neglecting to suggest that I avoid lighting a match to look. And we steadily move into a goldfish bowl where all activity is visible, with positive and negative repercussions.

Keep it impersonal?

How do you approach writing about a time much of which you lived through? The “new journalism” of the 1960s showed that personal experiences can reveal hidden complexities and the emotional impact of events. But I wanted to write a history, not a memoir. My ideal is Thucydides, who, more than halfway through History of the Peloponnesian War, stunned me long ago: “Meanwhile the party opposed to the traitors proved numerous enough to prevent the gates being immediately thrown open, and in concert with Eucles, the general, who had come from Athens to defend the place, sent to the other commander in Thrace, Thucydides, son of Olorus, the author of this history, who was at the isle of Thasos, a Parian colony, half a day's sail from Amphipolis, to tell him to come to their relief. On receipt of this message he at once set sail with seven ships which he had with him, in order, if possible, to reach Amphipolis in time to prevent its capitulation, or in any case to save Eion.” Thucydides then briefly describes reaching Eion in time, but his opponents out-maneuvered his allies and convinced Amphipolis to join the other side. His small part played, he disappears from his history without another word about his experiences.

I compromised by including some personal experiences in an appendix. For some, they could convey the significance of events that otherwise can seem remote. That said, a history is a perspective; it is never wholly objective. An author decides what to omit and what to include. For example, I do not cover the evolution of key concepts, theories, and applications to which so many people contributed. When I cite specific work, it is usually where the boundaries of the fields working on HCI were defined or transcended. Prior to the appendix, my technical contributions get less attention than Thucydides’ campaign.

A second short appendix lists resources that you could consult in writing another history of HCI, perhaps a conceptual history. Please! While writing, I shifted from hoping to wrap everything up to hoping to live to see other HCI histories. We have been privileged to participate in an exceptional period of human history. Technology evolves so rapidly that those not yet active will struggle to understand what we were doing, yet what we are doing will shape their lives in subtle ways. We owe it to them to leave accounts.



Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.


Toward affective social interaction in VR


Authors: Giulio Jacucci
Posted: Mon, March 20, 2017 - 2:47:26

I first encountered VR in the late 90s, as a researcher looking at how it provided engineers and designers with an environment for prototyping. After that I became more interested in how to augment reality and our surrounding environment; still, although VR had by then been around for decades, many aspects of it, particularly from an interaction point of view, seemed to me to deserve further investigation. VR has gained renewed interest thanks to the recent proliferation of consumer products, content, and applications. This is accompanied by unprecedented interest and knowledge among consumers and by the growing maturity of VR, which is considered less and less to be hype and more and more to be market ready. However, important challenges remain, associated with dizziness, the limits of current wearable displays, and interaction techniques. Despite these limitations, application fields are flourishing in training, therapy, and well-being, beyond the more traditional VR fields of games and military applications.

One of the most ambitious research goals for interactive systems is to recognize and influence emotions. Affect plays an important role in everything we do and is an essential aspect of social interaction. Studying affective social interaction in VR can therefore be important to the above-mentioned fields in supporting mediated communication. For example, in mental or psychological disorders, VR can be used for interventions and training, monitoring patient engagement and emotional responses to stimuli and providing feedback and correction on particular behaviors. Moreover, VR is increasingly accepted as a research platform in psychology, social science, and neuroscience, as an environment that helps introduce contextual and dynamic factors in a controllable way. In such disciplines, affect recognition and synthesis are important aspects of numerous investigated phenomena.

Multimodal synthetic affect in VR

In social interaction, emotions can be expressed in a variety of ways, including gestures, posture, facial expressions, speech and its acoustic features, and touch. Our sense of touch plays a large role in establishing how we feel and act toward another person, signaling, for example, hostility, nurturance, dependence, and affiliation.

Having worked on physiological and affective computing and on haptics separately, I saw a unique opportunity to combine these techniques and develop synthetic affect in VR across different modalities. For example, the emotional interpretation of touch can vary depending on cues from other modalities that provide a social context. Facial expressions have been found to modulate touch perception and the post-touch orienting response. Such multimodal affective stimuli are also perceived differently according to individual differences in gender and behavioral inhibition. For example, behavioral inhibition sensitivity in males was associated with stronger affective touch perception.

Taking facial expressions and touch as modalities for affective interaction, we can uncover different issues in their production. Currently, emotional expression on avatars can be produced using off-the-shelf software that analyzes the facial movements of an actor modeling basic expressions, head orientation, and eye gaze; the resulting descriptions are then used to animate virtual characters. Emotional expressions in avatars are often the result of a multistep process that ensures they relate to the intended emotions. The expressions are first recorded by capturing a live performance from a professional actor, using facial-tracking software that also animates a virtual character. They can then be manually adjusted to last exactly the same amount of time and to end with a neutral expression. Different animations are created for each distinct emotion type. The expressions can then be validated by measuring the recognition accuracy of participants who watch and classify the animations. This process works well for customizing facial expressions to their intended use in replicable experiments. But it is resource intensive and does not scale well to uses where facial expressions need to be generated in greater variation (to express nuance) or with generalizability, since every expression is unique. While mediated touch has been shown to affect emotion and behavior, research into the deployability, resolution, and fidelity of haptics is ongoing. In our own recent studies, we compared several techniques for simulating a character's virtual hand touching the hand of a participant.

Emotion tracking is more challenging with a wearable VR headset, as facial expressions cannot be easily tracked by current computer vision software. Physiological sensors can be used to recognize changes in psychophysiological states or to assess emotional responses to particular stimuli. Physiological sensors are also being integrated into more and more off-the-shelf consumer products, as in the case of EDA (electrodermal activity) or EEG. While EDA provides a way to track arousal, among other states, and is easy to use unobtrusively, it is suited to changes on the order of minutes, not to time-sensitive events on the order of seconds or milliseconds. EEG, on the other hand, increasingly available in commercial devices, is better suited to temporally precise measurement. For example, the study of how emotional expressions modulate the processing of touch can be done via the event-related potential (ERP) in EEG elicited by touch. Studies show that the use of EEG is compatible with commonly available HMDs.
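To make the ERP idea concrete, here is a minimal sketch of how a touch-locked ERP is typically computed: short EEG epochs are cut around each touch onset, baseline-corrected against the pre-touch interval, and averaged across trials. The array shapes, sampling rate, and function name are illustrative assumptions rather than any particular toolkit's API; in practice a dedicated library such as MNE-Python would handle epoching, filtering, and artifact rejection.

    import numpy as np

    def touch_locked_erp(eeg, touch_onsets, fs, pre=0.2, post=0.8):
        """Average EEG epochs time-locked to touch onsets (a basic ERP).
        eeg:          array (n_channels, n_samples), continuous recording
        touch_onsets: sample indices where touch stimulation began
        fs:           sampling rate in Hz
        pre, post:    seconds of data to keep before/after each onset
        """
        n_pre, n_post = int(pre * fs), int(post * fs)
        epochs = []
        for onset in touch_onsets:
            if onset - n_pre < 0 or onset + n_post > eeg.shape[1]:
                continue  # skip events too close to the recording edges
            epoch = eeg[:, onset - n_pre:onset + n_post]
            # Baseline-correct: subtract each channel's mean over the pre-touch interval
            baseline = epoch[:, :n_pre].mean(axis=1, keepdims=True)
            epochs.append(epoch - baseline)
        return np.mean(epochs, axis=0)  # shape (n_channels, n_pre + n_post)

Comparing the averaged waveforms for touches delivered alongside different facial expressions is what reveals whether the expression modulates the response to touch.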

Eye tracking, which has recently become commercially available inside HMDs, can be used both to identify whether users attend to a particular stimulus, so that the emotional response to it can be tracked, and to monitor psychophysiological phenomena such as cognitive load and arousal. As an example, the setup in Figure 1 includes VR, haptics, and physiological sensors. It can be used to simulate a social interaction at a table where mediated multimodal affect can be studied while an avatar touches the user's hand and at the same time delivers a facial expression. The user recognizes the virtual hand in Figure 1A as her own hand, as it is synchronized in real time.
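As a rough illustration of the first use mentioned above, checking whether a user attended to a stimulus can be reduced to asking whether the gaze direction stayed within a small angle of the stimulus for a minimum dwell time. The sketch below is a simplified assumption of how such a check might look; real HMD eye trackers expose gaze through their own SDKs and usually apply fixation filtering first.

    import numpy as np

    def attended_stimulus(gaze_dirs, target_dir, fs, max_angle_deg=3.0, min_dwell_s=0.3):
        """Return True if gaze stayed within max_angle_deg of the target
        direction for at least min_dwell_s of consecutive samples.
        gaze_dirs:  (n_samples, 3) unit gaze direction vectors
        target_dir: (3,) unit vector from the eye toward the stimulus
        fs:         eye-tracker sampling rate in Hz
        """
        cos_thresh = np.cos(np.radians(max_angle_deg))
        on_target = gaze_dirs @ target_dir > cos_thresh  # one boolean per sample
        needed = int(min_dwell_s * fs)
        run = 0
        for hit in on_target:
            run = run + 1 if hit else 0
            if run >= needed:
                return True
        return False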

Figure 1. Bringing it all together: hand tracking of the user through a glass. Wearable haptics, an EEG cap, and an HMD for VR allow the simulation of a situation in which a person sitting in front of the user touches her hand while showing different facial expressions. 

This setup can serve a number of training, entertainment, and therapy applications. For example, a recent product applies VR to treating anxiety patients, and recent studies have evaluated its impact in training patients on the autism spectrum to deal with anxiety. In our own recent study we used the same setup for an air hockey game: the haptics simulated hitting the puck, and the avatar's emotional expressions allowed us to study effects on players' performance and experience of the game.

Future steps: From research challenges to applications

VR devices, applications, and content are emerging fast. An important feature in the future will be the affective capability of the environment, including the recognition and synthesis of emotions.

A variety of research challenges exist for affective interaction:

  • Techniques for recognizing users' emotions from easily deployable sensors, including the fusion of signals. Physiological computing is advancing fast in research and in commercial products; the recent success of vision-based solutions that track facial expression might soon be replicated by physiology-based sensors.

  • Synthesis of affect utilizing multiple modalities, as exemplified here: combining touch and facial expression, for example, but also considering speech and its acoustic features and other nonverbal cues. An open question is how to ensure that such multimodal expressions are generally valid while still being generated with unique variation.

Figure 2. RelaWorld using VR and physiological sensors (EEG) for a neuroadaptive meditation environment.

While these challenges are hard, the potential application fields are numerous and replete with emerging evidence of their relevance:

  • Training for, for example, emergency or disaster situations, but in principle for any setting where a learner needs to simulate a task while attending to numerous features of the environment and to social interaction.

  • In certain training situations, affective capabilities are essential to carrying out the task, such as in therapy, which can be more physical, as in limb injuries, or more mental, as in autism disorder and social phobias, or both, as in cases such as stroke rehabilitation. In several of these situations—for example, mental disorders such as autism, anxiety, and social phobias—the patient practices social interaction while monitoring how they recognize or respond to emotional situations.

  • Well-being applications such as physical exercise and meditation (Figure 2). Affective interaction here can motivate physical exercise or monitor psychophysiological states such as engagement and relaxation.


Giulio Jacucci

Giulio Jacucci is a professor in computer science at the University of Helsinki, and founder of MultiTaction Ltd (www.multitaction.com). His research interests include multimodal interaction, physiological, tangible, and ubiquitous computing, search and information discovery, as well as behavioral change.


What kinds of users are there? Identity and identity descriptions


Authors: Aaron Marcus
Posted: Mon, March 06, 2017 - 10:37:54

User-centered design of user-friendly products and services is based on the assumption that designers are aware of who the users are. Their understanding is embodied in typical user identities, or personas. There is much discussion about whether these are typical, stereotypical, or archetypical, and about how one can account for, or at least not lose track of, important outliers that might have a valuable influence under unforeseen conditions of use.

Many years ago we published a diagram of user-experience spaces, with Identity at its center.

Note that Identity is central to the other five spaces in which new products/services can emerge, in the sense that each of the others requires an awareness of the user/customer. There are many interrelationships among all of these spaces. The other five are just as important as this central one, and some would argue that they determine or influence what constitutes identity; the diagram merely indicates that there can be products/services that primarily focus on the gathering/availability/sharing/use of identity data.

Many have analyzed identity and claimed positions, including more contemporary thinkers like Foucault, Hall, Butler, and Baumann, along with older sources such as Erikson. One survey of thinking along these lines is an essay by David Buckingham, “Introducing Identity” [1].

The challenge for product/service developers and designers is determining the identity of the targeted or future users. One can also ask whether users have a single identity or multiple identities, and whether their identity or identities determine, or are affected by, their preferred social milieu. There are important questions to ask: Who determines that identity? Marketers? The users themselves? Design teams? Governmental, educational, or professional organizations? Other influential groups? Today in commerce and technology, as well as in politics and governance, there is much consternation and hand-wringing about identity politics, about identifying the identity of groups of people, about deciding which are "targeted" (literally and metaphorically), and about what actions to take once the identity or identities of certain people are determined.

Once one starts down the road of identifying identities, one enters a labyrinth of complex social, cultural, historical, and ethical issues. In addition, for any one person or group, it is surprising to discover how complex and manifold the identity characteristics are. Again, who decides which are appropriate, legal, ethical, useful, etc.? Commentaries on contemporary selfhood, otherness, and other identity issues and definitions might question the notion of a single identity and instead describe contemporary individuals in "advanced" societies as having multiple, composite identities and exhibiting multiple cultural competencies. I mean to avoid a simple position about an individual or group in favor of a more complex position that can account for contemporary social phenomena.

I started to think about this a year ago when I learned of a group trying to contact and incorporate various groups of people into the mainstream of Judaism. I shall use this topic of Judaism as an example for several reasons: because I am simply talking about myself and am not trying to describe, judge, or evaluate other kinds of people, and because Judaism is a particularly complex amalgam of religion, people, nation, culture, and other demographic categories.

I ask myself: What kind of Jews have I encountered? What kind of a Jew am I? Established by philosophy or values? Or, by actions? Who decides what kind of Jews there are and who belongs to each of these groups? A book I’ve encountered has prompted me to consider these questions more deeply than ever before.

Every Tongue: The Racial and Ethnic Diversity of the Jewish People, by Diane Tobin, Gary A. Tobin, and Scott Rubin [2], analyzes diversity in ethnic, racial, religious, and other dimensions of the Jewish people, especially in the USA. The book is very stimulating and thoughtful, fact-filled and polemical, challenging and endearing. The data presented in the first chapter, for example on pp. 20ff, and especially the terminology used throughout the book, prompted me to think about how one would categorize or describe one’s “Jewish identity.” I started to make a list on the pages of the book, but then ran out of space. I realized I had never thought to try to look at and think about What kind of Jews are there? (A more recent book has appeared by Aaron Tapper: Judaisms: A Twenty-First-Century Introduction to Jews and Jewish Identities [3].)

Consider that task for any other kind of identity, your own, or that of someone else in another group. Especially a “target group.”

It is fascinating, as I follow the weekly Torah portion, or parshah, that I have been reading during the past few weeks in my synagogue's (Congregation Beth Israel, Berkeley) study session, or Beit Midrash, headed by Maharat Victoria Sutton, that we are witnessing and discussing a small group of people (originally 70 coming to Egypt) who are now leaving Egypt after centuries there, part of the time in slavery, being instructed, forged, created, commanded, and enlightened into a Religion/People/Nation of Israel, with all the complexities of identity, social structures, religious structures, and more, while they are still wandering in the desert. How timely. How timeless. What makes for one's identity? What kind of a "card-carrying Jew" am I? What attributes are listed on that card? "What's in my wallet?" to paraphrase a popular commercial phrase on television these days.

For those of you who identify yourselves as Jewish, what kind of Jew are you? For those of you who think about or have befriended Jews, what kind of Jews are they? 

I wonder if, out there on the Internet, there is a Glossary of All Glossaries, or Taxonomy of all Taxonomies, of Jewish Identity. Probably. I have encountered a few partial solutions.

It would be a fascinating conceptual-verbal-visual-communication challenge to define all of these terms and ask people to build Venn/Euler diagrams of the terms and also to visualize their own identity cluster as a Venn/Euler diagram.

Consider this activity for any identity, for any group, for any target market... Identity is far more complex than one usually considers. Then begin to ask yourself how you would design with that knowledge, and design ethically. How can a mind-map of identity/identities be a tool for better design, if only as a reality check, warning, or immunization against inadvertent limitation or bias?

If you have suggestions for additional categories, please send them to aaron.marcus@bamanda.com. Thanks in advance.

I am indebted to Gilbert Cockton for the Buckingham reference and for his astute critique of an earlier version of this text, which led to many improvements.

Endnotes

1. Buckingham, David. "Introducing Identity." In Youth, Identity, and Digital Media, edited by David Buckingham, 1–24. The John D. and Catherine T. MacArthur Foundation Series on Digital Media and Learning. Cambridge, MA: The MIT Press, 2008. doi: 10.1162/dmal.9780262524834.001 (http://www.issuelab.org/resources/850/850.pdf)

2. Tobin, Diane, Gary A. Tobin, and Scott Rubin. Every Tongue: The Racial and Ethnic Diversity of the Jewish People. San Francisco: Institute for Jewish Community Research, 2005.

3. Tapper, Aaron J. Hahn. Judaisms: A Twenty-First-Century Introduction to Jews and Jewish Identities. Oakland: University of California Press, 2017.



Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.


From stereoscopes to holograms: The latest interaction design platform


Authors: Joe Sokohl
Posted: Tue, January 10, 2017 - 10:38:04

Shortly before photographs became widely available in the middle of the 19th century, stereoscopes appeared. We've been flirting with portable, on-demand three-dimensionality in experience ever since.

From Sir Charles Wheatstone's 1838 invention of a dual display to approximate binocular depth perception, through the View-Master fad of the 1960s, people have been captivated by the promise of a portable world. Viewing two-dimensional images never seemed enough; viewing a three-dimensional image of the Confederate dead at Gettysburg or of a college campus helped bring people into connection with something more real.


This image of the original John Harvard statue shows what a typical stereoscope card looked like.

Lately I’ve been experimenting with Microsoft’s HoloLens, an augmented reality (AR) headset that connects to the Internet and hosts standalone content. 

Similar to other wearables such as Google’s now-defunct Glass, HoloLens requires the user to place a device on their head that uses lenses to display content. As an AR device, it also allows the user to maintain environmental awareness—something that virtual reality (VR) devices don’t. 

Many other folks have written reviews of these devices; that’s not my purpose here. Instead, I’ve been thinking about the interaction promises and pitfalls that accompany AR devices. I’ll use the HoloLens as my platform of discussion.

The key issues for interaction in AR center on discoverability, context, and human factors. You have to know what to do before you can do it, which means having a conceptual model that matches your mental model. You need an environment that takes your context into consideration. And you need human factors such as visual acuity, motor function, speech, and motion to be taken into account.

Discoverability

We talk of affordances, those psychological cues that signal the possibility of action. Within an augmented world, however, we need to indicate more than just a mimicked physical world.

Once the device sits on a user's head, the few hardware buttons on the band enable on/off and audio up/down. Because the device is quick to turn on or wake from sleep, it provides quick feedback to the user. This critical moment between invocation of an action and the action itself becomes hypercritical in the augmented world. Enablement needs to appear rapidly.

For HoloLens users, the tutorial appears the first time a user turns the unit on. It's easy to invoke it afterwards as well. Yet there's still a need for an in-view image that enables discoverability.

Context

Knowing where you are within the context of the world is critical to meaning. As we understand from J.J. Gibson, cognition is more than brain stimulation: It is also a result of physical, meatspace engagement.

So a designer of an AR system must understand the context in which the user engages with that system. Mapping the environment helps from a digital standpoint, but awareness of objects in the space could be clearer. Users need to know the proximity of objects or dangers, so they don't fall down stairs or knock glasses of water onto the floor.

Human Factors

Within AR systems, the ability to engage physically, aurally, and vocally requires a keen understanding of human factors. Designers need to consider how much physical effort the user must expend to engage with the system.

In the HoloLens, users manipulate objects with a pinch, a grab, and a "bloom"—an upraised closed palm that then becomes an open hand. A short session with HoloLens is comfortable; too much interaction in the air can get tiring.

Voice interfaces litter our landscape, from Siri to Alexa to Cortana. HoloLens users can invoke actions through voice by initiating a command with  "Hey, Cortana." They can also engage with an interface object by saying "Select." 

Yet too often these designed interfaces smack of marketing- and development-led initiatives that don't take context, discoverability, or human factors into account. The recent movie Passengers mocks the voice interfaces a little, in scenes where the disembodied intelligence doesn't understand the query. 

We've all had similar issues with interactive voice response (IVR) systems. I'm not going to delve into IVR issues, but AR systems that rely on voice response need to take human factors of voice (or lack thereof) into account. A dedicated device such as the HoloLens seems to be more forgiving of background noise than other devices, but UX designers need to understand how the strain of accurate voicing can impact the user's experience.

Back to Basics

User experience designers working on AR need to understand these basic tenets:

  • Factor discoverability of actions into a new interaction space.
  • Understand the context of use, including safety and the potential for annoying those near the user.
  • Account for basic human factors such as motor function, eye strain, and vocal fatigue.



Joe Sokohl

For 20 years Joe Sokohl has concentrated on crafting excellent user experiences using content strategy, information architecture, interaction design, and user research. He helps companies effectively integrate user experience into product development. Currently he is the principal for Regular Joe Consulting, LLC. He's been a soldier, cook, radio DJ, blues road manager, and reporter once upon a time. He tweets at @mojoguzzi and blogs at sokohl.com.


Observations on finishing a book


Authors: Jonathan Grudin
Posted: Tue, January 03, 2017 - 10:39:03

I’ve only posted twice to the Interactions blog in 8 months, but I’ve been writing, and frequently thought “this would be a good blog essay.” Minutes ago, I emailed in the last proof edit for a book. This post covers things I learned about writing and the English language after a brief, relevant description of the book.

From Tool to Partner: The Evolution of HCI is being published by Morgan Claypool. Twenty-five years ago I agreed to update the “intellectual history” that Ron Baecker wrote for the first edition of Readings in HCI. He had cited work in different fields; the connections and failures to connect among those fields mystified me. I didn’t have enough time to resolve the mysteries, and other questions surfaced later. I tracked down people to interview and eventually answered, to my satisfaction, each of my questions. When I was invited in 2002 to write an HCI encyclopedia article on social software, I asked to write about HCI history instead. An article in IEEE Annals of the History of Computing and several handbook chapters followed. In early 2016, I decided to update and extend that work, looking across HCI as it is practiced in human factors, management information systems, library and information science, and computer science, with a tip of the hat to design and communication. What do these fields have in common? What has often kept them apart? How did each evolve? A good way to learn about your own field is to understand how it resembles and differs from others.

Surprising things I learned about the English language while working on this book and the handbook chapters are general insights, but they came into focus because writing about history is different from other writing. Here too, contrast brought clarity.

An author may interact with an editor, copy editor, proofreader, compositor, and external reviewers. The focus on language is greatest with copy editors, who clean up punctuation, word choice, grammar, sentence structure, citations, and references. Some handbook publishers outsource copy editing to fluent but not necessarily native English speakers who contract to apply The Chicago Manual of Style. This book’s editors were native speakers who did an excellent job, yet issues arose. Because the editors were good, I realized that inherent ambiguities in English exist and may be irresolvable, although some could be addressed by applying more sophisticated rules.

Years ago, I broached this with someone at The Chicago Manual of Style, who objected strongly to publishers mandating adherence to their guidelines. The guidelines do not always apply, I was told. “The Chicago Manual of Style itself does not always follow The Chicago Manual of Style!” Nevertheless, copy editors need a reliable process, and if adhering to the extensive CMS is insufficient, what can be done? Authors: Work with a copy editor sympathetically but be ready to push back in a friendly fashion. Copy editors: Let authors know you are applying guidelines and consider additional processes. 

Context matters

We know that goals differ among readers of blogs, professional and mass media magazines and websites, conference proceedings, journal articles, and books, though some lines are blurring. The following examples show that even within one venue, a book, goals can differ in significant ways.

Writing about history for scientists is different from scientific writing to inform colleagues. The treatment of citations and dates provides two strong examples. Science aims for factual objectivity, whereas a history writer selects what to include, emphasize, and omit. In scientific writing, the number of supporting references and their authors' identities can be highly significant. Engaged readers may track down specific references; they may frequently pause and think while reading. The reader of a history (unless it is another historian) is looking for the flow of events. Lists of citations interrupt the flow, whether they are (name, year) or just [N] style. Rarely do readers of a history consult the references. To smooth the flow, histories often move secondary material as well as citations to footnotes or endnotes, whereas scientific writing usually keeps material in the text. Popular histories, including some by outstanding historians, now go so far as to omit citations, footnotes, and endnotes from the text, collecting them in appendices with sections such as "Notes for pages 17-25."

Dates take on greater significance in a history. When something happened can be more important than what happened. Often, I wrote something like "In 1992, such and such was written by Lee," and a copy editor changed it to "Such and such was written by Lee (1992)." The details of such and such were not important, only that Lee worked on it way back in 1992. The emphasis shifts; the key point is buried. Similarly, in discussing a period of time, a work written in that period is very different from a work written later about the period. One is evidence of what was happening; the other points to a description of what was happening. The date is crucial for the first, not for the second, which could be relegated to an endnote.

History contrasted with science is a special case, but subtle distinctions of the same nature could affect established vs. emerging topics, or slow-moving versus rapidly-evolving fields. In a dynamic research area, the year or even the month a study was conducted can be important, a fact that we often overlook by not reporting when data were collected.

The strange case of acronyms

I will work through the most puzzling example. We all deal with acronyms and blends (technically called portmanteaus, such as FORTRAN from FORmula TRANslation). HCI history is rife with universities, government agencies, professional organizations, and applications that have associated acronyms. A rule of copy editing is that the first time such a compound noun is encountered, the expansion appears followed by the acronym, such as National Science Foundation (NSF) or Organizational Information Systems (OIS). After that, only the acronym appears. Copy editors apply this as a global replace. Unfortunately, in many circumstances it is not ideal or even acceptable. Some cases are context-dependent, so a copy editor can't know what is best. In some situations, almost everyone in the author's field would agree; in others, perhaps not.
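As a toy illustration of how mechanical the rule is, the sketch below spells out an acronym on its first occurrence and leaves every later occurrence alone. The acronym table, function name, and example sentence are my own illustrative assumptions, not anything a copy editor or publisher actually runs; the point is that the rule has no notion of reader familiarity, page gaps, or awkward constructions, which is exactly where it breaks down.

    import re

    # Illustrative acronym table; a real style sheet would be far larger.
    ACRONYMS = {
        "NSF": "National Science Foundation",
        "OIS": "Organizational Information Systems",
    }

    def expand_on_first_use(text):
        """Apply the copy-editing rule: expansion (ACRONYM) on first mention,
        bare acronym on every later mention."""
        seen = set()
        def replace(match):
            acro = match.group(0)
            if acro in seen:
                return acro  # later mentions: acronym only
            seen.add(acro)
            return f"{ACRONYMS[acro]} ({acro})"  # first mention: expansion (acronym)
        pattern = r"\b(" + "|".join(ACRONYMS) + r")\b"
        return re.sub(pattern, replace, text)

    print(expand_on_first_use("NSF funded the study. Later, NSF renewed the grant."))
    # National Science Foundation (NSF) funded the study. Later, NSF renewed the grant.

Fifty pages after a lone OIS mention, or in a phrase like "the Stanford Department of CS," the output of such a pass is exactly the kind of thing a human has to override.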

One important contextual issue is the familiarity of the acronym to typical readers. Many systems and applications were known only by their acronym. Discussions of Engelbart's famous NLS system rarely mention that it was derived from oNLine System, or that RAND's successful JOSS programming language stood for JOHNNIAC Open Shop System. To introduce the expansion is disruptive and unnecessary. I was familiar with the pioneering PLATO system developed at UIUC (aka University of Illinois at Urbana-Champaign : ), but did not know the expanded acronym until last week, when I looked it up for an acronym glossary that reviewers requested I include with my book. The rule of including an expansion only once is fine for a short paper or for an acronym that is familiar or heavily used, such as NSF and CHI in the case of my book. But when a relatively unfamiliar acronym such as OIS is introduced early and reappears after a gap of 50 pages, why not remind the reader (or inform a person browsing) of the expansion? Override the rule! Similarly, an acronym that is heavily used in one field, such as IS for Information Systems, could be expanded a few times when it might be confused by people in other fields (e.g., for information science). These judgment calls require knowledge that a copy editor won't have.

Other examples are numerous and sometimes subtle. CS is a fine acronym for Computer Science, and CS department is fine, but “department of CS” is awkward and “the Stanford Department of CS” is even worse. Similarly, it seems shabby to say that someone was elected “president of the HFS” rather than “president of the Human Factors Society.” We would not call Obama “President of the US” in a formal essay.

I will conclude with one that I haven’t been able to solve. Consider the sentence, “The University of California, Los Angeles won the game.” That sounds good. But “The UCLA won the game” sounds worse than “UCLA won the game.” Global acronym replacement fails.

Try this experiment: In the sentence “X awarded me a grant,” replace X with each of the following: Defense Advanced Research Projects Agency, Federal Bureau of Investigation, Internal Revenue Service, Indiana University, University of California Los Angeles, National Science Foundation, DARPA, FBI, IRS, IU, UCLA, and NSF. Which sentences should start with the word “The”? I would do so for all of the expanded versions except Indiana University. For the acronyms, I would only do it for FBI and IRS. Global replace often fails, and I cannot find an algorithm that explains all of these. My conclusion is that English is more mysterious than I realized, and authors are well-advised to pay close attention and collaborate sympathetically with editors.
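Out of curiosity, one can encode those judgments and test a simple candidate rule against them, for example "initialisms spelled out letter by letter take 'the,' pronounceable acronyms do not." The table of judgments below is simply the list from the paragraph above; the heuristic and code are my own illustration, and as the output shows, the rule immediately misfires, which is the author's point.

    # My own quick check, encoding the judgments listed above as ground truth.
    judgments = {          # True = "The X awarded me a grant" sounds right
        "DARPA": False, "FBI": True, "IRS": True,
        "IU": False, "UCLA": False, "NSF": False,
    }

    # Pronounced as a word (acronym) vs. letter by letter (initialism).
    pronounced_as_word = {"DARPA": True, "FBI": False, "IRS": False,
                          "IU": False, "UCLA": False, "NSF": False}

    # Candidate rule: initialisms take "the"; pronounceable acronyms do not.
    for name, takes_the in judgments.items():
        predicted = not pronounced_as_word[name]
        if predicted != takes_the:
            print(f"Rule fails for {name}: predicted {predicted}, author says {takes_the}")
    # Rule fails for IU: predicted True, author says False
    # Rule fails for UCLA: predicted True, author says False
    # Rule fails for NSF: predicted True, author says False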



Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.


Escape the room: A girl on a drip, a pizza slice, and a smartphone set to stun


Authors: David Fore
Posted: Fri, December 23, 2016 - 9:56:39

One late autumn afternoon I found myself in an Ohio hospital room sitting with a teenage girl I'll call Orleans Jackson. She was spending her 15th birthday “getting poked full of holes,” as she put it. 

Rail-thin with almond eyes, Orleans had her hair in springy dreadlocks. She apologized for “being in total moonface mode,” describing the signature look of those on a regular and robust diet of steroids.

Orleans' visit today was a big deal. She was transitioning from a daily cocktail of oral medications (featuring the steroid prednisone) to quarterly infusions of infliximab, a “biologic” drug derived from human genes that would be delivered directly into a vein in her left arm.

From that hand hung a sloppy slice of pizza, courtesy of a pair of nurses who felt for any girl forced to spend her birthday at the hospital. With her other hand, Orleans grasped a smartphone for dear life.

Orleans was one of about 30 patients, family members, nurses, doctors, and researchers I followed around that year. This was part of a nationwide ethnographic study into a healthcare ecology where kids were undergoing treatment for—and learning to cope with—symptoms associated with inflammatory bowel disease (IBD). What I learned in the field would inform the design of a “learning health system” called C3N. Funded by the National Institutes of Health (NIH), C3N’s aim was to improve care and drive down costs by providing participants better online and offline means to make and share new knowledge in real time. 

Once our study was complete, we would go on to synthesize our data in hopes of identifying salient psychographic, demographic, and behavioral patterns (personas, usage situations, scenarios, and system requirements) that would help us frame the design work ahead of us.

Orleans was a teenage girl from one of the impoverished and predominantly African-American neighborhoods near the hospital. I also learned from a prior conversation with her father (an auto mechanic from Jamaica I’ll call Floyd) that she was “a closed-mouth girl by nature” who fell into a deep depression two years before when he realized she was bleeding from her gut. This would be the first of many times she would visit the emergency room before finally getting a diagnosis.

Most of my demographics, opportunities, and personality traits, meanwhile, were very nearly the mirror opposite of hers. With all our differences, I needed to figure out how to connect.

Where to start when questioning the Sphinx? How about the end.

Let's chat about poop!

Controlling one’s bowels is a fundamental developmental and social benchmark among humans and other mammals. Cultural taboos—not to mention a widespread physiological aversion to fecal matter—enforce embargoes on serious contemplation or discussion of errant gastrointestinal activity.

The painful and awkward chronic illnesses that fall under the IBD rubric—chiefly Crohn’s disease and ulcerative colitis—can have life-changing physical, social, and psychological implications. In addition to the everyday indignities that define the common American school experience, kids like Orleans must develop clever ways to undertake—without undue notice—dozens of painful, unpleasant, and bloody diarrhea episodes each day.

“This thing isn’t going to kill a kid outright,” one Kentucky mother told me as we sat with her afflicted son in their living room the week before. “But it’ll kill you a little bit every day if you don’t watch out.”

One thing I noticed straight off about this population: Given half a chance, those who deal with IBD are blazingly direct if only to prime the pump of knowledge. But what chance was I giving Orleans? That’s the question I turned over in my head that afternoon as minutes turned to hours and a birthday cake replaced the pizza.

I did what I often do in such situations. I was friendly but hung back. I took a note here and there but kept my head up and eyes open. I observed clinicians and family members who entered and left the room and greeted them when appropriate. I took some snapshots.

Every now and then I glanced at my topic list for inspiration. I asked Orleans about her visitors. She offered cursory responses. I asked her to tell me a story about school. “Boring.” About home? “Boring.” What makes for a no-good-very-bad-day? “The hospital.” What makes for a pretty-good-day-all-things-considered? “Pizza and cake.”

More curt answers. More notes. More hanging out.

I pulled out a deck of brainstorming cards. Nuthin’ doin’. I invited her to draw pictures. Not today. I took a final glance at my topic list and saw I had actually managed to get what I had set out to get. But I also knew enough to know that I knew nothing about how this girl might benefit from this otherwise abstract thing called C3N.

She remained a crystal mystery to me.



When at first you don't succeed, play, play again

Time was getting short. I studied her messing around with her phone and I posed my final question.

“What do you play?”

“Escape the Room.”

That’s when the conversation flowed. She showed me the ins and outs of the game, what was “addictive” about it and what was “stupid but fun anyway.” She described the people she encountered online while playing. Nobody with IBD, she said, but at least they knew her name (even if she used a pseudonym) and treated her nice (even if they were competitors). She told me she had never met anybody else with the disease, and that she could trust only one friend with the bloody details.

“The only thing more difficult than a messy flareup at school,” she said, “was being absent all the time.”

She understood what the doctors and nurses were saying about her disease, she explained, but what she really wanted was to meet a girl her age “who could share her truth.” She said that sometimes she could play the game for hours as the time flowed by, which helped calm her nerves and permit her to return to "real life" a little lighter of heart.

Soon the nurses arrived to bring the treatment to an end, and Floyd showed up to gather his daughter. We said our good-byes in the moment; later, we would invite them and others to contribute insights as part of the C3N advisory group.

Let's meet where you are

When I think about Orleans, I feel gratitude to her for reminding me how successful designs form themselves around the lives of those who will depend on them. In her case, she was looking for opportunities for social contact and a few healthy doses of fun. She wanted to leaven her isolation and reduce stress; she also wanted to increase the likelihood of trustworthy bonds that hold potential for sharing knowledge and feelings.

The thing is this: play is a near-universal interactive mode that permits us to work through all sorts of ambitions and fears, ideas and strategies. Play works on our bodies and minds because it is a metaphoric—and therefore relatively safe—environment. Researchers know that playing games with respondents can yield insight. What this story suggests is that simply asking a respondent about her gameplay can offer productive ways to navigate challenging personal terrain.

The best designs meet people where they are. For kids like Orleans, that place is sometimes a small room where they are tethered to an IV as they play with the locks on the door.

Applying design thinking and practices to a medical study requires careful data collection and synthesis. Accordingly, our work was overseen by an institutional review board. I describe our research and synthesis methods in greater detail in this article, published in a leading peer-reviewed medical journal. My wise and patient co-authors were Peter Margolis, Laura Peterson, and Michael Seid.



David Fore

David Fore cut his teeth at Cooper, where he led the interaction design practice for many years. Then he went on to run Lybba, a healthcare nonprofit. Now he leads Catabolic, a product strategy and design consultancy. His aim with this blog is to share tools and ideas designers can use to make a difference in the world.


A design by any other name would be so delightful


Authors: Monica Granfield
Posted: Thu, November 10, 2016 - 11:46:08

Three or four years ago I attended the UXPA conference. The theme was "We're not there yet." After 26 years as an interaction, UI, and UX designer, my first reaction was joy at the validation that I wasn't alone in my perception of my role on a product. My next reaction was despair that, well, I am not alone in my perception of UX and my role, and that this issue is much larger than just me. The message was "We have a way to go." I began thinking about the perception of UX and design and how it could be better promoted to move UX along, to get us "there," starting within my own organization and drawing on my past experiences.

There is still plenty of confusion over exactly what UX is and a great misperception that it is visual design. Although the word "design" means "to plan and make decisions about (something that is being built or created)," somehow it is still very much perceived as creating only visual solutions. I still hear things like "Well I am the PM"  or "Oh, I thought you were the visual designer" or "When will we see pictures?" and "This LOOKS great."

Viewing the final product simply as images of what you see on the screen, in a visually pleasing way, is a misperception of what UX design is and how it contributes to the product. The fact is that these "screens" are actually sets of complex workflows, technology, and interactions that are the result of months of research, including customer visits, interviews, cross-group work with services or finance teams, fleshing out complex workflows, and resolving implementation issues with development, all in order to negotiate the way to the best end-user experience possible—aka, what you are seeing on these screens.

I took a step back and started observing what transpires in the typical product-creation process. I observed something like the following: Marketing and product have an idea or have received a customer request, or customer support or design has unearthed some pain points in the product. A developer, a designer, and maybe a product manager get together to whiteboard and exchange ideas. Once this small dynamic group feels solid on a concept, the designer goes off and plans out the layout and interaction for each of the screens and the workflows that build the relationships between them, to ensure that a user is able to accomplish their tasks. Some customer interviews or visits may take place, and some usability testing may occur to gain feedback on the ideas. Therein lies the problem. The concept that this entire process IS the design is what is being missed. The final step of engaging a visual designer is where the focus sits.

The premise of design thinking is to address exactly this: All team members, regardless of their role, are responsible for the design because the process is the design; the screens and the experience are the end result. All of those involved in creating the product seem to understand that they are designing the product, but still consider the designer mainly responsible for making the pictures of the decisions that have been made along the way.

As I thought about this, I wondered how we can change this perception of design and move design to be seen as the process, not just the end result. I began by considering the semantics of how I refer to and use the word "design" in my everyday work.

Semantics count and influence the way design is perceived. When the research phase for a feature or an experience is under way, I don't refer to it as "the research phase of the design"; I refer to it as "conducting design research." Why? If I say, "I am working on the research phase of the design," what is heard as "the design" is an end result: a picture. The emphasis lands on "design" and on the idea that a picture is being tested to gain insight, to see whether it works or how to make it better. What comes back might sound like, "OK, that's great, but when will you have the pictures ready?" because the focus is still on the end result, what people are going to see, rather than on the research and process that drive it. Getting the team to focus on, and value, the journey of the process is the goal. Saying "I am in the design research phase" may be a subtle shift, but it refers to design with the focus on a portion of the process: the research.

I also looked at how I've worked with teams to create product designs. One thing I noticed is that we tend to plan around the stages of design but talk about the final artifacts—the pictures—as the design goal, rather than treating the process, and the experience it creates, as the goal. For example, when I begin generating wireframes, I no longer refer to them as "early designs" or "wireframes of the designs"; again, what we say matters. Instead, I call them "early product/experience concepts" or simply "concept wireframes," excluding the word "design." Why? Because wireframes are often viewed as final, and then comes a push to make them LOOK better. Words and their semantics count if we are not to make design inconsequential, and if we are to educate teams that design is the process, the journey to the solution.

I have begun trying this approach by making a conscious effort to talk about the various stages of product creation in terms of how the process contributes to the final outcome and to the experience users have with the product, rather than referring to any part of the process as "the design," as if the design were an ultimate end product that magically appears. An example: when referring to design research, I explain how research is a way to understand what users expect from the product and how workflows need to be created, because these are the backbone of identifying how users move through the product to accomplish tasks, and how the technology and data architecture shape when and how users see information and interact with it.

As creators, collectors of ideas + possibilities, the words we choose to use, and how we use them, can elevate our conversations away from the end results and toward the process and the journey. That is the heart of the design.

It is perhaps a statement of the obvious, but worth emphasizing, that the forms or structures of the immediate world we inhabit are overwhelmingly the outcome of human design. They are not inevitable or immutable and are open to examination and discussion. Whether executed well or badly (on whatever basis this is judged), designs are not determined by technological processes, social structures, or economic systems, or any other objective source. They result from the decisions and choices of human beings. While the influence of context and circumstance may be considerable, the human factor is present in decisions taken at all levels in design practice.—John Heskett




Monica Granfield

Monica Granfield is a user experience strategist at Go Design LLC.


Same as it ever was: Constitutional design and the Orange One


Authors: David Fore
Posted: Tue, October 11, 2016 - 3:07:05

In politics, as in music, one person’s stairway to heaven is another’s highway to hell. Proof is in the polls: Americans across the political spectrum believe the country is headed in the wrong direction. And here’s the thing: Rather than dispute one another over the facts, we acknowledge only the facts that suit our present viewpoint and values.

Then there’s the fact that none of us believe in facts all the time.

As the unwritten rules of the 140-character news cycle render our political aspirations into a lurid muddle, our national conversation circles the drain into threats of bloodshed in the streets. Many are asking how we got here. The answer is this: our Constitution, which was designed by—and for—people who hold opposing viewpoints about a common vision. The current election cycle demonstrates how this document still generates a governmental system that works pretty much as planned. It does so by satisfying most of Dieter Rams’s famous principles for timeless design most of the time. What’s more, I believe, this bodes well for the outcome of the current election. Read on...

So you want to frame a constitution...

The public has seized upon the vision of the American dream crashing into the bottom of a ravine, wheels-up, spinning without purpose. Rather than giving in to panic, however, I recommend taking a deep breath and listening closely. If you do, you might soon hear the dulcet chords and hip-hop beats of the century’s most successful Broadway musical, Hamilton, rising above the hubbub.


Design scrum, circa 1787. See: Cosmo

Here’s the scene: The Americans just vanquished the greatest military on the planet. The fate of the world’s newest nation now dances upon the knife­-edge of history. The new leaders wear funky wigs while declaiming, cutting deals, and making eyes at one another’s wives. They are predicting the future by designing their best possible version of it.

Alexander Hamilton stars as the libertine Federalist who believes the Constitution should gather under its wing most of the bureaucratic and legal functions of a central government, complete with a standing army and a central bank. James Madison is the pedantic anti-Federalist who argues that rights and privileges must be reserved to individuals and states to guard against tyranny.

These two partisans hold opposing viewpoints, vendettas, and virtues. Their mutual disdain shines through every encounter and missive as they get busy framing a new form of government that balances each side against the other.

Back in the day, they called it framing. Today, we call it design thinking. Same difference.

In "Wicked Problems in Design Thinking," Richard Buchanan posits that intractable human problems can be addressed with the mindset, methods, and tools associated with the design profession. This means sustaining, developing, and integrating people “into broader ecological and cultural environments” by means of “shaping these environments when desirable and possible, or adapting to them when necessary.” Sounds a lot like the task of forming a more perfect union that establishes justice, ensures domestic tranquility, and all the rest.

Design is constructive idealism. It happens when designers set out to create coherence out of chaos by resolving tensions into a pleasing and functional whole that realizes a vision of the future.

Designers might balance content against whitespace, for instance, in order to resolve tensions in a layout that would otherwise interfere with comprehension. Designers are good at tricking-out flat displays of pixelated lights to deliver deeply immersive experiences. We craft compelling brands by employing the thinky as well as the heartfelt. And because the most resilient designs are systems-aware, we craft workflows and pattern libraries that institutionalize creativity while generating new efficiencies.

The Framers, for their part, were designing a system of governance that would have to balance different kinds of forces. The allure of power against the value of the status quo. The inherent sway of the elite versus the voice of the citizen. The efficiencies of a central government and the wisdom that can come with local knowledge. They resolved these and similar tensions with a high input/low output system designed for governance, and delineated by a written Constitution.

From concept to prototype to Version 1

The Framers knew their Constitution would be a perpetual work in progress. It would possess mechanisms for ensuring that differences could always be sorted out, that no single political faction or individual could rule the day, and that changes to the structure of government—while inevitable with time—would have to survive the gauntlet before being realized.

The Framers arrived at this vision while struggling against the British, first as their colonial rulers then as military adversaries. They also had the benefit of a prototype constitution: the Articles of Confederation. Nobody much liked it when they drafted it and even fewer liked governing the country under its authority. It was a hot mess that nevertheless gave the Framers a felt sense of what would work and what would not. This prototype lent focus and urgency to the task of creating a successor design that would be more resilient and useful.

Hamilton and Madison represented just two ends of an exceedingly unruly spectrum of ageless ideals, momentary grievances, political calculations, and professional ambitions. Still, they were the ones who did the heavy lifting during the drafting process, while boldface names such as Jefferson, Franklin, Adams, and Washington kibitzed from the wings before rushing forward to take credit for the outcome. Sound familiar?

Also familiar might be the fact that the process took far longer and created much more strife than anybody had anticipated. Still, against these headwinds they shaped a Constitution (call it Version 1.0) that delineated the powers of three branches of government without including the explicit civil protections sought by Madison and the anti-Federalists.

Not wanting the perfect to be the enemy of the good, the Framers decided to push the personal-liberty features to the next release... assuming there would be a next release.

And there was. Once the half-baked Constitution shipped to the public, it was ratified.

Now Madison was free to dust off his list of thirty-nine amendments that would limit power through checks and balances of the government just constituted. This wish list was whittled down to the ten that comprise the Bill of Rights, ratified in 1791.

The resulting full-featured Constitution boasts plentiful examples of design thinking that address current and future tensions. It also contains florid prejudices, flaws, and quirks that hog-tie us to this day.

But still, it breathes.

First, the bug report

Let’s look at the current slow-­motion spectacle over the refusal of Senate leaders to hold hearings on the President’s choice for filling a vacancy on the Supreme Court. It’s not that they refused to support the choice… they refused to consider his candidacy.

How could this be? It is owing to a failure of imagination on the part of the Framers, who did not anticipate that leaders would simply decide not to do their jobs. And so the Constitution is silent on how swiftly the Senate must confirm a presidential appointee, making it conceivable that this bug could lead to a court that withers away with each new vacancy.

And while this is unlikely—at some point even losers concede defeat if only to play another day—this is something that needs fixing if we want our third branch of government to function as constituted.

Other breakdowns are the result of sloppy code. Parts of the Constitution are so poorly written that the fundamental intent is obscured. The hazy, lazy language of the Second Amendment serves as Exhibit A:

A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.

Umm, come again? Somebody must have been really rather drunk when that tongue twister was composed; it has since opened up a new divide between the two ends of the American spectrum.


Strategy is execution. See: sears.com

When in doubt, blame politics

More troubling to me are the results of two compromises that mar the original design. High on that list is the 3/5 compromise, which racialized citizenship and ensured that slaveholders would hold political sway for another century. This compromise ultimately sprang from a political consideration, at variance with the design, meant to ensure buy-in by Southern states. The prevailing view was that in the long run this kind of compromise was necessary if there was to be a long run for the country. The losing side held that a country built on subjugation was not worth constituting in the first place. Our bloody civil war and our current racial divides are good indicators that this was a near-fatal flaw in the Constitutional program, now fixed. Mostly.

Another compromise at variance with the original design creates an ongoing disenfranchisement on a massive scale. Article 1 says that each state shall be represented by two senators, regardless of that state’s population. The result? Today’s California senators represent 19 million citizens each, while Connecticut’s senators each represent fewer than 2 million people.

It’s important to note that this was a feature, not a bug. It was consciously introduced into the program in order to ensure that those representing smaller states would vote to support ratification.

And it works! We know that because every organization is perfectly designed to get the results it gets. This near-tautological axiom is attributed to followers of W. Edwards Deming. In some ways the father of lean manufacturing, Deming showed how attention to outcomes sheds light on where design choices are made. In this case, small states have outsized influence, which keeps them in the game.


The Senate: undemocratic by design. See: Washington Post.

Still, not everything intended is desirable. Even Hamilton—notwithstanding his own skepticism about unbridled democracy—opposed this Senatorial scheme. Rather than dampening “the excesses of democracy,” which he saw as the greatest threat to our fledgling nation, this clause resolves nothing save for a short-term political problem that could have been addressed through means other than a fundamentally arbitrary scheme that would perpetually deform the will of the people.

Easy to use… but not too easy

Still, Hamilton got much of his design vision into the final product. The document boasts a wide range of process-related choke points, for instance, that permit plentiful consideration of ideas while ensuring few ever see the light of day.

Consider how a bill becomes a law. It must (almost always) pass through both the House and the Senate, then survive the possibility of a presidential veto, then avoid being struck down by the courts. By making it easy to strangle both bad bills and good ones in their cribs, this janky workflow is a hedge against Hamilton’s concern about an “excess of lawmaking.”
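Read as software, that paragraph is a chain of sequential gates, any one of which can kill a bill. A toy sketch in Python, with made-up pass rates (nothing here is drawn from real legislative data), shows how the arithmetic of chained gates does Hamilton’s hedging for him:

import random

# Toy, made-up odds of clearing each gate; not empirical estimates.
GATES = {"House": 0.5, "Senate": 0.5, "President signs": 0.9, "Courts uphold": 0.95}

def becomes_law(rng):
    # A bill becomes law only if it survives every gate in sequence.
    return all(rng.random() < p for p in GATES.values())

rng = random.Random(1787)
trials = 10_000
passed = sum(becomes_law(rng) for _ in range(trials))
print(f"{passed / trials:.1%} of proposals survive all four gates")
# Roughly 0.5 * 0.5 * 0.9 * 0.95, or about 21 percent: most bills die somewhere along the way.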

Does it succeed? Yes, if we judge it by the measure of fidelity between intent and outcome. Partisans shake their fists when their own legislation fails but they appreciate the cold comfort that comes from the knowledge that their friends across the aisle wind up getting stuck in the same mud as they do. (See Kahneman et al. on the subject of loss aversion.)

The killer app

If the Constitution has an indispensable feature, I cast my vote for the First Amendment. The Framers felt the same, so they gave it pride of place, designating it as the driving mechanism for all that follows. In so doing they confirm what we all know: that speech springs from core beliefs and passions whose untold varieties are impervious to governance anyway… so why even try?

Equally important, by making it safe to air grievances about the government, this amendment guarantees that subsequent generations will enjoy the freedom to identify and resolve the tensions that arise in their lives, just as the Framers had in theirs. They also well understood the temptation of abandoning speech in favor of violence, a specter that always hovers above political conflict. Better to allow folks to vent so they don’t feel they need to act.

A success? Yes, if we evaluate the First Amendment by the plain fact that, more than two centuries later, Americans do enjoy popping off quite a lot, even if what we say is divisive and noxious. With a press [1] free to call B.S. on candidates, though, we have a good shot at exposing the charlatans, racists, and con men among us. This amendment ensures that voters have the information they need to see that the keys to 1600 Pennsylvania Avenue fall into the right hands. I, for one, am confident that they will.

What would Dieter do?

The Constitution is an imperfect blueprint created by manifestly imperfect men. In lasting nearly 230 years, though, it has become history’s longest-surviving written national charter, doing so according to Rams’s principles of timeless design:

  • It is durable, which also indicates a certain thoroughness.
  • It is innovative, in that no country had established a democracy so constituted.
  • It is useful, in that we depend upon it to this day.
  • While not particularly aesthetic—and also often obtuse—the document manages to be unobtrusive in the way Rams means: it ensures our ability to express ourselves.
  • It is for the most part honest in that it does what it sets out to do. Still, like other governments of its time, ours devalued and/or demeaned most inhabitants, including blacks, Native Americans, women, and those without property. Still does, in many ways.
  • By leaving many decisions to state governments and individual citizens, it has as little design as possible.
  • Where the Constitution falls short, utterly, is Rams’s principle of environmental sustainability. And while this is forgivable given the historical context, we are now compelled to consider how to bend this instrument to the exigencies of our chance behind the wheel.

I admire Ram’s canonical principles, but I’ve always felt there is something missing. That’s magic.

Great design thrives in our hearts, and balances in our hands, just so. Great design is transcendent, and I have come to believe this is so owing to its ensoulment.

You sense the ensouled design because somehow it reflects your essence and furthers your goals, regardless of whether you understand how it does so.

In the case of our Constitution it is we, the people, who supply the magic. This is what the Framers intended. And so it is up to all of us to redeem the oft-broken promises of the past, and to realize a more inclusive, fair, and resilient vision of America, generation after generation.

Same as it ever was?

Endnote

1. Here, “the press” includes the Internet and its cousins. I’ve always wondered whether the First Amendment is chief among the reasons the United States has been a wellspring for the emergence of these sociotechnical mechanisms and their inherent potential for liberating speech for everyone everywhere. Further research, anyone?




David Fore

David Fore cut his teeth at Cooper, where he led the interaction design practice for many years. Then he went on to run Lybba, a healthcare nonprofit. Now he leads Catabolic, a product strategy and design consultancy. His aim with this blog is to share tools and ideas designers can use to make a difference in the world.


Thailand: Augmented immersion


Authors: Jonathan Grudin
Posted: Thu, September 08, 2016 - 4:29:47

Our first day in Thailand, we visited the Museum of Regalia, Royal Decorations and Coins. Case after case of exquisite sets of finely crafted gold and silver objects from successive reigns: How had Thailand managed to hold on to these priceless objects for centuries?

That night, Wikipedia provided an answer: Thailand is the only Southeast Asian country that avoided colonization and the inevitable siphoning of treasures to European museums and country houses. Thailand’s kings strategically yielded territory and served as a buffer between British and French colonies. Thailand has cultural resemblances to Japan, which was also self-governing during the colonial era. In each, uninterrupted royal succession progressed from absolute monarchs to ceremonial rulers who remain integral to the national identity. Both countries exhibit a deep animism with a Buddhist overlay. In neither country were ambitious citizens once forced to learn a western language and culture. 

In the past, such insights were found in guide books. Today, overall, digital technology is an impressive boon for a curious tourist. It shaped our two-week visit just as, for better and worse, it is shaping Thai society. Consider making use of this golden era for cultural exploration in a compressed timeframe before the window of opportunity closes, before technologies blend us all into a global village.

GoEco and The Green Lion

For over ten years, my wife has planned our travel holidays on the Web. Online tools and resources steadily improve. The planner’s dilemma is the sacrifice of blind adventure for a vacation experienced in advance online, seeing the marvelous views and reading accounts of others’ experiences, and learning and later worrying about possible pitfalls—where nut allergies were triggered, smokers ruined the ambience, and so forth.

This year, TripAdvisor and GoEco were instrumental. GoEco aggregates programs that are designed to be humanitarian and environmentally responsible, creating some and also booking programs created by other organizations. We spent a week in one managed by The Green Lion, which began (as Greenway) in Thailand in 1998 and is now in 15 countries in Asia, Africa, Latin America, and Fiji. Travelers do not book directly with The Green Lion—it works with 75 agencies, one of which is GoEco.

The Green Lion “experience culture through immersion” programs, many in relatively rural Thailand, include voluntary English teaching in schools, construction work in an orphanage, Buddhism instruction in a temple, an intense survey of Thai culture, elephant rehabilitation, and Thai boxing. A few dozen participants in different programs stayed in the same lodging compound in Sing Buri Province north of Bangkok. Most volunteers were European or Chinese students on gap years or summer breaks, about two-thirds women slightly older than my teenage daughters; my wife and I may have been the oldest. We shared experiences over communal dinners and breakfasts.

The compound was a miracle of lean organization. The manager appeared on Sunday to convey the basics to new arrivals—distribute room keys, review the schedule on a whiteboard, tell us to wash our dishes and not to bring alcohol back across the rural road from shops established by enterprising Thais to serve the year-round flow of volunteers and explorers, and so on. Every day, a cheerful Thai woman cooked and placed wonderful vegetarian food on a central table. (Thai herbs and spices have reached markets near us at home, but some great Thai vegetables, not yet.)  Our conversations were self-organizing. While we were out for the day, there was light housekeeping, gardening, and food delivery.

A volunteer week

We volunteered to help in a school. At some schools, volunteers were handed a class to instruct in English or entertain with no assistance, but our government-funded rural K-12 school had a smart English teacher who had worked her way through a Thai university. Virtually all students managed to acquire a phone, although many came from poor families. The classroom had one good PC and a printer. The teacher maintains a Facebook page, and would like some day to get formal instruction in English from a native speaker.

Our week was unusual, with all-school presentations and displays on Tuesday for a government ministerial inspection, and again on Friday to commemorate the anniversary of the ten-nation Association of South East Asian Nations (ASEAN). On Monday and Thursday, we helped our class prepare booths and presentations. Our daughters were given roles. The events were held in a large, covered open-air space ideal for a tropical country (daytime temperatures were around 90 F). On display was the full range of school activities. Many had a vocational focus: preparing elegant foods and traditional medicines, and making decorations and other objects valued in daily life. In a carefully choreographed reenactment of an historical event or legend, pairs of students representing Burmese invaders and Thai defenders engaged in stunningly fierce combat with blunt but full-length swords. On Wednesday we went to an orphanage with other volunteers and painted walls and helped dig a septic pit. One small, silent boy dropped into the pit with an unused shovel and for 20 minutes tossed earth out above his head, outlasting us volunteers in the scorching sun.

In the school, we saw a hand-made poster on global warming, contrasting the advantages (better for drying clothes and fish) with the disadvantages (dead animals and floods). Drought in Thailand has grown more severe each of the past four years. While changing planes in Taipei airport, we had noticed a large government-sponsored student-constructed public exhibit on global warming. It felt eerie to see the topic embraced in Asian schools more openly than is possible in United States schools.

A tourism week

We spent three days with a great guide and driver recommended on TripAdvisor.com. The guide learned English partly in school from a non-native speaker, then while working. She was always available for discussion, but when we were swimming, kayaking, or otherwise engaged, if not taking a photo or video of us on one of our devices, she was texting on her phone. Mobile access was available except for stretches of the drive to Kanchanaburi Province, west of Bangkok. We hiked and climbed to see waterfalls in a national park and kayaked for hours down a river, spending a night in a cabin on the water. We helped bathe some rescue elephants and hiked to see an “Underwater Temple” that is no longer under water; the drought-stricken reservoir behind Vajiralongkorn Dam, which inundated the area thirty years ago, has receded. We walked along the “Death Railway” featured in The Bridge on the River Kwai. The Hellfire Pass museum details the harrowing story of forced construction of the Thailand-Burma Railway by World War II prisoners of war. The museum, an Australian-Thai effort, gathered recollections from Australian, British, and American survivors, as well as contemporaneous written accounts, drawings, and images (POWs sneaked in a few cameras and some local Thais helped them at great risk).

Three Pagodas Pass on the border with Myanmar (often still called Burma), once a quiet outpost, has become one of the better tourist markets we saw—our guide bought goods to take back to colleagues. The immigration and customs building at the crossing proved to be security theatre—from a shop in the market we stepped out a back door and found ourselves on a shop street in Myanmar. As co-members of ASEAN, the neighbors’ relations have relaxed; Burmese raiders are history, although workers slip across from Myanmar in search of higher-paying employment.

Bangkok

Our third focus was Bangkok, which defies easy description. High-rises of often original design and décor are proliferating. On traffic-congested city streets one is often in range of huge LCD displays that flash images or run video ads, reminiscent of Tokyo. We took a public ferry down and back up the wide Chao Phraya River alongside the city, getting off to spend an afternoon exploring the Grand Palace complex, two dozen ornate buildings constructed over a century and a half and spread over 200,000 square meters. “Beware of wily thieves” warned a sign, but instead of larceny in Bangkok, apart from occasional minor errors in making change or setting a taxi or tuk-tuk fare, we encountered dramatically honest and helpful locals, often delighting in selflessly advising an obvious tourist, in a city with many tourists. We researched online and visited the house-museum of Jim Thompson, a benevolent 20th-century American with a fascinating biography. Twice we traversed Khao San Road, once a backpackers’ convergence point and now a tourist stop. We found a bit more of the old ambience one street over.

Phone maps were useful in the countryside and wonderful in Bangkok—and not only for navigating. A map report of an accident ahead exonerated our taxi driver of blame for being stuck in gridlocked traffic. We could relax, observe the ubiquitous street life, and know that wherever we were headed, we would be there when we got there.

Politics and tragedy

On our second Sunday, Thailand voted on a constitutional revision proposed by the military junta that deposed a democratically elected government two years earlier. The change strengthened the junta’s grip; campaigning against it was not allowed. After its passage was announced and a few hours after our plane took off, bombs began exploding. One of our daughters had friended several of our Green Lion cohort; through Facebook we learned that eight who had progressed south were injured, seven temporarily hospitalized. A shocking contrast to the remarkably peaceful, friendly, and safe Thailand we experienced—another mystery that Wikipedia helped resolve. A map on the “Demographics of Thailand” page shows that 5% of the population near the southern border with Malaysia, a Sunni Islamic state, are ethnic Malays; they had voted overwhelmingly against the constitutional change. Some are separatists battling a Buddhist government that is capable of harsh repression.

The future of a unique culture

Apart from the 5%, Thailand appears to have a homogeneous, friendly culture, replete with animism, Buddhism, low crime, and sincere respect for the royal family. The 88-year-old King is the world’s longest-serving head of state. He is the only living monarch who was born in the United States; his father, a prince, studied public health and earned an MD at Harvard. His program for a “sufficiency economy” of moderation, responsibility, and resilience recognizes environmental concerns and is generally respected, although it condones a class system. Like other ASEAN countries, Thailand has professionals who lead upper middle-class lives alongside a subsistence-level less-educated population, plus a few super-wealthy families. Thailand has a relatively high standard of living in Southeast Asia; the poor appear to get food and basic medical attention.

Thailand is not without problems. The highly erratic dictatorship is worrisome. Even in better times, a sense of entitlement at higher rungs of the hierarchy leads to corruption. Some Thai women expressed the view that “Thai men are useless, they don’t work, they drink and expect women to do the work.” We did indeed see exceptionally industrious women, a phenomenon Jim Thompson channeled in a non-exploitative way to create the modern Thai silk industry. Basic education extends to boys and girls, but higher education favors males, for whom it is free if they ordain temporarily as monks, a common practice among Thai men. Omnipresent male monks may contribute more philosophically than productively, but we saw men everywhere driving taxis, tuk-tuks, and working in construction. We saw industrious male students—and the orphanage shoveler.

The King and Queen are called Father and Mother. This familial aura may contribute to the country’s smooth functioning. Artisans produce goods of the kinds that we saw students making, and people appreciate them. They could no doubt be mass-produced at low cost. Without discounting the efficacy of traditional medicines used by both classes, additional modern medicine could bring benefits. And Thais could acquire more technology than they do.

Is that the path they should take? Will they be happier shifting from animism and Buddhism to consumerism? Do they have a choice? They have phones. They can see alternative ways of living. Bangkok is a stunning city of glamor, where Ronald McDonald stands expressionlessly with fingers tented in a Thai greeting of respect.


Thanks to Gayna Williams for planning the trip and suggesting material to include in this post. Isobel advocated that we help children on our vacation. Eleanor brought her high engagement to interactions with volunteers and students. Phil McGovern provided background on The Green Lion and Fred Callaway shared observations on the flight out of Bangkok.




Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.


The map is the territory: A review of The Stack


Authors: David Fore
Posted: Wed, August 03, 2016 - 3:18:39

From that phantom vibration to that reflex to grab your own rear, you are responding to the call of The Stack…

From the virtual caliphate of ISIS to the first Sino-Google War of 2009 to the perpetually pending Marketplace Fairness Act, The Stack gives birth to new sovereignties even as it strangles others in their sleep.

From YouTube’s content guidelines to Facebook’s news algorithm to Amazon’s invisible hand, The Stack promulgates de facto cultural, legal, and economic norms, transforming conventional borders into well-worn cheesecloth.

From @RealDonaldTrump’s hostile takeover of the GOP to all those hand-wavy pretexts for Brexit, The Stack scrambles party politics by offering peer-to-peer loyalties, fungible citizenship, royalty-free political franchises, and 24/7 global platforms… all for the price of a moment of your time.

From mining coal to rare earth to bitcoins, The Stack wrangles the resources to fuel itself—and hence ourselves—even if at the price of planetary autophagia.

From the Internet that got us [1] this far to the neo-internets coming online to splinter and consolidate perceptions, fiefdoms, and freedoms, The Stack is reaching a new inflection point.

From earth to Cloud to City to Address to Interface to User, Your Permanent Record is enacted, assembled, stored, distributed, accessed, and made into meaning by means of The Stack.

The Stack had no name, not until Benjamin Bratton borrowed the term from computational architecture, capitalized it, then applied it to this techno-geo-social architecture we call home. It’s also the title of his new book published by MIT Press: The Stack: On Software and Sovereignty.

The book delineates the hockey-stick ascent of digital networking from its early identity as a narrowly defined instrument made and used by government and military types to its central role in creating a planetary organ of cognition, composition, dissolution, and transformation. In a startlingly short time we see digital systems of computation, communication, feedback, and control emerging as our species’ system of systems. 

Fleshy and rocky, pixelated and political, The Stack is every bit as consequential as climate, geology, society, and other sister systems of planetary activity. Promiscuous by nature, The Stack invites most anything and excludes virtually nothing. It connects us and completes us, rendering the world visible even as it pins us to our place like butterflies to a board. Acknowledging absence as much as presence, The Stack serves as an automagically accommodating host that anticipates, invents, and activates urges, ideas, and outcomes. We leverage The Stack not just because we want to, but because by signing the Terms of Use we are feeding The Stack’s own capacious memory and so informing its actions. 

Convenience and benefit is where the line blurs between user and used

Not simply a host, The Stack is also a parasite that renders us into its host. The physics of exchange are simple; the terms of human exchange less so. But a deal is a deal.

So immense as to hide out in plain sight, The Stack was designed and built by nobody—not knowingly, at any rate, and certainly not all of it. Still, nobody is in charge of its uses, its reach, or its fate, except maybe each of us as we use The Stack to extend our reach and realize our fates. The Stack will oblige by responding in kind.

But really, what is The Stack? To start with, this is not your father’s Internet. Instead, The Stack is a consolidated six-layer meta-platform we can use as an “engine for thinking and building.” It is also a “conceptual model” with which we might apprehend the “coherent totality” of this “technical arrangement of planetary computation.” Bratton’s book lays out the former in service of the latter.

For some readers, The Stack will stand shoulder-to-shoulder with Christopher Alexander’s pattern language tomes, particularly A Timeless Way of Building. Like Alexander, Bratton is deeply committed to identifying patterns in human-made structures and how they support, subsume, and define our humanity and the viability of the world itself. Both writers view citizens as potential architect/builders with an inherent right to program the spaces they inhabit. They both take a fundamentally ecosystemic view of the project of design, construction, and use.

Both writers are stylists, though of very distinct stripes. Alexander can wax poetic, composing his prose to a faintly mystical beat; he rarely cites sources, and there is a messianic slant to his historiography. For all of that, perhaps, Alexander is much loved by readers, whether laypeople or professionals. Bratton, by contrast, crafts intricate prose, employs diction as concrete as his subject is abstract, and deeply sources his inspirations. His worldview, meanwhile, is assiduously non-deterministic. For all that, we shall see how broadly Bratton’s work will be embraced by intellectual, practitioner, and civilian readerships.

Each sentence of The Stack is packed as tight as a Tokyo subway train, forcing the willing reader to double back to pick up what Bratton lays down. The payoff can be thrilling, much like watching the brain-bending eloquence of Cirque du Soleil. Bratton’s wit shines through most every page, in forms both dry and waggish, offering the dyspeptic reader means to metabolize the book’s quarter-million words [2].

But why a “stack”? 

Traditionally, a technology stack comprises hardware and software that runs stuff, with storage servers situated toward the bottom and end-user applications toward the top. Companies, however, tend to overvalue the layer of the stack in which they operate, and they undervalue those above. Bratton’s schema takes air out of the tires of blinkered business models, blanched political projections, and pell-mell individual attempts to make change. The Stack he sees in his mind’s eye runs this gamut and much more. He abstracts application and data elements, replacing them with Cloud, City, and Address, thereby emphasizing the technical, jurisdictional, and identity functions at the heart of The Stack’s raison d’être. Bratton goes on to cap The Stack top and bottom with a pair of layers — User and Earth — typically ignored by technical stacks. The Interface, meanwhile, abides… even if Bratton’s idea of interface is less digital and more architectural than most writers attach to that word [3].
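For readers who think better with something they can paste into a terminal, here is a toy rendering of the six layers in Python. The one-line glosses are my own shorthand, distilled from this review rather than quoted from the book:

# Bratton's six layers, listed bottom to top; the glosses are my shorthand, not his.
THE_STACK = [
    ("Earth", "the material base the whole megastructure draws on and runs on"),
    ("Cloud", "platform infrastructure taking on technical and jurisdictional functions"),
    ("City", "the settings where platforms and everyday life meet"),
    ("Address", "the identity layer: whatever makes people and things addressable"),
    ("Interface", "the coupling of Users to everything below, architectural as much as graphical"),
    ("User", "human and nonhuman agents enrolled at the top of the column"),
]

for layer, gloss in reversed(THE_STACK):  # read from User down to Earth
    print(f"{layer:<10} {gloss}")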

So I’m, like, a designer, right? I’d expect that such a designery book would employ scads of visualized designery designs to sell its suds. Well, I have a warning for readers panting for designporn: The Stack will leave you high and dry. Bratton is crafting an argument here, something never won without words. Indeed, there’s just a single image, found on p. 66 of the book.

In eschewing complex graphical explications, Bratton’s overt bookmanship serves his project. The Stack hexagram lends structural clarity, metaphoric utility, and conceptual coherence that might fertilize bountiful discussions about this most systematic of unsystems whose workings we sense but whose totality we never quite glimpse. 

To name The Stack is to make it visible and therefore subject to interpretation, critique, and use [4]. In any event, Bratton’s six-layer hexagram [5] is not simply a simplification, some kind of user-friendly rhetorical strategy for making his ideas more digestible. Rather than (or in addition to) digestibility, it is a concrete logos that we — makers, theorists, political disrupters, web monkeys, app-slingers, and armchair academics alike — can use to better focus our meditations and actions. There is a portability to it that I quite like. I can easily hoist these ideas up on a whiteboard for my comrades and me to grok its gestalt and locate opportunities to investigate our own particular professional, personal, aesthetic, ethical, and political interests.

In delaminating and delineating each layer’s position, constituents, relationships, and purpose, Bratton’s hexagram helps us see beyond the shibboleths of the-Internet-as-cash-machine, Internet-as-thought-control-machine, Internet-as-Leviathan, and Internet-as-Heaven’s Gate. And that’s a good thing. After all, while the Internet (or now, according to a recent update to The New York Times’ style guide, “the internet”) has provided many of the platforms on which The Stack depends, this accidental megastructure is something far more complex, powerful, and new than all that. Between its great potential and uncertain future, The Stack is still a bit of an ingénue. Put in motion, The Stack is manifesting emergent orders that amount to an asymptotic identity: always approaching but never arriving at a perceptible resolution. We can help it come clean and grow up. 

By considering The Stack as a whole, this book better equips us to contend with this “accidental geodesign that demands from us further, better deliberative geodesign.” When the world itself is seen as information, in other words, the task of organizing all the information is the same as organizing all the world. The map becomes the territory, which renders the converse true, too.

While many aspects of The Stack were the result of deliberate planning, The Stack as a whole was neither conceived nor constructed to envelop and subsume the planet. The notion that the Stack is a planetary sensing/cognition/connection/control system that we unintentionally midwifed is repeatedly emphasized by Bratton. But does that imply that something like mindful action might make a difference here? Indeed, The Stack can seem as resistant to comprehension and design as a hurricane is resistant to science and prayer. A palpable urgency infuses Bratton’s breakneck prose, perhaps reflecting his view that The Stack is already crafting politics, geography, and territory in its own image, with modes of governance that enforce themselves. Indeed, as the press is atwitter with visions of robot armies cresting the hill, we would do well to ensure that the substantive contributions we make to The Stack now will constitute kernels of its ultimate operations. Falling short of that goal will leave us watching our ideas and ideals slip away into quaint and ultimately orphan code, commented out by other better, synthetic minds than our own. 

Should we then just buy sandbags, hunker down, and sweep up after? The prospect of us being stalked by Skynet strongmen seems no more likely to Bratton than the arrival of machines of loving grace foretold by the poet [6]. The flipping toast might just as well land butter-side up as down; we really won’t know until the final instant. In the meantime, we might seize the opportunity and means to do what we can to flip it our way. Will we measure up to the challenge? By positing a coherent totality for The Stack, Bratton offers a perch from which we might exercise influence over the profound and protean implications of our reliance upon, and responsibilities for, The Stack.

Finally, what comes to mind is the musical question [7] posed by Jacob Bronowski, in his book Science and Human Values, as he drove toward Nagasaki in November 1945:

Is you is or is you ain’t my baby
Maybe baby’s found somebody new
Or is my baby still my baby true.


Endnotes

1. I use “us” in that writerly way (you and me) as well as in reference to our fellow Stackmates, including but not limited to users animal, vegetable, and mineral; connections and desertions tacit and explicit; recherchant industrial processes and ethically confused robots; spam sent by the supply chains in hopes of inspiring our refrigerators to place an order in time for dinner.

2. Thumbnail heuristic evaluation: Even with tiny type and slim margins, the hardbound version of The Stack weighs in at a nose-breaking 2.5 pounds. And so while this reader still consumes books composed of sheets of reconstituted trees bound together by glue and pigmented with ink, I urge safety-minded readers to download the electronic version of this weighty tome. The User does so by activating an Interface at a convenient Address in the City to stream Bratton’s colorful disquisitions from the Cloud, which is on, sometimes below, but always of the Earth. (viz: here)

3. While the conception and significance of interfaces will remain non-negligible in my practice, I see the interplay of voice and algorithm gradually rendering interfaces immaterial.

4. The wordfulness of this book also reminds us that the machines comprising The Stack are history’s most inveterate readers; IBM Watson, just to name one, can read 800 million pages per second.

5. For readers following along in their I Ching: we see here hexagram #1: Ch’ien/The Creative.

6. I like to think
(it has to be!)

of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

—Richard Brautigan

7. Quoting the much-loved Renee Olstead lyric.




David Fore

David Fore cut his teeth at Cooper, where he led the interaction design practice for many years. Then he went on to run Lybba, a healthcare nonprofit. Now he leads Catabolic, a product strategy and design consultancy. His aim with this blog is to share tools and ideas designers can use to make a difference in the world.


The joy of procrastination


Authors: Jonathan Grudin
Posted: Mon, July 11, 2016 - 10:37:43

I have long meant to write an essay on procrastination. Having just been sent a link to a TED talk on a virtue of procrastination, I figure this is a good time to move it to the front burner [1].

An alarming stream of research papers describes interventions to get chronic procrastinators like me on the ball: wearable devices, displays mounted in kitchens, email alerts, project schedule sheets, and community discussion groups (think “Procrastinators Anonymous”). Papers on multitasking and fragmented attention suggest that procrastination contributes to problems with stress, health, career, and life in general.

Virtually everyone confesses to occasionally delaying the start or completion of a task. About a fifth of us are classified as chronic procrastinators. If you are with me in the chronic ward, cheer up: I am here to call out the virtues of procrastination.

Procrastination and creativity

The TED talk describes lab studies that support the hypothesis that people who are given a task benefit from incubation—from putting it on a back burner for a while, rather than plunging in. This is consistent with reports that after working for a time on a problem, an insight came out of the blue or in a dream. Obviously, immediately completing a task—no procrastination—leaves no time for incubation. (Waiting until the last minute to engage could also leave no chance for incubation.) The speaker concludes, “Procrastinating is a vice when it comes to productivity, but it can be a virtue for creativity.” Mmmm. Let’s consider productivity virtues that arise from procrastinating.

Procrastination and productivity

My team, once upon a time, was planning a workflow management system. Through daily on-site interviews, we studied the work practices of people with different roles in a manufacturing plant. One was a senior CAD designer. At any point in time, he had been assigned several parts. Each had a due date. When a part was finished, it was checked in. We thought that by tracking check-ins, our system could automatically assign the next part to the designer with the fewest parts not yet checked in.

We made a surprising discovery. Instead of working on a part until finishing it, the outstanding designer procrastinated on every one. He stopped when the remaining work could be completed in about half a day and waited until he was asked for it. This was often a little after the original due date—why did he choose to be late? He did so because parts had dependencies on other parts assigned to other designers; changes in theirs could force changes in his. It was more efficient to accumulate forced changes, then at the end reopen his task, ramp up, and resolve all remaining work. Finishing early would mean that subsequent change requests would force him to reopen the task and possibly undo work. Because final requests for delivery came with a day to spare, having a manageable effort left was fine. Procrastination enhanced productivity. (Unfortunately, our automatic workflow assignment tool was a non-starter: with every task left open, the system could not detect whether a part was 5% or 95% done.)
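For the technically inclined, here is a minimal sketch of the assignment rule we had in mind (hypothetical code, not our actual tool) and a note on why open check-ins defeated it:

from collections import defaultdict

# Hypothetical sketch of the naive rule, not the real tool: assign the next part
# to whichever designer has the fewest parts not yet checked in.
open_parts = defaultdict(int)  # designer -> count of assigned-but-unchecked-in parts

def assign(part, designers):
    designer = min(designers, key=lambda d: open_parts[d])
    open_parts[designer] += 1
    return designer

def check_in(designer):
    open_parts[designer] -= 1

# The rule assumes a check-in tracks real progress. A designer who leaves every part
# 95% done and unchecked-in looks permanently "busy," so the count says nothing about
# how much work actually remains, which is exactly what we found.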

On any group project, you must decide when it is most effective to jump in and when it is better to delay until others have had a turn. Sometimes a request for input is withdrawn before the work is due, benefitting a procrastinator. I know people who ignore email requests, saying “If it is important, they will ask again.” I don’t do that, but if I sense that a request is not firm, I may wait and confirm that it is needed when the requested response date is close but comfortable. If the work remains green-flagged, I may get a benefit, such as incubation or evolving requirements. For example, this essay profited from being put off until the TED talk appeared: I could include the creativity section. 

Another productivity benefit will be understood by anyone with perfectionist tendencies: As long as you know how much time it will take to complete a job, by waiting until just enough time remains, you can eliminate the temptation to waste time tinkering with inconsequential details. The scare literature often links the source of procrastination to a feared or stressful task, which grows more feared and stressful when left to the last minute. In the meantime, it hovers overhead as a source of dread. Yes, it happens. I spent some childhood Sunday afternoons, when homework loomed, watching televised football that didn’t really mean anything to me. I also remember aversive procrastination in college, and 30 years later still have dreams in which I realize that I’m enrolled in a class with an exam approaching that I had forgotten to study for. Perhaps this symbolizes procrastination-induced anxieties? I detect less avoidance-based procrastination now, but tasks do slip that I wish would get done—cleaning out the garage, reading a book, writing a blog post. 

What is easy to overlook, though, is that at least for me, procrastination can be positively wonderful.

The joy of procrastination

When my primary task is a deliverable due at hour H, I estimate the time T that it will take to complete it without undue stress. Then I put off working on it until H-T, leaving just enough time to do it comfortably. In that time, I clear away short tasks such as reviews and reference letters, and then some that are not necessary but are very appealing, such as writing to friends, seeing a movie, starting a book, or writing a blog post. Tasks that are so much fun that accomplishing one produces an endorphin wave. The euphoria carries over to the big task, helping me breeze through it. In this procrastination period I may benefit from incubation and the avoidance of endless tinkering, but the exhilaration is the key benefit.
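Spelled out with made-up numbers, the scheduling rule is nothing fancier than subtraction (a sketch, not a recommendation for anyone else’s deadlines):

from datetime import datetime, timedelta

H = datetime(2016, 7, 15, 17, 0)   # made-up deadline: Friday at 5 p.m.
T = timedelta(hours=10)            # honest estimate of comfortable, stress-free effort
start_no_later_than = H - T
print(start_no_later_than)         # everything before this moment is guilt-free small-task time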

You may wonder, doesn’t this conflict with the advice to provide rewards after making progress on a big task? With kids we say, “Give them candy after they finish the assignment.” I get that, but sometimes the sugar rush gets you through the assignment faster, and in better spirits. 

The skill of procrastinating 

Experience is required to make the judgment calls. The key to realizing positive outcomes of delaying action is the accurate estimation of the time needed to complete a deferred task to your satisfaction without incurring unpleasant stress. Especially when young, it is easy to be overly optimistic about how quickly a task will be completed. Objective self-awareness is required to develop forecasting skill, but with experience and attention we can do it.

Especially when young, I experienced procrastination-amplified dread. I have regrets about tasks postponed until time ran out. When I misjudge, the task usually gets done, but with more stress than I would have preferred. The residue may haunt a future dream. Nevertheless, procrastination has enabled me to accomplish many of the things that I have most loved doing. Having finished this post, I have a big job to get back to.

Endnote

1. After writing this I learned of this article by Adam Grant, the TED speaker. It has more examples. Like my essay, it starts with a confession of procrastination.

Thanks to Gayna Williams, John King, and Audrey Desjardins for discussions and pointers.



Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.


@Agastya309@gmail.com (2016 08 22)

I can really relate to the CAD designer you mentioned. My work is somewhat related. As a UI designer for an evolving system to manage mortgages and property taxes, I get change requests from every stakeholder. Everyone wants it done now. I find delaying actually working on a small bit saves me time, as I can finish a bigger piece with everyone’s 2 cents incorporated. Nearly 90% of such “top priority” CRs are redundant or can be done away with, meaning the same output is available via another, simpler method.

But at least in my case I have to keep everyone informed that so and so will get done. Just, not now. I send regular emails mentioning the due action items which keeps the suits happy. Most people just want an acknowledgment that their brain wave will be considered.

In the end I do whatever I feel is right. With least effort from me. Everyone is happy.


The Receptionist


Authors: Deborah Tatar
Posted: Wed, July 06, 2016 - 1:47:55

My mother is dying. She had a rare gastric cancer in her pyloric valve. The amazing doctors at UCSF took out 70% of her stomach and a bunch of lymph nodes, stitched her up, and gained her another 13 months of good life. Unhappily, she was not among the 50% who make it to five years without recurrence. A few insidious cells escaped annihilation and now she has an inoperable recurrence at the head of the pancreas. She’s done with chemo. Her body cannot tolerate more, and she doesn’t want to spend the rest of her life in pain. Atul Gawande’s “Being Mortal” has become our bible, and she is very clear about what she wants: to live as she has done—independently, swimming every morning, meditating—without pain as long as she can and then to die quickly. She’ll have radiation because it might reduce her pain and probably won’t hurt. 

Her parlous state gives the small events of her life more meaning.

Because she lives pretty far away from UCSF, another oncologist—we’ll call him Simpson—closer to home, had administered the chemotherapy before and after her big operation, under the direction of her main oncologist at UCSF. 

There are a lot of chores associated with being ill. Part of avoiding pain is that she has a port, a line to a vein in her chest cavity that can be used to administer medicine or draw blood without pain. It results in an odd protrusion under the tight skin of her breastbone, as if she had implanted a plastic bottle cap. The port has to be flushed once a month and since she actually does not want to die, especially from infection, she is assiduous about making sure that this is done. 

Last week, she thought to spare herself a lengthy trip into San Francisco so she called Dr. Simpson’s office to see if she could get the port flushed at his office. After the initial call, she called me in high dudgeon:

I called and after I waited and hit all the buttons, eventually I left a message and then when the receptionist finally called me back, I explained the situation and she said that if I waited ten days, I could have an appointment during my swimming time, which was bad enough, but I had to have an appointment with the doctor first. I asked why I had to see Dr. Simpson if all I needed was my port flushed, and she said, “It’s the rule.” I asked what kind of rule that was—the doctor knows me, he’s in contact with my oncologist (at UCSF) and he knows that I’m not having any more chemo—but the receptionist just kept saying that it was the rule. I was so mad!

It wasn’t the effort of seeing the doctor that enraged her. It was what she perceived as waste. “They’re just money-grubbers! They’re getting rich off of Medicare!” She had resolved that, instead of seeing Simpson to get the port flushed, she would haul herself two hours each way by car, BART (Bay Area Rapid Transit), and shuttle bus to San Francisco. 

After arranging the flush with UCSF (“They were so nice!”), she called Dr. Simpson’s office back to cancel. Then, she called me in a state of enhanced indignation. She reported that after the receptionist canceled the flush, she said, “But what about your appointment with the doctor?” Mom replied, “I don’t need an appointment with the doctor.” The receptionist said, “But there’s a note here that says that you are supposed to have a check up with him every four to six months. You need to make an appointment.” Mom reasonably explained, “He’s not my main oncologist. He just administered the chemo that I had last year. I’m not having any more chemo. I don’t need to see him.” This is where things began to be weird. The receptionist replied, with some anxiety, “But there’s a note here that says that he wants to see you every four to six months.” “But I’m not having any more chemotherapy.” “But there’s a note here. The doctor wants to see you.” Evidently, they went around the block a couple more times in increasingly emotional tones until Mom finally replied, “I’m not having any more chemo. I’m not coming back to see Dr. Simpson. I’m having radiation and then I’m going to die. Good-bye.” 

If we were to argue about this interaction, Mom would see the receptionist as autonomous and responsible for her own behavior. I’m a softer woman than my mother, and I would have felt sorry for the receptionist. From my point of view, Mom was arguing with someone who was in bondage to an information system.

Yet, Mom’s ground truth was rooted not only in her lifelong habit of expecting and demanding dignity from herself and others, but also in the test of this approach in the face of impending death. 

The system that dominated the receptionist’s behavior had no place for Mom’s actual condition, although Mom’s care was its putative object. It attempted to erase her pain, her fatigue, her agency—her identity. She resisted. Dealing with such systems is not part of living a good life. Yet resisting such systems might be—by resisting, she was asserting herself and also asking the receptionist to be better than the system. And the receptionist might have seen her power to behave differently in similar situations. Something might have been learned.

There are two views about how socio-technical systems should be designed. One is that the computer system itself should have been designed to encourage negotiation between the receptionist and my mother. The other is that the receptionist should be trained to take more power over the system. But the need for either or both of these is invisible. It is rare for the death of a thousand cuts suffered by countless system users to be visible to even one other person. 

I admire my mother’s insistence on resistance to the machine, writ large. I would like to die, and to live, as she has. I cannot do anything about her cancer or her impending death, but we designers can strive to perceive and advocate for issues of dignity and compassion even when these issues are not widely acknowledged.



Posted in: on Wed, July 06, 2016 - 1:47:55

Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.


Brigitte (Gitti) Jordan: An obituary


Authors: Elizabeth Churchill
Posted: Thu, June 23, 2016 - 2:43:25

It is with great sadness that we report on Brigitte (Gitti) Jordan’s death; Gitti died on May 24, 2016, at her home in La Honda, California, surrounded by loved ones. She was 78.

While the influence of anthropology and the practice of ethnography may seem very familiar to us in the HCI and technology design world today, it was not always so. During the 1980s and 1990s, a number of trailblazing researchers laid the groundwork for what we now take for granted when it comes to methods for understanding human activities with and around technologies. Gitti was one of those trailblazers. Gitti’s worldview was broad, but she always put people and their interests at the center, whether she was studying childbirth, use of productivity tools, or autonomous vehicles. Her field sites included village huts, corporate research labs, and virtual worlds. 

One of the pioneers of business and corporate ethnography and what we now call design ethnography, Gitti always emphasized grounding design, and especially technology design, by understanding people's everyday landscapes, their needs, values, behaviors, and settings. Although SIGCHI and Interactions were not among Gitti’s intellectual and publishing “homes,” she certainly influenced a number of people and projects that are part of the SIGCHI canon. Gitti coined the term “lifescapes of the future” and consistently used research on what is happening now to speculate about the changes that are likely to be wrought by the introduction of new technologies. 

Gitti’s curiosity about technologies in use by consumers, but also by technologists, went back a long way, and throughout her long career she sustained an engaged yet highly critical relationship with computer modelers, cognitive scientists, and artificial intelligence researchers. Her master’s thesis, Diffusion, Models and Computer Analysis: A Simulation of the Diffusion of Innovations, explored how computer simulations might be better exploited by anthropologists and earned her an M.A. from Sacramento State College in 1971. 

Gitti earned her Ph.D. at the University of California, Irvine, where she engaged deeply with developments in ethnomethodology and conversation analysis, emerging thinking in cognition (now known as situated and distributed cognition), learning theory, and more. She also furthered her interest in computer science, taking a course with the young professor John Seely Brown, who went on to lead Xerox PARC (Palo Alto Research Center) and later invited Gitti to join PARC. At PARC, Gitti worked with Lucy Suchman, Jeanette Blomberg, Julian Orr, and other pioneers to advance the contributions of anthropological and ethnographic study of complex technology. At the same time, she became a senior researcher at the Institute for Research on Learning (IRL), where she played a central role in establishing IRL’s depth of focus and understanding of processes of social learning wherever it is found. Gitti led numerous teams through rich and challenging projects in corporate workplace settings to examine and help support meaningful knowledge economies. She had a keen interest in methodology, leading regular interaction analysis labs at both PARC and IRL. 

In the last years of her life, she consulted to the Nissan Research Center in Silicon Valley, a lab led by artificial intelligence scientists and roboticists aiming to develop autonomous vehicles. In this environment, Gitti insisted on the need to dedicate equal, if not more, attention to the human implications of this emerging technology.

Gitti was characterized by her boundless curiosity, personal warmth, and encouraging style. Her standards regarding ideas and empirical realities and their interactions were high and exacting. Again and again, people point to her as the reason they are doing what they are doing—and, more importantly, as the one who helped them find and rekindle a sense of excitement and importance in the work that they do. She was respected, admired, and loved by her colleagues, family, and friends, and her multiple legacies will live on as others continue to carry forward her work in the major fields she helped to found. 

Written with gracious assistance and contributions from Melissa Cefkin, Bob Irwin, Robbie Davis Floyd, Lucy Suchman, Jeanette Blomberg, and Susan Stucky.

Posted in: on Thu, June 23, 2016 - 2:43:25

Elizabeth Churchill

Elizabeth Churchill is a director of user experience at Google. She has been a scholar and research manager focused on human-computer interaction for over 20 years. A Distinguished Scientist of the ACM, her current work focuses on HCI aspects of the social web and the emerging Internet of Things.


Technology and liberty


Authors: Jonathan Grudin
Posted: Tue, May 03, 2016 - 9:22:39

The absence of plastic microbeads in the soap led to a shower spent reflecting on how technologies can constrain liberties, such as those of microbead producers and consumers who are yearning to be clean.

Technologies that bring tremendous benefits also bring new challenges. Sometimes they create conditions conducive to oppression: oppression of the weak by the strong, the poor by the rich, or the ignorant by the clever. Efforts on behalf of the weak, poor, and ignorant often infringe on the liberty of the strong, rich, and clever. As powerful technologies proliferate, our survival may require us to get the balance right. Further constraints on liberty will balance new liberating opportunities.

Let’s start back a few million years, before beads were micro and technologies changed everything.

Fission-fusion and freedom

For millions of years our ancestors hunted and gathered in fission-fusion bands. A group grew when food was plentiful; in times of scarcity, it split into smaller groups that foraged independently. When food was again plentiful, small groups might merge… or might not. A fission-fusion pattern made it relatively easy for individuals, couples, or small groups to separate and obtain greater independence. This was common: Homo sapiens spread across the planet with extraordinary rapidity, adapting to deserts, jungles, mountaintops and the arctic tundra. That freedom led to the invention of diverse cultural arrangements.

1. Agriculture and the concentration of power

It is a law of nature, common to all men, which time shall neither annul nor destroy, that those who have more strength and power shall rule over those who have less. – Dionysius of Halicarnassus

Agriculture was a transformative technology. Food sufficiency turned roaming hunter-gatherers into farmers and gave rise to large-scale social organization and an explosion in occupations. Everyone could enjoy the arts, crafts, diverse foods, sports, medicine, security, and potential religious salvation, but with it came implicit contracts: Artists, craftspeople, farmers, distributors, athletes, healers, warriors, and priests were guaranteed subsistence. People were collectively responsible for each other, including many they would never meet, individuals outside their immediate kinship groups. People who wanted freedom might slip away into the wilderness, but those who reaped the benefits of civilization were expected to conform to cultural norms that often encroached on personal liberty.

The leader of a hunter-gatherer band had limited power, but agriculture repeatedly spawned empires ruled by despots—pharaohs in Egypt, Julio-Claudian emperors in Rome, and equally problematic rulers in Peru, Mesoamerica, and elsewhere. The Greek historian Dionysius lived when Rome was strong and powerful.

Why this pattern? Governments were needed for security and order: to protect against invasion and to control the violence between kinship groups that was common in hunter-gatherer settings but interferes with large-scale social organization.

The response to oppression of the weak by the powerful? Gradually, more democratic forms of government constrained emperors, kings, and other powerful figures. Today, violence control is the rule; strong individuals or groups can’t ignore social norms. Even libertarians acknowledge a role for military and police in safeguarding security and enforcing contracts that the strong might violate if they could.

2. The second industrial revolution and the concentration of wealth

Another technological revolution yielded a new problem: oppression of the poor by the wealthy. In the early 20th century, monopolistic robber barons in control of railroads and mines turned workers into indentured servants. Producers could make fortunes by using railroads to distribute unhealthy or shoddy goods quickly and widely; detection and redress had been much easier when all customers were local.

The response to the oppression of the poor by the wealthy? Perhaps to offset the rise of populist or socialist movements, the United States passed anti-trust legislation in the early 20th century, giving the government a stronger hand in regulating business.  Also, the interstate commerce clause of the Constitution was applied more broadly, encroaching on the liberty of monopolists and others who might use manufacturing and transportation technologies exploitatively. It was a steady process. Ralph Nader’s 1965 book Unsafe at Any Speed identified patterns in automobile defects that had gone unnoticed and triggered additional consumer protection legislation. In contrast, after a loosening of regulations that enabled wealthy financiers to wreck the world economy a decade ago, the 2010 Dodd–Frank Wall Street Reform and Consumer Protection Act constrained the liberty of the wealthy, an effort to head off a recurrence that may or may not prove sufficient. Some libertarians on the political right, such as the Koch brothers, are vehemently anti-regulation, but for a century most people have accepted constraints [1].

3. Information technology and the concentration of knowledge

Libertarian friends in the tech industry believe that they desire the freedom of the cave-dweller. Sort of. Not strong and powerful, they support our collective endeavors to maintain security and enforce signed contracts. They are not among the 1%, either, and they favor preventing the very wealthy from reducing the rest of us to indentured servitude in the manner of robber baron monopolists.

However, my libertarian tech friends are clever, and they oppose limiting the ability of the intelligent to oppress the less intelligent through contracts with implications or downstream effects that the less clever cannot figure out: “The market rules, and a contract is a contract.” Technology that provides unencumbered information access gives an edge to sharp individuals. The Big Short illustrated this; banks outsmarted less astute homeowners and investors, then a few very clever fellows beat the bankers, who succeeded in passing on most of their losses to customers and taxpayers.

The response to the oppression of the slow by the quick-witted? A clear example is the 1974 U.S. Federal Trade Commission rule that designates a three-day “cooling-off period” during which anyone can undo a contract signed with a fast-talking door-to-door salesman. Europe has also instituted cooling-off periods. The U.S. law applies to any sale for over $25 made in a place other than the seller’s usual place of business. How this will be applied to online transactions is an interesting question. More generally, though, information technology provides ever more opportunities for the quick to outwit the slow. We must decide, as we did with the strong and the rich, what is equitable.

Butterfly effects

Technology has accelerated the erosion of liberty by accelerating the ability of an individual to have powerful effects on other individuals. Twenty thousand years ago, a bad actor could only affect his band and perhaps a few neighboring groups. In agrarian societies, a despot’s reach could extend hundreds of miles. Today, those affected can be nearby or in distant places, with an impact that is immediate and evident, or delayed and with an obscure causal link. It can potentially affect the entire planet. It is not only those with a finger on a nuclear button who can do irreparable damage. Harmful manufactured goods can spread more quickly than a virus or a parasite. A carcinogen in a popular product can soon be in most homes.

We who do not live alone in a cave are all in this together, signatories to an implicit social contract that may be stronger than some prefer, which limits our freedom to do as we please. Constraining liberty is not an effort to deprive others of the rewards of their efforts. It is done to protect people from those who might intentionally or unintentionally, through negligence, malfeasance, oppression, or simply lack of awareness, violate the loose social contract that for thousands of years has provided our species with the invaluable freedom to experiment, innovate, and trust one another—or leave their society to build something different. If the powerful, wealthy, or clever press their advantage too hard, we risk becoming a distrustful, less productive, and less peaceful society.

Plastic microbeads in cosmetics and soaps spread quickly, accumulating by the billions in lakes and oceans, attracting toxins and adhering to fish, reminiscent of the chlorofluorocarbon buildup that once devastated the ozone layer. In 2013 the UN Environment Programme discouraged microbead use. Regional bans followed. Even an anti-regulatory U.S. Congress passed the Microbead-Free Waters Act of 2015. It only applies to rinse-off cosmetics, but some states went further. The most stringent, in California of course, overcame opposition from Procter & Gamble and Johnson & Johnson. Our creativity has burdened us with the responsibility for eternal vigilance in detecting and addressing potential catastrophes.

Endnote

1. Politicians who favor freedom for themselves but would, for example, deny women reproductive choices might not seem to fit the definition of libertarian, but some claim that mantle.

Thanks to John King and Clayton Lewis for discussions and comments, and to my libertarian friends for arguing over these issues and helping me sort out my thoughts, even if we have not bridged the gap.


Posted in: on Tue, May 03, 2016 - 9:22:39

Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.


Oh, the places I will go


Authors: Joe Sokohl
Posted: Fri, April 29, 2016 - 12:26:09

Over the decades conferences, symposia, webinars, and summits all have formed critical portions of my professional development. I learned about controlled vocabularies and usability testing and the viscosity of information and personas and information visualization and so much more from attending two- or three-day events—and even local evening presentations from peers and leaders alike, all centered on this thing of user experience.

So along with dogwood blossoms and motorcycling weather, the conference season blooms anew. This year finds me focusing on three events, with the anticipation of meeting up with friends both met and as-yet unknown.

The Information Architecture Summit
May 4–8, 2016, Atlanta, GA
Iasummit.org

I’ve attended all but two of the summits since the event’s inception at the Boston Logan Hilton in 2000. That year, the summit saw itself as a needed discussion and intersection point between the information-design-oriented (and predominantly West Coast) information architects and the library-science-oriented East Coasters. 

When it began, as the dotcom wave ebbed, the American Society for Information Science (and, later, Technology) decided to hold the summit only over the weekend and in an airport hotel, to reduce the impact of folks having to miss work. We had so much to do back then, didn’t we?

As it grew, the IA Summit expanded the number of days of the core conference and added several days of workshops before the summit itself.

Because I’ve attended all but two summits (2003 and 2004), I know a lot of the folks who have woven in and out of its tapestry. So, for me, this event is as important for inspiration from seeing who’s doing what and catching up with people as it is in learning from sessions.

But learning is a core component; last year’s summit in Minneapolis reignited an excitement for IA through Marsha Haverty’s “What We Mean by Meaning” and Andrew Hinton’s work on context and embodied cognition. 

So expect heady bouts with science, technology, and philosophy, alongside practical work in IA. Oh, and come see me speak on Sunday, if ya’d like.

Then there’s the Hallway, where the conference really takes place. From conversations to karaoke, from game night to the jam, the IA Summit creates a community outside the confines of the mere conference itself.

Enterprise User Experience
June 8–10, 2016, in San Antonio, TX
EnterpriseUX.com

Last year’s inaugural Enterprise UX conference took me and much of the UX world by storm: a much-needed conference focused on the complexity of enterprise approaches to user experience. Two days of single-track sessions followed by a day of optional workshops provided great opportunities for learning, discussion, and debate. 

Dan Willis highlighted a wonderfully unique session showcasing eight storytellers rapidly recounting their personal experiences in ways at once humorous and poignant. 

This year, luminaries such as Steve Baty from Meld Studios in Sydney, MJ Broadbent from GE Digital, and Maria Giudice from Autodesk will be among a plethora of great speakers.

These themes guide the conference this year:

  • How to Succeed when Everyone is Your User
  • Growing UX Talent and Teams
  • Designing Design Systems
  • The Politics of Innovation

Plus, the organization of the conference is simply stellar. Props to Rosenfeld Media for spearheading this topic!

edUi
October 24–26, 2016, in Charlottesville, VA
eduiconf.org

This conference is as much of a labor of love and devotion to the field of UX in EDU as anything. Also, I’ve been involved since its inception: The first year I was an attendee, the second year a speaker, and ever since I’ve been involved in programming and planning. So, yeah, I’ve a vested interest in this conference.

Virginia Foundation for the Humanities’ (VFH) Web Communications Officer Trey Mitchell and former UVA Library programmer Jon Loy were sitting around one day, thinking about conferences such as the IA Summit, the Interaction Design Association’s conference, Higher Ed Web, and other cool UX-y conferences and thought, “Why don’t we create a conference here in Virginia that we’d wanna go to?”

Well, there’s a bit more to the story, but they created a unique event focused on the .edu crowd—museums, universities, colleges, libraries, institutes, and foundations—while also providing great content for anyone in the UX space.

As the website says, “edUi is a concatenation of ‘edu’ (as in .edu) and ‘UI’ (as in user interface). You can pronounce it any way you like. Some people spell it out like ‘eee dee you eye,’ but most commonly we say it like ‘ed you eye.’”

Molly Holzschlag, Jared Spool, and Nick Gould stepped onto the podia in 2009, among many others. Since then, Trey and company have brought an amazing roster of folks. 

For the first two years, the conference was in Charlottesville. Then it moved to Richmond for four years. Last year it returned to Charlottesville and Trey led a redesign of the conference. From moving out of the hotel meeting rooms and into inspiring spaces along the downtown Charlottesville pedestrian mall to sudden surprises of street performers during the breaks, the conference became almost a mini festival where an informative conference broke out.

This year promises to continue in that vein. So if a 250-ish-person conference focused on issues of UX that lean toward (but aren’t exclusively) .edu-y sounds interesting…meet me in Charlottesville.



Posted in: on Fri, April 29, 2016 - 12:26:09

Joe Sokohl

For 20 years Joe Sokohl has concentrated on crafting excellent user experiences using content strategy, information architecture, interaction design, and user research. He helps companies effectively integrate user experience into product development. Currently he is the principal of Regular Joe Consulting, LLC. He’s been a soldier, cook, radio DJ, blues road manager, and reporter once upon a time. He tweets at @mojoguzzi and blogs at sokohl.com.


Violent groups, social psychology, and computing


Authors: Juan Hourcade
Posted: Mon, April 25, 2016 - 1:55:08

About two years ago, I participated in the first Build Peace conference, a meeting of practitioners and researchers from a wide range of backgrounds with a common interest in using technologies to promote peace around the world. During one session, the presenter asked members of the audience to raise their hands if they had lived in multiple countries for an extended period of time. Most hands in the audience went up, which was at the same time a surprise and a revelation. Perhaps there is something about learning to see the world from another perspective, as long as we are receptive to it, that can lead us to see our common humanity as more binding than group allegiances.

It’s not that group allegiances are necessarily negative. They can be very useful for working together toward common goals. Moreover, most groups use peaceful methods toward constructive goals. The problems come when strong group allegiances intersect with ideologies where violence is a widely accepted method, and dominion over (or elimination of) other groups is a goal.

A recent Scientific American Mind magazine issue with several articles on terrorism highlights risk factors associated with participation in groups supporting the use of violence against other groups. A consistent theme is the strong sense of belonging to a particular group, to the exclusion of other groups, in some cases including family and childhood friends, together with viewing those from other groups as outsiders to be ignored or worse.

Information filters or bubbles can play a role in isolating people so they mostly have a deep engagement with the viewpoints of only one group, and can validate extreme views with people outside of their physical community. These filters and bubbles are not new to the world of social media, but they are easily realized within it as competing services attempt to grasp our attention by providing us with content we are more likely to enjoy.

At the same time, interactive technology and social media can be the remedy to break out of these filters and bubbles. To think about what some of these remedies may be, I discuss some articles that can provide motivations, all cited in the previously mentioned Scientific American Mind issue.

The first area in which interactive technologies could help is in making us realize that our views are not always broadly accepted. This is to avoid a challenge referred to as the “false consensus effect,” through which we often believe that our personal judgments are widely shared by others. Perhaps providing a sense of the relative commonality (or rarity) of certain beliefs could be useful.

Sometimes we may not have strong feelings about something, and when that is the case, we tend to copy the decisions of others who we feel resemble us most, while disregarding those who are different. It’s important in this case, then, to highlight experts from outside someone’s group, as well as to help people realize that those from other groups often make decisions that would work for us too.

Allegiances to groups can get to the point of expressing willingness to die for one’s group when people feel that their identity is fused with that of the group. Interactive technologies could help in this regard by making it easier to identify with multiple groups, so that we don’t feel solely associated with one.

As I mentioned earlier, being part of a tight group most of the time does not lead to problems, and can often be useful. But what if the group widely accepts the use of violence to achieve dominance over others? One way to bring people back from these groups is to reconnect them with memories and emotions of their earlier life, helping them reunite with family and old friends. Social media already does a good job of this, but perhaps there could be a way of highlighting the positives from the past in order to help. With a bit of content analysis, it would be possible to focus on the positive highlights.
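
The “bit of content analysis” mentioned above could be as simple as scoring a person’s own archive with a small sentiment lexicon and surfacing the most positive posts. The Python sketch below is a minimal illustration under that assumption; the word lists, data format, and scoring rule are invented for the example and do not describe any existing platform feature.

# A minimal sketch of lexicon-based sentiment scoring over a personal archive,
# surfacing the most positive memories. Word lists and data format are invented.

POSITIVE = {"love", "happy", "proud", "friend", "family", "laugh", "birthday"}
NEGATIVE = {"hate", "angry", "enemy", "fight", "alone", "betrayed"}

def sentiment_score(text):
    """Crude score: +1 for each positive word, -1 for each negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def positive_highlights(posts, top_n=5):
    """Return up to top_n posts with the highest positive scores."""
    scored = sorted(posts, key=lambda p: sentiment_score(p["text"]), reverse=True)
    return [p for p in scored[:top_n] if sentiment_score(p["text"]) > 0]

if __name__ == "__main__":
    archive = [
        {"date": "2009-06-12", "text": "So happy at my sister's birthday with family and friends"},
        {"date": "2014-02-03", "text": "Angry at everyone, feel so alone"},
    ]
    for post in positive_highlights(archive):
        print(post["date"], "-", post["text"])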

There is obviously much more to consider and discuss within this topic. I encourage you to continue this discussion in person during the Conflict & HCI SIG at CHI 2016, on Thursday, May 12, at 11:30am in room 112. See you there!



Posted in: on Mon, April 25, 2016 - 1:55:08

Juan Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.


Collateral damage


Authors: Jonathan Grudin
Posted: Tue, April 05, 2016 - 12:06:30

Researchers are rewarded for publishing, but this time, my heart wasn’t in it.

It was 2006. IBM software let an employer specify an interval—two months, six months, a year—after which an email message would disappear. This was a relatively new concept. Digital storage had been too expensive to hang onto much, but prices had dropped and capacity increased. People no longer filled their hard drives. Many saved email.

When IBM put automatic email deletion into practice, a research manager asked her IT guy to disable it. “That would be against policy,” he pointed out. She replied, “Disable it.” Another IBM acquaintance avoided upgrading to the version of the email system that included the feature. When she returned from a sick leave, she found that a helpful colleague had updated her system. Her entire email archive was irretrievably gone.

“We call it email retention, but it’s really email deletion.”

Word got around that Microsoft would deploy a new “managed email” tool to all North American employees, deleting most messages after six months (extended grudgingly to 12 when some argued email was needed in preparing for annual reviews). Because of exceptions—for example, patent-related documents must be preserved for 10 years—employees would have to file email appropriately.

Many researchers, myself included, prefer to hang onto stuff indefinitely. I paused another project to inquire and learned that a former student of mine was working on it. A pilot test with 1,000 employees was underway, he said. In a company of 100,000, it is easy not to hear about such things. He added that it was not his favorite project, and soon left the team.

Our legal division had assembled a team of about 10 to oversee the deployment. Two-thirds were women, including the group manager and her manager. They were enthusiastic. Many had voluntarily transferred from positions in records management or IT to work on it. My assumption that people embracing email annihilation were authoritarian types quickly proved wrong; it was a friendly group with bohemian streaks. They just didn’t like large piles of email.

I had assumed that the goal of deleting messages was to avoid embarrassing revelations, such as an admission that smoking is unhealthy or a threat to cut off a competitor’s air supply. Wrong again. True, some customers clamoring for this capability had figured prominently in questionable government contracting and environmental abuse. But it is a crime to intentionally delete inculpatory evidence and, I was told, litigation outcomes are based on patterns of behavior, not the odd colorful remark that draws press notice.

Why then delete email? Not everyone realized that storage costs had plummeted, but for large organizations, the primary motive was to reduce the cost of “ediscovery,” not hardware expenditures.

Major companies are involved in more litigation than you might think. Each party subpoenas the other’s correspondence, which is read by high-priced attorneys. They read their side’s documents to avoid being surprised and to identify any that need not be turned over, such as personal email, clearly irrelevant email, and any correspondence with an attorney, which as we know from film and television falls under attorney-client privilege. A large company can spend tens of millions of dollars a year reading its employees’ email. Reduce the email lying around to be discovered, the thinking went, and you reduce ediscovery expenses.

Word of researcher unhappiness over the approaching email massacre reached the ears of the company’s Chief Software Architect, Bill Gates. We were granted an exemption: A “research” category was created, similar to that for patent-related communication.

Nevertheless, I pursued the matter. I asked the team about the 1000-employee pilot deployment. The response was, “The software works.” Great, but what was the user experience? They had no idea. The purpose of the pilot was to see that the software deleted what it should—and only what it should. The most important exception to automatic deletion is “litigation hold”: Documents of an employee involved in litigation must be preserved. Accidental deletion of email sent or received by someone on litigation hold could be catastrophic.

The deployment team was intrigued by the idea of asking the early participants about their experiences. Maybe we would find and fix problems, and identify best practices to promote. This willingness to seek out complaints was to the team’s credit, although I was realizing that they and I had very different views of the probable outcome. They believed that most employees would want to reduce ediscovery costs and storage space requirements, and about that they were right. But they also believed that saving less email would increase day to day operational efficiency, whereas my intuition was that it would reduce efficiency, and not by a small amount. But I had been wrong about a lot so far, a not uncommon result of venturing beyond the ivory tower walls of a research laboratory, so I was open-minded.

“Doesn’t all that email make you feel grubby?”

My new collaborators often invoked the term “business value.” The discipline of records management matured at a time when secretaries maintained rolodexes and filing cabinets organized to facilitate information retrieval. Despite such efforts, records often proved difficult to locate. A large chemical company manager told me that it was less expensive to run new tests of the properties of a chemical compound than to find the results of identical tests carried out years earlier.

To keep things manageable back then, only records that had business value were retained. To save everything would be painful and make retrieval a nightmare. Raised in this tradition, my easygoing colleagues were uncompromising in their determination to expunge my treasured email. They equated sparsity with healthy efficiency. When I revealed that I saved everything, they regarded me sadly, as though I had a disease.

I have no assistant to file documents and maintain rolodexes. I may never again wish to contact this participant in a brief email exchange—but what if five years from now I do? Adding everyone to my contact list is too much trouble, so I keep the email, and a quick search based on the topic or approximate date can retrieve her in seconds. It happens often enough.

I distributed to the pilot participants an email survey comprising multiple choice and open-ended response questions. The next step was to dig deeper via interviews. Fascinated, the deployment team asked to help. Working with inexperienced interviewers does not reduce the load, but the benefits of having the team engage with their users outweighed that consideration. I put together a short course on interview methods.

“Each informant represents a thousand other employees, a million potential customers—we want to understand the informant, not convert them to our way of thinking. For example, someone conducting a survey of voter preferences has opinions, but doesn’t argue with a voter who differs. If someone reports an intention to vote for Ralph Nader, the interviewer doesn’t shout, ‘What? Throw away your vote?’”

Everyone nodded.

“In exchange for the informant trusting us with their information, our duty is to protect them.” After they nodded again, I continued with a challenging example drawn from the email survey: “For example, if an employee says that he or she gets around the system by using gmail for work-related communication—”

White-faced, a team member interrupted me through clenched teeth, “That would be a firing offense!”

At the end of the training session, the team manager said, “I don’t think I’ll be able to keep myself from arguing with people.” Everyone laughed.

The white-faced team member dropped out. Each interview save one was conducted by one team member and myself, so I could keep it on track. One interview I couldn’t attend. The team manager and another went. When I later asked where the data were, they looked embarrassed. “We argued with him,” the team manager reported. “We converted him.”

My intuition batting average jumps to one for three

The survey and interviews established that auto-deletion of email was disastrously inefficient. The cost of the time that employees spent categorizing email as required by the system outweighed ediscovery costs. Time was also lost reconstructing information that had only been retained in email. “I spent four hours rebuilding a spreadsheet that was deleted.”

Workarounds contrived to hide email in other places took time and made reviewing messages more difficult. Such workarounds would also create huge problems if litigants’ attorneys became aware they existed, as the company would be responsible for ferreting them out and turning everything over.

Most damning of all, I discovered that managed email would not reduce ediscovery costs much. The executives and senior managers whose email was most often subpoenaed were always on litigation hold for one case or another, so their email was never deleted and would have to be read. The 90% of employees who were never subpoenaed would bear virtually all of the inconvenience.

Finally, ediscovery costs were declining. Software firms were developing tools to pre-process and categorize documents, enabling attorneys to review them an order of magnitude more efficiently. At one such firm I saw attorneys in front of large displays, viewing clusters of documents that had been automatically categorized on several dimensions and arranged so that an attorney could dismiss a batch with one click—all email about planning lunch or discussing performance reviews—or drill down and redirect items. That firm had experimented with attorneys using an Xbox handset rather than a keyboard and mouse to manipulate clusters of documents. They obtained an additional 10% increase in efficiency. However, they feared that customers who saw attorneys using Xbox handsets would conclude that these were not the professionals they wanted to hire for a couple hundred dollars an hour, so the idea was dropped. Nevertheless, ediscovery costs were dropping fast.
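
To give a flavor of what such pre-processing and categorization involves, here is a minimal Python sketch that groups messages with TF-IDF features and k-means (via scikit-learn) into batches an attorney could triage together. The real ediscovery tools described above were far more sophisticated; the sample messages, features, and cluster count here are assumptions for illustration only.

# A minimal sketch of document clustering for review: vectorize each message
# with TF-IDF, then group similar messages so a reviewer can handle a batch
# (e.g., all lunch-planning email) with one decision. Illustrative only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "Lunch on Thursday? The usual place at noon.",
    "Let's plan lunch for the team offsite next week.",
    "Draft performance review comments for the Q3 cycle attached.",
    "Reminder: performance review self-assessments are due Friday.",
    "Updated spreadsheet with the contract renewal terms.",
    "Contract renewal: legal needs the revised terms by Monday.",
]

# Turn each message into a sparse TF-IDF vector (common stop words removed).
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

# Group the messages into a handful of clusters for batch review.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

for cluster_id in range(3):
    print(f"Cluster {cluster_id}:")
    for doc, label in zip(documents, labels):
        if label == cluster_id:
            print("  -", doc)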

Positive outcomes

At the ediscovery firm, I asked, “Are there changes in Exchange that would help improve the efficiency of your software?” Yes. A manager in Exchange told me that ediscovery firms were a significant market segment, so I connected them and a successful collaboration resulted.

We found ways to improve the interface and flow of our “email retention” software, reducing the inconvenience for anyone who would end up using it.

I learned about different facets of organizations, technology use, and people. I loved working with the records management team and the attorneys. Attorneys in tech companies are relaxed, funny, and have endless supplies of stories. They never let you record an interview, but they are invariably good company.

As the date for the deployment to 50,000 Microsoft North America employees approached, word of our study circulated. The executive vice president overseeing our legal division convened a meeting that was run with breathtaking efficiency, like a scene in The West Wing. He turned to me. “We created a ‘research’ exemption for Microsoft Research. Why are you here?” I said, “I’m doing this for Microsoft, not MSR.”

The deployment was cancelled.

The product was released. Customers wanted it. A partner at a major Bay Area law firm heard of the study and phoned me. He was interested in our analysis of efficiency, but noted that for some firms, profitability was the only issue. “Consider Philip Morris,” he said. “One of their businesses is addicting people to something that will kill them. As long as that business is profitable, they will stay in it. If it ceases being profitable, they will get out. Efficiency isn’t a concern.”

Collateral damage

I saw a cloud on the horizon. The raison d’être of the team that had welcomed me was to oversee a deployment that would not happen. What would they do? I formed a plan. When a subpoena arrives, all affected employees are put on litigation hold. Their email and documents are collected and read by attorneys to identify relevant material. Determining relevance is not easy. It could be signaled by the presence or absence of project code names that evolved over time. People involved in discussions may have left the company. It is often difficult to determine which project a short email message refers to. Some employees file information under project names, but others rely on message topic, sender, recipients, date, urgency, or a combination of features. Some don’t file much at all, relying on Inboxes or other files holding thousands of uncategorized messages. Attorneys sit at computers trying to reconstruct a history that often spans several years and scores or hundreds of people.

I thought, “Here is the opportunity for email management.” Armed with tools and procedures, the team could help attorneys sort this out by working with the employees on litigation hold: identifying attorneys with whom privileged email was exchanged, listing relevant project code names, indicating colleagues always or usually engaged in communication relevant to the subpoenaed project and those wholly unrelated, and so on. This could greatly reduce the time that expensive attorneys spent piecing this together.
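
As a rough illustration of the triage this proposal imagines, the Python sketch below tags a custodian’s messages as privileged, likely relevant, or likely irrelevant using lists an employee might help compile: counsel addresses, project code names, clearly unrelated correspondents. The field names, addresses, and rules are hypothetical; actual litigation-hold review would, of course, be far more careful.

# A hypothetical triage pass over one custodian's mailbox, using employee-supplied
# lists of counsel addresses, project code names, and unrelated correspondents.

PRIVILEGED_COUNSEL = {"counsel@example.com"}          # attorneys (privilege)
PROJECT_CODE_NAMES = {"project falcon", "falcon v2"}  # names for the subpoenaed project
UNRELATED_SENDERS = {"cafeteria@example.com"}         # clearly unrelated correspondents

def triage(message):
    """Return a coarse review bucket for one message (dict with from/to/subject/body)."""
    participants = {message["from"], *message["to"]}
    text = (message["subject"] + " " + message["body"]).lower()
    if participants & PRIVILEGED_COUNSEL:
        return "privileged"
    if participants & UNRELATED_SENDERS:
        return "likely irrelevant"
    if any(name in text for name in PROJECT_CODE_NAMES):
        return "likely relevant"
    return "needs attorney review"

if __name__ == "__main__":
    msg = {"from": "alice@example.com", "to": ["bob@example.com"],
           "subject": "Falcon v2 schedule", "body": "Updated dates for project falcon."}
    print(triage(msg))   # prints: likely relevant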

I worked on a proposal, but I was not fast enough. The legal division makes an effort to reassign attorneys whose roles are no longer needed, but it does not generate jobs for surplus records managers. The team was laid off, including the manager and her manager.

They had heeded a call to take on important work for the company. The positions they had left had been filled. They had welcomed me and worked with me, and it cost them their jobs. “Our duty is to protect our informants,” I had taught them, and then I failed to do it.

A few found other positions in the company for a time. None remain today. Before leaving, they held a party. It could have been a wake, but it was labeled a project completion celebration. Lacking a “morale budget,” it was potluck. An artist in the group handed out awards. Mine has rested on my office window sill for almost a decade. At the party, someone told me quietly, “We had a discussion about whether or not to invite you. We decided that you were one of the team.” I was the one not laid off, for whom the study was a success. But not a success I felt like writing up for publication.



Posted in: on Tue, April 05, 2016 - 12:06:30

Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.


Extremes of user experience and design thinking: Beards and mustaches


Authors: Aaron Marcus
Posted: Fri, March 25, 2016 - 10:04:27

The characteristics that differentiate us as human beings, and at the same time unite people from different regions of the world, fascinate me.

Recently, I had a half-hour journey to a nearby Austin, Texas, hot-rod show to entertain my grandchildren. On the way, to keep the grandchildren interested, I searched on my phone for the world’s longest beards and moustaches on men, and the longest hair on women (about 18 feet was the Guinness record, as I recall). We enjoyed looking at these hirsute oddities.

So, it was not without previous demonstration of interest that, just after I was dropped off by my son at a downtown stop to catch the No. 100 bus to the airport, a fellow came up to me and asked me if I was at the outbound stop headed to the airport or the inbound stop, because the inbound stop was just around the corner, and it was understandably confusing to a non-native. I could not help myself, and I immediately became social, extroverted, and interested in his unusual beard and his unusual subset of humanity. I complimented him on his outstanding growth and mentioned that I had just been searching the world for images of long beards and moustaches to show my grandchildren in Austin.

There began a 10-minute conversation with Patrick Dawson, of Seattle. He was just then returning from the Annual Austin Men’s Beard and Mustache Competition, which had taken place Saturday night. Had I only known! This was Patrick’s second visit to Austin for this competition, sponsored by the Austin Hair Club (!?). There were two competitions that day. The Open Competition, open to a maximum of about 250 competitors, took place at the Mohawk Arena, which was filled to capacity with about 1,000 people. The three top winners in their categories (goatees, beards, moustaches, mutton chops, etc.) compete in the evening competition, which features lots of booze and rowdiness. Patrick had won second place for goatees and first place for mustaches in the open competition, and had placed first for “partial beards and goatees” in the evening competition. He had his precious trophy in his carrying bag to prove his achievement. 

There is even a Women’s Division competition, called the Whiskerina, featuring an award for the most creative strap-on beard made out of whatever the women-folk choose to use. I can imagine the creativity.

The Austin Facial Hair Competitions usually take place in February; for purposes of your future scheduling, please note the exception for next year described below.

Patrick told me more during the half-hour sojourn to the Austin Airport: There is also a World Beard and Moustache Competition that takes place every two years. The last one took place on 3 October 2015 in Leogang, Austria, south of Salzburg (where I hope to be on 4 April at Persuasive Technology 2016 to give a conference tutorial about mobile persuasion design). Guess what? Patrick won first place for his goatee last year! Guess what again!? The next World Beard and Moustache Competition will take place Labor Day, Monday, 4 September 2017. Ladies and gentlemen, mark your calendars!

Patrick gave me permission to post and send his photo, and said he was very happy to participate in these competitions, which do not give money, just fame and honor, and raise money for worthy charities. He said the last one (perhaps he meant the one in Austin last weekend) raised $10,000. Not bad for some whiskers.

Well, I felt gloriously happy that I had been able to connect with him, learn these strange, exotic details, and enjoy my short time with him. His braided goatee made me nostalgically long for my 24-inch hair braid of the 1970s. Sigh…

Well, what did this teach me about user-experience design? First of all, that the exotic differences of human interests, preferences, expectations, values, signs, and rituals are more fabulously complex than one could ever imagine. At the same time, it seemed heartening that people from all walks of life, all kinds of countries, and all kinds of cultures, could find shared enthusiasms and gather to honor the best of their breed or brood, while at the same time working toward humanitarian goals. It reinforced, for me, the necessity, if one is dedicated to “knowing thy user,” to do thorough research in the target community, to develop adequate personas and use scenarios, to thoughtfully consider the language, concepts, images, and activities of these groups when seeking to develop outstanding user experiences.



Posted in: on Fri, March 25, 2016 - 10:04:27

Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.


Designing the cognitive future, part IX: High-level impacts


Authors: Juan Hourcade
Posted: Tue, March 22, 2016 - 9:27:21

In previous blog posts I have been writing about how interactive technologies are changing or may change our cognitive processes. In this post I reflect on the high-level impact of these changes, and identify four main areas of impact, each with its own opportunities and risks: human connections and information, control, creativity, and (in)equality.

In terms of human connections and information, I identified two risk-opportunity axes. The first axis goes from social isolation to higher levels of empathy, while the second goes from bias to representative diversity in access to information and communication with others.

The first axis goes to the heart of arguments such as those in Alone Together about the risks in having technologies isolate people and cut them off from others [1]. At the same time, there is the opportunity for interacting with people we would not otherwise be able to reach, and even  better understanding others’ points of view. Anxieties about the impact of personal media on society are not new. For example, in early 19th-century England, there was a significant amount of concern about the growing popularity of novels that cited themes similar to those brought up these days with regard to interactive technologies, such as lack of intellectual merit and the potential to cause insanity through isolation [2]. On the positive end of the axis, technologies could help us re-engage and find the time for face-to-face activities, helping form important bonds, for example, between parents and children. The challenge for interaction designers is to enable opportunities for personal enjoyment while also encouraging and designing for social uses that enable previously unavailable forms of communication.

The second axis, referring to bias versus representative diversity in information access and communication, brings about a new version of a familiar challenge. As I mentioned in my previous blog post on communication, people used to have very localized biases in terms of the information they could access and the people with whom they could communicate. We are now replacing those biases with new biases brought about by personalized experiences with interactive technologies. At the same time, the opportunities to engage with a wide, representative variety of information and people are unprecedented. This access to a more diverse set of people could potentially lower the social distance between us and those who are different from us. Social distance is often a prerequisite for supporting armed conflict [3], and interactive technologies could help in this regard. The challenge for interaction designers is to make people aware of biases while enticing them toward accessing representative sources and communicating with representative sets of people.

In terms of control, the risk-opportunity axis goes from loss of control over our information and decision-making to greater control over our lives and bodies. Loss of control may come through the relentless collection of data about our lives, together with the convenience of automating decision-making, which could lead to significant threats in terms of privacy and manipulation. On the other hand, the same data can give us greater insights into our lives and bodies, help us make better decisions, and help us lead lives that more closely resemble our goals and values. The challenge for interaction designers is to keep people in control of their information and lives, and aware of the data and options behind automated decision-making.

In terms of creativity, the risk-opportunity axis runs from uniformity to greater support for inspiration, expression, and exploration. While there is currently more variety, a few years ago it seemed like most conference presentations looked alike due to a large majority of presenters following the paths of least resistance within the same presentation software. This is an example of the risk of uniformity, where even great tools, if most people use them the same way, can lead to very similar outcomes. There are obviously plenty of examples of ways in which interactive technologies have enabled new forms of expression, provided inspiration, lowered barriers to existing forms of expression, and made it easier to explore alternatives. The challenge for interaction designers is to do this while enabling a wide range of novel outcomes.

A final and overarching area of impact is political, social, and economic equality. The risk-opportunity axis in this case goes from having a select few with unequalled power due to their access and use of technology to having technologies that can be used to reduce inequalities in a fair and just manner. If interactive technologies can be thought of as a way of giving people cognitive superpowers, they could potentially bring about significant imbalances, giving us the risk identified above. On the other hand, we could design interactive technologies in ways that encourage these superpowers to be used to make the economy and social order more fair and just. This could be accomplished through the capabilities of the technology (e.g., enabling high-income people to feel what it is like to live in low-income regions), or by designing interactive technologies that can easily reach the most disadvantaged and marginalized populations in a way that can enable their participation and inclusion.

What are your thoughts on these topics? How do you feel technologies are currently affecting you across these axes?

Endnotes

1. Sherry Turkle. 2012. Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books, New York.

2. Patrick Brantlinger. 1998. The Reading Lesson: The Threat of Mass Literacy in Nineteenth-Century British Fiction. Indiana University Press.

3. Dave Grossman and Loren W. Christensen. 2007. On Combat: The Psychology and Physiology of Deadly Conflict in War and in Peace. PPCT Research Publications, Belleville, IL.



Posted in: on Tue, March 22, 2016 - 9:27:21

Juan Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.
View All Juan Hourcade's Posts




Critiquing scholarly positions


Authors: Jeffrey Bardzell
Posted: Tue, March 15, 2016 - 11:13:32

If I am right that HCI and neighboring fields will increasingly rely on the essay as a means of scholarly contribution and debate in the future, then it follows that the construction, articulation, and criticism of intellectual positions will become increasingly important.

In Humanistic HCI, we talk about the essay, the epistemic roles of positions, and how they should be peer reviewed. We defined a position thus:

[A] position is not merely a proposition; it instead holistically comprises an expert-subjective voice; a theoretical-methodological stance; its own situatedness within a domain; and a pragmatic purpose. (73)

But as I read design research papers in the fragmented and emerging subdomain of research through design (and similar practices, including constructive design, critical design, and so forth), I have been frustrated with how researchers characterize others' positions, especially ones they disagree with. 

The purpose of this post is not to discourage disagreement.

It is, rather, to seek ways to support disagreement in a scholarly manner.

I approach this topic by way of analogy. When learning social scientific methods, such as interviews, we are encouraged to write down as closely as possible, even verbatim, what participants say. We are discouraged from writing down our reactions. This is partly to minimize the risk that the reactions come to stand in for what research subjects actually said.

I think that risk is sometimes realized in research through design work, that is, that sometimes in our writings our reactions stand in for the actual positions of other researchers, and that this is hindering intellectual progress.

Recommendation: When critiquing or positioning oneself against prior work, one should summarize the position (its claim structure, voice/stance, theoretical and methodological underpinnings, and pragmatic goals/consequences) before, and as a condition of, expressing one's reaction to or critique of it.

An example

Let me exemplify what I mean.

I begin by summarizing the argument of a well-known paper theorizing research through design (RtD): Zimmerman, Stolterman, and Forlizzi's 2010 paper, "An Analysis and Critique of Research through Design: Towards a Formalization of a Research Approach."

My summary of the authors' position:

Zimmerman et al. begin with the claim that research through design is an increasingly important practice, but one that is not well theorized; as a result, it faces many practical challenges. The intended contribution of the paper is that the authors "take a step towards formalizing RtD as a legitimate method of inquiry within the HCI research community by detailing how RtD can lead to design theory" (310). To do so, they provide a critical literature review, summarize interviews with 12 RtD scholars, and analyze several "canonical" RtD projects (as identified by the interviewees). In their findings, they present several specific ways that RtD practitioners produce theory and frameworks as well as artifacts that "codify the designers' understanding of the current state, including the relationships between the various phenomena at play therein, and the description of the preferred state as an outcome of the artifact's construction" (314). Other findings they present are that their interviewees expressed a concern that a romantic conception of the designer-as-genius was inhibiting their ability to present RtD as research; that tacit or implicit knowledge was a key outcome of RtD and that, by definition, such knowledge is difficult to articulate; and that standards of RtD documentation were lacking. Near the end of the paper, Zimmerman et al. offer some recommendations: "there is a need for serious development of RtD into a proper research methodology that can produce relevant and rigorous theory" (316); "A need exists for more examples where the intentional choice and use of the RtD approach as a methodology and process is both described and critically examined" (317); and "Researchers who engage in RtD need to pay more attention to the work of other design researchers [...] It is of the utmost importance that RtD is analyzed and critiqued in a serious and ambitious way" (317). The paper concludes by observing that RtD is "alive and well" and "recognized by the design and HCI communities," but that "there is still a lot to be done when it comes to establishing RtD as a recognized and well-developed research approach" (318).

I believe that this summary represents their position, as I defined it earlier:

[A] position is not merely a proposition; it instead holistically comprises an expert-subjective voice; a theoretical-methodological stance; its own situatedness within a domain; and a pragmatic purpose. (73)

I also believe that were I to show the above summary to Zimmerman et al., they would agree that it was their position. I think most readers of that paper in the community would agree that the above summary was overall faithful to the paper. It also included their own words, set in an appropriate context.

And at that point I could launch my critique of it, because only then are we all on the same page.

But too often, this is not what happens in design research.

Consider the following quotes from the abstract and introduction of Markussen, Krogh, and Bang's 2015 otherwise excellent paper, "On what grounds? An intra-disciplinary account of evaluation in research through design." The bold is my own; it reflects statements about the Zimmerman et al. paper I just summarized.

In the research literature that is initially reviewed in this paper two positions are located as the most dominant representing opposite opinions concerning the nature of such a methodology. One position proposes a cross-disciplinary perspective where research through design is based on models and standards borrowed from natural science, social sciences, humanities and art, while the other position claims a unique epistemology for research through design insisting on its particularities and warning against importing standards from these other disciplines. [...] This “state of the art” has led some researchers to call for a policing of the research through design label, working out a formalized approach with an agreed upon method to document knowledge (Zimmerman, Stolterman, & Forlizzi, 2010). Other researchers, however, argue for appreciating the controversies and proliferation of research programs currently characterizing the field (Gaver, 2012). In caricature it can be noted that representatives of the first group works to associate design with changing existing research traditions (natural, technical, social sciences and humanities) dependent on the deployed methodology and measures for evaluation whereas the latter works to position design outside classical research and science. (pp. 1-2)

Instead of summarizing Zimmerman et al.'s argument, this paper asserts what appears to me to be a reactive interpretation of that argument: that Zimmerman et al. advocate "models and standards borrowed from natural science, social sciences, humanities and art," that their paper "calls for a policing of the research through design label," and that it seeks to work out "a formalized approach with an agreed upon method."

The whole Markussen et al. paper is then positioned in the introduction as a response to these two prior "positions" in research through design: one exemplified by Zimmerman et al. (the police model) and the other by Gaver (the sui generis model).

But do these two positions actually exist as such?

The problem is that Zimmerman et al. do not call for "policing" that I could see. Nor do they concern themselves with "labels." And while they do use the unfortunate term "formalizing" in several key locations, including the title (!), a reading of their position—what they claim, the expert stance behind those claims, the theoretical and methodological underpinnings by which those claims were made possible, and the pragmatic goals of such claims—suggests a much less controversial project.

Zimmerman et al. spoke to RtD practitioners about their accomplishments and challenges; they sought to gather and organize the accomplishments into a theoretical perspective that others could leverage; they sought to acknowledge practitioner challenges and envision how design research might help them overcome them, in a way that reflects their voices (i.e., the interviews) and their projects (i.e., the exemplars that they co-identified). This doesn't sound like "policing" to me; neither does it sound like natural science. It also doesn't sound like "formalizing" as I understand the word—I personally wish Zimmerman et al. hadn't used that term, or at least had clarified how they were using it, because I think it opened the door to that reactive interpretation that they are "policing."

The upshot of all of this is that once they get down to the business of their own contribution, Markussen et al.'s project looks to me like a welcome extension/expansion of Zimmerman et al.'s. Markussen et al. for instance argue that research on RtD needs to do a better job of attending to "how evaluation is actually being practiced within design research itself" (p. 2). And while I would say that Zimmerman et al. in fact did pay attention to that, nonetheless Markussen et al. paid more attention to it, and offered a more substantive account of that evaluation (they identified five different methods of RtD evaluation). Theirs is an original contribution, one that I intend to try out in my practice (which is probably the highest compliment I can give). And it is based in a reasonable critique of where Zimmerman et al. left off.

What I am pointing to in this instance—but I have seen it elsewhere in design research—is a tendency to offer straw man accounts of prior work that I believe are derived from reactive interpretations rather than a sober and scholarly account of what those positions actually were.

I hope that this blog post provides some practical and actionable guidelines to help design researchers avoid this problem.

If we cannot avoid the problem, we face the consequence of obfuscating where the agreements and disagreements actually are. We are encouraged to fight hobgoblins that aren't even there. At its worst (perhaps—hopefully—not in this example), it sows controversy and division. This can undercut the research community's shared desire to find common ground and learn together.

Summary

Part of the discipline of humanistic argumentation is taking others' positions seriously, even when one wants to criticize them.

To do so, one must first adequately characterize the position as such: its claims structure, its speaking voice and stance, its theoretical and methodological underpinnings, and its pragmatic purposes (and consequences).

An important goal of doing so is to ensure that everyone—including the original authors—is on the same page, that is, that this really is what that earlier position entailed, before the critique begins.

We want a community of learning, one that can accommodate informed disagreements, but we do not want a circular firing squad.

NOTE: This post was lightly edited and reblogged from interactionculture.wordpress.com



Posted in: on Tue, March 15, 2016 - 11:13:32

Jeffrey Bardzell

Jeffrey Bardzell is an associate professor of human-computer interaction design and new media in the School of Informatics and Computing at Indiana University, Bloomington.
View All Jeffrey Bardzell's Posts




Technological determinism


Authors: Jonathan Grudin
Posted: Wed, March 09, 2016 - 12:21:43

Swords and arrows were doomed as weapons of war by the invention of a musket that anyone could load, point, and shoot. A well-trained archer was more accurate, but equipping a lot of farmers with muskets was more effective. Horse-mounted cavalry, feared for centuries, were also eliminated as a new technology swept across the globe, putting people out of work and prior technologies into museums.

Are we in control of technology, or at its mercy?

Concerns about technology predated computers, but they proliferate as digital technology spreads through workplaces and homes and into our clothing and bodies. We design technology. Do we shape how it is used?

Technological determinism, also called the technological imperative, became a computer science research focus when organizations began acquiring data processing systems half a century ago. In an excellent 1991 Communications of the ACM review titled “Examining the Computing and Centralization Debate,” Joey George and John King note that the initial studies produced conflicting hypotheses: (i) computers lead to the centralization of decision-making in organizations, and (ii) computers lead to decentralization of decision-making. This contradiction led to two new hypotheses: (iii) computerization is unrelated to centralization of organizational decision-making; (iv) management uses computerization to achieve its goals. George and King found that a fifth theory best fit the results: (v) management tries to use computerization to achieve its goals; sometimes it succeeds, but environmental forces and imperfect predictions of cause and effect influence outcomes. They concluded, “the debate over computing and centralization is over.”

In a 1992 paper in Organization Science titled “The Duality of Technology: Rethinking the Concept of Technology in Organizations,” Wanda Orlikowski applied the structuration theory of sociologist Anthony Giddens to technology use and reached a similar conclusion. Giddens argued that human agency is constrained by the structures around us—technology and sociocultural conventions—and that we in turn shape those structures. Software, malleable and capable of representing rules, is especially conducive to such analysis.

These were guardedly optimistic views of the potential for human agency. Today, media articles that raise concerns such as oppressive surveillance and the erosion of privacy, excessive advertising, and unhealthy addiction to social media conclude with calls to action that assume we are in control and not in the grip of technological imperatives. How valid is this assumption? Where can we influence outcomes, and how?

It’s time to revisit the issue. Twenty-five years ago, digital technology was a puny critter. We had no Web, wireless, or mobile computing. Few people had home computers, much less Internet access. Hard drives were expensive, filled up quickly, and crashed often. The determinism debate in the early 1990s was confined to data and information processing in organizations. The conclusion—that installing a technology in different places yielded different outcomes—ruled out only the strongest determinism: an inevitable specific effect in a short time. That was never a reasonable test.

Since then, the semiconductor tsunami has grown about a million-fold. Technology is woven ever more deeply into the fabric of our lives. It is the water we swim in; we often don’t see it; we do not link effects to their causes. Whether our goal is to control outcomes or just influence them, we must understand the forces that are at work.

Technology is sometimes in control

The march of digital technology causes extinctions at a rate rivalling asteroid collisions and global warming—photographic film, record players, VCRs, rotary dial phones, slide carousels, road maps, and encyclopedias are pushed from the mainstream to the margins.

This isn’t new. The musket was not the first disruptive technology. Agriculture caused major social changes wherever it appeared. Walter Ong, in Orality and Literacy: The Technologizing of the Word, argued that embracing reading and writing always changes a society profoundly. The introduction of money shifted how people saw the world in fairly consistent ways. With a risk of a computer professional’s hubris, I would say that if any technology has an irresistible trajectory, digital technology does. Yet some scholars who accept historical analyses that identify widespread unanticipated consequences of telephony or the interstate highway system resist the idea that today we are swept in directions we cannot control.

Why it matters

Even the most beneficial technologies can have unintended side effects that are not wonderful. Greater awareness and transparency that enable efficiency and the detection of problems (“sunlight is the best disinfectant”) can erode privacy. Security cameras are everywhere because they serve a purpose. Cell phone cameras expose deviant behavior, such as that perpetrated by repressive regimes. But opinions differ as to what is deviant; your sunshine can be my privacy intrusion.

Our wonderful ability to collaborate over distances and with more people enables rapid progress in research, education, and commerce. The inescapable side effect is that we spend less time with people in our collocated or core communities. For millions of years our ancestors lived in such communities; our social and emotional behaviors are optimized for them. Could the erosion of personal and professional communities be subtle effects of highly valued technologies?

The typical response to these and other challenges is a call to “return to the good old days,” while of course keeping technology that is truly invaluable, without realizing that benefits and costs are intertwined. Use technology to enhance privacy? Restore journals to pre-eminence and return conferences to their community-building function? Easier said than done. Such proposals ignore the forces that brought us to where we are.

Resisting the tide

We smile at the story of King Canute placing his throne on the beach and commanding the incoming tide to halt. The technological tide that is sweeping in will never retreat. Can we command a halt to consequences for jobs, privacy, social connectedness, cybercrime, and terrorist networks? We struggle to control the undesirable effects of a much simpler technology—modern musketry.

An incoming tide won’t be arrested by policy statements or mass media exhortations. We can build a massive seawall, Netherlands-style, but only if we understand tidal forces, decide what to save and what to let go, budget for the costs, and accept that an unanticipated development, like a five-centimeter rise in ocean levels, could render our efforts futile.

An irresistible force—technology—meets an immovable object—our genetic constitution. Our inherited cognitive, emotional, and social behaviors do not stand in opposition to new technology; together they determine how we will tend to react. Can we control our tendencies, build seawalls to protect against the undesirable consequences of human nature interacting with technologies it did not evolve alongside? Perhaps, if we understand the forces deeply. To assert that we are masters of our destiny is to set thrones on the beach.

Examples of impacts noticed and unnoticed

Surveillance. Intelligence agencies vs. citizens, surveillance cameras vs. criminals, hackers vs. security analysts. We are familiar with these dilemmas. More subtly, the increased visibility of activity reveals ways that we routinely violate policies, procedures, regulations, laws, and cultural norms—often for good reason. Rules may be intended only as guidelines, or they may not be flexible enough to be efficient in every situation.

Greater visibility also reveals a lack of uniform rule enforcement. A decade ago I wrote:

Sensors blanketing the planet will present us with a picture that is in a sense objective, but often in conflict with our beliefs about the world—beliefs about the behavior of our friends, neighbors, organizations, compatriots, and even our past selves—and in conflict with how we would like the world to be. We will discover inconsistencies that we had no idea were so prevalent, divergences between organizational policies and organizational behaviors, practices engaged in by others that seem distasteful to us.

How we as a society react to seeing mismatches between our beliefs and policies on the one hand and actual behavior on the other is key. Will we try to force the world to be the way we would like it to be? Will we come to accept people the way they are?

Community. Computer scientists and their professional organizations are canaries in the coal mine: early adopters of digital technology for distributed collaboration. Could this terrific capability undermine community? The canaries are chirping.

In “Technology, Conferences, and Community” and “Journal-Conference Interaction and the Competitive Exclusion Principle,” I described how digital document preparation and access slowly morphed conferences from community-building to archival repositories, displacing journals. Technology enabled the quick production of high-quality proceedings and motivated prohibition of “self-plagiarizing” by republishing conference results in journals. To argue that they were arbiters of quality, conferences rejected so many submissions that attendance growth stalled and membership in sponsoring technical groups fell, even as the number of professionals skyrocketed. Communities fragmented as additional publication outlets appeared.

Community can be diminished by wonderful technologies in other ways. Researchers collaborate with distant partners—a great benefit—but this reduces the cohesiveness of local labs, departments, and schools. This often yields impersonal, metrics-based performance assessment and an overall work speed-up, as described in a study now being reviewed.

Technology transformed my workplaces over the years. Secretarial support declined, an observation confirmed by national statistics. In my first job, a secretary was hired for every two or three entry-level computer programmers to type, photocopy, file, handle mail, and so on. (Programs were handwritten on code sheets that a secretary passed on to a keypunch operator who produced a stack of 80-column cards.) Later at UC Irvine, our department went from one secretary for each small faculty group to a few who worked across the department. Today, I share an admin with over 100 colleagues. I type, copy, file, book travel, handle mail, file my expense reports, and so forth. 

Office automation is a technology success, but there were indirect effects. Collocated with their small groups, secretaries maintained the social fabric. They said “good morning,” remembered birthdays, organized small celebrations, tracked illnesses and circulated get-well cards, noticed mood swings, shared gossip, and (usually) admired what we did. They turned a group into a small community, almost an extended family. Many in a group were focused on building reputations across the organization or externally; the professional life of a secretary was invested in the group. When an employer began sliding toward Chapter 11, I knew I could find work elsewhere, but I continued to work hard in part because the stressed support staff, whom I liked, had an emotional investment and few comparable job possibilities.

We read that lifetime employment is disappearing. It involved building and maintaining a community. We read less about why it is disappearing, and about the possible long-term consequences of eroding loyalties on the well-being of employees, their families, and their organizations.

The road ahead

Unplanned effects of digital technology are not unnoticed. Communications of the ACM publishes articles decrying our shift to a conference orientation and deficiencies in our approach to evaluating researchers. Usually, the proposed solutions follow the “stop, go back” King Canute approach. Revive journals! Evaluate faculty on quality, not quantity! No consideration is given to the forces that pushed us here and may hold us tight.

Some cultures resist a technology for a period of time, but globalization and Moore’s law give us little time to build seawalls today. We reason badly about exponential growth. An invention may take a long time to have even a tiny effect, but once it does, that tiny effect can build with astonishing speed into a powerful one.
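To make that arithmetic concrete, here is a minimal Python sketch (my own illustration, not from the post; the doubling rate, starting share, and thresholds are arbitrary assumptions) of how something that doubles each period stays negligible for most of its history and then sweeps through the remaining thresholds in just a few more periods.

def periods_to_reach(start: float, target: float, growth: float = 2.0) -> int:
    """Count how many growth periods it takes for `start` to reach `target`."""
    periods, value = 0, start
    while value < target:
        value *= growth
        periods += 1
    return periods

if __name__ == "__main__":
    adoption = 0.0001  # 0.01 percent of a market: easy to dismiss as a tiny effect
    for threshold in (0.01, 0.10, 0.50, 1.00):
        print(f"{threshold:.0%} reached after {periods_to_reach(adoption, threshold)} doublings")
    # Prints: 1% after 7 doublings, 10% after 10, 50% after 13, 100% after 14.
    # Most of the visible change arrives in the last few periods.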

Not every perceived ill turns out to be bad. Socrates famously decried the invention of writing. He described its ill effects and never wrote anything, but despite his eloquence, he could not command the tide to stop. His student Plato mulled it over, sympathized—and wrote it all down! We will likely adjust to losing most privacy—our tribal ancestors did without it. Adapting to life without community could be more challenging. We may have to endure long enough for nature to select for people who can get by without it.

The good news is that at times we do make a difference. Muskets doomed swords and arrows, but today, different cultures manage guns differently, with significant differences in outcomes. Rather than trying to build a dike to stop a force coming at us, we might employ the martial art strategy of understanding the force and working with it to divert it in a safe direction.



Posted in: on Wed, March 09, 2016 - 12:21:43

Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.
View All Jonathan Grudin's Posts




@ComputerWorld (2016 03 14)

I think technology makes us more human because we can control it; we are not at its mercy. I also do not think that it will take over everything from humans.


How efficient can one be? Productivity and pleasure in the 21st century


Authors: Aaron Marcus
Posted: Wed, February 17, 2016 - 10:40:51

Once, I had to take part in an international conference call at 9 a.m. West Coast time, but I had also scheduled my semi-annual dental appointment at 9 a.m. What to do? Well . . . it seemed straightforward. I would bring my mobile phone to the dentist together with my new Bluetooth earpiece, and listen in to the conference call while the dental hygienist worked over my molars. Simple. A combo business-personal moment joined efficiently.

I asked my regular dental assistant, Tanya, whether she would mind (it seemed amazing that she had been a constant in the office for five years; even the dentist in charge of the firm had retired and been replaced with another). My request seemed only slightly unusual to her; she laughed, and she was fine with my trying to combine two important things at once.

So while she set up her equipment, I set up mine. The only thing I hadn’t counted on was that the ear hook of the Bluetooth earpiece was meant to be used with an ear in the vertical position. The device dangled precariously from my head in my supine repose. Nevertheless, it seemed it would stay put while I dialed in. I had informed the leader of the conference call in advance (our group was planning a worldwide event) that I would be in a dentist’s chair with my mouth preoccupied and would be only listening, not speaking. She was fine with that, also, and thought it funny. As we started the call, she explained my situation to the participants. They all tittered briefly, then we got on with our call. 

I found there were several advantages to my otherwise constrained situation. Having the phone call to distract me kept my mind preoccupied. I noticed even less than normal the dental technician’s working over my gums with that little hooked, pointed metal device that I sometimes dread. All in all, my teeth are in pretty good shape (no cavities!), so I can’t complain, but as she did her thorough, scrupulous cleaning, it was not always pleasant. 

Meanwhile, my focus on the conversation actually seemed to be improved. It seemed more concentrated, because I could not speak! I could hear “inside the speakers’ comments” and seemingly tracked about three levels of the conversation: the surface or literal level, some implications for myself, and the speaker’s likely background intentions. I suddenly realized I had better write down a few notes. Uh, oh. I had forgotten to plan for that activity. Fortunately, while the dental assistant was cleaning my bicuspids, I found the pen I keep with my phone in my left-breast shirt pocket, and I pulled a piece of scrap paper out of my wallet; I think it was a cinema ticket-charge receipt. Perfect. I jotted down some notes that I used later to write up some comments and questions to the leader after the meeting. It had all gone pretty well.

I suggested to Tanya that the dental office might suggest to patients that they combine activities to make their day more efficient, and more effective in distracting them from whatever fear or pain they might be experiencing. I suggested that manicures and pedicures might be a good additional service offering, much as some Beverly Hills dental spas, or spa-equipped dental offices, now offer.

Then I got to thinking. I had given myself my own once-a-month haircut that morning before going to the dentist. This ritual involved dancing an electric buzz-cutter, fitted in turn with medium, short, and no plastic clip-on attachments, around my dome, progressively removing unwanted tufts of what was left of my former waves and curls. Once, my hair was long, dark-brown, even braided down my back almost to my hips in the 1960s. Now, sigh, my few remaining hairs were short, gray, and mostly non-existent, with a bare patch growing ever larger starting at my crown. Ah, age. Anyway, the entire procedure takes about 15 minutes. So . . . why couldn’t this activity be done, also, while I was in the chair?

After the dentist’s visit, I had also planned to visit a women’s hairdressing and manicure salon to get a pedicure. I know. A perfect metrosexual man’s morning activity. Friends always eyed me a little suspiciously when I told them, occasionally, where I’d been. Ah, age; my feet seem so far away from my head and hands now. So after the dentist, I stopped in to see if I could get my once-a-month treatment. The place was empty. Great. I settled back into the big, comfortable chair. The assistant reminded me to turn on the back massage, and suddenly the chair sprang to life with strong, rhythmic vibrations, while the little lights on the chair’s control device blinked on and off to remind me of what was happening. As I leafed through the many women’s beauty, gossip, and healthcare magazines, looking for some additional wisdom that would explain how to understand women better, I thought: so this is why women enjoy going to beauty parlors and manicurists! It’s good to be treated like a queen, or a king, or maybe a princess or a prince, as the case may be. When the assistant began to massage my calves and feet, I was reminded: I am definitely hooked on this healthcare/beauty treat. I usually, but not always, pass up the offers of manicures; after all, a manly man can trim these things with a nail clipper or wire cutter, right? Down at the other end of my body, it’s harder to get my hands and feet in the right out-of-body position, and I’ve grown to enjoy this pedicure experience tremendously.

So . . . I began to think: wait a minute. What if all of this could be done at once? Then I could have collapsed about two to three hours into just one. What a savings of time, if not money! I began to wonder: what other activities could be added? 

Then it occurred to me: there should be some sort of Guinness Book of World Records competition for the highest number of things that someone could accomplish at once while lying in a reclining chair at some healthcare service center. (Or maybe in an Aeron office chair, for Google employees.) 

After all, many times over the past decade, when I am sitting at my desk, I am looking at my computer screen, but also above that at the high-definition video screen that is fed the DirecTV satellite or cable feeds with a few of about 1000 stations. I might be listening to classical music coming in on www.1.fm from Europe that is broadcast over the speakers attached to the computer, while I might have two headsets on, one for each of the two phones nearby, with the mobile phone earpiece squeezed in under one of them, trying to field three conversations, while I review some email message from among the 200 to 500 that come in daily, and notice that someone is trying to reach me by voice-over-Internet. The Skype icon bleats for my attention and a new “boing” enters my consciousness. Which means I have to free up at least one ear to add on the special Skype headset with microphone. 

I know. This all sounds a bit complicated. It is. Sometimes I get a bit confused about to whom I am speaking, what with having to quickly do multiple sequential mutes and un-mutes as I try to field questions or reply to comments from two or three people.

So I am not new to trying to juggle several things at once. Well, what would it take to max out the number of things happening in a reclining chair? Here’s what might be happening at once:

  • Back massage via the built-in chair massager
  • Body herbal wrap, somehow leaving the mouth, head, chin, arms, and feet exposed, as necessary
  • Conference call in one ear (at least!) via Bluetooth earpiece, with note-taking equipment on my lap
  • Dental care, but not x-rays
  • Haircut and beard trim, taking care that things don’t fall into my open eyes or mouth
  • Manicure
  • Music coming into the other ear, or perhaps the audio feed from the video image on the wall or the videophone signal coming in over the Internet
  • Pedicure

I suppose I could also be fed intravenously, or some additional skillful person could be performing liposuction or other non-invasive laparoscopic surgery that doesn’t require general anaesthesia, but I think we are entering the extreme zone of zaniness here. 

As an additional contextual challenge, all of these services might be happening in a Spa-Bus or Spa-Limo, or even in my own vehicle, since I shan’t have to be bothered with driving, and the other passengers might be the specialists providing entertainment, nutritional, and other healthcare services.

Is there no end to this quest for efficiency combining productivity and pleasure in our lives of the early decades of the 21st century? Are there cross-cultural versions that may add additional, unexpected activities, like my simultaneously grinding white, spicy roots on shark skin to make hot, green wasabi paste or undergoing acupuncture? I am sure some mathematician or cognitive scientist may be able to prove that there is some topological limit to what can be accomplished, like the five-color map problem. Until then we can dream, can’t we, of beating the limit? We may have a new challenge to worldwide creativity. I await the latest results on some Internet blog.

Note: This text was originally composed in 2005 and has been slightly edited to update the document.



Posted in: on Wed, February 17, 2016 - 10:40:51

Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.
View All Aaron Marcus's Posts




A dark pattern in humanistic HCI


Authors: Jeffrey Bardzell
Posted: Tue, February 09, 2016 - 2:52:11

I have noticed a dark pattern among papers that align themselves with critical or humanistic approaches to HCI. I myself have been guilty of contributing to that pattern (though I am trying to reform). But I still see it all the time as a peer reviewer and also as a Ph.D. supervisor.

And since I spend so much time evangelizing humanistic HCI, I thought it might also be good to point out one of its dark patterns, to encourage critical/humanist HCIers not to do it, and to encourage reviewers to call this out and use it as an argument against accepting the paper.

And of course I want to offer a positive way forward instead.

The dark pattern is:

"I love a critical theory/author; you in HCI should change your practice to use it, too."

Characteristic features of this dark pattern include the following:

  • An assertion of how naive HCI is for not having known this theory all along.
  • No acknowledgement of, or engagement with, the thousands of critical/humanistic papers in HCI.
  • No articulation of a research problem or question that HCI is already asking or expressing within a given HCI domain (e.g., personal informatics, HCI4D, user experience).
  • In other words, the research problem is expressed something like this: "I can't believe you ignoramuses don't know what Deleuze says about the Movement Image, but oh man, if you did, your HCI would be as hep as a Robbe-Grillet novel; happily, I am here to teach Deleuze to you; I've got a whole page devoted to his thought." (I might be exaggerating for effect, but seriously, sometimes this is how it sounds.)
  • A writing style that resembles that of a drunk Derrida trying to impress someone.
  • A references list full of Heidegger, Foucault, Merleau-Ponty, and Deleuze.
  • A references list that literally cites no HCI papers at all.
  • A response to one's inevitable rejection by saying that HCI is too stupid or too bigoted against the humanities to get it (tip: it is not).

Now, there is a light pattern version of this that I would argue for instead.

This version completely buys into the project of importing and evangelizing on behalf of this or that critical theory. But it seeks to do so in a dialogic, rather than imperialistic, way.

The light pattern is:

"I understand current HCI practice in domain D to be Dprax. I understand that Dprob is a known problem/challenge in Dprax. I hope to contribute to Dprax by turning to critical theory T to help understand better/clarify what is actionable about/reframe Dprob."

Characteristic features of this light pattern include the following:

  • Respect for prior HCI research as rigorous, informed, and of high scholarly quality, even if it doesn't seem aware of or make good use of your favorite theory or its ilk.
  • A focus on what that prior research itself articulates as a problem, gap, opportunity, challenge, or whatnot.
  • A generously cited and clearly articulated statement of the state of the art and its known challenges/problems. Your HCI readers, which might include the people you are citing, must be able to see themselves in your characterization; they must agree, within reason, that you've characterized their work and their challenges in a fair way.
  • A positioning of your theory as potentially helping others grapple with that challenge or problem. It won't solve it, so don't say it will.
  • A clear introduction to your theory in a way that is focused on the HCI problem domain you are trying to address. Do not offer a general "history of philosophy" overview of the theory; this is not Philosophy 101. Point readers to the Stanford Encyclopedia of Philosophy or a Cambridge Companion To and get to the point.
  • Write in a style that is accessible to HCI readers. It is OK to push stylistic boundaries and be somewhat challenging to readers—you want to be true to your critical-humanist self. But remember that you are inviting the community to take up your perspective; your writing should feel like an invitation, not a Gallic Howler.
  • You probably should have three different kinds of references:
    • HCI references in your domain of inquiry (personal informatics, crowdsourcing, sustainability, design fictions)
    • HCI references of a critical-humanistic nature (to align your approach with an established way of doing in HCI)
    • Primary and secondary references pertaining to your external theory

Humanistic approaches to HCI should be generous, dialogic, critical, and engaging. They should not be imperialistic power moves that condescend to HCI.



Posted in: on Tue, February 09, 2016 - 2:52:11

Jeffrey Bardzell

Jeffrey Bardzell is an associate professor of human-computer interaction design and new media in the School of Informatics and Computing at Indiana University, Bloomington.
View All Jeffrey Bardzell's Posts




Wrong about MOOCs


Authors: Jonathan Grudin
Posted: Thu, January 28, 2016 - 3:04:01

This blog began in January 2013. There was a quid pro quo: you take the time to read my informal posts on a range of topics; I post observations only after convincing myself that they are viable. So far, only one has not held up: the first, January 2013’s “Wrong about MOOCs?” The third anniversary of the blog and of that post is an occasion to review what went wrong.

In 2012, MOOCs made headlines. The concept and acronym for “massive open online course” were around earlier, but this was the year that the Coursera, Udacity, and edX platforms were founded by leading researchers from top universities. Little else was discussed in July 2012 at the biennial Snowbird conference of computer science deans and department chairs. In the opening keynote, Stanford’s President John Hennessy forecast that MOOCs would quickly decimate institutions of higher learning, leaving only a handful of research universities. He announced that by embracing MOOCs, he would see that Stanford was among the survivors.

I was sceptical. Completion rates were low. Institutions don’t change quickly. But by the time I wrote my first blog post several months later, I had drunk the Kool-Aid. Why?

There was a fateful roulette wheel spin. I randomly selected a MOOC to examine, Charles Severance’s "Internet History, Technology, and Security." I was impressed—and assumed it was typical. Yes, completion rates were low, but as we learned with the Internet and Web, attrition doesn’t matter at all when growth is exponential, and the potential for collecting data and sharing practices made contagion seem promising. The problem was that I stopped with Severance’s course. Had I examined others, I would have found that he was exceptional: experienced, talented, and dedicated. I over-generalized.

The student demographic data presented in Severance’s final lecture might have raised a flag. Most were college graduates. I thought, “Well, history is more interesting to people who lived through it.” But MOOCs still attract people who have already finished their formal studies. They have not made the strong inroads into undergraduate education that Hennessy and many others, including me, expected.

I overlooked changes in the undergraduate experience since my student days decades earlier. Many of us had arrived in college less informed about career possibilities. We were exploring. More students today arrive with specific career paths, no doubt for good reason. More of them work as they study. Focusing on university requirements, they may complain about the number of required courses and not volunteer to take additional online courses.

And high schools! I thought that secondary school students would take MOOCs in favored topics and outperform college or university students, transforming their image of higher education and forcing change. This was wrong for several reasons. With few undergraduate role models in MOOCs, comparisons aren’t possible. More to the point, I went to school before AP courses existed. At that time a MOOC could have been a godsend. Today, ambitious high school students pile on AP courses and take classes at community colleges. The existing system isn’t perfect, but it won’t change quickly, and it leaves no time for MOOCs.

I concluded the essay by looking ahead nine months: “The major MOOC platforms launched in 2012 claim a few million students, but … if we count students who do the first assignment and are still participating after a week, as we do in traditional courses, enrollment is much lower... The 2013-2014 academic year will provide a sense of how this will develop… The novelty effect will be gone. Better practices will have been promulgated. If there are fewer than 10 million students, the sceptics were right. More than a hundred million? Those who haven’t yet thought hard about this will wish they had.”

How did it turn out? About 10 million at the end of 2013 and 20 million at the end of 2015. This is fine growth, but it appears to be linear. Therefore, overly generous attendance criteria and low completion rates matter. MOOCs are primarily supplemental education, not replacement. Hennessy retracted his forecast. The university administrator panic of 2012 subsided.

Cheating turned out to be more of an issue than I expected. I thought that applicants listing MOOCs would be more carefully screened to confirm that they knew the material. Interviewers could inspect MOOC content or contact instructors. For an applicant to claim knowledge and then have their ignorance exposed in an interview would indicate mendacity or poor scholarship. However, employers would rather other people do the screening, and certification became part of the business model for MOOC providers. It could save prospective employers from having to confirm knowledge, but that provides an incentive to cheat. Careful studies show significant levels of cheating. Some of it is sophisticated. Just as a game player can log on as multiple characters who give accumulated materials to one among them who then quickly becomes powerful, some people log on as multiple students, guessing at test questions to determine the correct answers, which are fed to the real “student” who does well. One person automated this process, acing courses without doing any work at all. A researcher associated with a MOOC provider noted sheepishly that this student showed considerable talent.

Although some MOOC platform founders have moved on, their companies are active, and other large-scale online education efforts have surfaced. A sustainable niche has formed. Research and applied experimentation improve practice; innovative instructors are advancing the medium. Growth may be gradual, with existing institutions accommodating rather than being disrupted by the changes.

MOOCs as a context for research

MOOCs are a great setting for experimental research. With hundreds or thousands of participants, students can often be randomly assigned to different conditions with ease and no ethical downsides. Small-group projects can employ various criteria for grouping students to try different interventions. Students must agree in advance to participate, but instructors can devise assignments and interventions, all of which might improve performance, and find out which ones do, thus not unfairly disadvantaging anyone. For example, will homogeneous or diverse groups do better? Studies in other settings have found that it can vary based on the nature of the collaboration and the dimensions on which homogeneity or heterogeneity are measured. Such issues can be explored far more easily and rapidly in MOOCs than in similar classroom research of the past. Three years ago I felt that MOOCs would thrive because of the ability to iterate and move forward. I underestimated the workload in preparing a good online course, which may leave little time to add a significant research component. But a research literature is appearing.
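To illustrate how low the logistical barrier is, here is a small Python sketch (my own, not drawn from any MOOC platform or study; the "background" attribute, the group size, the function names, and the roster are invented for illustration) of randomly assigning enrolled students to conditions and then composing project groups that are homogeneous or diverse on one attribute.

import random
from itertools import zip_longest

def assign_conditions(student_ids, conditions=("homogeneous", "diverse"), seed=42):
    """Randomly assign each student id to one experimental condition."""
    rng = random.Random(seed)
    return {sid: rng.choice(conditions) for sid in student_ids}

def homogeneous_groups(students, size=4):
    """students maps id -> background; groups share a background where possible."""
    ordered = sorted(students, key=lambda sid: students[sid])
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

def diverse_groups(students, size=4):
    """Round-robin across backgrounds so each group mixes them."""
    buckets = {}
    for sid, background in students.items():
        buckets.setdefault(background, []).append(sid)
    interleaved = [sid for row in zip_longest(*buckets.values()) for sid in row if sid is not None]
    return [interleaved[i:i + size] for i in range(0, len(interleaved), size)]

if __name__ == "__main__":
    # An invented roster of 24 students with hypothetical backgrounds.
    rng = random.Random(0)
    roster = {f"s{i:02d}": rng.choice(["CS", "design", "statistics"]) for i in range(24)}
    print(assign_conditions(roster)["s00"], homogeneous_groups(roster)[0], diverse_groups(roster)[0])

With thousands of students enrolled, each condition can still be large enough for meaningful comparison, which is the scale advantage described above.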

Reflections on blogging

At the end of each year, I’ve reflected on the blogging experience. When I began in 2013 I thought I had about a year’s accumulation of thoughts that could be useful to others. The discipline of posting monthly was perfect—forcing me to budget time and developing a habit of noticing an email exchange or news story that could be developed into a suitable post. Each essay has been work, but only occasionally hard work; mostly it has been fun. Had I not committed to a monthly post, the activity would have shifted to a back burner and then fallen off the stovetop altogether, as it did for most of my fellow bloggers. There is not much feedback and reinforcement for blogging, so it has to be intrinsically motivated. When the goal is to be useful for others, this is not encouraging.

I have enjoyed many Interactions blog posts by others and wish there were more. They are not peer-reviewed. This can be a benefit (one can ask friends or respected colleagues to comment), but it makes blogging an expenditure of time that does not figure in widely recognized productivity metrics. For me, it is a way to organize thoughts and consider where I could focus more rigorously in the future. And the posts are sometimes read by some people—you, for example.



Posted in: on Thu, January 28, 2016 - 3:04:01

Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.
View All Jonathan Grudin's Posts




@Lilia Efimova (2016 02 02)

I enjoy reading your blog posts, but rarely comment - mainly because they are “food for thought” that doesn’t necessarily provoke an immediate reaction, but might surface eventually. I can imagine this could be similar for others. And, because it’s not peer-reviewed and might be tangential to one’s work, it is not likely to get cited, so you don’t get feedback the way a published article would.

All of which is next to the fact that the blogging ecosystem, with RSS reading and citation, seems to be almost dead, which I find a pity. I tend to agree with Hossein Derakhshan (e.g., in his Guardian article) about the dynamics of social networks killing the linking ecosystem, but would be interested to hear how you see it.

@Jonathan Grudin (2017 02 02)

A very late hello, Lilia. Thanks for the comment. I also tend to agree with you, and discussions seem much more truncated. For me, writing is necessary to really work out thoughts carefully, so relying on social networks would mean less complex, coherent thoughts.


Designing functional design teams


Authors: Ashley Karr
Posted: Tue, January 19, 2016 - 3:10:30

My Top Three Takeaways from Years of Researching and Designing Functional Teams:

  1. Dysfunction in a design is a direct reflection of dysfunction in a team and/or organization.
  2. People should come before protocol, because when protocol comes before people, dysfunction occurs. 
  3. When people have control and power over their workplace and how they work, functional teams have space to grow and thrive of their own accord.


Functional Team Plan Brainstorm by Hatim Dossaji, Jill Morgan, and Mary Pouleson

Introduction

For the past several years, I have been trying to find the answer to the question “How can a functional design team be created and maintained?” I have come to the conclusion that a completely functional design team is not possible, because it seems that “function” is circular rather than linear. A completely functional team would be Spock-like and would thus become dysfunctional. However, we can minimize dysfunction and create enough function that we put out decent work, get along with our teams well enough, and at least somewhat enjoy our careers. 

The rest of this article will explain how I came to this conclusion and what I do with my teams to make them as functional as humanly possible. Please note as you read this that there is huge room for improvement in my methods and opinions, and any insights that readers have are much appreciated. Please add them to the comments section below or get in touch with me directly. Thank you in advance.

Resources

Current data and resources on designing functional design teams are sparse. What data and resources do exist are not very applicable or actionable for design professionals. Most resources that I found came from management- and business-driven studies and were predicated upon the concept of squeezing as much productivity from employees as possible to turn greater profits. As a design professional and as a humanist, I am more interested in creating team environments where people thrive and put out great work. I believe profits are side effects of compassion and quality, and so I make compassion and quality my priority and bottom line—not money.

The following is a list of useful data and resources that I found and use on a regular basis:

  • The website 16personalities.com
  • The book Crucial Conversations by Joseph Grenny, Kerry Patterson, and Ron McMillan and the associated website vitalsmarts.com
  • The book Let My People Go Surfing by Yvon Chouinard
  • The website basadur.com
  • The website liberatingstructures.com
  • The book Discussing Design by Aaron Irizarry and Adam Connor
  • The book Designing Together by Dan Brown

Conclusions

I have drawn two overarching conclusions from these resources. In order to create functional teams: 

  1. Team members should have control over their fate in the workplace.
  2. Team members should be empowered to create and generate work, collaborate, relate, interact, socialize, and handle conflict on their own terms.

This reminds me of the famous quote by General George S. Patton: “Don't tell people how to do things, tell them what to do and let them surprise you with their results.” It seems that if design professionals are empowered to have control over how they work and how they interact with their team members, things will be more functional than if they are disempowered and told exactly how to do things and how to interact with their team members by another party. 

When I first began managing teams, I tried to solve every conflict amongst my team members. As I matured as a manager, I told my team members that they were adult professionals and they should be able to handle conflict on their own. As a result, there was less conflict all around. (I do have a caveat that if they cannot resolve the conflict on their own and they have to come to me, they waive their rights, and I get to decide their fate.) 

The Functional Team Plan

Recently, I created something I call the Functional Team Plan. I liken it to a project plan that takes into account emotional intelligence. It helps teams set their emotional tone, sculpt their culture, and form their communication and conflict resolution styles. My teams must draw it up at the same time they draw up their project plan and turn both in before our first formal design evaluation, during which we go over both of these documents. The Functional Team Plan includes the following:

Section 1: Individual Style

  • Each team member’s style of stress response (see Crucial Conversations)
  • Each team member’s personality type and how this affects how they are in the workplace (see 16personalities.com)
  • Each team member’s creative problem solving style (see Basadur)

Section 2: Identify, Define, and Describe (IDD)

  • Dysfunctional teams
  • Functional teams
  • How to move from dysfunctional teams to functional teams
  • How to create and maintain functional teams

Section 3: Plan

  • Communication plan
  • Decision-making plan
  • Conflict resolution plan

Once I created this loose structure for my teams, I found that they were able to handle conflicts on their own and in more respectful ways, so that their projects developed at a good clip and their relationships with their teammates deepened. I believe this happened because we removed the taboos and stigmas around talking about and dealing with conflict. We accept the fact that working on teams can be hard and that conflict is bound to happen when people interact with each other on a regular basis. We make the hard things part of the conversation from the inception of a project. This right here is the gem—the heart of a functional team.

I will wrap up this article with my outline for the activity I take my teams through to create their Functional Team Plan. If anyone reading this article decides to run this workshop and the Functional Team Plan, please contact me. I am happy to give any help I can, and I would like to know how it works out for you.

Developing the Functional Team Plan Workshop 

  1. Overview (Total workshop time 2.5 - 3 hours)
    1. Step 0 - Take the 16personalities.com test, complete your Basadur profile, and complete your stress assessment from Crucial Conversations before this activity begins
    2. Step 1 - IDD Dysfunctional Teams
    3. Step 2 - IDD Functional Teams
    4. Step 3 - IDD How to Move from Dysfunctional to Functional Teams
    5. Step 4 - IDD How to Create and Maintain Functional Teams
    6. Step 5 - Discuss Step 0 results
    7. Step 6 - Create your communication, decision making, and conflict resolution plans
    8. Step 7 - Submit your Functional Team Plan
  2. Step 1: IDD Dysfunctional Teams (10 minutes)
    1. Diverge - 5 minutes
      1. identify, define and describe dysfunctional teams on post-its
      2. you can use words, phrases, examples, stories
    2. Converge - 5 minutes
      1. create an affinity diagram and find emergent themes
      2. capture these themes to help you create your Functional Team Plan
  3. Step 2: IDD Functional Teams (10 minutes)
    1. Diverge - 5 minutes
      1. identify, define and describe functional teams on post-its
      2. you can use words, phrases, examples, stories
    2. Converge - 5 minutes
      1. create an affinity diagram and find emergent themes
      2. capture these themes to help you create your Functional Team Plan
  4. Step 3: IDD Dysfunctional > Functional Teams (10 minutes)
    1. Diverge - 5 minutes
      1. identify, define and describe how to move from a dysfunctional team to a functional team on post-its
      2. you can use words, phrases, examples, stories
    2. Converge - 5 minutes
      1. create an affinity diagram and find emergent themes
      2. capture these themes to help you create your Functional Team Plan
  5. Step 4: Creating and Maintaining Functional Teams (10 minutes)
    1. Diverge - 5 minutes
      1. identify, define and describe how to create and maintain a functional team on post-its
      2. you can use words, phrases, examples, stories
    2. Converge - 5 minutes
      1. create an affinity diagram and find emergent themes
      2. capture these themes to help you create your Functional Team Plan
  6. Step 5 - Discuss Step 0 Results (30 minutes)
    1. Each team member explains their results from 16personalities, Basadur, and Crucial Conversations stress assessments so team members can understand their personality type, problem solving approach, and how they react to stress.
  7. Step 6 - Create Communication, Decision Making, and Conflict Resolution Plans (30 minutes)
    1. Team members agree and put in writing how they will communicate with one another, how they will make decisions, and how they will resolve conflict.
  8. Step 7 - Submit Your Functional Team Plan (30 minutes)
    1. Teams will write out their team plan and submit it to their manager. (The plans are usually 2 pages in length.)

Posted in: on Tue, January 19, 2016 - 3:10:30

Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


Job growth


Authors: Jonathan Grudin
Posted: Mon, January 11, 2016 - 10:03:16

Automation endangers blue and white collar work. This refrain is heard often, but could new job creation keep pace with job loss? Some leading technologists forecast that few of us will find work in fifteen years. They describe two possible paths to universal unemployment.

1. Robots or computers become increasingly capable. They have already replaced much human labor in farms, factories, and warehouses. Hundreds of thousands of telephone operators and travel agents were put out of work. Secretarial support in organizations thinned. In this view, jobs will be eradicated faster than new ones are created.

2. The technological singularity is reached: We produce a machine of human intelligence that then educates itself around the clock and designs unimaginably more powerful cousins. Human beings have nothing left to do but wonder how long machines will keep us around. Wikipedia has a nice article on the singularity. The concept arose in its current form in the mid-1960s. Many leading computer scientists predicted that the artificial intelligence explosion would occur by 1980 or 1990. Half a century later, leading proponents are more cautious. Some say ultra-intelligence will arrive before 2030. The median forecast is 2040. Ray Kurzweil, an especially fervent analyst, places it at 2045.

If the singularity is never reached, the jobs question centers on the effect of increasingly capable machines. If the singularity appears, all bets are off, so our discussion is limited to employment between now and its arrival.

My view is that the angst is misplaced: The singularity won’t appear [1] and job creation will outpace job loss. I apologize in advance for a U.S.-centric discussion. It is the history I know, but in our increasingly globalized economy much of it should generalize.

Occupational categories such as farming, fishing, and forestry are in long-term decline. Automation eliminates manufacturing jobs and reduces the need for full-time staff to handle some white collar jobs. Even when more new jobs appear than are lost, the transition will be hard on some people. Not everyone had an easy time when computerization displaced telephone operators and digital cameras eliminated the kiosks where we dropped off film canisters and picked up photos a day later. Nevertheless, jobs increased overall. Productivity rose, and could provide resources for safety nets to help us through disruptions.

The first massive employment disruption

For hundreds of thousands of years, until agriculture arose in the Fertile Crescent, China, Mesoamerica, and South America, our ancestors were hunters and gatherers. To shift from hunting to domesticating animals, from gathering to planting and tending crops, required a significant retooling of job skills. Suddenly, fewer people could produce enough food for everyone! Populations soared. With no television or social media, what would former hunters and gatherers do with their time? 

The parallel is strong. Existing jobs were no longer needed—more efficient new production systems could be handled by fewer people, in a time of population growth. Some people could continue to hunt and gather, and decry change. The effect was not mass unemployment; it was an unprecedented rise in new occupations.

These included working to improve agriculture and animal husbandry, breed more productive plant and animal species, and develop irrigation systems. But most new occupations were outside food production. Music, arts, and crafts flourished. Pottery and weaving reached exquisite levels; the Inca developed light, tightly woven garments superior to the armor worn by the Spanish. Metallurgy flourished, both useful and aesthetic. Trade in these goods employed many. Accounting systems were developed: Literacy and mathematics arose in agricultural communities. Stadiums were built for professional athletes. Surplus labor was used to build pyramids, which involved developing and applying engineering methods and management practices. Armies and navies of a scale previously unimaginable appeared on different continents. Political, religious, and medical professions arose.

Charles Mann’s 1491 describes what our species accomplished in the western hemisphere following the annihilation of traditional jobs. Before diseases arrived from Europe, western hemisphere populations were far larger. Archaeologists have only recently discovered the extent of their accomplishments. Mann identifies fascinating distinctions between the agricultural civilizations in the south and the hunter-gatherers who held sway in the north.

Prior to the transition to agriculture, relatively primitive tool-making, healing, cave-painting, and astronomy were part- or full-time occupations for some [2]. When agriculture automated the work of hunting and gathering, side activities exploded into organized occupations. Self-sufficiency in food made possible Chinese philosophers, Greek playwrights, and Incan architects.

Industrial revolutions

I lived in Lowell, Massachusetts, where ample water power in the 1820s (somewhat before I took residence) gave rise to the first industrial revolution in the U.S., built on pirated 50-year-old British technology. The transition from hand-crafted to machine production started with textiles and came to include metals, chemicals, cement, glass, machine tools, and paper. This wide-scale automation put many craft workers out of jobs. The Luddite movement in England focused on smashing textile machines. However, efficient production also created jobs—and not only factory jobs. In Lowell, the initial shortage of workers led to the extensive hiring of women, who initially received benefits and good working conditions [3]. Over time, they were replaced by waves of immigrant men who were not treated as well. Other jobs included improving factory engineering, supplying raw materials, and product distribution and sales. Inexpensive cement and glass enabled construction to boom. Despite the toll on craft work, the first industrial revolution is credited with significantly raising the overall standard of living. Of course, pockets of poverty persisted. As is true today, wealth distribution is a political issue.

The second industrial revolution began in the late 19th century. This rapid industrialization was called “the technological revolution,” though we might like to repurpose that title for the disruption now underway. Advances in manufacturing and other forms of production led to the spread of transportation systems (railroads and cars); communication systems (telegraph and telephone); farm machinery starting with tractors; utilities including electricity, water, and sewage systems; and so on. Not only buggy whip manufacturers were put out of business. Two-thirds of Americans were still employed in agriculture at the outset; now it is 2%. The U.S. population quadrupled between 1860 and 1930, largely through immigration. Job creation largely kept pace and the overall standard of living continued to rise, although many people were adversely affected by the changes, exacerbated by economic recessions. In developed countries, democracies offset disruptions and imbalances in wealth distribution by constraining private monopolies and creating welfare systems.

Since the end of the second industrial revolution in 1930, the U.S. population has tripled. Technological advances continue to eradicate jobs. Nevertheless, unemployment is lower than it was in the 1930s. How can this be?

A conspiracy to keep people employed

Productivity increases faster than the population. People have an incentive to work and share in the overall rise in the standard of living. When machines become capable of doing what we do, we have an incentive to find something else to do. Those who own the machines benefit by employing us to do something they would like done. They do not benefit from our idle non-productivity; in fact, they could be at risk if multitudes grow dissatisfied. The excesses of the U.S. robber barons gave rise to a socialist movement. High unemployment in the Great Depression spawned radical political parties. The U.S. establishment reacted by instituting a sharply progressive tax code, Social Security, and large jobs programs (WPA, CCC), with World War II subsequently boosting employment. Should machines spur productivity and unemployment loom, much-needed infrastructure repair and improvement could employ many.

If we face an employment crisis, that is. The U.S. does not at present. The Federal Reserve raised interest rates in part to keep unemployment from falling further, fearing that wages would rise and spur inflation.

Many new jobs are in the service sector, which some say offers “not good jobs.” Really? What makes a job good? Is driving a truck or working an assembly line more pleasant than interacting with people? “Good” means “pays well,” and pay is a political matter as much as anything else. Raise the minimum wage enough and many jobs suddenly get a lot better. Service jobs that are not considered great in one country are prestigious in others, with relative income the key determinant.

Where will new jobs come from?

The agricultural revolution parallel suggests that activities that already have value will be refined and professionalized and entirely new roles will develop. Risking a charge of confirmation bias, let me say that I see this everywhere. For example, in the past, parents and teachers coached Little League and high school teams for little or no compensation (and often had little expertise). Today, there is a massive industry of paid programs for swimming, gymnastics, soccer, dance (ballet, jazz, tap), martial arts, basketball, football, yoga, and other activities; if kids don’t start very young they won’t be competitive in high school. There is a growing market for paid scholastic tutors. Technology can help with such instruction, but ends up as tools for human coaches who also address key motivational elements (for both students and parents). At the other end of the age spectrum, growth is anticipated in care for elderly populations; again, machines will help, but many prefer human contact when they can afford it. For those of us who are between our first and second childhoods, there are personal trainers and personal shoppers, financial planners and event planners, uber drivers and Airbnb proprietors, career counselors and physical therapists, website designers and remodel coaches. Watch the credits roll for Star Wars: The Force Awakens—over 1000 people, many in jobs that did not exist until recently.

My optimism is not based on past analogies. It comes from confidence in human ingenuity and from the Web, which provides the capability to train quickly for almost any occupational niche. Documents, advice repositories, YouTube videos, and other resources facilitate expertise acquisition, whether you choose teaching tennis, preparing food, designing websites, or something else. Yes, anyone who wants to design a new website can find know-how online, but most will hire someone who has already absorbed it. The dream of “end-user programming” has been around for decades; the reality will never arrive because however good the tools become, people who master them will have skill that merits being paid to do the work quickly and effectively. For any task, you can propose that a capable machine could do it better. But a capable machine in the hands of someone who has developed some facility will often do better still, and developing facility becomes ever easier.

For example, language translators and interpreters are projected to be a major growth area as globalization continues. Machine translation has improved, but is not error-free. Formal business discussions will seek maximum accuracy. Automatic translation will improve the efficiency of the human translators who will still be employed for many exchanges.

A challenge to the prophets of doom

When well-known technologists predict that most of their audience will live to see zero employment, I wonder what they think the political reaction to even 50% unemployment would be. The revolt of the 19th-century Luddites with torches and sledgehammers could be small potatoes compared to what would happen in the land of Second Amendment rights.

Fortunately, it won’t come to that. Instead of predicting when all the jobs will be gone, let the prophets of job loss tell us when the number of jobs will peak and begin its descent. Until that mathematically unavoidable canary falls silent, most of us can safely toil in our coal mines.

Let’s assume that machines grow more capable every year. It doesn’t always seem that way, but I don’t use industrial robots. The amusing Amazon warehouse robot videos do show automation of reportedly not-great jobs. Despite our more capable machines, the U.S. economy has added jobs every single month for more than five years. Millions more are working than ever before, despite fewer government workers, a smaller military, and no national work projects. Once or twice a decade a “market correction” reduces jobs temporarily, then the upward climb resumes [4].

Is it a coincidence that as the population doubled over and over again, so did the jobs? Of course not.

Endnotes

1. We can’t prove mathematically that the singularity will not be reached, but the chance of it happening in the 21st or 22nd century seems close to zero, a topic for a different blog post.

2. Why did these appear so late in human evolution? Possibly a necessary evolutionary step was taken. Perhaps reduction in predators and/or climate stabilization made hunting and gathering less of a full-time struggle.

3. The national park in Lowell covers the remarkable women’s movement that arose and was suppressed in the mills.

4. Use the slider on this chart: https://alfred.stlouisfed.org/series?seid=PAYEMS
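
For readers who would rather compute the streak than eyeball the chart, here is a minimal sketch in Python. It assumes FRED's CSV download endpoint (fredgraph.csv?id=PAYEMS) is available, that the file has a date column followed by the PAYEMS values, and that pandas is installed; treat it as an illustration, not a definitive tool.

import pandas as pd

# Total nonfarm payrolls (PAYEMS), monthly, in thousands of jobs.
URL = "https://fred.stlouisfed.org/graph/fredgraph.csv?id=PAYEMS"  # assumed CSV endpoint

df = pd.read_csv(URL)
df.columns = ["date", "payems"]                  # normalize whatever headers the CSV uses
df["date"] = pd.to_datetime(df["date"])
df["payems"] = pd.to_numeric(df["payems"], errors="coerce")
df["change"] = df["payems"].diff()               # month-over-month change in payrolls

# Count how many consecutive months, ending with the latest observation, added jobs.
streak = 0
for change in reversed(df["change"].dropna().tolist()):
    if change > 0:
        streak += 1
    else:
        break

print(f"Consecutive months of payroll gains through {df['date'].iloc[-1].date()}: {streak}")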

Thanks to John King for discussions on this topic; his concerns about short-term disruptions have tempered my overall optimism.



Posted in: on Mon, January 11, 2016 - 10:03:16

Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.


Heartbreak House


Authors: Deborah Tatar
Posted: Thu, January 07, 2016 - 10:30:53

Years ago, when I read Bernard Shaw’s play Heartbreak House, I didn’t like it and it made me angry. Written before WWI and (not surprisingly) set in England, it focused on two groups of people, the denizens of Heartbreak House and those of Horseback Hall. Those of Heartbreak House are sensitive, aware, artistic, and unable to act. Those of Horseback Hall are confident, callous, and bellicose. And they dominate, dominate, dominate. They do not sell—we’re talking about cartoons that delineate a certain kind of British view located in a particular historical moment—but they do rule unquestioningly. In my distant memory, the play ends with the members of Heartbreak House going outside to stare at the sky while waiting impotently and impatiently for the bombs to drop, because at least that would change something. Bernard Shaw was not sufficiently prescient to anticipate the horrors of Verdun, the Somme, Gallipoli, and so forth. But he saw that war prosecuted by Horseback Hall would be truly terrible, and it was.

At that time in my life, I was not a designer. In fact, I barely knew what a designer was or did, apart from making excessively expensive clothes and accoutrements. But my response to the play was designerly in that it was founded in the need to do something. How could those who knew what was important tolerate inactivity? How could they cede power to fools? 

Despite a subsequent career in making things and, in that way, taking action, I understand the sadness of the situation of Heartbreak House better now than I did at twenty. Why were the inhabitants of Heartbreak House so passive? To act rightly, we first structure our world so that we know what actions to take, when to take them, and what they mean. We make choices that bring us to the brink of action and then it is only a little step over the brink. The inhabitants of Heartbreak House were helpless to do the right things when it counted because their scope of action was defined by what was promoted as valuable by the stentorian inhabitants of Horseback Hall.

Recently, I wrote a blog piece about the importance of Feminist Maker Spaces, as reported by Fox, Ulgado et al [1]. I don’t want to over-romanticize these, but I want to express how happy it makes me to imagine them as a kind of modern, designerly, more functional response to a kind of split that in some ways is not unlike the Heartbreak House/Horseback Hall split. Of course, it is not about war. No bombs are going to drop if Google does evil. But it is about hegemony. Feminist Maker Spaces invite small and local actions, but they also presage and forecast different kinds of larger actions and rhetoric than we commonly see in the dominant technological culture. 

Endnote

1. Fox, S., Ulgado, R. R., & Rosner, D. (2015, February). Hacking Culture, Not Devices: Access and Recognition in Feminist Hackerspaces. Proc. of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 56-68). ACM. http://dl.acm.org/citation.cfm?doid=2675133.2675223



Posted in: on Thu, January 07, 2016 - 10:30:53

Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.


Thoughts on the SIGCHI Accessibility Report


Authors: Jennifer Mankoff
Posted: Wed, December 30, 2015 - 10:43:54

This post is based around the SIGCHI Community Accessibility Report, posted on behalf of the SIGCHI Accessibility Community. Find us at https://www.facebook.com/groups/SIGCHIaccess/ and http://www.sigchi.org/communities/access; contact us at sigchi-accessibility@googlegroups.com.

About 15% of people worldwide have a disability [1] and the likelihood of experiencing disability naturally increases with age. SIGCHI can attract new members, and make current members feel welcome, by making its events and resources more inclusive to those with disabilities. This in turn will enrich SIGCHI, and help it to live up to the ideal of inclusiveness central to the concept of user-centered design. It will also help to drive innovation, as accessibility efforts often drive more general technology advances (an example is speech recognition, which has many applications outside of accessibility today).

Accessibility and the inclusion of individuals with disabilities have long been a focus of the community of scholars and practitioners affiliated with SIGCHI, starting in 1994, when SIGCHI’s flagship conference, CHI, had an accessibility chair. The CHI conference has a large number of papers dealing with accessibility (9% of papers at CHI 2015 were accessibility or disability related [2]). The inclusion of researchers and participants with disabilities within the SIGCHI community has led to advances in general technologies [3] and in research practices (e.g., Ability-Based Design [4]).

However, written works about the accessibility of our scientific processes [5] and outputs [6], and reports and experiences from members of our community with disabilities have revealed a gap in the accessibility of conferences, research papers, and other aspects of SIGCHI. This spurred the efforts by the SIGCHI EC, which in turn encouraged the formation of the SIGCHI Accessibility Community [7], so that people with disabilities would have an avenue for helping to improve things. The mission of the Accessibility Community is to improve the accessibility of SIGCHI conferences and meetings (which includes awards ceremonies, program committee meetings, conferences, and so on) and the digital accessibility of SIGCHI websites and publications.

The first action of the Accessibility Community was to create an Accessibility Report intended to support informed decisions in the future and set goals that are responsive to the best practices and biggest problems facing our community, including specifically SIGCHI’s “physical” services (conferences and meetings) and “digital” services (websites, videos, papers, etc.), as well as its overall inclusiveness for people with disabilities. Our findings were based on input from the community at large, survey data from CHI attendees, and a survey of 17 SIGCHI conferences (only four of which had accessibility chairs in 2014). They show that many conferences and other SIGCHI resources do not adequately address accessibility. The report also sets out (hopefully) achievable goals for addressing them. 

While the report began as a well-intentioned data-collection effort, it has sparked a variety of positive and negative responses over the past few months from the people it was meant to help (SIGCHI members who face accessibility challenges) and the people it impacts (SIGCHI leaders, conference organizers, and so on). While everyone appreciates the effort that went into the report, it has functioned almost like a straw man in drawing out issues and facts that were not available (or that we did not have the insight to go after) when we were writing the report. Some of these include:

  • The inability of such a report to provide any sort of concrete handle by which disabled conference attendees (or conference organizers) can get real resources applied to problems of accessibility. This is a hot-button issue that includes a range of wishes and concerns, from legal action against inaccessible conferences to the financial bottom line of conference chairs, who frequently fear a budget deficit up until the conference is over.

  • The lack of communication between the accessibility community in general (and disabled conference attendees in particular) and SIGCHI’s/ACM’s leadership. It turns out SIGCHI is putting money toward video captioning, ACM has been working toward universal accessibility of papers for some time (and has the beginnings of a plan in place), and more is possible (but only if communication channels are open).

  • The varied problems faced by conference chairs running conferences of different sizes were not represented at all in our report (something we hope to rectify in this year’s data-collection efforts). These are complex and multi-faceted, and include trade-offs that are easy to ignore when a single advocacy goal is in place (as by the accessibility community) but impossible to ignore when running a conference.

These are just some examples, and to a disabled attendee whose career depends on successful conference networking, they probably seem irrelevant to their basic right to equal treatment. However, when it comes to the more ambiguous problem of enacting accessibility, they are primary concerns that must be dealt with.

Which raises my HCI and interaction design antennae sky high. This is a wicked problem [8], with all of the difficulties inherent in attempting to modify a complex multi-stakeholder system. In addition, one solution will never fit all the varied contexts in which accessibility needs to be enacted. Worse, to the extent that the “designer” here is the accessibility community, it’s not clear that our conceptual understanding matches that of the “user” we are designing around (conference organizers). Value-sensitive design, mental model mismatches (between different stakeholders affected by changes intended to increase accessibility), multi-stakeholder analyses, service design... all of these frames may help with the task of making SIGCHI as inclusive as I believe we’d all like it to be. 

So unsatisfying a conclusion to reach when the people who are differentially affected deserve a straightforward solution that directly addresses their needs and their right to access. Yet well-meaning change is not enough. Well-designed change is the bar we should strive to reach. 

Endnotes

1. http://www.un.org/disabilities/default.asp?id=18

2. 34 of 379 papers, listed here: http://cs.rochester.edu/u/brady/chi2015accessibility.html

3. For example, speech synthesis and OCR have early roots in the Kurzweil Reader, a reading tool for people with visual impairments.

4. Wobbrock, J. O., Kane, S. K., Gajos, K. Z., Harada, S., & Froehlich, J. (2011). Ability-based design: Concept, principles and examples. ACM Transactions on Accessible Computing (TACCESS), 3(3), 9.

5. Brady, E., Zhong, Y., & Bigham, J. P. (2015, May). Creating accessible PDFs for conference proceedings. In Proceedings of the 12th Web for All Conference (p. 34). ACM. http://dl.acm.org/citation.cfm?id=2746665

6. Reuben Kirkham, John Vines, and Patrick Olivier. 2015. Being Reasonable: A Manifesto for Improving the Inclusion of Disabled People in SIGCHI Conferences. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '15). ACM, New York, NY, USA, 601-612. http://doi.acm.org/10.1145/2702613.2732497 

7. http://www.sigchi.org/communities/access

8. Rittel, H. W., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy sciences, 4(2), 155-169.



Posted in: on Wed, December 30, 2015 - 10:43:54

Jennifer Mankoff

Jennifer Mankoff is an associate professor in the Human Computer Interaction Institute at Carnegie Mellon University.


Crying wolf


Authors: Jonathan Grudin
Posted: Fri, December 11, 2015 - 2:58:59

In a stack of old papers headed for recycling was a Wall Street Journal article subtitled “Managers who fall for their office PCs could be the downside of the computer age.” In 1987, hands-on computer use was considered dangerous, for employees and employers alike!

Since Mary Shelley’s Frankenstein (1818), technology has often been viewed with dread. Woe unto us for challenging the gods with artificial intelligence, personal computers, email, the Internet, instant messaging, Wikipedia, Facebook, and Twitter.

AI is a special case. Grim outcomes at the hands of intelligent machines are a perennial favorite of some researchers, filmmakers, and the popular press, but the day of reckoning is put off by the lack of technical progress. We don’t know what an intelligent machine will do because none exist. The other technologies did exist when the hand-wringing appeared—PCs, the Internet, Facebook, and so on. The fear was not that they would defeat us, but that we would use them foolishly and perish. An “addictive drug” metaphor, not a lumbering monster.

But the predictions were wrong. Most of us find ways to use new technologies to work more effectively. Our personal lives are not adversely affected by shifting a portion of television-watching time to computer use. Does fear of technological Armageddon reflect a sense of powerlessness, our inability to slow carbon emissions and end political dysfunction? Perhaps our inner hunter-gatherers feel lost as we distance ourselves ever more from nature and magical thinking. Alternatively, it could be that each of these technologies challenged an ancient practice that strengthened in recent centuries: hierarchy.

1987

In the article that I set aside a quarter century ago, the technology reporter from the Wall Street Journal’s San Francisco Bureau wrote of “Rapture of the Personal Computer, a scourge characterized by obsessive computer tinkering, overzealous assistance to colleagues with personal computer problems, and indifference to family, friends, and non-computer job responsibilities.” Indifference to family, friends, and responsibility is a common theme in dystopian assessments of a new technology.

“In the long run, it’s a waste of time for the organization,” an assistant vice president of Bank of America concluded. A consultant described training 600 employees of another company to use desktop computers. “About 50 pushed into the technology much deeper, becoming de facto consultants to their departments. But a short time later, 40 of the 50 were laid off.”

The horror stories emphasize bad outcomes for computer users, but on close inspection, hierarchy seems threatened more than organizational health. The author writes, “The question of how to handle the 8% to 10% of users who seem to fixate on costly machines has dogged managers up and down the organizational flow-charts.” A good manager “leads subordinates by the hand through new software packages.” “One key to getting the most from resident experts is to shorten their leashes.” A manager is quoted: “The intention is not to stamp out creativity, but the important thing is that creativity has to be managed.”

“The problem has grown so serious,” the author maintains, “that some companies are even concluding that decentralized computing—placing a little genie on every desk instead of keeping a big one chained in the basement—may not have been such a keen idea after all.” In the end, not many acted on such conclusions. Little genies grew in number through the 1980s and 1990s.

The article concludes with an object lesson, a “so-called information-systems manager,” who after seventeen years wonders how his life could have been different. Despite a degree in economics, which to the Wall Street Journal means that he could have been a contender, he “weathered endless hours of programming frustration, two detached retinas, and the indignity of most people taking his work for granted.”

Managing what we don’t understand

In 1983, I took a job in a large tech company that had an email system in place. My new manager explained why no one used it: “Email is a way that students waste time.” He noted that it was easy to contact anyone in the organization: I should write a formal memo and give it to him. He would send it up the management ladder to the lowest common manager, it would go down to the recipient, whose reply would follow the reverse path. “You should get a response quickly,” he concluded, “in three to five days.” He advised me to write it by hand or dictate it and have it typed up. “Don’t be seen using a keyboard very much, it’s not managerial.”

Technology could be threatening to managers back then, even in tech companies. Few could type. Their cadence of planned, face-to-face meetings was disrupted by short email messages arriving unpredictably. Managing software developers was as enticing as managing space aliens; promises that “automatic programming” would soon materialize delighted managers.

As email became familiar, new technologies elicited the same fears. Many companies, including IBM and Microsoft, blocked employee access to the Internet well into the 1990s. When instant messaging became popular in the early 2000s, major consulting companies warned repeatedly that IM was in essence a way that students waste time, a threat to productivity that companies should avoid. In 2003, ethnographer Tracy Lovejoy and I published the article “Messaging and Formality: Will IM Follow in the Footsteps of Email?” [1]. 

People tried several new communication technologies in the early 2000s as they looked for ways to use computers that they had acquired during the Internet bubble. This software, popular with students, also incurred management suspicion.

Studying IM, employee blogging, and the use of wiki and social networking sites in white-collar companies, I found that they primarily benefit individual contributors who rely on informal communication. Managers and executives focus more on structured information (documents, spreadsheets, slide decks) and formal communication; most saw little value in the new media. As with email in an earlier era, individual contributors using these tools can circumvent formal channels (which now often include email!) and undermine hierarchy.

However, the 2000s were not the 1980s. Managerial suspicion often ran high, but it was more short-lived. Many managers were tech users. Some found uses for new communication technologies. A manager stuck in a large meeting could IM to get information, chat privately with another participant, or work on other things. Some executives felt novel technologies could help recruit young talent. There was some enthusiasm for wikis, which offer structure and the hope of reaching the managers’ shimmering, elusive El Dorado: an all-encompassing view of a group’s activity and status. But wikis thrive mainly in relatively chaotic entrepreneurial settings; once roles are clear, simpler communication paths are more efficient. A bottom-up wiki approach competes, a little or a lot, with a clear division of labor and its coordinating hierarchy.

Knowledge and power

My daughters occasionally ask for advice on a homework assignment. If I need help, I usually start with a string search or Wikipedia. They often remind me that their teachers have drilled in that Wikipedia is not an acceptable source.

Do you recall the many denunciations of Wikipedia accuracy a decade ago? Studies showed accuracy comparable to the print encyclopedias that teachers accepted, but the controversy still rages; ironically, the best survey is found in Wikipedia’s Wikipedia entry. Schools are only slowly getting past blanket condemnations of Wikipedia.

I average two or three Wikipedia visits a day. Often I have great confidence in its accuracy, such as for presidential primary schedules. Wikipedia isn’t the last word on more specialized or complex academic topics, but it can provide a general sense and pointers to primary sources. Hearing about an interesting app or organization, I check Wikipedia before its home page. For pop culture references that I don’t want to spend time researching, a Wikipedia entry may get details wrong but will be more accurate than the supermarket tabloids on which many people seem to rely.

Why the antagonism to a source that clearly works hard to be objective? If knowledge is power, Wikipedia and the Web threaten the power of those who control access to knowledge: teachers, university professors, librarians, publishers, and other media. Hierarchy is yielding to something resembling anarchy. The traditional sources were not unimpeachable. I recall being disappointed by my parents’ response when I excitedly announced that My Weekly Reader, distributed in school, reported that we would use atomic bombs to carve out beautiful deep-sea ports. More recently, I discovered in 1491 that much of what we learned in school about early U.S. history was false. My science teachers, too, were not all immune to inventing entertaining accounts that took liberty with the facts. Heaven knows what they teach about evolution and climate change in some places. If a student relies on Wikipedia instead, I can live with that.

If a wolf does appear?

I heard Stewart Brand describe deforestation and population collapse on Easter Island, specifying the date when someone cut down the last tree, “knowing that it was the last tree.” Former U.S. Secretary of Defense Robert McNamara became a fervent advocate of total nuclear disarmament after living through three close brushes with nuclear war. Neither Brand nor McNamara was confident that we will step on the brakes before we hit the wall.

Perhaps we will succumb to a technological catastrophe, but I’m more optimistic. We may not address global warming until more damage is incurred, but then we will. We’ll rally at the edge of the abyss. Won’t we?

Musical chairs

In the meantime we have these scares. Perhaps the Wall Street Journal, Gartner, and others were right to warn managers of danger, but missed the diagnosis: The threats are to the managers’ hierarchical roles. When employees switched to working on PCs, their work was less visible to their managers. My manager in 1983 was not a micro-manager, but he got a sense of my work when my communication with others passed by him; when I used email, he lost that insight and perhaps opportunities to help. Public concern about automation focuses on the effects on workers, but the impact on managers may be greater as hierarchies crumble [2].

Consider Wikipedia again. Over time it became hierarchical, with more than 1000 administrators today. This may seem like a lot, but it works out to one administrator for every 100 active (monthly) editors and every 20,000 registered editors. A traditional organization would have ten times as many managers. Management spans grow, even as more work becomes invisible to managers.
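
A rough arithmetic sketch of those ratios follows. The administrator count and the per-editor ratios come from the paragraph above; the span of control of roughly ten workers per manager in a traditional organization is my own assumption.

# Back-of-envelope arithmetic for the Wikipedia comparison above.
admins = 1_000                        # administrators (from the text)
active_editors = 100 * admins         # "one for every 100 active (monthly) editors"
registered_editors = 20_000 * admins  # "... and 20,000 registered editors"

span_of_control = 10                  # assumed workers per manager in a traditional organization
traditional_managers = active_editors / span_of_control

print(f"Implied active editors:     {active_editors:,}")
print(f"Implied registered editors: {registered_editors:,}")
print(f"Managers in a comparable traditional organization: {traditional_managers:,.0f} "
      f"(about {traditional_managers / admins:.0f}x Wikipedia's administrator count)")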

Fears about online resources may ebb when management ceases to feel threatened. Concerns were raised when medical information of variable quality flooded the Web. Today, many doctors take in stride the availability of online information to patients who still consider their doctor the final authority. Dubious health websites join village soothsayers and snake oil salesmen, who always existed, and may have been less visible and accountable. And might sometimes help.

In organizations, individual contributors use technology to work more efficiently. Hierarchy remains, often diminished (especially in white-collar and professional work). Can hierarchy disappear? Perhaps, when everyone knows exactly what to do, in the organizational form that Henry Mintzberg labeled adhocracy. For example, a film project—an assembly of professionals handed a script—or a barn-raising by a group who all know their tasks. Technology can help assemble online resources and groups of trained people who can manage dependencies themselves, leaving managers to monitor for high-level breakdowns.

This is the efficiency of the swarm. An ant colony has no managers. Each worker is programmed to know what to do in any situation, with enough built-in system redundancy to withstand turnover. In our case, each worker has education, online resources, and communication tools to identify courses of action. With employee turnover on the rise, organizations build in redundancy, either in people or with online resources and tools that enable gaps to be covered quickly.

Someday a wolf may appear. In the meantime, the record indicates that each major new technology changes the current way of working and threatens those who are most comfortable with it, primarily management. Forecasts of doom are accompanied by suggestions that the tide can be ordered back, that the music can continue to play. Then, when the music stops, a few corner offices will have been converted to open plan workspace, and work goes on.

Endnotes

1. Links to this and studies of employee blogging, wiki use, and social networking in organizations are in a section of my web page titled “A wave of new technologies enters organizations.”

2. JoAnne Yates, in Control through Communication, described the use of pre-digital information technologies to shape modern hierarchical organizations and give them a flexibility beyond the reach of the hierarchies that had existed for millennia. She mentioned ‘humanizing’ activities such as company parties and newsletters, which were less about information than emotional bonding, creating an illusion of being in one tribe and thereby strengthening rather than undermining hierarchy.

Thanks to John King for general discussion and raising the connection to Yates’ work.



Posted in: on Fri, December 11, 2015 - 2:58:59

Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.


Leaning


Authors: Deborah Tatar
Posted: Wed, November 25, 2015 - 12:42:23

I started this series of posts with concerns about the allure of “going west” to my undergraduate students. My concern is about all the students and indeed my own sons, but especially the women. Sarah Fox, Rachel Rose Ulgado, and Daniela Rosner wrote a wonderful article about feminist hackerspaces [1] for the 2015 CSCW conference. The women they studied were by-and-large employed as professionals in the high tech industry. They were successful. Yet these women did not find what they sought or imagined through work. And when they turned to hacker spaces, there was more disappointment. The male hacker spaces were imbued with what they called ****-testing. 

Fox et al.’s account gave me a kind of PTSD-y flashback. Silicon Valley was a world in which the more prestige I acquired, the less I enjoyed success. The more I encountered the ultra-confident fantasies of freedom and superiority that drove so much behavior, the less I wanted to play the game. Eventually, I escaped. In Silicon Valley, freedom is often a zero-sum game, enforced by what some social scientists call micro-aggressions. Efficiency is often a way of one person taking for him or herself without having to think about or appreciate others.

I am so glad that the women Fox et al. report on have been able to make their own spaces, and I hope that these spaces truly help them lead the lives of whole people. But I do not think that feminist hacker spaces are going to solve the problems. 

The conditions that lead women to create feminist spaces are not the conditions that my students imagine when their eyes light up with the hope of going west. Well, that is not true. Some of my female students have a kind of untrammeled ambition. They really do seem to believe that, as one of my male students wrote in an essay on ethics some years ago, their chief obligation in life is to have a job. Anything that might threaten their job or success is just an inefficiency.

The feminist hackers were more like the other kind of student, the kind who hope to use their capabilities to be part of something bigger than themselves. This is a confusing mental space to be in. The feminist hackers exhibit an instructive, ambivalent resentment toward Sheryl Sandberg’s “Lean-In Circles.” As Fox et al. note, while the feminist hackers have long engaged in many of the behaviors that Sandberg now recommends, they resent her and her advice. Indeed, it is important to realize that the same advice, the same behavior, is not always the same. When women, for example, get together to address “imposter syndrome,” their larger attitude makes all the difference. Is the discussion a tool to understand their position in the world or a club used to reproach women for lack of perfection? Lots of people say “Be more confident!” but no one seems to notice the cost that we pay for failed attempts at assertion. I remember watching my contributions to meetings regularly being ascribed to men—and then being called “arrogant” by my boss for acting exactly as I believed that the men had acted. I was devastated—and trapped.

It also makes all the difference in the world whether the women’s collective ambition is to dominate others or to connect. Sheryl Sandberg may be a perfectly lovely person, but her ability to get herself heard is also part of a willingness to profit from other people’s compliance. I don’t admire that and I don’t want to design for it.

Endnote

1. Fox, S., Ulgado, R. R., & Rosner, D. (2015, February). Hacking Culture, Not Devices: Access and Recognition in Feminist Hackerspaces. Proc. of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 56-68). ACM. http://dl.acm.org/citation.cfm?doid=2675133.2675223



Posted in: on Wed, November 25, 2015 - 12:42:23

Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.


Technology and nature


Authors: Jonathan Grudin
Posted: Fri, November 06, 2015 - 3:00:07

The black water mirrored the tropical forest above it so perfectly that the shoreline was impossible to pinpoint. The reflection included the palms with cannonball-clustered fruit that had stained the water. We kayaked for hours, seeing no sign of other human presence on the network of channels and lagoons that drain the marshes along the Caribbean coast. Or so we thought.

Costa Rica ranks first in sustainable ecotourism, with a plan to be the first carbon-neutral country by 2021 and a carbon-neutral airline already flying. Costa Rica ended deforestation and draws 90% of its energy from renewable sources. It long ago disbanded its army and recently banned recreational hunting—good news for its panthers, tapirs, sloths, monkeys, sea turtles, and colorful birds. It has the world’s highest proportion of national park and protected land (25%), and fifty times the world average biodiversity per hectare.

Technological support preceded our two-week visit. Anywhere Costa Rica provides online booking with an informative 24x7 chat line. It was effectively free. Our prices for hotels and services were what we would pay walking in off the street; Anywhere Costa Rica receives commissions. Telecommunication was reliable: Pickups were on time and convenient. The company even accommodated a spontaneous schedule change. 

Costa Rica’s three principal tourism zones are the Pacific coast, inland mountains including Arenal volcano, and the Caribbean coast. We skipped the Pacific with brand-name resort hotels nestled in coves. Inland and on the Caribbean, we found only small hotels and restaurants reportedly owned and operated by Costa Ricans. Fast food franchises were virtually non-existent. Technology again played a key role: The small enterprises rely heavily on TripAdvisor.com reviews. The surrounding tourism-dependent communities actively contribute by providing uniformly excellent service.

Subtle uses of technology included the coordination of national park, private enterprise, and non-profit wildlife personnel in preventing poaching and enabling us to observe sea turtles beach, lay eggs, and leave, with minimal distraction. Young guides enlisted for short excursions were the best-informed and the most talented at human relations I’ve ever encountered. Many have university degrees in tourism, an attractive career option in a country where tourism accounts for more revenue than coffee and bananas.

Why we travel

Why didn’t our ancestors who moved into Europe and the Western Hemisphere all settle on the Mediterranean and California coasts? Some moved to the Arctic, to deserts, or settled high in the Andes and Himalayas and deep in festering Amazon jungles. Times of scarcity that affect any region could motivate people to spread out, but which ones left? Perhaps a combination of curiosity and antisocial tendencies was decisive.

In any case, our species has always traveled, and technology changed the experience, from ships, trains, and planes to radio, telephone, and satellite telecommunication. Traveler’s cheques gave way to ATMs. For decades, a staple of my day abroad was a search for a copy of the International Herald Tribune. I paid extortionate prices for a 4-day-old copy in some remote places. Today, the search is for an Internet connection, usually not hard to find.

In 1989, there were few computers and no Internet access between Cairo and Cape Town. On a hot, sunny day that year, looking out over a bustling, colorful Mombasa street scene, I had an epiphany: “Not one of the hundreds of people before me will ever be affected by computer technology! Their lives will be unaffected by my work.” As epiphanies go, this was not impressive. Today, the Mombasa-City.com home page has links to tourist information, insurance and shipping companies, estate agents, and the answer to the question: The jinis of Mombasa: True or myth?

Technology and travel continue to evolve in tandem, for better and worse. British travelers said that TripAdvisor.com helped them overcome timidity toward complaining about poor service. But we also heard of travelers threatening a bad review to blackmail merchants who depended on ratings.

Similarities and differences

I expected to be met at Kano International Airport on my first visit to Africa, in 1983. I wasn’t. No one knew I was coming. My destination, Jos, a large city in the temperate highlands of Nigeria, had no international phone service or telex, and my letter had not arrived. Nor was there local phone service—the one phone at the airport was a long-dead relic. Later, to phone his hospitalized wife in Kenya, a university colleague of my host drove for hours at night to a telecommunications relay station.

Travel back then was often like parachuting in with the clothes on your back. It evolved. In 1998, my wife and I visited Madagascar. We saw a satellite phone in use in a remote corner of the country. While we were there, the English-language newspaper in the capital announced the country’s first public demonstration of the Internet and Web, led by the students of a technical high school. When we pushed into the interior and lost outside contact, we emerged to discover that Frank Sinatra had died and the Department of Justice was suing Microsoft. In contrast, on a 10-day African camping safari in July 2015, we had to choose to stay out of touch. (Less dramatic news greeted us on resurfacing: Greece was still struggling and Donald Trump was still rising in the polls.)

I haven’t been to Scotland

“When you have made up your mind to go to West Africa,” wrote a 19th-century traveler, “the very best thing you can do is to get it unmade and go to Scotland instead; but if your intelligence is not strong enough to do so, abstain from exposing yourself to the direct rays of the sun, take 4 grains of quinine every day, and get an introduction to the Wesleyans; they are the only people on the Gold Coast who have got a hearse with feathers” [1].

In 1983, expats still liked to talk about diseases in the region once called “White Man’s Grave.” I spent time with some who were ambulatory with malaria and worse afflictions. When my health faltered, I searched for garlic, whose curative powers were not yet recognized by medical science but which I’d come to trust a decade earlier in Guatemala. In 1983, diseases in Nigeria were rarely fatal if one could fly out for treatment by a knowledgeable doctor, but finding one could be a challenge. One colleague had returned to England with an illness, but was unable to convince doctors to prescribe the right drug until he had lost over 30 pounds and was close to dying. Expats advised me to load up on drugs—no prescriptions were required—before I returned to England, so I could self-medicate if I was harboring a parasite.

This might remain good advice, if prescriptions are still not required. This July, shortly before we left Africa for England, a spider bite sent neurotoxin into my back and around my thigh and triggered fever and hives across my body, and a tick lodged behind my wife’s ear and infected her. Web searches identified our assailants—a sac spider and African Tick Fever.

We arranged clinic visits in England. My problem was new and interesting to our young British doctor. “Is that s-a-x spider?” she asked. She told me to watch for secondary infections. She did not suggest applying an external antibiotic to reduce the odds of needing a skin graft later, advice I found on the Web. My Neosporin, previously used on monkey scratches and bites, seemed to do the trick. There is not much else to do for a neurotoxin. The lumps and fever dissipated. Gayna was not so lucky.

The Web said doxycycline would resolve tick fever in 48 hours. The doctor prescribed amoxicillin. When my wife’s fever exceeded 103 degrees Fahrenheit a few days later, we returned. An older doctor now on duty was convinced that she had malaria. No mosquitoes survived the cold South African winter nights, and we had taken anti-malarials anyway, but he wouldn’t prescribe doxycycline. Later, we got a call—someone at the clinic had phoned around and found a doctor from South Africa. They let Gayna pick up doxycycline after working hours, and 48 hours later her fever was gone, although side effects of the antibiotic persisted for a week.

I guardedly consider this a success for technology in travel, thanks to Wikipedia and the Web.

Beyond being there

Travel to experience nature and different cultures is not the same revelation it once was. Hundreds of beautifully produced documentaries are available online. You can spend time and money to travel and look at a field, and see a field. Or you can watch a program that distils hundreds of hours of photography and micro-photography into a year in the life of a field, accompanied by expert commentary. Sure, travel provides a greater field of view, texture, nuance, and serendipity, but often less depth of understanding. In addition, distant lands now come to us—foods, music, arts, and crafts from around the world are in our malls.

The encroachment of science and technology on the natural world is eloquently lamented by Thomas Pynchon in Mason & Dixon and other works. We are indeed “winning away from the realm of the sacred, its borderlands one by one.” Agriculture, extraction of minerals, and the housing needs of growing populations fence in land that was available for animal habitation and migration. Wildlife is increasingly managed. If a zoo is a B&B for animals, national parks and reserves have become B’s: Bed is provided and the guests find their own food. With migratory paths blocked, animals that once trekked to sources of water during dry seasons are accommodated by constructing waterholes. When smart cats learn to drive dumb herbivores toward fences where they can easily be taken down, we build separate enclosures to keep some herbivores around. Vegetation along roads is burned so tourists can see animals that would be invisible if the forest came up to the road. Wild animals become accustomed to humans and willingly provide photo ops. Guides know where animals frequent and alert one another of sightings. Many animals have chip implants; geolocation and drones may soon ensure successful viewing.

These are all fine adjustments. Not everyone is content driving for hours with no animal sightings. Such driving provides a sense of wilderness spaces and makes sightings special, but viewing throngs of animals along rivers and waterholes is undeniably spectacular, and many tourists are in a hurry to check off the Big Five.

And Africa remains wild. Hippos are second only to mosquitoes as lethal animals on the continent. Hippos are aggressive, fearless, can outrun you, and they ignore protestations that as herbivores they shouldn’t chomp on you [2]. 

Implications for design

The debate is not whether to reshape the natural world, it is how we reshape it. And it turns out that this is not new. Charles Mann’s brilliant book 1491 documents humanity’s extensive transformation of the natural world centuries ago. Prior to the onslaught of European diseases, the Western Hemisphere was densely populated by peoples who carefully designed the forests and rainforests around them. Reading 1491 as we traveled in Costa Rica, I realized that prehistoric inhabitants would not have let those palms grow along the waterways. The black, reflective water conceals edible fish, lethal crocodiles and caiman, and venomous snakes and frogs. Such palms would be consigned to land far from water, their fruit used for pigments or other purposes. After the Spanish arrived and disease depopulated the region, untended palms spread to their present locations.

One reading of Mann’s message might be that since “primeval” forests were landscaped, why should we constrain change now? A better reading is that we have a powerful ability to design the natural world around us, and we should do it as thoughtfully as humanly possible. Technology can surely help with this.

Endnotes

1. http://myweb.tiscali.co.uk/kenanderson/histemp/whitemansgrave.html

2. Hippos can hoof it at 20 mph. A fast human sprints at 15 mph; Olympic sprinters reach 25 mph. Since hippos don’t organize track and field competitions, perhaps one is alive somewhere that could outpace Usain Bolt. Hippos have taken boats apart to reach the occupants. Zulus considered hippos to be braver than lions. Crocodiles and water buffalo are also fast and lethal on land.



Posted in: on Fri, November 06, 2015 - 3:00:07

Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.


Multiple scales interaction design


Authors: Mikael Wiberg
Posted: Mon, November 02, 2015 - 11:54:10

“Attention to details” has always been a key concern for interaction design. With this attention to details we ensure that we, as interaction designers, think carefully about every little aspect of how the user might interact with and through the digital technologies we design. As interaction designers, we share the belief that every tiny detail matters for the overall experience.

Although our field has evolved over several different “waves” of interaction design paradigms, this concern for interaction design at the scale of the details has remained. For instance, when we developed command-based interaction back in the 70s and 80s, our focus was on dialogue-based system design. When we went for GUI design in the late 80s and onward, we again went straight for the details and developed a whole vocabulary for having detailed conversations about the GUI (including notions such as WYSIWYG, balance, symmetry, etc.).

Today, when we talk about interaction design, it seems that attention to details might be the defining concern for interaction design—at least if we look at developments in industry. Whether we look at interaction design in smartphones, smart watches, smart TVs, or just about any new interactive gadget, this concern for the details—in interface design, in icon design, in menu design, in hardware design, and so on—is always present.

In relation to this contemporary concern for “the details,” some voices have called for the re-introduction of an industrial design approach to interaction design. With its long history of craftsmanship and its associated focus on details, this approach can certainly help us advance our close-up focus on the details. Still, I suggest that additional demands on interaction design competence are emerging at the current moment.

“Attention to details” helps the interaction designer stay focused—on the details. This is a core competence for any designer: the ability to stay focused on the small details that determine the overall impression of a product. However, at the current moment we may need to think about attention to details not as a single-threaded focus, but as something that works across multiple different scales. Traditionally, “attention to details” has meant close-up attention to the visual details of the user interface. Today, however, our digital products are also parts of greater designed wholes in many different ways. Apps, for instance, are of course designed to run on smartphones and tablets, but they are also supposed to be designed in relation to other apps (to work and look like other apps), to work in the context of an app store, and perhaps to work across several different devices and screen sizes. Furthermore, many apps are deeply integrated with social media and so need to incorporate typical ways of interacting and communicating on those platforms (e.g., sharing, liking, or retweeting). For the interaction designer, this means that he or she can no longer pay attention only to the nitty-gritty details of the user interface. The designer also needs to think about the interplay between the app and the device running it, how modes of interacting with the app correspond to ways of interacting with social media, how interaction with the app might even be part of interacting via social media, and so on.

While one could say that the contemporary interaction designer needs “split vision,” I would instead say that interaction design is beginning to work more and more like the profession of architecture. The experienced architect pays attention to details at many different scales at once. The overall scale might concern the visual appearance of the building; another scale might concern how the building works as a social intervention in the context where it will be built; at yet another scale, attention to details concerns the program of the building or the placement of windows, doors, and hallways. It is impossible to say that one of these is more important to focus on than another. On the contrary, it is the ability to focus on all of these different scales that leads to great architecture.

In a similar way, I would say that at the current moment great interaction design demands a similar sensitivity: an ability to pay close attention to the multitude of scales operating simultaneously in any interaction design project. If we pay attention only to user needs, something else is left aside. Likewise, if we stay focused only on some details of the user interface, something else is probably overlooked.

So, what are the implications of this for the profession of interaction design? Well, we already know how to go for “attention to details.” Let’s never forget this core competence! But beyond it, I would say that any interaction designer needs to develop skills and a sensitivity for which scales are at play in a particular design project, and then learn not just how to pay attention to the details at each scale, but also how to merge these details into functional wholes—that is, into great interactive products and services. From my perspective, this is about compositional interaction design: the interaction designer moves from single-threaded attention to some details to also thinking about how to arrange these details into well-working compositions across multiple different scales—thus the proposed notion of “multiple scales interaction design.”

So, stay focused! On all the different scales of your design project!


Posted in: on Mon, November 02, 2015 - 11:54:10

Mikael Wiberg

Mikael Wiberg is Professor of Informatics in the Department of Informatics at Umeå University, Sweden.


Post Comment


@Mattias Arvola (2015 11 02)

Sounds like what Schön and Wiggins (Kinds of Seeing and their Functions in Design, Design Studies 13(2), 135-156) talked about as shifts between domains in architecture. The shifts are driven by a propagation of consequences of a design move from one domain to other domains. I have previously conceptualized it as zooming in and out between detail and abstraction levels, but perhaps it is more like transpositions between equally detailed domains. I think I need to think some more on this…

@Monica (2015 11 04)

UX and architecture have always been comparable in many ways. I very much agree with this observation. I don’t think it’s a new phase, but one that is getting more recognition. The devil is in the details, but a designer must also look at the solution in a holistic manner. The flow throughout the ecosystem, much as in architecture, along with the structural and technical requirements, very much maps to UX. I like the comparison of the levels of attention. I would place the visual UX more alongside the interior decorating phase of a house—the last phase, but one that very much integrates with the overall intention of the structure. Great piece, thank you.


Action and research


Authors: Jonathan Grudin
Posted: Fri, October 09, 2015 - 5:01:55

Three favorite research projects at Microsoft that were never written up: automated email deletion, an asynchronous game to crowdsource answers to consulting questions, and a K-12 education tool. I expected they would be, as were projects that led to my most-cited Microsoft work: persona use in development, social networking in enterprises, and multiple monitor use. What happened to them?

The unpublished projects had research goals, but they differed in also having immediate applied goals. Achieving the applied goal was a higher priority than collecting and organizing data to satisfy reviewers. There were research findings, but completing the project provided a sense of closure that in my other projects comes only with publication—those projects aimed to influence practice indirectly, so publishing was the way to reach designers and developers.

Projects with immediate goals can include sensitive or confidential information, but this did not prevent publishing my studies. In my career, the professional research community has tried and sometimes succeeded in blocking publication much more than industry management.

I hadn’t thought carefully about why only some of my work was published. It seems worthwhile to examine the relationships among research, action, and our motives for publishing.

A spectrum of research goals 

Research can be driven by curiosity, by a theoretical puzzle, by a desire to address a specific problem, or by a desire to contribute to a body of knowledge. The first HCI publication, Brian Shackel’s 1959 study of the EMIac interface, addressed a specific problem. Many early CHI studies were in the last category: By examining people carrying out specific tasks, cognitive psychologists sought to construct a general theory of cognition that could forever be used in designing systems and applications. For example, the “thinking-aloud” protocol was invented in the 1970s to obtain insight into human thought processes. Only in the 1980s did Clayton Lewis and John Gould apply it to improve designs by identifying interface flaws.

When GUIs became commercially viable, the dramatically larger space of interaction design possibilities shattered the dream of a comprehensive cognitive model. Theory retreated. Observations or experiments could seek either to improve a specific interface or to yield results that generalize across systems. The former was less often motivated by publication, and as the field became more academic, it was less likely to be judged as meriting publication.

Action research is an approach [1] that combines specific and general research goals. Whereas conventional research sets out to understand, support, or incrementally improve the current state, action research aims to change behavior substantially. Action researchers intervene in existing practice, often by introducing a system, then studying the reaction. Responses can reveal aspects of the culture and the effectiveness of the intervention. This is a good option for phenomena that can’t be studied in a lab and for which small pilot studies won’t be informative. A drawback to action research is that it is often undertaken with value-laden hypotheses that can undermine objectivity and lower defenses against the implacable enemy of qualitative researchers, confirmation bias. Action research is often employed in cultures not shared by the researchers. It is at one end of a continuum that reaches conventional research. My research was not intervention-driven; it has sought to understand first, then improve.

Publication goals

The goals of publication partly mirror the goals of research. Publication can contribute to a model, framework, or theory. It can help readers who face situations or classes of problems that the researchers encountered. Publication can enable authors to earn academic or industry positions, gain promotion or tenure, attract collaborators or students, astonish those who thought we would never amount to much, or become rich and famous. All worthy goals, and not mutually exclusive. Many are in play simultaneously—it’s nice when a single undertaking contributes to diverse goals.

An examined life

Returning to the question of why my favorite work is unpublished, aiming for an immediate effect introduces constraints and alters priorities, but it doesn’t preclude careful planning for subsequent analysis. Steve Benford and his colleagues staged ambitious public mixed reality events under exacting time and technology pressure, yet they collected data that supported publication.

Early in my career, my research was motivated by mysteries and roadblocks encountered while doing other things. There were a few exceptions—projects undertaken to correct a false conclusion in a respected journal or to apply a novel technique that impressed me. Arguably I was too driven by my context, afflicted by a professional ADHD that distracted me from building a coherent body of work. On the other hand, it ensured a degree of relevance in a dynamic field: Some who worked on building a large coherent structure found that the river had changed course and no longer flowed nearby.

My graduate work was in cognitive psychology, not HCI. I took a neuropsychology postdoc. My first HCI experiment was a side project, using a cool technique and a cool interface widget in a novel way, described below. The second aimed to counter an outrageous claim in the literature. The third explored a curious observation in one of the several conditions of the second experiment.

I left research to return to my first career, software development. There, challenges arose: Why did no one adopt our multi-user features and applications? Why were the software development practices of the mid-1980s so inappropriate for interactive software? Not finding answers in the literature, I gravitated back to research.

I persevered in researching these topics, but distractions came along. As a developer I first exhorted my colleagues to embrace consistency in interface design, but before long found myself often arguing against consistency of a wrong sort. I published three papers sorting this out. I was lured into studying HCI history by nagging questions, such as why people managing government HCI funding never attended CHI, and why professional groups engaged in related work collaborated so little.

Some computer-use challenges led to research projects. My Macs and PCs were abysmal in exploiting two-monitor setups; what could be done to fix that? As social media came into my workplace in the 2000s, would the irrational fear of email that organizations exhibited in the 1980s recur? Another intriguing method came to my attention: Design teams investing time in creating fictional characters, personas. Could this really be worthwhile? If so, when and why? And each of the three favorite projects arose from a local disturbance.

Pressure to publish. Typically a significant motivation for research, it is one I escaped my entire career. In my first week of graduate school, my advisor said, “Be in no hurry. Sit on it. If it’s worth publishing, it will be worth publishing a year later.” Four years later, too late to affect me, he changed his mind, having noticed that job candidates were distinguished by their publications. There is no telling how publication pressure would have directed me, but the pressure to finish a dissertation seemed enough.

I published one paper as a student. My first lab assignment was to report on a Psychological Review article that proposed an exotic theory of verbal analogy solution. Graduate applications had required Miller’s Analogy Test, and I saw a simpler, more plausible explanation for the data. I carried out a few studies and an editor accepted my first draft, over the Psych Review author’s heated objection. This early success had an unfortunate consequence—for some time thereafter, I assumed that “revise and resubmit” was a polite “go away” rejection.

Publication practices pushed me away from neuropsychology. I had been inspired by A. R. Luria’s monographs on individuals with unusual brain function. I loved obtaining a holistic view of a patient—cognitive, social, emotional, and motivational. The standard research approach was to form a conjecture about the function of a brain region, devise a short test, and administer it to a large set of patients. Based on the outcome, modify or refine the conjecture and devise another test. This facilitates a publication stream but it didn’t interest me.

The early CHI and INTERACT conferences had no prestige. Proceedings were not archived, only journals were respected. There was not yet an academic field; most participants were from industry. Conferences served my goal of sharing results with other practitioners who faced similar problems. It was not difficult to get published. Management tolerated publishing, but exerted no pressure to do it.

When I returned to academia years later, conferences had become prestigious in U.S. computer science. Like an early investment in a successful startup, my first HCI publications had grown sharply in value. I continued to publish, but not under pressure—I already had published enough to become full professor.

I did however encounter some pressure not to publish results along the way.

“Sometimes the larger enterprise requires sacrificing a small study.”

My first HCI study adapted an ingenious Y-maze that I saw Tony Deutsch use to enlist rats as co-experimenters when I was in grad school. It measures performance and preferences and enables the rapid identification of optimal designs and individual differences. I saw an opportunity to use it to test whether a cool UI feature designed by Allan MacLean would lure some people away from optimally efficient performance. It did.

A senior colleague was unhappy with our study. The dominant HCI paradigm at the time modeled optimal performance for a standard human operator. If visual design could trump efficiency and significant individual differences existed, confidence in the modeling endeavor might be undermined. He asked us not to publish. I have the typical first-born sibling’s desire to please authority figures, so it was stressful to ignore him. We did.

A second case involves the observations from software development that design consistency is not always a virtue. I presented them at a workshop and a small conference, expecting a positive reception. To my dismay, senior HCI peers condemned me for “attacking one of the few things we have to offer developers.” My essay was excluded from a book drawn from the workshop and a journal issue drawn from the conference. I was told that it had been discussed and decided that I could publish this work elsewhere with a different title. I didn’t change the title. “The case against user interface consistency” was the October 1989 cover article of Communications of the ACM.

Most obstacles to publishing my work came from conference reviewers who conform to acceptance quotas of 10%-25%, as though 75%-90% of our colleagues’ work is unfit to be seen. It is no secret that chance plays a major role in acceptances—review processes are inevitably imprecise. Was your paper assigned to generous Santas or to annihilators? Was it discussed before lunch or after? And so on. A few may deny the primacy of chance, just as a few deny human involvement in climate change, and probably for similar reasons. One colleague argued that chance is OK because one can resubmit: “Noise is reduced by repeated sampling. I think nearly every good piece of work eventually appears, so that the only long-term effect of noise is to sprinkle in some bad work (and to introduce some latency in the system)” [2].

Buy enough lottery tickets and you will win. In recent years, few of my first submissions have been accepted, but no paper was rejected three times. However, there are consequences. Rejection saps energy and good will. It discourages students. It keeps away people in related fields. Resubmission increases the reviewing burden.
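
A back-of-the-envelope sketch makes the trade-off concrete. Assume, generously, that each submission is an independent draw with a fixed acceptance probability—an assumption the noise argument itself undercuts—and the arithmetic is simple:

```python
# Back-of-the-envelope: if each submission were an independent draw with
# acceptance probability p, how often would a paper get in within k tries,
# and how many submissions (and review cycles) would each accepted paper cost?
# Illustrative only; real reviews are neither independent nor fixed-probability.

def accepted_within(p: float, k: int) -> float:
    """Probability of at least one acceptance in k independent submissions."""
    return 1 - (1 - p) ** k

def expected_submissions(p: float) -> float:
    """Mean submissions per eventual acceptance (geometric distribution, 1/p)."""
    return 1 / p

for p in (0.10, 0.25):
    print(f"p = {p:.2f}: accepted within 3 tries = {accepted_within(p, 3):.0%}, "
          f"submissions per accepted paper = {expected_submissions(p):.1f}")
# p = 0.10: accepted within 3 tries = 27%, submissions per accepted paper = 10.0
# p = 0.25: accepted within 3 tries = 58%, submissions per accepted paper = 4.0
```

Persistence eventually wins, but at the stricter quotas each accepted paper costs the community several extra rounds of reviewing.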

The status quo satisfies academics. More cannon fodder, new students and assistant professors, replace the disheartened. But for research with an action goal, such as that which I haven’t published, the long-latency, time-consuming publication process has less of a point.

This is an intellectual assessment. There is also an emotional angle. Some may regard a completed project that strived for an immediate impact dispassionately as they turn to the next project. I don’t. Whatever our balance of success and failure, I treasure the memories and the lessons learned. Do I hand this child over to reviewers tasked with killing 75% of everything that crosses their path? Or do I instead let her mingle with friends and acquaintances in friendly settings?

Endnotes

1. Or set of approaches: Different adherents define it differently.

2. David Karger, email sent June 24, 2015.



Posted in: on Fri, October 09, 2015 - 5:01:55

Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.
View All Jonathan Grudin's Posts


Post Comment


No Comments Found


Scripted interaction


Authors: Mikael Wiberg
Posted: Mon, October 05, 2015 - 9:51:06

"Interaction design" is a label for a field of research and for a practice. When we design interactive tools and gadgets we do interaction design. But what is it that we´re designing? And is this practice changing? Let me reflect on this a little bit.

Inter-action design

If we take this notion of interaction and split it into two words, we get inter and action. From this viewpoint, inter-action design is about designing the inter-actions to be supported by a computer system—the actions between a computational device and a person. This leads to a design paradigm focused on which actions a particular design should support, and on how to trigger these supported actions via an interface. In short, interaction design becomes an issue of mapping supported actions to an understandable user interface. Of course, we can add to this design paradigm that the menus, buttons, etc. designed to trigger these actions should be logically placed, easy to understand and use, and so on. Clearly, much of traditional HCI/interaction design relies on this paradigm, ranging in focus from interface design to issues of usability. From this viewpoint, interaction design is about arranging computational devices to support actions between users and their computers. Human-computer interaction (HCI) was a perfect label for describing the main entities in focus for this design paradigm—in particular, the focus on how to design interfaces for efficient interactive turn-taking between humans and machines.

Interaction design

But interaction design might not only be about designing the interfaces that trigger certain supported actions. As we can see in the HCI and interaction design research community, interaction design is also very much about experiences and experience design. Given such a perspective on interaction, we can think about interaction design not only in terms of designing the interfaces that trigger actions between entities, but also in terms of designing the form of the interaction per se: Should the interaction be rapid or slow? Frequent (in terms of user involvement)? Or should it be about massive amounts of information? When we start to think about designing not only the interfaces for interaction but the interaction per se, we shift focus from interface design to issues of experience, perception, emotion, bodily engagement, etc. With a focus on designing the interaction, we then need to add a focus on how we experience the interaction we are designing. Again, our community has a stable ground for doing interaction design from this perspective, including, for instance, a focus on experience design, embodied interaction design, etc.

Less and less interaction?

However, we are now facing a new trend and a new challenge for interaction design. With the advent of ubiquitous computing, robotics, the Internet of Things, and new digital services like IFTTT (If This Then That), it is likely we will interact less and less with digital products (at least in terms of the frequent and direct turn-taking between users and computers in typical interactive sessions, as we have thought about interaction design for more than a couple of decades now). In fact, we do not want to frequently interact with our robotic lawn mower, much less experience any interaction with this computational device. We just want it to work. If, on the other hand, we are forced to interact with it, it is probably due to a breakdown. In relation to this scenario, does interaction design then become a practice of designing for less (direct) interaction, or ultimately almost no interaction at all? (For a longer and more in-depth discussion of “less and less interaction,” see the paper “Faceless Interaction” [1].) However, this does not mean that we are moving into an era where interaction design is not needed. On the contrary(!), if we are going in the direction of having less and less direct contact with the computational stuff surrounding us in our everyday lives, then we need an interaction design paradigm that can guide us toward the design of well-functioning tools, objects, and devices that can live side by side with us in our everyday lives. So, how can we go about doing such interaction design?

A proposal: A focus on scripted interaction

In HCI and interaction design research we have a long practice of studying practices and then designing interactive systems as supportive tools for those practices. Typically this has led us toward a “tool perspective” on computers, and we have looked for ways of supporting human activities with the computer as a tool. As we switch paradigms—from the user as the active agent doing things supported by computational tools, toward a paradigm that foregrounds the computer and how it does lots of things on behalf of its user/owner—we can no longer focus only on designing the turn-taking with the machine (in terms of interface design, etc.), nor on designing how we should experience this turn-taking, these interfaces, or the device. Instead, and here is my proposal, we need to develop ways of doing good scripted interaction design, in terms of how the computational device can carry out certain scripted tasks. Of course, these should not be “dumb scripts,” but rather scripts that can take into account external input, sensor data, context-aware data, and so on. As a community, we already have techniques for developing good scripts by thinking about linked actions. For instance, we are used to thinking about user scenarios, and have even developed techniques for doing storyboards. However, we also need techniques for thinking about chains of actions carried out by our computational devices; we need tools for simulating how various interactive tools, systems, and gadgets can work in concert; we need tools and methods for designing interaction across services; and we need interactive tools and ways of examining and imagining how these systems will work and when breakdowns can occur. As we stand in front of this development and the design challenges ahead, we can also move forward in informed ways. For instance, the development of scripted interaction design methods, techniques, and approaches can probably find a good point of departure in the book Plans and Situated Actions by Lucy Suchman [2]. While our community always seems to look for the next big thing in terms of tech development, we can simultaneously feel secure that our theoretical grounding will somehow show us the way forward!
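
To make this concrete, here is a minimal sketch of what one such script might look like, using the robotic lawn mower as the example. The sensor names, thresholds, and actions are hypothetical, chosen only to illustrate a script that consults context data and involves the user only on breakdown:

```python
# A minimal sketch of a "scripted interaction": the device acts on our behalf,
# consulting sensor and context data instead of waiting for direct turn-taking.
# Sensor names, thresholds, and actions are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Context:
    raining: bool
    grass_height_mm: float
    people_in_garden: bool
    battery_pct: float

def mow_decision(ctx: Context) -> str:
    """A chain of scripted actions; the user is involved only on breakdown."""
    if ctx.battery_pct < 20:
        return "return_to_dock"   # handle its own maintenance, no user involved
    if ctx.raining or ctx.people_in_garden:
        return "wait"             # context says "not now"
    if ctx.grass_height_mm > 40:
        return "mow"              # the condition the script exists for
    return "idle"

# Example run: the script decides without asking the user.
print(mow_decision(Context(raining=False, grass_height_mm=55,
                           people_in_garden=False, battery_pct=80)))  # -> mow
```

The harder design work lies in how such rules compose across devices and services, and in noticing when the composition breaks down.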

Endnotes

1. Janlert, L-E. and Stolterman, E. Faceless interaction—A conceptual examination of the notion of interface: Past, present and future. Human–Computer Interaction 30, 6 (2015).

2. Suchman, L. Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press, 1987.



Posted in: on Mon, October 05, 2015 - 9:51:06

Mikael Wiberg

Mikael Wiberg is Professor of Informatics in the Department of Informatics at Umeå University, Sweden.


Go West!


Authors: Deborah Tatar
Posted: Tue, September 01, 2015 - 3:18:05

As a professor, I now get to witness young people aspiring to “go West.” They know the familiar trope “Go West, young man,” ascribed to the 19th-century publisher Horace Greeley. They inherit the idea of manifest destiny, even when the term itself was buried in a single paragraph in their high-school American History textbooks. They have heard of the excitement of Silicon Valley, the freedom of San Francisco, the repute of Stanford, and perhaps experienced the beauty of the Bay with its dulcet breeze and the sun that seems to transmit energy to young healthy human animals directly through the skin. Seattle, Portland, Vancouver: a bit darker and wetter, but all romantic technology and design destinations, infused with Maker and open-source ideology, and even the tragically hip. And then there is the far, far west—although neither Taiwan [1] nor Shenzhen [2] is really on my students’ minds yet. Go West and seek your future!

Yet the advice to go West is associated with a darker kind of idealism: a tall, thin, awkward young man in WWI England haltingly explaining his enlistment, saying that he will “go West” [3]—that is, die—proudly if he must. As Siegfried Sassoon wrote, witheringly from the trenches: “You'd think, to hear some people talk/ That lads go West with sobs and curses/ … But they've been taught the way to do it/ Like Christian soldiers; not with haste/ And shuddering groans; but passing through it/ With due regard for decent taste.” (S. Sassoon, How to Die). In going West, what are my undergraduate students going into? Is it trench warfare or all Googly-eyes?  

Luckily for my conscience, my role does not much matter. What I say matters little because these undergraduates already have a fixed image. And they are not the only ones. Indeed, I started writing this in chagrin after a recent conversation with my own mother. She was excited and shocked to report that she had learned that Steve Jobs was a Deeply Flawed Human Being. I tamped my reaction to this news down to the utterance, “This is not a surprise.”

(This is the first of six or so posts that will have to do with our design choices, the settings in which we make them, and, especially, the position of women.)


Endnotes

1. Bardzell, S. Utopias of participation: Design, criticality, and emancipation. In Proceedings of the 13th Participatory Design Conference: Short Papers, Industry Cases, Workshop Descriptions, Doctoral Consortium Papers, and Keynote Abstracts 2 (Oct. 2014), 189–190. ACM, New York.

2. Lindtner, S., Greenspan, A. and Li, D. (2015) Designed in Shenzhen: Shanzhai manufacturing and maker entrepreneurs. In Proceedings of the 2015 Critical Alternatives 5th Decennial Aarhus Conference (2015), 85–96.

3. Cambridge Idioms Dictionary, 2nd ed. S.v. "west." (Retrieved Aug. 30, 2015 from http://idioms.thefreedictionary.com/west)



Posted in: on Tue, September 01, 2015 - 3:18:05

Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.




Building troops


Authors: Jonathan Grudin
Posted: Wed, August 19, 2015 - 9:44:32

“Will you miss Khaleesi?” asked Isobel. At that moment, the samango urinated on Eleanor’s shoulder. “Ummm, yes?” Eleanor replied.

The primates at the Riverside Wildlife Rehabilitation Centre outside Tzaneen, South Africa were abandoned or confiscated pets, survivors of massacres by farmers who left newborns behind, injured by predators or motor vehicles, and so on.

Unexpectedly, the principal task of the volunteers (including my family in early July) on whom the center relies is not rehabilitating individual animals. It is building troops—establishing contexts that encourage and strengthen appropriate social behaviors in groups of vervet and samango monkeys and baboons. Many have no history of interacting with others of their species. For most volunteers, it is deeply engaging. Many extend their planned stays or return. Alpha volunteer Ben initially came for eight weeks; after two weeks he extended his tour to several months and was partway through a year-long return visit.

More predictably, a consequence of working closely with other primates is to reflect on similarities and differences between them and us. This yielded an insight for my work on educational technology design.

The cast

We joined three permanent staff members, a handful of local contract workers, about 35 volunteers, 140 baboons, 17 samango or Syke’s monkeys, and over 300 vervet monkeys. The 30 hectares are also home to several dogs (who get along fine with the monkeys), chickens, two ostriches, and wild troops of baboons and vervets. The volunteer population fluctuates seasonally, ranging from about 5 to 35. Primate numbers usually fluctuate slowly; most animals spend years there. However, a release into the wild of a troop of 80 baboons was imminent.

Covered wire-mesh enclosures for juvenile primates [1] comfortably hold a few dozen of them and us. They include constructions of suspended two-inch diameter stripped hardwood branches and bars on which the primates can race around. Larger enclosures house the older “middles,” in which troops are formed. The still larger pre-release camps are homes to the increasingly self-sufficient troops that are approaching release into carefully planned wild locations. These animals are often out of human view. Fences with an electrified top strand surround uncovered enclosures.

Major tasks

Food prep. Five hundred primates eat a lot. Tzaneen is in a major agricultural region. Riverside gets free or discounted imperfect food. Papayas, bread, cabbage, bananas, eggplants, oranges, and grapefruit arrive in truckloads and must be offloaded, stored in crates, and sorted daily for ripeness. Morning food prep is a major task: washing food and cutting it with machetes into different sizes for delivery in large bowls and crates to juvenile, middle, and main camps, with smaller quantities for quarantine and clinic enclosures that house a handful of animals. Milk with a pro-biotic is prepared for seven very young baboons and Khaleesi, the infant samango. Dozens of crates, machetes, and the food prep area are scrubbed or hosed down every day.

Enclosure cleaning. Equally important and laborious is the dirty, olfactory-challenging job of cleaning juvenile enclosures: removing food remains and excrement from everything and scrubbing it all down.

Monkey time. The other major task is to play with primates, particularly the juveniles. A group of volunteers sit in a cage [2], talking for an hour or more while baboons or vervets eat, race around, climb on the branches or the volunteers, inspect and present to the volunteers, and so on. Volunteers soon recognize the personality differences of different primates—and vice versa, baboons and vervets have favorite volunteers. We are teaching troop behaviors, taking the place of adult baboons or vervets and gently keeping them in line. The power of troop identification is demonstrated in daily walks to a distant pool. A juvenile baboon that escapes an enclosure may avoid recapture for days, but when a group of us let out all 11 baboons and walk to the pool, they race around freely (or climb up a volunteer to be carried) but do not try to escape. We then sit around the pool and talk as they play, racing around, mock-fighting, climbing trees, jumping onto and off of us, stealing sunglasses or hairbands, leaping into the pool, exploring odd things they find, and getting into mischief. After an hour we walk back, and they stay with us, a protective troop, and they return without protest to their enclosure. Social behavior is constant—individual baboons approach volunteers and present in different ways; we are told how to respond and what to avoid doing. Ben, the most familiar human, often had four or five baboons climbing on him as we walked along.

Other tasks. We harvest edible plants from areas outside the enclosures to supplement what grows in the larger ones, familiarizing the animals with what will constitute their fare in the wild. Another task is to assess troop health: We walk to the large pre-release enclosure, select a primate, and observe it for half an hour, coding its behavior in two-minute intervals. Other tasks arise less frequently, such as inserting a microchip for location monitoring into a new arrival or the ingenious process of introducing a new primate to a troop. And as discussed below, teaching newer volunteers the details of the tasks just described is itself a major task.

The fourth primate

In an unspoken parallel to baboon and monkey preparation, Riverside forms effective troops of volunteers, fostering skills that will serve them well when they leave. A month visit can be invaluable for resilient young people taking a gap year or a time-out to reflect on their careers. My family were outliers—most volunteers were single, with some young couples and siblings. Almost all were 15-35, two-thirds were women, and they came from the UK, Netherlands, USA, Belgium, Switzerland, Israel, and Norway. A chart in the dining room listed past involvement from dozens of other countries. Some volunteers had prior experience in centers focused on lions, elephants, sharks, and other species. We had previously visited a rescue center in Costa Rica.

Most days a volunteer or two left and others arrived, creating an interesting dynamic. Getting newbies up to speed on all tasks is critical. After a week, volunteers know the tasks. After two weeks, they are experts. After three weeks, most of the people with more experience have left, so they become group leaders. Group leaders are responsible for training new arrivals and getting crucial tasks accomplished. This means guiding a few less-experienced volunteers who have different native languages, backgrounds, and personalities. We saw volunteers grow in confidence and leadership skills before our eyes. Rarely do people develop a sense of mastery, responsibility, and organizational dynamics in such a short time while doing work that makes a difference. As a side benefit, future parenting of a messy, temperamental, dependent infant will not be intimidating, although this could differ for volunteers in a shark or lion rehab center.

Children and adults

Years ago, I watched schoolchildren who in large numbers shared an unusually long multi-floor museum escalator with me. I did one thing—watch them—but the kids were whirlwinds of activity, talking with those alongside, behind, or in front of them; hopping up or down a few steps; taking things from backpacks to show others; looking around and spotting me looking at them; and so on. In a few minutes, most shifted their attention a dozen times or more.

Juvenile primates are like that. One found a bit of a mostly-buried metal connector next to me at the pool and pulled at it, then quickly dug out the dirt around it, pulled more, brushed it off, pulled again, and then raced off to chase another baboon. Anything they found or stole, they examined curiously as they ran off with it, then dropped it. Their main focus was each other and us. Whether leaping in the pool or climbing trees, they tended to do it in groups, chattering constantly. When one of us retrieved a large hose they were trying to drag off, they looked for an opportunity to mischievously make off with it again.

The adult baboons in the pre-release troop are different. They usually walk slowly and focus at length on one task—harvesting and eating flowers or pods, sitting and surveying the compound, and so on. They interact less frequently—usually amicably, but not playfully. Adults and juveniles seem different species.

Adult baboon software developers would have trouble designing tools that delight juvenile baboons.

Implications for design

Watching them, I realized that although I’ve spent hours sitting in classrooms, I’d not thought holistically about a troop of children. “Ah,” you might say, “but it’s obvious, we know children are different. We were once kids. Many of us have kids.”

No, it’s not obvious. Differences are there, but we don’t see them. Painters depicted children as miniature adults for centuries before Giotto (1266?-1337), famous for naturalistic observation, painted them accurately, with proportionally larger heads. Today, a child is often seen to be a partially formed adult, with some neural structures not yet wired in—a tabula rasa on which to write desirable social behaviors. This focuses on what children are not. It overlooks what they are, individually and collectively.

Without unduly stretching the analogy, children monitor “alpha males and females” and figuratively hang on to favorites much as the juvenile baboons literally hung onto Ben. Video instruction will not replace teachers for pre-teens. Kids explore feverishly, test boundaries, learn by trial and error and from cohort interactions, and get into mischief more than adults do. They wander in groups, carrying backpacks stuffed with books, folders, and tools.

I hadn’t seen these distinctions as comprising a behavioral whole. Some, I hadn’t considered at all.

Tablets. For adults, carrying tablets everywhere is inconvenient. Making do with a mobile phone is fine. Seeing students as a troop, arriving at school with stuffed backpacks, it sank in for the first time that for them, adding a tablet could be no problem. With digital books, notebooks, and tools available, a tablet can reduce the load.

Learning through trial and error. Adults, including most educational technology designers, use pens, not pencils. I collected the pencils left around our house by Eleanor and Isobel. The problem isn’t finding a pencil, it is finding one with some eraser left on. Trial and error is endemic in schoolwork; a digital stylus with easy modeless erasing fits student behavior.

Color. Adults may forget that when they were children, they too were fascinated by colors. When collecting pencils around the house, I also found many color pencils and markers of different sizes. Unlimited color space and line thickness also come with digital ink.

Cohort communication. The pace of communication among students might be more systematically considered. Teachers often first hear about new applications from students. One Washington school district designates “Tech Minions” to exploit this information network.

Conclusion: the significance of troops

More examples of student-adult distinctions that inform the design of technology for students could be identified. More broadly, this experience brought into focus the salience of social behavior. Studies of primate intelligence focus on tool use—rocks to crack nuts or thorns to spear insects. Useful, but these guys eat a lot of things. Finding food may be a lower priority than avoiding being food: for hyenas, wild dogs, lions and other cats on the ground, for eagles, snakes, and leopards when arboreal. They organize socially and observe other species to increase safety. They work as a group when threatened. They have a range of sophisticated social behaviors. Could Riverside’s process for inserting a baboon peacefully into a troop be adapted for bringing on a new development team member?

Ummm, yes?

Endnotes

1. At Riverside the younger primates are called “babies.” I call them “juveniles” here because, although less than a year old, they are actively curious and mischievous, climb trees, fight and play together, and recognize a range of people and conspecifics—often resembling children or young teens. My daughter Isobel firmly notes that at times they act like babies.

2. Although called cages, these enclosures of about 15’ by 15’ by 10’ are spacious given the small size of the residents and their ability to climb the walls or retreat up into the branches.

Thanks to Eleanor Grudin and Isobel Grudin for Riverside details and reports from the juvenile enclosures, Gayna Williams for planning the trip, and Michelle Vangen for an art history assist.



Posted in: on Wed, August 19, 2015 - 9:44:32

Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.


Central control


Authors: Jonathan Grudin
Posted: Tue, July 14, 2015 - 10:14:10

Two impressive New Yorker articles described powerful, laser-focused leaders whose vision affects technology design and use. Xi Jinping consolidated control of the world’s largest country. Jonathan Ive did so at the world’s most valuable company. Both have reputations for speaking frankly while avoiding impolitic statements. They listen, then make decisions confidently. Each is a good judge of character who assembled a highly capable and loyal inner circle. They triumphed through strategic, non-confrontational politics.

“Before Xi took power, he was described, in China and abroad, as an unremarkable provincial administrator… selected mainly because he had alienated fewer peers than his competitors” [1].  Xi volunteered for geographically remote posts that distanced him from the Beijing struggle—until a call came for a new face that would address corruption.

“Ive’s career sometimes suggests the movements of a man who, engrossed in a furrowed, deferential conversation, somehow backs onto a throne” [2].

Once enthroned, each extended his control with a clear sense of purpose.

Some analysts argue that centralized management is not viable in the complex 21st century global economy. In the late 20th century, Ronald Reagan, the autocrat Lee Kuan Yew who created modern Singapore, Steve Jobs, and Bill Gates made impacts. Today, presidents, prime ministers, sultans, and sheiks seem to struggle as large corporations drift from one CEO to another. However, the careers of People’s Republic of China President Xi Jinping and Apple Chief Design Officer Jonathan Ive suggest an effective new leadership style.

The roused lion and the Internet

Xi Jinping set out to attack corruption, address pollution and environmental threats, assert China’s place on the world stage, and maintain economic growth. The first two goals will take time. Xi chose initial steps toward them that also consolidated his power.

The understandable third and unsurprising fourth goals serve rational Chinese interests, but they will conflict at times with goals of other countries. Napoleon once described China as a sleeping lion. Xi remarked, “The lion has already awakened, but this is a peaceful, pleasant, and civilized lion.” We hope so. The treatment of China’s ethnic minorities undermines confidence. China’s efforts to control disputed islands are domestically popular but risk destabilizing the world. Should China’s economy falter, distracting the public with assertiveness abroad could be a temptation.

A few months ago China shut down all internal access to a range of Internet sites, including Bloomberg, Reuters, The New York Times; Facebook, Twitter, and Google. Filters that had previously blocked them could be circumvented by the use of virtual private networks (VPNs). Communist China has long granted special privileges to a hierarchical elite group, of which Xi was a member from birth. The elite are trusted to behave. The government decided that too many untrusted actors had acquired VPN access.

Restricting information to an elite is not new. For centuries, the Catholic Church tolerated discussion and dissent by an elite whose VPN was Latin and Greek. The masses were excluded. Indeed, to prevent the public from seeing how far church practice had strayed from scripture, translating the Bible from Latin to a living language was a crime more serious than murder [3]. The first person to publish an English Bible translation was executed. Martin Luther declared war by posting proclamations in German, not Latin.

That was in the age of monarchs. How widespread is elite governance today? Chinese leaders grew up together, many British leaders attended Eton, and all nine current Supreme Court Justices plus all U.S. presidents since Reagan are alumni of Harvard or Yale [4]. Five million Americans—more than 1%!—have security clearances that permit them to access information but not discuss it publicly. Differences are mostly of degree. No doubt many in Chinese military, intelligence, and economic planning retain Internet access, even apart from those using it to hack our systems. 

To maintain economic growth, China may steal intellectual property through hacking and informants. Annoying, but again, nothing new. Americans stole carefully guarded British water power intellectual property to launch our industrial revolution. The French bugged high-price Air France seats to collect economic information. A colleague at a previous job pretended to drink alcohol at business gatherings to catch gems that fell from loosened lips. But it’s not fun when it’s your turn to be victim.

Will curtailed Internet access pay off for China? Suppressing dissent could build consensus and national spirit—would that compensate for reduced innovation and nimbleness? Mass media enjoys focusing on technology negatives, but many of us appreciate the information access provided by Wikipedia, YouTube, and Facebook. China could drive hundreds of thousands of budding entrepreneurs to emigrate. Of course, that would leave a billion people to soldier on. Mao reportedly said that China should kill one person in a thousand to remain unified. A more civilized, pleasant lion might send them into exile, perhaps buying time to prepare the country for greater openness. We’ll see.

Empowered design

Steve Jobs was forced out of Apple in 1985. When he returned in 1997, he gave Jonathan Ive, an award-winning designer who had joined in 1992, responsibility for the group that designed the iMac, iPod, iPhone, and iPad hardware. After Jobs’ death, Ive’s control extended to software and architecture. Ian Parker describes Ive working tirelessly with a tight-knit group and a clear vision, focusing intensely on Apple’s first major post-Jobs product, the Apple watch.

In April, the watch was available for pre-order. In May, Ive became Chief Design Officer, Apple’s third chief officer. Lieutenants assumed responsibility for the day-to-day hardware and software operations. Ive’s focus was said to be Apple’s ambitious building project, but that seemed well underway months earlier. Is Ive focused on an unannounced project, such as Apple’s rumored automobile? Will he remain broadly engaged despite his lieutenants’ promotions, or did the watch project exhaust Ive, who reportedly took his first long vacation when it was done? In any case, more than Apple’s financial returns may ride on the success of the watch.

Does direct contact with users have a future?

Steve Jobs thought not.

Substantial human factors/usability engineering went into the Macintosh. Most of it was done at Xerox PARC. An eyewitness account depicted in the film Jobs  starts with Steve Jobs accusing Bill Gates of stealing the Mac GUI. "Well, Steve, I think there's more than one way of looking at it,” Gates replies. “I think it's more like we both had this rich neighbor named Xerox and I broke into his house to steal the TV set and found out that you had already stolen it.”

Apple hired Xerox’s brilliant advocate of user testing, Larry Tesler. But only after Jobs was forced out in 1985 was Tesler able to form a Human Interaction Group. Between 1985 and 1997, Apple built two HCI groups, one in the product organization and one in “advanced technology” or research. Joy Mountford arrived in early 1987, recruited outstanding contributors with backgrounds ranging from psychology to architecture and theatre, and led Apple in staging bold demonstration projects at CHI, publishing research papers and the influential book The Art of Human-Computer Interface Design, and forging new links between HCI and the fields of design and film. In 1993, Apple hired Don Norman, who became the first executive with a User Experience title.

The meteoric rise of HCI at Apple was followed by a faster descent when Jobs returned in 1997 in the midst of financial challenges. Jobs laid off Don Norman and the researchers, reportedly telling one manager, “Fire everyone, then fire yourself.” He eliminated the HCI group in the product division. Some graphic designers were retained, but of the usability function, Jobs reportedly said, “Why do we need them? You have me.”

For almost two decades, Apple has been absent from HCI conferences. Ian Parker describes Apple engineers walking about waving prototype watches to see if they behave, but systematic user testing has not been a priority. Consequences include products that quickly disappeared, such as the hockey puck mouse, and performance problems more generally.

Nevertheless, Apple has succeeded spectacularly. This success calls into question the value of direct interaction with users. HCI conferences such as CHI, HCII, and INTERACT draw thousands of submissions and attendees every year. Apple ignores them, yet outperforms rivals who participate. How do we explain this? Possibilities:

  1. User research is important; Apple got lucky. The successes were rolls of dice that also came up with the Lisa, the Newton, and Apple TV.
  This is possible, but it would be unfortunate because few will believe it. Management experts are like the drug cartel chiefs in a Ridley Scott movie, about whom one character says ominously, “They don’t really believe in coincidences. They’ve heard of them. They’ve just never seen one.”

  2. User research is no longer cost-effective. Brilliant visual designers, telemetry, and agile methods that enable rapid iteration with a delivered product can fix problems quickly enough.
  This would suggest that much of the HCI field risks being an academic promotion mill that could implode, leaving the job to visual design and data analytics.

  3. User research isn’t needed for consumer products for which everyone has intuitions. Brilliant visual design and branding provide the edge. User research may be needed when developing for vertical markets or enterprise settings where we lack experience.
  This is probably at least part of the explanation. Visual design was long suppressed by the cost of digital storage. When Moore’s law lifted that constraint, a surge of innovation followed. From this perspective, the prominence of design could be transient or could remain decisive. For established consumer products like automobiles and kitchen appliances, design is a major partner. In more specialized domains, equilibrium will take longer to reach.

  4. User research is important unless someone with extraordinary intuitive genius is in control. Apple did just need Jobs.
  Ive and Jobs collaborated intensely for a decade, designer and design exponent. If Jobs had intuitive insight into public appeal and channeled the designers, guiding Apple and Pixar products to success, what happens now that half the team is gone?

Ive personally gravitates toward luxury goods. He drives a Bentley Mulsanne, described by Parker as “a car for a head of state,” flies in a personal jet, buys and builds mansions, hangs out with celebrities, and designs unique objects for charity auctions. Does he have deep insight into the public? The nature of Steve Jobs’ contribution or genius may be revealed by the subsequent trajectory of his partner.

If the smartwatch succeeds, (2) and (3) are supported and (1) and (4) are not. Initial reports are mixed. A class of master's degree students peered at me over their MacBooks a few weeks ago. "Who plans to buy an Apple watch?" I asked. No hands went up. "That seems to have come and gone," one said. But some media accounts hail the watch as a runaway success. It is early: The iPod was slow to take off, and the Mac itself floundered for 18 months, a factor in Jobs being fired in 1985 [5].

Are design and telemetry enough?

Apple’s success in relying on design has been noticed by rivals. At Microsoft 10 years ago, user research (encompassing usability engineering, ethnography, data mining, etc.) was a peer of visual design. Today, it is subordinate: Most user researchers, re-christened design researchers, report to designers. Fewer in number, most have remits too broad to spend much time with users. In another successful company, a designer described user researchers as “trophy wives,” useful for impressing others if you can afford one.

Management trusts telemetry and rapid iteration to align products and users. Usability professionals discovered long ago that to be most effective they should be involved in a project from the beginning, not asked at the end "to put lipstick on a pig." Telemetry and iteration propose to fix the pig after it has been sold and returned. This can work with simple apps and easygoing customers, but not when an initial design has led to architecture choices that block easy fixes.

Useful telemetry requires careful design and even then reveals what is happening, but not why. Important distinctions can be lost in aggregated data. Telemetry plays into our susceptibility to assume causal explanations for correlational data, which in turn feeds our tendency toward confirmation bias. Expect to see software grow increasingly difficult to use. A leading HCI analyst wrote, “Apple keeps going downhill. Their products are really difficult to use. The text is illegible. The Finder is just plain stupid, although I find it useful to point out the stupidities to students—who cannot believe Apple would do things badly.” As Apple’s approach is emulated, going downhill could become the new normal.
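
To make the aggregation point concrete, here is a minimal sketch in Python using invented numbers (the segment names and task timings are hypothetical, not drawn from any product's telemetry). The overall average looks acceptable on a dashboard while one group of users is clearly struggling, and even the segmented numbers say only what is slow, not why.

# Minimal sketch: aggregate telemetry can hide the distinctions that
# explain *why* users struggle. Hypothetical task-completion times
# for two input-method segments.
from statistics import mean

events = [
    {"segment": "mouse", "seconds": 12},
    {"segment": "mouse", "seconds": 14},
    {"segment": "mouse", "seconds": 11},
    {"segment": "pen",   "seconds": 38},
    {"segment": "pen",   "seconds": 41},
    {"segment": "mouse", "seconds": 13},
]

# The dashboard number: one healthy-looking average.
print(f"overall mean: {mean(e['seconds'] for e in events):.1f}s")

# Segmenting the same events reveals that pen users are struggling...
by_segment = {}
for e in events:
    by_segment.setdefault(e["segment"], []).append(e["seconds"])
for segment, times in by_segment.items():
    print(f"{segment}: mean {mean(times):.1f}s over {len(times)} events")

# ...but the numbers still say only *what* is slow, not *why*;
# answering that requires watching a few pen users work.

Even this toy example assumes someone thought to log the right segment field in the first place, which is the "careful design" referred to above.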

Maybe it is a pendulum swing. The crux of the matter is this: A few hours spent watching users can unquestionably improve many products, but when is it worthwhile? Quickly launching something that is “good enough” could be a winning strategy in a fast-paced world of disposable goods. The next version can add a feature and fix an egregious usability problem. Taking more time to build a more usable product could sacrifice a window of opportunity.

A new leadership style may emerge. Osnos concludes, “In the era of Xi Jinping, the public had proved, again, to be an unpredictable partner…‘The people elevated me to this position so that I’d listen to them and benefit them,’ he said... ‘But, in the face of all these opinions and comments, I had to learn to enjoy having my errors pointed out to me, but not to be swayed too much by that. Just because so-and-so says something, I’m not going to start weighing every cost and benefit. I’m not going to lose my appetite over it.’” 

On a more cheerful note, systematic assessment of user experience may be declining in some large software companies, but it is gaining attention in other companies that are devoting more resources to their online presence. For now, skills in understanding technology use have a strong market.

Endnotes

1. Evan Osnos, Born red. The New Yorker, Apr 6, 2015. http://www.newyorker.com/magazine/2015/04/06/born-red.

2. Ian Parker, The shape of things to come. The New Yorker, Feb 23, 2015. http://www.newyorker.com/magazine/2015/02/23/shape-things-come.

3. This was a plot element in the recent drama Wolf Hall.

4. Reagan’s principal cabinet officers were from Harvard, Yale, and Princeton.

5. The first Mac had insufficient power and memory to do much more than display its cool UI. This is discussed here and was briefly alluded to in the film Jobs.

Thanks to Tom Erickson and Don Norman for clarifying the history and terminology, and to Gayna Williams and John King for suggestions.



Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.




Critical waves


Authors: Deborah Tatar
Posted: Mon, July 13, 2015 - 10:41:13

I have just been trying to figure out how to explain some of our design ideas inside the various ACM/CHI-ish communities in the upcoming paper-writing season. 

I have found that if I keep my head down and focus on the work of shaping ideas and things, enough comes along career-wise to keep body and soul together, but I’m not really all that good at getting my ideas published. (My second-most-cited paper, “The Three Paradigms of HCI” [1], was never properly published because, for structural reasons, it was routinely evaluated by people who disliked it. Warning: If you write a paper called “The Three Paradigms of X,” you can expect it to be reviewed by three people, each an esteemed proponent of one of the paradigms, and each equally likely to dislike the premise of the paper, since no claim is made about the primacy of any of them.) 

But the fact that I’ve made a career despite problems like this does not particularly help my students. In general, I love my students and would like to tell them Useful Things that lead to their Growth, Well-Being, and Success. 

So, in the course of positioning work, I turned to a slowly unfolding debate to figure out how to present some of the ideas that our group has been working with, and to explain to myself why one of my most important papers, reflecting on what important topics design should address in the next 10 years, was recently rejected. This note reflects a kind of selfish approach, less concerned with what people actually have been saying and more concerned with reading how papers will be or are being read. In this post, I call that their shadow, but I might also call it connotation rather than denotation, or, coming from a different philosophical position, perlocutionary meaning.

I went back and re-read some of the Bardzells' work on critical design ("What is 'Critical' about Critical Design?" [2] and "Analyzing Critical Designs" [3]) and then Pierce et al.'s response [4] in the 2015 CHI proceedings. And then I asked myself what purpose human-computer interaction research fulfills in the world—and who gets to decide. A key idea to me is to answer through research and thought, "what is important to design?", "how do you know that you have done it?" and, increasingly, "how does/can design shape the world?" Many of these are ideas that I share with Steve Harrison.

Critical design: reflexivity and beyond

The notion of critical design is very welcome and very familiar to me—my first degree was in English and American Literature and Language, and I love the idea of provoking design ideation through critical inquiry. The Bardzells develop the ideas of critical theory for HCI design in “What is Critical…”, a CHI paper that advocates critical theory as an important lens for both designer and consumer: reflection, perspective shifting, and theory-as-speculation as methods of understanding what is around us and taking design action, and supporting these with dialogical methodology. I or my students can take that paper, use it to propose a design, and say “This is what we were thinking. What do you think?” 

This is in line with Schön's reflective practitioner, of course. More broadly, this has a strong family resemblance to the design tensions framework [5] that I proposed in 2007, which was also a method of getting people to think more deeply about the meaning of their design choices and what constitutes value in a design situation. The design tensions framework was concerned with how easy it is in HCI to overlook design success if that success is due to an accretion of small factors rather than a single big idea. But it is only one piece of the puzzle, and "What is Critical…" seems to me to provide another piece.

Canonical—and non-canonical—examples

Following on “What is Critical…”, a graduate student, Gabriele Ferri, working with the Bardzells, wrote a DIS 2014 paper that could be interpreted as playing a nice speculative game, taking critical design to the next level. On one hand, this paper addresses absolutely crucial questions for design researchers and for teachers: What constitutes a contribution in the field and how do we recognize one? One component consists of examples, particularly teaching examples, of critical design. This caught my attention, in part because in the “Three Paradigms” paper, we criticize HCI for a lack of key teaching examples, and note that teaching examples were one of the chief elements that Kuhn noted in identifying something as a paradigm. The absence of teaching examples in HCI makes teaching HCI seem like trying to land on Jupiter; we can point to an accretion of stuff that moves around the sun in an orderly way, but, hey, it’s all gas! Does a person really want to go to a planet where there is nothing to land on or grasp? So, I respect the fact that the paper took on the question of examples. Furthermore, I appreciate that part of the paper is saying, “Let’s openly talk about what is in and what is out.” It accepts that people’s careers rise and fall on judgments about their work (that, at least, is graspable!). This is very important for discourse in the field. 

On the other hand, there was something troubling about the Ferri et al. 2014 DIS paper. As I read it, I could hear unpleasant echoes of family holiday meals long ago at which my various relations would defend the scholarly canon of knowledge, scoff at "the so-called social sciences"—"They're not sciences!" "Physics is the only real science!" "Math is the queen of disciplines"—and otherwise revel in their refinement and insight.

OK, the paper does not, strictly speaking, propose a canon. In fact, the canon that the paper does not propose (that is, a non-canon) is very intimidating. If you, newbie, claim to be doing critical design, are you going to get smacked down because your design isn’t in the non-canon? What if your design does not rise to the level that a person would call, for instance, transgression (transgression being part of the non-canon)? If you say that your design is in the non-canon, then does it sound less like research? 

The notion of canon that is not proposed by Ferri et al. is scary enough that, reading the paper, I do not know what to tell my students. Anxiety is not always a bad thing—maybe my students and I should be anxious, maybe we all should be a lot more anxious about the contributions of our field, maybe this is precisely what I helped call for in a different paper, "Making Epistemological Trouble" [6]—but, indeed, the shadow cast by this paper is quite anxiety-provoking!

Trying to illuminate HCI using shadows from design 

So, with these musings, I came to the Pierce et al. paper from this past CHI that, I'm told, is widely interpreted as a response. The authors identify a number of problems, some deep, some shallow, with the approach proposed and explored by the Bardzells and their colleagues. Like me, Pierce et al. also appeared to think that the prospect of a canon, even the shadow of a canon, is scary. They point out that critical design is not the only way for design and HCI to interact. Pierce et al. are certainly correct that only a trickle of ideas about design has made it into HCI thinking. True, but somehow unfair; I have to point out that only a trickle of ideas about experimentation and modeling has made it into HCI, yet few people associated with the Third Paradigm trouble themselves to obtain anything more than a stereotypic, fifth-grade view of the Second Paradigm. The authors then go on to propose some possible futures. There are some nice ideas about openness. As with the earlier papers, there is a denotative content that is interesting and the sense of active minds at play.

But, like the Ferri and Bardzell paper it criticizes, the Pierce et al. paper also has shadows, and to my mind they are darker ones. It really does not take on the problem that the Bardzells and company address about how HCI should be open to design. The shadow of the Pierce et al. approach is that, while it promises freedom (“open up HCI to all kinds of ideas from design”), it threatens domination by the few, the arbiters of design. Is design in HCI nothing more than translation from design to HCI? If it is only translation, that does not sound like freedom to me. And then the injunction to “focus on tactics, not ontology” makes a major power move that seems to threaten to shut down discourse. Although the authors disclaim the desire to control speech about design themselves, they write that “the issue of the designer’s intention needs to be handled carefully.” What does that mean? Who gets to say it? 

Pierce et al. call for metadata about the designer's intention to be taken into account in discussing ideas.

Really? What justifies that position?

Addressing the designer's intention cannot find its justification in arguments about how design works, because when a designer creates an artifact it goes out into the world with whatever juju marketing and circumstance give it. Michael Graves was recently memorialized in Metropolis by someone who had worked on his low-end consumer product line for Target. His colleague praised Graves' focus on usability. That's nice. Nonetheless, I am perfectly free to tell you that I was relieved when my Graves-designed hand-mixer finally died. The balance was wrong and my kitchen walls were frequently spattered. The only reason I know anything about the designer's thought is because Graves' achievements lay towards the art side of design, and artists may sometimes be offered a little blurb and an occasional token of respect or, in this case, a tribute. I concede that Graves' purpose was not to promote egg-based wall decoration in my home, but this is irrelevant to my experience as a user or even a client of the design work.

So this reverence for the designer's intent is not, I think, a claim about design. Is it a claim about design research? Well, on the one hand, in any kind of research, we make reference to prior work, but, in HCI more than most of the other fields of my concern and expertise, we are also highly selective. For goodness' sake, we write ten-page papers! Why should the designer's intention, which is, importantly, also the researcher's intention, be so prioritized? The direct thought that Pierce et al. present is about respecting authorial voice. That is an important discussion to have, a discussion that I would recast as concerned with developing schools of thought as we move forward in design research in HCI. But let's go back to the shadow and the question of what I tell my students.

I see something much less discussable and more draconian in the shadow of the Pierce et al. paper than in the Bardzells’ papers. I am sure that Pierce and company themselves would be horrified were I to tell my students, “Do not reconsider the meaning of (for example) the Drift Table because Gaver hath spoken and he hath named it Ludic!” That is a very dark shadow indeed, because unanswerable.

Years ago, Steve Harrison, Maribeth Back, and I wrote a paper called "It's Just a Method!" [7]. The idea was that design methods are not important in-and-of themselves, but only because of what they enable the designer to perceive about the situation. In parallel, I would like my students to be able to use theory to inspire themselves and to describe their projects to others. But it's hard to use many of these theories this way. We put our little 10-page barque to sea and theory threatens to fall on our heads as though we floated beneath a calving iceberg.

Meanwhile, none of this gets the field closer to the issues that so trouble me about the status of technology in shaping our views of ourselves as subjects. Students, back to shaping our little research boat! 

Endnotes

1. Harrison, S., Tatar, D., & Sengers, P. (2007, April). The three paradigms of HCI. In Alt. Chi. Session at the SIGCHI Conference on Human Factors in Computing Systems. San Jose, California, USA (pp. 1-18).

2. Bardzell, J. and Bardzell, S. (2013). What is "critical" about critical design? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, New York, NY, USA, 3297-3306.

3. Ferri, G., Bardzell, J., Bardzell, S., & Louraine, S. (2014, June). Analyzing critical designs: categories, distinctions, and canons of exemplars. In Proceedings of the 2014 conference on Designing interactive systems. (pp. 355-364). ACM.

4. Pierce, J., Sengers, P., Hirsch, T., Jenkins, T., Gaver, W., & DiSalvo, C. (2015, April). Expanding and Refining Design and Criticality in HCI. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 2083-2092). ACM.

5. Tatar, D. (2007). The design tensions framework. Human–Computer Interaction, 22(4), 413-451.

6. Harrison, S., Sengers, P., & Tatar, D. (2011). Making epistemological trouble: Third-paradigm HCI as successor science. Interacting with Computers, 23(5), 385-392.

7. Harrison, S., Back, M., & Tatar, D. (2006, June). It's Just a Method!: a pedagogical experiment in interdisciplinary design. In Proceedings of the 6th conference on Designing Interactive systems (pp. 261-270). ACM.




Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.




Design by the yard


Authors: Monica Granfield
Posted: Tue, June 30, 2015 - 9:34:26

Be a yardstick of quality. Some people aren't used to an environment where excellence is expected. — Steve Jobs 

How do you build a common language with which to communicate design goals for a product and measure whether those goals have been met? Organizations are moving faster than ever, leaving little time for long meetings, discussions, and explanations. Ideas, designs, and research findings need to be distilled to their salient points to sell an idea or discovery to the organization. As user experience practitioners, then, how can we set expectations up front and align the organization around a common set of design goals for all ideas, findings, and designs to work toward?

Establish a UX yardstick. A UX yardstick comprises about a dozen base words that encompass what you want a design to achieve. The more basic the words, the better, which is one of the reasons you need about a dozen of them to drive designs. Words like "Clear" or "Procedural" are base words that are less likely to be misinterpreted. Design buzzwords like "Simple" or "Intuitive," however, can mean different things to different people. John Maeda wrote an entire book on what simple means, The Laws of Simplicity. It turns out that describing or achieving simplicity is not so simple after all. Designers understand how the concepts behind these buzzwords surface in a design and what it takes to achieve them. Yet how we define simple and how a CEO might define simple may be worlds apart, which is why we need the most raw and basic descriptive words possible to set the design goals and establish a common language.

The yardstick will not only establish a common language across the organization; it will also serve as a tool with which to set and measure design goals. The words appear on the yardstick from left to right, from the easiest to achieve to the most challenging to achieve. The order of the words helps to determine the order in which the design goals are achieved and weights the level of design effort. A word that appears at the beginning of the yardstick, such as "Clean," might be much easier to achieve than a word further down the yardstick, such as "Effortless." Using the words on the yardstick will set design expectations in a manner that is commonly understood. To build that common understanding, the words should align with the company's product goals and missions, and their meaning and order of appearance should be understood and approved by stakeholders.

Milestones can be set on the yardstick and used as a mechanism to explain a grouping of words. Here I use the paradigm of metals—bronze, silver, gold, and platinum—to explain the level of design effort and the goals that effort encompasses. So, for example, the design goal for "bronze" design coverage is that the design is clean and useful, while "silver" coverage requires that the design is clean, useful, learnable, straightforward, and effective. Metrics can also be attached to the goals to further explain and measure the success of each one. The words build on one another, establishing more goals to reach as you move across the yardstick, grounding and guiding your designs and avoiding churn and disagreement down the line. Returning to the goal words throughout the product development process will help determine whether the design approach is meeting expectations based on facts rather than consensus or opinion. Keeping designs focused and in alignment with company and product goals will result in more successful users and products.
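
As a concrete illustration, here is a minimal sketch in Python of how a yardstick and its milestones might be represented. The bronze and silver groupings follow the examples above; the remaining goal words and the gold and platinum cut-offs are placeholders I invented, not part of the original yardstick.

# A hypothetical UX yardstick: goal words ordered from easiest to hardest
# to achieve, with milestone tiers that group them cumulatively.
YARDSTICK = ["clean", "useful", "learnable", "straightforward", "effective",
             "clear", "procedural", "engaging", "effortless"]

# Each milestone covers every word up to and including the named one.
# Bronze and silver match the text above; gold and platinum are placeholders.
MILESTONES = {"bronze": "useful", "silver": "effective",
              "gold": "engaging", "platinum": "effortless"}

def goals_for(milestone):
    """Return the cumulative design goals a feature must meet for a tier."""
    last = MILESTONES[milestone]
    return YARDSTICK[: YARDSTICK.index(last) + 1]

# A small upgrade might target bronze; a new market-competitive feature
# might need the full yard of design.
for tier in ("bronze", "silver", "gold", "platinum"):
    print(tier, "->", ", ".join(goals_for(tier)))

The point of writing it down this way is simply that coverage is cumulative and ordered, which is what lets the yardstick double as a scoping tool.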

When it comes to project scoping or resourcing, the yardstick can be useful too. Not all features need, or can receive, the same level of attention in each release. The order of the words on the yardstick assists in weighing the level of design effort needed. Some features may be small, while others may have a high impact on the customer base or the market space. A small upgrade feature may only need to encompass the first three goal words on the yardstick, while a new, market-competitive feature may need to reach all the way to the end of the yardstick, encompassing all of the words as the goal of the design—a full yard of design. A feature that needs to achieve a full yard of design will need more time and resources than a feature that only needs half a yard. The UX yardstick can set the pace for the time and resources needed to achieve a commonly understood design goal.

Other inspirations, such as the company's business or product goals, can be placed on the yardstick for reference and guidance. I have placed these in the upper right corner of the yardstick. A quote or a motto for a design organization on the yardstick can also serve to inspire and guide the design intentions for the product.  

Creating a UX yardstick can assist in framing and measuring the goals and expectations of a design, helping to determine whether a feature needs a half or a full yard of design and what it means to achieve that level of design. The design goals on the yardstick should remain fluid and change to keep pace with business and product changes, while always serving as the guide, beacon, and inspiration for design direction.




Monica Granfield

Monica Granfield is a user experience strategist at Go Design LLC.




Customers vs. users


Authors: Jonathan Grudin
Posted: Wed, June 03, 2015 - 9:41:03

Perspectives on handwriting and digital ink in schools.

Going through customers to reach users is a challenge as old as HCI. When computers cost a fortune, acquisition decisions weren’t made by hands-on users. Those responsible believed that they knew what users needed. They were often wrong. Making life worse for designers, the marketers who spoke with customers felt they knew best what users needed and often blocked access to customers and users. Developers were two unreliable jumps from use.

Enterprise settings today haven’t changed much. Marketing and acquisition remain overconfident about their understanding of user needs. However, users now have more options. They can request inexpensive software or customize what is provided. Employees who experience decent consumer software are bolder about communicating their needs. If not listened to, they bring in their own tools for some tasks.

The consumer market has one fewer hurdle—customers are the users. A product organization still deals with distributors who may be overly optimistic about their knowledge of consumers, but this self-corrects—a product sells or it doesn’t. A useful product ignored by distributors may fail, but a poor consumer product isn’t forced on users, as often happens in enterprises. 

Although people, including me, have long praised efforts to get direct feedback from hands-on potential “end-users,” many teams settle for A/B testing or less. In the critical endeavor of educating the world’s billion school-age children, an unusually clear illustration of the challenge has appeared. Resistance to an advantageous change has different sources, including cost, but even where cost is not an issue, a chasm separates customers and users, delaying what seems desirable and probably inevitable.

“The pen is mightier than the keyboard.”

This is the title of an elegant 2014 paper published in Psychological Science by Pamela Mueller of Princeton and Daniel Oppenheimer of UCLA [1]. Three experiments compared the effects of taking lecture notes with a pen or a keyboard. In the first, subsequent memory for factual information was equal, but students taking notes by hand had significantly more recall of conceptual information. Keyboard users had taken more notes, typing large chunks of verbatim lecture text. Those writing by hand couldn’t keep up, so they summarized; this processing appears to aid recollection.

In the second experiment, students were told that verbatim text is not as useful as summaries. Despite this guidance, the results were the same: lower performance and more verbatim text for keyboard users. In the third experiment, students could study their notes prior to testing: Although the typists had more notes, they again did worse.

Sharon Oviatt, author of The Design of Future Educational Interfaces (Routledge, 2013), conducted a wide range of rigorous comparisons in educational settings. She looked at pen and paper, the use of styluses with a tablet, and handwriting with a normal pen and special paper that enables digital recording. The results were dramatic. Students with pen interfaces did significantly better in hypothesis-generation and inferencing tasks. They solved problems better when they had a pen to diagram or to jot down thoughts.

Oviatt and others have observed that digital pens as a keyboard supplement help good students, but somewhat unexpectedly they can dramatically “level the playing field.” Students who think visually, who compensate for shorter memory spans by quickly jotting down notes, or who benefit from rapid trial-and-error, engage more and perform better. How this technology, with others, can save students from “falling through the cracks” could fill another essay.

Mueller and Oviatt discussed their studies in these keynote presentations at WIPTTE 2015, well worth 90 minutes. The authors primarily make a case for handwriting over keyboards for a range of tasks. More surprising at first glance, Oviatt found digital pens outperforming pen and paper. In a Venn diagram task, digital pens led to more sketches and more correct diagrams than pen and paper. Digital ink supports rapid trial and error due to the ease of erasing. Page size, page format (blank, lined, grid), ink colors, and line thicknesses are easily varied, engaging students and supporting task activities. Handwriting recognition enables students to search written and typed notes together. Digital notes are readily shared with collaborators and teachers.

Students can’t use keyboards to write complex algebraic equations (or even practice long division). A keyboard and mouse aren’t great for drawing the layers of a leaf or light going through lenses, placing geographic landmarks on a map, creating detailed historical timelines, or drawing illustrations for a story. SBAC annual state assessments, it was announced, will require digital pens in 2017.

Who would resist including a digital pen with computers for students, the key users of education?

Customers

Last week, a journalist friend mentioned that although he didn’t use a digital pen, his daughter borrows his tablet and uses its pen all the time. So did my daughter before she got her own tablet, which she insisted have a good pen.

I'm never without a traditional pen. I take notes, mark up printed drafts, make sketches, and compile weekly grocery-shopping lists. The journalist takes interview notes with a pencil, which holds up better when paper gets wet. We rarely use our digital pens. I use a digital signature for letters of recommendation; that's about it.

Why would children but not their parents use digital pens? Well, few adults write as many equations as the average child, but it may be more relevant that unlike students, we rarely share handwritten work with others. We were taught that it’s unacceptable or unprofessional to turn in handwritten work—essays are to be typed up, illustrations recreated with a graphics package. Meeting minutes that are taken by hand are retyped before distribution. A whiteboard might be photographed after a meeting, but the notes are then typed. Colleagues who comment on my drafts may initially write on a paper printout, but they then typically type it into comment fields—additional effort for them, easier to read but more difficult for me to contextualize than ink-in-place would be. We consider handwriting second-class and let it deteriorate. “I can’t even read my own handwriting half the time anymore,” said a colleague.

The customers—superintendents or administrators making the purchasing decisions—don’t use a digital pen. Digital ink enthusiasts tell them that they would be more efficient if they did, but these customers are successful professionals, happy with how they work and not planning to drop everything to buy a new device and learn to use a digital pen. “Are you saying I’m inefficient?” Such exhortations can be counterproductive.

Instead, remind such customers that their needs differ from their users’ needs. K-12 is different from most professions: Handwriting is part of the final product for both students and teachers. Students don’t retype class notes, which include equations and sketches. They don’t resort to professional graphics programs. Teachers mark papers by hand; it is more personal and more efficient. Lecturing to an adult audience, I can count on them to follow my slides, but teachers guide student attention by underlining, circling, and drawing arrows.

These customers are unaware that they don’t fully appreciate the world of the users: teachers and students. A superintendent thinks, “Digital pens are a frill, an expense, they’ll get lost or broken. Students should improve their keyboard skills, which is better professional training anyway.” But students can’t type electron dot diagrams or feudal hierarchy structures.

The future seems clear, but these customers are not always wrong about the present. When a student uses a computer once a week or in one class a day, digital ink has less value. Most class notes will be on paper in a binder or folder, so digital notes will be dispersed unless printed. Students have no personal responsibility for the pen. This changes when a student carries a tablet everywhere: to classes, home, on field trips, to work on the bus to an athletic competition. Two years ago I described forces that were aligning behind 1:1 device:student deployments, which are now spreading in public schools. Several new low-cost tablets come with good digital pens. Prices will drop further if pens come to be considered essential.

Who will win?

Steve Jobs railed against digital pens, but he also opposed color displays until he embraced them. Apple is now patenting digital ink technology, feeding rumors about a better pen than the capacitive finger-on-a-stick iPad stylus. Google education evangelists described handwriting as obsolete, but recently Google announced enhanced handwriting recognition. Microsoft stopped most digital ink work when it embraced touch, but is now strongly committed to improving it.

An adverse trend: A comfortable digital pen is too wide to garage in ultra-thin tablets. On the other hand, vendors may realize that 80% of the world’s population does not use the Roman alphabet and finds keyboard writing very inconvenient. In China, tablets are available with digital pens of higher resolution than can be purchased elsewhere. Cursive writing and calligraphy may not return to fashion, but digital pens are likely to. 1975–2025 may become known as “the typing era,” a strange interlude forced on us by technology limitations.

Acceptance may be slow. 1:1 deployments will not be the norm for a few more years. Aging customers who speak on behalf of middle-aged teachers and young students rarely sit through classes and may not learn new tricks. In an era of tight budgets, many don’t grasp the implications of the downward trajectory in infrastructure and technology costs and the upward trajectory in pedagogical approaches that can take advantage of technology that, at last, has the capability and versatility to help.

The greatest challenge

Students have absorbed the message: Professionalism requires typing. Long essays must be typed so overworked teachers can read them more quickly. A job résumé should look good. No one points out that for rapid exchanges, handwriting is often faster and more effective—and the world is moving to brief, targeted communication.

Oviatt asked students whether they would prefer keyboarding or writing for an exam. They chose the keyboard, even when they got significantly better outcomes with a pen! The customer has gained mind control over the users. Education needs a reeducation program.

In concluding, let's pull back to see this education example in the larger framework of the conflict between customers and users. Overall, much is improving. Users have more control over purchases and customization. Users have more access to information to guide their choices. They have more ways to express dissatisfaction. Customers, too, have paths to greater understanding of the users on whose behalf they act. They do not always succeed in finding those paths, as we have seen, so vigilance must be maintained.

Endnote

1. This play on "the pen is mightier than the sword” was independently used almost a decade ago by computer graphics pioneer Andy van Dam, for talks lauding the potential of digital pens or styluses.

Thanks to the many teachers, students, and administrators who have shared their experiences, observations, and classrooms.



Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.




Robots: Can’t live with them, can’t live without them


Authors: Aaron Marcus
Posted: Mon, May 04, 2015 - 9:05:20

Robots, androids/gynoids (female robots), and AI agents are all the rage these days...or all the dread, depending on your views. (In much of the discussion below, I shall refer to them collectively as robots for simplicity, since any disembodied AI agent with sufficient access to the world’s technology could arrange for human or non-human forms to represent itself.)

It seems one can’t avoid seeing a news article, editorial opinion, or popular opinion about them in social media every day, or a new movie appearing about them, now as primary characters, every week.

A recent article reports that China may have the most factory robots in the world by 2017 [1]. Another article reports that humanoid customer-service robots are starting service in Japan [2]. Still another reports that robots will serve next to human "co-workers" in factories [3], perhaps to reassure human workers that they will not be completely replaced. Humanoid robots like the Japanese Honda Asimo have captured people's imagination worldwide.

Recent movies have also focused on robots, androids/gynoids, and non-embodied artificial intelligence agents, turning them into lead characters, as in Her, Chappie, and more recently Ex Machina and Avengers: Age of Ultron. We're a long way from Maria the "maschinenmensch" of Metropolis or Robby the Robot in Forbidden Planet.

Movies have been "humanizing" and "cutifying" robots for years, ever since George Lucas made us laugh at/with the Laurel-and-Hardy characters of R2-D2 and C-3PO in the Star Wars movies of the late 1970s. In 2008's WALL-E, this approach was taken to new heights of "adorableness." Perhaps technology is using Hollywood unconsciously to soften us up, making us think of robots as friendly and non-threatening, allowing us to forget more ominous representatives from the Terminator series, the Total Recall movies, I, Robot, and Avengers: Age of Ultron, and preparing us for the coming wave of robots everywhere.

Although much of moviedom has focused recently on friendly robots, and some news focuses on the convenient use of drones to deliver packages to our homes, other more sinister signs have emerged. A recent National Public Radio (NPR) program focused on ethical issues, including the rise of "killer robots" being developed by militaries in several countries. Should a killer robot incorporated into our armed forces be programmed to have compassion for, and hesitate to fire upon, a young child who has been given a lethal weapon by a female family member, as we saw in American Sniper?

That program also discussed the rise and use of sex robots in Japan, currently a "harmless" entertainment for (mostly) men. (It seems there is no end to men's ability to treat women as objects...and objects as women.) This seems perhaps a less harmful way to make "comfort women" available to human (male) armed forces. The discussion of sex robots did recognize the potential for encouraging asocial or antisocial behavior in people. Some argue that the possibility of sex/emotion robots for those unable to have "normal" relations with people might be helpful, but the discussants debated the value of offering child-aged sex robots to pedophiles. Ethical review, discussion, and potentially new laws seem in order. Such strategies are discussed in [4].

There is likely to be growing interest in and need for sophisticated, human-centered solutions to human-robot interaction (HRI) in all phases of robot deployment. This represents a new age of HCI, in which the "computer" has assumed human form and/or seems to exhibit human intelligence and personality.

I was reminded recently, by another public radio discussion, of the work of the philosopher Martin Heidegger, a discussion about being, consciousness, existence, and thought. As with the earlier radio program, new issues seemed to pop up readily in my mind, and no doubt in others'. Some of these topics have probably been discussed elsewhere in the world, and resources and discussions are no doubt available on the Internet. I have not had time to pursue all of them:

  • Are most robots shown in Hollywood movies a product of Western culture? Does Asimo exhibit characteristics of Japanese culture? Will we see the emergence of cross-cultural similarities and differences of robots, androids, gynoids, and AI agents? What would a “Chinese” robot look like, speak like, and behave like? What would an “Indian” robot be like?

  • Should humans be allowed to marry robots? Should robots be allowed to marry humans? Should robots be allowed to marry each other? 

    Definitions of marriage are being hotly debated these days. If robots are sufficiently “intelligent” to be almost indistinguishable from humans, and humans fall in love with them (as depicted in Her), or vice-versa, ought we not to consider soon what the legal ramifications are for state and federal laws? At the end of Her, Samantha, the AI agent, abandons her human special friend and runs off with other AI agents because they are more intelligent and fun to play with. Is this one of several likely future scenarios?

  • How many spouses should a robot be allowed to have? Although past potentates had many spouses, many today might argue that it is hard enough to manage the relationship with just one. However, AI agents seem much more capable. In Her, the AI agent Samantha admits to having "intimate," "special" emotional relations (and at least attempting a form of physical relationship using a sex surrogate) with 641 people (male? female? both?) other than the human (male) lead character of the film, Theodore Twombly, and "she" talks to more than about 30,000 others. Might one advanced "female" AI marry 642 human beings? As for "mass marriages" of a group of people to another, I think the Catholic Church in approximately the 16th century introduced the concept of Christians (or Christian nuns) being married to Jesus, or to the Church, which today I believe still survives as a concept within the religion.

  • Can there be such a thing as a Jewish robot? Can a robot convert to some religion? Why or why not?

  • Won’t all the philosophers and thinkers of the past (Plato, Aristotle, Machiavelli, Wittgenstein, Arendt, etc., to name a few) have to have their concepts, principles, and conclusions reconsidered in the light of robots asking the very same questions? Listening to a debate about the meaning of the terms of Heidegger caused me to think that Buber’s “I-Thou” concepts, Sartre’s existentialism, and Kantian moral/behavior-theory based on “categorical imperatives” may all have to be reconsidered in the light of non-human intelligence/actors in our society.

  • Can robots inherit our property and other assets? What happens to our human legacies with respect to property and other assets? Can robots be inheritors of trusts and family assets? If corporations in the U.S. are now like persons, will robots be far behind? Can/should they vote? What rights do they have?

  • Where are the senior robots? Most of the human-clad ones seem to look like the young, beautiful ladies on display in Ex Machina, which seems yet another 15- to 35-year-old male techie's asocial, somewhat misogynist fantasy. Do robot women really need to wear 4-inch heels to be able to make eye-to-eye contact with their male overlords?

  • Are robots our "mind-children," destined to replace us? Some are beginning to take this approach today, as indicated in numerous position papers, editorials, and feature articles. Perhaps we should, like mature parents, be grateful for our progeny and hope that they will remember and respect their elders. I am reminded of an Egyptian papyrus from millennia ago that complained about the younger generation not giving enough respect to their seniors. Ah, the spirit of the late comedian Rodney Dangerfield ("I don't get no respect!") may be transferred to the entire human race.

  • What exactly are the basic principles of HRI? Have they already been established in HCI? In general human-human relations? Which should we be aware of and/or worry about?

We are in the middle of experiencing a monumental change in technology and in human thought, communication, and interaction, akin in significance to actually encountering alien beings from other planets (which does not yet seem to have occurred in any widespread form, setting aside the few representatives of the Men in Black series), or to the reality of the split-brain experiments first carried out in 1961, which exposed the possibility of more than one "person" residing inside our skulls.

Stay tuned for more challenges, fun, and games, as we enter the Robot Reality.

Acknowledgment

Portions of this blog are based on a chapter about robots in my forthcoming book HCI and User-Experience Design: Fast Forward to the Past, Present, and Future, Springer Verlag/London, September 2015, which in turn is based on my “Fast Forward” column that appeared in Interactions during 2002-2007.

Endnotes

1. Aeppel, Timothy (2015). "Why China May Have the Most Factory Robots in the World by 2017." Wall Street Journal, 1 April 2015, p. D1.

2. Hongo, Jun (2015). "Robotic Customer Service? In This Japanese Store, That's the Point." Wall Street Journal, 16 April 2015.

3. Hagerty, James R. (2015). "New Robots Designed to Be More Agile and Work Next to Humans: ABB introduces the YuMi robot at a trade fair in Germany." Wall Street Journal, 13 April 2015.

4. Blum, Gabriella, and Wittes, Benjamin (2015). "New Threats Need New Laws." Wall Street Journal, 18 April 2015, p. C3.




Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.




A matter of semantics…


Authors: Richard Anderson
Posted: Tue, April 28, 2015 - 11:07:54

In 2005, I wrote a blog post entitled, “Is ‘user’ the best word?” followed a year later by “Words (and definitions) matter; however…” The debate about the words we use in our field and their meaning has continued since that time, with many of the old arguments being resurrected. For example, regarding the beleaguered term user:

  • Jack Dorsey dropped its use at Square, arguing that it is a rather passive word that "is a massive abstraction away from real problems people feel on a daily basis. No one wants to be thought of as a 'user.'"

  • Margaret Gould Stewart revealed that Facebook sort of banished the term, saying it is “kind of arrogant to think the only reason people exist is to use what you built. They actually have lives, like, outside the experience they have using your product.”

  • Natalie Nixon argued “the next time you begin to ask about your users, stop. Reorient and remind yourself that you are solving problems for people. That subtle shift in language will do wonders for your sense making skills and build a different sensitivity to the challenge at hand.”

  • Eric Baumer et al., in Interactions, argued that studying non-users is as important as studying users and stated that “only two professions refer to their clients as users: designers and drug dealers.”

The preferred alternatives, as a decade ago, are usually person and people or human(s). Baumer et al. argued for consideration of “potentially more descriptive terms such as fan, player, client, audience, patient, customer, employee, hacker, prosumer, conscript, administrator, and so on.” But even such alternatives might have shortcomings. For example, regarding the word customer (also preferred by Dorsey):

I still can’t imagine the term user going away anytime soon. Indeed, some have defended it, as reflected in the following tweets:

Nevertheless, there has been an increase in the volume of objections to the term, reflecting, I think, a recognition of the need to think bigger—to consider and design experiences beyond the digital in order to design the best possible digital experience.

I address such issues beginning on the first day of my teaching of General Assembly's UX Design Immersive course. Students need to know that the terms we use in our field matter and, though not spoken of much above, are sometimes defined differently by different people. This has included two of my instructor colleagues, one of whom called all paper prototype testing "Wizard of Oz" testing and the other of whom called all paper prototype testing "walkthroughs." Say what?!? In my view, neither one of them is correct.

Some of the other areas of debate regarding terms we use include what UX design means and how it differs from UI design (see, for example, “The experience lingo”), what an MVP is (see, for example, “The MVP is NOT about the product”), and whether it is even an adequate concept (see, for example, “Minimum Compelling Product”).

Such debates seem destined to never end, which might possibly be a good thing. As Jared Spool recently tweeted:





Richard Anderson

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.




Digital divides considered harmless


Authors: Jonathan Grudin
Posted: Tue, April 21, 2015 - 7:00:16

The problem with early technology is that you get stuck with all this legacy sh*t.
– Director of Technology at a leading private high school

The impermanence of elevation differentials in seismically active terrain

Educational technologists have expressed concern about disadvantaged students falling farther behind; haves versus have-nots. Education faces challenges, but I assert that digital divides are not the big ones. Paradoxical as it may sound, divides can at times offset the advantages enjoyed by wealthier schools. The uninterrupted increase in the capability and decrease in the price of technology can be a challenge for early adopters. This is especially evident in education now, because primary and secondary public schools have reached a tipping point, but let’s first consider other domains in which apparent advantages proved to be short-lived or illusory.

Before Germany was reunified in 1990, wealthy West Germany had a strong technology infrastructure. It then invested two trillion euros in the former East Germany. Not all of the expenditures were wisely planned, but a strong digital infrastructure was. Soon after, a friend in the West complained that the East had better computational capability than he and his colleagues, who were saddled with older infrastructure and systems.

The exponential rise in capability and decline in cost often rewards late adopters. Who among us, contemplating a discretionary hardware upgrade, hasn’t wondered how much more we could get by waiting a few months? Early adopters spend money for the privilege of debugging a new technology and working out best practices through costly trial-and-error, after which the price drops. The pioneers establish roles and develop work habits that are shaped for systems that are soon surpassed in capability by offerings that may benefit from different approaches. The “have-nots” of yesterday who start today can adopt practices tuned to better, less expensive systems. They benefit from knowing what has and hasn’t worked.

In her 1979 book In the Name of Efficiency, Joan Greenbaum revealed that executives marketing early mainframe computers could not document productivity gains from the use of extremely expensive systems. Businesses paid millions of dollars for a computer that had roughly the computational power of your smartphone. Mainframe vendors were selling prestige: Through the mystique around technology, customers who bought computers impressed their customers, an indirect benefit as long as no one realized that the emperor wore no clothes.

This phenomenon was reflected in Nobel prize-winning economist Robert Solow’s comment in the mid-1980s: “You can see the computer age everywhere but in the productivity statistics.” It was labeled “the productivity paradox.” Two decades of computer use had delivered no apparent economic benefit.

Those who followed did benefit: purchasers of systems that arrived in the late 1980s and 1990s. The systems were much less expensive, and software designers had learned from the ordeals of mainframe users. Productivity gains were measured.

Optimistic technologists define the digital high ground. Their colleagues in marketing build dazzling if not always tangible castles, attracting those who can afford the price tag. Consider home automation. Fifteen years ago, wealthy homeowners built houses around broadband or tore up floors and walls to install it. This set them apart, but how great or long-lasting was their advantage? I soon heard groans about maintenance costs. Only after wireless provided the rest of us with equivalent capability at a small fraction of the cost did services materialize to make access valuable. The early explorers had to decide how long to maintain their legacy systems.

This introduces a second challenge: An aging explorer may not realize when the rapid movement of underlying tectonic plates shifts yesterday’s high ground to tomorrow’s low ground. Late entrants can steal a march on early adopters who are set in their ways.

To have and have not

Digital divides melt away. “Have-nots” become “haves,” and by then more stuff is worth having. In the 1970s, Xerox PARC built personal computers with tremendous capability, 10 years ahead of everyone else. The allure of working with them attracted researchers from minicomputer-oriented labs and elsewhere. It was said that for many years, no researcher left PARC voluntarily. In the 1980s, another research opportunity for the wealthy arose: LISP machines built by Symbolics, LMI, and Xerox, expensive computers with hardware optimized for the programming language favored for artificial intelligence.

There was a clear divide in the research community. And then, in the early 1990s, Moore’s law leveled it. High-volume chip producers Intel and Motorola outpaced low-volume hardware shops. I remember the shock when it was announced that Common LISP ran faster on a Mac than on a LISP machine. LISP machines were doomed. Less predictably, interest in LISP (and AI) declined, perhaps due to a loss of the mystique that masked a failure to deliver measurable benefit. PARC researchers had made landmark contributions, albeit not many that Xerox profited from, but as its researchers shifted to commercial platforms, PARC’s edge faded. A digital divide evaporated.

The same phenomenon unfolded more broadly. Through the 90s, leading industry and university research labs could afford more disk storage, networking, and high-end machines. It made a difference. A decade later, good networking was widely accessible, storage costs plummeted, and someone working at home or in a dorm room with a new moderately priced high-end machine had as good an environment as many an elite lab researcher with a three-year-old machine. Money still enables researchers to explore exotic hardware domains, but for many pursuits, someone with modest resources is not disadvantaged.

The big enchilada

Discussions of haves and have-nots often focus on emerging countries: the challenges of getting power, IT support, and networking. I thought harvested solar energy would solve the problem sooner than it has. Mobile phones arrived first. In many emerging regions, phone access is surprisingly close to universal. Soon all phones will be smart. If mobile, cloud-based computing is the future, those in emerging markets who focus now on exploiting mobile technology could outpace us, just as Germans in the East leapfrogged many in the West.

Promoting accessibility to useful technology is undeniably a good thing. But our optimism about our wonderful inventions can exaggerate our estimates of the harm done to those who don’t rush in. Some of the same people who lament digital divides turn around to decry harmful effects of the over-absorption in technology use around them.

Education: And the first one now will later be last…

For over forty years I didn’t think computers were a great investment for primary or secondary schools, even though my destiny changed in high school when I taught myself to program on a computer at a nearby college that was unused on weekends. It sat in a glass-walled air-conditioned room and had far less computational power than a hand-calculator did twenty years later. I first programmed it to discover twin primes and deal random bridge hands. It was fun, but I saw no vocational path or educational value—the college students weren’t using it, my classmates had other concerns, and maintaining one cost more than a teacher’s salary. Pedagogy was the top priority in K-12. I may have sensed even then that ongoing professional development for teachers was second. As the decades passed and costs declined, having a computer or two around for students to explore seemed fine, but a digital divide in education didn’t seem a threat. Some wealthy schools struggled with expensive, low-capability technology, subsidizing the collective effort to figure out how to make good use of it.

But the times they are a-changing; it was already evident two years ago. New pedagogical approaches are taking hold, often labeled 21st-century skills and tied to the Common Core State Standards: critical thinking, problem-solving, communication and collaboration skills, adaptive learning, project-based learning, and adaptive online-only assessment. I’ve seen this fresh, intelligent reorientation in my daughters’ classes. Students and teachers face a learning curve. Parents can’t help much, not having experienced anything like it. But if it succeeds, the results will be impressive.

Software will be useful in supporting this. Sales of high-functionality devices to schools are rising rapidly. The price of a good tablet has halved in two years and will continue to drop. Public K-12 schools are joining the private schools that are 1:1 (“one to one,” each student carries a device everywhere). When not poorly implemented, 1:1 is transformational. I have attended public school classes with beneficial 1:1 Kindle, iPad, Chromebook, and laptop PC deployments.

Students in the past used school computers when and how a teacher dictated. A student issued a device decides when and how it is used, in negotiation with teachers and parents. The difference in engagement can be remarkable. Third and fourth graders are strikingly adept, middle school seems a sweet spot, and high school students are often out in front of their teachers.

Hardware and software value propositions change dramatically with a 1:1 deployment. When a student can take notes in all classes, at home, and on field trips, good note-taking software is invaluable. A good digital pen shifts from being an easily lost curiosity, when used once a day in a computer lab, to a tool used throughout the day to take notes, write equations, sketch diagrams and timelines, adorn essays with artwork, label maps, and unleash creativity.

Dramatic changes in pedagogy are enabled. Time is saved and collaboration opportunities are created when all work is digital. Teachers who formerly saw a student’s work product now see the work process.

Rapidly dropping prices and recognition of real benefits are bringing 1:1 to public schools in many countries, erasing a digital divide that separated them from elite private schools that could previously afford to issue every student a device. Of course, it requires preparation: professional development for teachers that addresses new pedagogical approaches and the technology, and wireless infrastructure for schools, which is expensive although also declining in cost. Most teachers today have experience with technology. The shift to student responsibility and initiative helps—tech-savvy fourth graders need little assistance when they reach middle school. In fact, students often help teachers get going.

As 1:1 spreads, pioneers risk being left behind. Private-school students are often from computer-using homes where parents have strong preferences for Macs, PCs, or Android. As a result, many private schools adopted a bring-your-own-device (BYOD) policy. Teachers who face classes with myriad operating systems, display sizes, and browser choices are limited in assigning apps, giving advice, and trouble-shooting. They are driven to lowest-common-denominator web-based approaches. In the past, students nevertheless gained experience with technology. A digital divide existed, though the benefits of technology use were not always evident.

Public schools can’t require parents to buy devices. Fewer public school students arrive with strong preferences and almost all are delighted to be given a device. School districts get discounts for quantity purchases of one device, and teachers can do much more with the resulting uniform environment. When preparing to go 1:1, one of the largest U.S. school districts had classes try over a dozen different devices in structured tasks. In summarizing what they learned, the Director of Innovative Learning began, “One of the things that our teachers said, over and over again, is, ‘Don’t give students a different device than you give us.’ That was an Aha! moment for us” [1].

Many public schools that carefully research their options choose this path. The future may be device diversity, but teachers struggle with non-uniformity. They can’t assume students have a good digital pen, which is far more useful when available for sketching, taking notes, writing equations, annotating maps, and so on in every class, at home, and on field trips. Bring Your Own is rarely the way to start learning anything. Instruction in automobile mechanics began with everyone looking at the same car to learn the basic parts and their functions. Allegory is taught by having a class read Animal Farm together, and later encouraging independent reading. Digital technology is no different.

I have visited many public and private 1:1 schools. Some private schools that are BYOD do not realize how much the affordances have shifted. Back when technology capabilities were limited, their students were on the advantaged side of a digital divide. Today, technology can make a bigger difference, and I have seen public school students who benefit more than nearby private school students.

Conclusion

Serious economic inequalities affect healthcare, housing, and nutrition. Before adding technology to the list, consider its unique nature: a steadily flowing fountain of highly promoted but untested novelty that takes time to mature as prices drop. We nod at the concept of the “technology hype cycle.” We are not surprised by productivity paradoxes. Yet some books that belabor technology for its shortcomings and frivolous distractions also decry digital divides: Questionable technologies are not available to everyone! The news is not so bad. Courtesy of Moore’s law and those who are improving technology, divides are being erased faster than they are being created.

Endnote

1. Ryan Imbriale. The smart way to roll out 1-to-1 in a large district. Webinar presented March 11, 2015.

Thanks to Steve Sawyer, John King, and John Newsom for observations, comments, and suggestions.


Posted in: on Tue, April 21, 2015 - 7:00:16

Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.
View All Jonathan Grudin's Posts


Post Comment


No Comments Found


Six lessons for service blueprinting


Authors: Lauren Chapman Ruiz
Posted: Mon, April 20, 2015 - 7:38:39

Learning about customer experience, and how to leverage the service blueprint as a research tool, is essential for researchers and designers who want to stay ahead in a rapidly changing field.

This March, I was lucky enough to facilitate a Thinkshop with 25 designers attending the AIGA Y Design Conference.

We left with some interesting conclusions around how to build and use service blueprints as research tools.

Lessons that emerged from the Thinkshop:

  1. Know when you need a journey map versus a service blueprint. A journey map illustrates the customer experience and provides insight into how a customer typically experiences a service in different contexts over time. It focuses on the primary actions the customer is taking, along with what they feel and think as they move through your service. Journey maps are good for revealing a customer's experience for strategic, visionary decisions, while blueprints are good for reworking a specific process once a strategic decision has been made.

  2. Before you start problem solving, you need to have a clear picture of your current state. It's great to come up with all kinds of exciting ideas, but without clearly knowing the pain points and the service strengths, you risk creating something that falls flat, or could worsen your service experience.

  3. Understanding and mapping the experience of a service employee is just as critical as the customer experience. A blueprint falls flat if it doesn’t include a clear picture of the service provider, along with the backstage people and processes. It’s great to research your customer, but just as important is researching the service provider, who has great influence on the experience of your customer.

  4. The blueprint helps us identify opportunities. Large time gaps between customer actions are always opportunities, moving activities up or down lanes can create big opportunities for innovation, and removing extraneous steps or props can simplify the service experience. Every touchpoint—people, places, props, partners, and processes—has the opportunity to provide value to the customer (and service provider!).

  5. Value isn’t always measurable. While it’s important to identify metrics along your service blueprint—places where you can track for success—there is a level of value that is critical but may not be trackable, and that’s okay. For example, some people go to coffee shops solely for the reason that the shop felt like a place they belonged. This warm feeling of belonging can’t be tracked, but it’s why people keep coming back. This value isn’t always measurable, but it’s often at the core of why customers stick with a service. Clearly articulating value exchanges, and the opportunity of new value, helps everyone see why each touchpoint is important, even if it isn’t measurable.

  6. Just because it can be a service doesn’t mean it should. This one is simple—not everything needs to become a service.

For those of you designing for a service-oriented company, what lessons have you learned? What are critical skills and tools you use each day?


Posted in: on Mon, April 20, 2015 - 7:38:39

Lauren Chapman Ruiz

Lauren Chapman Ruiz is a Senior Interaction Designer at Cooper in San Francisco, CA, and is an adjunct faculty member at CCA.
View All Lauren Chapman Ruiz's Posts


Post Comment




The Facebook “emotion” study: A design perspective would change the conversation


Authors: Deborah Tatar
Posted: Sat, April 18, 2015 - 11:58:05

Jeff Hancock from Cornell gave the opening plenary at the 2015 CSCW and Social Computing conference in Vancouver last month (3/16/15). Jeff was representing and discussing the now infamous “Facebook Emotion Study,” in which a classic social psychology study was conducted on over 600,000 unwitting Facebook members, to investigate the effects of increasing the percentage of positive or negative elements in their news feeds on the use of emotion words in their subsequent posts. He apologized, he explained, and he did so with a pleasing and measured dignity.

But he also made choices that dismissed, or at least downplayed, what to my mind are some of the most important issues and implications. He focused on the research study itself, that is, on whether and how it is ethical to conduct research on unwitting participants. We can share that focus: we can, for example, ask whether Cornell lacked sufficient oversight, or we can concentrate on informed consent. I’m glad that he is doing that.

But that view ignores the elephant in the room. The way we interpret the ethics of the research is grounded in the way we evaluate the ethics of the underlying practice as conducted by Facebook. The study aroused so much public heat (The Guardian, The New York Times, Forbes) in part because it exposed how Facebook operates routinely.

We could argue that researchers are not responsible for the systems that they study and therefore that the underlying ethics of Facebook’s practice are irrelevant to the discussion of research. But that point of view depends on the existence of a very clear separation of concerns. In this case, the main author was an employee of Facebook. We must consider that pleasing Facebook, one of the most powerful sources and sinks of information—and of capital—in the world, was and is a factor in the study and its aftermath.

To my mind, the wrong aspects of the research are grounded in the wrong aspects of the system itself, and while Jeff Hancock is a multitalented, multifaceted guy, an excellent experimentalist, and presumably a searcher after truth, I also think that he gave Facebook a pass in his CSCW presentation. That, by itself, is an indicator of the deeper problem. It is really hard to think critically about an organization that has such untrammeled power. Hancock put the blame on himself, which is an honorable thing to do. He does not deserve more opprobrium and it was pretty brave of him to talk about the topic in public at all. But in some sense the authors of the study are secondary to the set of considerations the rest of us should have.

Regardless of what Hancock said or did not say, Facebook and other large corporations—Google, Apple, Amazon, and so forth—the so-called GAFA companies—make decisions about what people see in unaccountable ways. These decisions are implemented in algorithms. As Marshall and Shipman (“Exploring Ownership and Persistent Value in Facebook”) reported the next day, people do not know that algorithms exist, much less what they contain. Users can imagine that the algorithms are at least impartial, but who actually knows? And even if the algorithms are impartial, we must remember that impartial does not always mean fair or right. It certainly does not mean wise. On the one hand, all of these companies take glory in their power; on the other hand, they fail to claim responsibility for their influence, and their power rests, to a large degree, in that influence.
 
Is what Facebook is doing actually wrong? Not everyone thinks so. Hancock cited Karahalios’s work, indicating that when people learn about what Facebook does routinely they are initially very upset, but after a couple of weeks they realize they want to read news that is important to them. But this is precisely the place where Hancock’s argument disappointed. Instead of scrutinizing this finding, he moved on.

I have recently argued in ACM Interactions that computers, as they are designed and as most people interact with them today, dominate humans through their inability to bend in the way that people often, or even usually, do. I hypothesize that computers put users in a habitually submissive role. On this analysis, the real damage inflicted by the influence of the large, unregulated companies on internet interaction is that the systems they create fail to reflect to us the selves we wish we were. Instead, they reflect to us the people they wish we were: primarily, compliant consumers. And I have raised the possibility that this has epidemiological-scale effects.

This is important from a design perspective. As I said, Hancock is a multifaceted, multitalented guy, but he is not a designer. The design question is always “What could we do differently?” and he neither asked that nor pushed us to ask it. Instead of talking about all the ways that Facebook or a competitor could provide some of its services (perhaps a little compromised) in a better way, the analysis tacitly accepted the trade-off that we cannot have both—on the one side, transparency, honesty, and control, and on the other side, pared-down and selected information. A person cannot do everything in one talk, but this was an important missing piece.

I am not the only person to talk this way about the importance of reconceptualizing technologies such as Facebook or the possible dangers of ignoring the need to do so. In my Interactions article, I cited an intellectual basis for the claims in a wide range of thinkers (Suchman, Turkle, Nass) and would have cited more but for the word limit. Lily Irani’s Turkopticon is an exercise in critical design. Chris Csikszentmihályi’s work at the Media Lab represented a tremendous push-back. The Bardzells have also been central in designing responses.

And then, some of the points I make here—and more—were brought up beautifully in the closing plenary by Zeynep Tufekci of UNC Chapel Hill. Tufekci did not give Facebook a pass. She was forthright in her criticisms. She analyzed the situation from a different intellectual basis, offering a range of compelling examples of issues and problems. Most plaintive was the example of the New Year’s card, created by Facebook, that read “It’s been a great year” and featured the picture of a 7-year-old girl who had died that year. The heartbreaking picture had received a lot of “likes” and so was impartially chosen by the algorithm. The algorithm was written, as all algorithms are, by people, who were not so prescient as to imagine all the situations in which a large number of people might “like” a picture, much less how their assumptions might play out in actual people’s lives. The algorithm was written to operate on information that was, by the terms of the EULA (end-user licensing agreement), given to Facebook. Are we allowed to give our information to Facebook and other companies in this way? After all, we are not allowed to sell ourselves into slavery, although many early immigrants from Ireland and Scotland came to North America this way.

But the considerable agreement between Tufekci’s criticism, mine, and others’ is tremendously important. It exists in the face of a countervailing tendency to think that the design of technology has no ethical implications, indeed no meaning. My on-going effort is to design technologies that, sometimes in small ways, challenge the user relationship with technology and create questions.

After Hancock’s talk, but before Tufekci’s, one of my friends commented that the real threat to Facebook’s success would be another technology that does not sell data. In fact, Ello is such an organization, constructed as a “public benefit company,” obligated to conform to the terms of its charter. It intends to make money through a “freemium” model. According to Sue Halpern of The New York Review of Books (“The Creepy New Wave of the Internet,” November 2014), Ello received 31,000 requests per hour after merely announcing its intention to construct a social networking site that did not collect or sell user data. At 31,000 requests an hour, the trend would have had to continue for a very long time to approach Facebook’s 1.4 billion users, but this level of response suggests that there is a deep hunger for alternatives.

Perhaps le jour de gloire n’est pas encore arrivé (the day of glory has not yet arrived), but, designers, there is a call to arms in this! Thank God for tenure and a commitment to academic free speech.

Aside from the ways that we are, like Esau in the Bible, selling our birthrights for a mess o’ pottage (that is, selling our information and ultimately our freedom to GAFA companies for questionable reward), there is another issue of great concern to me: the almost complete inutility of the ACM Code of Ethics for addressing the ethical dilemmas of computer scientists in the current moment. I asked Jeff Hancock about this, and he said that “it had been discussed” in a workshop held the previous day about ethics and research. I will look forward to hearing more about that, but it seemed clear that because his focus is primarily on the narrower issue of research, which was the matter brought up repeatedly in the press coverage and public discourse, he is more concerned with new U.S. Institutional Review Board and Health and Human Services regulations than with the position of the ACM. But CSCW, and, for that matter, Interactions, are ACM products. ACM members should be concerned with the code of ethics.


Posted in: on Sat, April 18, 2015 - 11:58:05

Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.
View All Deborah Tatar's Posts


Post Comment





Designing the cognitive future, part VIII: Creativity


Authors: Juan Hourcade
Posted: Wed, April 08, 2015 - 9:44:36

In the past decade there has been increasing interest in the HCI community in the topic of creativity. While it is not a process at the same basic level as perception or attention, creativity is often listed as a topic in cognition, and it is the focus of this post.

Creativity is not easy to define. Reading through several definitions, I liked the one by Zeng et al. who defined it as “the goal-oriented individual/team cognitive process that results in a product (idea, solution, service, etc.) that, being judged as novel and appropriate, evokes people's intention to purchase, adopt, use, and appreciate it” [1].

If we want to enhance creativity, it is worth learning a bit about the factors that appear to affect creativity. The research literature points at two factors: diversifying experiences [2] and fluid intelligence mediated by task switching [3].

In terms of diversifying experiences, there is anecdotal evidence that many highly creative people grew up with diverse experiences, for example, speaking many languages, living in many countries, or having to cross cultures [2]. It makes sense that the ability to have multiple perspectives on a topic or experience would help with creativity. There is also evidence that people can be more creative in the short term right after experiencing a situation that defies expectations [2]. Perhaps throwing our neuronal systems off balance makes it more likely that a new path will be traveled in our brains.

The other factor that seems to make creativity more likely is fluid intelligence, the ability to solve problems in novel situations. More specifically, one factor related to fluid intelligence that appears to make a difference is task switching, the ability to switch attention between tasks (or approaches) as needed [3]. 

So how are technologies affecting creativity, and how might technologies affect creativity in the future?

When it comes to diversifying experiences, interactive technologies can certainly help provide more of those. We can already experience media from all sorts of sources in all sorts of styles. These can provide us with much broader backgrounds full of different perspectives. They can also give us convenient access to inspirational examples. In addition, it is easier than ever to find perspectives and points of view that may challenge ours, modifying our neuronal ensembles. 

Another way technologies may provide us with even richer perspectives is by enabling us to interact remotely with a wide variety of people. Just as online multiplayer games enable gamers from around the world to form ad-hoc teams, similar teams could be formed for other purposes. Having truly interdisciplinary, multicultural teams come together after a few clicks and keystrokes could potentially make it much easier to gain new perspectives on problems. Similar technologies could also make it easier to reach groups of diverse people who could quickly provide feedback on ideas to see which ones are worth pursuing.

Interactive technologies could also help with the task switching necessary to consider several alternative solutions to problems. Technologies could, for example, enable the quick generation of alternatives, or may enable quicker shifting by easily keeping track of ideas. Tools can also make it easier to express ideas that are in our heads. High-quality design tools are an example. They simply give us a much bigger palette and toolbox. These design tools can be complemented by other tools that can make these ideas concrete, such as 3D printers. Holographic displays could also be very helpful in this respect. 

Could interactive technologies get in the way of creativity? It’s possible. If the sources of experiences become standardized, it could affect the ability to gain different perspectives. If we have technologies deliver experiences that keep us in our comfort zones, this is also likely to reduce our ability to be creative. If most people use the exact same tools to pursue creative endeavors, then we are more likely to come up with similar ideas.

So what could the future of creativity hold? The ideal would include readily available diverse experiences, especially those that challenge our views and help us think differently. It could also include powerful, personalized tools that help us make our ideas concrete, discover alternatives, and obtain quick feedback, perhaps doing so with diverse groups of people. 

What would you like the future of creativity to look like?

Endnotes

1. Zeng, L., Proctor, R. W., & Salvendy, G. (2011). Can Traditional Divergent Thinking Tests Be Trusted in Measuring and Predicting Real-World Creativity? Creativity Research Journal, 23(1), 24–37. http://doi.org/10.1080/10400419.2011.545713

2. Ritter, S. M., Damian, R. I., Simonton, D. K., van Baaren, R. B., Strick, M., Derks, J., & Dijksterhuis, A. (2012). Diversifying experiences enhance cognitive flexibility. Journal of Experimental Social Psychology, 48(4), 961–964. http://doi.org/10.1016/j.jesp.2012.02.009

3. Nusbaum, E. C., & Silvia, P. J. (2011). Are intelligence and creativity really so different?: Fluid intelligence, executive processes, and strategy use in divergent thinking. Intelligence, 39(1), 36–45. http://doi.org/10.1016/j.intell.2010.11.002



Posted in: on Wed, April 08, 2015 - 9:44:36

Juan Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.
View All Juan Hourcade's Posts


Post Comment


No Comments Found


The future of work


Authors: Jonathan Grudin
Posted: Tue, March 24, 2015 - 11:23:12

Some researchers and pundits predict that automation will bring widespread unemployment. This is unlikely. The shift of some labor to technology has been in progress for decades, but in the past 5 years the United States added almost 12 million jobs. Where is the automation effect? What will materialize to shift us from fast forward to permanent reverse gear? What drives this fear?

In an earlier post, I mentioned an invitation to a debate on this topic after being among the optimists in a Pew Research Center survey. Pew’s respondents were divided. 48% believe technology will increase unemployment; 52% believe employment will increase. I was quoted: “When the world population was a few hundred million people there were hundreds of millions of jobs. Although there have always been unemployed people, when we reached a few billion people there were billions of jobs. There is no shortage of things that need to be done and that will not change.”

Four principal speakers at the Churchill Club forum on Technology and the Future of Work were eminent economists, including a former chair of the White House Council of Economic Advisors and a former director of the White House National Economic Council. The other four were technologists. A Singularity University representative insisted that within five years, all work would be done by intelligent machines. Jobs in China, he said, would be the first to go. The President of SRI said that it would take 15 years for all of us to be out of work. A third technologist exclaimed that the impact of technology now was like nothing he had ever seen.

“That’s because you didn’t live in the 19th century,” an economist said dryly.

Off-balance, the technologist responded, “Neither did you!”

“No, but I’ve read about it.”

The best guess at tomorrow’s weather: the same as today’s

Technologies eradicated occupations, yet the workforce grew. The share of Americans employed in agriculture fell from 75% to about 2% in a little over a century—and that was after the industrial revolution transformed the Western world. Hundreds of thousands of telephone operators were replaced by technology in the late 20th century. The economists at the forum did not anticipate imminent doom, but some expressed reservations about the nature of the jobs that will be available and concern over growing disparities in income and wealth. Will workers who lose jobs retool as rapidly as in the past? We don’t know. Shifting from farming to manufacturing required major changes in behavior and family organization.

At one time, positively rosy views of automation-induced leisure were common. Buckminster Fuller predicted that one person in ten thousand would be able to produce enough to support ten thousand people. Since one person in ten thousand would want to work, he surmised, no one would have to work unless they wanted to. Arthur C. Clarke, another writer, inventor, and futurist, had similar views.

Alas, the evidence suggests otherwise. Hunter-gatherer societies were relatively egalitarian as they struggled to survive. Agricultural self-sufficiency and the ability to satisfy basic necessities did not lead to leisure—hierarchies of privilege sprang up and funneled most resources to the top 1%. Growing income inequality in the United States may be a regression to the norm [1]. The 1% profit by finding ways to keep the 99% producing: There are armies to equip and pyramids to build.

Let’s bet against a job-loss pandemic. Despite the current enthusiasm for deep machine learning, the Singularity isn’t imminent. We hear more complaints about systemic malfunctions than breathless reports of technological adroitness. Deep Blue was an impressive one-trick pony focused on a constrained problem: a handful of objects governed by a limited number of rules. It was dismantled. The Watson search engine retrieves facts, which is lovely, but it’s our job to use facts. Four years after the Jeopardy victory, investors are pessimistic about IBM.

In the 1970s, it was widely predicted that programming jobs such as mine at the time would be automated away. This was music to management’s ears—programmers were notoriously difficult to manage and insisted on high wages, which they tended not to use for haircuts, suits, and country club memberships. Tools improved and programmers became software engineers, but employment rose.

The web brought more predictions of job loss—who would need software developers when anyone could create a web page? But markets appeared for myriad new products, as did hundreds of millions of web sites; developers didn’t go away. Anyone can create a site, but it requires an investment of time to learn a skill that will be exercised infrequently and needs to be maintained. It is more cost-effective to hire someone with a strong sense of design who can do a sophisticated job quickly. The new profession of web site designer flourished.

I visited my niece, a prosperous organic farmer. She has solar panels that supply a battery that maintains an electric fence. In the distance, solar panels power a neighbor’s well pump that irrigates a field. Solar already employs twice as many Americans as coal mining. That is employment created by technology, and it is early days; the Internet of Things will bring jobs we can’t imagine.

Work that is not inherently technical benefits from technology. Almost any skill that could be considered as a career—cooking fish, growing orchids, professional shopping, teaching tennis—can be acquired more rapidly through resources on the web. Reaching a level of proficiency for which people will pay takes less time than in the past.

When I was 18, I knew the one “right way” to hit each tennis stroke and was hired to teach in a city park system. Twenty years later, I attended a weekend morale event at a tennis camp. There was no longer one right way to teach. Technology had changed coaching. My strokes were videotaped. The instructor identified all weaknesses. (Today, fifteen years later, machine vision might identify the problems.) The coach’s task shifted: She decides which of the five things I’m doing wrong to focus on first. She gauges how many I’m capable of taking on at once. A coach sizes up your personality and potential. Should she say “good job!” to avoid discouraging you or “try again, hold it more level!” to keep you going? A good instructor knows more about strokes and also understands motivation. Jobs are there. A weekend warrior could go online, have a friend videotape strokes, and perhaps find analytic software. How many will bother?

We all have agents

An employee of the science fiction author Robert Heinlein told me that Heinlein was methodical. He and his wife researched the planet to find the perfect place to live and settled on a plot between Santa Cruz and San Francisco. The only drawback was seismic. The Heinleins designed a house to “ride earthquakes like a ship rides the sea,” anchoring heavy furniture deep in the foundation.

When Heinlein worked on a book, he stayed in his room and gained weight. To avoid obesity he lost weight before starting a book, ending up back at normal. To the dismay of his agent, Heinlein was not that fond of writing and his royalties covered his needs, my friend said. Heinlein’s agent determined that his job was to find and entice the couple with expensive consumer items, to convince Heinlein to undertake the ordeal of writing another book.

We all have agents who benefit from our labor by convincing us to work for things we may not need. One can have reservations about consumerism, but the ingenuity to devise and market objects is a notable human skill.

Good jobs

“Will the new jobs be good jobs?” What does this mean? Were hunting, gathering, and farming good jobs? Working on an assembly line or as a desk clerk? Is an assistant professor’s six-year ordeal a good job? “Good” generally means high-wage; wages are set more by political and economic forces than by automation. The inflation-adjusted minimum wage in the U.S. peaked in 1968, almost 50% higher than today—wage growth will raise the perceived quality of many jobs. Globalization and competition can drive down wages and are enabled by technology, but are not consequences of automation. 

Despite the rapidly falling U.S. unemployment rate, there is uneasiness. Youth unemployment is high, not all of the new jobs are full-time, wage growth is rising more slowly than anticipated, income disparity is increasing. And it is reasonable to ask, “Could another shockingly sudden severe recession appear?”

I see students in high school and college who have none of the optimism my cohort did that good jobs will follow graduation. Anxiety is high, fed by reports they won’t live as well as their parents. Some of this is the reduction in support for education: Student debt was rare in my era. Because my friends and I were confident in our future prospects, we could take a variety of classes and explore things that were interesting, whether or not they had obvious vocational impact. We spent time talking and thinking. Well, and partying. Maybe mostly partying. Anyway, students I encounter today express a greater need to focus narrowly on acquiring job-related skills. There is still some partying. Maybe not as much.

Automation

Growth in U.S. employment is the strongest since the Internet bubble years. Productivity growth has tapered off—the opposite of what would be expected were automation kicking in. With little objective evidence of automation-induced joblessness, it is natural to wonder who is served by the discourse around employment insecurity.

Two beneficiaries come to mind. The top of the hierarchy benefits if the rest of us worry about unemployment and accept lower wages. Employers benefit when graduates are insecure. Also, technologists benefit in different ways. The reality may be that productivity gains from computerization have leveled off, but the perception that technology is having a big impact could be to the psychological and economic benefit of technologists. Who would want to lay off those writing code so powerful it will put everyone else out of work?

Endnote

1. In the conclusion to the formidable 1491, Charles Mann suggests how the young United States resisted this norm.

Thanks to John King for discussions on these issues.



Posted in: on Tue, March 24, 2015 - 11:23:12

Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.
View All Jonathan Grudin's Posts


Post Comment


No Comments Found


Humanity’s dashboard


Authors: Aaron Marcus
Posted: Wed, March 18, 2015 - 10:33:50


The Doomsday Clock. Source: http://imgkid.com/doomsday-clock.shtml

Speaking of the Apple Watch or iWatch, as it is called informally...Time is running out...

Many decades ago, people around the world learned of the Doomsday Clock (maintained since 1947 by the Bulletin of the Atomic Scientists, and referred to indirectly in the movie Dr. Strangelove, which spoke of the Doomsday Machine). The clock, poised at about three minutes to midnight, summarized estimates of how close humanity was to a global nuclear holocaust.

Thankfully (?) we have other challenges to consider these days in addition to nuclear proliferation and the threat of nuclear war. Alas, people do not think enough about these challenges in a way that might move them to inform themselves more and to lead them to action.

Of course, one can debate forever about what are the worst calamities awaiting human civilization. Don’t forget that in the 2008 remake of the science-fiction classic The Day the Earth Stood Still, Klaatu came to Earth not to save us but to save the planet from its human population, because humankind had demonstrated to a civilization beyond the earth that it was simply incapable of managing the earth in a beneficial way. Let’s hope that we can do better than the miserable scorecard implied by the movie.

Let’s also avoid unproductive name-calling and blame, and simply concentrate on “What do we have here?” and “What can we do to improve things?” If you asked me for a quick list of major challenges/issues, I might list the following, not necessarily in order of urgency or impending doom:

Solar power: To provide sustainable energy, we must find a way to make use of the abundant energy given to us daily by the sun. There is simply no excuse. Yes, big oil companies may experience some technical difficulties, but the long-term benefits are indisputable. A project I worked on in 1978 focused my attention on visualizing global energy interdependence. We have made some positive steps, but we are in need of much more progress.

Desalination of the oceans to provide drinking water: Another project we did for SAP a few years ago focused our attention on the crisis facing all megacities of the earth: they are depleting their underground water tables and will run out of drinking water in a few decades unless something drastically changes. We have abundant oceans that cover about 70% of the earth. There is simply no excuse. We must figure out how to create easy, simple, inexpensive, abundant desalination processes that can be undertaken wherever feasible.

Race/religion/gender equality: Recent events in the USA have uncovered racism that was thought to have been minimized in past decades. The USA is not the only place to suffer from persecution and punishment of people because of race, religion, and gender orientation. News from many other countries tells of horrible acts, unfair laws, and lack of justice. There is simply no excuse. We must do better.

Nuclear war: The Doomsday countdown may have changed, but it has not disappeared. Negotiations with Iran and with other countries bring our precarious hold on global safety to our attention. There is simply no excuse. We must do better.

Education for all: One of the best solutions for long-term equality, prosperity, and peace is education, for all. Yet UNICEF estimates that about 100 million children have no access at all to education, more than half of them girls. People cannot get access to even the rudiments of a suitable education. There is no excuse. We must do better.

Perhaps you can think of 1-2 others that belong at the top of the list for multiple Doomsday Clocks. 

Which brings me to the iWatch. Like the iPod, iPad, and iSomethingElse, all of these products focus on the individual consumer—on the hedonistic pleasure of that individual’s life. Apple frequently touts a display for its new watch with many colorful circles/signs/icons/symbols, almost like a toy, or a reference to some Happy Birthday party with colorful balloons. All well and good. Yet there is a different approach possible.

Our work at AM+A for the past five years on persuasion design and our Machine projects (to be documented in mid 2015 in Mobile Persuasion Design from Springer UK) taught us the value of dashboards as a key ingredient (together with journey maps or roadmaps to clarify where we’ve come from and where we are going, focused social networks of family and key friends, focused just-in-time tips and advice, and incentives) to keep people informed and motivated for their journey...to behavior change.

And what journey am I considering: humanity’s journey through history and through our very own lives, each day, for all of us. I think we need to be reminded more of our Membership in Humanity.

I propose that we don’t need so much an iWatch as a WeWatch... with a Dashboard of Humanity that would constantly be the default display on our wrist-top device (of course, with the local time discreetly and legibly presented) to remind us: Where are we? How are we doing? How much time does humanity have left?

Perhaps it would be a star chart with five to seven rays, or just five to seven bars of a bar chart. The position, length, direction, and colors would become so familiar to us that we could “read it” easily, and detect immediately if something ominous or propitious had recently occurred, and it would be the gateway to exploring all the underlying details that might motivate us and move us forward on our own personal journeys. 
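To make that more concrete, here is one speculative sketch in Python; it is not a design for any actual product, and the indicator names and scores are invented placeholders rather than real data. It draws five hypothetical indicators as the kind of small bar chart a watch face might show at a glance.

    import matplotlib.pyplot as plt

    # Hypothetical 0-100 "status" scores (invented placeholders, not real data);
    # higher means closer to resolved.
    indicators = {
        "Solar energy": 42,
        "Drinking water": 35,
        "Equality": 48,
        "Nuclear safety": 55,
        "Education for all": 60,
    }

    fig, ax = plt.subplots(figsize=(3, 2))  # roughly watch-face proportions
    ax.barh(list(indicators), list(indicators.values()))
    ax.set_xlim(0, 100)
    ax.set_xlabel("Status (0 = dire, 100 = resolved)")
    ax.set_title("Dashboard of Humanity (mock data)")
    fig.tight_layout()
    plt.show()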

Perhaps it would be something that could be made available by the manufacturers for all platforms, all price ranges, from the most humble to Apple’s new $17,500 luxury version of its watch for the “0.1 Percent Users.”

Yes, showing the time is important. But so is showing the future of humanity for all of us on earth. 

Yes, the latest stock-market report is important. But so is showing the future of humanity for all of us on earth.

Something to consider in thinking about the next “killer” app for humanity...as Apple prepares to release its new watch.



Posted in: on Wed, March 18, 2015 - 10:33:50

Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.
View All Aaron Marcus's Posts


Post Comment


No Comments Found


Designing the cognitive future, part VII: Metacognition


Authors: Juan Hourcade
Posted: Fri, March 06, 2015 - 11:49:00

In this post, I discuss my views on designing the future of metacognition. The definition of metacognition I use in the post refers to the monitoring and control of other cognitive processes. Monitoring helps us reflect on what is happening and what happened, while control enables us to regulate cognitive processes. While monitoring and control can happen automatically, here I focus on explicit metacognition: our ability to reflect on and justify our behavior based on the processes underlying it. 

It turns out that it is very difficult to report on cognitive processes because we have little direct conscious access to them. However, we do have access to their outcomes. In fact, we experience actions and their consequences closer in subjective than in objective time (this phenomenon is called intentional binding). But this only happens when our actions are voluntary. This phenomenon helps us experience agency and feel responsibility for our actions.

Psychologists are increasingly arguing that metacognition is most useful to help us better collaborate. One of the hints that this may be the case comes from studies suggesting that we tend to be better at recognizing the causes of behavior in others than in ourselves. In addition, there is evidence that our metacognitive abilities can improve by working with others, and that collaborative decisions (at least among people with similar abilities) tend to be superior to individual decisions, given shared goals.

Think of how reflecting with someone else about our behavior, decisions, and perceptions of the world can help us make better decisions in the future. For example, a friend can help us reflect about the possible outcomes of our decisions and provide different points of view. It is through contact with others that we can learn, for example, about cultural norms for decision-making. 

These discussions can also be useful for collaborative decision-making. Being able to communicate about our goals, abilities, shortcomings, knowledge, and values can help us better work together with others. Understanding the same things about others (a.k.a. theory of mind) can take us one step further. It can get us to understand collective versions of goals, abilities, shortcomings, knowledge, and values.

Metacognitive processes can help us make joint decisions that are better than individual ones, develop more accurate models of the world, and improve our decision-making processes. Through these, we can get better at resolving conflict with others, correcting our mistakes, and regulating our emotions.

So what are some roles that technologies are playing and could play with respect to metacognition? 

One obvious way technology is helping and can continue to help is in enabling us to record our decisions and our rationale for these. Rather than having to recall these, we can review them, analyze them, perhaps even chart them to better understand the areas in which we are doing well and the ones where we could be making better decisions. 
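As a minimal illustration of what such a record might look like, here is a small Python sketch; the fields, categories, and entries are invented for illustration and are not drawn from any particular tool. It logs each decision with its rationale, lets an outcome be filled in later, and produces a simple summary by category that could be reviewed or charted.

    from collections import Counter
    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class Decision:
        what: str
        rationale: str
        category: str
        made_on: date = field(default_factory=date.today)
        outcome: Optional[str] = None  # filled in later, e.g., "good" or "bad"

    # Illustrative entries (invented, not real data).
    journal = [
        Decision("Accepted an extra reviewing assignment", "Service to the community", "work", outcome="good"),
        Decision("Skipped the backup before an upgrade", "It seemed low risk", "tech", outcome="bad"),
    ]

    # A chartable summary: how outcomes break down by category of decision.
    summary = Counter((d.category, d.outcome) for d in journal if d.outcome)
    for (category, outcome), count in sorted(summary.items()):
        print(f"{category}: {outcome} x{count}")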

Technologies can also be useful in helping us gain a third-person view of our own behavior. Video modeling, for example, is widely used with special needs populations, such as children diagnosed with autism, to help them better reflect on behavior. Similar tools could be used to reflect on group processes.

Communication technologies can also be helpful. They could help us have richer face-to-face discussions, as well as have quicker access to more people with whom to talk about our behavior and decision-making. Expanding the richness of these communications and the number and diversity of people we are likely to reach could help us improve our metacognitive abilities. 

The most exciting and controversial developments are likely to come from technologies that help us better understand and control our cognitive processes. One approach that is already being used in the human-computer interaction community is neurofeedback. Current solutions use electroencephalogram technology to obtain information on brain activity. Researchers have used these, for example, to help people relax by making them aware of the level of activity in their brains. 
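For a sense of the signal processing behind such relaxation-oriented neurofeedback, here is a minimal sketch that assumes a single synthetic EEG-like channel rather than data from any real headset: it estimates alpha-band (8-12 Hz) power, which these displays often treat as a rough proxy for a calm state.

    import numpy as np
    from scipy.signal import welch

    def alpha_band_power(signal, fs=256, band=(8.0, 12.0)):
        # Estimate power in the given frequency band using Welch's method.
        freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))

    # Synthetic stand-in for real EEG: a 10 Hz (alpha) rhythm buried in noise.
    fs = 256
    t = np.arange(0, 10, 1 / fs)
    eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)

    power = alpha_band_power(eeg, fs=fs)
    print(f"Alpha-band power: {power:.2e} (a feedback display might map this to a relaxation score)")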

An up-and-coming alternative is near-infrared spectroscopy, which can be used to scan activity in the brain’s cortex. If there are cognitive processes that occur in the cortex, they could be monitored. These monitoring technologies could also be used to share this information with others. They may be useful, for example, in working with a therapist. At the same time there could be serious privacy issues. Could your employer require you to wear a device to know when you are not paying attention?

When it comes to exerting control over cognitive processes, one option currently being explored is transcranial ultrasonic technology, which some researchers think could eventually be used to activate specific regions of the human brain (e.g., see William Tyler’s work at Arizona State). This again poses significant ethical challenges. Would you allow someone else to make decisions about stimulating your brain?

I would largely prefer expanding on what already works (e.g., better communication with others), although I find advanced monitoring and control technologies intriguing, especially if I could keep information private and be in full control of any brain stimulation.

How would you design the future of metacognition? Would you want to have more information about and control over your cognitive processes? Or would you prefer technologies that help you do more of what already works (i.e., communicating with others)? Would you want to do both?

This post was inspired in part by this article: Frith, C.D. The role of metacognition in human social interactions. Philosophical Transactions of the Royal Society B: Biological Sciences 367,1599 (2012), 2213–2223.



Posted in: on Fri, March 06, 2015 - 11:49:00

Juan Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.
View All Juan Hourcade's Posts


Post Comment


No Comments Found


Ai Wei Wei on Alcatraz


Authors: Deborah Tatar
Posted: Tue, March 03, 2015 - 12:07:35

When I saw that the Chinese dissident artist Ai Wei Wei was to have a show on Alcatraz, I knew that I had to attend. Not only had I written three prior blog posts about his work and its relationship to criticism and freedom through design, but there was something immediately compelling about the idea of art from an artist that had been imprisoned situated in a notorious former prison. 

When I started these blog entries for ACM Interactions, I called them “the background-foreground playground.” My plan was to talk a lot about the importance of what we treat as background and as foreground in design, and how bringing forward something that has previously been backgrounded can be a crucial contribution, both in design and in design research. And, of course, I thought that I would eventually get around to thoughts about the importance and mystery of background. Yeah, well, it hasn’t worked quite that way. Only if you look at my blog posts from exactly the right perspective can you see these interests. 

So I thought, “Ai Wei Wei on Alcatraz! Of course!”

But then, my thought stopped short. 

Alcatraz is, of course, an island in San Francisco Bay with a prison. But it is no Robben Island or Chateau d’If. Alcatraz’s most famous prisoner was the gangster Al Capone. Al Capone was not a prisoner of conscience, the victim of desperate circumstance, a freedom fighter, or a casualty of the brutal use of power. Al Capone was Not a Nice Man. Mothers felt their children were safer when he was locked up in an impregnable fortress. 

So, the relationship between Alcatraz and Ai Wei Wei is not actually obvious. What could the home of Al Capone have to do with prisoners of conscience? Perhaps the show was just a bad idea.

When I finally got my ticket and boarded the ferry, I was anxious to see what a great, but still fallible artist did with the setting. How did he make felt meaning?

The first experience, unhappily, consisted of announcements—on the ferry, in brochures, in a video, and in a ranger talk—that over and over told visitors that the Ai Wei Wei exhibit was not about imprisonment but rather about the relationship between freedom and imprisonment. This fit with the National Park Service slogan, “Alcatraz: More than a prison,” but it was not an auspicious start. The announcements struck me as sapping energy from the setting. There is plenty of dramatic natural beauty around the San Francisco Bay. Why visit Alcatraz if not to experience a frisson of horror? I wondered whether Ai Wei Wei even knew that these announcements framed the experience. Although he is not currently in jail, the Chinese government has not restored his passport, and he was not able to visit the site himself.

But maybe the tedious announcements serve some purpose. Perhaps people don’t like surprise. And surprise is what we get. 


With Wind greets visitors head on, just above eye-level.

After hiking the 300+ feet upwards from the ferry, we walk into a large barren concrete room, formerly a prison work room. We are greeted by the huge head of a Chinese dragon kite. Its body follows, consisting of a sequence of round silk panels hanging from the ceiling, snaking perhaps a hundred feet back into the room. Each panel is gorgeous and different. 

Eventually one notices that, integrated into some of the panels, are embedded quotations, in English, from prisoners of conscience. Many of these quotations are quite hard to read because they are embedded in other elaborate patterns. I have included a picture of one of the easier ones, which is, interestingly, from one of the few Americans, North Americans, or even Western-Hemispherians in the set. The cumulative impact of these panels is, for me, almost like a line of people, tied together but each with a unique and uniquely represented life, some glimpses of which are seen through the occasional words and names. We wonder what unknown and unspoken struggles are signified by the panels without words.


With Wind extends backwards into the space. NB: The drab cement ceiling and floor look polished in this snapshot, but they are not.

Around are other Chinese kites, many not unlike those that frequently fly on the greensward of the San Francisco Marina just across the Bay, but far more luxurious and primarily made out of silk, festooned with pictures of birds, mostly owls. I don’t think that the dragon kite could fly, but with its low-hanging head, it also echoes another tradition that has come to San Francisco from China: the dragons that twist and wind along the narrow streets in the parade every Lunar New Year, amidst the popping and smoke of firecrackers. The dominant feeling is celebration, freedom, joy, light. The piece speaks of transcendence.

Ai Wei Wei is concerned with prisoners of conscience. Because he is on Alcatraz, he does not have to say that transcendence is hard, achieved rarely, perhaps just a hope or a dream. The cement of the floor, ceiling and walls and the rusted, decrepit sink in the corner tell us that. The kites would be treacle in another setting, but Alcatraz makes them commentary. He establishes an altogether different set of background thoughts than the specter of Al Capone. 


An embedded message appears every few panels.

Three other pieces are worth mentioning. Just behind the kite room is another huge room, with 16 columns between the cement floor and the cement ceiling. On the cement floor lies a “graveyard” of images of prisoners of conscience, each consisting of a portrait made of Legos. As you walk amongst these Lego portraits, they are very hard to see. You have to actively struggle to see that which is in front of you. But if you photograph them, say, with your iPhone, all ambiguity falls away and only the clarity of the image is preserved. 

Douglas Coupland made a similar point in some pieces in his recent Vancouver show: that which is complex in real life is simple when seen through the lens of mediation. But we could not see Coupland’s pieces except through mediation. The Coupland show caused us to question what we consider real. 

Here the blurring is a different design decision addressing a different aspect of being. It is as if to say that the least that we, the viewers, can do is make the effort to see. Our view of prisoners of conscience is constructed.

Later, we have the opportunity to see the piece from the prison guards’ gallery. Because we are farther away, the faces are clearer, but we see them through bars. They are at once more clear and less personal. The prisoners are accounted for. (Batya Friedman’s “The Watcher and the Watched” [1] addresses this latter experience within CHI, as does other work harking back to Bentham’s Panopticon.)

The other two pieces that I will mention seem to me to move us into deeper engagement. One is explicit. There is an array of postcards, each addressed to a different living prisoner. People pick a postcard and sit in the prison cafeteria at long benches, each trying to imagine what to say to a real prisoner of conscience. (Apart from sentiment, this has the pragmatic function of showing governments that these people are not forgotten.)

The last piece engages more explicitly with the prison location, but by now (if your reaction resembles mine), we are far beyond “othering,” the frisson of fear and interest we might feel with respect to a gangster, his gat, his moll, his rolling walk and big cigar. We are ready to be sad, and real, and from a place of real sadness, respect principled resistance. 


A cell at Alcatraz, playing the words of Iranian poet Ahmad Shamlu.

Twelve to fifteen cells in the oldest, smallest, dampest, most decrepit wing of the main cell block are associated with audio clips. The slight contextual change of entering the cell now puts us into the experience of prison. No one would voluntarily spend time in these dreadful spaces. Yet, when entering the cell, we are able to hear music and poetry associated with a prisoner of conscience, as if in the dream space that they might have created. Toxica’s grating sounds and anarchic hostility bring vivid people and events of creation into the space of its particular cell. So, too, does Pussy Riot. So too does the poetry of Victor Jara or the songs of the Robben Island singers. Each recording makes a place in the way that Steve Harrison and I described in "People, Events, Loci" and that Jonas Fritsch talks about in his dissertation on affect and technology [2]. Each space has meaning, and the meaning is stronger because music, like life, moves quickly from experience to memory.

This is the most expansive piece in the sense that it includes words and music from around the world and across time (although most of the singers are dead or the musical groups defunct). The Czech Republic is, for example, represented both in resistance to the Soviet Union, by Toxica, and in resistance to the Nazis, by Pavel Haas’s Study for String Orchestra, written and first played in the Terezin concentration camp. The piece shows us digitized sound, in some sense, in that each experience is cataloged into its own place in the cell block, which can be seen as an array with three rows and about 20 cells in each row. Yet, despite this digitization, the piece is also transcendent as a totality.

So far my blog entries have backgrounded issues of background-and-foreground. @Large: Ai Weiwei on Alcatraz succeeds in foregrounding them.

Endnotes

1. Friedman, B., Kahn Jr, P. H., Hagman, J., Severson, R. L., and Gill, B. (2006). The watcher and the watched: Social judgments about privacy in a public place. Human-Computer Interaction, 21(2), 235-272.

2. Fritsch, J. (2011). Affective experience as a theoretical foundation for interaction design. Ph.D. thesis, Aarhus University.




Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.


Body/brain data/licenses


Authors: Aaron Marcus
Posted: Fri, February 27, 2015 - 3:06:38

We all have bodies and brains.

Some of us have driver’s licenses, social-security numbers, passports, and email addresses issued or monitored by one or more governments and their agencies. These identifiers of ours have limited shelf lives. We all arrive on earth stamped with an expiration date or “best used before” date in our genes. Now, with the Internet of Things, not only can our refrigerators, our chairs, and our floors communicate with the Internet to let other people and/or things know about ourselves, our current location, condition, mood, and state of cognitive, emotional, and physical health, but our own bodies can be communicating with the universe at large, whether we are paying attention to that fact or not. Most people won’t notice or care...

I do.

Because I was sort-of dead at one point, when my heart was stopped (I was on an artificial heart during triple-bypass surgery), because all of my original nuclear family members are dead (may they rest in peace), and because my own body is being kept alive with about five stents in the arteries around my heart, I think about my body/brain data.

This awareness/knowledge does not necessarily lead to depression, lethargy, or enervated wandering of the mind. Being attentive can sharpen and focus attention, helping one decide what one can, must, and should do with (as for me) about 350 million seconds left of life. (There is even one wristwatch that offers a death clock to remind one of the countdown.) This awareness/knowledge can lead one to jettison many frivolous commitments and objectives (unless one decides to devote oneself to frivolity, of course).

For me, it has led to internal observation and speculation about body/brain data in the age of the Internet. Some comments/observations follow.

Marilyn Monroe’s estate carefully guards the memory and legacy of the original movie star born Norma Jean Baker. Michael Jackson and Elvis Presley continue to generate income from the sale of their music and videos, and their effigies, photos, videos, etc., which are carefully monetized and managed. Imagine the future of body data, including 3D scanned images of iconic stars, which might be captured while they are alive, and be made available after their deaths.

Actually, the licensing of personal data could start during life. I have speculated for my Health Machine project (see, for example, Marcus, A. The Health Machine. Information Design Journal, Vol. 19, No. 1, 2011, pp. 69-89, John Benjamins, publisher) that Lady Gaga might distribute/franchise licensed data about what she eats for breakfast, and that these data might go not only to her “Little Monsters” (her fans) but also to Fitbit or other personal health-management products, such as electronic, “wired” toothbrushes that record one’s brushing history and compare that history to the histories of others who set good examples of health maintenance.

The birth of death stars 

I speculate that some enterprising company will offer (perhaps some already do) authentic, guaranteed websites or social media centers that collect your lifetime body data, in addition to your thoughts and messages, as you voluntarily record them or as they are automatically recorded (note the recent ability to “read” an image of what someone is seeing by monitoring her/his brain waves), in order to send them out to family and friends, loved ones, or hated ones, for, say, the next 100 years.

In a way, you will be able to “live on,” including “your” reactions to future events. This phenomenon would give new import to “What would Jesus do?” (or say). In fact, anyone could continue to broadcast messages, somewhat like the light from a “dead star,” which takes years, in some cases millions of years, to reach us, arriving long after the faraway “sender” is gone.

I have not speculated on the cost or the legalities of guaranteeing “permanence.” Note that the cryonics companies of the 1980s and 1990s had to guarantee the viability of preserved human heads or bodies...and someone had to pay for this service. That matter may be worthy of further speculation.

Brain buddies

Slightly off-topic, but speaking of our brains and possible companions...Apple’s Siri has made quite an impression. Then came Microsoft’s Cortana. Now Microsoft China has introduced XiaoIce, Cortana’s little sister. Each has its own personality traits, a far cry from Microsoft’s paper-clip assistant of decades ago. The movie Her portrayed a super-intelligent agent voiced by the sultry Scarlett Johansson. That story had a sad ending: the super-agent decided to go off and play with other intelligent agents rather than relate to feeble human beings.

Let’s avoid looking at the pessimistic side and assume that a Super-Siri of the future will be our Brain Buddy and accompany us through our lives, assigned at birth. Like a social security number and email address, which all “with it” babies nowadays acquire at birth, the future you will get an AY = Augmented You, or AI = Augmented I. Take that as a starter thought and go with it. Where do we land?

Will these companion cranial entities survive us mortal souls and continue to emit snappy chatter to all who inquire of us and our latest thoughts on Topic X? Only time will tell...




Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.


Designing the cognitive future, part VI: Communication


Authors: Juan Hourcade
Posted: Tue, February 03, 2015 - 4:33:59

Continuing the series on designing the cognitive future, in this post I discuss communication. This is a topic on which the HCI community has spent a significant amount of energy, with conferences such as CSCW fully dedicated to it.

Two hundred years ago, most personal communications occurred face-to-face, with the most common exception being letter writing, for those who were literate. This meant that most communication happened in real time with those in physical proximity, requiring people to process information through the senses (mostly auditory and visual) and respond through speech and gestures in real time. Letter writing involved different kinds of cognitive processes. Besides reading and writing, it also involved thinking about when the recipients might read the letter, their context, and how long it might take to receive a reply. Overall, most people lived in a very localized bubble, with perspectives and points of view that were local, expanded only for those with the means and education to access books and other print media.

The telegraph brought greater efficiency to remote communications, with the telephone significantly extending our ability to communicate remotely, eventually without intermediaries. However, expenses associated with these communication methods meant that most personal communication still occurred with those nearby. 

With the combination of mobile devices and Internet connectivity we have nowadays, we can communicate anytime, anywhere, with an unprecedented number of people. Not only that, but for many people in high-income regions, it can happen quite conveniently and at affordable costs. This trend is likely to expand into lower-income regions, with remote communication becoming more widely available, more convenient, and less expensive, while providing access to more people with greater fidelity. The cognitive processes involved are still quite similar to those used for face-to-face communication, although the bandwidth is narrower, making these communications easier to process but often more ambiguous. There are cognitive challenges, though, in navigating the myriad of communication options and their respective etiquettes.

The trend moving us from communicating primarily with those nearby to communicating with our favorite people from around the world is likely to continue. One possible challenge brought about by this trend is that many people may end up communicating almost exclusively with people they like, who share their lifestyle, viewpoints, ideas, and values. This could be exacerbated by increased automation in service industries, meaning that people could avoid previously necessary interactions with strangers. People are also increasingly accessing personalized mass media. Taken together, this could lead to people operating in a new kind of bubble, one that tends only to reinforce personal beliefs and that may make people unaware of others’ realities, including those of people physically around them.

At the same time, communication technologies can enable people to stay in contact with others from halfway around the world, perhaps people they met only once. This could have the opposite effect, providing new perspectives, ideas, and realities. As translation technologies improve, there are also possibilities of people engaging in sustained communication with others with whom they do not share a language. Perhaps they could meet through a mutual interest (e.g., music, art, sports) and communicate in ways not previously possible. This could have even greater effects in lowering barriers between cultures and moving past stereotypes.

What about the technology? It’s easy to imagine extensions of what we are currently seeing: anytime, anywhere communication with anyone; higher-fidelity, fully immersive remote communications engaging all our senses with the highest quality audio, high-definition holograms, full-body tracking and haptic feedback, and who knows, maybe even smell and taste. My guess is most people would stick with the audio and video a majority of the time.

But there could be other communication technologies that go beyond. A longstanding wish and feature in science fiction stories is the ability to communicate thoughts. Something that may be more attainable is technology to help understand the feelings of others, during both face-to-face and remote communication. This could involve processing facial expressions, tone inflections, heartbeat, and so forth. The outcomes could make communication easier for some groups, such as autistic people. 
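
As a thought sketch only (not part of the original post), the fusion step described above might look something like the following, where every input is a hypothetical, pre-computed score from some upstream facial-expression, voice, or wearable-sensor analysis, and the thresholds are arbitrary placeholders:

from dataclasses import dataclass

@dataclass
class SignalSnapshot:
    smile: float             # hypothetical score in [0, 1] from facial-expression analysis
    vocal_arousal: float     # hypothetical score in [0, 1] from tone-of-voice analysis
    heart_rate_delta: float  # hypothetical normalized change from resting heart rate

def feeling_hint(s: SignalSnapshot) -> str:
    """Fuse the channels with simple placeholder rules into a coarse hint for the other party."""
    if s.smile > 0.6 and s.vocal_arousal > 0.5:
        return "engaged and positive"
    if s.smile < 0.3 and s.heart_rate_delta > 0.5:
        return "possibly stressed or uncomfortable"
    if s.vocal_arousal < 0.2:
        return "calm or disengaged"
    return "neutral"

if __name__ == "__main__":
    # Two illustrative snapshots.
    print(feeling_hint(SignalSnapshot(smile=0.7, vocal_arousal=0.6, heart_rate_delta=0.1)))
    print(feeling_hint(SignalSnapshot(smile=0.1, vocal_arousal=0.4, heart_rate_delta=0.7)))

A real system would of course need validated models for each channel and careful attention to consent and privacy; the sketch only illustrates that the coarse “feeling hint” is a simple fusion of several noisy signals.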

At the same time, personal communication technologies could further enable people to more fully express themselves. I saw an example the first time a girl diagnosed with autism used a tablet-based zoomable drawing tool. What on a piece of paper would have looked like scribbles turned into a person drawn from the details to the whole, with two large eyes looking at it. The tool enabled her to tell us what it felt like to be observed. This ability to express thoughts in ways that were not previously possible is something personal communication technologies can enable, something that could potentially make a big difference in the lives of people who have difficulty expressing their thoughts.

So what are my preferences for the future of personal communications? While I greatly enjoy easily being able to communicate with loved ones and people who share my values, I find the opportunities in enabling communication with those who live in very different contexts crucial to helping us make wise decisions about our world. Similarly, most of us are quite fortunate in being able to express ourselves at least to the degree that we can fulfill our basic needs and interests. As a community, we need to continue our work in extending that ability to all people.

How would you design the future of personal communications? 




Juan Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.


@Arthur (2015 07 02)

Hi Juan Pablo,

I liked that article very much, thanks for posting this. And actually I am now working on an academic project in HCI that does exactly what you are talking about: helping people from different cultures communicate better.
I think that the place with the most room for development is intercultural communication because, as you mentioned, the fact that the world is getting more and more interconnected has allowed us to communicate with people we would never have met otherwise. And the speed of technological advances in the domain is much higher than the speed of our adaptation.
In particular, I want to augment video-conference tools by automatically analyzing facial expressions, emotions, and gestures, and displaying information to the chat users so that they can learn culture-general capabilities.
Specifically, I am now looking for tools that can help me achieve this. They need to be accurate and JavaScript-ready.
Have you already worked with e.g. Kinect, Intel RealSense, or VisageSDK? If yes, what is your opinion of them?
Any pointers?

Thanks,
Arthur


Taking stock


Authors: Jonathan Grudin
Posted: Thu, January 22, 2015 - 4:20:50

Two years of monthly posts. A year ago I weighed the experience and suggested that discussion is becoming a less effective use of time, given the ease of scanning masses of information and perspectives on most topics. A blog contributes to the information pile, but engaged discussion may diminish. I see occasional spontaneous flare-ups or flurries. Does your online or offline experience differ?

Twelve posts later, another time to reflect. When Interactions began online blogs, my goal of one a month seemed unambitious—we were asked to write twice that. I stuck to it and now feel downright gabby. I persevered because the discipline of finishing by the end of the month ensures that something gets written. We may have envisioned less polish and more conversation. Some people read them—I’m not sure how many, but everyone is busy.

In 2010, Don Norman reflected on five years of authoring Interactions columns, writing “My goal has always been to incite thought, debate, and understanding.” He asked, “Have they made a difference? How can one tell? If I am to judge by the paucity of email I receive, the infrequent citations, even in blogs, and the need for me to repeat many of my arguments year after year, I would have to say that the columns have not had any impact. Is this due to the work’s inelegance, the passivity of this audience, or perhaps the nature of the venue itself? I reject the first reason out of self-interest and the second out of my experience that in person, you are all a most vocal group. That leaves the third reason.”

He shifted to the magazine Core77. His essays there drew more comments but were ephemeral. Typing “Don Norman” into its search engine returns 931 hits; in no discernible order one finds links to his articles, passages mentioning him, and other things. In contrast, Don’s 37 Interactions essays are easily found in the ACM Digital Library. Two are among the top 10 most-cited Interactions articles, and one 1999 article was downloaded 221 times in the past six weeks.

If immediate impact is the goal, and sometimes it is, Core77 won. On the other hand, Interactions could be more promising to someone who shares Arthur Koestler’s view: “A writer’s ambition should be to trade a hundred contemporary readers for ten readers in ten years’ time and for one reader in a hundred years.” Don’s Core77 essays are collected on his website, but few authors expect readers to forage on their sites.

I edited and wrote Interactions history columns for eight years. I hoped they would interest some readers, but a major target was scholars embarking on a history of HCI “in ten years’ time.” (In a hundred years, the Singularity may write our history without requiring bread crumbs.) The online blogs are indexed and will be easily retrieved as long as Interactions and its blog interface are maintained, but they aren’t archived by ACM. There is no download count to identify which attracted readers. They may rarely be found. Why post?

I noted in "Finding Protected Places" that blogging is a way to explore, clarify, and sometimes discover ideas, to fix holes in fact or logic that do not become evident until reading a draft. Friends can help improve a short essay. A published post can later be revisited and expanded upon. Whatever its quality, each of my posts reflects thoughts that are carefully organized in the hope of propelling someone who is heading down a path I took, a little faster than I managed.

Publishing is an incentive to finish and to ask friends for feedback, but an essay that fails to clear my (fairly low) bar of “potentially useful to someone” goes unpublished. Don identified three goals in publishing. Unlike Don, I never hope to incite debate. All else equal, I’d rather calm and move past debate. But Don’s other goals, incite thought and understanding, yes. When I’ve agonized and made a connection, however modest, sparing the next person some agony is a contribution. If a more complete treatment exists, undiscovered by me and my well-read friends, it doesn’t matter if others are now wrestling with the same issues. Once in a while I discover a better analysis that was written prior to the publication of mine; I can revisit the issue and point it out. I have a few demons, but NIH isn’t one of them.

Conclusion

Where does this reflection leave us? After two years, continue blogging or rest? One of my demons is a fear of running out of new things to say without noticing it. Links to past posts, such as the two above, can be justified as building on previous thoughts only up to a point. Even with a low bar, the supply of topics I’ve thought about enough to produce a worthwhile essay in a few days is limited. Will a worthwhile thought per month materialize? A few remain, and electrodes could be hooked up to the carcasses of a few unfinished past efforts to see if they can be brought to life.

My greatest concern is that views shaped in the 1970s and 1980s offer little to folks with radically different experiences, opportunities, and challenges. I frequently dream of wandering lost in familiar rooms and streets, thronged with busy people I don’t recognize. They’re friendly, but they can’t help me find my destination, and show no sign of needing help finding theirs.

Since 1970, a guiding image for me has been the final scene of Fellini’s Satyricon. I recall the movie as a grotesque view of ancient Rome that Fellini shaped into a personal vision. A boy we have seen mature into a man attends the funeral of the poet who inspired him. The poet appears to die twice. At the beginning, he is a poor man kept to amuse a corrupt aristocrat. Burned and beaten for speaking the truth, he collapses after bestowing the spirit of poetry on the boy. Near the end, he unexpectedly returns as a wealthy man surrounded by aging admirers. The source of his wealth is not explained—I assumed it was his literary reputation. He is fading and soon dies peacefully. His will is read: His fortune will go to those who eat his body. His followers cite cannibalism precedents in the literature. The protagonist and several other young men and women briefly watch the gruesome feast, then turn to a seagoing sailboat and distant shores.




Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.


Lifetime effigies: 3D printing and you


Authors: Aaron Marcus
Posted: Tue, January 20, 2015 - 2:54:02

We all have limited shelf lives. During our own lifetimes, or afterwards, some of us might wish to “publish” many hundreds, thousands, millions, or even billions of lifelike effigies, miniature or full-sized replicas of ourselves. 3D printing now offers a relatively practical, medium-cost way to achieve that objective. What was once the prerogative of wealthy pharaohs, kings/queens, and dictators of Middle Eastern or Southeast Asian countries is now available to almost anyone.

Many of the greats of the Internet world (only around, say, since the explosive arrival of the Web in 1994, or maybe a decade or two earlier if you count the technical origins) and many of the greats of the design world (say, for the past 60 years of the post-World War 2 era) have already died. Every one of them will, eventually. Perhaps we can capture them now, or in the near future, to keep around as replicas.

This awareness/knowledge of impending death does not necessarily lead to depression, lethargy, or enervating wandering of the mind. This consciousness can focus attention, helping one decide what one can, must, and should do with (as for me) about 350 million seconds left. There is even one wristwatch that offers a death clock to remind one of the countdown. This awareness/knowledge can lead one to jettison many frivolous commitments and objectives (unless one decides to devote oneself to frivolity, of course).

For me, it has led to observations and speculations about what to do about death, or life-after-death, in the age of the Internet. Some comments follow.

Jeremy Bentham (February 15, 1748 – June 6, 1832) was a British philosopher and social reformer. Before he died, as early as 1769, he began planning for the dissection of his body upon death and its preservation as an “auto-icon.” His remains were displayed three days after his death. His skeleton and head were preserved and stored in a wooden cabinet, the skeleton padded with hay and wearing his clothes. University College London acquired his remains in 1850 and displayed them to the public. A 360-degree rotatable, high-resolution “Virtual Auto-Icon” is available at the UCL Bentham Project's website.

Dr. Gunther von Hagens gained fame at the end of the last century for replacing the water in the cells of dead bodies and making plasticized versions of them, which were/are exhibited in the Body Worlds exhibits (see my article about this phenomenon: Marcus, Aaron (2003). “Birth/Death of Information as Art: ‘BodyWorlds.’” Information Design Journal, Vol. 11, No. 2/3, pp. 246, John Benjamins Publishing Company).

Some years ago, I had speculated that people would take this plastination process, mass-produce it, and lower the cost, enabling any family to have beloved past family members, pets, and perhaps a few close personal friends preserved and on display in one’s home or business. It was only a matter of time.

After all, and as suggested above, life-sized, and larger, effigies of rulers have graced palaces, monuments, cathedrals, and numerous other governmental, religious, and commercial sites. Also, sports figures: I am reminded of a two- or three-story-sized three-dimensional effigy of the internationally famous Chinese basketball star Yao Ming, which graced a building in Beijing or Shanghai when I drove past it in 2002. I speculated that 3D printing might bring about something like the democratization of effigies.

Now, as before, that reality, the evolution in honoring past or present personages, has already been anticipated by several companies. The first I encountered using 3D printing technology was TwinKind 3D, which has a franchised shop in Berlin; I discovered it there in October 2014. They produce miniature 3D-printed replicas of people based on specialized scans that they make. The cost is “reasonable”: 100 Euros (about $125) for a 10-centimeter figure, and 25,000 Euros (about $32,000) for a two-meter (that is, life-sized) version.


Photos of 3D printed bodies from TwinKind3D store in Berlin.
(Photos by Aaron Marcus, used with permission of TwinKind3D, www.twinkind.com.)

I found a similar company while reviewing exhibits recently at SIGGRAPH Asia 2014 in Shenzhen, China: Apostrophe’s, based in Hong Kong but with locations also in Beijing and Taipei, offers 10cm effigies for 1,900 Hong Kong dollars, or about $245. They produce effigies for games, advertising, 3D posters, and other commercial uses.


Examples of Apostrophe’s 3D images of people.
(Photo by Aaron Marcus, used with permission of Apostrophe’s, www.apostrophes.co.)

One immediately speculates about the future of such effigies, whether miniature or life-sized.

Licensed effigies

Imagine the global hubbub caused when Lady Gaga releases her next.... no, not Internet music video, but the latest licensed, authentic, non-reproducible 3D effigy of herself. Would some media stars do this? Of course they would! To better communicate with their “Little Monster” fans and to increase income from purchases of branded items, just as Hello Kitty has added her adorable face to pens and combs. 

Speaking of copies, authorized or not, I am reminded of my initial shock in 2002 in Xi’an, China, when I encountered an “authentic reproduction” facility located near the archeological site of the 10,000 Chinese warriors that had been unearthed nearby in Central China. Here, the interested tourist could buy official copies: miniature, half-sized, and full-sized. I chose half-sized, and the official government knock-off factory kindly packaged them and shipped them to my home for an additional cost. Clearly, the government had realized the value of official copies.

Who hasn’t had a pin-up of some favorite music, cinema, or television star ornamenting a teenage (or younger) bedroom? Now Lady Gaga might be featured in your home or business, wherever you would like her, provided she made available official, licensed replicas. 

I have written about this topic of copies in an earlier publication and lecture in which I cited Alexander Stille’s brilliant essay “The Culture of the Copy and the Disappearance of China’s Past” (in his book, Stille, Alexander (2002), The Future of the Past, Farrar, Straus and Giroux, pp. 40-70). Stille mentions the entirely different attitude that traditional Chinese culture has had toward copying. In many works, the original materials of an artifact may have been completely replaced as they decay over centuries, but the work is still revered as the “authentic entity,” just as most of the cells of our body are replaced after a period of time, but we still consider the “person” to be the same entity. In China, artists have traditionally been expected to copy their masters. There is one case of a well-known, successful painter who sold his own work as well as copies of previous masters signed with their signatures. (For further reading on copies and their role in culture, please see Schwartz, Hillel (1996). The Culture of the Copy. Cambridge: MIT Press, 568 pp.)

While I am on the subject of Lady Gaga and the quantified self, I have also speculated in our case study of the Health Machine (see, for example, Marcus, Aaron (2011). “The Health Machine.” Information Design Journal, 19 (1), pp. 69-89) that Lady Gaga might license the data about what she had for breakfast this morning, encouraging her “Little Monsters” to copy her eating behavior and thus be persuaded to change their own eating and exercise behavior.

Now, back to effigies...

Gallery of former selves

One can make one’s own Madame Tussauds exhibit based on laser scans of one’s naked or clothed body. Remember how embarrassed you were as a child when your parents showed others nude photos of you as a baby or toddler? Imagine what is coming up! Together with videos, photos, and recordings, an entire Museum of One’s Self could be possible, for the obsessively self-centered and suitably moneyed. Silicon Valley entrepreneurs, please note.

Gallery of your favorite dead relatives

For those who are obsessively nostalgic, imagine the delight in assembling a flotilla of dead relatives at family gatherings, dinners, and other celebrations. Just like the statues at Hearst Castle in California...only yours. An entire family legacy in miniature could take up just a cabinet...or a room, depending on your size preferences.

Recall that Jeff Koons, the noted pop artist, has sold 3D sculptures of famous people and objects for spectacular prices. On November 12, 2013, his Balloon Dog (Orange) sold at Christie's Post-War and Contemporary Art Evening Sale in New York City for $58.4 million. One of his Michael Jackson and Bubbles sculptures, from a series of three life-size, gold-leaf-plated porcelain statues of the seated singer cuddling Bubbles, his pet chimpanzee, sold at Sotheby's New York for $5.6 million.

Now, 3D printing brings the possibility of favorite icons to an affordable level, for those who would like to savor, save, or swoon. We may all rejoice...yes?

Cemeteries: A new playground for effigies

Let me mention two important experiences I had in cemeteries. 

Many years ago, probably in the 1980s, I was in Japan and visited a cemetery. I witnessed families having picnics near their relatives’ graves. I thought that odd and interesting, because in the USA, cemeteries are usually thought of as dull, dreary, somber, and scary places. In Japan, they seemed to be places of joy, where family members communed with the spirits of their dead ancestors.

In 2008, I visited a Jewish cemetery in Odessa. I was shocked to find large-scale life-sized photographic portraits of the deceased sandblasted or acid-etched into the large tombstones. 


Odessa Jewish cemetery tombstone. (Photo by Aaron Marcus.)

This type of display was quite unlike what Jewish cemeteries usually feature, or, for that matter, most Christian cemeteries in Europe and North America. Of course, we have had life-like portraits on caskets dating back to the Coptic Christians in Egypt thousands of years ago, and Egyptian and other ancient civilizations did not hesitate to make life-sized, or even giant-sized, life-like or stylized replicas of their dead rulers, as mentioned above.

Now comes 3D printing! A new business opportunity for the recorders and archivists. What vast sums of money will be spent to decorate one’s grave with a chosen life-sized 3D replica of oneself, in whatever pose one wishes (subject to local laws on disreputable behavior or attire). Imagine the makeover of cemeteries worldwide, becoming a kind of Disneyland of the Dead. 

In sum, 3D printing enables anyone to gain the eternal display and economical publishing of one’s image that was once available only to a few in past centuries and millennia. We have the democratization of death-effigies.

Note, also, that some organizations or institutions already in existence may swoop in quickly to capitalize on producing the first “authentic replicas” or “commemorative replicas,” just as the Franklin Mint in the United States produces authentic commemorative coins of its own minting.

The future of 3D effigies is bristling with possibilities. Of course, we shall have to consider side effects that may be undesirable, but...we’ll leave that for another pondering.




Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.


Wavy hair is in


Authors: Monica Granfield
Posted: Fri, January 09, 2015 - 4:17:41

Has anyone else wondered if UI design is becoming trendy and fashionable? Like hair, clothing, interiors, and architecture, are user interfaces succumbing to waves of industry trends, becoming period-based creations? 

Lately, as I move from one application to another, I have noticed how applications are all beginning to look more and more alike. It feels very much like the look-alike teenage girls at the mall, with their long, bone-straight hair, in their black North Face jackets, skinny jeans, and Ugg boots. They all look in style, but they all look the same. This is the point for teenagers, but is it the point of a good user experience?

There is the style movement of flat UI happening, but then there is the actual presentation. With Windows Metro stepping into the flat-but-colorful arena first, there came a welcome new wave of clean, flat UI. It was great to have grids, better use of fonts, and the notion of whitespace all in the game! It’s just that from Google to Apple and beyond, the interfaces and experiences all seem so similar, so much so that at a glance I sometimes can't tell one from the other.

A few months ago, I was talking to a colleague about an assortment of applications, which had been printed out to get a high-level look at the current state of trending applications in the market. What I immediately noticed was how many apps are using the same visual stylings, such as the same colors, the same rounded images, and the same light outline of the icons. These presentations are nice, except when you can't tell one from the other. Is this because of pressure to get software out the door, and because perpetuating a known entity is faster and more predictable? The problem is that creating an experience so similar to all others, both visually and experientially, is not only mundane, it also weakens the brand. How do you differentiate your product and promote your brand?

Are we evolving into periods of UI design? There was the "Battleship Gray" era of the late 80s and early 90s, then the colorful, almost bulbous, three-dimensional interfaces of the new millennium. After years of gray, how fun and refreshing it was to have all that color and those tangible components. And how can we forget the excitement of the period of interfaces without color, the elegance of the transparent era of glass. Then Windows Metro brought us the clean, crisp, and modern flat UI. This has left me wondering, could we be in the "mid-century" era of UI design? If so, it would be great to have some of the mid-century flair, beyond round user images, to support more distinct visual presentations and overall user experiences. We now have motion graphics that aid in adding distinct dimension to the end-user experience. As we know, similar interaction can be useful and predictable, but if we, as designers, provide the right affordances, interactions should be usable, discoverable, and meet the user’s needs for the intended experience. Will flat UI design, like mid-century design, become a classic look and feel that lasts through the ages? Will UI cycle through styles, revisiting various styles and trends again? Will we embrace a style period but still have the confidence to apply unique and identifiable visuals to our own experience, within the mainstream market?

If UI is fashionable, what is the next big trend? Will battleship gray cycle back around, or will wavy interfaces with subtle curves come into fashion next? After all, flat hair is out—wavy hair and Bean boots are making a comeback. So maybe it’s time to spice things up in the UX world and not only move beyond the flat hair, fleece, and fuzzy boots, but find your own color of boots or fleece this time around.




Monica Granfield

Monica Granfield is a user experience strategist at Go Design LLC.


Destroying the box: Experience architecture inspiration from Frank Lloyd Wright


Authors: Joe Sokohl
Posted: Mon, January 05, 2015 - 11:58:06

"When we said we wanted a house at Bear Creek," client Lillian Kaufmann said to Frank Lloyd Wright, "we didn't imagine you would build it ON the creek!" 

To which Wright replied, "In time you'd grow tired of the sight of the creek...but you'll never grow tired of the sound."

And he was right. Fallingwater stands as the most recognized house in architecture. Yet it's not just a landmark...it was a home. The Kaufmanns loved it.

Similarly, owners of other Wright-designed buildings may have struggled with the architect, the implementation may have had flaws, the builders and other constructors may have gone behind Wright's back to fix perceived design flaws...but they all loved the buildings. The architect's vision remains inspiration to this day.

For the past five years or so, I’ve been writing and presenting on the topic of how Wright’s work and life provide inspiration for my work in architecting experiences. Here I’m taking a look at three Wright landmarks: Fallingwater in Ohiopyle, the Pope-Leighey house in Alexandria, and Taliesin West in Phoenix. I believe that, through Wright's examples, we can learn elements that take our approaches to experience architecture to newly useful and inspiring levels for our clients and the users of our work.

Why these houses? Plain and simple: They showcase aspects of Wright’s interaction with clients, technical problems, and different contexts of use. Plus, I’ve visited them, studied them, and photographed them. In addition, they cover the gamut of Wright’s philosophy of architecture.

Fallingwater and “the box”

Undoubtedly Fallingwater is an iconic house. Yet the path that Wright took to create this house lies not just in the choice to cantilever the house over Bear Run, but in his devotion to the experiences that the Kaufmanns and their guests would have.

Wright prepared for designing the house by immersing himself in the context. He spent hours getting to know E.J. and Lillian Kaufmann and Edgar Kaufmann Jr. (“Junior” spent time at Taliesin in Wright’s famous fellowship).

When Wright received the commission, he spent time walking the grounds where the Kaufmanns wanted to site a weekend house near Bear Run—and specifically near a large, sloping rock next to a 15-foot-high waterfall.

After he returned to Wisconsin, Wright dispatched a team of draftsmen to make elevation drawings of every tree, boulder, bush, and rivulet in the site’s area...and then did nothing for three months.

Wright drew the entire concept in three hours while Kaufmann, the client, drove up from Chicago to Spring Green.

The key UX takeaway is that we need to have time to allow our concepts to gestate. Great design comes from spending the time to do it. In addition, Wright’s focus on what clients really wanted, rather than only what they asked for, has parallels to our work. Our research needs to be detailed enough that we can postulate great solutions, not just pedestrian ones.

Experience architecture considerations

In working on the design for Fallingwater, Wright took into account how the family would use the property. Yet he also looked beyond what they had said outright and instead discovered what they really wanted. His casement windows opened without a corner support, and he liberally used glass corners in both Fallingwater and Taliesin West, truly “breaking the box” of the traditional home. Wright’s famous retort to Lillian Kaufmann about Fallingwater’s placement highlights this. Wright could look beyond the Kaufmanns’ desire to look at the waterfall and cut to the core of their desire: to be in nature, not just at nature.

Other details at Fallingwater show his ability to design elements of the environment that would add to the Kaufmanns’ experiences there. 

For example, the built-in kettle that would swing over the fire has an indentation that it fits in when not in use. This approach to available yet integrated design elements abounds throughout Wright’s designs. 

E.J. complained that the desk in his bedroom was too small; he wanted something bigger. Wright disagreed, saying that it was the right design. The client and architect went back and forth, until Kaufmann said, “I need a bigger space on which to write my checks to you.” So Wright came up with the innovative approach—instead of allowing the desk to block the window, or redesigning the window to open outward, he created the scallop in the desk.

In a similar way, when confronted with objections from clients or team members such as technologists, we need to seek innovative solutions to design problems. Rather than taking the easy path, Wright would have us find the right path for the context, the use, and the nature of the experience.

Context is king 

As the Lao Tse quotation in Taliesin West’s music auditorium says, “The reality of the building does not consist in roof and walls but in the space within to be lived in.” You see this quotation as you ascend the staircase, and its prominence and inescapability highlight its place in Wright’s vision.

This commitment to an understanding of the context where experience occurs provides a keen, focused approach we can bring to our designs. We can seek inspiration from Wright by finding a third option when we’re on the horns of a dilemma. The approach to context answers well the question, how does the site selection integrate with user needs and desires?

When it doesn’t fit

Sometimes, a design conflicts with other elements of the experience. At Taliesin West, some of Wright’s favorite students gave him a vase as a present. He placed it on a shelf in the living room but the base was too big for the shelf.

He could have widened the shelf...but he said, “No, the shelf is correct.” 

He could have asked them to replace it with a smaller vase...but he said, “No, the vase is correct.” Instead, he had a circle cut out of the glass so the vase would fit.

During construction of the Pope-Leighey house, Loren Pope came up with design-altering solutions that Wright allowed to stand. After Wright left the site while the house was being built, Pope convinced the builder to place the horizontally designed windows vertically in the kids’ bedroom. Though this step conflicted with Wright’s design, he allowed it to remain.

Sometimes, we need to understand when a design idea from the client does not destroy the entire design. As Wright said, “The more simple the conditions become, the more careful you must be in the working out of your combinations in order that comfort and utility may go hand in hand with beauty as they inevitably should.”




Joe Sokohl

For 20 years Joe Sokohl has concentrated on crafting excellent user experiences using content strategy, information architecture, interaction design, and user research. He helps companies effectively integrate user experience into product development. Currently he is the principal of Regular Joe Consulting, LLC. He’s been a soldier, cook, radio DJ, blues road manager, and reporter once upon a time. He tweets at @mojoguzzi and blogs at sokohl.com.


Life after death in the Age of the Internet


Authors: Aaron Marcus
Posted: Mon, December 29, 2014 - 11:45:47

We all have limited shelf-lives. We all arrive on earth stamped with an expiration date or a “best used before” date in our genes. It’s just that most of the Internet, most of the Web, most mobile products/services, most wearables, and most of the creation/discussion of the Internet of Things, created by and for younger people, don’t seem to notice the clock ticking...

I do.

Some of my childhood friends have recently died. My parents (may they rest in peace) are dead. My younger brother (may he rest in peace) died 15 years ago. I don’t say “passed” or “passed away.” I say “died.” The real deal. No euphemisms. Just reality.

Many of the greats of the Internet world (only around, say, since the explosive arrival of the Web in 1994, or maybe a decade or two earlier if you count the technical origins) and many of the greats of the design world (say, for the past 60 years of the post-World War 2 era) have already died. Every one of them will, eventually.

This awareness/knowledge does not necessarily lead to depression, lethargy, or enervated mind-wandering. This awareness/knowledge can sharpen and focus attention, to decide about what one can, must, and should do with (as for me) about 350 million seconds left. There is even one wristwatch that offers a death clock to remind one of the countdown. This awareness/knowledge can lead one to jettison many frivolous commitments and objectives (unless one decides to devote oneself to frivolity, of course).

For me, it has led to observation and internal speculation on what to do about death, or life-after-death, in the age of the Internet. Some comments/observations follow.

Life-after-death management systems (LADMs)

I have a website, Twitter account, Facebook account, email account, LinkedIn account, and many other accounts, so numerous that I cannot even keep up with them, remember them, or contribute to them. What will happen to them? What should I do with them? Who will care for them? Should they just all be properly triggered now to expire when I do? How?

Clearly the Internet business startup community must create a death management system, in which one can set up the proper termination or continuation of all these accounts, including a means to fund them for, say, 100 years maximum, or at least 10-20 years, by which time society will have so changed that one cannot predict whether the continuation of any of these will be usable, useful, or appealing at all.

A few other life-after-death on the Internet possibilities offer themselves as likely products/services of the future.

Authentic, guaranteed, “eternal” websites or social-media platforms 

OK, you’ve worked a long time to build up a website and a social-media presence, perhaps managed with Hootsuite. What happens when you die? Not all of us have Marilyn Monroe or Elvis Presley organizations to manage our legacy. Clearly, we shall need a place to gather our thoughts (before death), as well as videos or audio recordings, to collect them and to have them be available in perpetuity, provided we have paid the right maintenance fee, just as one pays a cemetery a fee for maintaining a burial plot. Only the possibilities here are more complex and exciting. Imagine being able to deliver messages, emails, and thoughts on current and future events, composed from documents and data you prepared while still alive, to your friends, your family members, and your enemies, or to the general public. For 100 years or more you will live on, including “your” reactions to future events, based on clever algorithms that intuit what you might have done or said in the future. This gives “What would Jesus do?” (or say) a new significance.

Digital attics: Authentic, organized (or less expensive, disorganized), eternal

In past millennia and centuries, only very wealthy or famous people could collect, preserve, and guarantee the availability of their belongings, articles, diaries, etc. Now some centers preserve the physical legacy of publications and objects of a select few artists, writers, philosophers, government and political notables, etc.

In the future, we shall all have the challenge and the possibility of preserving our own collections in digital attics, whether organized or not, presumably authentic. In them, one will be able to purchase “guaranteed” storage of all digital, scanned, or generated media and artifacts: email, photos, videos, drawings, recordings, business cards, scrapbooks, diaries, journals, and other ephemera and memorabilia. Our descendants may (or may not) examine these items. Others, scholars or detectives or interested professionals, may search among the artifacts (for a fee, no doubt) to look for sociological, anthropological, design-history, historical, political, or other patterns of information that may be of value.

If you think this ridiculous, recall the famous Geniza collection in Cairo, Egypt, where for roughly a thousand years Jewish communities deposited religious books that were no longer being used, along with laundry lists and other seemingly mundane documents; when discovered centuries later, these gave scholars unique insight into the societies of earlier eras. Naturally, wealthy and/or powerful people will have the most elaborate and extensive of these archives or attics, but digital media, servers, etc., offer the chance for everyone to preserve almost everything...if they choose to do so, and someone can pay for it forward in time.

These collections of your digital things would carry on after you die and would offer guaranteed storage of all digitized, scanned artifacts. Of course, wealthy people will have the most thorough collections, just as we were able to pore over King Tutankhamen’s belongings millennia later. Today, such collections are available to a few at the Harry Ransom Center at the University of Texas at Austin, to those who archive their stories in the StoryCorps collection (aired on NPR and archived at the Library of Congress), or to photographers like Walker Evans, who was able to retain a collection of images of anonymous people from the southern USA in the 1930s. Tomorrow, anyone who wishes can (for the right price) preserve a digital legacy.

Growth of professional archivists

With all this preserving going on, it seems likely that a fleet of professional archivists will come into being to help “vacuum” data, scan artifacts, and undertake interviews with selected family members and friends. To some extent these people already exist, but in the future, with legions of people preserving all of their past, this seems a profession likely to grow in numbers worldwide. Pre-death data gatherers would guarantee confidentiality and quality results, so you can say what you want, to be released only after your own death and/or the death of all whom you mention specifically. There might even be a Pre-Death Data Gatherers Society, with annual conventions at which they discuss their techniques and methods.

Pre-death funerals

Some years ago, I invented what I thought was a unique phenomenon: the pre-death funeral. One could arrange for this in advance, perhaps at a time when one knew there was little time left. Then, one could send out invitations, have speakers, ceremonies, rituals, foods, and publications, as appropriate. Imagine being able to enjoy the accolades (presumably people would be thoughtful, kind, or at least funny, as in celebrity roasts on TV/the Internet), rather than being just a silent, somber, stiff participant, as at normal funerals.

Here is what I wrote, but never published, in 2011:

Today, 23 March 2011, television and radio announcers interrupted the regular news (of the Japan post-earthquake and tsunami nuclear reactor meltdown plus the announcement of double-dose radiation in Tokyo that endangers babies, and the ongoing Libyan battles for overthrowing Qadaffi plus other protests throughout Middle-East countries) to announce that Elizabeth Taylor died last night at the age of 79. 

I am 67. Her death, and the television video clips of her life, especially in its last decades, caused me to reflect. She had accepted a special Academy Awards honor for her fight against AIDS. The audience applauded long and loudly. She seemed thrilled at the acclaim. What a wonderful experience she must have had to receive the respect, honor, love, and acknowledgment for her achievements. How many of us have such achievements? How many of us have been celebrated in such an event? A few members of the human race.

Some organizations sponsor lifetime achievement awards, like the National Design Museum, the American Institute of Graphic Arts, and the Business Forms Management Association. However, these are usually fairly brief affairs, sometimes with multiple persons being so honored.

Some people have the benefit of retirement ceremonies at their place of employment, but as the movie About Schmidt (featuring Jack Nicholson) showed, these may be somewhat routine, tentative in their seriousness, and mixed in their success, often with the anxiety of the change that is currently taking place as the employee leaves the company, with perhaps a watch or a small trophy after decades of work.

Of course, there are also the “roasts” of the Comedy Central satellite/cable network. However, these are something else: some praise and acknowledgment of achievements, but heavily mired in insults and disrespect. Hardly a replacement for the Academy Awards acknowledgment ceremony.

How many of us would like to be respected, honored, loved, and acknowledged for our achievements, for our contributions over the course of our lives to bettering the world, even if modest in scope? Probably every one of us. However, society has not developed a system for such lifetime honors and ceremonies. Except one: the eulogy at someone’s funeral. Alas, the person eulogized is not “present” (except as a corpse in a casket) to hear the words, to accept the praise, to enjoy the company of family and friends.

I have a modest solution to this problem: the Pre-Passing Ceremony or the Pre-Death Funeral. 

This event would take place late in someone’s life, perhaps at the age of 65, or whatever age was deemed appropriate, unless they were already officially notified of a lethal disease that, unfortunately, predicted an early death. Of course, if they somehow overcame this medical/legal situation and went on to live a long life, they might qualify for two such celebrations. 

Who might “authorize” such celebrations? I am not sure, but a national or international governmental organization or NGO, which we might call the Pre-Passing Ceremony Commission, like the organizations that assign Internet domains, would help to make things orderly, official, authorized, and more significant.

That organization might also handle post-death maintenance of all Internet-related properties such as Facebook pages, blogs, etc. This organization might also make arrangements for pre-death interviews, post-death email messages and video/phone calls to family and friends, to help keep the dead person’s life and memory ever-present among selected family and friends. But that is another story. 

Back to the Pre-Passing Ceremony…

Mortuaries, churches, synagogues, mosques, social groups, and business organizations might all be interested in sponsoring such gatherings. Why? Because of the possibilities of selling tickets to attendees, and being repaid for the expenses of organizing, publicizing, recording, catering, and managing these events, including Lifetime Books and Lifetime Websites. The event might be simulcast to other locations so that people not able to attend could take part, just as there are ‘round-the-world video connections for the Oscars and New Year’s Eve celebrations. This might expand the audience, participation numbers, publications, PR…and budgets.

The fund-raising, as well as spirits-raising, possibilities are numerous. One might find a use for nearly empty movie theatres that desperately offer their locations for business events before they succumb and close their doors.

All of these events and publications do not preclude a post-death funeral or memorial service, which might take advantage of previously existing documents, contacts, and events that can be repurposed as appropriate. They might even help generate the structures for ongoing anniversaries, or “yahrzeit” commemorations as they are called in Judaism.

Perhaps this ceremony would start in California, ground-zero for new ceremonies, new cults, new chips, new technology, and new social media. Are you ready to start celebrating a lifetime…before it is too late?

Alas, I discovered that someone earlier in the 20th century had already thought of this idea and even staged his own funeral, and that the 2010 movie Get Low, featuring the comic actor Bill Murray, had explored the idea. I am not speaking of merely pre-funeral arrangements, nor merely “celebrating the life of X,” but of something done with the awareness of impending mortality.

Well, in any case, there seems to be a lot of life left in the idea of pre-death rituals and after-death Internet/social-media virtual you’s. You’ll get used to it...



Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.


The rise of incompetence


Authors: Jonathan Grudin
Posted: Thu, December 11, 2014 - 10:08:26

“To become more than a sergeant? I don't consider it. I am a good sergeant; I might easily make a bad captain, and certainly an even worse general. One knows from experience.”
   — from Minna von Barnhelm, by Gotthold Ephraim Lessing (1729–1781)

“There is nothing more common than to hear of men losing their energy on being raised to a higher position, to which they do not feel themselves equal.”
    — Carl von Clausewitz (1780–1831)

"All public employees should be demoted to their immediately lower level, as they have been promoted until turning incompetent."
   — José Ortega y Gasset (1883–1955)

“In a hierarchy individuals tend to rise to their levels of incompetence.”
   — Laurence Peter (1919–1990)

We should be enjoying a golden age of competence. We have easy access to so much information. YouTube videos show us how to do almost anything. Typing a question into a search engine very often retrieves helpful answers. We see impressive achievements: Automobiles run more efficiently and last longer, air travel grows steadily safer, and worldwide distribution of a wide variety of products is efficient. Nevertheless, there is a sense that overall, the world isn’t running that smoothly. Governments seem inept. One industrial sector after another exhibits bad service, accidents, inefficiencies, and disastrous decisions. The financiers whose ruinous actions led to worldwide recession and unemployment didn’t even lose their jobs. In HCI, many nod when Don Norman says “UI is getting worse—all over.” How could incompetence be on the rise when knowledge and tools proliferate?

The last of the four opening quotations, known as the Peter Principle, was introduced in the 1969 best-selling book of the same name. The other writers noted that people are promoted to their levels of incompetence; Peter went further, explaining why organizations keep incompetent managers and how they avoid serious harm. I will summarize his points later, but for now join me in a thought exercise:

Assume the Peter Principle was true in 1969. How are technology and societal changes affecting it?

There are several reasons to believe that managerial incompetence is escalating, despite the greater capability of those who are competent—who, in Peter’s words, “have not yet reached their levels of incompetence.”

Strengthening incompetence

1.  Competent people are promoted more rapidly today. Thus, even if well-trained, they can reach their levels of incompetence more quickly. In the rigid hierarchical organizations of the past, promotions were usually internal and often within a group. Few employees had to wait the 62 years and counting that Prince Charles has for his promotion, but wait they did. Today, with the visibility that technologies enable, competent employees can easily find suitable openings at the next higher level in the same or a different organization. Organizational loyalty is passé. A software developer joins a competitor, an assistant professor jumps to a university that offers immediate tenure, a full professor is lured away by a center directorship or deanship. The quickest way to advance in an organization can be to take a higher position elsewhere and return later at the higher level. LinkedIn reduces the friction in upward trajectories.

2. Successful organizations grow more rapidly than they once did, creating a managerial vacuum that sucks people upward. Enterprises once started locally and grew slowly. Mass media and the Internet enable explosive growth, with technology companies as prime examples. As a project ramps up and adds team members, experienced workers are incented and pressured to move up a management ladder that can quickly grow to 8 or 10 rungs. A person can plateau at his or her level of incompetence while very young.

3. The end of mandatory retirement extends the time that employees can work at their levels of incompetence. In 1969, Peter’s great teacher who became an incompetent principal probably had to retire at 65. Today he could have a decade of poor performance ahead.

4. The decline of class systems and other forms of discrimination is terrific, but egalitarian systems are less efficient if everyone progresses to their level of incompetence, whereas competent employees trapped beneath a class boundary or a glass ceiling are ineligible for promotion and thus fail to achieve incompetence. In the 1960s, many women found job opportunities only in teaching, nursing, and secretarial work. Accordingly, there were many extraordinarily capable teachers, nurses, and secretaries. I benefited from this indefensible discrimination in school and my father benefited from it in his job. (If this argument seems alarming, read to the end!)

5. Increased job complexity is a barrier to achieving and maintaining competence. As the tools, information, and communication skills required for a job increase, someone promoted into the position is less likely to handle it well. The pace of change introduces another problem: A competent worker could once count on remaining competent, but now many skills become obsolete. “Life-long learning” isn’t a cheerful concept to someone who was happy to finish school 30 years ago.

Accept the premise of the Peter Principle and these are grounds for concern. But you may be thinking, “The Peter Principle is oversimplified, competence isn’t binary, lots of us including me haven’t reached our levels of incompetence and don’t plan to.” Peter would disagree and insist that you are on a path to your level of incompetence, if you haven’t reached that destination already. I will summarize Peter’s case, but first let’s consider another possibility: Do other changes wrought by technology and society undermine the Peter Principle? The answer is yes.

Weakening the Peter Principle

1. Technology has so weakened hierarchy in many places that it’s difficult to realize how strong hierarchy once was. Peter christened his work “hierarchiology” because flat organizations are not built on promotions. The ascent at the heart of his principle is almost inevitable in rigid hierarchies where most knowledge of a group’s functioning is restricted to the group. I worked in places where initiating a work-related discussion outside the immediate team without prior managerial approval was unthinkable. Memos were sent up the management chain and down to a distant recipient; the response traveled the same way. The efficiency and especially the ambiguous formality of email broke this. A telephone call or knock on the door requires an immediate response; an email message can be ignored if the recipient considers it inappropriate to circumvent hierarchy. Studies in the 1980s showed that although most email was within-group, a significant amount bypassed hierarchy. Hierarchy is not gone, but it continues to erode within organizations and more broadly: Dress codes disappear, children address adults by first name, merged families have complex structures, executives respond directly to employee email, and everyone tweets.

2. Hierarchy benefits from an aura of mystery around managers and leaders. Increased transparency weakens this. In hierarchical societies, rulers tied themselves to gods. Celebrities and the families of U.S. Presidents once took on a quasi-royalty status. In The Soul of A New Machine, the enigmatic manager West was held in awe by his team. Not so common anymore. Leaders and managers are under a media microscope, their flaws and foibles exposed [1]. When managerial incompetence is visible, tolerating it to preserve stability and confidence in the hierarchy is more challenging. In addition, internal digital communication hampers an important managerial function: reframing information that comes down from upper management so that your unit understands and accepts it. The ease of digital forwarding makes it easier to pass messages on verbatim, and risky to do otherwise because a manager’s “spin” can be exposed by comparison with other versions.

3. When organizations are rapidly acquired, merged, broken up, or shut down, as happens often these days, employees have less time to reach their levels of incompetence. Unless brought in at too high a level, they may perform competently through much of their employment.

And the winner is…

…hard to judge definitively. We lack competence metrics. People say that good help is harder to find and feel that incompetence is winning, perhaps because we expect more, promote too rapidly, or keep people around too long. But could it be that only perceived incompetence is on the rise? Greater visibility and media scrutiny that reveal flawed decisions could pierce a chimera of excellence that we colluded in maintaining because we wanted to believe that capable hands were at the helm.

Despite these caveats, I believe that managerial incompetence is accelerating, aided by technology and benign social changes that level some parts of the playing field. Two of the three counterforces rely on weakened hierarchy, but hierarchical organization remains omnipresent and strong enough to trigger hierarchy-preserving maneuvers at the expense of competence, as summarized below.

Part II: Hierarchy considered unnatural

Peter’s “new science of hierarchiology” posits dynamics of levels and promotions. Archaeology and history [2] reveal that when hunter-gatherers became food-sufficient, extraordinarily hierarchical societies evolved with remarkable speed: Egypt and Mexico, China and Peru, Rome and Japan, England and France. Patterns of dysfunction often arose, but hierarchy persevered. Our genes were selected for small-group interaction; large groups gravitate to hierarchy for social control and efficient functioning. Hence the universality of hierarchy in armies, religions, governments, and large organizations.

As emphasized by Masanao Toda and others, we evolved to thrive in relatively flat, close-knit social organizations where activity unfolded in front of us. Hierarchical structures are accommodations to organizing over greater spatial and temporal spans. They can be efficient, but because they aren’t natural we should not be surprised by dysfunction. Hierarchy that emerges from our disposition to jockey for status in a small group can play out in less than optimal ways in large dispersed communities.

Those at the top work to preserve the hierarchy, with the cooperation of others interested in stability and future promotion. When employees are promoted but prove not up to the task, removing them has drawbacks. It calls into question the judgment of higher management in approving the promotion. Who knows if another choice will be better? The person’s previous job is now filled. If it is not disastrous, best to leave them in place and hope they grow into the job. In this way, a poor school superintendent who was once a good teacher or athletic coach hangs on; an incompetent officer is not demoted. When high-level incompetence could threaten an organization, other strategies are employed: An inept executive focuses on procedural aspects of the job and is given subordinates “who have not yet risen above their levels of competence” to do the actual work.  An incompetent manager is “kicked upstairs” to a position with an impressive title and few operational duties. Peter labels this practice percussive sublimation and describes organizations that pile up vice presidents “on special assignments.” In a lateral arabesque, a manager is moved sideways to a role in which little damage can be done. Another maneuver is to transfer everyone out from under a high-level non-performer, yielding a free-floating apex. Reading Peter’s amusing examples of these and other such practices can bring to mind a manager one has known. Perhaps more than one [3].

The Peter Principle

Researching a book on the practices of good teachers, Laurence Peter encountered examples of poor teaching and administration. His humorous compilation of “case studies” drawn from education and other fields, fictionalized and padded with newspaper stories, eventually found a publisher and became a best-seller. A 1985 book subtitled “The Peter Principle Revisited” promised “actual cases and scientific evidence” behind “the new science of hierarchiology.” It delivered no such thing. He may have observed and interviewed hundreds as claimed, but he provides a limited set of examples: capable followers promoted to be incompetent leaders, capable teachers who made poor administrators, experts on the shop floor who became bad supervisors, great fundraiser-campaigners who prove to be poor legislators, and so on. Sources of eventual incompetence are intellectual, constitutional, social, and other mismatches of skill set to position requirements.

The phenomenon is also evident in less formal hierarchies. A good paper presenter is promoted to panel invitee. A successful panelist receives a keynote speech invitation. A young researcher is invited to review papers. Promotions to associate editor or associate program chair, editorship or program chair, and more prestigious venues or higher professional service can follow until incompetence is achieved. Percussive sublimation and lateral arabesques are found in professional service as well as in organizations. The visibility of competent performance can undermine it by spurring invitations: A strong, proactive conference committee member may deliver weak, reactive service when subsequently on four committees simultaneously.

At times Peter claims that there are no exceptions to his principle. Pursuit of universality led him to dissect apparent exceptions, yielding the insights into how organizations handle high-level poor performers. Elsewhere Peter acknowledges that many people work ably prior to their “final promotion,” suggests a few ways to avoid promotion to your level of incompetence, and presents the nice class boundary analysis that identifies pools of competence.

The prevalence of class systems explains why the 18th and 19th century quotations above described the possibility of promotion to incompetence whereas the 20th century quotations stated its inevitability in an age with less discrimination. Less discrimination against white males, anyway. Allow me an anecdote: As CSCW 2002 ended, I went to a New Orleans post office to mail home the bulky proceedings and other items. As I started to box them, one of two black women behind the counter laughed and told me to give them to her, whereupon she discussed the science of packaging while rapidly doing the job and entertaining her colleague with side comments. I left with no doubt that the two of them could have managed the entire New Orleans postal service. Courtesy of workplace discrimination perhaps, I had the most competent package wrapper in the country.

Spending a career with a single employer—actors in the studio system, athletes and coaches with one team, faculty staying at one university, reciprocal loyalty of employees and company—was once common. Promotions were internal, waiting for promotions was the rule, and years of competent performance were common, abetted by glass ceilings and early retirements. Those days are gone.

The versatility of programming made it a nomadic profession from the outset. When I worked as a software developer in the mid-1980s, we questioned the talent of anyone who remained in the same job for more than three years. A good developer should be ready for new challenges before an opening appeared in their group, so we were always ready to find a job elsewhere. When after two years I left my first programming job—work I loved and was good at—to travel and take classes, my manager tried to retain me by offering to promote me to my level of incompetence—that is, he offered to hire someone for me to manage.

With job opportunities in all professions visible on the Internet and intranets, a saving grace of the past disappears: When the number of competent employees exceeded the number of higher positions, not all could be promoted. Today, a capable worker aspiring to a higher position can likely find an employer somewhere looking to fill such a position. 

Concluding reflections

This essay on managerial efficacy began with the observation that rapidly accessed online information is a powerful tool for skill-building: In many fields, individual competence and productivity have never been higher. Because someone who does something well is a logical candidate for promotion to manage others doing it, this undermines managerial competence: Managing is a complex social skill that is learned less when studying online than through apprenticeship models.

As class barriers and glass ceilings are removed, subtle biases continue to impede promotion, so by the logic of the Peter Principle, past victims of overt discrimination are especially likely to be capable as they more slowly approach their final promotion.

What should we do? Think frequently about what we really want in life, and keep an eye on those hierarchies in which we spend our days, never forgetting that they are modern creations of human beings who grew up on savannahs and in the forests.

Endnotes

1. “It is the responsibility of the media to look at the President with a microscope, but they go too far when they use a proctoscope.”— Richard Nixon

2. Charles Mann’s 1491 provides incisive examples and a thoughtful analysis.

3. The reluctance of monarchs to execute other royalty reflected the importance of preserving public respect for hierarchy. The crime of lèse-majesté, insulting the dignity of royalty, was severely punished and remains on the books in many countries. Today we are reluctant to prosecute or even force out financial executives who made billions driving our economy into ruin. Our genes smile on hierarchy, our brains acquiesce.

Thanks to Steve Sawyer, Don Norman, Craig Will, Clayton Lewis, Audrey Desjardins, and Gayna Williams for comments.

Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.


Mobile interaction design in the 2013-2014 academic year


Authors: Aaron Marcus
Posted: Wed, December 03, 2014 - 4:32:12

In May 2014, I was blessed with three invitations to be a guest critic at three end-of-the-semester design courses in three different departments of two educational institutions. Here is a quick summary of my experience, much delayed because of professional course/workshop/lecture presentations and book writing.

New Product Development Course, Mechanical Engineering and Haas School of Business, University of California at Berkeley, Berkeley, California

This course is led by Prof. Alice M. Agogino, Department of Mechanical Engineering. (The course description is here.)

From what I have seen over the past few years, the course provides an experience in preliminary project planning of complex and realistic mechanical engineering systems, but includes the possibility of projects that are, in effect, mobile user-experience design. Design concepts and techniques are introduced, and students do innovative design/feasibility studies and present them at the end of the course. The primary reading material is the textbook Product Design and Development (Second Edition) written by Karl Ulrich and Steve Eppinger, a rather basic step-by-step discussion of the processes. The course objectives include innovation and achieving customer-driven products. The topics covered include personas and empathic design; translating the "voice of the customer"; concept generation, selection, development and testing; decision analysis; design for the environment; prototyping; an ethics case study; universal design and entrepreneurship; and intellectual property. Sounds pretty comprehensive, doesn’t it? Well, I am sure the students do get a good introduction to the design/development process.

The reviews were held in the Innovation Lab at the Haas School of Business. Some projects that seemed to me especially interesting to the HCI/UX/CHI/IxD communities were these:

Headphones: A product that improves existing headphones by increasing longevity for daily active users while enhancing durability and ergonomics 

ResidentSynch, The Smart Home: A product to integrate, monitor and optimize the use of the various products in urban households

Samsung-Intel Next Digital 1 (IoT): A platform that connects all of our devices with objects around us 

Samsung-Intel Next Digital 2 (Sensorial Experience): The integration of technology and sensorial experiences, enabling consumers to interact with technology in a way that stimulates multiple human senses simultaneously

Smart Alarm Clock: Redesign the experience of waking up and starting your day.

One of the more intriguing projects, which I reviewed in detail, was the Samsung Loop project, with a Samsung staff member as a mentor/guide. The Loop is a finger-worn ring: when people actually meet (not virtually), controlled information can pass from one ring to another through close contact, for example, when they shake hands. The device was clever, minimal in form, and similar to other ongoing research on “finger-top” devices. Of course, there is the cultural challenge that not all people shake hands upon meeting.


Example of the Loop finger-device for exchanging
information between people who actually meet.
(Photo by Aaron Marcus)

Even though some projects were quite inventive, most had mediocre presentations of their end results, perhaps not surprising from a group of engineering and business students. One exception was the work of an outstanding student, Elizabeth Lin, a computer-science major, whose graphic design and explanatory skills were at a professional level.

University of California at Berkeley, Department of Computer Science, Course in Mobile Interaction Design, Berkeley, California

Prof. Bjørn Hartmann invited me to be a critic in his course, in which he, together with Prof. Maneesh Agrawala, offers a semester of learning about mobile user-experience design, mobile user-interface design, and mobile interaction design. The projects were impressive, as presented in two-minute visual, verbal, and oral summaries. Two other critics and I were asked to judge them. The other two critics were much younger user-experience/interaction design professionals: Henrietta Cramer and Moxie Wanderlust, both from the San Francisco Bay Area (by the way, Moxie Wanderlust made up his name in graduate school... an interesting user-experience design project in itself!).

Amazing to me was the fact that, after judging 24 presentations in under an hour, we were all almost identical in our judgments as to the best overall project, the best visual design, and the most original. I was a little worried that we would differ greatly in our reviews. We didn’t!

In this class, students were able to code or script working prototypes within eight weeks. Alas, insufficient time had gone into user-experience research, usability studies, and visual design. Not a surprise from a center for computer science. The course and the projects are well documented online.

California College of the Arts, Department of Graphic Design, San Francisco, California

The lead professors of the senior thesis presentations, Prof. Leslie Becker and Prof. Jennifer Morla, invited me to join the group of guest critics for 20 senior thesis projects of varying media and subject matter. One of the most interesting was that of Maya Wiester, whose project focused on 3D printing of food and a mobile app that would manage food ordering. New characteristics for one’s favorite cuisine might include not only ethnic/national genre (Italian, Chinese, Thai, Mexican, etc.) and sustainability/healthfulness (vegan, organic, low-salt, no-MSG, gluten-free, local, etc.) but new attributes such as shape (Platonic solids, free-form, etc.), color (warm, cool, multi-colored, etc.), and surface texture (smooth, rough, patterned, etc.).


Example of 3D-printed food by Maiya Wiester.

Many of the projects were quite interesting and often powerful formal explorations, but usually without business/production/implementation considerations (no business plans), with little testing/evaluation, and with little or no implementation of computer-software-related applications.

What was evident from visiting all three educational sites was that each had its special emphasis, but no one place succeeded in providing, at an undergraduate (or graduate) level, sufficient depth to produce the user-experience design professionals so much needed now. It seems that on-the-job experience is what gives new graduates the necessary depth and breadth of educational experience and expertise.

Perhaps it has always been this way. Some educational institutions claim to provide it, that is, the “complete package.” I think most do not deliver the complete set of goods, but some are trying harder. Having visited or talked with faculty at 5-10 institutions in the last year or two in five or six different countries, I am prepared to say that at least the educational leadership is aware of the challenge.



Aaron Marcus

Aaron Marcus is principal at Aaron Marcus and Associates (AM+A) in Berkeley, California.


Lightning strikes!


Authors: Deborah Tatar
Posted: Tue, December 02, 2014 - 12:15:05

Every once in a while, in the world of high technology, I encounter someone who is doing a perilous, marvelous thing: planting his/her feet on the ground, and, in grounding him/herself, becoming a conduit for far more. 

At the Participatory Design Conference in Namibia last month, Cristóbal Martínez did this both figuratively and literally. Cristóbal is a graduate student of James Paul Gee and Bryan Brayboy’s, in Rhetoric, at Arizona State. He is also a mestizo from el pueblo de Alcalde, located just north of Santa Fe, New Mexico. Martínez is also part of a collective, Radio Healer, that explores, through rhetoric and performance, indigenous community engagement. And some of that engagement involves appropriation of pervasive media. 

At PDC, Martínez committed an act of digital healing for—or perhaps with—us. The performance that I saw integrated traditional and novel elements. He first spoke, then donned a mask to dance and made music with shell ankle-rattles, a flute, his voice, and three Wiimote-enabled instruments. One, a bottle that he tilts and turns, has drone-like properties, almost like a theremin, that establish a kind of keening baseline for the performance. The other two—handheld revolving platforms with Wiimotes affixed—are musically more complex. They provided rhythmic form through the period of revolution of the platforms as he held them, and melodic content through the variation in tonality as the platforms revolved.


Martínez engaged in performance

In some important sense, this was not much different from the buffalo dances my family and I have witnessed and enjoyed on New Year’s Day near to Martínez’ home in New Mexico. I experienced it with similar emotion in Namibia as in New Mexico, and indeed my direct experience in Namibia was overlaid with a memory. We had taken our three-year-old, Galen, to the Buffalo Dances, and he stood there on the arid yellow winter ground, entirely absorbed, for several hours, apparently indifferent to the considerable cold. He was also uninterested in offers of snacks, lunch, or naps. (Our baby was also completely absorbed from the comfort of his much warmer stroller, so that made an entirely absorbed family.) 

Eventually, the huge Abuelo (grandfather) in his cowboy hat came over to Galen. Galen, also in a cowboy hat, and with his great serious, steady dark eyes, and his soft trusting little baby cheeks, looked up at the Abuelo. The Abuelo silently held out his great huge man hand and folded Galen’s little one into a serious man’s handshake. After which he gave him a very tiny, very precious candy cane. There it was, in two silent, economical gestures: acknowledgement of the elements of the man my son would become (and now is), and the child that he was. We had come to witness the dances, but we were, in this way, also seen. 

In some sense, Martínez’ performance was like that for me. It was replete with directly experienced meaning—meaning perceptible to my three-year-old and even the 8-month-old baby. I was a witness; I found my own meaning in it despite my non-native status, and I was to some extent and in some generous way invited to partake.

But there are some differences worth thinking about. Martínez was performing in isolation from his collective; he was performing for us, a small and sympathetic but definitely global audience; and then—the trickiest bit—he was adopting and adapting Wiimotes.

I am not in much position to talk about what Radio Healer means in its own setting, except that we cannot think about what Martínez was showing us without thinking about the framing he gives it. His purpose is to open up and enliven a self-determining community through the assertion of what he calls “indigenous technological sovereignty.” He wants his people to engage in critical discourse around the appropriation of technology. He wants them to re-imagine it for their own purposes. Possibly “he wants” is too egocentric a phrase. He probably sees himself as a conduit of a larger collective wanting. This quest is given purpose by their own pursuit of their own cultural logic and lives, but it is also given poignancy by pressures that particularly impinge on Indian sovereignty. Not only have Indians been decimated, abused, underserved and neglected in the past, but they also are currently colonized in many ways, not the least of which is a considerable pressure to be a kind of living fossil—to live as stereotypes of themselves, unable to change, and unable to create living community. 

Martínez is not the only person to protest this pressure. Glenn Alteen and Archer Pechawis expressed it at last spring’s DIS conference in Vancouver in explaining the operation of grunt gallery (especially Beat Nation). And, while in Vancouver, I was lucky enough to be able to see Claiming Space, the hip-hop Indian art show at the Museum of Anthropology, created by teenage urban First Nation people. The question that was posed around that curated exhibit was, “Why should it be noteworthy that Indians engage with hip-hop?” 

And the good question that follows on this (“Why should it be noteworthy that Indians use Wiimotes in their ritual?”) brings us back to the meaning of Martínez’ performance for us at PDC. Because Indians are also Americans or Canadians or citizens of the world, and, even if they were space aliens, Martínez’ healing tacitly suggests that the performance has relevance beyond its indigenous origin.

To my mind, the relevance stems first from the fact that we must all—Indian, not-Indian, American, African—decide how to live, given the palpable options in front of us, and then, secondarily, amid the pressure of computation. Computing is a structural enterprise, but we are not structural creatures. How do any of us create lives, our own rich lives, in the constant presence of the reductionist properties of the computer? In this sense, computation is a colonization that we all face.

Martínez’ performance underscores the ways in which it is terribly, desperately hard to wrest even a device as simple and innocuous-seeming as a Wiimote from its place fulfilling and fueling consumption in a consumer society toward a role as an expressive element in a spiritual practice. He cannot even do this simple thing, using it in a performance, without it being noteworthy, even definitional! How much more difficult is it to resist the self-definition we see in the systems embedded in the everyday activities of our lives? Martínez, arguably, provides us with a model for culturally responsive critical engagement with emerging pervasive technologies in the early 21st century. His performance raises appropriation to a design principle: Use that which is noteworthy, but use it for your own purposes. Make sure that they are your purposes. Or as Studs Terkel used to say, “Take it easy, but take it.”



Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.


@Lucy Suchman (2014 12 12)

Deborah, thanks so much for this lucid reflection on a complicated event!  It makes its significance clearer even for those of us who were present.

Lucy


User experience without the user is not user experience


Authors: Ashley Karr
Posted: Mon, November 24, 2014 - 10:09:43

Takeaway: User experience (UX) is a method of engineering and design that creates systems that work best for the intended user. In order to design in this way, users must be included in the design process through user research and usability testing. If user research and usability testing are not practiced, then UX is not being practiced.

What is the problem?

I have seen and interacted with countless individuals and organizations claiming that they practice UX, but in reality, they practice what I call personal experience design, stakeholder experience design, or client experience design. In this type of design, there may be a department called UX within an organization, there may be a few individuals with UX in their job descriptions, or there may be a consultant or agency that sells UX services to clients. However, these departments, agencies, and individuals never work directly with representative users. Simply put, what these people and groups are practicing is not user experience because they are not including the user in their design process. 

Why do we exclude the user from user experience?

Excuses for not working with users include: lacking the time and money to conduct research, not knowing that working with users is part of the UX process, not believing that working with non-designers or non-engineers would positively impact a design, actually being afraid of recruiting and working with users, and not knowing how to conduct research and testing. My hope is that the frequency of these excuses will drop considerably over the next few years as more people become aware of how valuable user research and usability testing methods are and how fast, easy, and enjoyable user research and usability testing can be.

User research and usability testing can speed up the design process for a number of reasons. The most important of these, in my opinion, is that the data gathered from research and testing is very difficult to argue with. This data aligns teams and stakeholders and cuts back on time spent agonizing and arguing over design decisions. If two departments or two team members can’t agree on a design element, for example, they can test it and let the users choose. Setting up and running a test is often faster and more productive than arguing or philosophizing about the design with teams, clients, and/or stakeholders.
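A quick aside of my own to make “test it and let the users choose” concrete: the Python sketch below summarizes a small head-to-head comparison of two design variants with a garden-variety two-proportion z-test. The numbers, the function name, and the choice of statistic are illustrative assumptions on my part, not something the post prescribes.

from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for the difference between two completion rates."""
    rate_a, rate_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)            # pooled completion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))    # standard error under the null
    z = (rate_a - rate_b) / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))           # normal approximation
    return z, p

# Hypothetical results: 18 of 25 participants completed the task with variant A,
# 9 of 25 with variant B.
z, p = two_proportion_z(18, 25, 9, 25)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")

However the numbers are crunched, the point stands: a small, quick test yields evidence the whole team can look at together, which is usually faster than arguing.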

Why is it important to work with users?

Working with users allows us to understand users’ mental models regarding a design. This, in turn, allows us to design an interface that matches users’ mental models rather than the interface matching the engineers’, designers’, or organizations’ implementation models. It should make sense to design something that the user will understand. The way to understand what makes sense to the user is to talk to, listen to, and observe them. Although making sure that users understand how to use your design does not guarantee success, it does increase your chances. I also want to mention before closing this paragraph that by working with users in this way, we often gain deep insights that spur innovation for our current and future designs. Needs we may never have spotted expose themselves and point us toward a new and important function or a completely novel product.

What are user research and usability testing?

User research is a process that allows researchers to understand how a design impacts users. Researchers learn about user knowledge, skills, attitudes, beliefs, motivations, behaviors, and needs through particular methods such as contextual inquiry, interviewing, and task analysis. Usability testing is a method that allows a system or interface to be evaluated by testing it on users. The value of usability tests is that they show how users actually interact with a system—and what people say they do is often quite different from what they actually do. Note that the biggest difference between user research and usability testing is this: a usability test requires a prototype, while user research does not necessarily require one.

How can you begin practicing user research and usability testing now?

Depending on your situation, you may be able to begin practicing user research and user interviewing informally right now by simply asking people around you for feedback regarding your design. For most of us, this is very feasible and we do not need permission from anyone to get started. If your representative users are quite different than the people around you, you are working on a project that is protected by non-disclosure agreements, or you are working within a large organization with an internal review board, you will have a few more hoops to jump through before being able to begin using these techniques. You will just have to work a little harder to recruit representative users, require research participants to also sign NDAs, and/or receive approval from the internal review board before you can start. 

Let’s set the hoops aside now and accept that getting out and away from your own head, assumptions, desk, office, and devices in order to talk to and observe users in their native habitats is one of the most powerful design techniques that exists. Being afraid or uninformed can no longer be excuses. Once you have conducted a user interview or a round of usability testing, the fear subsides, and there are so many amazing and free resources available online to guide practitioners through research and testing. I recommend usability.gov and a hearty Google search to get the information you need to begin incorporating user research and usability testing into your process. And, that is the point of this article—to get you to begin.



Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


Debatable


Authors: Jonathan Grudin
Posted: Fri, November 21, 2014 - 3:18:54

Issues that elicit passionate and unpredictable views arrive, fast and furious:

  • The right to be forgotten
  • Facebook emotion manipulation
  • Online bullying and cyber hate
  • Institutional Review Boards
  • Open publishing
  • The impact of technology on jobs

Is consensus eroding?
Has HCI broadened its scope to encompass polarizing topics?
Are social and mainstream media surfacing differences that once were hidden?

After briefly reviewing these topics, I’ll look at controversies that arose in years past—but back then it was two or three per decade, not one every six months. Unless I’ve forgotten some, the field is more contentious in its maturity.

Recent controversies

Expunging digital trails. The European Union ruled that people and organizations can prevent search engines from pointing to some past accounts of their activity. This might favor those with financial resources or high motivation due to particularly unsavory histories, but it could counter an erosion of privacy. That said, it doesn’t expunge records, it just puts a hurt on (non-European) search engine companies. Will it work? The BBC now lists articles that search engines must avoid, which could increase the attention the articles receive.

The Facebook study. Researchers set off a firestorm when they claimed to induce “emotional contagion” by manipulating Facebook news feeds. Whether or not the study showed emotional contagion [1], it revealed that Facebook quietly manipulates its news feed. The use of A/B testing to measure advertising effectiveness is not a secret, but this went further. If happy people prove more likely to click on ads, could Facebook systematically suppress grumpy peoples’ posts? Many in the HCI community defended the study as advancing our understanding; I could not predict which side a colleague would come down on. I was a bit torn myself. I’m a strong advocate for science, but what if one of the unidentified half million people barraged with selectively negative friends’ posts dropped out and another leapt from a bridge? The “contagion” metaphor isn’t comforting.

Prior approval to conduct research. Did the Facebook terms of agreement constitute ethically adequate “informed consent” by those in the study? The limited involvement of an Institutional Review Board (IRB) was challenged, which brings us to another ongoing debate. IRB approval is increasingly a prerequisite for academic research in health and social science, intended to avoid harm to study subjects and legal risk for research institutions. It seems reasonable, but in practice, researchers can wear down examiners and get questionable studies approved, and worthwhile research can be impeded or halted by excessive documentation requirements.

The Facebook study team included academic researchers. Industry researchers are often exempt from IRB review, which can irritate academics. I have experienced both sides. I’ve seen impressive, constructive reviews in clinical medicine. In behavioral and social science, I’ve seen good research impeded. Could a PhD based on human subject observation or experimentation become a license to practice, as a surgeon obtains a license, with onerous IRB reviews invoked only after cases of “malpractice”?

Hate speech and bullying vs. free expression. A few well-publicized teenage suicides after aggression in online forums led to legislation and pressure on social media platform and search engine developers to eliminate unwanted, disturbing online confrontations. But it’s complicated: What is offensive in one culture or subculture may not be in another. Offensiveness can depend on the speaker—“only Catholics can criticize the Pope,” but on the Internet no one knows you’re a Catholic. Some people find it entertaining to insult and horrify others—“trolls” exhibit poor taste, but is it hate speech?

We want to protect the young, but how? When a child is the target and a parent is known, should abuse be reported? Some kids can handle it and would rather not involve their parents. Some researchers argue that children with abusive parents might be worse off if a parent is brought in. This topic calls for research, but legislation is being enacted and software developers can’t wait. People take different sides with considerable passion.

Open publishing. Publishing becomes technically easier every year. Why not cut publishers out of the loop? Many who have looked closely respond, “Because publishers oversee details that busy professionals would rather avoid.” Nevertheless, ACM is under pressure despite its very permeable firewall: Individual and conference “author-izer” features permit free worldwide access, and thousands of institutions buy relatively inexpensive ACM site licenses without grumbling. Open access was embraced first in mathematics and physics, where peer review plays a smaller role, and in biology and medicine, where leading journals are owned by for-profit publishers. Within HCI and computer science, calls for open access are strongest in regions that rely less on professional societies for publication.

Publishers sometimes spend profits in constructive ways. ACM scans pre-digital content that has no commercial value into its useful digital library archive and supports educational outreach to disadvantaged communities. Some for-profit publishers have trail-blazed interactive multimedia and other features in journals. Publishers do shoulder tasks that over-taxed volunteers won’t.

The impact of technology on jobs. In 1960, J.C.R. Licklider proposed a “symbiosis” between people and computers. Intelligent computers will eventually require no interface to people, he said, but until then it will be exciting. Many of his colleagues in 1960 believed that by 1980 or 1990, ultra-intelligent machines would put all humans out of work—starting with HCI professionals. No one now thinks that will happen by 1980 or 1990, but some believe it will happen by 2020 or 2030.

A recent Pew Internet survey found people evenly split. Digital technology was credited for the low-unemployment economy prior to 2007, so why not blame high tech for our lingering recession? That’s easier than taking action, such as employing people to repair crumbling infrastructures.

In September, a Churchill Club economic forum in San Francisco focused on automation and jobs. The economists, mostly former presidential advisors, noted that in the past, new jobs came along when technology wiped out major vocations. The technologists were less uniformly sanguine. A Singularity University representative forecast epidemic unemployment within five years. The president of SRI more reassuringly predicted that it would take 15 years for machines to put us all out of work. I contributed a passage written by economics Nobel Laureate and computer scientist Herb Simon in 1960, appearing in his book The Shape of Automation for Men and Management: “Technologically, machines will be capable, within 20 years, of doing any work that a man can do.”

Past controversies: Beyond lies the Web

The HCI world was once simple. We advocated for users. Now it’s more complicated. Users hope to complete a transaction quickly and leave; a website designer aims to keep them on the site, just as supermarket designers place frequently purchased items in far corners linked by aisles brimming with temptation.

Some disagreements in CHI’s first 30 years didn’t rise to prominence. For example, many privately berated standards as an obstacle to progress, whereas a minority considered standards integral to progress, claiming that researchers generally favor standards at all system levels other than the one they are researching. Social media weren’t there to surface such discussions and mainstream media rarely touched on technology use. However, controversy was in fact rare. Program committees desperately sought panel discussions that would generate genuine debate. We almost invariably failed: After some playful posturing by panelists, good-natured agreement prevailed.

There were exceptions. Should copyright apply to user interfaces? Pamela Samuelson organized CHI debates on this in 1989 and 1991. Ben Shneiderman advocated legal punishments and testified in a major trial. On the other side, many including Samuelson worried that letting a lawyer’s nose into the HCI tent would lead to tears.

In spirited 1995 and 1997 CHI debates, Shneiderman took strong positions against AI and its natural-language-understanding manifestation [2]. His opponents were not mainstream CHI figures and the audience generally aligned with Ben. AI competed with HCI for the hearts and minds of funders and students. NLU dominated government funding for human interaction but never established a significant foothold in CHI, where most of us knew that NLU was going nowhere anytime soon.

Today, there is some openness to discussing values and action research. Back then, CHI avoided political or value-laden content. We were engineers! One might privately lean left or libertarian, but the tilt was scrupulously excised from written work. Ben Shneiderman was a lonely voice when he advocated that CHI engage on societal issues.

The prevailing view was that HCI must eschew any hint of emotional appeal to get a seat at the table with hardware and software engineers, designers and managers. Ben’s calls to action were balanced by his conservative reductionist methodology: There is no problem with experiments that more experiments won’t solve.

The most heated controversies were over methods. Psychologists trained in formal experimentation, engineers focused on quantitative assessment, and those who saw numbers as essential to getting seats at the table were hostile to both “quick and dirty” (but often effective) usability methods and qualitative field research.

In 1985, in the first volume of the journal Human-Computer Interaction, Stu Card and Alan Newell said that HCI should toughen up because “hard science” (mathematical and technical) always drives out “soft science.” Jack Carroll and Robert Campbell subsequently responded that this was a bankrupt argument, that CHI should expand its approaches to acquire better understanding of fundamental issues. In 1998, again in the journal HCI, Wayne Gray and Marilyn Salzman strongly criticized five influential studies that had contrasted usability methods. They drew 60 pages of responses from 11 leading researchers.

The methodological arguments subsided. Today’s broader scope may reflect greater tolerance for diverse methods, or maybe the tide turned—advocates of formal methods shifted to other conferences and journals.

Concluding observation

Today contention flares up, but public attention soon moves on. Is there any sustained progress? Some topics are explored in workshops or small conferences. A Dagstuhl workshop made progress on open publishing, but we did not produce a full report and uninformed arguments continue to surface. Some people with strong feelings aren’t motivated to dig deeply; some with a deep understanding don’t have the patience to repeatedly counter emotional positions.

The vast digital social cosmos that surfaces controversies may diminish our sense of empowerment to resolve them. We touch antennae briefly and continue on our paths.

Endnotes

1. The study showed that news feeds that contain negative words are more likely to elicit negative responses. For example, if people respond to “I’m feeling bad about my nasty manager” by saying “I’m sorry you’re feeling bad” and “Tough luck that you got one of the nasty ones,” the authors would conclude that negative emotion spread. Other explanations are plausible. When my friend is unhappy, I may sympathize and avoid mentioning that I’m happy, as in the responses above.

2. A record of the CHI 1997 debate, “Intelligent software agents vs. user-controlled direct manipulation,” is easily found online. A trace of the more elusive CHI 1995 debate can be found by searching on its title, “Interface Styles: Direct Manipulation Versus Social Interactions.”

Thanks to Clayton Lewis for reminding me of a couple controversies and to Audrey Desjardins for suggestions that improved this post and for her steady shepherding of Interactions online blogs.



Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.


Disrupting the UX design education space


Authors: Richard Anderson
Posted: Wed, November 19, 2014 - 11:31:55


Room 202

My teaching partner Mandy and I stood in silence, looking one last time around the room in which magic had happened over the preceding 10 weeks. We teach the UX Design immersive for General Assembly in San Francisco. 10 weeks, 5 days/week, 8 hours/day of teaching and learning, of intense, hard work, of struggle, of laughter, of transformation, of bonding that will last forever. Educational experiences don’t get any better than this.

The UX Design immersive is intended mostly for people wanting to make a career transition. Students make a huge commitment by signing up for the course, stopping whatever they were doing prior, and in some cases, traveling long distances to do one thing: to become a UX designer.

General Assembly is one of several new educational institutions that are slowly disrupting the higher education space. Jon Kolko has identified the following qualities shared by many of these institutions’ programs:

  1. they are short;
  2. they focus on skill acquisition;
  3. they produce a portfolio as evidence of mastery;
  4. they are taught by practitioners;
  5. they promote employment and career repositioning, rather than emphasizing the benefits of learning as an end in itself;
  6. they typically focus on "Richard Florida" type jobs and careers: the creative disciplines of software engineering, product design, advertising, marketing, and so on.

As described by Jon:

“Students who graduate from these programs have a body of work that they can point to and say ‘I made those things.’ This makes it very easy to understand and judge the quality of the student, particularly from the standpoint of a recruiter or hiring manager.”

and:

“These educators have a deep and intimate understanding of both the material that is being taught and the relevancy of that material to a job.”

Given the increasingly heard argument that academic programs are not producing the kinds of designers needed most by industry (see, for example, "On Design Education")… And given that 90% of UX Design immersive students secure jobs within 90 days of the end of their cohort... (I might be moderating a panel contrasting different institutional instructional models at the Interaction 15 Education Summit in February.)

What is it like teaching the UX Design immersive at General Assembly? To get a sense of this, read the Interactions magazine blog post written earlier this year by our fellow UX Design immersive instructor in Los Angeles, Ashley Karr, entitled, “Why Teaching Tech Matters.” Also, Mandy and I might be conducting a mock classroom at the Interaction 15 Education Summit in February to give attendees a mini-experience of the immersive program.

Tears filled the room on the final day of the course. We all had put everything we had into the preceding 10 weeks, and we could not help but be emotional. We hope the magic will happen again when we teach the course again in December. But it all will happen in a different space (a new campus opens tomorrow), and Mandy and I will be paired with other instructors instead of each other. 

I will miss the magic of Room 202 in the crazy, crowded 580 Howard Lofts with only one bathroom and no air conditioning, situated next to a noisy construction site; I will miss the magic of working closely with the amazing Mandy Messer; and I will miss the magic of getting to know 21 special, fabulous people who are now new UX designers.

But we will do it again, and we will try to do it even better.



Richard Anderson

Richard Anderson is a consultant and instructor who can be followed on Twitter at @Riander.


@Jonathan Grudin (2014 11 19)

Congratulations and very nice, Richard. Almost makes me wish I was still a practitioner so I could join in. I hope you find a way to franchise this.

@Ashley Karr (2014 11 19)

Hello! Thanks for writing this - and thanks for the shout out. This is a very good article to reference when folks asks questions regarding types of programs they should look into re: career changes and continuing education. Good luck with your next round!


The top four things every user wants to know


Authors: Ashley Karr
Posted: Thu, November 13, 2014 - 10:43:16

I have been conducting research with human participants for roughly 14 years. Some of my studies have been formal (i.e., requiring the approval of an ethics or institutional review board), some have been informal (i.e., guerilla usability testing for small start-ups), and some have landed in between those two extremes. Additionally, the populations and domains that I studied ranged widely; however, I have found certain similarities across all my participants. I am sharing with you now the results of my meta-analysis of all the usability testing, user research, field studies, and ethnographies that I have completed so far in my career. I call my meta-analysis “The Top Four Things Every User Wants to Know.” I use it on a daily basis—it really does come in handy that often. I share it with you now:

  1. Users want to see a sample of your design. For example, they want to see pictures, a demo, or a video of how your design functions. Telling your users about your design through speech or text is not as effective as showing them, unless, of course, you are working with a visually impaired population. If users aren’t able to see a representative example of your design functioning, they will lose interest very quickly.

  2. Users want to know what your design will cost in terms of money. Hiding fees or prices associated with your design breaks trust. If users can’t find fees or prices quickly and easily, they lose interest in your design and either look to competitors or simply move on. When conducting research on pricing and fee structures, 100% of users in my studies have told me that they are willing to pay more money (within reason) for a product or service if that means they are able to find out how much money they will be spending upfront. Another design decision that can break trust is using the term “free.” 100% of users that I have studied regarding this phenomenon state something like, “Nothing in this world is free. What are they hiding from me?” Interestingly, I have done studies on systems that are purely informational and very distant from the world of commerce. Users were still concerned that they would be charged in some way for their use of the system.

  3. Users want to know how long it will take to use and/or complete a particular task with your design. Users do not believe you when you say or write that your design saves time, is easy to learn, and quick to use. If you tell your users these things, you waste their time and break their trust. If you show them how quick and easy your design is, and your design actually performs this well, then you’re getting somewhere.  

  4. Users want to know how your design will help or harm them if they decide to start using it. As one of my students said, “Users want to know how your design will make their life more or less awesome before they decide to truly commit and interact or purchase.” This transcends time and money, and moves into deeper realms, such as adding more meaning to people’s lives or more positive engagement with their surroundings.

To recap, users want to know what your design is, what it costs, how long it will take to use, and how it will make their life more awesome (or not)—and in that order. These may seem like overly simplistic design elements, but very basic things get overlooked in the design process all the time. If those basic elements do not make it into the design before launch or production, disaster and failure may strike. Also, this list helps me when I get stuck due to overthinking and possibly overcomplicating a design. It reminds me to keep my design straightforward and highly functional for my users.

Straightforward. Highly functional. These may be the two most important characteristics of any design. Add those to the four things that every user wants to know, and we have six quality ingredients for user-centered design.




Ashley Karr

Ashley is a UX instructor with GA and runs a UX consulting firm, ashleykarr.com.


Love it or list it


Authors: Monica Granfield
Posted: Wed, November 12, 2014 - 2:57:45

Have you ever tried to coordinate a project, a group of people, their activities, and their progress? Or organize your thoughts or what needs to get done? What has been your most efficient tool?

For me it's most often some form of a simple list. 

All kinds of systems have been created over time to help individuals and organizations get organized and improve efficiency: pads of paper with checkboxes printed on them, magnetic boards with pre-canned task components for kids and adults alike. Paper planners and systems like Franklin Planners have stood the test of time. The digital age has brought a slew of complex products for targeted industries or personal use. All come with the promise of organizing, tracking, planning, and even projecting work; resourcing; generating analytics; optimizing for efficiency; and leaving you plenty of free time to have coffee with friends. How many deliver on this promise?

From professional to personal use, there's a productivity product out there for you. I have had the pleasure of evaluating some of these tools, using some of these tools, and yes, even designing a tool to help boost communication, organization, and productivity for users. Organizing people and activities is not easy, and the dynamics can quickly become complex.

I have witnessed the initial flood of enthusiasm over the promise of accomplishment these tools bring, and then watched that enthusiasm fizzle and the use of the product simply fade over time. Too often these tools to boost productivity become a full-time job for at least one person in an organization. I have also witnessed users struggle with these products. Sometimes users are successful with portions of the product; other times products are so complex and hard to grasp that hours of training and use still fail to make users successful. The vast majority of users become confused and use only the top few features that will meet the expectations of management. Productivity tools often enter an organization quietly, chosen by management with little feedback from the people who will use them.

Many of these products try to emulate a conceptual model, rather than how people work or what they need from a product. If you are not familiar with the conceptual model, learning the product will prove to be a challenge. One product I used tried to emulate the conceptual model of the Agile process. However, there are many interpretations of what the Agile process is and how to implement it. Also, Agile typically includes software development, but not other related disciplines such as documentation, UX, marketing, or hardware. Roles in disciplines not included in the product's process are retrofitted into the conceptual model. The users in these roles don't understand the model, get frustrated learning or managing the product, and then begin the decline into becoming non-users.

Rather than struggle with a tool that doesn't meet the needs of the group, organization, or users, it becomes easier to just resort back to a good old-fashioned organizational tool such as a list. Most of these lists end up being created and managed in a spreadsheet or document program, which is more familiar to users and therefore makes it easier to manage your people and activities: tracking changes, sorting, filtering, and simply checking something off when completed. No complex processes where you assign tasks and stories, or forward users to a new phase of a project. No logging in, no trying to find who owns what and who did what. No time wasted trying to figure out how this glorified list with its complex system of built-in features works. All you need to do is glance at the list, with its glorious titles, headers, columns, and rows, all there right in front of you. Prioritize the list, reorder, highlight items, cross something off and ta-dah... you are done. Now you can go and have coffee with your friends.

If an application that is meant to organize and increase productivity becomes too complex and hard to use, the abandonment rate will rise. Organizations will abandon one product for another and, if all the while their users don't love the product, the users most likely will slowly and quietly resort to listing it.

The simplicity of a list is all that is needed to keep me organized and boost my productivity. Lists play a key role in tracking what needs to be done, keeping inventory of issues, and tracking and assigning who needs to do what, when.

Sometimes the simple and straightforward solution just works. If you don't love your productivity tool, do you list it? 




Monica Granfield

Monica Granfield is a user experience strategist at Go Design LLC.


Batman vs. Superman (well, actually, just PDC vs. DIS)


Authors: Deborah Tatar
Posted: Tue, October 28, 2014 - 9:48:54

The Participatory Design Conference (PDC), which just had its 13th meeting in Windhoek, Namibia, is a close cousin to DIS, the Design of Interactive Systems conference. Both are small, exquisite conferences that lead with design and emphasize interaction over bare functionality; however, as with all cousins (except on the ancient Patty Duke show, in which Patty Duke played herself and her “British” cousin), there are some important differences. Unlike DIS, PDC is explicitly concerned with the distribution of power in projects; furthermore, the direction of distribution is valenced: more power to those below is good.

In a way, it is odd for designers to think about distributing power downward—how much power do designers actually have?—and yet the PD conference is an extremely satisfactory place to be. Even if designers do not, in fact, have much power, we are concerned with it. The very act of designing is an assertion of power. Why design, if not to change behavior? And what is changing behavior if not the exertion of power? And if we are engaged in a power-changing enterprise, how much finer it is to, within the limitations of our means, take steps that move power in the right direction rather than ourselves accepting powerlessness? To contemplate what is right is energizing! 

DIS has been held in South Africa, but PDC one-upped it by being held in Namibia and by attracting a very significant cadre of black African attendees. Consequently, a key question after the initial round of papers featured the fears of the formerly colonized. The question was whether the practices of participatory design are not in some sense a form of softening up the participating populace for later, more substantial exploitation. Of course, one hopes not, but how lovely to be at a conference that does not sweep that important issue under the rug.

As this question points out, we do not always know what is right. Ironically, acknowledging this allows the conference to feel celebratory. Perhaps it was the incredible percussive music, the art show, or Lucy Suchman’s “artful integration” awards, received this year by Ineke Buskens, representing GRACE (Gender Research in Africa into ICTs for Empowerment), and by Brent Williams, representing RLabs, which supports education and innovation in townships in South Africa and impoverished communities around the world. Both organizations are local, bottom-up, and community-driven appropriations of technology. Or perhaps it was the fact that so many papers focused on the discovery of the particulars that people care about in their lives and the creation of technologies that influence their lives.

I went to the conference because I have been hoping to jump-start more thought about power in the DIS, CHI, and especially the CSCW communities. I know that many people in these communities have become increasingly concerned about power in the last few years. I hear whispering in the corridors, the same way Steve Harrison, Phoebe Sengers, and I heard whispering before we wrote our “Three Paradigms” paper (the one that tried to clarify basic schools of thought within CHI and how they go together as bundles of meaning). Now the whispering is different. It is about how the study of human-computer interaction needs to be more than the happy face on fundamentally exploitative systems. 

In any case, I knew that PDC would be ahead of me and that the people who would have the most sophistication would very likely not be American. As wonderful as Thomas Jefferson is, he—and therefore we in America—are too much about social contract theory and what Amartya Sen calls “transcendental institutionalism” to adjust easily to certain kinds of problems of unfairness, especially unfairness that requires perception of manifest injustice. As Amartya Sen points out, Americans draw very heavily on the idea that if we have perfect institutions, then the actual justice or fairness in particular decisions or matters of policy does not matter. Transcendental institutionalism is a belief that makes it very difficult to effectively protest fundamentally destructive decisions such as treating corporations as people. In HCI and UX, we tend to think that if users seem happy in the moment, or we improve one aspect of user experience, the larger issues of the society that is created by our design decisions are unimportant. Transcendental institutionalism, again. 

These issues are explored more at PDC. I attended a one-day workshop on Politics and Power in Decision Making in Participatory Design, led by Tone Bratteteig and Ina Wagner from the University of Oslo. Bratteteig and Wagner would have it, in their new book (Disentangling Participation: Power and Decision-making in Participatory Design, ISBN 978-3-319-06162-7) as well as at the conference, that the key issue in the just distribution of power is choice in decision making. The ideal is to involve the user in all phases of decision making: in creating choices, in selecting between choices, in implementation choices (when possible), and in the many choices that surround the evaluation of results. Furthermore, a participatory project should, they feel, have a participatory result: it should increase the user’s power to. “Power to” arises from a feminist notion of power in which dominion is not paramount. Instead, it is closer to Amartya Sen’s notion of capability. Freedom, from Sen’s perspective, consists of the palpable possibilities that people have in their lives.

Pelle Ehn, who gave the keynote and is making a farewell round of extra-US conferences before retiring (he will be the keynote speaker at OzCHI shortly), put this view in a larger context by reminding us of Bruno Latour’s notions of “parliaments” and “laboratories.” In this way of thinking, the social qualities of facts are paramount. Though Latour appears to have pulled back from this view later in his life, it has the great advantage of helping us perceive issues of power in design. Power is hard to see. The waves of utopian projects (one even called Utopia) that Ehn has shepherded during his long career each reveal more about the shifting and growing power of technology and the institutions that profit from it. The implicit question raised is “what now?” 

A portion of this concern might seem similar to what we say all the time in human-computer interaction. After all, what is the desirable user experience if not the experience of power to? However, in fact, the normal conduct of HCI and PD only resemble one another at a very high level of abstraction. They differ in the particulars. My t-shirt from the 2006 PDC conference reads, “Question Technology,” but I would say that the persistent theme is not precisely questioning technology as much as questioning how we are constructing technology. PDC does not ask whether technology serves, but who, precisely, now and later. PDC asks “what is right?” 

PDC is the still-living child of CPSR (Computer Professionals for Social Responsibility), an organization that, after a long life, sadly went defunct just this year. I have not been involved with it in recent years, and I am not sure what to write on the death certificate, but it seems to me that the rise of untrammeled global corporate capitalism fueled by information technology has created new problems, that something like CPSR is much needed, and that PDC stands for many questions that need better answers than we currently have. 

I did not get to hear Shaowen Bardzell’s closing plenary, because I had to journey over 30 hours to get home and be in shape to lecture immediately thereafter, but rumor has it that she energized the community. I am optimistic that her insider-outsider status as a naturalized American, raised in Taipei, gives her the perspective to address issues at the edge between, as it were, power over and power to.

Returning to the question of Batman vs. Superman, DIS is also a very delightful place to be. But the freedom of spirit that has characterized it since Jack Carroll resuscitated it in 2006 exists in uneasy implicit tension with the concerns and measurements of its corporate patrons.




Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.


@Jonathan Grudin (2014 10 29)

Nice essay, thanks Deborah.

@Ineke Buskens (2014 10 31)

Delightful piece Deborah, thank you! Much love from South Africa!


Uses of ink


Authors: Jonathan Grudin
Posted: Fri, October 17, 2014 - 9:06:47

Many species communicate, but we alone write. Drawing, which remains just below the surface of text, is also uniquely ours. Writing and sketching inform and reveal, record, and sometimes conceal. We write to prescribe and proscribe, to inspire and conspire.

My childhood colorblindness—an inability to see shades of gray—was partly overcome when I read Gabriel Garcia Marquez. But for nonfiction I continued to prize transparency. Crystal clarity for all readers is unattainable, but some writers come close. For me, Arthur Koestler’s breadth of knowledge and depth of insight were rivaled by the breathtaking clarity of his writing, no less impressive for often going unnoticed.

Lying on the grass under a pale Portland sun the summer after my sophomore undergraduate year, I took a break from Koestler—The Sleepwalkers, The Act of Creation, The Ghost in the Machine, and his four early autobiographical volumes—to read Being and Nothingness. Sartre had allegedly changed the course of Western philosophy. After a few days I had not progressed far. I had lists of statements with which I disagreed, felt were contestable, or found incomprehensible. “You are reading it the wrong way,” I was informed. Read it more briskly, let it flow over you.

This struck me as ink used to conceal: Jean-Paul Sartre as squid. A cloud of ink seemed to obscure thought, which might be profound or might be muddled.

As years passed I saw clear, deep writers ignored and opaque writers celebrated. In Bertrand Russell’s autobiography, Frege declared Wittgenstein’s magnum opus incomprehensible and Russell, Wittgenstein’s friend, felt that whatever it meant, it was almost certainly wrong. Decades later, my college offered a course on Wittgenstein (and none on Frege or Russell). I had given up on Sartre but took the Wittgenstein course. I liked the bits about lions and chairs, but unlike my classmates who felt they understood Wittgenstein, I sympathized with Frege.

It finally dawned on me that in nonfiction, as in complex fiction, through the artful construction of an inkblot, a verbal Rorschach, a writer invites readers to project conscious or unconscious thoughts onto the text and thereby discover or elaborate their own thoughts. The inkblot creator need not even have a preferred meaning for the image.

It requires skill to create a good verbal projection surface. A great one has no expiration date. “Sixty years after its first publication, [Being and Nothingness] remains as potent as ever,” says Amazon. (It’s now over seventy years.)

Let’s blame it on the Visigoths

George Orwell was a clear writer. His novels 1984 and Animal Farm are unambiguous enough to be assigned to schoolchildren. In a long-defunct magazine he published this beautiful short essay, a book review. (Thanks to Clayton Lewis for bringing it to my attention.)

The Lure of Profundity
George Orwell, New English Weekly, 30 December 1937

There is one way of avoiding thoughts, and that is to think too deeply. Take any reasonably true generalization—that women have no beards, for instance—twist it about, stress the exceptions, raise side-issues, and you can presently disprove it, or at any rate shake it, just as, by pulling a table-cloth into its separate threads, you can plausibly deny that it is a table-cloth. There are many writers who constantly do this, in one way or another. Keyserling is an obvious example. [Hermann Graf Keyserling, German philosopher, 1880–1946.] Who has not read a few pages by Keyserling? And who has read a whole book by Keyserling? He is constantly saying illuminating things—producing whole paragraphs which, taken separately, make you exclaim that this is a very remarkable mind—and yet he gets you no forrader [further ahead]. His mind is moving in too many directions, starting too many hares at once. It is rather the same with Señor Ortega y Gasset, whose book of essays, Invertebrate Spain, has just been translated and reprinted.

Take, for instance, this passage which I select almost at random:

“Each race carries within its own primitive soul an idea of landscape which it tries to realize within its own borders. Castile is terribly arid because the Castilian is arid. Our race has accepted the dryness about it because it was akin to the inner wastes of its own soul.”

It is an interesting idea, and there is something similar on every page. Moreover, one is conscious all through the book of a sort of detachment, an intellectual decency, which is much rarer nowadays than mere cleverness. And yet, after all, what is it about? It is a series of essays, mostly written about 1920, on various aspects of the Spanish character. The blurb on the dust-jacket claims that it will make clear to us “what lies behind the Spanish civil war.” It does not make it any clearer to me. Indeed, I cannot find any general conclusion in the book whatever.

What is Señor Ortega y Gasset's explanation of his country’s troubles? The Spanish soul, tradition, Roman history, the blood of the degenerate Visigoths, the influence of geography on man and (as above) of man on geography, the lack of intellectually eminent Spaniards—and so forth. I am always a little suspicious of writers who explain everything in terms of blood, religion, the solar plexus, national souls and what not, because it is obvious that they are avoiding something. The thing that they are avoiding is the dreary Marxian ‘economic’ interpretation of history. Marx is a difficult author to read, but a crude version of his doctrine is believed in by millions and is in the consciousness of all of us. Socialists of every school can churn it out like a barrel-organ. It is so simple! If you hold such-and-such opinions it is because you have such-and-such an amount of money in your pocket. It is also blatantly untrue in detail, and many writers of distinction have wasted time in attacking it. Señor Ortega y Gasset has a page or two on Marx and makes at least one criticism that starts an interesting train of thought.

But if the ‘economic’ theory of history is merely untrue, as the flat-earth theory is untrue, why do they bother to attack it? Because it is not altogether untrue, in fact, is quite true enough to make every thinking person uncomfortable. Hence the temptation to set up rival theories which often involve ignoring obvious facts. The central trouble in Spain is, and must have been for decades past, plain enough: the frightful contrast of wealth and poverty. The blurb on the dust-jacket of Invertebrate Spain declares that the Spanish war is “not a class struggle,” when it is perfectly obvious that it is very largely that. With a starving peasantry, absentee landlords owning estates the size of English counties, a rising discontented bourgeoisie and a labour movement that had been driven underground by persecution, you had material for all the civil wars you wanted. But that sounds too much like the records on the Socialist gramophone! Don’t let’s talk about the Andalusian peasants starving on two pesetas a day and the children with sore heads begging round the food-shops. If there is something wrong with Spain, let’s blame it on the Visigoths.

The result—I should really say the method—of such an evasion is excess of intellectuality. The over-subtle mind raises too many side-issues. Thought becomes fluid, runs in all directions, forms memorable lakes and puddles, but gets nowhere. I can recommend this book to anybody, just as a book to read. It is undoubtedly the product of a distinguished mind. But it is no use hoping that it will explain the Spanish civil war. You would get a better explanation from the dullest doctrinaire Socialist, Communist, Anarchist, Fascist or Catholic.

Clarity, ink clouds, and ink blots in HCI

In our field, we write mostly to record, inform, and reveal. At times we write to conceal doubt or exaggerate promise. The latter are often acts of self-deception, although, spurred by alcohol and perhaps mild remorse, some research managers at places I’ve worked have confessed to routinely deceiving their highly placed managers and funders. They justified it by sincerely imagining that the resulting research investments would eventually pay off. (None ever did.)

Practitioners who attend research conferences seek clarity and eschew ambiguity. They may not get what they want—unambiguous finality is rare in research. But practitioner tolerance for inkblots is low. Over time, as our conferences convinced practitioners to emigrate, openings were created for immigrants from inkblot dominions, such as Critical Theory.

For example, echoing Orwell’s example of beardless women, the rejection letter for a submission on the practical topic of creating gender-neutral products stated, “I struggle to know what a woman is, except by reference to the complex of ideological constructions forced on each gender by a society mired in discrimination.” Impressive, but arguably an excess of intellectuality. It does not demean those struggling to know what a woman is to say that when designing products to appeal to women, “a better explanation” could be to define women as those who circle F without hesitation when presented with an M/F choice. A design that appeals both to F selectors and M selectors might or might not also appeal to those who would prefer a third option or to circle nothing, but in the meantime let’s get on with it.




Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.


Designing the cognitive future, part V: Reasoning and problem solving


Authors: Juan Hourcade
Posted: Tue, October 14, 2014 - 2:38:54

I have been writing about how computers are affecting and are likely to affect cognitive processes. In previous posts I have touched on perception, memory, attention, and learning. In this post, I discuss reasoning and problem solving.

Computers are quite adept at deductive reasoning. If all the relevant facts are known (at least those we would use to make a decision), computers can easily use logic to make deductions without mistakes. Because of this, computers are likely to become more and more involved in helping us make decisions and guiding our lives through deductive reasoning. We can see this already happening, for example, with services that tell us when to leave home for a flight based on our current location and traffic on the way to the airport.
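
As a small illustration (a hypothetical sketch, not any particular service's actual logic), the flight example reduces to a few lines of mechanical deduction once the facts are supplied; the function name and the 90-minute airport buffer are assumptions made for the sketch:

  from datetime import datetime, timedelta

  def suggested_departure(flight_time: datetime,
                          travel_minutes: int,
                          airport_buffer_minutes: int = 90) -> datetime:
      """Deduce when to leave home: flight time minus travel time in
      current traffic minus the buffer needed at the airport. With all
      the facts known, the conclusion follows mechanically."""
      return flight_time - timedelta(minutes=travel_minutes + airport_buffer_minutes)

  # A 6:00 p.m. flight with 45 minutes of driving in current traffic
  print(suggested_departure(datetime(2014, 10, 14, 18, 0), travel_minutes=45))
  # -> 2014-10-14 15:45:00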

These trends could go further, with many other activities that involve often-problematic human decision-making moving to the realm of computers. This includes driving cars, selecting what to eat, scheduling our days, and so forth. In all these cases computers, when compared with people, would be able to process larger amounts of information in real time and provide optimal solutions based on our goals.

So what will be left for us to do? One important reasoning skill will involve an understanding of the rules used to determine optimal outcomes in these systems, and how these relate to personal goals. People who are better able to do this, or who go further and determine their own sets of rules, are likely to derive greater benefits from these systems. One of the bigger challenges in this space comes from systems that could be thrown off balance by selfish users (e.g., traffic routing). People who are able to game these systems could gain unfair advantages. There are design choices to be made, including whether to make rules and goals transparent or to hide them because of their complexity.

What is clear is that the ability to make the most out of the large amounts of available data relevant to our decision-making will become a critical reasoning skill. Negative consequences could occur if system recommendations are not transparent and rely on user trust, which could facilitate large-scale manipulation of decision-making.

The other role left for people is reasoning when information is incomplete. In these situations, we usually make decisions based on heuristics developed from past outcomes. In other words, drawing on our experiences, we are likely to notice patterns and develop basic “rules of thumb.” Our previous experiences therefore go a long way in determining whether we develop useful and accurate heuristics. The closer those experiences are to a representative sample of all the events we could have experienced, the better our heuristics will be. On the other hand, being exposed to a biased sample of experiences is likely to lead to poorer decision-making and problem solving.

Computers could help or hurt in providing us with experiences from which we can derive useful rules of thumb. One area of concern is that information is increasingly delivered to us based on our personal likes and dislikes. If we are less likely to come across any information that challenges our biases, these are likely to become cemented, even if they are incorrect. Indeed, nowadays it is easier than ever for people with extreme views to find plenty of support and confirmation for their views online, something that would have been difficult if they were only interacting with family, friends, and coworkers. Not only that, but even people with moderate but somewhat biased views could be led into more extreme views by not seeing any information challenging those small biases, and instead seeing only information that confirms them.

There is a better way, and it involves delivering information that may sometimes make people uncomfortable by challenging their biases. This may not be the shortest path toward creating profitable information or media applications, but people using such services could reap significant long-term benefits from having access to a wider variety of information and better understanding other people’s biases.

How would you design the cognitive future for reasoning and problem solving? Do you think we should let people look “under the hood” of systems that help us make decisions? Would you prefer to experience the comfort of only seeing information that confirms your biases, or would you rather access a wider range of information, even if it sometimes makes you feel uncomfortable?




Juan Hourcade

Juan Pablo Hourcade is an associate professor in the Department of Computer Science at the University of Iowa, focusing on human-computer interaction.


Conceptual precision in interaction design research


Authors: Mikael Wiberg
Posted: Wed, October 01, 2014 - 10:13:28

Interaction design research is to a large extent design driven. We do research through design. A design can be seen as a particular instantiation of a design idea. Accordingly, interaction design research is also about the development of ideas. However, there is no one-to-one relation between design ideas and design instantiations. A design idea can be expressed through a wide variety of designs. That is partly why we work with prototypes in our field and why we think iterative design is a good approach. We explore an idea through a number of variations in terms of how the design idea is manifested in a particular design.

Of course, precision is always a key ingredient in research. We appreciate precision when it comes to definitions, measurements, and descriptions of research methods applied, data sets, data-collection techniques, and formats. I also think that, as a community, we all agree that research contributions and conclusions should be stated with precision. Precision enables us to position a particular research contribution in relation to an existing body of research.

However, do we work with similar precision when it comes to articulating our design ideas? And do we work with such precision when we articulate how our design ideas are manifested in the designs we produce as important outcomes from our research projects? I certainly hope so! At least I have noticed a growing concern for this matter in our field over the past few years. In a recent paper Erik Stolterman and I discuss the relation between conceptual development and design [1], and in a related paper Kristina Höök and Jonas Löwgren present the notion of “strong concepts” in relation to interaction design research [2].

So, if we can agree that design and conceptual development go hand in hand in interaction design research, and if we can agree that precision is key for this practice as well, then maybe we should also ask how we can advance our field through design. That is, can we make research contributions, i.e., progress, through design? In a forthcoming NordiCHI paper we elaborate on this issue (see [3]). In short, we suggest that we should focus on the formulation of classes of interactive systems, and that we should develop ways of analyzing designs both in relation to the elements that constitute a particular design and in relation to how a particular design composition can be said to belong to, extend, challenge, or combine such classes. We discuss this in relation to the importance of a history of designs and a history of design ideas for interaction design research, under the label of “generic design thinking.” Again, if we as a research community take on such a classification project, then precision will be key, for reviewing the past and for moving forward!

Endnotes

1. Stolterman, E. and Wiberg, M. Concept-driven interaction design research. Human–Computer Interaction 25, 2 (2010), 95–118.

2. Höök, K. and Löwgren, J. Strong concepts: Intermediate-level knowledge in interaction design research. ACM Transactions on Computer-Human Interaction 19, 3 (2012), Article 23.

3. Wiberg, M. and Stolterman, E. What makes a prototype novel? A knowledge contribution concern for interaction design research. Proc. of NordiCHI 2014. DOI: http://dx.doi.org/10.1145/2639189.2639487




Mikael Wiberg

Mikael Wiberg is Professor of Informatics in the Department of Informatics at Umeå University, Sweden.


Lasting impact


Authors: Jonathan Grudin
Posted: Wed, September 24, 2014 - 9:35:29

An enduring contribution can take different forms. It can be a brick, soon covered by others yet a lasting part of a field’s foundation. Alternatively, it can be a feature that remains a visible inspiration.

Eminent scientists and engineers have offered insights into making an impact.

Inspiration

“If you want to predict the future, invent it,” said Alan Kay. In the 1960s, Kay conceived of the Dynabook. His work was widely read and had a lasting impact: 50 years later, tablets are realizing his vision. Vannevar Bush’s 1945 essay “As We May Think” has been aptly termed science fiction—an outline of an impossible optomechanical system called the Memex—but it inspired computer scientists who effectively realized his vision half a century later in a different medium: today’s dynamic Web.

Kay and his colleagues took a significant step toward the Dynabook by developing the Xerox Alto. Bush attempted to build a prototype Memex. However, generations of semiconductor engineers and computer scientists were needed to reach their goals—introducing a second factor.

Perspiration

A century ago, the celebrated inventor Thomas Edison captured the balance, proclaiming that genius was “1% inspiration, 99% perspiration.” He recognized the effort involved in inventing the future, but by attributing output to input this way he overlooked the fact that not all clever, industrious people thrive. Circumstances of birth affect a person’s odds of contributing; even among those of us raised in favorable settings, many creative, hard-working people have a tough time. More is needed.

Divination

Decades before Edison, Louis Pasteur observed that “chance favors the prepared mind.” This elegantly acknowledges the significance of perspiration and inspiration while recognizing the role of luck.

Lasting impact awards

On a less exalted level, several computer science conferences have initiated awards for previously published papers that remain influential. My 1988 paper on challenges in developing technology to support groups received the first CSCW Lasting Impact Award at CSCW 2014 in Baltimore. The remainder of this essay examines how that paper came to be written and why it succeeded. Avoiding false modesty, I estimate that the impact was roughly 1% inspiration, 25% perspiration, and 75% a fortunate roll of the dice. (If you prefer 100% totals, reduce any of the estimates by 1%.)

Seeing how one career developed might help a young person, although 30 or 40 years ago I would have avoided considering the role of luck. Like all students in denial about the influence of factors over which they have no control, I would have been anxious thinking that intelligence and hard work would not guarantee success. But those who move past Kay and Edison to Pasteur can use help defining a path toward a prepared mind.

The origin of my paper

“Why CSCW applications fail: Problems in the design and evaluation of organizational interfaces” was an unusual paper. It didn’t build on prior literature. It included no system-building, no usability study, no formal experiment, and no quantitative data. The qualitative data was not coded. The paper didn’t build on theory. Why was it written? What did it say?

The awkward title reflected the novelty, in 1988, of software designed to support groups. Only at CSCW’88 did I first encounter the term groupware, more accessible than CSCW applications or Tom Malone’s organizational interfaces. Individual productivity tools—spreadsheets and word processors—were commercially successful in 1988. Email was used by some students and computer scientists, but only a relatively small community of researchers and developers worked on group support applications.

I had worked as a computer programmer in the late 1970s, before grad school. In 1983 I left a cognitive psychology postdoc to resume building things. Minicomputer companies were thriving: Digital Equipment Corporation, Data General, Wang Labs (my employer), and others. “Office information systems” were much less expensive, and less powerful, than mainframes. They were designed to support groups and were delivered with spreadsheets and word processing. We envisioned new “killer apps” that would support the millions of small groups out there. We built several such apps and features. One after another, they failed in the marketplace. Why was group support so hard to get right?

Parenthetically, I also worked on enhancements to individual productivity tools. There I encountered another challenge: Existing software development practices were a terrible fit for producing interactive software to be used by non-engineers.

Writing the paper

In 1986 I quit and spent the summer reflecting on our experiences. I wrote a first draft. My colleague Carrie Ehrlich and I also wrote a CHI’87 paper about fitting usability into software development. A cognitive psychologist, Carrie worked in a small research group in the marketing division. She had a perspective I lacked: Her father had been a tech company executive. She explained organizations to me and changed my life. The 1988 paper wouldn’t have been written without her influence. It was chance that I met Carrie, and partly chance that I worked on a string of group support features and applications, but I was open to learning from them.

In the fall, I arrived in Austin, Texas to work for the consortium MCC. The first CSCW conference was being organized there. “What is CSCW?” I asked. “Computer Supported Cooperative Work—it was founded by Irene Greif,” someone said. I attended and knew my work belonged there. The field coalesced around Irene’s book Computer Supported Cooperative Work, a collection of seminal papers that were difficult to find in the pre-digital era. Irene’s lasting impact far exceeds that of any single paper. I may well have had little impact at all without the foundation she was putting into place.

My research at MCC built on the two papers drafted that summer: (i) understanding group support, and (ii) understanding development practices for building interactive software. MCC, like Wang and all the minicomputer companies, is now gone. Wedded to AI platforms that also disappeared, MCC disappointed the consortium owners, but it was a great place for young researchers. I began a productive partnership there with Steve Poltrock, another cognitive psychologist, which continues to this day. We were informally trained in ethnographic methods by Ed Hutchins and in social science by Karen Holtzblatt, then a Digital employee starting to develop Contextual Design. MCC gave me the resources to refine the paper and attend the conference.

The theme: Challenges in design and development

Why weren’t automated meeting scheduling features used? Why weren’t speech and natural language features adopted? Why didn’t distributed expertise location and project management applications thrive? The paper used examples to illustrate three factors contributing to our disappointments:

  1. Political economy of effort. Consider a project management application that requires individual contributors to update their status. The manager is the direct beneficiary. Everyone else must do more work. If individual contributors who see no benefit do not participate, it fails. This pattern appeared repeatedly: An application or feature required more work of people who perceived no benefit. Ironically, most effort often went into the interface for the beneficiary.

    Was this well known? Friends and colleagues knew of nothing published. I found nothing relevant in the Boston Public Library. Later, I concluded that it was a relatively new phenomenon, tied to the declining cost of computing. Mainframe computers were so expensive that use was generally an enterprise mandate. At the group level, mandated use of productivity applications was uncommon.

  2. Managers decide what to build, buy, and even what to research. Managers with good intuition for individual productivity tools often made poor decisions about group support software. For example, audio annotation as a word processor feature appealed to managers who used Dictaphones and couldn’t type. But audio is harder to browse, understand, and reuse. We built it, no one came.

  3. You can bring people into a lab, have them use a new word processor for an hour, and learn something. You can’t bring six people into a lab and ask them to simulate office work for an hour. This may seem obvious, but most HCI people back then, including me, had been trained to do formal controlled lab experiments. We were scientists!

The paper used features and applications on which I had worked to illustrate these points.

Listening to friends

The first draft emphasized the role of managers. I still consider that to be the most pernicious factor, having observed billions of dollars poured into resource black holes over decades. But my friend Don Gentner advised me to emphasize the disparity between those who do the work and those who benefit. Don was right. Academia isn’t strongly hierarchical and doesn’t resonate with management issues. Academics were not my intended audience and few attended CSCW’88, but those who did were influential. Criticizing managers is rarely a winning strategy, anyway.

Limited expectations

Prior to the web and digital libraries, only people who attended a conference had access to proceedings. I wanted to get word out to the small community of groupware developers at Wang Labs, Digital Equipment Corporation, IBM, and elsewhere, so they could avoid beating their heads against the walls we had. Most CSCW 1988 attendees were from industry. I assumed they would tell their colleagues, we would absorb the points, and in a few months everyone would have moved on.

It didn’t matter if I had missed relevant published literature: The community needed the information! Conferences weren’t archival. The point was to avoid more failed applications, not to discover something new under the sun.

The impact

At the CSCW 2014 ceremony for my paper, Tom Finholt and Steve Poltrock analyzed the citation pattern over a quarter century, showing a steady growth and spread of the paper’s influence. I had been wrong—the three points had not been quickly absorbed. They remain applicable. A manager’s desire for a project dashboard can motivate an internal enterprise wiki, but individual contributors might use a wiki for their own purposes, not to update status. Managers still funnel billions of dollars into black holes. Myriad lab studies are published in which groups of three or four students are asked to pretend to be a workgroup.

All my subsequent jobs were due to that work. Danes who heard me present it invited me to spend two productive years in Aarhus. A social informatics group at UC Irvine recruited me, after which a Microsoft team building group support prototypes hired me. Visiting professorships and consulting jobs stemmed from that paper and my consequent reputation as a CSCW researcher.

Why the impact?

The analysis resonated with people’s experience; it seemed obvious once you heard it. But other factors were critical to its strong reception. The paper surfaced at precisely the right moment. In 1984 my colleagues and I were on the bleeding edge, but by 1988 client-server architectures and networking were spreading, and more developers were working on supporting group activities. The number of developers focused on group support had risen from handfuls in 1984 to hundreds in 1988, with thousands on the way.

I was fortunate that the CSCW’88 program chair was Lucy Suchman. Her interest in introducing more qualitative and participatory work undoubtedly helped my paper get in despite its lack of literature citation, system-building, usability study, formal experiment, quantitative data, and theory. In subsequent years, such papers were not accepted.

The most significant break was that the paper was scheduled early in a single-track conference that attracted a large, curious crowd. Several speakers referred back to it and Don Norman called it out in his closing keynote.

Finally, ACM was at that time starting to make proceedings available after conferences, first by mail order and then in its digital library.

Extending the work involved some perspiration. A journal reprinted the paper with minor changes. A new version was solicited for a popular book. Drawing on contributions by Lucy Suchman, Lynne Markus, and others, I expanded the factors from three to eight for a Communications of the ACM article that has been cited more than the original paper.

No false modesty

I was happy with the paper. I had identified a significant problem and worked to understand it. But as noted above, the positive reception followed a series of lucky breaks. Acknowledging this is not being modest. Perhaps I can convince you that I’m immodest: Other papers I’ve written have involved more work and in my opinion deeper insight, but had less impact.

Fifteen years later I revisited the issue that I felt was the most significant—managerial shortsightedness. The resulting paper, which seemed potentially just as useful, was rejected by CSCW and CHI. It found a home, but attracted little attention. Authors are often not the best judges of their own work, but when I consider the differences in the reception of my papers, factors beyond my control seem to weigh heavily.

Ways to contribute

The Moore’s law cornucopia provides diverse paths forward. Your most promising path might be to invent the future by building novel devices. In the early 1980s, we found that this does not always work out. To avoid inventing solutions to problems the rest of us don’t have, prepare with careful observation and analysis, and hope the stars align.

A second option, which has been the central role of CHI and CSCW, is to improve tools and processes that are entering widespread use.

Third, you can tackle stubborn problems that aren’t fully understood. My 1988 paper was in this category. It is difficult to publish such work in traditional venues. With CSCW now a traditional venue, you might seek a new one.

In conclusion, Pasteur’s advice seems best—prepare, and chance may favor you. Preparation involves being open and observant, dividing attention between focal tasks, peripheral vision, and the rear-view mirror. For me, it was most important to develop friendships with people who had similar and complementary skills. It takes a village to produce a lasting impact.




Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.


Lessons from leading design at a startup


Authors: Uday Gajendar
Posted: Tue, September 16, 2014 - 9:01:28

In the past 100+ days I’ve led the successful re-invigoration of a fledgling design capability at a 2-year-old startup into a robust, cohesive, solidified practice with the vitality to carry it further, backed by unified executive support. This includes a revitalized visual design language, visionary concepts to provoke innovation, and strategic re-thinking of UX fundamentals core to the product’s functionality. This being my first startup leadership role, it has certainly proved to be a valuable “high learning, high growth” experience filled with lessons, small and large. I’d like to share a few here that made the most definitive mark on my mind, shaping my design leadership model going forward. Hopefully this will help other UX/HCI/design professionals in similar small-team leadership situations.

Say “no” to preserve your sanity (and the product focus): It’s critical early on to set boundaries demarcating exactly what you’ll work on and what you’ll defer to others. As a former boss liked to say, “You can’t boil the ocean.” Be selective about where you’ll have design impact with immediate or significant results that you can parlay into your next design activity. “Saying No” also builds respect from others, telling colleagues that you have a direction and a purpose to deliver against.

Remove “like” from the discussion: Everyone has opinions about design—that’s simply natural and expected. One way to mitigate the “I like” (or “I don’t like”) is to remove that word and instead focus on “what works” or “doesn’t work” for a particular persona/context/scenario. This forces the discussion to be about the functional nature of design elements, not subjective personal tastes.

Role model good behavior from day one: It’s only natural for a startup starved for design expertise to immediately ask for icons and buttons after the designer found the bathroom and got the computer working. After all, that’s what most interpret design as—the tactical items. As a designer in that context, it’s your opportunity to demonstrate the right behavior for engaging and creating, such as asking user-oriented questions, drafting a design brief, sketching at whiteboards, discussing with engineers, etc. 

Build relationships with Sales, your best friend: Yes, sales! You gotta sell to customers and your sales leader will point you to the right folks to learn about customers, markets, partners, etc. Understanding the sales channel, which is the primary vehicle for delivering a great customer experience, is vital to your success as a design leader. Build that rapport to actively insert yourself into the customer engagement process, which is a gold mine of learnings to convert into design decisions. 

Don’t get hung up on Agile or Lean: These are just process words and mechanisms for delivering code, each with their particular lexicon. They are not perfect. There is no ideal way to fit UX into either one. Yet the overall dynamic is complementary in spirit and should enable smooth, efficient, learning-based outcomes to help iterate the product-market fit goals. The gritty, mundane details of JIRA, stories, estimations, sprint reviews, etc. are simply part of the process. Keep up your design vision and learn how to co-opt those mechanisms to get design ahead of the game, like filing “UX Stories” based upon your vision. 

Think in terms of “goals, risks, actions” when managing up: Maybe as part of a large corporate design team it was acceptable to vent and rant about issues with close peers. However, in a design leadership role on par with CEO and VPs of Engineering or Sales, you need to be focused and deliberate in your communications with them, to amplify their respect and build trust and confidence in you. I learned it’s far more effective to discuss things in terms of your goals, the key risks affecting the accomplishment of those goals, and the actions (or asks) needed to help achieve them. This is a far more professional and valuable way to drive the dialogue. Don’t just rant!

Finally, get comfortable with “good enough”: As Steve Jobs said, “Real artists ship,” meaning that you can’t sweat every perfection-oriented detail at the risk of delaying the release. At some point you must let it go, knowing that there will be subsequent iterations and releases for improving imperfections—the work is ongoing. Fillers, stop-gaps, and temporary fixes are all expected. Do your best and accept (if not wholly embrace) the notion of “satisficing” (per Herb Simon): doing what’s necessary and sufficient.

Design leadership is incredibly hard, perhaps made more difficult because of the glare of the spotlight now that UX is “hot” and finally recognized by execs and boards as a driver of company success. While you may be a “team of one,” the kinds of learnings itemized here will help enable a productive, design-led path forward for the team.




Uday Gajendar

Uday Gajendar is Director of User Experience at CloudPhysics, focused on bringing beauty and soul to Big Data for virtualized datacenters.


Service blueprints: Laying the foundation


Authors: Lauren Chapman Ruiz
Posted: Wed, September 10, 2014 - 12:01:14

This article was co-written by Izac Ross, Lauren Chapman Ruiz, and Shahrzad Samadzadeh.

Recently, we introduced you to the core concepts of service design, a powerful approach that examines complex interactions between people and their service experiences. With this post, we examine one of the primary tools of service design: the service blueprint.

Today’s products and services are delivered through systems of touchpoints that cross channels and blend both digital and human interactions. The service blueprint is a diagram that allows designers to look beyond the product and pixels to examine the systems that bring a customer’s experience to life.

What is a service blueprint?

You may be familiar with customer journey mapping, which is a tool that allows stakeholders to better understand customer interactions with their product or service over time. The service blueprint contains the customer journey as well as all of the interactions that make that journey possible.

Because of this, service blueprints can be used to better deliver a successful customer experience. Think of it this way: You can look at a building, and you can read a description, but to build the building you need more than an image or description. You need the instructions—the blueprint.

Service blueprints expose and involve many of the core concepts we talked about in Service Design 101. To use that vocabulary: Service blueprints clarify the interactions between service users, digital touchpoints, and service employees, including the frontstage activities that impact the customer directly and the backstage activities that the customer does not see.

When should you use a service blueprint?

Service blueprints are useful when:

  • You want to improve your service offering. Knowing how your service gets produced is essential for addressing breakdowns or pain points.
  • You want to design a new service that mixes digital and non-digital touchpoints. Service blueprints shine when examining and implementing the delivery of complex services.
  • You have lost track of how the service gets produced. Services, like products, have manufacturing lines. The longer the service has been around or the larger the organization, the more siloed and opaque the manufacturing process can become.
  • There are many players in the service. Even the most simple-sounding service often involves IT systems, people, props, and partners all working to deliver the customer experience. A blueprint can help coordinate this complexity.
  • You are designing a service or product that is involved in producing other services. Products and services often interact with other services, particularly in B2B contexts. Understanding your customer’s interactions with partners throughout the service can support a more seamless—and better—customer experience.
  • You want to formalize a high-touch service into a lower-touch form. New technologies can create opportunities for delivering higher-touch (and thus more expensive) services to broader audiences in new, more cost-effective forms. For example, think about the expanding world of online education. A blueprint uncovers the essential considerations for implementing a new, lower-touch service.

Keep in mind that there are times when a service blueprint is not the right tool! For example, if the goal is to design an all-digital service, journey mapping or process flows might be more appropriate.

Anatomy of a service blueprint

You understand the concept of service blueprints, and you know when to use them. Now, how do you make one? The starting elements are simple: dividing lines and swimlanes of information.

There are three essential requirements for a formal service blueprint:

  • The line of interaction: This is the point at which customers and the service interact.
  • The line of visibility: Beyond this line, the customer can no longer see into the service.
  • The line of internal interaction: This is where the business itself stops, and partners step in.

In between these lines are five main swimlanes that capture the building blocks of the service:

  • Physical evidence: These are the props and places that are encountered along the customer’s service journey. It’s a common misconception that this lane is reserved only for customer-facing physical evidence; any forms, products, signage, or physical locations used by or seen by the customer or internal employees can and should be represented here.
  • Customer actions: These are the things the customer has to do to access the service. Without the customer’s actions, there is no service at all!
  • Frontstage: All of the activities, people, and physical evidence that the customer can see while going through the service journey.
  • Backstage: This is all of the things required to produce the service that the customer does not see.
  • Support processes: Documented below the line of internal interaction, these are the actions that support the service.

Additional lanes

For clarity, here are some additional swimlanes we recommend:

  • Time: Services are delivered over time, and a step in the blueprint may take 5 seconds or 5 minutes. Adding time along the top provides a better understanding of the service.
  • Quality measures: These are the experience factors that measure your success or value, the critical moments when the service succeeds or fails in the mind of the service user. For example, what’s the wait time?
  • Emotional journey: Depending on the service, it can be essential to understand the service user’s emotional state. For example, fear in an emergency room is an important consideration.
  • Splitting up the frontstage: When multiple touchpoints work together simultaneously to create the service experience, splitting each touchpoint into a separate lane (for example, digital and device interactions vs. service employee interactions) can be very helpful.
  • Splitting up the backstage: The backstage can comprise people, systems, and even equipment. For detailed or low-altitude blueprints, splitting out lanes for employees, apps, data, and infrastructure can clarify the various domains of the service.
  • Phases of the service experience cycle: Services unfold over time, so it can add clarity to call out the phases of the experience cycle. For example: how customers are enticed to use the service, enter or onboard to the service, experience the service, exit, and then potentially re-enter the service and are thus retained as customers.
  • Photos/sketches of major interactions: Adding this lane can help viewers quickly grasp how the service unfolds over time, in a comic-book-like view.

You can build as much complexity as needed into your blueprint, depending on the complexity of the service.
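
If you want to capture a blueprint in a structured, tool-friendly form rather than only as a wall-sized diagram, the anatomy above maps fairly naturally onto a simple data model. The sketch below is one hypothetical way to encode it in TypeScript; the type and field names are illustrative only, not a standard blueprinting schema.

// Illustrative data model for a service blueprint (hypothetical names, not a standard schema).

// The three dividing lines of a formal blueprint.
type DividingLine = "interaction" | "visibility" | "internalInteraction";

// The five core swimlanes, plus a few of the optional lanes described above.
type Swimlane =
  | "physicalEvidence"
  | "customerActions"
  | "frontstage"
  | "backstage"
  | "supportProcesses"
  | "time"
  | "qualityMeasures"
  | "emotionalJourney";

// One step along the single journey a blueprint depicts.
interface BlueprintStep {
  lane: Swimlane;
  description: string;      // e.g., "Customer drops off prescription"
  durationSeconds?: number;  // optional "time" annotation
}

// A blueprint covers exactly one persona and one path (see Variations, below).
interface ServiceBlueprint {
  persona: string;
  scenario: string;
  steps: BlueprintStep[];
}

const pharmacyRefill: ServiceBlueprint = {
  persona: "Returning patient",
  scenario: "Refill a prescription in person",
  steps: [
    { lane: "customerActions", description: "Drop off prescription at the counter" },
    { lane: "frontstage", description: "Pharmacy tech confirms insurance", durationSeconds: 120 },
    { lane: "backstage", description: "Pharmacist verifies dosage and fills the order" },
    { lane: "supportProcesses", description: "Inventory system checks stock" },
  ],
};

Nothing about such a model is binding; it simply makes explicit which lanes and lines you are committing to before you start filling in steps.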

The 5 Ps and how they relate to the blueprint

In Service Design 101, we talked about the 5 Ps: People, Processes, Props, Partners, and Place. Looking at the service blueprint structure, you can see that all of the 5 Ps are captured. Props and Place appear across the top, People and Processes run through the rest of the blueprint, and Partners show up in the support processes. The customer actions lane captures the customer’s own experience and actions.

Service blueprints are only useful if they can be interpreted and implemented. We recommend the following common notation standards for making sure a blueprint is clear, focused, and communicates successfully.

Variations

Like a customer journey map, a service blueprint focuses on one persona’s experience along a single path; think of it as a kind of scenario. A blueprint quickly becomes too complex and too difficult to read if you put multiple journeys on it or try to capture many variations of the service. Because a blueprint shows one use case or path over time, use additional blueprints for variations of the service journey.

Common notations

Arrows

When an arrow crosses a swimlane, value is being exchanged through the touchpoints of the service.

Arrows have a very important meaning beyond the direction of the value exchange. They indicate who or what system is in control at any given moment:

  • A single arrow means that the source of the arrow is in control in the value exchange.
  • A double arrow indicates that an agreement must be reached between the two entities to move the process forward. For example, agreeing on the pick-up time with a pharmacist, or negotiating a price in a non-fixed cost structure.
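
If you are encoding a blueprint digitally, as in the earlier sketch, the arrow semantics can be captured explicitly as well. Again, this is only an illustrative extension of that hypothetical model, not an established notation library.

// Illustrative extension of the earlier sketch: an arrow records a value
// exchange between two steps, plus who is in control of that exchange.

type ArrowKind =
  | "single"   // the source of the arrow is in control of the value exchange
  | "double";  // both parties must agree before the process moves forward

interface ValueExchange {
  from: string;      // description (or id) of the source step
  to: string;        // description (or id) of the target step
  kind: ArrowKind;
  value: string;     // what is exchanged, e.g., "prescription" or "payment"
}

// Example: agreeing on a pick-up time requires both sides, so it is a double arrow.
const pickupTime: ValueExchange = {
  from: "Customer asks when the refill will be ready",
  to: "Pharmacy tech proposes a pick-up time",
  kind: "double",
  value: "agreed pick-up time",
};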

Annotations

As you do the research and field observations necessary to build a blueprint, remember that blueprints can be a powerful way to communicate what’s working and what isn’t working for both service users and employees in the existing process. These notable moments can be captured in a number of ways, but we recommend icons with a legend to keep things legible and clear.

Some notable moments to consider capturing:

  • Pain points which should be fixed or improved
  • Opportunities to measure the quality of the service
  • Opportunities for cost savings or increased profits
  • Moments that are loved by the customer and should not be lost

What’s next?

As you begin to incorporate blueprinting into your process, remember that blueprints have altitude! They can capture incredible amounts of detail or summarize high-level understanding. When evaluating or implementing an existing set of service interactions, a low-altitude map that details the processes across touchpoints and systems is an invaluable management tool. To quickly understand the customer experience in order to propose design changes or develop a shared understanding, higher-altitude diagrams can be most helpful.

Service blueprints break down the individual steps that are happening to produce the customer experience, and are an essential tool for the practice of service design.

In upcoming weeks we will talk about where this tool fits into the design process and about gathering the information needed to create service blueprints and journey maps.




Lauren Chapman Ruiz

Lauren Chapman Ruiz is a Senior Interaction Designer at Cooper in San Francisco, CA, and is an adjunct faculty member at CCA.




@Raffaele (2014 09 21)

What do you think of my storygraph? http://www.rainwiz.com/2012/10/13/introducing-the-storygraph/

@Kathryn Gallagher (2015 01 13)

Thank you Lauren for the great explanation.  I would like to create electronic service blueprints myself and wondered if you had any software apps. you could recommend, pref. free?  (I’m a student)  Thank you!



Big, hairy, and wicked


Authors: David Fore
Posted: Fri, September 05, 2014 - 12:00:56

Interaction designers sure can take things personally. When our behavior is driven by ego, this habit can be annoying. All that huffing and puffing during design crits! But when it springs from empathy for those our designs are meant to serve, then this signal attitude can yield dividends for all.

Nowhere are these sensitivities more critical to success—or more knotty—than when we confront systems whose complexity is alternately big, hairy, and wicked, and where making a positive impact with design can feel like pushing a rope.

I learned this again (apparently you can’t learn it too often) while leading design at the Collaborative Chronic Care Network (C3N). Funded by a landmark grant from the National Institutes of Health (NIH), C3N has spent nearly five years cultivating a novel learning health system, one that joins health-seekers and their families, their healthcare providers, and researchers in common cause around improving care. 

The past few years have been filled with front-page stories about the difficulties of changing healthcare for the better… or even of determining whether what you’ve done is for the better. To hedge such risks from the start, C3N adopted interaction design methods to enhance understanding of and empathy for all system participants: pediatric patients, their caregivers, and researchers alike. Not a panacea, to be sure, but still a step in the right direction.

This initiative has led me to a deep appreciation for the value of three publications—two new ones and a neglected classic—that offer methods, insight, and counter-intuitive wisdom for those whose job it is to design for systems.

Few people know this terrain better than Peter Jones. He wrote Design for Care in the open on the Internet, sharing his steady progress on concepts and chapters with a broad swath of the interaction design community. The result is a densely packed yet accessible book with a strong narrative backbone that demonstrates a wide range of ways multidisciplinary teams and organizations have designed for healthcare environments.

Jones uses theory, story, and principles to demonstrate how designers and their collaborators are trying to change how healthcare is delivered and experienced. Practicing and teaching out of Toronto, Jones has great familiarity with the Canadian healthcare system as well as the US model, which means that readers are permitted to see how similar cultures get very different outcomes. 

He also makes a convincing argument that designers have an opportunity—an obligation even—to listen for and respond to a “call to care.” Otherwise, he observes, our collaborators—the ones with skin in the game—will find it difficult to take our contributions seriously. After all, a cancer patient seeking health or an oncology nurse running a clinical quality-improvement program need to know you care enough to stick around and see things through.

The depths of this book’s research, the clarity of its prose and schematics, and the methods it offers can help designers take advantage of an historic opportunity to improve the healthcare sector. I expect Design for Care to be an evergreen title, used by students, teachers, and practitioners for years to come.

Every designer I’ve worked with, without exception, loves doing field research. It might begin as a desire to get out from behind their pixel machines. But they always come back to the studio ready to blow away old ways of thinking. 

The fruits of qualitative research not only improve designs, they also equip designers with invaluable insight and information when negotiating with product managers, developers, and executives. 

And what do they find out there among the masses? 

People are struggling to realize their goals amidst the whirling blades of systems not designed for their benefit. That’s why it’s so valuable (and challenging) to spend time with folks at factories, agricultural facilities, hospitals, or wherever your designs will be encountered.

But what if you can’t get out into the field? How do you make sure that your nicely designed product is going to be useful? In other words, how do you stop yourself from the perfect execution of the wrong thing? A new whitepaper from the National Alliance for Caregiving has some answers. 

It is my burden to read a lot of healthcare papers. What helps the work of Richard Adler and Rajiv Mehta rise above the rest is how carefully they inflect their recommendations toward the sensibilities and needs of product designers and software developers. The authors are Silicon Valley veterans who place the problem in context, then make sure to draw a bright line from research to discovery to requirements. 

Equipped with this report, designers are far more likely to create designs that serve the true needs of the people who have much to gain—and lose—during difficult life passages. It is filled with compelling qualitative research results, schematics, and tables that shed light on the situations and needs of family caregivers. 

Why is this important? Because these family and community members—people like you and me—constitute the unacknowledged backbone of the healthcare system. They are also, perhaps, the most overburdened, deserving, and underserved population in the field of design today.

So now we’re ready to redesign healthcare for good! 

But wait a minute…

That’s not just difficult, it’s probably impossible. After all, we humans are exceptionally good at over-reaching, but less good at acknowledging the limits of what we can foresee. We want to change everything with a single swift stroke, but typically that impulse leaves behind little else but blood on the floor. 

John Gall, a physician and medical professor, observed that systems design is, by and large, a fool’s errand. But even fools need errands, which is why he wrote the now-legendary SystemANTICS. Gall’s perspective, stories, and axioms make this book a must-read, while his breezy writing style makes the book feel like beach reading. 

But make no mistake: Gall has lived and worked in the trenches, and he knows of what he speaks. His work has had such a profound influence on systems thinking, in fact, that we now have Gall’s Law:  A complex system that works is invariably found to have evolved from a simple system that worked.

Others are similarly concise and useful, such as “New systems mean new problems” and “A system design invariably grows to contain the known universe.”

My favorite is this: People will do what they damn well please.

That last one comes from biology, of course, but it’s true for human systems as well. 

Gall implores you to acknowledge that people will use your systems and products in the strangest ways… and he challenges you to do everything you can to anticipate some of those uses, and so build in resilience. 

As design radically alters the rest of the economy with compelling products and services, healthcare remains a holdout. The current system in whose grip we find ourselves appears designed to preserve privileges and perverse incentives that propagate waste at the expense of better outcomes for all. By protecting the prerogatives of incumbent systems and business models, it too often stifles innovation. But if we put our minds and hearts to it, and we’re aware of the pitfalls, we are more likely to push the rock up the hill at least another few inches.




David Fore

David Fore cut his teeth at Cooper, where he led the interaction design practice for many years. Then he went on to run Lybba, a healthcare nonprofit. Now he leads Catabolic, a product strategy and design consultancy. His aim with this blog is to share tools and ideas designers can use to make a difference in the world.




@Jonathan T Grudin (2014 09 08)

Very nice essay, David. Thanks.

@Lauren Ruiz (2014 09 16)

So true! I often find we designers want to talk all about what needs to change, and that design thinking can tackle these problems, but the how is ever-elusive. How do you affect big systems? These books are good guides in starting to answer that question.


Inside the empathy trap


Authors: Lauren Chapman Ruiz
Posted: Mon, August 11, 2014 - 11:43:17

It’s not uncommon to find yourself closely identifying with the users you are designing for, especially if you work in consumer products. You may even find yourself exposed to the exact experiences you’re tasked with designing, as I recently discovered when I went from researching hematologist-oncologists (HemOncs) and their clinics to receiving care from a HemOnc physician in his clinic. (Thankfully, all is now well with my health.)

This led to some revealing insights. Suddenly I was approaching my experience not just as a personal life event, but as both the designing observer, taking note of every detail, and the subject, or user, receiving the care. Instead of passively observing, I focused on engaging in a walk-a-mile exercise, literally walking in my own shoes, as my own user.

In the past, I’ve written about the importance of empathy in design, but this was an extreme case. I was able to identify my personal persona, watch to confirm the validity of workflows, and direct multitudes of questions to the understanding staff members. This kind of experience can be extremely positive, but it reminded me of the dangers of bias and of designing solely for one person.

For instance, most of my caregivers enjoyed chatting, and one even mentioned how fun it was to have a patient who inquired about everything. That was my reminder that most patients are not like me: They haven’t studied this exact space and are therefore less comfortable asking questions. I had to remember that my situation was unique.

When we find ourselves in these situations, we need to remember that what happens to us may enhance our knowledge, but it cannot become the only conceivable experience in our minds. Too often we can walk dangerously close to designing for ourselves or for “the identifiable victim.” This can cause us to lose focus on improving outcomes for “the many” by single-mindedly pursuing an individual solution to a particularly negative outcome.

A New Yorker article called “Baby in the Well” builds a case against empathy, pointing out that it can cause us to misplace our efforts and miss the needs of “the many.” It argues that the key to engaging empathy is the “identifiable victim effect”: the tendency for people to offer greater aid when a specific, identifiable person, or “victim,” is observed under hardship, as compared with a large and vaguely defined group with the same need. The article states:

As the economist Thomas Schelling, writing forty-five years ago, mordantly observed, “Let a six-year-old girl with brown hair need thousands of dollars for an operation that will prolong her life until Christmas, and the post office will be swamped with nickels and dimes to save her. But let it be reported that without a sales tax the hospital facilities of Massachusetts will deteriorate and cause a barely perceptible increase in preventable deaths—not many will drop a tear or reach for their checkbooks.”

When we design, we pursue a broader type of empathy. As a colleague once said to me, designers need to identify with the whole user base. User-centricity is about recognizing that there are a number of personas, each with different goals, desires, challenges, behaviors, and needs. We design for all of these personas, recognizing that each has different goals to accomplish and different ways of going about achieving them.

So what are the key takeaways from my experience?

  1. Situations that help us build empathy for our users are invaluable because they give us deep knowledge, but we should recognize and feel empathy for the many. Looking at these situations through the lenses of our multiple personas can help us avoid this trap.
  2. Remember that the empathy we look to build in design is not just about feelings, but rather about understanding goals, the reasons for these goals, and how they are or aren’t currently accomplished.
  3. Have some empathy for yourself—it’s hard to untangle our personal feelings from the work we do on a day-to-day basis. Remember, we’re all human, and we will fall into the trap of focusing on ourselves from time to time. Recognizing this and looking out for the places where it affects our work is the best we can do.

What about you—have you found yourself in similar situations? How have you approached them? Are there tricks you use or pitfalls you work to avoid? Please use the Twitter hashtag #designresearch to share in the conversation. 

Illustration by Cale LeRoy




Lauren Chapman Ruiz

Lauren Chapman Ruiz is a Senior Interaction Designer at Cooper in San Francisco, CA, and is an adjunct faculty member at CCA.






Diversity and survival


Authors: Jonathan Grudin
Posted: Tue, August 05, 2014 - 11:40:21

In a “buddy movie,” two people confront a problem. One is often calm and analytic, the other impulsive and intuitive. Initially distrustful, they eventually bond and succeed by drawing on their different talents.

This captures the core elements of a case for diversity: When people with different approaches overcome a natural distrust, their combined skills can solve difficult problems. They must first learn to communicate and understand one another. In addition to the analyst and the live wire, buddy movies have explored ethnically, racially, and gender diverse pairs, intellectual differences (Rain Man), ethical opposites (Jodie Foster’s upright agent Clarice Starling teamed with psychopathic Hannibal Lecter), and alliances between humans and animals or extra-terrestrials.

Diverse buddies are not limited to duos (Seven Samurai, Ocean’s Eleven). All initially confront trust issues. Lack of trust can block diversity benefits in real life, too: Robert Putnam demonstrated that social capital is greater when diversity is low and that cultures with high social capital often fare better. However, the United States has prospered with high immigration-fueled diversity, despite the tensions. When is diversity worth the price?

In the movies, combining different perspectives solves a problem that no individual could. The moral case for racial, gender, social, or species diversity is secondary, although these differences may correlate with diverse views and skills. At its core, diversity is about survival, whether the threat is economic failure or the Wicked Witch of the West. “Don’t put all your eggs in one basket,” financial planners advise. A caveat is that diversity is not always good. Noah had to bar Tyrannosaurus rex from the ark; it wouldn’t have worked out. For a given task, some of us will be as useful as the proverbial one-legged man at an ass-kicking party. Exhortations in support of diversity rarely address this.

Diversity in teams receives the most attention. My ultimate focus is on the complex task of managing diversity in large organizations—companies, research granting agencies, and academic fields. But a discussion of diversity and survival has a natural starting point.

Biological diversity

Diversity can enable a species to survive or thrive despite changes in environmental conditions. Galapagos finches differ in beak size. Big-beaked finches can crack tough seeds, small-beaked finches ferret out nutritious fare. Drought or a change in competition can rapidly shift the dominant beak size within a single species. The finches do well to produce some of each.

In two situations, biological diversity disappears: A species with a prolonged absence of environmental challenge adapts fully to its niche, and a species under prolonged high stress jettisons anything nonessential. If circumstances shift, the resulting lack of diversity can result in extinction. In human affairs too, complacency and paranoia are enemies of diversity.

Human diversity

Coming to grips with workplace diversity is difficult because all forms come into play. Our differences span a nature-nurture continuum. Race and gender lie at one end, acquired skills at the other. Shyness or a preference for spatial reasoning may be inherited; cultural perspectives are acquired. In his book The Difference, Scott Page focuses on the benefits of diverse cognitive and social skills in problem-solving. As I was writing this, a friend announced a startup for which a major investor was on board provided that other investors join: He wants diverse concurring reviews.

A team matter-of-factly recruiting core skills doesn’t think of them as diversity—but an unusual skill becomes a diversity play. Whether differences originate in nature or nurture isn’t important. Understanding their range and how they can clash or contribute is.

Drawing on thousands of measurements and interviews, William Herbert Sheldon’s 1954 Atlas of Men [1] yielded three physical types, each focused on an anatomical system and accompanied by a psychological disposition: (i) thin, cerebral ectomorphs (central nervous system), (ii) stocky, energetic mesomorphs (musculature), and (iii) emotional, pleasure-seeking endomorphs (autonomic nervous system). Consider a team comprising a scarecrow, a lion, and a tin man keen to acquire a brain, courage, and a heart, or Madagascar’s giraffe, lion, and hippopotamus.

Aldous Huxley used Sheldon’s trichotomy in novels. Organizational psychologists favor broader typologies. Companies know that good teams can be diverse and try to get a handle on it. A popular tool is the Myers-Briggs Type Indicator. This 2x2 typology was built on Carl Jung’s dimensions of introversion/extraversion and thinking/feeling. It is consistent with his view that a typology is simply a categorization that serves a purpose. Other typologies are also used. Early in my career, my fellow software developers and I were given a profiling survey that I quickly saw would indicate whether we were primarily motivated by (i) money, (ii) power, (iii) security, (iv) helping others, or (v) interesting tasks. If I filled it out straightforwardly I’d be (v) followed by (iv). Instead, I drew on my childhood Monopoly player persona, and in the years that followed received very good raises. I took the initiative to find interesting tasks, rather than relying on management for that.

Teams and organizations

Organizations generally differ from teams in several relevant respects. Organizations are larger and more complex, and they last longer [2]. An organization, a team of teams, requires a greater range of skills than any one team. Organizations strive to minimize the time spent problem-solving, where diversity helps the most, and maximize the time spent in routine execution. Most teams continually solve problems; one change in personnel or external dependency can alter the dynamics and lead to a sudden or gradual shift in roles.

Should an organization group people with the same skill or form heterogeneous teams? Should a company developing a range of products have central UX, software development, and test teams, or should it form product teams in which each type is represented? Homogeneous teams are easier to manage—assessing diverse accomplishments is a challenge for a team manager. Diverse teams must spend time and effort learning to communicate and trust.

Homogeneous teams could be optimal for an organization that is performing like clockwork, heterogeneous teams better positioned to respond in periods of flux. A centralized UX group is fine for occasional consulting, an embedded UX professional better for dynamic readjustment.

Teams

Consider a working group with a single manager, such as a program committee for a small conference, an NSF review panel, or a team in a tech company. The scope of work is relatively clear. Diversity may be limited: quant enthusiasts may keep out qualitative approaches or vice versa; a developer-turned-manager may feel that a developer with some UX flair has sufficient UX expertise for the project.

Where does diversity help or hinder? Joseph McGrath identified four modes of team activity: taking on a new task, conflict resolution, problem-solving, and execution. Diversity often slows task initiation. It can create conflicts. Diversity is neutral in execution mode [3], where a routine job has been broken down into component tasks, minimizing complex interdependencies.

Scott Page describes contributions of diversity to the remaining mode of team activity, problem-solving. Diversity helps when clearly recognizable steps toward a solution can be taken by any team member, as in open source projects or when several writers work on dialogue for a drama. Although a team executing in unison like a rowing crew may not benefit from diversity, most teams encounter problems at times. Members are often collocated, enabling informal interaction, learning to communicate, and building trust. When resource limitations force hard decisions on a team, members understand the tradeoffs. Subjective considerations sometimes override objective decision-making on behalf of team cohesion: “We just rejected one of her borderline submissions, let’s accept this one.” “His grant proposal is poor but his lab is productive, let’s accept it at a reduced funding level.” In contrast, responses to organizational decisions are often less nuanced.

Teams have teething pains, conflicts, and managers who can’t evaluate workers who have different skill sets or personalities. But in general, diverse teams succeed. One that fails is replaced or its functions reassigned.

When time is limited, introducing diversity is challenging. I was on a review panel that brought together organizational scientists and mathematicians. The concept of basic research in organizational science mystified the mathematicians, to whom it seemed axiomatic that research on organizations was applied. Another review panel merged social scientists studying collaboration technology and distributed AI researchers; the latter insisted loudly that every grant dollar must go to them because DARPA had cut them off and they had mouths to feed.

Organizations

Organizations often endorse diversity, perhaps to promote trust in groups that span race, gender, and ethnicity. However, it is rarely a priority. Given that managing diversity is a challenge, why should a successful organization take on more than necessary? An organization’s long stretches of routine execution don’t benefit from a reservoir of diversity that enhances problem-solving. Complacency sets in. Perhaps diversity would yield better problem-solving, outweighing the management costs. Perhaps not.

The biology analogy suggests that a reservoir of skills could enable an organization to survive an unexpected threat. We don’t need big beaks now, but keep a few around lest a drought appear. A successful organization outlives a team, but few surpass the 70-year human lifespan. Perhaps identifying and managing the diverse skills that could address a wide-enough range of threats is impossible; managing the clearly relevant functions is difficult enough.

One approach is to push the social, cognitive, and motivational diversity that aids problem-solving down to individual teams to acquire and manage, using tools such as employee profile surveys. Unfortunately, it doesn’t suffice for organizational purposes to have skills resident in teams. Finding and recruiting a specific skill that exists somewhere in a large organization is a nightmare. I have participated in several expertise-location system-building efforts over 30 years, managing two myself. The systems were built but not used. Incentives to participate are typically insufficient. Similarly, cross-group task forces have been regarded as stop-gap efforts that complicate normal functioning.

Another approach is to assign teams to pursue diverse goals. For example, one group could pursue low-risk short-term activities as another engages in low-probability high-payoff efforts, drawing on different skills or capabilities.

Assessment at large scale

Organizations can’t be infinitely diverse. A company does a market segmentation and narrows its focus. NSF balances its investment across established and unproven research. A conference determines a scope. When unexpected changes present novel problems, will a reserve of accessible skills and flexibility exist? Management can draft aspirational mission statements, but in the end, responsiveness is determined by review processes, such as employee performance evaluations, grant funding, and conference and journal reviewing.

A pattern appears as we scale up: Assessing across a broad range not only requires us to compare apples and oranges (and many other fruits), it requires deciding which apples are better than which oranges. Sometimes all the apples or all the oranges are discarded.

Large organizations. How broadly should assessments and rewards be calibrated laterally across an organization? Giving units autonomy to allocate rewards can lead to the perception or fact that low-performing units are rewarded equivalently to stronger units. A concerted effort to calibrate broadly takes time and can lead to the dominant apples squeezing out other produce. For example, when rewards are calibrated across software engineering, test, and UX, the more numerous software engineers to whom “high-performing UX professional” is an oxymoron can control the outcome.

Organizations also sacrifice diversity to channel resources to combat exaggerated external threats. A hypothetical company with consumer and enterprise sales could respond to a perceived threat to its consumer business by eliminating enterprise development jobs and devoting all resources to consumer for a few years. When the pendulum swings back to enterprise, useful skills are gone.

Granting agencies. An agency that supports many programs has three primary goals: (i) identify and support good work within each program (a team activity), (ii) eliminate outdated programs, which facilitates (iii) initiating new programs, expanding diversity. Secondary diversity goals include geographic distribution, education outreach, underrepresented groups, and industry collaboration.

Let’s generously assume that individual programs surmount team-level challenges and support diversity. The second goal, eliminating established programs that are not delivering, can be close to impossible. Once a program survives a provisional introductory period, it is tasked to promote the good work in its area—there is an implicit assumption that there is good work. Researchers in a sketchy program circle the wagons: They volunteer for review panels and for rotating management positions, submit many proposals (“proposal pressure” is a key success metric), rate one another’s proposals highly, and after internal debates emerge with consensus in review panels.

The inability to eliminate non-productive programs impedes the ability to add useful diversity. For example, NSF has a process for new initiatives that largely depends on Congress increasing its budget. The infrequent choices can be whimsical, such as the short-lived “Science of Design” and “CreativeIT” efforts [4]. I participated in three high-level reviews in different agencies where everyone seemed to agree that science suffers from inadequate publication of negative results, yet we could find no path to this significant diversification given current incentive structures.

Large selective conferences. Selective conferences in mature fields form groups to review papers in each specialized area. Antagonism can emerge within a group over methods or toward novel but unpolished work, but the main scourge of diversity is competition for slots, which causes each group to gravitate toward mainstream work in its area. Work that bridges topic areas suffers. Complete novelty finds advocates nowhere. Researchers often wistfully report that their “boring paper” was accepted but their interesting paper was rejected.

Startups: Team and organization

A startup needs a range of skills. It may avoid diversity in personality traits: People motivated by security or power are poor bets. Goals are clear and rewards are shared; there is a loose division of labor with everyone pitching in to solve problems. The short planning horizon and dynamically changing environment resemble a team more than an established enterprise. With no shortage of problems, diversity in problem-solving skills is useful, but every hire is strategic and there is little time to develop trust and overcome communication barriers.

Professional disciplines

Competition for limited resources works against community expansion and diversity. Two remarkably successful interdisciplinary programs, Neuroscience and Cognitive Science, originated in copious sustained funding from the Sloan Foundation. In contrast, I’ve invested more fruitless time and energy than I like to think about trying to form umbrella efforts to converge fields that logically overlapped: CHI and Human Factors (1980s), CHI and COIS (1980s), CSCW and MIS (1980s), CSCW and COOCS/GROUP (1990s), CHI and Information Systems (2000s), and CHI and iConferences (2010s). An analysis of why these failed appears elsewhere.

Conclusion

This is not the short essay I expected, and it doesn’t cover the equity considerations that drive diversity discussions in university admissions and workplace hiring. What can we conclude? Noting that diversity requires up-front planning to possibly address unknown future contingencies, I will consider where the biology analogy does and doesn’t hold.

With moderate uncertainty, diversity is a good survival strategy; with major resource competition, diversity yields to a focus on the essentials. So avoid exaggerating threats. Next, given that choices are necessary, what dimensions of diversity should we favor? Ecological cycles favor the retention of capabilities that were once useful—a drought may return. Economic pendulum swings argue for the same.

However, the march of science and technology creates both obsolescence and novel opportunities and challenges. Some, but not all, can be anticipated by studying trends. It is fairly empty to recommend focusing on efficiency in execution while retaining flexibility, but “avoid overreacting to perceived threats” is again good advice. Businesses narrow when they should diversify. Government funding is poured into defense and intelligence at the expense of health, education, infrastructure, and environment. And finally, my favorite hot button example, large conferences.

HCI researchers have always been terrified of appearing softer than traditional computer science and engineering. So we followed their lead. We drove down conference acceptance rates, kept out Design, and chased out practitioners. But other CS fields evolve more slowly, with greater consensus on key problems. Human interaction with computers explodes in all directions. Novelty is inevitable, yet with acceptance rates of 15% - 25%, each existing subfield accepts research central to today’s status quo, leaving little room for research that spans areas, is out of fashion but likely to return, involves leading edge practitioners, or is otherwise novel [5]. Could our process consign us to be followers, not leaders?

Endnotes

1. His atlas of women was not completed.

2. To be clear about my terminology use, a “football team” is an organization, although the group of players on the field together is a team. Boeing called the thousands of people working on the 777 a team, but here it would be an organization.

3. An exception is an organization tasked with problem-solving. The World War II codebreaking organization at Bletchley Park made extraordinary use of diversity, documented in Sinclair McKay’s The Secret Lives of Codebreakers: The Men and Women Who Cracked the Enigma Code at Bletchley Park (Plume, 2010).

4. DARPA is an agency with top-down management which can and does eliminate programs, sometimes restarting them years later.

5. See Donald Campbell’s provocative 1969 essay “Ethnocentrism of Disciplines and the Fish-scale Model of Omniscience.”

Thanks to Gayna Williams for ideas and perspectives, John King, Tom Erickson, and Clayton Lewis for comments on an earlier draft.




Jonathan Grudin

Jonathan Grudin is a principal design researcher at Microsoft.






Possibilities, probabilities, and sensibilities


Authors: Uday Gajendar
Posted: Thu, July 31, 2014 - 10:27:43

Design is an iterative activity involving trajectories of exploration and discovery, of the problem space, the target market, and the solutions, toward making good choices. As the primary designer charged with delivering an optimal solution, I must contend with such problems of choice, and thus trade-offs. Designing is fundamentally about mediating “choices”: what elements to show on-screen, which pathways to reveal, how to de-emphasize some features or prioritize others, and so forth. Some are “good choices” and some are not so good. So, if choice is at the heart of designing, how does a designer effectively handle too many choices and options, a dazzling array pitched by earnest product managers seeking revenues and tenacious engineers wanting to showcase brilliance? Hmm! It’s a veritable challenge that I confront in the course of my own daily design work, too. I offer a potential framework that I have been evolving and applying, which may be useful: iteratively defining the possibilities, probabilities, and sensibilities. Let me explain further...

Possibilities: This involves mapping out to the fullest extent all possible variants of user types, contexts of use, or solutions for a problem. Even if it’s wild or unfeasible, or an “edge case,” just capture it anyway so it is recorded for everyo