
The Facebook “emotion” study: A design perspective would change the conversation


Authors: Deborah Tatar
Posted: Sun, April 19, 2015 - 12:58:05

Jeff Hancock from Cornell gave the opening plenary at the 2015 CSCW and Social Computing conference in Vancouver last month (3/16/15). Jeff was representing and discussing the now infamous “Facebook Emotion Study,” in which a classic social psychology study was conducted on over 600,000 unwitting Facebook members, to investigate the effects of increasing the percentage of positive or negative elements in their news feeds on the use of emotion words in their subsequent posts. He apologized, he explained, and he did so with a pleasing and measured dignity.

But he also made choices that dismissed, or at least downplayed, what to my mind are some of the most important issues and implications. He focused on the research study itself, that is, on whether and how it is ethical to conduct research on unwitting participants. We can share that focus: we can ask, for example, whether Cornell lacked sufficient oversight, or we can ask about informed consent. I’m glad that he is doing that.

But that view ignores the elephant in the room. The way we interpret the ethics of the research is grounded in the way we evaluate the ethics of the underlying practice as conducted by Facebook. The study aroused so much public heat (The Guardian, The New York Times, Forbes) in part because it exposed how Facebook operates routinely.

We could argue that researchers are not responsible for the systems that they study and therefore that the underlying ethics of Facebook’s practice are irrelevant to the discussion of research. But that point of view depends on the existence of a very clear separation of concerns. In this case, the lead author was an employee of Facebook. We must consider that pleasing Facebook, one of the most powerful sources and sinks of information—and of capital—in the world, was and is a factor in the study and its aftermath.

To my mind, the wrong aspects of the research are grounded in the wrong aspects of the system itself, and while Jeff Hancock is a multitalented, multifaceted guy, an excellent experimentalist, and presumably a searcher after truth, I also think that he gave Facebook a pass in his CSCW presentation. That, by itself, is an indicator of the deeper problem. It is really hard to think critically about an organization that has such untrammeled power. Hancock put the blame on himself, which is an honorable thing to do. He does not deserve more opprobrium and it was pretty brave of him to talk about the topic in public at all. But in some sense the authors of the study are secondary to the set of considerations the rest of us should have.

Regardless of what Hancock said or did not say, the large corporations—Google, Amazon, Facebook, and so forth—the so-called GAFA companies—make decisions about what people see in unaccountable ways. These decisions are implemented in algorithms. As Marshall and Shipman (“Exploring Ownership and Persistent Value in Facebook”) reported the next day, people do not know that the algorithms exist, much less what they contain. Users can imagine that the algorithms are at least impartial, but who actually knows? And even if the algorithms are impartial, we must remember that impartial does not always mean fair or right. It certainly does not mean wise. On one hand, all of these companies glory in their power; on the other hand, they fail to claim responsibility for their influence, and their power, to a large degree, rests in their influence.
 
Is what Facebook is doing actually wrong? Not everyone thinks so. Hancock cited Karahalios’s work, indicating that when people learn what Facebook does routinely, they are initially very upset, but after a couple of weeks they realize that they want to read news that is important to them. But this is precisely the place where Hancock’s argument disappointed. Instead of scrutinizing this finding, he moved on.

I have recently, in ACM Interactions, called out the ways that computers, as they are designed and as most people interact with them today, dominate humans through their inability to bend the way people often, or even usually, do. I hypothesize that computers put users in a habitually submissive role. On this analysis, the real damage inflicted by the influence of the large, unregulated companies on internet interaction is that the systems they create fail to reflect to us the selves we wish we were. Instead, they reflect to us the people they wish we were: primarily, compliant consumers. And I have raised the possibility that this has epidemiological-scale effects.

This is important from a design perspective. As I said, Hancock is a multifaceted, multitalented guy, but he is not a designer. The design question is always “What could we do differently?” and he neither asked that nor pushed us to ask it. Instead of talking about all the ways that Facebook or a competitor could provide some of its services (perhaps a little compromised) in a better way, the analysis tacitly accepted the trade-off that we cannot have both—on the one side, transparency, honesty, and control, and on the other side, pared-down and selected information. A person cannot do everything in one talk, but this was an important missing piece.

I am not the only person to talk this way about the importance of reconceptualizing technologies such as Facebook, or about the possible dangers of ignoring the need to do so. In my Interactions article, I cited an intellectual basis for the claims in a wide range of thinkers (Suchman, Turkle, Nass) and would have cited more but for the word limit. Lily Irani’s Turkopticon is an exercise in critical design. Chris Csikszentmihályi’s work at the Media Lab represented a tremendous push-back. The Bardzells have also been central in designing responses.

And then, some of the points I make here—and more—were brought up beautifully in the closing plenary by Zeynep Tufekci of UNC Chapel Hill. Tufekci did not give Facebook a pass. She was forthright in her criticisms. She analyzed the situation from a different intellectual basis, offering a range of compelling examples of issues and problems. Most plaintive was the example of the New Year’s card, created by Facebook, that read “It’s been a great year” and featured the picture of a 7-year-old girl who had died that year. The heartbreaking picture had received a lot of “likes” and so was impartially chosen by the algorithm. The algorithm was written, as all algorithms are, by people, who were not so prescient as to imagine all the situations in which a large number of people might “like” a picture, much less how their assumptions might play out in actual people’s lives. The algorithm was written to operate on information that was, by the terms of the EULA (end-user license agreement), given to Facebook. Are we allowed to give our information to Facebook and other companies in this way? After all, we are not allowed to sell ourselves into slavery, although many early immigrants from Ireland and Scotland came to North America this way.

But the considerable agreement among Tufekci’s criticism, mine, and others’ is tremendously important. It exists in the face of a countervailing tendency to think that the design of technology has no ethical implications, indeed no meaning. My ongoing effort is to design technologies that, sometimes in small ways, challenge the user’s relationship with technology and create questions.

After Hancock’s talk, but before Tufekci’s, one of my friends commented that the real threat to Facebook’s success would be another technology that does not sell data. In fact, Ello is such an organization, constructed as a “public benefit company” obligated to conform to the terms of its charter. It intends to make money through a “freemium” model. According to Sue Halpern of The New York Review of Books (“The Creepy New Wave of the Internet,” November 2014), Ello received 31,000 requests per hour after merely announcing its intention to construct a social networking site that did not collect or sell user data. At 31,000 requests an hour, the trend would have had to continue for many, many, many hours to begin to compete with Facebook’s 1.4 billion users, but this level of response suggests that there is a deep hunger for alternatives.
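For a rough sense of the scale involved, here is a back-of-the-envelope sketch (in Python, using only the two figures cited above; the numbers are Halpern’s and the press’s, not new data):

# Back-of-the-envelope sketch: how long would sign-ups at Ello's reported
# rate have to continue before they matched Facebook's cited user base?
requests_per_hour = 31_000        # Halpern's figure for Ello
facebook_users = 1_400_000_000    # Facebook user count cited above

hours_needed = facebook_users / requests_per_hour
years_needed = hours_needed / (24 * 365)
print(f"{hours_needed:,.0f} hours, or roughly {years_needed:.1f} years")
# prints about 45,161 hours, or roughly 5.2 years

In other words, “many, many, many hours” works out to something on the order of five years of sustained sign-ups.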

Perhaps le jour de gloire n’est pas encore arrivé, but, designers, there is a call to arms in this! Thank god for tenure and a commitment to academic free speech.

Aside from the ways that we are, like Esau in the Bible, selling our birthrights for a mess o’ pottage (that is, selling our information and ultimately our freedom to GAFA companies for questionable reward), there is another issue of great concern to me: the almost complete inutility of the ACM Code of Ethics for addressing the ethical dilemmas of computer scientists in the current moment. I asked Jeff Hancock about this, and he said that “it had been discussed” in a workshop held the previous day about ethics and research. I look forward to hearing more about that, but it seemed clear that, because his focus is primarily on the narrower issue of research (the matter brought up repeatedly in the press coverage and public discourse), he is more concerned with new U.S. Institutional Review Board and Health and Human Services regulations than with the position of the ACM. But CSCW and, for that matter, Interactions are ACM products. ACM members should be concerned with the code of ethics.



Deborah Tatar

Deborah Tatar is a professor of computer science and, by courtesy, psychology, at Virginia Tech.


