interactions XX.1, January + February 2013

Feedback


Authors: INTR Staff

Tech for Good and Evil

"HCI for Peace: Beyond Tie Dye" (September + October 2012, p.40) expresses an admirable sentiment and sets an equally admirable goal. I also believe the article is predicated on a naïve assumption: that the espoused technology will be used for good and not evil.

Technology is politically neutral. It does not care who uses it or for what purpose it is used. In its 2011 annual report, Amnesty International reminded us that "technology will continue to be a tool used by both those who want to challenge injustices around the world and those who want to control access to information and suppress dissenting voices." The report pointed out that the ability of governments to abuse and exploit technology will always be superior to the ability of grassroots organizations to use technology to achieve favorable outcomes.

I don't think it's too much of a stretch to see the technologies the authors are promoting used by repressive regimes in undesirable ways.

Avram Baskin
[email protected]

Conceptual Models Are Part of the Answer

In an otherwise valuable article in the September + October issue of interactions (p. 54), Susanne Bødker, Niels Mathiasen, and Marianne Graves Petersen badly misunderstand and misconstrue the meaning of a person's mental model, also called a conceptual model. As a result, they argue against an extremely valuable tool for design, even though their article itself is a good illustration of the value of such a tool. You might say that although they do not throw out the baby with the bathwater, they are washing the wrong baby.

The title of their article is "Modeling Is Not the Answer! Designing for Usable Security." Notice that my title here, "Conceptual Models Are Part of the Answer," differs in two respects. First, I claim that conceptual models are part of the answer to the security problem, whereas the very title of their article argues against the idea. Second, I say "part of the answer" to indicate this is only one of the many things that must be gotten right.

The authors fundamentally misunderstand the meaning of conceptual models and, in particular, my discussion of them. "No, no," the authors seem to be saying, "no model can capture the complexity of situated behavior." My colleague Butler Lampson [2], a security expert at Microsoft, and I, they say, "have not embraced insights from past decades of HCI research." I cannot let such a statement remain unchallenged, especially as this mistaken belief is the basis for their well-publicized article. The weird thing is that I completely agree with the content. Why am I being attacked?

The point of their article is that security is a complex system involving many different trade-offs. We need to study people's real understandings and behaviors. And even then, because actions are situated in the real, complex, unpredictable world, modeling will often fail. Yes, I completely agree, as my many writings on this topic have indicated.

In the article, the authors use several stories to illustrate the complexity of real behavior. They emphasize that people's beliefs and understandings are based on "learning and past experience." Their understanding is situated, which means heavily dependent upon the details of the particular event. Trust is an essential component that influences their behavior.

Well, yes, that is what I have long been advocating. I call people's real understandings their conceptual models. So why am I being attacked? Where do we disagree?

I do not know what the authors believe, but having unsuccessfully tried to explain my point of view to one of them in a private email exchange, I can only imagine that the word model violates their world view of what kind of science is appropriate to the study of human beings. I suspect that when they hear the word model, they immediately imagine a large, complex computer simulation of people's behavior, one that of necessity oversimplifies the situation, the environment, and the complexity of human behavior. Many in our community are attempting to develop these kinds of models. And many in our community think that such an attempt is ill-advised and guaranteed to fail. Whatever side you wish to believe in is quite irrelevant to this discussion, for the conceptual models I speak of are a very different kind of entity. The problem is that when these three authors saw the word model, they immediately went into attack mode, damning the use of conceptual models as a valuable design strategy, all because of their misunderstanding.

My opinion is buttressed by the next-to-last sentence of their article, where the reader is encouraged "to design for usable security without falling into the trap of better generic modeling." Wow. Conceptual models should not be confused with computer or other simulations designed by scientists or technologists to predict human behavior. They are different things. These authors fundamentally miss the point of the models.

Conceptual models are mental beliefs held by people about how things work, where things can be products, objects, systems, animals, and other people.

Imagine a bicycle built for two, constructed by taking two normal bicycles and having them share the same front wheel. To do this, imagine two bicycles lined up, the one on the left facing right and the one on the right facing left, and then, most importantly, with both sharing the same front wheel (the middle wheel in this imagined configuration) [3].

If you have successfully constructed a mental image of this bicycle built for two, you will quickly realize that the concept is unworkable. How do you know it is unworkable? The mental image leads to a simple mental model inside your head in which you imagine each rider trying to pedal against the pedaling of the other, and when you imagine the outcome, well, it would be a disaster. This is what I mean by a mental conceptual model.

Mental conceptual models are widely used in the teaching of interaction design. I made them a fundamental component of my book, The Design of Everyday Things [4]. Designers, I said, develop conceptual models within their heads of the devices they build. This leads them to specify a resulting product or service (or, in the case of the article under discussion, a security system). When people use the resulting design, they develop their own mental conceptual models of how it works. Ideally, the designer's model and the user's model would correspond, but they will correspond only if the designer provides the appropriate clues and information to aid the user.

In my writings and talks for the security community, I have argued that designers need to understand and help shape the conceptual models of the users of security systems. It is relatively easy for people to understand how thieves might break into homes that lack sufficient safeguards against intrusion, but security in information technology systems is much more difficult to understand. Moreover, it is not helped by the many institutions that send us legitimate email messages asking us to click on links, thus causing us to form conceptual models that say it is okay to do this. It is not aided by the complexity of URL structure, which the average person does not understand. It is not helped by the onerous password requirements of many institutions (especially universities) that make no sense to the recipients (primarily because they are silly and misguided).
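
A minimal sketch can make the point about URL structure concrete (the domains below are hypothetical, chosen for illustration). Using Python's standard urllib.parse, it shows that only the hostname determines which server a link actually contacts, while the brand name a casual reader recognizes may sit in a subdomain label or in the path:

    # Illustrative sketch: only the URL's hostname determines which server
    # is contacted; the recognizable brand name may appear elsewhere.
    # All domains below are hypothetical.
    from urllib.parse import urlparse

    links = [
        "https://www.example-bank.com/login",            # the real site
        "https://www.example-bank.com.evil.test/login",  # brand as a subdomain label
        "https://evil.test/www.example-bank.com/login",  # brand in the path
    ]

    for link in links:
        host = urlparse(link).hostname
        print(f"{link}\n  actually contacts: {host}")

Run against these examples, only the first link leads to www.example-bank.com; the other two lead to hosts under evil.test, even though the brand name is visible in all three.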

Do people make security trade-offs? Of course, and if security experts understood people's conceptual models, they might understand those trade-offs. Indeed, security professionals themselves trade off convenience against security, although their own personal conceptual models lead them to believe they do it intelligently. A major point that both Lampson and I were making is that if security experts want people to make sensible trade-offs based on consideration of the risks, they must provide them with appropriate conceptual models. We didn't say that this was the answer to security problems: We said this would be a major step forward. In fact, as all of us who hide keys to our front doors under doormats know, there is no perfect solution to security, no perfect trade-off between usability (convenience) and security. But there can be an informed trade-off. (When a serial killer is thought to be in the neighborhood, many will remove the key from under the doormat, but put it back once this particular threat is over.)

Security is a shared responsibility, I have argued. Everyday users need to understand the implications of their actions, which means they need to develop better conceptual models of security; security professionals need to help provide meaningful information to guide in the development of these models, as well as to understand how people behave. Security measures need to be appropriate to real human behavior.

The types of studies described by Bødker, Mathiasen, and Petersen would help.

Conclusion: Conceptual models are a powerful tool for designers. They describe people's beliefs about the things they encounter—beliefs about artificial systems, the environment, animals, and other people. They are intended as descriptions of people's beliefs, the better to design in ways that accommodate them. The models of the world held by people are incomplete, ambiguous, and sometimes self-contradictory. Our understanding of people's conceptual models is also incomplete. Nonetheless, they are valuable aids to the designer, essential if we wish design to take into account human understanding, and essential if we wish to incorporate empathy into design.

Donald A. Norman
www.jnd.org
[email protected]

A Response to Don Norman

We thank Don Norman for his response to our article and are pleased to engage in what we see as a very important discussion.

We accept that Norman is unhappy with the title's claim that "modeling is not the answer." While we admit that to some extent Norman has been lumped together with other respected researchers in the introduction, we did not intend the title to reference his work specifically. However, we are concerned with how the conceptual modeling perspective often dominates the security area, and we suggest that rather than more modeling, there is a strong need for other perspectives to be developed.

Norman defines conceptual models as "mental beliefs held by people about how things work, where things can be products, objects, systems, animals, and other people."

We agree that designers benefit greatly from understanding how people think work gets done, how things work, and so on, and that those understandings can be intentionally changed through design. The fact that design changes users' understanding of their work and artifacts is itself important for designers to grasp. This is fundamentally why we have always argued for participatory design.

Nonetheless, our point is a little different from the way Norman reads our text: Both his own original example of the doors at Google [1] and most of our examples show that even if users understand the risks, they do not act according to this understanding. In that sense it is not even the case that the user's conceptual model is false or flawed; rather, other aspects of their actions—for example, time pressure—make them ignore what they know and understand. This means that Hans's story (the discussion piece in our article) cannot be "fixed" through a better conceptual model as such (i.e., by teaching him differently). What Hans's story shows is that even though Hans knows a lot about security risks on the Internet, time pressure and money issues make him behave irresponsibly.


In other words, we argue for the need for perspectives other than better conceptual models; our note does not reflect a misunderstanding on our part of what conceptual models are. Norman, in his piece, argues for clear and understandable conceptual models as follows:

"Finally, we need more humane ways of interacting with the systems to establish eligibility, ways that the people who use them consider appropriate and reasonable. This means that the systems must be accompanied by a clear and understandable conceptual model. If people could understand why they were required to do these things, they would be more willing to pay a reasonable penalty. We all willingly adapt to the inconvenience of locks that seem reasonable for protection, but not those that get in the way—as the propped-open door at the security conference indicates" [1].

What is at stake here is more than whether or not we have misunderstood "conceptual models"—it is the entire foundation of cognitive models. What we are trying to say (and had this been more of a research paper, this could have been developed better) is: Conceptual models inside the head do not sufficiently explain how people act, in particular when it comes to security in everyday situations.

We believe there are fundamental differences between everyday use situations for people and citizens at large and high-risk use situations, for example, power plants, where safety/security drills are part of keeping up the level of attention [2]. In the latter case users may be taught the right models, but in the former this is not possible. As a matter of fact, we are quite fed up with the many attempts we have seen to educate users to behave more securely, and we admit to having participated in such attempts ourselves. One of our sister projects within IT Security for Citizens focused entirely on educating the Danish population to act more securely (see [3]).

So actually, we are saying that everyday security cannot be dealt with on a conceptual level alone when events unfold in which people (in these security-critical situations) act under pressure (see also, e.g., [4]). Conceptual models (how people understand their systems) do not sufficiently explain how people make trade-offs and react in tense, threatening, and unforeseen situations (where security is really at stake). This is because not all human action is based on cognitive/mental activity, as suggested by Gibson and activity theory (see below). So conceptual models may be useful in situations where there is a chance of training the users, but the contingencies of real threats make such drills impossible in everyday situations.

In our view, it becomes a little too far-fetched to argue that users' inappropriate actions, caused by such unexpected elements of the use situation, are also part of a more extensive, and still flawed and incomplete, conceptual model, ad infinitum.

Whether or not we are right about this can obviously be discussed, and we would be happy to have a longer discussion. How we then learn to act more securely, and how we as human beings can accumulate general understanding in this area, is also a really interesting issue.

Obviously, we know Norman's work and we also know and appreciate the difference between this work and the modeling that happens in areas of computer science.

Whether or not conceptual models are in the heads of people is, we believe, worth a discussion as such, but it is not the main issue at stake here. We recognize that people have, and are able to act according to, a certain understanding that they have learned or that is due to classical Gibsonian affordances. This understanding is in their heads and in their bodies, perhaps even in their relationships with their artifacts (see, e.g., [5]). It is interesting that Norman holds on so strongly to this idea of conceptual models when at the same time he has been interested in affordances.

Thus, rather than more conceptual modeling, we call for different perspectives to be developed within the area of security, such as affordances, learning in use over time, and embodied interaction, so that we can design for better everyday security.

Norman does not like our interpretation of his work. That is fair. We, on the other hand, don't agree with him that better conceptual models necessarily improve security.

Susanne Bødker and Marianne Graves Petersen
University of Aarhus

References

Norman's references

1. Bødker, S., Mathiasen, N., and Petersen, M.G. Modeling is not the answer! Designing for usable security. interactions 19, 5 (2012), 54–57.

2. Lampson and I presented our views in a National Academies workshop on usability, security, and privacy. See National Research Council Steering Committee on the Usability, Security, and Privacy of Computer Systems. Toward Better Usability, Security, and Privacy of Information Technology: Report of a Workshop. The National Academies Press, 2010. Also, Bødker et al. cite an earlier version of my work: Norman, D.A. When security gets in the way. interactions 16, 6 (2009), 60–63.

3. Ah, how nice it would be just to show you a drawing. The verbal description is my attempt to avoid the considerable effort required to find the elusive French artist, Jacques Carelman, who thought of the idea and has a wonderful picture in his book. I used a lot of his drawings in The Design of Everyday Things, but getting permission was a nightmare I do not wish to repeat.

4. Norman, D.A. The Design of Everyday Things. Basic Books, New York, 2002.

Bødker and Petersen's references

1. Norman, D.A. When security gets in the way. interactions 16, 6 (2009), 60–63.

2. Rasmussen, J. and Vicente, K.J. Coping with human errors through system design: Implications for ecological interface design. International Journal of Man-Machine Studies 31 (1989), 517–534.

3. Gjedde, L., Sharp, R., Andersen, P., and Meldgaard, H. Safeguarding the user: Developing a multimodal design for surveying and raising Internet safety and security awareness. Proc. of the 5th International Conference on Multimedia and ICT in Education (Lisbon, Portugal, 2009).

4. Bødker, S. and Palen, L.A. Don't get emotional. In Affect and Emotion in Human-Computer Interaction: From Theory to Applications. Lecture Notes in Computer Science 4868, Springer (2008), 12–22.

5. Bødker, S. and Klokmose, C.N. The Human-Artifact Model: An activity theoretical approach to artifact ecologies. Human-Computer Interaction 26 (2011), 315–371.


©2013 ACM  1072-5220/13/01  $15.00

