HCI and security

XIII.3 May + June 2006
Page: 45

What do they “indicate?”


Author:
Lorrie Cranor

Security- and privacy-related tools often feature graphical (or in some cases textual or audio) indicators designed to assist users in protecting their security or privacy. But a growing body of literature has found the effectiveness of many of these indicators to be rather disappointing.

Security researchers often evaluate the effectiveness of systems by their ability to resist attack. They typically envision a threat model in which an attacker attempts to "fool the software" by disabling the security software, performing a malicious action in an undetectable manner, or deceiving the software into interpreting a malicious action as an innocuous one. These threat models have been expanded to include semantic attacks [5], in which attackers attempt to "fool the humans" by obscuring or visually spoofing security indicators. This work tends to focus on how to prevent sophisticated semantic attacks by developing unspoofable indicators [1,7]. But humans are often fooled by much simpler attacks and may even ignore the (usually correct) warnings provided by their security software [6].

Security and Privacy Indicators in Web Browsers

The icons that appear in the bottom right corner of most Web browsers are some of the most familiar indicators, but also the subject of considerable criticism. The most infamous of these indicators is the padlock, which signals the presence or absence of a secure sockets layer (SSL) connection between the browser and a Web site. Most browsers display a closed padlock if an SSL connection has been established. The absence of an SSL connection is signaled by either the absence of any padlock or by an open padlock. An SSL connection usually implies that communications with the site are encrypted, and that the site has authenticated itself. (Attackers have found ways to cause an unencrypted SSL connection to be established, as well as to create Web sites that spoof a trusted brand and send certificates that allow an SSL connection to be established but are not actually associated with the spoofed brand. See http://news.netcraft.com/archives/2004/03/08/ssls_credibility_as_phishing_defense_is_tested.html.) However, studies have shown that neither of these concepts is conveyed by the padlock. Many users believe the padlock signifies a secure place rather than a secure connection [4]. In addition, some users remain unfamiliar with the padlock and don’t notice its presence [3]. The padlock has been criticized because it is easy for an attacker to spoof it by painting a picture of the padlock in a borderless window at the bottom of some browsers [7]. However, users are easily convinced that a site is secure if they see the padlock embedded in the content of a Web page, which requires no sophisticated spoofing attack [3].
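
As a rough illustration of what the padlock does and does not attest to, the following sketch uses Python's standard ssl module to open a verified connection and read back the server certificate (the hostname is just an example). A verified connection proves only that the certificate chain is valid and matches the requested hostname; a lookalike domain with its own valid certificate passes exactly the same check.

    # Illustrative sketch: what a verified SSL/TLS connection actually attests to.
    # Verification checks the certificate chain and the hostname named in the
    # certificate; it says nothing about whether the site itself is trustworthy.
    import socket
    import ssl

    def inspect_tls(hostname: str, port: int = 443) -> dict:
        """Open a verified TLS connection and return certificate details."""
        context = ssl.create_default_context()  # verifies chain and hostname
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
                return {
                    "subject": dict(item[0] for item in cert["subject"]),
                    "issuer": dict(item[0] for item in cert["issuer"]),
                    "cipher": tls.cipher(),
                }

    # A lookalike domain with its own valid certificate passes this check just
    # as easily as the genuine site, which is why the padlock alone is weak.
    print(inspect_tls("example.com"))  # example hostname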

A lesser-known browser indicator is the flagged cookie icon that appears in the lower right corner of the Microsoft and Netscape Web browsers. Both browsers display an icon involving an eye, indicating that a cookie has been blocked or modified due to a conflict with the user’s configured privacy settings. Anecdotal evidence indicates that the meaning of these indicators is virtually unknown to users.

I designed a browser-based tool called Privacy Bird, which alerts users to whether the computer-readable privacy policy at a Web site matches their configured privacy settings. At sites where the policy matches, a green bird is displayed with musical notes coming out of its mouth. At sites where the policy doesn’t match, a red bird is displayed with "swearing" characters coming out of its mouth. I was surprised to discover during a user study that some users, even after being told that the indicators were associated with a privacy tool, said they thought the green bird indicated a site where one could download music and the red bird indicated a site that used adult language [2].
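
As a deliberately simplified illustration of the kind of check behind the green and red birds (Privacy Bird itself evaluates full P3P policies against the user's preference rules; the purpose names and data structures below are hypothetical):

    # Simplified sketch of matching a site's declared data-use purposes against
    # the user's settings. The real tool parses P3P policies and preference
    # rules; the purpose names here are hypothetical placeholders.

    # Purposes the user has said they will not accept (example settings).
    disallowed_purposes = {"telemarketing", "sharing-with-third-parties"}

    def evaluate_policy(declared_purposes: set) -> str:
        """Return which indicator to show: 'green' (match) or 'red' (mismatch)."""
        conflicts = declared_purposes & disallowed_purposes
        return "red" if conflicts else "green"

    print(evaluate_policy({"site-administration", "personalization"}))  # green bird
    print(evaluate_policy({"personalization", "telemarketing"}))        # red bird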

Evaluation Criteria

Before we can build truly effective security and privacy indicators, we must develop a better understanding of where our current indicators succeed, where they fall short, and why. Thus, I propose a set of evaluation criteria for security and privacy indicators.

Does the indicator behave correctly when not under attack? Does the correct indicator appear at the correct time without false positives or false negatives? This is mainly a question of whether the software is capable of properly detecting the condition associated with the indicator. This criterion is relatively easy to test by simply putting the software into a variety of known states and checking to see which indicator is displayed.
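
A small table-driven test along the following lines can exercise this criterion; the states, icon names, and get_indicator interface are hypothetical, chosen only to show the shape of such a test.

    # Hypothetical test harness: drive the software into known states and check
    # which indicator it displays. State names and icons are illustrative only.
    EXPECTED = {
        "ssl_established": "closed_padlock",
        "no_ssl": "open_padlock",
        "cookie_blocked": "flagged_cookie",
    }

    def check_indicators(get_indicator):
        """Return a list describing every state whose indicator is wrong."""
        failures = []
        for state, expected_icon in EXPECTED.items():
            shown = get_indicator(state)
            if shown != expected_icon:
                failures.append(f"{state}: expected {expected_icon}, got {shown}")
        return failures

    # Stubbed implementation that never shows the cookie indicator.
    stub = {"ssl_established": "closed_padlock"}.get
    print(check_indicators(lambda state: stub(state, "open_padlock")))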

Does the indicator behave correctly when under attack? Is the indicator resistant to attacks designed to deceive the software into displaying an inappropriate indicator? Evaluating an indicator against this criterion requires first coming up with a threat model and a list of possible attack vectors.

Can the indicator be spoofed, obscured, or otherwise manipulated so that users are deceived into relying on an indicator provided by an attacker rather than one provided by their system? Evaluating an indicator against this criterion also requires coming up with a list of possible attack vectors.

Do users notice the indicator? Users may not notice an indicator that is too small, surrounded by more interesting icons, covered up by other windows, or positioned somewhere on the screen where users seldom look.

Do users know what the indicator means? Users may have no idea what an indicator means, or they may have mistaken ideas about what it means.

Do users know what they are supposed to do when they see the indicator? Even if users know what an indicator means, they may not know what they should do when they see it. It is important to test whether users understand how an indicator relates to their own activities. This and the previous two criteria can be tested by showing users these indicators in a lab study or by surveying users after they have been using a piece of software for some time to determine whether they remember seeing these indicators and what they interpreted them to mean.

Do they actually do it? Computer users don’t always act on indicators, even if they notice them, know what they mean, and know what they are supposed to do when they see them. Users may ignore indicators because they are too busy or find it too inconvenient to take the advised steps, or because they do not trust the indicator or believe the indicator has appeared in error [6].

Do they keep doing it? Over time users may stop noticing indicators that they have grown accustomed to seeing. Or they may find certain indicators too disruptive to their work and disable them. While a lab study can often help evaluate whether users do what they are supposed to do when they see an indicator, it is difficult to simulate a user’s natural work environment, with all its accompanying time pressures and distractions, in the lab. It is also difficult to account for effects that emerge only after repeated use of a piece of software over a long period of time. Thus, determining whether users keep taking the appropriate actions after they leave the lab may require observing the same users over an extended period, or instrumenting their work or home computers to record their actions or take periodic snapshots of certain pieces of system state information.
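
The instrumentation mentioned above need not be elaborate. A minimal sketch might simply append timestamped snapshots of indicator-related state to a local log for later analysis; read_indicator_state below is a hypothetical stand-in for whatever query the browser or security tool actually supports.

    # Minimal sketch of periodic state snapshots written to a local log file.
    # read_indicator_state is a placeholder; a real study would query the
    # browser or security tool being evaluated.
    import json
    import time
    from datetime import datetime, timezone

    def read_indicator_state() -> dict:
        return {"padlock_visible": True, "warnings_dismissed_today": 2}

    def snapshot_loop(path="indicator_log.jsonl", interval_s=3600, count=3):
        """Append a timestamped snapshot every interval_s seconds."""
        with open(path, "a") as log:
            for _ in range(count):
                record = {"time": datetime.now(timezone.utc).isoformat(),
                          **read_indicator_state()}
                log.write(json.dumps(record) + "\n")
                log.flush()
                time.sleep(interval_s)

    snapshot_loop(interval_s=1, count=2)  # short interval for demonstration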

How does this indicator interact with other indicators that may be installed on a user’s computer? Even if we are able to come up with a set of indicators that are each demonstrably effective, it is not clear how effective they will be when used in combination. Indicators may clutter the screen and users may have trouble remembering what each signifies. After seeing my lab’s design proposal for a browser-based anonymity indicator in the form of a masked fox, one of my colleagues commented on the veritable Noah’s Ark of privacy and security indicators that our lab seemed to be creating. First privacy birds, then anonymity foxes… integrity-checking whales and intrusion-detection armadillos must not be far behind. If every privacy or security problem requires its own indicator, users may grow confused when indicators display seemingly (or actually) contradictory information.

The Role of Indicators

Indicators are potentially most effective when used to provide information to users that will aid them in making a decision that they want to be able to make themselves or that software is unable to make well. If the software is capable of making a good decision and users are happy to trust it to do just that, the indicator may be unnecessary. Even in cases where users desire indicator information, multiple indicators might be replaced by a single indicator that provides only the most critical information, with more detailed information available if the user requests it.

References

1. Adelsbach, A., Gajek, S., and Schwenk, J. 2005. Visual spoofing of SSL protected web sites and effective countermeasures. In Proceedings of the 1st Information Security Practice and Experience Conference (Singapore, April 11-14, 2005). ISPEC 2005. LNCS, Springer-Verlag, Heidelberg.

2. Cranor, L., Guduru, P., and Arjula, M. 2006. User interfaces for privacy agents. ACM Trans. Comput.-Hum. Interact. (2006).

3. Dhamija, R., Tygar, J. D., and Hearst, M. 2006. Why phishing works. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '06). ACM Press, New York, NY.

4. Friedman, B., Lin, P., and Miller, J. K. 2005. Informed consent by design. In Cranor, L. and Garfinkel, S., eds., Security and Usability. O’Reilly Media, Inc., 477-504.

5. Schneier, B. 2000. Inside risks: semantic network attacks. Commun. ACM 43, 12 (Dec. 2000), 168. DOI= http://doi.acm.org/10.1145/355112.355131

6. Wu, M., Miller, R. C., and Garfinkel, S. L. 2006. Do security toolbars actually prevent phishing attacks? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '06). ACM Press, New York, NY.

7. Ye, Z., Smith, S., and Anthony, D. 2005. Trusted paths for browsers. ACM Trans. Inf. Syst. Secur. 8, 2 (May 2005), 153-186. DOI= http://doi.acm.org/10.1145/1065545.1065546

Author

Lorrie Faith Cranor
Carnegie Mellon University
lorrie@cs.cmu.edu

About the Author

Lorrie Faith Cranor (http://lorrie.cranor.org/) is an associate research professor at Carnegie Mellon University where she directs the CMU Usable Privacy and Security Laboratory (CUPS). She came to CMU in 2003 after seven years at AT&T Labs-Research. She co-edited the book Security and Usability (O’Reilly 2005) and founded the Symposium on Usable Privacy and Security (SOUPS). She also directs an NSF-funded project that is studying the human aspects of semantic attacks.


 
