Randolph Bias, Douglas Gillan
Someone once said, “Gravity—it’s not just a good idea, it’s the law.” In the art-and-science world of user interface (UI) design, the first law we consider—almost the only “law” that gets invoked in our still-too-subjective practice—is Fitts’s Law: The time to acquire a physical target is a function of the size of that target and its distance.
Why would a forum on evaluation and usability devote space to Fitts’s Law? In the first place, much of usability is concerned with selecting—both cognitively and physically—the correct item from a screenful of icons, links, menu items, and other virtual objects. In the second place, we believe strongly that the next step for the field of usability/user-centered design/user experience is a move toward better quantification, better empirically based decision-making regarding the design of UIs.
So, as any toddler knows, but as has been well quantified by Paul Fitts and many researchers since, we will be quicker to touch that big ball over here than that smaller ball, way over there.
This from Wikipedia:
Fitts’s Law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device. Fitts’s Law has been shown to apply under a variety of conditions, with many different limbs (hands, feet, the lower lip, head-mounted sights, eye gaze), ... input devices, physical environments (including underwater), and user populations (young, old, special educational needs, and drugged participants) [1].
The implications for and the applications of this fundamental law of UI design have been unmistakable. Commonly needed buttons and other widgets should be larger and nearer the top or nearer that point at which the cursor likely resides immediately preceding this need. Less common acts that might be expected to take longer can be completed by acquiring—clicking on—smaller, more remote targets.
Done and done, as they say. We didn’t fight the law, and the law (and users) won. Woohoo!
However. What if there might be other variables involved in this equation, other variables that likewise contributed to the speed of acquiring a target but that were not captured by the earlier research efforts? Specifically, we hypothesize that the cost of an error in attempting to acquire a target is a key determinant of target acquisition speed.
We came to this when we noticed a difference in our own behavior when dialing a cellphone versus dialing a landline phone. With the latter, we tended to slow down as we dialed the final digits of a phone number, whereas with a cellphone, this tendency wasn’t present. The reason seemed obvious. A landline phone has no backspace or erase button; for any error that the dialer makes, the only option is to terminate the call and start over—a relatively costly error in both time and effort. And the deeper one is into dialing a number, the more time has been invested, and thus the bigger the cost of making an error. However, with a cellphone, with its backspace or erase button, the cost of mis-entering any digit is always just two button pushes: backspace and the correct number key.
More than three decades of research on Fitts’s Law applied to human-computer interaction, starting with Card, English, and Burr, has yielded a steady and increasingly fine-tuned collection of devices and conditions that can be described by (Alan Traviss Welford’s version of) Fitts’s Law:

MT = K₀ + K log₂(D/S + 0.5)

where MT is the movement time, D is the distance to the target, S is the size of the target along the direction of movement, and K₀ and K are empirically determined constants—or some slightly amended version thereof.
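For the concretely minded, Welford’s formulation, MT = K₀ + K log₂(D/S + 0.5), takes only a few lines of code. The constants below are illustrative placeholders, not fitted values from any of the studies discussed here:

```python
import math

def movement_time(distance, size, k0=0.1, k=0.15):
    """Predicted movement time in seconds under Welford's version of
    Fitts's Law: MT = K0 + K * log2(D/S + 0.5).

    k0 and k are device-specific constants; the defaults here are
    illustrative placeholders, not empirically fitted values.
    """
    return k0 + k * math.log2(distance / size + 0.5)

# A big, near target is acquired faster than a small, far one:
near_big = movement_time(distance=4, size=2)    # ID = log2(2.5)
far_small = movement_time(distance=32, size=1)  # ID = log2(32.5)
```

The slope K captures how sharply movement time grows with task difficulty; it is exactly this slope that the studies below fit empirically, device by device.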
In their original study of four distinct devices, Card et al. found:
The time for the step keys increases rapidly as the distance increases, while the time for the text keys increases somewhat less than as the log of the distance, owing to the existence of keys for moving relatively large distances with a single stroke. Again the mouse is the fastest device, and its advantage increases with distance.
Also they found that “[o]f the four devices tested, the mouse had the lowest overall error rate, 5%; the step keys had the highest, 13%.”
So there is no speed-accuracy tradeoff. Rather, high speeds tend to be associated with low error rates. We wonder what fraction of Fitts’s constant K might be accounted for by the cost of an error, thereby allowing for yet more of the variance to be accounted for by an amended Fitts’s Law?
Card et al. say, somewhat cryptically, “At the end of each block of twenty trials [subjects] were given feedback on the average positioning time and average number of errors for those trials. This feedback was found to be important in maintaining subjects’ motivations.” What was the manifestation of unmaintained subject motivation? What if, say, motivation remains high, but an error becomes more costly? Card et al. later opine, “Of the four devices, the mouse is clearly the most ‘compatible’ for this task ... Thus it would be expected to be easier to use, put lower cognitive load on the user, and have lower error rates.” Though the instructions are not offered, the article says “the subject pressed a button ‘selecting’ the target as he would were he using the device in a text editor.” We note that for the mouse and the joystick, an overshoot is fixed via a reciprocal action that requires no repositioning of the hand. For the step keys, an overshoot may be corrected via a reciprocal action, though some repositioning of the hands would potentially be required. For the text keys, which yielded the longest latencies, an overshoot correction may not involve a reciprocal action (e.g., when one too many “paragraph” keystrokes requires one or more reverse “line” keystrokes) and definitely requires hand repositioning. Our point is that perhaps it is the relatively high cost of an error that determines, or at least adds to, the relatively long latencies for target acquisitions with text keys.
In a similar subsequent study, Brian Epps tested six devices and found that “the best [Fitts’s Law] fits are most evident with the trackball, mouse, and to a lesser extent, the absolute touchpad. Although the regressions for the rate-controlled joysticks produce relatively good R² values, [other statistics] are high in comparison to the trackball and mouse. This is probably due to the high regression slopes associated with the joysticks.”
High regression slopes reflect a larger influence of increased distance and reduced target size; that is, harder tasks become disproportionately harder. Perhaps as more time (even milliseconds) is invested, the user takes more care not to make an error, realizing that recovery from that error will mean an even more punitive situation. In the Epps study, “If the trial was unsuccessful ... the subject had to reposition the cursor within the target boundaries and press the input button again.” Might this help account for the relatively poor fits for the joysticks, or any high-cost device? That is, perhaps the harder it is to carry out a task, the more care one takes, because one sure doesn’t want to have to carry out such a (relatively) long task again.
Card et al. had found feedback on speed and accuracy “to be important in maintaining subjects’ motivations,” but MacKenzie, Sellen, and Buxton say that in their test, “Although instructed to move as quickly and accurately as possible, performance feedback was not provided” [5]. These authors studied dragging and dropping as well as pointing and clicking, and found that “Subjects were observed to occasionally ‘drop’ the object during the dragging task, not through normal motor variability, but because of difficulty in [holding the button down during the drag] ... Thus ‘dropping errors’ were distinguished from motor variability errors.” Here again, speeds echoed error rates; slow devices also yielded more errors. Or maybe the way to think of it is that devices that led to more errors led to longer latencies; subjects take more care not to have to repeat a long task.
So, sure, the size and distance of a target are key determinants of the speed with which a user can acquire that target. But we suspect that the cost of an error will also play a large role in that target acquisition speed. That 1-cm-square keyboard key that is 1 cm away from your pinky finger will take you less time to hit if you are simply entering the first letter of the sentence “Power to the user,” and you can correct the error with a single backspace keystroke, than if it represents the action “Power down the nuclear-power plant.”
While our research is yet incomplete, we offer just a tease of data here, in hopes of inspiring others to engage in this line of study and help quantify the cost of an error, as we look to amend Fitts’s Law. We are trying two empirical approaches.
In one paradigm we have employed a traditional Fitts’s Law test: On each trial, the screen shows a 1-cm-diameter black dot (the starting point) and a rectangle (the target). The starting point is always on the left of the screen, but its specific position varies from trial to trial. Targets in this experiment were rectangles with a height of 2 cm and a width that varied from trial to trial. Ten different target widths and 19 different movement distances were tested, thus comprising a range of indexes of difficulty (ID). Test participants were told their goal was to acquire as many targets as they could, as quickly and accurately as possible. The independent variable we introduced was a penalty for making an error. In the penalty condition, participants were told that missing the target would result in a time-out period in which they would be presented with a black screen and would not be able to proceed to the next trial for 30 seconds. In the non-penalty condition, there were no consequences of a missed target. Trials were blocked in the penalty and non-penalty conditions, and the two conditions were counterbalanced for order.

Consistent with Fitts’s Law, response times increased as ID increased. In addition, mean response times for successful trials were significantly greater in the penalty condition than in the non-penalty condition (mean RTs = 1077 msec and 962 msec, respectively; F(1,9) = 7.24, p = .027). Also important for the hypothesis underlying this research, response times in the penalty condition were more sensitive to ID than in the non-penalty condition, resulting in a higher regression parameter estimate for the penalty condition and a significant ID x penalty condition interaction (F(1,9) = 14.94, p = .004). So, penalties made people both slower overall and more sensitive to the external control of distance and target size.
Accuracy data showed that more misses occurred in the non-penalty condition than in the penalty condition—121 and 104 misses, respectively.
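The slope comparison behind the ID x penalty interaction amounts to fitting RT = K₀ + K·ID separately per condition and comparing the K values. A minimal sketch follows; the data are fabricated for illustration and are not our measurements:

```python
def fit_fitts(ids, rts):
    """Ordinary least-squares fit of RT = k0 + k * ID.
    Returns (k0, k), the intercept and slope."""
    n = len(ids)
    mean_id = sum(ids) / n
    mean_rt = sum(rts) / n
    k = sum((x - mean_id) * (y - mean_rt) for x, y in zip(ids, rts)) \
        / sum((x - mean_id) ** 2 for x in ids)
    k0 = mean_rt - k * mean_id
    return k0, k

ids = [1.0, 2.0, 3.0, 4.0, 5.0]          # indexes of difficulty
rt_no_penalty = [820, 880, 930, 1000, 1060]   # hypothetical msec
rt_penalty = [840, 940, 1040, 1150, 1260]     # hypothetical msec

_, slope_no_penalty = fit_fitts(ids, rt_no_penalty)
_, slope_penalty = fit_fitts(ids, rt_penalty)
# A steeper slope in the penalty condition mirrors the reported
# pattern: penalties make response times more sensitive to ID.
```

Such per-condition slopes are what a cost-of-error term in an amended Fitts’s Law would need to predict.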
A second paradigm we’ve tried penalizes the user for “coloring outside the lines,” that is, for taking a path from the starting point to the target that deviates from the most direct, straight-line path. This approach uses an unmarked path of the same vertical size as the starting point, through which the test participant must move the cursor. Subjects had to move from the starting point to the target, with movement distance and target size (in the direction of movement) varied. For one group, if the movement went outside this fairly narrow path in the vertical dimension, the user received a penalty: a 30-second timeout (blank screen, no ability to move to the next trial). For the control condition, there was no penalty for moving outside of the path. Subjects were informed whether or not a block of trials would entail penalties for straying outside of the path.
As with penalties for missing the target, people were slower in moving the cursor when they received penalties for moving outside of the direct path. In addition, the non-penalty condition produced data that are fit well by Fitts’s Law (R²s all above .90), but in the 30-second penalty condition, the fit of the data by Fitts’s Law is poor.
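The goodness-of-fit criterion invoked here (R² above .90) is the coefficient of determination for a straight-line fit of RT on ID. A short sketch, with hypothetical data standing in for real measurements:

```python
def r_squared(ids, rts):
    """Coefficient of determination for an ordinary least-squares
    fit of RT = k0 + k * ID. Values near 1.0 mean the straight line
    explains nearly all the variance in response times; low values
    mean Fitts's Law alone is not describing the behavior well."""
    n = len(ids)
    mean_id = sum(ids) / n
    mean_rt = sum(rts) / n
    k = sum((x - mean_id) * (y - mean_rt) for x, y in zip(ids, rts)) \
        / sum((x - mean_id) ** 2 for x in ids)
    k0 = mean_rt - k * mean_id
    ss_res = sum((y - (k0 + k * x)) ** 2 for x, y in zip(ids, rts))
    ss_tot = sum((y - mean_rt) ** 2 for y in rts)
    return 1 - ss_res / ss_tot

# Nearly linear (hypothetical) data fit well; erratic data do not:
clean = r_squared([1, 2, 3, 4, 5], [800, 860, 925, 980, 1040])
noisy = r_squared([1, 2, 3, 4, 5], [900, 1200, 850, 1400, 1000])
```

A penalty condition whose R² collapses in this way suggests users are no longer governed by distance and size alone, which is precisely the opening for a cost-of-error term.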
The traditional approach to human interaction with technology embodied in Fitts’s Law has been to emphasize the physical features of the technology. In contrast, as we have moved toward a focus on user experience, various aspects of users’ motivation, cognitive abilities, and tasks deserve greater attention in the development of scientific models and their application in the design of interfaces. The research described in this report identifies one aspect of user motivation—avoiding penalties—that influences the speed with which users move objects in an interface. As we and others go forward with this research and discover specific improvements to Fitts’s Law with certain punishments associated with certain target sizes and distances, our work will inform product design more richly as we take costs of errors into account. We invite the exploration of this and other user-focused factors that affect movement and other elements of interaction with technology.
1. Fitts’s law. Wikipedia, 2015; https://en.wikipedia.org/wiki/Fitts%27s_law
5. MacKenzie, I.S., Sellen, A., and Buxton, W. A comparison of input devices in elemental pointing and dragging tasks. Proc. of the CHI '91 Conference on Human Factors in Computing Systems. ACM, New York, 1991, 161–166.
Randolph Bias is a professor in the School of Information at the University of Texas at Austin, and is currently a visiting scientist with the Institute for Human and Machine Cognition. He wonders what kind of thinking we were engaged in before “design thinking.” firstname.lastname@example.org
Douglas Gillan is a professor and head of the psychology department at North Carolina State University. His training in psychology focused on biopsychology and cognition. He has worked both in industry and academia on information visualization and human-technology interaction. He is a Fellow of the Human Factors and Ergonomics Society. email@example.com
©2015 ACM 1072-5220/15/11 $15.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.