XIX.2 March + April 2012

Personalized dynamic accessibility

Krzysztof Gajos, Amy Hurst, Leah Findlater

It is a cliché to point out that computers and the Internet have entered all parts of our lives. We need them at work; governments urge us to file our taxes online; students are required to use them in their classes; online businesses offer better deals than their brick-and-mortar counterparts; and it is becoming more difficult to maintain relationships without access to social networking and social media sites. Public discourse about accessibility focuses on the assertion that access to these technologies is essential for meaningful participation in today’s society. Unfortunately, compliance with accessibility guidelines and standards is still not a part of mainstream software engineering and user interface design practice. As a result, we must remind, beg, and threaten developers to make software accessible. But is this sufficient? Are we blinding ourselves to tomorrow’s challenges as we fight yesterday’s battles?

We argue that it is both the possibility and the efficiency of access that are necessary for meaningful and equitable participation in society. As larger fractions of our personal and professional activities are conducted using computers, inefficient access limits what an individual can achieve. We propose a long-term vision of Personalized Dynamic Accessibility: We believe that user interfaces will enable more effective interaction if they reflect each person’s unique abilities, devices, and environment. Consider the following two examples.

Bob (we changed his name to protect his identity) is a college student. He has limited use of his hands as a result of spinal cord injury. He is able to perform many UI operations quickly except for those that require fine dexterity, such as selecting from hierarchical pull-down menus, scrolling, or resizing images or windows. Many of his classes, particularly those in math, require him to use specialized computer software for his assignments. He is able to use the required software but is much slower at it than his able-bodied peers. His grades suffer as a consequence.

Ken has spastic cerebral palsy and has little control over his limbs or his speech. Despite these impairments, he obtained a college degree in computer science and runs his own IT consulting business. He controls a trackball with his chin and uses a head-mounted wand to type. Just like Bob, Ken finds it difficult to perform UI operations that require precise control of the mouse-pointer position. Consequently, he needs nearly three times as much time as his able-bodied counterparts to operate typical graphical user interfaces and has to limit the number of clients he can accept.

One might argue that the inefficiencies Bob and Ken experience are inherent to their conditions, but this is not the case. Bob and Ken can perform many UI operations quickly but stumble on those that require fine dexterity. However, we found that they could interact with custom-made interfaces that had large interactors and minimal scrolling (similar to the bottom of Figure 1) faster and more easily than with typical interfaces.

Figure 1. Ability-based adaptation in Supple: (top) the default interface for controlling lighting and A/V equipment in a classroom; (bottom) an interface for the same application automatically generated by Supple for a user with impaired dexterity, based on a model of her actual motor abilities.

The current adaptive technologies that Bob and Ken use make it possible for them to interact with most software, but these technologies are designed on the premise that our software is immutable and that users must adapt themselves to the software. With Personalized Dynamic Accessibility, we aim to reverse this situation. Our vision rests on the following four pillars:

  • User interfaces should share the burden of adaptation. Interactions adapted to an individual’s abilities and input devices can improve a user’s range of activities, their efficiency, and subjective perceptions of the experience. Such specialized interfaces do not eliminate the need for assistive devices but offer the promise of more efficient interaction and the ability to perform more activities. For example, VoiceDraw has an interface designed specifically for vocal control [1]. It has enabled a paralyzed artist to create a broader range of art than he could with his existing voice-recognition software and assistive technology.
  • Personalization. Due to the diversity of abilities, needs, and assistive devices, no single user interface adaptation can address the needs of all users with impairments. For example, adaptations that will enable easier interactions for people with severely impaired dexterity may not be useful to users with reduced strength and limited range of motion. Personalized Dynamic Accessibility thus relies on mechanisms for assessing a user’s unique needs and functional abilities and then translating these assessments into personalized user interface design adaptations.
  • Dynamic adaptation. Needs and abilities change over time due to fatigue, medication, disease progression, or the situation at hand. To be effective, personalized adaptations must reflect the dynamic nature of these abilities and needs.
  • Scalability. Solutions that require access to scarce resources (such as designers and experts) are not feasible, because there are many individuals with unique abilities and needs. The success of Personalized Dynamic Accessibility depends on novel approaches that leverage automation, crowdsourcing, and user communities, as well as innovations that empower end users to create and modify their interfaces and to share these designs.

Our vision of Personalized Dynamic Accessibility extends beyond permanent impairments. In particular, functional abilities among non-disabled users can be significantly affected by activity or context. Compared with a stationary user, for example, a person operating a mobile device while walking will experience impaired dexterity, increased cognitive load, reduced visual acuity, and fragmented attention [2,3]. Such situational impairments currently limit the effectiveness of interaction for mobile users but may be mitigated by user interfaces that reflect the situation-dependent changes in a user’s effective abilities. Many other researchers have pointed out the connections between the challenges faced by mobile users and people with permanent impairments, but few strong intellectual bridges have been established between the accessibility and mobile HCI communities. With a focus on user interface adaptations and dynamic changes in users’ conditions, Personalized Dynamic Accessibility aligns the concerns of the two communities more closely than before.


The vision of Personalized Dynamic Accessibility is approach-agnostic. We expect that some solutions will give users full control of how the adaptations are performed, while others will make the adaptation process entirely transparent. Some approaches may rely on automation, while others may turn to crowdsourcing or community-based collaborative design tools. The vision expressed here also builds on ideas developed by researchers in many fields other than accessibility and mobile HCI, including user modeling, intelligent user interfaces, and CSCW. Inherently interdisciplinary, Personalized Dynamic Accessibility does not have a home in any of the existing research communities. Here, as we did at our earlier workshop at CHI 2011, we aim to elucidate the shared aspects of the vision and demonstrate the potential value of closer communication and collaboration.

Tangible Progress

To illustrate the scope of Personalized Dynamic Accessibility and its interdisciplinary nature, we briefly present several representative research efforts.

Automatically assessing users’ functional abilities. Automatic approaches to personalizing the user interface must first assess the user’s functional abilities and subsequently predict effective interface modifications. As an example, Hurst and colleagues demonstrated that machine-learning techniques could be used to distinguish between mouse movements performed by able-bodied individuals and individuals whose motor abilities were impaired by age-related factors or Parkinson’s disease [4]. The same methods were also used to automatically predict whether individuals would benefit from an adaptive software technique.
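To make the idea concrete, here is a minimal sketch of this kind of pipeline: extract coarse kinematic features from logged pointing movements and feed them to an off-the-shelf classifier. The features and placeholder data are illustrative assumptions, not the features or methods used in [4].

```python
# Sketch: classify pointing movements as typical vs. impaired from coarse
# kinematic features. Features, data, and labels here are illustrative
# placeholders, not those used by Hurst et al. [4].
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def trace_features(xs, ys, ts):
    """One feature row per pointing movement: path length, duration,
    curvature (path length / straight-line distance), and jitter."""
    dx, dy = np.diff(xs), np.diff(ys)
    step = np.hypot(dx, dy)
    path_len = step.sum()
    direct = np.hypot(xs[-1] - xs[0], ys[-1] - ys[0])
    return [path_len, ts[-1] - ts[0], path_len / max(direct, 1e-6), step.std()]

# In practice: X = [trace_features(*m) for m in logged_movements].
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # placeholder feature matrix
y = rng.integers(0, 2, size=200)     # placeholder labels (1 = impaired)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # ~chance on random data
```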

One challenge with assessing functional ability is that much past work has relied on observations collected during controlled lab experiments, which Hurst and colleagues have demonstrated are not representative of real-world performance [5]. We believe it is important to model text-entry and pointing abilities from observations made unobtrusively while users perform their own tasks. With accurate, up-to-the-minute models of what a user can do, computers can guide that user toward the most promising adaptive settings or offer to generate optimal adaptations automatically.
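As one concrete possibility, pointing ability could be tracked by continuously refitting Fitts' law, MT = a + b log2(D/W + 1), to movements logged during everyday use. The sketch below assumes a hypothetical logging format; the model itself is the standard Shannon formulation.

```python
# Sketch: maintain a per-user Fitts' law model, MT = a + b * log2(D/W + 1),
# refit from pointing observations logged during normal use. The logging
# interface is an assumption for illustration.
import numpy as np

class FittsModel:
    def __init__(self):
        self.obs = []            # (index_of_difficulty, movement_time)

    def record(self, distance_px, width_px, time_s):
        iod = np.log2(distance_px / width_px + 1)   # Shannon formulation
        self.obs.append((iod, time_s))

    def fit(self):
        ids, times = map(np.array, zip(*self.obs))
        b, a = np.polyfit(ids, times, 1)            # least-squares line
        return a, b

    def predict(self, distance_px, width_px):
        a, b = self.fit()
        return a + b * np.log2(distance_px / width_px + 1)

m = FittsModel()
m.record(400, 20, 1.10); m.record(150, 40, 0.55); m.record(600, 10, 1.60)
print(m.predict(300, 30))    # predicted movement time in seconds
```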

Automatically adapting user interfaces to a user’s abilities. Ability assessments can be used to adapt the interface automatically or in collaboration with the user. Supple is an example of such a system (Figure 1) [6]. It uses decision-theoretic optimization: Given a model of how quickly a particular user performs basic user interface operations, it automatically generates the interface predicted to be fastest for that user to use. Despite a large space of possible designs, Supple finds the optimal one in seconds, and the resulting interfaces have improved both the performance and the satisfaction of users with motor impairments. Automatic user interface generation is a scalable approach and one that enables highly personalized and dynamic solutions. One of its limitations, however, is that the resulting interfaces are unlikely to capture all the nuances of an application’s semantics, and the approach is fundamentally limited by the types of abilities that can be modeled.
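The core idea can be illustrated with a toy optimization: choose one rendering per interface element so that total predicted interaction time is minimized under a screen-space budget. Supple itself searches a much larger design space with a richer, user-specific cost model [6]; the elements, sizes, and timings below are invented.

```python
# Sketch of decision-theoretic interface generation: pick one rendering per
# UI element to minimize total predicted interaction time within a
# screen-space budget. Brute force here; SUPPLE uses smarter search [6].
from itertools import product

# element -> list of (rendering, pixels_needed, predicted_seconds_per_use)
options = {
    "volume":  [("slider", 120, 0.9), ("spin_buttons", 60, 2.4)],
    "channel": [("list", 200, 0.7), ("dropdown", 40, 1.8)],
    "power":   [("big_button", 80, 0.4), ("small_button", 30, 0.8)],
}
usage_freq = {"volume": 5, "channel": 3, "power": 1}   # expected uses
BUDGET = 300                                            # pixels available

best = None
for combo in product(*options.values()):
    size = sum(pixels for _, pixels, _ in combo)
    if size > BUDGET:
        continue                                        # infeasible design
    cost = sum(f * t for f, (_, _, t) in zip(usage_freq.values(), combo))
    if best is None or cost < best[0]:
        best = (cost, combo)

print(best)   # lowest-cost feasible set of renderings
```

A per-user model, such as the Fitts fit above, would supply the predicted-time column, which is how the same optimization yields different interfaces for different users.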

Empowering user communities to design and share specialized interfaces. Allowing users to redesign the interfaces they use most often is an alternative to relying on automation. Challenges to this approach include enabling deep end-user redesign of existing user interfaces and providing tools that let communities with similar abilities and needs share and collaborate on redesigns. Collaborative user interface design is already available for some gaming platforms, and dedicated gamers have developed interfaces optimized for particular strategies or game tasks. In the non-gaming world, the AdaptableGIMP project has developed infrastructure for community-based UI innovations [7]. In this project, users can develop variants of the GIMP toolbox specialized for different tasks and easily post their designs on a shared wiki so others can reuse these adaptations. We see user-driven design and sharing of specialized interfaces as an important component of our vision, and one that remains largely unexplored.
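One plausible ingredient of such infrastructure is a declarative, shareable description of a specialized interface. The sketch below shows a hypothetical task-set format in the spirit of AdaptableGIMP's shared toolbox customizations [7]; it is not the project's actual schema.

```python
# Sketch: a shareable, declarative description of a task-specific toolbox,
# in the spirit of AdaptableGIMP's community task sets [7]. The schema and
# field names are hypothetical, not AdaptableGIMP's actual format.
import json

task_set = {
    "name": "red-eye-removal",
    "author": "community-member-42",
    "tools": ["zoom", "ellipse_select", "color_picker", "paintbrush"],
    "notes": "Minimal toolbox for quick red-eye fixes.",
}

shared = json.dumps(task_set, indent=2)   # post this to the shared wiki
loaded = json.loads(shared)               # others load and apply it
print(loaded["tools"])
```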

Adapting interfaces to the situational impairments experienced by mobile users. There are many parallels between permanent impairments and the challenges faced by mobile users on the go. The Walking UI prototype (Figure 2) provides an example of adapting to the changing abilities of mobile users: It provides different UIs for stationary and walking use [8]. Both versions share a similar design so that users do not have to learn two separate UIs, but the walking variant has larger interactors to compensate for mobile users’ impaired dexterity and larger fonts for song titles to accommodate reduced reading ability. It also manipulates the visual saliency of important information. The walking variant illustrates two differences between the design requirements for mobile users and for users with permanent impairments. First, because mobile users’ abilities change frequently, adaptations must support the transfer of user skills between interfaces. Second, manually designed adaptations for the most common mobile situations may be feasible, because most users will experience similar impairments. The latter point makes mobile situational impairments a productive domain in which to begin exploring the design space of user interface adaptations for dexterity, visual acuity, and attention impairments.
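A detector for such an adaptation can be quite simple. The sketch below switches between stationary and walking UI parameters based on the variability of accelerometer readings; the threshold, window size, and UI parameters are invented for illustration, and a deployed detector would need per-device tuning.

```python
# Sketch: choose UI parameters based on whether the user appears to be
# walking, in the spirit of the Walking UI [8]. Threshold, window, and
# UI values are assumptions, not taken from the prototype.
import numpy as np

WALK_THRESHOLD = 1.5   # std dev of acceleration magnitude (m/s^2), assumed

def is_walking(accel_window):
    """accel_window: (n, 3) array of recent accelerometer samples."""
    magnitude = np.linalg.norm(accel_window, axis=1)
    return magnitude.std() > WALK_THRESHOLD

def choose_ui(accel_window):
    if is_walking(accel_window):
        # Same layout as the stationary UI so skills transfer; just
        # larger targets and fonts for reduced dexterity and acuity.
        return {"target_px": 64, "font_pt": 18}
    return {"target_px": 40, "font_pt": 12}

still = np.random.normal(0, 0.05, size=(50, 3)) + [0, 0, 9.8]
walking = np.random.normal(0, 2.0, size=(50, 3)) + [0, 0, 9.8]
print(choose_ui(still), choose_ui(walking))
```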


Moving Forward

At CHI 2011 we organized a workshop on Personalized Dynamic Accessibility (though at the time we called it Dynamic Accessibility). Our goal was to provide an opportunity for researchers with diverse backgrounds but a shared interest in this topic to develop a common understanding and prioritize the research agenda.

One broad consensus among participants was that automated approaches to adaptation will likely be part of the solution landscape for Personalized Dynamic Accessibility. However, further discussion about the degree and scope of automation requires systematic exploration of the design space of adaptive interactions. How should the user and the system share the initiative in the adaptation process? How should adaptive mechanisms manage the trade-off between a user’s familiarity with the current state of the adapted user interface and the changes in her condition that make that adaptation suboptimal? Is there a tension between optimizing the interaction for a user’s abilities and the possible rehabilitation benefit of working with a slightly more challenging design? And what would be the psychological effects of a user interface that continuously adapts in a way that makes the user’s declining abilities apparent?

Another area where progress is urgently needed is in measuring and modeling users’ abilities. We need modeling approaches that can capture the unique abilities of different individuals from a small number of observations. We need models that can be used to answer questions relevant to design: Will the user be able to perform a particular set of actions with a particular design? Will she be able to do so efficiently and with few errors? Existing models of motor performance are largely limited to simple pointing interactions, while other common desktop operations, such as scrolling, have not been studied in as much depth. Going beyond desktop interactions, no models currently exist for reasoning about the accuracy and speed of modern multitouch gestures pervasive in mobile computing. Modeling perceptual and cognitive abilities is even more challenging. To enable progress, we need both methods and implemented tools that others can build on.

A critical enabling resource in research on Personalized Dynamic Accessibility is data. Whether the work involves a novel modeling approach or an automatic adaptation mechanism, abundant data representative of diverse individuals is key. We can increase availability of this critical resource by publishing data along with our papers or by developing methodologies for conducting studies remotely. Sharing human subjects’ data is challenging because of ethical and regulatory requirements, but it is possible: Many IRBs will allow properly anonymized data to be retained indefinitely and shared with other researchers. Such sharing requires foresight, but once it becomes part of our research practice, it adds little extra overhead. Remote experimentation is also a powerful enabler because sufficient numbers of participants with a specific set of abilities may not be available locally. Such experimentation is also challenging because the experimenter must give up some control over the study environment. Here, recent developments in the crowdsourcing community may provide insight into how to effectively monitor and verify remote participants’ performance.
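As a small illustration of what shareable, anonymized logs might look like, the sketch below replaces identifiers with salted hashes and converts absolute timestamps to relative ones. The field names are hypothetical, and any real anonymization plan must, of course, be worked out with one's IRB.

```python
# Sketch: prepare interaction logs for sharing by hashing identifiers with
# a private salt and shifting timestamps to a relative origin. Field names
# are hypothetical; real plans need IRB-specific review.
import hashlib

SALT = b"study-specific-secret"   # kept private, never published

def anonymize(records):
    t0 = records[0]["timestamp"]
    out = []
    for r in records:
        uid = hashlib.sha256(SALT + r["user_id"].encode()).hexdigest()[:12]
        out.append({"user": uid, "t": r["timestamp"] - t0, "event": r["event"]})
    return out

logs = [
    {"user_id": "p01@example.edu", "timestamp": 1331000000.0, "event": "click"},
    {"user_id": "p01@example.edu", "timestamp": 1331000001.2, "event": "scroll"},
]
print(anonymize(logs))
```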

These goals are ambitious, but we believe we can have substantial real-world impact soon. We expect that within five years we will see tools enabling personalized access to Web-based interactive content. Even sooner, we expect to see mobile applications that subtly respond to how users’ motor, perceptual, and cognitive abilities change with activity and context. In the long term, we see Personalized Dynamic Accessibility as both enabling more equitable participation in today’s society for people with impairments and advancing the effectiveness and quality of mobile interaction.


1. Harada, S., Wobbrock, J.O., and Landay, J. VoiceDraw: A hands-free voice-driven drawing application for people with motor impairments. Proc. of the 9th International ACM SIGACCESS Conference on Computers and Accessibility. ACM, New York, 2007.

2. Barnard, L., Yi, J.S., Jacko, J.A., and Sears, A. Capturing the effects of context on human performance in mobile computing systems. Personal Ubiquitous Comput. 11, 2 (2007), 81–96.

3. Lin, M., Goldman, R., Price, K.J., Sears, A., and Jacko, J. How do people tap when walking? An empirical investigation of nomadic data entry. International Journal of Human-Computer Studies 65, 9 (2007), 759–769.

4. Hurst, A., Hudson, S.E., Mankoff, J., and Trewin, S. Automatically detecting pointing performance. Proc. of the 13th International Conference on Intelligent User Interfaces. ACM, New York, 2008.

5. Hurst, A., Mankoff, J., and Hudson, S.E. Understanding pointing problems in real world computing environments. Proc. of the 10th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS) (Halifax, Nova Scotia). ACM, New York, 2008, 43–50.

6. Gajos, K.Z., Weld, D.S., and Wobbrock, J.O. Automatically generating personalized user interfaces with Supple. Artificial Intelligence 174, 12–13 (2010), 910–950. doi:10.1016/j.artint.2010.05.005

7. AdaptableGIMP;

8. Kane, S., Wobbrock, J., and Smith, I. Getting off the treadmill: Evaluating walking user interfaces for mobile devices in public spaces. Proc. of the 10th International Conference on Human Computer Interaction with Mobile Devices and Services. ACM, New York, 2008.


Krzysztof Z. Gajos is an assistant professor of computer science at Harvard University. His research interests span HCI, AI, and applied machine learning.

Amy Hurst is an assistant professor of information systems at the University of Maryland, Baltimore County. Her research interests span assistive technology, context-aware computing, and interaction design.

Leah Findlater is an assistant professor in the College of Information Studies at the University of Maryland, College Park. Her research interests include personalization, accessibility, and information and communication technologies for development (ICTD).

