
VI.2 March/April 1999
Page: 9

Research alerts


Authors:
Jennifer Bruer

Using Earcons to Provide Navigation Cues in Telephone-Based Interfaces

Stephen A. Brewster
Glasgow Interactive Systems Group
Department of Computing Science, University of Glasgow, Glasgow, G12 8QQ, UK
Tel: +44 (0)141 330 4966, Fax: +44 (0)141 330 4913
Email: stephen@dcs.gla.ac.uk, Web: www.dcs.gla.ac.uk/~stephen/

The following two abstracts are from a recent issue of ACM's Transactions on Computer-Human Interaction (ToCHI). They are included here to alert interactions' readers to research currently being done in the field of human-computer interaction. The complete papers can be found in ACM's Digital Library at http://www.acm.org/pubs/citations/journals/tochi/1998-5/.

Introduction

Telephone-based interfaces (TBIs) are an increasingly important method for people to access information. The telephone is a ubiquitous device and is many people's primary means of entry into the information infrastructure. An increasing number of sophisticated services is offered over the telephone, such as voice mail, electronic banking and even Web pages. However, these interfaces are often slow and hard to use; people get lost navigating the hierarchies of menus they must go through to reach the option or function they want.

Interaction is limited, in most cases, to the keypad for input and synthetic or recorded speech for output. The research we are interested in at Glasgow focuses on how we can improve the output from such systems by using non-speech sound. There are only two types of output possible in a standard TBI: speech and non-speech sounds. In almost all cases non-speech sounds are not used when the person is using the service. This means that an important part of the output channel is being wasted. We have conducted several experiments trying different types of sounds and training methods to understand how non-speech sound can be used to aid navigation.

Non-speech sound

We are using structured non-speech sounds called earcons to provide navigation cues that help users know where they are and avoid becoming lost. Non-speech sounds are used for this so that speech remains free to present information such as bank account details (if speech were also used for navigation, the interaction would be slowed further and the navigation cues would get in the way of the information the user actually wants). With careful design the sounds and the speech do not interfere with each other, in the same way that singers and instruments work together in music.

How are the sounds used for navigation?

Sounds were added to a hierarchy as shown in Figure 1. Instead of having 25 individual sounds, the earcons were designed using hierarchical rules to aid recall. By remembering a small number of rules, users could work out which location in the hierarchy a sound represents. The rules were:

  • Level 1: Neutral flute sound, played continuously
  • Level 2: Each sub-tree was given a different musical instrument timbre, played continuously
  • Level 3: Rhythm used to differentiate the nodes. The rhythms were played in the instrument from level 2
  • Level 4: The tempo of the level 3 sounds was changed to differentiate the nodes.

The sounds were played when a user moved into a node within the hierarchy; the sketch below illustrates the mapping.
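
The following short sketch (ours, not the authors'; Python, with placeholder instrument, rhythm and tempo values) illustrates how a node's position in the hierarchy could be mapped to an earcon specification using the four rules above.

    # Illustrative sketch only: not the authors' implementation.
    # Hypothetical per-sub-tree timbres for level 2 (one instrument per sub-tree).
    TIMBRES = ["organ", "brass", "marimba", "strings", "piano"]
    # Hypothetical rhythm patterns for level 3 (note durations in beats).
    RHYTHMS = [[1, 1, 2], [2, 1, 1], [1, 2, 1], [1, 1, 1, 1], [3, 1]]
    # Hypothetical tempi for level 4 (beats per minute).
    TEMPI = [60, 90, 120, 150, 180]

    def earcon_for_node(path):
        """Map a node's path (e.g. [2, 0, 1] = third sub-tree, first node,
        second child) to an earcon description using the hierarchical rules."""
        spec = {"timbre": "flute", "rhythm": None, "tempo": None}  # level 1: neutral flute
        if len(path) >= 1:   # level 2: the sub-tree determines the instrument
            spec["timbre"] = TIMBRES[path[0] % len(TIMBRES)]
        if len(path) >= 2:   # level 3: rhythm differentiates nodes within a sub-tree
            spec["rhythm"] = RHYTHMS[path[1] % len(RHYTHMS)]
        if len(path) >= 3:   # level 4: tempo differentiates the deepest nodes
            spec["tempo"] = TEMPI[path[2] % len(TEMPI)]
        return spec

    # Example: the earcon heard on moving into the node at path [1, 3, 0].
    print(earcon_for_node([1, 3, 0]))

Because only the rules, not 25 individual sounds, need to be remembered, a listener can invert this mapping and work out the node from the instrument, rhythm and tempo they hear.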

Our experimental results have shown that people can use the earcons to tell, very effectively, where they are within a hierarchy of menus. They can do this with only a small amount of training and can remember the sounds well over time. Designers of telephone services can therefore use earcons to provide navigation cues that greatly enhance the usability of their systems.

Two-Handed Virtual Manipulation

Ken Hinckley
Microsoft Research
One Microsoft Way
Redmond, WA 98052
Tel: (425)-703-9065
kenh@microsoft.com

Randy Pausch
Carnegie Mellon University
Human-Computer Interaction Institute,
School of Computer Science, and School of Design
5000 Forbes Avenue
Pittsburgh, PA 15213-3891
pausch@cs.cmu.edu

Dennis Proffitt
University of Virginia
Department of Psychology
Charlottesville, VA 22903
drp@virginia.edu

Neal Kassell
University of Virginia
Virginia Neurological Institute and
Department of Neurological Surgery
Charlottesville, VA 22903

We discuss a two-handed user interface for three-dimensional neurosurgical visualization that was designed in collaboration with neurosurgeons at the University of Virginia. The user interface is based on the two-handed physical manipulation of hand-held tools in free space. These user interface props facilitate transfer of the neurosurgeon's skill at manipulating tools with two hands to the operation of a user interface for visualizing 3D medical images. From the surgeon's perspective, the interface is analogous to holding a miniature head in one hand that can be "sliced open" using a cutting-plane tool held in the other hand (Figure 2). The interface also includes a touchscreen, which allows facile integration of 2D and 3D input techniques. Informal evaluations with neurosurgeons (and many non-neurosurgeons) have shown that, with a cursory introduction, users can operate the interface within about one minute of touching the props.

By itself, this system is an example, or a "point design"; yet to understand why interaction techniques do or do not work, and to suggest possibilities for new techniques, it is important to move beyond point design and introduce careful scientific measurement of human behavioral principles. For the case of two-handed virtual manipulation, we propose behavioral principles and show how our system is engineered to match them. In particular, the common-sense viewpoint that "two hands save time by working in parallel" has shortcomings because the hands do not necessarily work in parallel; rather, there is a structure to dexterous two-handed manipulation, with the preferred hand articulating its motion relative to the dynamic frame of reference specified by the nonpreferred hand, as originally suggested by psychologist Yves Guiard. This directly influences the type of input mappings that are appropriate for two-handed virtual manipulation.
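
As an illustration of the kind of input mapping this principle suggests (a minimal sketch in Python with NumPy; the function and variable names are ours and hypothetical, not part of the system described), the pose of the preferred hand's cutting-plane tool can be expressed in the frame of the nonpreferred hand's head prop, so that the slice is defined relative to the model rather than to the room:

    # Minimal sketch only, assuming 4x4 homogeneous transforms from a 6-DOF tracker.
    import numpy as np

    def relative_pose(head_to_world, plane_to_world):
        """Return the cutting-plane pose expressed in the head prop's frame."""
        world_to_head = np.linalg.inv(head_to_world)
        return world_to_head @ plane_to_world

    def cutting_plane_in_model(head_to_world, plane_to_world):
        """Point and unit normal of the cutting plane in the head/model frame,
        ready to be applied to the 3D medical image."""
        plane_in_head = relative_pose(head_to_world, plane_to_world)
        point = plane_in_head[:3, 3]    # plane origin in head coordinates
        normal = plane_in_head[:3, 2]   # plane normal: the tool's local z axis
        return point, normal / np.linalg.norm(normal)

    # Example: head prop at the identity pose; plane offset 0.1 along x and
    # tilted 30 degrees about the x axis.
    c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
    plane = np.array([[1, 0,  0, 0.1],
                      [0, c, -s, 0.0],
                      [0, s,  c, 0.0],
                      [0, 0,  0, 1.0]])
    print(cutting_plane_in_model(np.eye(4), plane))

In such a mapping, moving either hand updates the slice: the nonpreferred hand sets the model's frame of reference while the preferred hand articulates the cutting plane relative to it, in keeping with Guiard's observation.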

Furthermore, two hands do more than just save time over one hand. Users have a keen sense of where their hands are relative to one another, which is not dependent on visual feedback. It is also important to recognize that a two-handed compound task is not the same thing as a serial combination of one-handed subtasks. Using both hands alters the syntax of the interaction, which can ultimately influence how users think about a task.

To support these claims, we present a pair of formal experimental studies which investigate behavioral aspects of two-handed virtual object manipulation. Our hope is that this work will help others to apply the lessons learned in our neurosurgery application to future user interface designs. We also hope that it may serve as a concrete example of how one can explore, understand, and characterize advanced interaction techniques in general.

Figures

Figure 1. The hierarchy used.

Figure 2. A user views a cross-section of a brain using the interface props.


©1999 ACM  1072-5220/99/0300  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 1999 ACM, Inc.
