
VI.4 July-Aug. 1999
Page: 9

Research alerts

Jennifer Bruer

A Software Model and Specification Language for Non-WIMP User Interfaces

Robert J.K. Jacob
Leonidas Deligiannidis
Stephen Morrison
Department of Electrical Engineering and Computer Science
Tufts University
Medford, Mass., U.S.A.

User interface software tools have become highly specialized for today's state-of-the-art GUI, desktop-style, or WIMP (window-icon-menu-pointer) interfaces. As next-generation interaction styles such as virtual environments (called "non-WIMP") emerge, today's user interface description languages and software systems will be less applicable, and a new generation of tools will be needed.

Advances in user interface design and technology have been outpacing advances in models, languages, and user interface software tools. The result is that, today: previous-generation command language interfaces can now be specified and implemented very effectively; current-generation direct manipulation or WIMP interfaces are now moderately well served by user interface software tools; and the emerging generation of non-WIMP interfaces is hardly handled at all. Most of today's examples of non-WIMP interfaces, such as virtual reality systems, have of necessity been designed and implemented with event-based models more suited to previous interface styles. Because those models fail to capture continuous, parallel interaction explicitly, the interfaces have required considerable ad hoc, low-level programming. While some of these approaches are very inventive, they have made such systems difficult to develop, share, and reuse. We seek techniques and abstractions for describing and implementing these interfaces at a higher level, closer to the point of view of the user and the dialogue rather than to the exigencies of the implementation.

Figure. The specification for a simple pivoting arm attached to a base column, in a virtual environment, shown in the visual form of our language. The state diagram in the lower window shows the state change that occurs when the user grabs the arm (activating condition GRASPED1) and releases the arm (deactivating it). The data flow (Linkc1) in the upper window relates the continuous hand cursor position to the continuous arm rotation variable and is active only while the user is grasping the arm.

To develop tools for non-WIMP interfaces, we first need new models, abstractions, and languages for specifying such interaction. We present our work on developing, prototyping, and testing a software model and language for describing and programming the fine-grained aspects of interaction in a non-WIMP user interface, upon which next-generation user interface software tools can be built. Our approach is based on our view that the essence of a non-WIMP dialogue is a set of continuous relationships — most of which are temporary. The model combines a data-flow or constraint-like component for the continuous relationships with an event-based component for discrete interactions, which can enable or disable individual continuous relationships. Its key ingredients are the separation of non-WIMP interaction into two components and the framework it provides for communication between the two.
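The two-component model can be sketched in a few lines of code. The sketch below is an illustration only, written in Python rather than the paper's own specification language: the class names, the "grasp"/"release" event vocabulary, and the particular hand-to-rotation mapping are all hypothetical, chosen to echo the pivoting-arm example (condition GRASPED1 and data flow Linkc1) from the figure.

```python
class Link:
    """Continuous component: a data-flow relationship that, while active,
    maps a continuously sampled source value to a sink value."""
    def __init__(self, fn):
        self.fn = fn
        self.active = False

    def update(self, value):
        # Re-evaluated every frame; produces output only while enabled.
        return self.fn(value) if self.active else None

class Dialogue:
    """Discrete component: events enable or disable individual links,
    providing the communication between the two components."""
    def __init__(self):
        self.links = {}

    def add_link(self, name, fn):
        self.links[name] = Link(fn)

    def on_event(self, event, link_name):
        # A discrete state change (e.g. a grasp) toggles a continuous link.
        if event == "grasp":
            self.links[link_name].active = True
        elif event == "release":
            self.links[link_name].active = False

# Pivoting-arm example: Linkc1 relates hand position to arm rotation,
# active only while the user grasps the arm (condition GRASPED1).
dialogue = Dialogue()
dialogue.add_link("Linkc1", lambda hand_x: hand_x * 0.5)  # hypothetical mapping

print(dialogue.links["Linkc1"].update(10.0))  # None: inactive before the grasp
dialogue.on_event("grasp", "Linkc1")          # GRASPED1 becomes active
print(dialogue.links["Linkc1"].update(10.0))  # 5.0: continuous updates flow
dialogue.on_event("release", "Linkc1")
print(dialogue.links["Linkc1"].update(10.0))  # None: inactive after release
```

Note how the discrete component never computes the continuous mapping itself; it only switches relationships on and off, which is the separation the model's key ingredients describe.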

To demonstrate our approach, we present the PMIW user interface management system for non-WIMP interactions, a set of examples running under it, a visual editor for our user interface description language, and a discussion of our implementation and our restricted use of constraints for a performance-driven interactive situation. Experience with our examples showed the benefit of using a declarative specification for the continuous portion of the interface. The interface designer need write no code to maintain such continuous relationships or handle events triggered by value changes, but simply declares a relationship and then turns it on or off as desired. Contrary to what one might expect, this restricted use of declarative specification does not hurt performance; in fact, it makes possible the introduction of time management and optimization features designed for the needs of a video-driven virtual environment.

We seek to provide a model and language that captures the structure of non-WIMP interactions in the way that various previous techniques have captured command-based, textual, and event-based styles and to show that using such a high-level model need not compromise real-time performance.

This work was supported by National Science Foundation Grant IRI-9625573, Office of Naval Research Grant N00014-95-1-1099, and Naval Research Laboratory Grant N00014-95-1-G014.

A Partial Test of the Task-Medium Fit Proposition in a Group Support System Environment

Bernard C. Y. Tan
School of Computing, National University of Singapore

Kwok-Kee Wei
School of Computing, National University of Singapore

Choon-Ling Sia
Department of Information Systems, City University of Hong Kong

Krishnamurthy S. Raman
School of Computing, National University of Singapore

When newer media (e.g., electronic communication) first emerged in the late 1970s, it was predicted that this combination of computer and communication technology could link people electronically and lead dispersed meetings to replace face-to-face meetings. This prediction has not materialized. Dispersed meetings did become a reality but did not totally replace face-to-face meetings. Research on technology-mediated groups offers an explanation.

When groups solve different task types, communication media have a differing impact on task outcomes. There are different consequences when the communication medium is too rich or too lean for the task. A communication medium that is too rich would not hinder groups from completing their task effectively, but groups may be vulnerable to distractions caused by the exchange of surplus information. This is a poor task-medium fit from an efficiency perspective. With a communication medium that is too lean for the task, groups would be unable to exchange the rich information necessary to resolve equivocality. They would have difficulty attaining a shared frame of reference to complete their task. This is a poor task-medium fit from an effectiveness perspective.

A laboratory experiment was carried out to test this task-medium fit proposition in a Group Support System (GSS) environment (see Figure 1). Communication medium was varied using a face-to-face GSS and a dispersed GSS setting. A face-to-face GSS setting is similar to traditional face-to-face meetings in the sense that group members gather at the same time and place, and can talk to and see each other. But unlike traditional face-to-face meetings, group members can carry out electronic communication. Visual cues (e.g., visual orientation and facial expression) and verbal cues (e.g., tone and loudness of voice) can be used with textual cues. A dispersed GSS setting differs from traditional face-to-face meetings because group members meet at the same time but are located at different places, and can neither talk to nor see each other. Instead, they use solely electronic communication.

Task type was varied using an intellective and a preference task. An intellective task has solutions based on shared logical criteria among group members, while a preference task involves solutions based on individual preferences and values.

Results of this experiment indicated that face-to-face GSS and dispersed GSS groups were equally efficient and effective with the intellective task. But face-to-face GSS groups were more efficient and effective than dispersed GSS groups when doing the preference task. Thus, group performance tends to be adversely affected when the communication medium is too lean for the task but not when the communication medium is too rich for the task.

The findings can benefit practitioners by providing clues on when face-to-face GSS meetings may be desirable and when dispersed GSS meetings may be appropriate. Dispersed GSS meetings offer several benefits in organizational settings. First, group members can meet in the comfort and privacy of their offices. Second, by remaining in their offices, group members are likely to have access to a wider range of internal and external information sources. Third, group members can engage in asynchronous meetings where they are called upon to contribute only when their expertise is required. Fourth, if group members are in geographically dispersed locations, such meetings eliminate much of the time and expense of travel. Given these benefits and the fact that dispersed GSS meetings did not reduce group effectiveness and efficiency with the intellective task, groups can consider using dispersed GSS meetings when confronted with intellective tasks or other task types low in equivocality. However, dispersed GSS meetings do not allow exchange of the rich visual and verbal cues essential for equivocality reduction. Groups performing the preference task in the dispersed GSS setting suffered a negative impact on effectiveness and efficiency. Hence, when groups are working on preference tasks or other task types high in equivocality, face-to-face GSS meetings may be appropriate.

Results from this study suggest that face-to-face and dispersed GSS meetings may each be suitable for different task types. Thus, instead of predicting which type of meetings may prevail in the future, it is more fruitful to examine how groups can use both types of meetings to optimize their performance. Face-to-face and dispersed meetings should be seen as complements to rather than substitutes for each other.

Figures

Figure 1. The Experimental Settings

Figure. The specification for a simple pivoting arm attached to a base column, in a virtual environment (full caption above).


©1999 ACM  1072-5220/99/0700  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 1999 ACM, Inc.
