VII.4 July 2000

Design: innovating with OVID


Author:
Daniel Corlett

The general principle of an object-oriented user interface (OOUI) is to use familiar objects to model the real world as the user normally interacts with it, and then to map that environment onto the computer. OVID (Object, View, and Interaction Design) has already been used successfully in this context at IBM in designing software such as RealPhone and RealCD.

However, if instead of simply automating everyday tasks on a computer, we decided to innovate and develop new working environments and practices, how easy is it to use such a modeling technique to create the ideal interface? A group of six student software designers and programmers discovered the answer as they put OVID to use in creating the Handheld Learning Resource (HandLeR).

HandLeR is a cross-disciplinary project hosted by the University of Birmingham. The goal of the project is to design a ubiquitous computing and communications device to aid and support learning throughout a user’s lifetime. This device should offer all the productivity tools expected of a personal computer in a small, portable package, but more than that, it must act as tutor, guide, mentor, secretary, companion, and memory aid. Intuitive to use, it should be available continuously and persistently throughout the user’s lifetime, adapting to the user’s personal and intellectual development. The team’s mission was to develop a first-generation functional prototype in eight months. Given the scope of the project, however, the team decided to limit the study to children 7 to 11 years old [2].

The team was convinced early on that the design should break with existing interface and metaphor conventions to provide something intuitive to all users and appropriate in every context of working and learning. OVID was not suggested until the first three months of design had passed; these were spent brainstorming the concept and conducting user surveys to define a generic HandLeR. From these sessions came the space of possible designs: a collection of storyboards, sketches, and outline specifications covering the interface as well as the technical and ergonomic aspects. As decisions were made and system specifics began to emerge, the team decided to formalize the design using OVID.

The steps to designing with OVID are shown in Figure 1. Because of the iterative nature of OVID, most or all of these steps can be revisited at any time. Thus, OVID is involved in a project from inception to completion and involves all members of a software team. OVID presented a steep learning curve to the group, not least because the skills of object-oriented design (OOD), the Unified Modeling Language (UML), task analysis, and evaluation had to be learned concurrently. Simply learning how to do object-oriented modeling can take up to six months [7].

OVID models use a subset of UML. Rational Rose®, a computer-aided software engineering (CASE) tool, was used in the project, although it does not yet support all the notation OVID requires. The model is built incrementally, starting with the user's perceived model of his environment. From this develops the designer's model: a collection of objects essential to the user's task, with which views are associated. Interaction diagrams, state diagrams, and state tables ensure completeness of the design and finally lead to the implementer's model.

Requirements Analysis and the User’s Perceived Model

Requirements analysis was the longest stage in the HandLeR project, lasting several months. It incorporated a thorough analysis of the users’ needs and current situation. In addition, creative thinking was applied to see how the users’ tasks could be improved by a HandLeR.

The input to an OVID design comes from the designer observing or assuming the role of the user; it describes in natural language the actions users perform and the objects with which they interact. These descriptions are called use cases. Observation, rather than lateral thinking, is essential, and this conflict between observation and lateral thinking became the biggest obstacle to what we were trying to achieve. In attempting to make the system innovative and generic for all users, we had already decided to eliminate workplace-specific objects such as desktops, folders, and mailboxes.

Before using OVID to guide the design, we had already decided on a main facet of the interface that had come solely from brainstorming the problem. We tried to consider what would be the most intuitive metaphor for the system and decided on the human body. Every person understands the functions of his own body—the legs for transport, the hands for manipulating things, the eyes for seeing, and so on. Most important, we know that the brain is for thinking and remembering. So we decided to base all functions of the system on the human body and provide the user with an avatar, or character, to represent all the fundamental actions she may wish to perform. For example, clicking on the eyes of the avatar would access the built-in camera, clicking on the mouth would begin a phone conversation (through the built-in mobile telephone), and clicking on the head would open the mindmap. The mindmap dynamically links all experiences of the user, from Web browsing to writing a document, to taking pictures with the camera. All links between objects are made by context associations such as key words, locations (provided by the integrated satellite positioning system), time, and others. We are confident that had we left the design entirely to "mimicking" the existing real world, this design would not have come about. However, we did use OVID to refine the design ideas. In this HandLeR designed for children, the avatar manifests itself as a cartoon rabbit. (See Figure 2.)
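
To make the mapping concrete, the sketch below shows how regions of the avatar might dispatch to the functions described above. It is written here purely for illustration; none of the names come from the project's code.

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        // Hypothetical sketch: each region of the avatar dispatches to a
        // system function, following the examples in the text.
        std::map<std::string, std::function<void()>> avatar{
            {"eyes",  [] { std::cout << "open the built-in camera\n"; }},
            {"mouth", [] { std::cout << "begin a phone conversation\n"; }},
            {"head",  [] { std::cout << "open the mindmap\n"; }},
        };

        // A tap on the avatar is resolved to a region name, then dispatched.
        auto it = avatar.find("head");
        if (it != avatar.end())
            it->second();
        return 0;
    }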

The team decided first to model the system tools (such as those for drawing, writing, and communicating), which would be based on recognizable tools such as books, pens, erasers, and paintbrushes. Task descriptions, which are fundamental to a successful model, can be tricky. Sentences such as "Child writes in topic book," "Pen writes in topic book," and "Child pushes pen across topic book" sound alike, but the resulting object models could be considerably different. The necessary level of detail also became contentious. Is "Child paints picture" at all helpful? Descriptions such as "Child opens hand to grasp paintbrush. Paintbrush is dipped in water and paint. Paintbrush is pushed across paper to leave a mark" would certainly ensure that the project was never completed! We realized that, above all, descriptions should be consistent, and that if serious doubts were raised, the users should be revisited to better understand their perceptions. In practice, descriptions remained brief and informal, as all team members were involved in all development processes and fully understood what was being developed. Working as a small team in the same office allowed questions to be answered immediately. With larger projects and more team members, however, more comprehensive descriptions would likely be required. Examples of descriptions are found in Figure 3.

Next, we tried to model the novel parts of the system. This created something of a paradox. In discussion with one of OVID's creators, we decided to "imagine" that HandLeR already existed and to write the task descriptions as if we had observed someone using it. Because we were writing the mindmap heuristics from scratch, imagining someone using it helped us refine our decisions and make an intuitive navigation system. HandLeR is not just a passive system; it is also intended to include autonomous agents, which had to be part of the system model. To this end, we defined HandLeR as a user of itself: an odd concept, but one that worked.
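
One way to picture "HandLeR as a user of itself" is to give children and autonomous agents the same actor interface, so that an agent enters the model exactly as a user does. A minimal sketch, with every name invented here:

    #include <iostream>
    #include <memory>
    #include <vector>

    // Sketch only: the child and an autonomous agent drive the system
    // through the same actor interface, so "HandLeR as a user of itself"
    // appears in the model as just another actor.
    struct Actor {
        virtual ~Actor() = default;
        virtual void performTask() const = 0;
    };

    struct Child : Actor {
        void performTask() const override {
            std::cout << "child adds a drawing to the workbook\n";
        }
    };

    struct MindmapAgent : Actor {
        void performTask() const override {
            std::cout << "agent links the drawing to related experiences\n";
        }
    };

    int main() {
        std::vector<std::unique_ptr<Actor>> users;
        users.push_back(std::make_unique<Child>());
        users.push_back(std::make_unique<MindmapAgent>());
        for (const auto& u : users)
            u->performTask();
        return 0;
    }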

For unknown tasks and objects, it would have been appropriate to start with essential use cases [1]. Essential use cases express abstract intentions or goals without the specifics of the task. These can then be developed through the imagination of user and designer to generate fuller use cases. It is then important, however, to refer any invented objects back to the user to test for intuitiveness. Not doing so was a mistake that cost us dearly later in the evaluation cycle.

The set of use case descriptions was rewritten many times before a consistent and informative set emerged. Because OVID is iterative, the descriptions could be revisited at any stage, and it was often the later stages, particularly the interaction and state diagrams, that revealed when descriptions had either too much or too little detail.

The next stage was to extract and sort all the nouns that occur in the use cases. The nouns are listed in a table under the headings concrete objects, people (objects), people (subjects), forms, and abstract objects, and are then ordered by importance and frequency of use. Some people can be both object and subject, as in "Child gives completed homework to the teacher" and "Teacher gives marked homework to child"; it is important to realize who is using the system at the time in order to maintain consistency and avoid confusion. Some items were difficult to classify. Is a picture a concrete or abstract object? The paper or canvas may be concrete, but is the image itself? At this point it was necessary to return to the user's point of view to see how she would normally regard such an item. As a rule of thumb, if in some form it would be possible to touch the item, it was labeled concrete; if not, it was abstract. Thus an image was considered concrete, whereas a conversation was labeled abstract.
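
The noun table itself is simple enough to capture as a small data structure. Below is a hypothetical sketch, with example entries drawn loosely from the use cases above; it is not how the project stored its tables.

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical sketch of the noun table described in the text: each
    // extracted noun is classified and ranked by frequency of use.
    enum class Category { Concrete, PersonObject, PersonSubject, Form, Abstract };

    struct NounEntry {
        std::string noun;
        Category    category;
        int         frequency;  // occurrences across the use cases
    };

    int main() {
        std::vector<NounEntry> table = {
            {"child",        Category::PersonSubject, 20},
            {"topic book",   Category::Concrete,      12},
            {"image",        Category::Concrete,       7},  // touchable "in some form"
            {"teacher",      Category::PersonSubject,  5},  // can also be an object
            {"conversation", Category::Abstract,       3},
        };

        // Order by importance, approximated here by frequency of use.
        std::sort(table.begin(), table.end(),
                  [](const NounEntry& a, const NounEntry& b) {
                      return a.frequency > b.frequency;
                  });

        for (const auto& e : table)
            std::cout << e.noun << " (" << e.frequency << ")\n";
        return 0;
    }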

Once all the objects are identified, each must be described in a single-clause sentence. This helps to define membership and behavior of the classes (objects) in the diagrams. The rule of thumb is that if a description contains any ands, ifs, or buts, it cannot be used consistently in a user interface and probably needs to be split into further objects. [4]
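
This rule of thumb even lends itself to a trivial automated check; the sketch below merely illustrates the idea and is not part of OVID itself.

    #include <iostream>
    #include <sstream>
    #include <string>

    // Sketch of the rule of thumb above: flag object descriptions containing
    // "and", "if", or "but", since they probably need to be split into
    // further objects. (Naive word matching; punctuation attached to a
    // word would slip through.)
    bool needsSplitting(const std::string& description) {
        std::istringstream words(description);
        for (std::string w; words >> w;)
            if (w == "and" || w == "if" || w == "but")
                return true;
        return false;
    }

    int main() {
        std::cout << needsSplitting("A pen creates a colored mark") << "\n";            // 0
        std::cout << needsSplitting("A workbook holds pages and plays sounds") << "\n"; // 1
        return 0;
    }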

A metaphor should normally be used only if the task analysis makes it apparent [1]. The final set of descriptions did not directly support the chosen metaphor, but the intuitiveness of the human body as the place where all the user’s tasks begin was confirmed through user studies and was retained.

Making the Designer’s Model

The modeling stage was shorter than the first stage, but nevertheless it took many weeks to build an entire first-generation model, and then a few days each time the stage was revisited to update the model.

Having defined the objects, the designers began building the designer's model. First, objects were added to the UML diagram as hierarchical families of classes. (See Figure 4.) At first, the designers classified objects according to use rather than behavior. It seemed natural to place paint, brush, and crayon together as drawing tools, despite the dissimilarity of their individual properties. Likewise, pens, pencils, and erasers were grouped as writing tools, even though an eraser performs an action opposite to the other two. A further iteration of the object tables and descriptions was thus necessary. Paint was removed, and a paintbrush was assumed to be already loaded with paint. The paintbrush was then grouped, together with pen, pencil, and crayon, as a marking tool, allowing the eraser to be defined simply as a marking tool that "...creates a mark the same color as the background."
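
The regrouping reads naturally as a class hierarchy. A minimal sketch, assuming an invented Color type:

    #include <iostream>
    #include <string>
    #include <utility>

    // Sketch of the regrouped hierarchy: every marking tool creates a mark,
    // and an eraser is simply one whose mark matches the background color.
    using Color = std::string;  // stand-in for a real color type

    struct MarkingTool {
        virtual ~MarkingTool() = default;
        virtual Color markColor(const Color& background) const = 0;
    };

    struct Pen : MarkingTool {
        Color ink;
        explicit Pen(Color c) : ink(std::move(c)) {}
        Color markColor(const Color&) const override { return ink; }
    };

    struct Eraser : MarkingTool {
        // "...creates a mark the same color as the background."
        Color markColor(const Color& background) const override {
            return background;
        }
    };

    int main() {
        Pen pen("blue");
        Eraser eraser;
        std::cout << pen.markColor("white") << "\n";    // blue
        std::cout << eraser.markColor("white") << "\n"; // white
        return 0;
    }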

Once we were satisfied that the objects were suitably grouped, we drew new diagrams of all the other interrelationships of objects. These were then easily expanded to include the views for each visible or audible object. A view can constitute anything from a single button or sound to an entire screen. (See Figure 5.)
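
As a rough illustration, the object-view association can be sketched as follows; the names and the list-of-views representation are assumptions, not the project's design.

    #include <iostream>
    #include <string>
    #include <vector>

    // Sketch only: each visible or audible object carries one or more views,
    // which may be as small as a button or sound or as large as a screen.
    struct View {
        std::string kind;  // e.g. "icon", "full screen", "page-turn sound"
    };

    struct ModelObject {
        std::string       name;
        std::vector<View> views;
    };

    int main() {
        ModelObject workbook{"workbook", {{"icon"}, {"full screen"}, {"page-turn sound"}}};
        for (const auto& v : workbook.views)
            std::cout << workbook.name << " has view: " << v.kind << "\n";
        return 0;
    }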

Interaction diagrams, which are part of the designer's model, came next. These diagrams illustrate use cases for the new system. Interactions between the user and views are detailed, as is the information that must be passed internally between views. At this point we realized we had not defined the behavior of the mindmap nearly well enough, and many iterations followed in rapid succession. We drew a basic diagram and tested it by performing a mental walkthrough of the use cases. Doing so highlighted errors, omissions, and inconsistencies, and the diagram was redrawn. Eventually the diagram became comprehensive and consistent enough to satisfy the evaluations.

Finally, we drew state diagrams to show how objects change state for given actions. Diagramming ensured that all permutations were considered. However, even at this late stage, we uncovered inconsistencies in the model. Previously, the designers had considered that, for example, a piece of paper that was initially blank could "become" either a picture or a piece of text. However, object reincarnation is not possible in state diagrams. Objects can change state, but they cannot become other objects. We realized that not only could the diagrams not be drawn, but such a notion would most likely confuse the user, so definitions were redrawn. A generic workbook was introduced and defined as a container, and paper and exercise books were removed. Objects such as text and images could then be added to or removed from the workbook, each beginning and ending life only once.
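
A minimal sketch of the redefinition, with invented names: the workbook is a container, content objects are added to or removed from it, and no object ever "becomes" another.

    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    // Sketch of the redefinition above: a workbook contains text and image
    // objects; each object begins and ends life exactly once, so nothing
    // needs to be "reincarnated" in a state diagram.
    struct ContentObject {
        virtual ~ContentObject() = default;
        virtual std::string describe() const = 0;
    };

    struct Text : ContentObject {
        std::string describe() const override { return "a piece of text"; }
    };

    struct Image : ContentObject {
        std::string describe() const override { return "a picture"; }
    };

    struct Workbook {
        std::vector<std::unique_ptr<ContentObject>> contents;
        void add(std::unique_ptr<ContentObject> o) { contents.push_back(std::move(o)); }
    };

    int main() {
        Workbook book;
        book.add(std::make_unique<Text>());
        book.add(std::make_unique<Image>());
        for (const auto& o : book.contents)
            std::cout << "workbook contains " << o->describe() << "\n";
        return 0;
    }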

At an early stage, nouns had been extracted from the task-analysis use cases and all other information had been discarded. When describing the relationships and responsibilities of objects, however, particularly at the implementer's level, Points of View (POVs) [5] are useful. A POV (which, like all other OVID descriptions, can be written in natural language) describes not only an object's attributes but also the rest of the model as seen by that object. A rigorous application of such descriptions can eliminate exceptions and omissions in functionality.

Although OVID uses task analysis to extract objects, the method is not inherently task oriented. Had the prototype HandLeR been developed further to include more custom learning and creativity tools, this could have been limiting. Not all working practices are step-by-step procedures with clear objectives and set tools. For example, each person uses a calendar or diary differently [8]; entries are often tentative, resisting crisp formalization. The objects in this scenario may be quite simple to identify, but how they are used may require a detailed task and organization study.

Design

The design stage occurred partly in parallel with the modeling stage; it lasted about as long but required only a few hours per week. As sections of the model were completed, graphic design could begin.

OVID does not explain how real views are to be developed from the abstract view classes in the diagrams. One of the most difficult decisions to be made was how to embed views within each other, such as buttons or text boxes within screens. The interaction diagram was a good guide, but a more thorough approach would have been to create comprehensive POVs for views as well as objects. In practice, the group used a combination of intuition and trial and error.
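
Embedding views within views amounts to a composite structure, sketched below with invented names; the project arrived at its actual placements by intuition and trial and error rather than by any such code.

    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    // Sketch only: views that contain other views (buttons or text boxes
    // inside screens) form a simple composite hierarchy.
    struct View {
        std::string name;
        std::vector<std::unique_ptr<View>> children;

        explicit View(std::string n) : name(std::move(n)) {}

        void print(int depth = 0) const {
            std::cout << std::string(depth * 2, ' ') << name << "\n";
            for (const auto& c : children)
                c->print(depth + 1);
        }
    };

    int main() {
        View screen("workbook screen");
        screen.children.push_back(std::make_unique<View>("page view"));
        screen.children.push_back(std::make_unique<View>("save button"));
        screen.print();
        return 0;
    }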

A means of bridging the gap between models and design is still needed. In hindsight, the later parts of the modeling stage and the earlier parts of the design stage should have been merged. Content modeling [1], using the abstract views on sticky notes, can lead to logical collections of views, or super-views. For specific use cases, a whitewater approach [3] with user participation can rapidly build up interface views (still devoid of graphics and specific interaction techniques). The whitewater approach defines the placement of views within other views and may identify views not yet explicitly shown on the diagram. Content models can then inform the expansion of the original view diagram with additional views, relationships, and hierarchies. This more complete diagram, which also references the content model, can in turn inform the creation of useful interaction diagrams that relate more closely to the finished system. As the interaction diagrams are developed, navigation maps [1] can be used to test a given network of views for usability. Only then, at least for the first iteration, would the modeling stage end and graphic design begin.

As the designs grew, the previously separate metaphor and tools began to merge. When the avatar was drawn, we noticed that the hands were valuable parts of the body but were not being used. It was also clear from the use cases that writing and drawing were important tasks and had to be clearly represented, so we chose to place a workbook in one hand and a palette in the other. (See Figure 2.) Touching these objects would launch the appropriate tools. Having placed these objects in the avatar's hands, we had to redraw the models to relate the objects and define the new interactions.

Evaluation

Each evaluation was completed in one day; we usually spent one or two days afterward inspecting the results and deciding how they should affect the design. Low-fidelity paper mock-ups were drawn and placed in front of the users for evaluation. The main attribute tested was intuitiveness: the children were given hypothetical tasks and were asked, from the screen designs, what they thought they would have to do to perform each task. Had there been more time, longer cognitive walkthroughs would have been conducted at the low-fidelity stage to ensure that whole sequences were easy to perform. Instead, the designers performed them mentally on the interaction diagrams. Mock-ups were then converted, first to semifunctional HyperCard® screens and then, via the implementer's model, to fully functional Visual C++ code.

At each stage, an evaluation could force a change in the models that would then need to be reworked. As the models grew, it became increasingly difficult to track changes and their effects throughout the model.

Implementation

The implementer's model, built on the designer's model, was drawn up next. It includes the extra information needed by the programmer, and any function not directly connected with the interface, such as the database underlying the mindmap, must be defined. In addition, since the prototype HandLeR used some proprietary tools, the interactions with these tools, rather than with the ideal tools, had to be modeled. The state tables also had to take into account that a computer sees objects differently from the user. For example, a piece of text, which is either "existent" or "not existent" to the user, is to the computer a file that can be "opened," "saved," or "modified." Building the implementer's model generally took only a couple of days each time a generation of the designer's model was completed. Coding the implementation took the equivalent of three people working full-time for two months.
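
The divergence between the two viewpoints can be captured as a small mapping. A sketch, with invented state names:

    #include <iostream>
    #include <string>

    // Sketch of the state-table point above: the user perceives a piece of
    // text as existent or not, while the implementer's model must track the
    // computer's richer view of it as a file.
    enum class FileState { NonExistent, Open, Modified, Saved };

    // Collapse the computer's states into the two the user perceives.
    std::string userView(FileState s) {
        return s == FileState::NonExistent ? "not existent" : "existent";
    }

    int main() {
        for (FileState s : {FileState::NonExistent, FileState::Open,
                            FileState::Modified, FileState::Saved})
            std::cout << userView(s) << "\n";
        return 0;
    }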

Finally…

Final evaluations followed, consisting of cognitive walkthroughs and heuristic evaluations. Children were given a HandLeR to use for a number of hours. For some of this time, they had specific tasks to perform; the rest they spent in informal exploration. During this time they were filmed and, afterwards, interviewed and given questionnaires. Many more children were given printed versions of the interfaces with questionnaires to test intuitiveness.

The main concern about the model was that only 10 percent of the children found the feet of the avatar an intuitive place to start browsing the Web, and only 7 percent associated the watch carried by the rabbit with a diary and journal, suggesting that the earlier evaluations had not been thorough enough. However, all other aspects, including navigating the mindmap, drawing, taking photos, and making a phone call, were found to have surprisingly high success rates; some were recognized at first glance by as many as 94 percent of the children. Clearly, applying the thorough processes of OVID had paid off.

Nearly all of the children expressed a dislike of the aesthetics of the cartoon rabbit, but each had his or her own ideas as to how it should be improved. Clearly, the avatar should be highly customizable, perhaps taking on the personas of film and sports stars, aliens, robots, or teddy bears.

OVID and its UML diagrams are ideal for team collaboration and informant design [6]. Until the implementer's model is reached, all models and descriptions are written in the user's familiar language, at a level basic enough for all parties to understand; this allows even young children to participate in the design. The implementer's model is simply an augmentation of the designer's model, which allows either the designer or the programmer to quickly understand the other's model and to build one from the other with minimal additional work. The object-oriented approach allows the design to be transferred directly to code, a step supported by the CASE tool. What would be useful, however, is a CASE tool that supports teamwork, versioning, and the attachment of images to views, shared by the whole group.

Had we used OVID from the beginning, the (very successful) body metaphor would likely never have emerged, nor would the mindmap have been considered. In an attempt to avoid the files-and-folders metaphor, a shallow task analysis of the situation would have seen children using drawers, display boards, cupboards, and marking piles for storing and retrieving information. Although these might have been ideal for a system used only in this context, they could not expand to support lifelong learning.

The study showed that OVID can be used successfully for a project such as HandLeR, but it must be applied carefully: at each stage you must decide when to apply OVID rigorously and when to ignore the constraints it imposes. The evaluations supported our experience that an OVID model does not guarantee good graphic design, but it can guarantee excellent interactivity.

Above all, before even task analysis is conducted, the designers must fully understand the goals and requirements of the project and be prepared to hold off using OVID while initial creative problem-solving is carried out.

References

1. Constantine, L.L. and Lockwood, L. Software for Use: A Practical Guide to the Models and Methods of Usage-Centered Design. Addison-Wesley, 1999.

2. Corlett, D.J. et al. Group Design Study. Final Report: Vol. 1, Development of the Interim HandLeR. 1999. Available from the School of Electronic and Electrical Engineering, University of Birmingham, B15 2TT, UK.

3. Dayton, T. et al. Bridging User Needs to Object Oriented GUI Prototype via Task Object Design. In L.E. Wood (ed.), User Interface Design, CRC Press, 1998.

4. Roberts, D. et al. Designing for the User with OVID: Bridging User Interface Design and Software Engineering. Macmillan Technical Publishing, 1998.

5. Rosson, M.B. Designing Object-Oriented User Interfaces from Usage Scenarios. Available at http://www.cutsys.com/CHI97/Rosson.html.

6. Scaife, M., Rogers, Y., Aldrich, F., et al. Designing For or Designing With? Informant Design for Interactive Learning Environments. In Proceedings of Human Factors in Computing Systems, CHI '97, 1997.

7. Van Harmelen, M. Object-Oriented Modelling and Specification for User Interface Design. In Proceedings of the EG Workshop on Design, Specification, and Verification of Interactive Systems, Springer-Verlag, 1996, pp. 199–231.

8. Van Harmelen, M. et al. Object Models in User Interface Design: CHI '97 Workshop Summary. Available at http://www.cutsys.com/CHI97/Results.html.

Author

Daniel Corlett
School of Electronic and Electrical Engineering
University of Birmingham
B15 2TT, United Kingdom
djcorlett@iee.org

Design Column Editors

Kate Ehrlich
Viant
89 South St, 2nd Floor
Boston MA 02111
(617) 531-3700
kehrlich@viant.com

Austin Henderson
Rivendel Consulting & Design, Inc.
P.O. Box 334
8115 La Honda Rd. (for courier services)
La Honda, CA 94020 USA
+1-650-747-9201
fax: +1-650-747-0467
henderson@rivcons.com
www.rivcons.com

Figures

Figure 1. Flow diagram of the OVID method.

Figure 2. The rabbit avatar.

Figure 3. Use-case descriptions and noun extraction.

Figure 4. Object hierarchy of parts of the avatar.

Figure 5. Object relationships diagram for the avatar, augmented with views.


©2000 ACM  1072-5220/00/0700  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
