Pär Carlshamre, Martin Rantzer
One of the efforts involving methodology development was called the Delta project, starting in 1992 as a collaboration of the computer science department at Linköping University, Sweden, and a consultancy within the Ericsson corporation (referred to in this article as Infocom). The project had two aims: (1) to develop a method that ensured a high level of usability of the delivered systems and (2) to make sure that the skills and knowledge of the technical communicators were better used in the development process. Naturally, everyone involved hoped that these changes would affect the whole organization, permanently.
Eight years later, when we review and consider the extent to which we have achieved our missionary goals, we realize that maturity of usability is not only about depth, as it is commonly described and measured, it is also about breadth. Thus, introducing usability is a two-stage process, and our strategy lacked the second stage.
In this article we begin with a traditional success story, told with the goal of depth in mind, followed by an analysis of the factors that made it successful. Then, in a broader perspective, we review what we have achieved and analyze the problems. Finally, we describe our approach to the second stage of the usability introduction strategy.
During the spring, summer, and fall of 1992, the Delta project group worked toward developing usability-oriented extensions to the development model officially used by Infocom. The sources used were primarily scientific literature on usability-oriented development. Developing the method was a mutual learning process, during which the researchers suggested numerous academically established methods and techniques for Infocom to consider. In the process, the researchers learned a great deal more about the intended context for and requirements of the method. The Infocom participants, meanwhile, learned about scientific work in usability.
Infocom's current methodology and strong emphasis on competitive, contract-bidding business made us concentrate on the usability engineering paradigm. The output was an initial version of a method handbook, detailing a number of project activities and their relevance to the methods currently used. The researchers took extra care to ensure that the method was well adapted to the organization and its current development methodology. The method was tested in a pilot project and revised according to the experiences of the researchers and the Infocom representatives on the Delta method development team. The activities described in the revised handbook were
- Scope definition,
- User and task analysis,
- Conceptual design,
- Specifying and testing usability requirements, and
- Prototype development.
See www.deltamethod.net for a complete description.
From the outset the Infocom participants had no explicit strategy for disseminating either the method or usability awareness in general. Rather, Infocom took an ad hoc, opportunistic approach, in which each opportunity to make a case for usability was taken according to the circumstances.
A method is in itself quite intangible. It's merely a piece of knowledge, and knowledge that is difficult and costly to dig out for the first time. Once it is done, however, it's tricky to get a return on the investment made. Because Infocom was a consultancy, it naturally wanted to make a profit on its initiative. At the time, there were no usability-oriented methods available on the Swedish market. Usability had just started to become a buzzword in the commercial vocabulary, and Infocom was way ahead of its competitors. But in order to make a profit, you need something to sell, and the more concrete the better, so the packaging and the tangible representations of the method were important.
Once the project team decided on producing a handbook, it had many considerations; for example, Should the handbook reveal the whole story? Should the handbook be treated as proprietary information? There was a clear conflict between the commercial interests and the good cause of spreading usability knowledge.
Eventually, the handbook was designed as a spring-back booklet, richly illustrated by a professional graphic designer. The designer also made artwork of the boxy process description, which turned out to be of tremendous importance in disseminating the Delta method. The process map itself became the logotype with which people identified the Delta method. (See Figure 1.)
But the handbook was only one of the representations of the intangible knowledge; a presentation kit and a flyer were developed, and, most important, the full five-day course. The course spanned an impressive range, from cognitive psychology to visual design to a complete case-based walk-through of the method itself.
The Delta project could rightfully be described as a success story. After its launch in 1993, uptake was slow at first, primarily because only a handful of people within the organization had first-hand knowledge of Delta. As always, organizational changes and staff turnover didn't make things better. For example, the project manager himself went on leave for a year, and the primary initiator changed jobs before the method was even launched.
However, in 1994 a new effort was made to introduce Delta into the larger Ericsson organization. A number of courses, seminars, and presentations were given, and several projects within the organization followed the principles of the method, with support by the core Delta team. A certain consciousness started to propagate within parts of the company, and sometimes you could even hear people talking about "doing a little Delta" in their own projects.
Since development of the method, upper management had realized the increasing importance of usability and supported an increased focus on usability issues in general, and the Delta method in particular. It was more difficult to win the acceptance of project managers, even though tailored presentations, focusing on cost-benefit issues and the straightforwardness of the method, were given en masse. Ericsson, with its roots in hardware design and production, has always been quite aware of methodology and mature in applying development processes. One of the drawbacks of the method was that it was an addendum to the official and established development process, rather than an integral part of it. The reason for this was that the Delta project team didn't have the authority to change the official development model. Thus, it was easy for project managers, with their already strained schedules, to simply turn down any suggestions to "addend." However, after some lobbying, in 1995 the Delta method was integrated with the official system development model, and thus described with the same terminology and figures and in the same documents. This was a significant achievement in Delta's history.
A new organizational change at Ericsson resulted in the sale of the Delta method. Infocom was incorporated into another Ericsson company with a product-oriented business, and no services were to be marketed externally. Thus, to regain some of the investment made, the rights to the Delta method were offered on the Swedish market (and the rights to use it internally remained within Ericsson). A large consultancy, having noticed an increasing demand for usability services from its customers, bought the method for a sum that represented only a small fraction of the actual investment but was still fairly substantial considering that all they got was a method handbook and the rights (and obligation) to use and exploit the method. The consultancy also hired one of the main contributors to Delta, and within two years it was considered one of two prime suppliers of usability-oriented services within the country.
Ericsson continued using and refining Delta. During 1996, more courses were given; more seminars, lectures, and presentations were held; and now Ericsson had an edge over the external consultancy. The fact that a consultancy, with its keen eye on cost versus benefit, had been able to make a profit from the method gave an extra boost to the internal marketing campaign. A number of projects using Delta were carried out internally, and one project in particular became a tremendous success: the product at hand was sold before its completion, something that had never happened before. The reason was that the usability team, after immense persistence and negotiation, was allowed to do usability testing of an early prototype on site at a potential customer in Japan. The users were so overwhelmed that they immediately ordered the product.
The next year, this successful case was awarded the status of "Good Practice" in a corporatewide initiative to improve on development processes in order to increase quality and productivity and shorten lead times. This was the most exquisite mark of recognition a method could get within the Ericsson corporation. At the same time, usability appeared on the list of the three "Vital Few Actions" for the corporation as expressed by the Ericsson chief executive officer (CEO).
This recognition created new demands on the supply of usability-oriented services. However, because only a handful of people were considered capable of planning, managing, and carrying out the activities in a Delta project, the usability expertise itself constituted a bottleneck in disseminating the method. As a consequence, 11 human-factors people from various Ericsson companies were flown to Ireland for a week of training to become Delta course teachers, and the number of usability-competent people was expected to grow exponentially from that point. Courses were now given internationally, and requests came from Canada, India, Ireland, and the United States, among others.
The Linköping office was now considered to be the nerve center for usability engineering within the corporation. To establish this reputation, an international (and internal) conference was arranged, with tutorials, reviewed papers, and invited speakers, attracting 130 people from three continents.
By 1999, it was estimated that more than 100 Delta projects had been carried out, and more than 250 internal Ericsson software engineers, managers, and technical communicators had taken the five-day course. Apart from the internal benefits, the Delta efforts had resulted in eight academic theses at various levels and five internationally published articles [1-3, 6, 7].
Most people would say, either from the preceding description or from their own perceptions, that the Delta project was a complete success. So why was it so successful?
First of all, the timing was good. It was during the early 1990s that human-computer interaction started to gain interest outside academia. Computer users became more aware and started to oppose hard-to-use interfaces, computer magazines began to include usability as a criterion for evaluation, and as a natural reaction software suppliers tentatively started to use usability (or user-friendliness) as a term in their marketing efforts. No one could deny that usability had ascended to the premier league of buzzwords in information technology, and now it competed with such terms as object orientation and client-server. In Sweden at the time there was, to the best of our knowledge, only one small consultancy that offered usability-oriented knowledge and services. (This may come as a surprise to those who believe that Scandinavia has been regarded as a pioneer in the history of user-centered design.) A few years earlier, the Delta project could probably not have been initiated, and a few years later, the Delta method would definitely not have had the advantage of being the only coherent and commercially available method.
Another important factor in the success of Delta was of course the method itself. It was designed for a commercial environment; therefore,
- It was stripped from activities, documents, and artifacts for which the benefits did not sufficiently outweigh the costs or that were not comprehensible (i.e., "academic");
- User involvement was "optimized," considering that users are scarce resources;
- The method was fairly straightforward and constituted a good pedagogical model that could easily be explained to managers in about 20 minutes; and
- The description was free from dogmatism.
Finally, as we touched on earlier, the whole method was well packaged; there was a list of services, a course with a fixed price, and there were brochures and books about it. If all that wasn't convincing enough, there were scientific results proving that it worked within Ericsson!
The most important success factor, however, was the individuals who developed and promoted the Delta method. They were not usability experts when they started out, but over time they became experts in the eyes of their colleagues and managers (and a little later they became actual experts). They were firm believers in the importance of usability; they were vocal, intrepid, self-confident, devoted, and determined to do whatever was needed to deliver the users from bad usability.
In short, methods seem to follow the same laws as any product. Without the right timing and the right marketing it doesn't really matter how good the product is. From this perspective, the contribution of the researchers to this success was quite limited. This may also explain why so many good methods never spread outside academia.
The usability maturity level of an organization is usually discussed and measured in terms of depth, rather than breadth. In the usability maturity model, for instance, the so-called "human-centeredness" scale is used to measure usability maturity on six levels, from "unrecognized" to "institutionalized" for a particular "organization." The scale is intended to assist those who wish to improve their organization's performance of user-centered activities, and the model gives useful advice for the transition between levels. But maturity, in terms of how large a part of the whole organization has any awareness of usability, is not addressed.
Likewise, in other literature and in popular computer-human interaction (CHI) tutorials, good advice on how to introduce usability often starts with "getting a foot in the door," from which an increased focus on human-centered design is developed until the organization has a completely user-centered perspective.
Depending on what we mean by "organization," the Delta project could be said to be a complete success or a complete failure. With an appropriate definition of "organization," we managed to reach the top levels of usability maturity. But considering Ericsson as a whole, things look a bit different.
Ericsson employs around 130,000 people, of whom about one-fifth are involved in software development. Some 250 of these, or about 1 percent, have taken the Delta course in the past six years.
Furthermore, about 1,000 software development projects per year are started within Ericsson. That adds up to 7,000 since 1993. Delta has had an impact on 100, or 1.4 percent, of these.
Our achievements, in terms of collective consciousness, are also disputable. For example, the organization that maintains the corporate Web site holding the "Good Practice" descriptions referred to previously decided, during a recent revision, to remove the Delta description from the site. The reason was that only three requests for information had been logged during the past year. Also, the proclamation of usability focus as one of the Vital Few Actions lasted only one year; it disappeared with the CEO who pronounced it.
This is not to say that only 250 software developers are aware of usability issues, or that only 100 projects have had an explicit focus on usability, or that our whole organization lacks interest in usability issues. On the contrary, we have seen a huge increase in usability awareness since we started eight years ago. But neither we nor the Delta method can justly claim the credit for that.
As mentioned previously, Ericsson is a highly process-conscious organization. The flip side of this is that people working with systems development have seen allegedly "revolutionary" new methods come and go every year. Understandably, most of them just ask you to stand in line when you come along with yet another something-oriented method.
Not only are there too many methods, but they are usually too complex. Methods need to be complex to be marketable, and simple to be implementable. They need a certain level of complexity to merit a name, a handbook, a logotype, and all the other manifestations that are marketable. But they need to be simple if you want people to adopt them.
In a large organization with a great diversity of products and technologies, it seems impossible to achieve a unified way of developing systems. To us methodologists, the thought of the whole organization using the same processes is quite attractive, and sometimes we are fairly persuasive about it. But we have yet to find evidence that one method fits all development teams, any more than one size of shoe fits all developers.
Furthermore, methods often lack an adequate level of humbleness. With a new method comes a new set of values on which the method is based; the work done so far in an organization is suddenly evaluated according to this new set of values, and turns out to be not so good. This is natural, but it is not uncomplicated. Apart from the inherent insinuation that "what you have done so far has been wrong," which may not be much of a problem, methodologists suggest that people who may have far more domain experience than themselves should buy in to these new values instantly. In the case of usability, this strategy is taken to the extreme. We often start out by showing examples of bad usability that people can laugh at, and then we tell them how things should be done. Of course, the examples come from systems or products whose development we were not involved in. If you do not have a deep understanding of the users, their tasks, and the context, as we did not, you will not be able to evaluate usability.
Usability proponents sometimes tend to believe that usability is the most important aspect in every situation. "If it's not usable, it's worthless," we say, and this seems an incontestable fact. But isn't it also a fact that if it doesn't function, it's useless? Or that if it doesn't sell, it's useless? And if it's not portable, or reliable, or doesn't have the right price, it may not sell, and thus it will be useless? Sometimes we forget that quality attributes other than usability are necessary to consider in building a successful product. And sometimes, in our quest to introduce usability in our organizations, we boost usability at the expense of other quality aspects. So what happens to the specific methods to ensure reusability, portability, fault tolerance, delivery precision, and so on? Which method is applied in a given case sometimes depends more on current trends, or on who shouts the loudest, than on a rational and careful weighing of pros and cons. And sometimes usability proponents have very loud voices, which in the long run may turn into a disadvantage.
Do we, as usability methodologists, practice what we preach? Are we as careful and cautious when we introduce new methods in an organization as we are when we introduce new systems?
To be successful in introducing a new system in an organization, we know how important it is that the organization be well prepared for it. We often talk about "preparing the ground for a soft landing," which means that we involve people from the receiving organization in development, analysis, design, and evaluation. We stress that the receiving organization needs to be informed about what is going on and how it is going to affect the individual employee, and so on. We get upset when we hear the old argument "if it's just good enough, it will sell itself."
In the introduction to this article we proudly stated that the development of the Delta method was a joint venture between Linköping University, as domain experts, and Ericsson, as the users. Furthermore, "the researchers took extra care to ensure that the method was well adapted to the organization and its current development methodology." This is of course true; the method was developed in good usability-oriented spirit. But little effort was made to ensure that the method itself had a soft landing.
Finally, if for once we did a user analysis of software engineers (both authors are trained software engineers), the users of our methods, we would understand that they don't like to be told how to do things. They are trained to solve problems, and they are usually amazingly skilled at just that. They take pride in solving complex problems in the most efficient ways, and the method is part of the solution. If you tell them how to solve a problem, you take away much of the creativity in their work. Therefore, introducing a method into a software development organization is a delicate problem from the outset, and usability experts, of all people, should realize this.
The Separatist's Dilemma
When introducing a new concept into an organization, you need to boost it and focus on what makes it different from the traditional. But there comes a point when separation becomes counterproductive and may be the most serious threat to the desired exponential dissemination. This is probably the most important lesson of our experience from the Delta project: separation is necessary for depth, but detrimental to breadth.
In many cases in which the usability concept is introduced, it is common to create separate tools and methods, with separate vocabularies, advocated by people with separate educational backgrounds who receive separate roles and titles, and who have separate budgets and write separate requirements in separate specifications, and then verify these requirements in their separate laboratories.
To a certain point this separation is necessary; we needed to create the role of "usability expert," for example, because an authority, or somebody setting the norms and having all the answers, is important in a changing environment. In our case it wasn't really a conscious or strategic decision to create this role; it just happened. Moreover, the reason why usability requirements were often separate from other requirements, sometimes even in separate documents, was that there were separate people doing the user and task analyses and functional analyses, respectively. And the usability requirements didn't really fit into the traditional forms of specifications anyway, which is quite unfortunate considering the power and importance that requirement specifications have in industrial software development.
Furthermore, usability labs, our separate requirement verification facilities, are usually great marketing tools, because they make the otherwise intangible usability concept quite concrete. The usability lab constitutes a "room of usability," with a sign saying "Usability Lab," where skeptics can walk in and tamper with some cool pieces of technology. It's very real. Even though the Delta method encourages field testing in the right context, the decision to invest in a usability lab was made as soon as we could afford it.
But in our case, the separation soon enough became counterproductive. Usability had become so special that it was hard for everyone but the handful of "experts" to find the confidence to manage usability activities themselves. Consequently, an expert was always called in as soon as a new project started, to help out with methodology issues, project planning, and, not least, marketing within the development team. Even though there were quite a few "usability champions" in the development organization who would have been perfectly capable of managing these issues themselves, they always called an expert for help. In effect, the handful of experts became a bottleneck in the dissemination of usability.
Considering again the usability maturity model, the organizational units within Ericsson that employ the Delta method would probably rank somewhere between "integrated" and "institutionalized" according to the indicators provided in the model description. These are the two highest maturity levels. However, in many of these cases there is still much work for a usability advocate to do, and if we need a usability advocate for each organizational unit, we need thousands to cover our corporate needs.
With our initial "strategy" we have not yet seen the exponential dissemination that is needed in order to have a serious impact on our development organization. Dissemination has been linear instead, albeit with a reasonably high angle of inclination, which allows (or allures) us to describe the project as a success story. But to get to the next stage in the deployment process, where we don't talk so much about usability but instead just do it, throughout the development organization, we need to stop separating usability from the rest of the world and instead focus on integrating the concept with our daily routines. We, the usability experts, need to make ourselves redundant, and make the concept of usability as commonplace as object orientation or client-server technology. We don't have a special organizational unit for "object-orientedness," for example; developers have adopted object-oriented methods and incorporated them into their daily work. As long as people keep asking us for introductory-level seminars in usability, we know that we have not succeeded yet. Think about how many people these days are asked to give introductory-level seminars in object orientation or client-server techniques. Not many; those concepts succeeded!
It seems to us, then, that it's time for a second step, a totally different strategy. We definitely had to take that first step, and some parts of our large organization may still need to stretch that first step a little longer. But in many cases, the focus should now be on diffusion rather than perfection. We need to find a way to increase the average level of usability maturity by raising the floor rather than sharpening the peaks.
To summarize, we have learned that we need to be more humble, that we need to give greater consideration to other aspects of quality, that methods must not be complex, that software engineers don't like to be told how to solve a problem, and that usability needs to be integrated to the point where it's not even talked about much. So what should our next step be?
The Power of Requirements
Analyzing software engineers would also reveal their reverence for requirements. We mentioned earlier the importance of requirements specifications for industrial software development. In fact, the requirement is the smallest and single most important piece of information; requirements drive the whole development project toward a complete product. Requirements are followed to the letter; nothing much is done unless a requirement states so. This stricture, combined with the fact that about 90 percent of the requirements in our organization are formulated as "It shall be possible to...," inevitably leads to lots of features that are just that: possible to do. But not necessarily easy to do. By harnessing this power and importance of requirements, we hope to achieve exponential growth in usability awareness.
Our model (See Figure 2) is based on requirements and is designed to avoid or eliminate many of the problems we have described. We assume a requirements database in which each requirement is defined as a hierarchy of attributes that should be "set," or assigned values, in order to proceed from one development stage to another. These attributes include information such as "Name," "Short description," "Origin," "Date," "Responsible," "Priority," and other information that is usually needed for handling requirements. Attributes can be simple, as in the preceding examples, or compound, as in "Models," which consists of textual descriptions, use-case models, and sketches, for instance. There are compound attributes called "User," "Task," and "Usage context" that should also be set in order for the requirement to be propagated to the next stage in the development. This is where usability becomes integrated.
To accommodate other aspects of quality, attributes may exist for portability, reusability, and reliability. The model, developed by some of the Delta originators, was recently proposed within Ericsson. It has the power to incorporate usability issues where it matters the mostbecause our development efforts are driven by requirementswhile considering other aspects of quality on the same level of importance.
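As a rough illustration, the attribute structure described above could be sketched as follows. Only the attribute names ("Name," "Origin," "Models," "User," "Task," "Usage context") come from the text; the classes, the `is_set` convention, and the example requirement are hypothetical, not Ericsson's actual database design.

```python
from dataclasses import dataclass, field

@dataclass
class CompoundAttribute:
    """An attribute composed of several parts, e.g. "Models" (text, use cases, sketches)."""
    parts: dict = field(default_factory=dict)

    def is_set(self) -> bool:
        # A compound attribute counts as set only when every part has a value.
        return bool(self.parts) and all(v is not None for v in self.parts.values())

@dataclass
class Requirement:
    attributes: dict = field(default_factory=dict)

    def is_set(self, name: str) -> bool:
        value = self.attributes.get(name)
        if isinstance(value, CompoundAttribute):
            return value.is_set()
        return value is not None

    def ready_for_next_stage(self, required: list) -> bool:
        # A requirement propagates only when every attribute required
        # at this decision point has been assigned a value.
        return all(self.is_set(name) for name in required)

req = Requirement(attributes={
    "Name": "Fast call setup",
    "Origin": "Customer workshop",
    "Task": None,  # usability-oriented attribute not yet set
})
print(req.ready_for_next_stage(["Name", "Origin", "Task"]))  # False
```

The point of the sketch is that usability information ("User," "Task," "Usage context") is checked by exactly the same mechanism as any other attribute, which is what places it on par with other aspects of quality.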
Simply suggesting a new model for our requirements will probably not change much. And from our experience, we are reluctant to introduce new methods. Instead, we would like to view system development as simply "setting the attributes" of the requirements. The software developers may use whatever techniques are available to fill in the information in the attribute structure. Some of the attributes, such as "Name" and "Origin," are easy to set. Others may require expertise, or at least some form of help. But we, and especially the software engineers, do not want a separate, monolithic method just to fill in the information related to usability.
Setting the "Task" attribute, for example, may require some sort of task analysis. If an analysis has been done previously, the requirements handler may just link to relevant parts of the analysis document. If not, he or she may need some assistance, and just by clicking on the attribute, the system provides help on various levels, from examples, templates, and minimal step-by-step descriptions to film clips of someone interviewing a user. No method mentioned, just a humble helping hand.
We need some way to describe progress, but not in terms of what has been done, only in terms of what is known about our requirements. This need is a natural consequence of systems development being, above all, a learning process. At the beginning of the life of a project (or requirement), not much is known about what will result. Different stakeholders have different ideas and visions, but in the perspective of a learning process, most everything still remains to be learned, a fact that should be reflected in the requirements database. The database should also reflect the progress of the learning process. Therefore, we have defined a number of states that a requirement can have, depending on how much is known about it.
The state model (See Figure 3) is designed to reflect natural stages in a development process, and the decision points between states can be viewed as milestones.
A requirement that is captured means that it has been identified as important for further elaboration. At this stage, the requirement may only have the "Name," "Origin," and "Text description" set. A specified requirement holds all information needed to start design and implementation, including the usability-oriented attributes. A requirement in the planned state has been scheduled for realization, and resources have been allocated for it. Finally, a realized requirement has been verified and is ready for delivery. The actual work is carried out between the states.
The decision points between each state are important. At each decision point, the product committee basically has the power to make one of two decisions:
- Propagate the requirement to the next state. This implies that the product committee confirms that all necessary information is available at this stage and that the requirement should be further elaborated (or delivered, in the final state).
- Not propagate to the next state. This means that some issues need to be further investigated before a decision can be made.
The state model is not rigid, in terms of what information can be added at what state. The states merely define a least acceptable level of information. So for example, there is no rule against adding "planning information" to a captured requirement. It might even be required sometimes to do so. Furthermore, it is the product committee, those who know the domain, that decides how to define the states in terms of what attributes need to be set at each particular stage.
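The state model and its decision points can be sketched as a minimal state machine. The four states come from the text; the attribute lists at each decision point are invented placeholders, since, as noted above, it is the product committee, not the model, that defines what must be set at each stage.

```python
from enum import Enum

class State(Enum):
    CAPTURED = 1
    SPECIFIED = 2
    PLANNED = 3
    REALIZED = 4

# Illustrative only: what the product committee might require before a
# requirement may leave each state. The actual lists are defined per domain.
REQUIRED_TO_LEAVE = {
    State.CAPTURED: {"Name", "Origin", "Text description",
                     "User", "Task", "Usage context"},   # specification complete
    State.SPECIFIED: {"Schedule", "Resources"},          # planned for realization
    State.PLANNED: {"Verification"},                     # realized and verified
}

def decide(state: State, set_attributes: set) -> State:
    """Decision point: propagate only if all required information is available."""
    if state is State.REALIZED:
        return state  # final state: ready for delivery
    if REQUIRED_TO_LEAVE[state] <= set_attributes:
        return State(state.value + 1)   # propagate to the next state
    return state  # "not propagate": some issues need further investigation

print(decide(State.CAPTURED, {"Name", "Origin"}))  # still State.CAPTURED
```

Note that the subset check (`<=`) mirrors the non-rigidity of the state model: extra information, such as planning data attached to a captured requirement, never blocks propagation; only missing required information does.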
The requirements-driven evolutionary model, or RDEM, borrows ideas from Tom Gilb's evolutionary project management model. We see RDEM not as a method, but rather as an infrastructure, a framework in which the various development teams in our large and global organization can add their own competencies, their own methods and techniques. It is not revolutionary in any respect (we have learned to be humble), but it is a novel perspective for us that takes into consideration all the lessons we learned from the Delta project: it is simple, it does not say how to, it appreciates existing expertise and domain knowledge, and it integrates usability at the lowest level, placing it on a par with other aspects of quality.
We probably could not have introduced usability with the RDEM framework back in 1992. The first step of separation was necessary in order to put the focus on usability and to explain what usability-oriented development is. In some parts of our organization separation may still be an appropriate strategy in order to achieve momentum in disseminating usability. But in many other parts we will probably not get any further with our first strategy. Still, the Delta method exists as a pedagogical framework, and the techniques described in the method could very well be applied within the context of the RDEM. Learning about our users, their tasks, and the contexts of use will always be necessary, and the Delta method suggests how to do this.
The RDEM framework presents many new challenges, some having to do with usability issues. For example, will it actually help an exponential dissemination of usability awareness? Other challenges deal with requirements management issues in general, including how to understand and handle the complexity of relationships between requirements and between requirement attributes. As we direct our research efforts toward these new problem areas, we want to be able to tell a real success story next time.
We wish to thank Kristian Sandahl, Jonas Löwgren, and Åsa Granlund for valuable comments on earlier drafts of this article.
1. Carlshamre, P. Technical Communicators and System Developers Collaborating in Usability-Oriented Systems Development: A Case Study. In Proceedings of the ACM 12th Annual International Conference on Systems Documentation (SIGDOC'94), ACM, Oct 2-5, 1994, Banff, Canada.
2. Carlshamre, P. and Karlsson, J. A Usability-Oriented Approach to Requirements Engineering. In Proceedings of the 2nd IEEE International Conference on Requirements Engineering, April 15-18, 1996, Colorado Springs, CO, USA. IEEE Computer Society Press.
3. Carlshamre, P., Löwgren, J., and Rantzer, M. Usability Meets the Real World: A Case Study of Usability-Oriented Method Development in Industrial Software Production. In Proceedings of the 4th International Symposium on Human Factors in Organization Design and Management (G. Bradley and H. Hendrick, eds.), May 29-June 2, 1994, Stockholm, Sweden, North-Holland (Amsterdam).
9. Yeh, A. Requirements Engineering Support Technique (REQUEST): A Market Driven Requirements Management Process. In Proceedings of the 2nd Symposium of Quality Software Development Tools (May, New Orleans, LA), IEEE Computer Society Press, 1992, pp. 211-223.
Pär Carlshamre and Martin Rantzer
Ericsson Radio Systems
- 1993 - Launch of Delta
- 1995 - Delta integrated with official system development model
- 1997 - Delta acknowledged as "Good Practice"; CEO cites usability as one of three "Vital Few Actions" for Ericsson
- 1999 - 250+ participants pass 5-day Delta course; more than 100 projects used Delta
©2001 ACM 1072-5220/01/0100 $5.00