Authors:
Morten Hertzum, Torkil Clemmensen, Pedro F. Campos, Barbara Rita Barricelli, Carl Emil Derby Hansen, Linnea K. Herbæk, Jose Abdelnour-Nocera, Arminda Guerra Lopes, Parisa Saadati
Over the past decades, usability testing has become widely used for revealing design problems in information systems while they are still at the prototype stage. Typically, these tests involve removing users from their work for an hour or two to have them solve preset tasks with a system prototype in a lab-like setting. As a result, usability testing is insensitive to many of the organizational and contextual issues that determine the fit between a system and its real-world environment. Methods such as work domain analysis and scenario-based design aim to address this limitation, but from an analysis-and-design perspective. In contrast, pilot implementation is an evaluation method: It involves evaluating a system in the field and is thereby an important supplement to usability testing.
Evaluation in the field allows for identifying subtle organizational and contextual issues that are critical to the adoption of a system and to its consequences for those affected by it. This makes pilot implementation valuable to the interaction designer. However, pilot implementations are challenging to conduct, the identified issues may be muddled, and the possibilities for resolving them may be limited. In deciding whether and when to apply the method of pilot implementation, interaction designers need to be aware of its strengths, weaknesses, opportunities, and threats (SWOT). In this article, we offer a critical perspective on the adoption of pilot implementation in interaction design, supported by the results of a SWOT analysis.
→ Pilot implementation is a method for evaluating the fit between a system and its real-world environment prior to release.
→ The strengths of pilot implementation revolve around its realness and the weaknesses around its partialness.
→ A pilot implementation reveals the consequences of a system for those involved and affected.
A pilot implementation is "a field test of a properly engineered, yet unfinished system in its intended environment, using real data, and aiming—through real-use experience—to explore the value of the system, improve or assess its design, and reduce implementation risk" [1]. This definition, illustrated in Figure 1, points to four ways in which pilot implementation goes beyond usability testing.
Figure 1. The elements of a pilot implementation.
First, pilot implementations are conducted in the field, not the lab. This difference in setting means that the pilot system is exposed to the users' technical infrastructure, organizational processes, incentive structures, power relations, and so forth.
Second, pilot implementations involve using the pilot system for real work. That is, preset tasks are replaced with the users' real work, which has genuine interdependencies, deadlines, and consequences.
Third, pilot implementations are conducted toward the end of the system development process. This is necessary because pilot systems must be properly engineered; they are not merely mock-ups or prototypes.
Fourth, pilot implementations last for days, weeks, or even months. Therefore, data about what is learned must be collected in ways other than by listening in on the users while they think out loud.
Just as pilot implementation differs from usability testing, it also differs from the early stages of full-scale implementation. While full-scale implementation is conducted to realize benefit from the new system through continued use, pilot implementation is a test conducted to learn through temporary use. The learning objective means that a pilot implementation must strike a balance between integrating the system in day-to-day processes and maintaining a focus on the system as an object under evaluation.
As an example of a pilot implementation, Pereira et al. [2] describe the creation and pilot use of a blog for attracting more students to a university master's program. The blog posts contained, for example, videos about the contents of the master's program and summaries of theses written by its students. New blog posts were added several times a week throughout two periods of pilot use (July–August 2020 and January–August 2021). The data collected to learn from the pilot implementation showed that most visitors arrived at the blog from Facebook, making links on this platform particularly important. The blog posts that attracted the most visitors concerned contemporary issues, such as smart cities, suggesting the importance of varied content. In terms of enrollment, 27 students enrolled in 2020, compared to 18 the year before. In 2021, 50 students applied, exceeding the maximum intake of 35 students. By documenting this increase, the pilot implementation provided a strong argument for making the blog permanent. The main challenge was the resources required to post new blog content on a continual basis.
Informing Interaction Design through Pilot Implementation
This article is the outcome of a workshop held at the INTERACT 2021 conference by the IFIP Working Group 13.6 on Human-Work Interaction Design. At the workshop, 11 studies of pilot implementation were presented and discussed. In the months after the workshop, nine of its participants—the authors of this article—continued the discussions and conducted a SWOT analysis of pilot implementation. We chose a SWOT analysis because it explicitly looks for both pros and cons and because it could be conducted in a consensus-building manner that identified the pilot-implementation features we agreed on (see sidebar).
The SWOT analysis identified nine strengths, three weaknesses, 10 opportunities, and four threats (Table 1). We hope that our analysis will stimulate discussion about pilot implementation and inform decisions about when and how to apply this method. The strengths accord with the positive experiences from the above-mentioned master's program blog and indicate that pilot implementation has a lot to offer. However, the uneven distribution of strengths and opportunities versus weaknesses and threats also reveals a need for further research on the features that weaken and threaten pilot implementation.
Table 1. The agreed-upon strengths, weaknesses, opportunities, and threats of pilot implementation.
We contend that pilot implementation has nine strengths (Table 1). The strengths revolve around the realness that is achieved by trying out a system in its intended environment. Pilot implementations share this realness with methods such as beta tests and living labs. The aims of these methods overlap, but beta tests tend to be more about the technical quality of a system than the social and organizational issues included in pilot implementation. Living labs span labs that resemble a living environment as well as living environments that are instrumented for data collection; pilot implementation is exclusively about the latter.
The use of a system for real work makes its consequences salient to its users, who may find that their daily work becomes easier, that their workload increases, or that workarounds become necessary. Usually, this salience is associated with the post-implementation stage, after a system has gone live [4]. Pilot implementation makes the consequences of using a system salient to those involved and affected while the design of the system has not yet been finalized, that is, prior to go-live. It thereby provides possibilities for instigating increased accountability for these consequences and for remedying negative consequences before the system is released for full-scale use.
The identified weaknesses (Table 1) revolve around the partialness of pilot implementation. While partialness is inevitable in any activity that attempts to stage real use prior to go-live, pilot implementation can go a long way toward reducing the weaknesses, for example by prolonging the pilot implementation or involving multiple pilot sites. However, the reduction must be weighed against the cost of the extra time and sites. Rather than seeking to minimize the weaknesses (at high cost), it appears advisable to factor them into the interpretation of the learning from the pilot implementation. In doing so, the first weakness almost becomes a recommendation for how to handle the partialness: by letting the pilot implementation inform discussions rather than expecting it to put an end to them.
Pilot implementation makes the consequences of using a system salient to those involved and affected while the design of the system has not yet been finalized.
The essence of the identified opportunities for deriving additional benefit from pilot implementation is that it creates room for experiencing and experimenting with a future system and the associated ways of working. Because pilot implementations have limited organizational and temporal scope, the cost of failure is restricted. Thus, it becomes feasible to take somewhat larger risks and learn from the outcome. Several of the opportunities (Table 1) point to ways of extending this learning through the incorporation of, for example, user-centered approaches, facilities for managing the feedback data, or tools for end-user development. These examples emphasize that pilot implementations are not merely tests but also opportunities for innovation. New possibilities may emerge as a result of the pilot implementation and be seized by its participants to pursue additional goals with the system.
The identified threats (Table 1) emphasize that pilot implementations may fail. Diverse issues must be handled to avoid failure, requiring that those in charge of a pilot implementation spread their attention widely. The threats show that the issues in need of attention include, among others, schedule pressure, expectation management, and sufficient preparation. The preparation phase may last as much as 12 times longer than the period of pilot use if the users first need to reach alignment and external events interfere with the basis for reaching it [5].
It should be noted that the identified threats focus on why a pilot implementation, once conducted, may fail to generate benefit. They do not explain the issues that may lead to deciding against conducting a pilot implementation in the first place.
We do not mean to imply that the 26 items in our SWOT analysis constitute a complete list. Rather, we sought to err on the side of caution by including only items on which we agreed or strongly agreed. Researchers and practitioners with backgrounds different from ours may be aware of additional items or rate items differently. Our backgrounds are in research and mainly in systems for use at work. Complementary input is needed from, for example, design practitioners, system users, and people with a managerial outlook. These groups experience systems from different perspectives. Future work should attend to these differences, which highlight that pilot implementation may uncover contentious and political issues [5]. Thus, a consensus-building approach, such as the one taken in this article, will not suffice. Studies must also make room for dissensus among groups with different perspectives on pilot implementation. In our future work, we will continue to conduct case studies and action research to investigate the pros and cons of pilot implementation for all involved and affected.
One SWOT analysis cannot settle the qualities of pilot implementation. That said, we contend that there is untapped potential in recognizing pilot implementation as a method for evaluating the consequences of systems prior to their release. Currently, pilot implementation is often conflated with the early stages of full-scale implementation. The human-computer interaction community could play a key role in the discussion of the qualities of pilot implementation and in positioning it as a method for evaluating the fit between a system and its real-world environment, thereby complementing lab-based usability tests.
1. Hertzum, M., Bansler, J.P., Havn, E., and Simonsen, J. Pilot implementation: Learning from field tests in IS development. Communications of the Association for Information Systems 30, 1 (2012), 313–328; https://doi.org/10.17705/1CAIS.03020
2. Pereira, M.C., Ferreira, J.C., Moro, S., and Gonçalves, F. University digital engagement of students. Sense, Feel, Design: INTERACT 2021 IFIP TC13 Workshops, Revised Selected Papers. LNCS vol. 13198. Springer, Cham, 2022, 376–390; https://doi.org/10.1007/978-3-030-98388-8_33
3. Brady, S.R. The Delphi Method. Handbook of Methodological Approaches to Community-Based Research: Qualitative, Quantitative, and Mixed Methods. Oxford Univ. Press, Oxford, UK, 2016, 61–67.
4. Wagner, E.L. and Newell, S. Exploring the importance of participation in the post-implementation period of an ES project: A neglected area. Journal of the Association for Information Systems 8, 10 (2007), 508–524; https://doi.org/10.17705/1jais.00142
5. Mønsted, T., Hertzum, M., and Søndergaard, J. A socio-temporal perspective on pilot implementation: Bootstrapping preventive care. Computer Supported Cooperative Work 29, 4 (2020), 419–449; https://doi.org/10.1007/s10606-019-09369-6
Morten Hertzum is a professor of information science at the University of Copenhagen. His research interests include human-computer interaction, sociotechnical change, and healthcare informatics. He coedited the book Situated Design Methods (MIT Press, 2014) and has authored books about usability testing and organizational implementation. [email protected]
Torkil Clemmensen is a professor in the Department of Digitalization at Copenhagen Business School. His research interest is in psychology as a science of design. His research centers on cultural-psychological perspectives on usability, user experience, and the digitalization of work. He contributes to human-computer interaction, design, and information systems. [email protected]
Pedro F. Campos is associate professor with habilitation at the University of Madeira. He is VP for research at the Interactive Technologies Institute, part of LARSyS, a reference associate laboratory. His research interests include persuasive technologies, behavior change, pilot implementation, and human-work interaction design. [email protected]
Barbara Rita Barricelli is an assistant professor in the Department of Information Engineering at the University of Brescia. Her research interests are human-computer interaction, end user development, computer semiotics and semiotic engineering, and participatory design. She is chair of the IFIP Working Group 13.6 on Human-Work Interaction Design. [email protected]
Carl Emil Derby Hansen is a master's student at Copenhagen Business School studying business administration and information systems. He has a bachelor's degree in business economics and IT. His bachelor's thesis studied a change and implementation process in a case of pilot implementation. [email protected]
Linnea K. Herbæk is an M.Sc. student studying business administration and IT at Copenhagen Business School, where she received her bachelor's degree. Her research interests are within the areas of IT implementation approaches, change management, interaction design, and research design. [email protected]
Jose Abdelnour-Nocera is a professor of sociotechnical design at the University of West London. His interests are in stakeholder diversity in the design of people-centered systems and in software development teams. He has been involved in several projects in international development, health, enterprise resource planning systems, service design, and higher education. [email protected]
Arminda Guerra Lopes is a professor at Polytechnic Institute of Castelo Branco. She is a researcher at LARSyS/Interactive Technologies Institute (ITI) in Portugal. Her research interests include human-computer interaction, interaction design, and research methodologies. She holds a Ph.D. in human-computer interaction from Leeds Metropolitan University in the U.K. [email protected]
Parisa Saadati is a lecturer in IT at the University of West London. She is currently a Ph.D. student in sociotechnical design for automated systems. Her research interests lie in designing future automated systems and workplaces based on human-centered systems and Agile project management. [email protected]
Sidebar: How We Made the SWOT Analysis
The SWOT analysis proceeded in three steps inspired by the Delphi method [3]:
First, each author made a SWOT analysis of the pilot implementation they had presented at the workshop. This step resulted in SWOT analyses of six of the 11 studies presented at the workshop (some of the authors had collaborated on the same studies).
Second, the first author of this article compiled a list of 10 strengths, 10 weaknesses, 10 opportunities, and 10 threats related to pilot implementation. This list was based on the SWOT analyses from the first step, a reading of the other studies presented at the workshop, and the pilot-implementation literature.
Third, the nine authors individually rated the 40 SWOT items on a seven-point scale from "Strongly disagree" (1) to "Strongly agree" (7).
We retained the 26 items that received a median rating of 6 or 7, indicating that the majority of the authors agreed or strongly agreed that these items captured pilot-implementation features. The other 14 items were excluded because only a minority of the authors agreed more than weakly with them.
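To make the retention criterion in the third step concrete, here is a minimal sketch of the rating aggregation. The item labels and ratings are hypothetical illustrations, not the authors' actual data (the retained items appear in Table 1):

```python
# Minimal sketch of the rating-aggregation step in the SWOT analysis.
# The item labels and ratings below are hypothetical illustrations,
# not the authors' actual data.
from statistics import median

# Each item maps to the nine authors' ratings on the seven-point scale
# from "Strongly disagree" (1) to "Strongly agree" (7).
ratings = {
    "Strength: evaluation under real-use conditions": [7, 6, 7, 6, 7, 6, 6, 7, 6],
    "Weakness: partialness of the pilot setup": [6, 6, 5, 7, 6, 6, 7, 6, 6],
    "Threat: example of an excluded item": [4, 5, 3, 5, 4, 6, 5, 4, 5],
}

# Retain items with a median rating of 6 or 7, that is, items that the
# majority of the authors agreed or strongly agreed with.
retained = {item: scores for item, scores in ratings.items() if median(scores) >= 6}

for item, scores in retained.items():
    print(f"{item}: median = {median(scores)}")
```

Run on these hypothetical ratings, the first two items have a median of 6 and are retained, while the third has a median of 5 and is excluded, mirroring how 26 of the 40 actual items met the criterion and 14 did not.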