Features

XXIX.6 November - December 2022

Esports and expertise: what competitive gaming can teach us about mastery


Authors:
Ben Boudaoud, Josef Spjut, Joohwan Kim, Arjun Madhusudan, Benjamin Watson


Historically, much research and development in human-computer interaction has focused on atomic and generalizable tasks, where task completion time indicates productivity. The emergence of competitive games and esports, however, reminds us of an alternative perspective on human performance in HCI: mastery of higher-level, holistic practices. Just as a world-renowned artist is rarely evaluated for their individual brushstrokes, so skilled competitive gamers rarely succeed solely by completing individual mouse movements or keystrokes as quickly as possible. Instead, they optimize more task-specific skills, adeptly performing challenges deep in the learning curve for their game of choice.

Insights

  • Esports is a great platform for studying the development of expertise.
  • Game developers work carefully to drive engagement by shaping reward over time.
  • We can borrow ideas from esports to improve other aspects of HCI.

Mastery or expertise refers to the acquisition of comprehensive knowledge or ability in a certain art, technique, or task. Experts typically differentiate themselves from others via high levels of performance specific to a given activity, but not necessarily through their underlying simpler and more general skills. For example, elite chess players have superior recall of chess-piece configurations drawn from actual matches, but not of random configurations [1]. Musical savants are superior at recalling tonal sequences following Western scale structure, but not with unfamiliar sequences that violate Western musical conventions. Similarly, skills in esports are most easily transferred when in-game tasks are not just mechanically but also contextually alike. Thus, the perceived superiority of elite esports athletes can be better understood by considering gaming tasks holistically, as opposed to dissecting these tasks into atomic actions.


Expertise is difficult to achieve without well-known tools with high skill ceilings (i.e., tools that do not limit the individual's performance). This, in part, explains the rarity of equipment changes in traditional sports and the lack of interest in sports with high levels of arbitrariness (e.g., professional coin flipping). Similarly, competitive gaming seldom changes its interfaces. Instead, esports athletes spend hours mastering the human-computer interface specific to their game, given their play style. We believe that esports presents a unique opportunity to HCI researchers: to return focus to the support of task mastery. Rather than just perpetually improving interfaces to support ease of learning and low-level efficiency, the field should also emphasize understanding how to enable deep expertise with well-known, high-skill-ceiling interfaces. By studying mastery in esports, researchers can support mastery in a broad range of other tasks.


The perceived superiority of elite esports athletes can be better understood by considering gaming tasks holistically, as opposed to dissecting these tasks into atomic actions.


At first glance, this suggestion may appear self-contradictory. How are we to accelerate and improve mastery in interfaces without changing them? The answer to this riddle lies in thinking beyond the interface. Computerized tasks provide unique opportunities to measure and coach performance, for example, by allowing the exact reproduction of challenging tasks for training purposes. By leveraging this highly controlled nature of tasks together with precise, granular, and repeated measures of user performance, computers provide a unique opportunity to enable learning environments that carry users far deeper into mastery than most traditional sports coaching.

Background

Human-computer interaction has long been driven by ease of use, with interfaces skeuomorphically representing well-known analog affordances, minimizing barriers to entry and accelerating acquisition of expertise early in the learning process. Examples include the pointer-based desktop interface, touch-based mobile interfaces, joysticks and steering wheels for flight and car simulators, and pressure- or distance-sensitive pads for drawing. In all of these cases, the human-computer interface borrows from an interaction in the physical world that, at some point, was novel, requiring skilled mastery by humans.

Like that of many HCI researchers, our own prior work studies atomic actions, categorizing the effects of system parameters such as latency, frame rate, resolution, and display size on fundamental first-person aiming tasks. However, in conducting this work, we have begun to realize that these parameters make up only a small portion of mastery within competitive gameplay. For example, a static targeting task, among the best studied in the HCI literature, rarely mimics competitive gameplay, wherein targets are often other users intentionally trying to create difficult-to-follow motion paths. Similarly, a well-controlled and predictable targeting dynamic such as a point-and-click model does not always do a good job of mimicking the complex weapon dynamics of kick, spread, and areas of effect often found in actual competitive first-person shooter (FPS) titles. Figure 1 illustrates two critical respects in which static point-and-click tasks differ from common FPS targeting tasks: the former ignore target motion and weapon dynamics. A Fitts's-style width and distance can be defined for both tasks, but the 3D nature of the first-person targeting task makes the perceived experience very different.

Figure 1. A comparison of a "pure" Fitts's law task and an FPS targeting task, both of which have been demonstrated to conform to Fitts's law. While both tasks respect a log(D/W) difficulty formulation, the FPS targeting task includes substantially more distracters, 3D effects, and scene dynamics.
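For readers unfamiliar with the formulation, the short Python sketch below computes the standard Shannon form of Fitts's index of difficulty, ID = log2(D/W + 1), and a predicted movement time. The regression constants a and b here are illustrative assumptions, not values fit from our studies; in practice they must be regressed from measured aiming data for a specific task, device, and user.

from math import log2

def index_of_difficulty(distance, width):
    # Shannon formulation of Fitts's index of difficulty, in bits.
    return log2(distance / width + 1)

def movement_time(distance, width, a=0.1, b=0.15):
    # a (seconds) and b (seconds per bit) are illustrative constants only.
    return a + b * index_of_difficulty(distance, width)

# The same D/W ratio yields the same nominal difficulty for a 2D point-and-click
# target and an FPS target, even though the perceived tasks differ greatly.
print(movement_time(distance=512, width=32))  # ID = log2(17), about 4.09 bits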

HCI does have a tradition of research supporting mastery, dating back at least to Doug Engelbart, the progenitor of much of the modern computer interface. In his "mother of all demos" in 1968, Engelbart presented interface components such as the chorded keyboard that were designed to support expertise rather than ease of learning [2]. However, likely because contemporary computer interfaces were so hard to learn, research and industry focused on components that were "user friendly," such as mice and menus. For the rest of his life, Engelbart bemoaned the trend, asking, "Would you bring a tricycle to a bicycle race?" [3]. With a few exceptions, this trend in HCI has continued to this day.

In focusing on mastery, Engelbart also advocated for intelligence augmentation (IA) over artificial intelligence (AI) [2]. Mastery implies deep human engagement in a task, while the predominant forms of AI seek to free humans from their tasks. Instead, Engelbart proposed that computers help humans with their tasks—indeed, that they help them achieve mastery. In competitive gaming, the use of AI is nearly always seen as anticompetitive and unfair. Alternatively, IA approaches such as console aim assist have had a warmer reception.

Current Trends

The recent explosion of esports popularity and prize pools has created a renaissance of support for mastery, including a focus on higher-level, game-specific tasks, computer training aids, and improved interfaces. Many expert gamers are already performing pseudoscientific studies on themselves in an attempt to gain a novel edge in performance. While the drawbacks of this self-experimentation are relatively minor when compared with those of traditional sports, where players risk health impacts or injury, such studies could be more controlled and productive, with results disseminated widely. The broader community has demonstrated not just an interest in but also a hunger for better-controlled scientific study of optimizing performance with their specific interfaces. This interest comes in part from the competitive nature of esports, where benefits that might be unimportant to more typical computer users can be the difference between winning and losing. Esports tasks are designed to differentiate skill levels, amplifying small differences near the limit of human mechanical and cognitive mastery (e.g., a powerful or long-range weapon penalizing missed shots by enforcing a long inter-shot period). By amplifying small performance differences deep into the learning curve, esports motivates users to chase every possible performance improvement. Figure 2 provides a simplified demonstration of how the logarithmic nature of the skill-time curve can be pushed toward linear reward over time using an exponential reward-skill curve.

Figure 2. A metaphorical illustration of how esports game design tends to amplify skill differences, shaping the reward-skill curve to counteract the logarithmic nature of the skill-time (learning) curve for most applications, creating a more linear relationship in the reward-time space and driving continued improvement in user interaction.
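To make the composition in Figure 2 concrete, the sketch below pairs an idealized logarithmic learning curve with an exponential reward curve chosen to invert it. Both curves, and the constant k, are illustrative assumptions rather than measured data.

import math

def skill_from_time(t, k=1.0):
    # Idealized logarithmic learning curve: skill grows quickly at first,
    # then saturates with additional practice time t.
    return k * math.log(1 + t)

def reward_from_skill(s, k=1.0):
    # Exponential reward shaping: small skill gains deep in the learning
    # curve translate into large rank or rating differences.
    return math.exp(s / k) - 1

for hours in (10, 100, 1000):
    # The composition is linear in time: reward is approximately equal to hours.
    print(hours, round(reward_from_skill(skill_from_time(hours)), 1))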

Higher-level tasks are now an important part of competitive esports training. While initially developed only to build low-level aiming skills, aim-trainer software such as KovaaK's and Aim Lab now supports training under conditions more closely resembling particular games. Competitive gamers regularly scrimmage (or "scrim") to hone their skills, particularly to practice their role on the team. Examples from MOBAs (multiplayer online battle arenas) include team coordination (verbal and otherwise); team composition, in which players may take on different roles; focusing damage on particular opponents; and actions that impair enemies' vision or movement. In games with in-game economies, such as Counter-Strike, StarCraft, League of Legends, Dota 2, and Valorant, players may practice while better or more poorly equipped, based on economic conditions. Similarly, scrimmaging provides opportunities to train for different placements of players around maps and for unique strategic challenges. Even in competitive games played individually rather than in teams, such as StarCraft, pros often play countless hours of solo matches to develop different "ramp" strategies for quickly building up resources and units to attack other players, exhaustively searching for slightly better ways to save time in building up their in-game advantage. The practice of "theorycrafting" has also become popular, wherein players create spreadsheets and run Monte Carlo simulations of various potential encounters, particularly to determine which high-level strategies will give them an advantage.
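As a toy illustration of the theorycrafting idea (the loadouts and statistics below are invented for this sketch, not drawn from any real title), a player might compare two hypothetical weapon builds by simulating many encounters:

import random

def simulate_encounter(damage_per_shot, hit_chance, shots):
    # One simulated encounter: total damage landed across a fixed number of
    # shots, each hitting independently with probability hit_chance.
    return sum(damage_per_shot for _ in range(shots) if random.random() < hit_chance)

def mean_damage(build, trials=100_000):
    # Monte Carlo estimate of expected damage for a given build.
    return sum(simulate_encounter(**build) for _ in range(trials)) / trials

# Invented loadouts for illustration only.
accurate = {"damage_per_shot": 30, "hit_chance": 0.75, "shots": 10}
heavy = {"damage_per_shot": 55, "hit_chance": 0.45, "shots": 10}
print(mean_damage(accurate), mean_damage(heavy))  # roughly 225 vs. 247.5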

New esports training tools are emerging that are tightly integrated with games themselves. Several games offer detailed match-replay files and/or short video recaps ("killcams") following defeat, allowing players to analyze their performance to improve actions in the next encounter. Platforms like Mobalytics (https://mobalytics.gg/) offer users higher-level insights into their performance by longitudinally collecting data from game servers, tracking these statistics over time as a player improves. Many other Web tools provide matchup analytics for character picks and allow players to practice the character-drafting metagame.

The quest for mastery has also driven demand for innovation in esports interfaces, without fundamentally changing interaction paradigms. For example, vendors have begun supporting the customization of mouse weight and shape, and have also increased mouse-reporting rates up to 8 kHz. New displays can refresh at 360 Hz or even 500 Hz. The performance-oriented technical specifications of such input and output devices often meet or exceed those of specialized devices used in human performance research and are well beyond the needs of typical computer users.

In at least one case, interfaces supporting esports mastery have diverged qualitatively from interfaces supporting productivity, rather than just exceeding their needs. HCI research has long observed that pointer acceleration improves task-completion time and throughput [4]. (Pointer acceleration, or "nonlinear CD gain," makes the cursor move farther when the mouse covers the same distance more quickly.) Yet in FPS games, top players almost always turn off pointer acceleration, since it requires them to repeat not just a precise displacement but also a particular velocity profile to perform a given action again. Even more interestingly, some experimentally minded FPS gamers are beginning to adopt pointer acceleration, but with speed-to-movement transforms that are quite different from the defaults adopted by Windows and other windowing systems.
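To make the distinction concrete, the sketch below implements a simple nonlinear CD-gain transform. The particular speed-to-gain curve and its constants are assumptions chosen for illustration; they do not reproduce the Windows pointer ballistics or any specific gaming driver.

def cd_gain(speed_counts_per_s, base_gain=1.0, accel=0.005, cap=3.0):
    # Gain rises with mouse speed, up to a cap (a purely illustrative curve).
    return min(base_gain + accel * speed_counts_per_s, cap)

def on_screen_delta(mouse_delta_counts, speed_counts_per_s):
    # With acceleration, the same physical displacement maps to different
    # on-screen displacements depending on how quickly it was performed,
    # which is why many FPS players disable it and train with constant gain.
    return mouse_delta_counts * cd_gain(speed_counts_per_s)

print(on_screen_delta(100, speed_counts_per_s=50))   # slow swipe: 125.0
print(on_screen_delta(100, speed_counts_per_s=400))  # fast swipe: 300.0 (gain capped)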


In traditional sports science, only elite athletes have detailed performance data tracked over time. In esports, every player does.


With all of this support for expertise, esports presents a unique opportunity to study the acquisition of task and interface mastery more generally [5]. First, the data and training context are unparalleled. Other computer applications rarely integrate training for higher-level tasks, and largely ignore usage histories. Conventional sports science does an admirable job of gathering many statistics on and off the field, particularly with modern tracking technologies, but pales in comparison to the granularity and pervasiveness of esports data capture. Rich analysis tools, paired with the fine-grained, longitudinal data that precise, memory-abundant computer systems capture during competition, make up the unique value proposition of esports. Second, the range of user skill and the variety of tasks are exceptional. The skill level of the esports user base ranges from novice computer users to professional gamers who have honed their skills for years. In traditional sports science, only elite athletes have detailed performance data tracked over time. In esports, every player does. Additionally, esports tasks are often broad and deliberately unconstrained, so data from just one title can often provide insight into a wide range of interactions.

Future Directions

We predict that more holistic, application-specific performance models will become increasingly relevant in HCI, particularly in highly demanding, competitive, mastery-oriented settings where professional coaching or individualized advice is becoming the status quo. Rather than reject this more complete but specific knowledge as lacking broader impact or unlikely to generalize, we advocate studying the techniques and tools esports athletes use to attain mastery, so that we may begin helping a broader set of users develop expertise with their interfaces.

Improved tools can accelerate the development of interface mastery, both in esports and in other domains. For example, a standard for describing real, in-game situations, sketched below, would enable esports athletes to drill specific mechanics in which they need more expertise, and might allow them to compare their actions to those of experts. Alternatively, in some cases, noninteractive simulation might improve performance, helping find near-optimal choices in a broad user decision tree. Finally, while usage trace data is often collected today, more could be done to instrument the user. Such human-centric data might help us better understand how people react to challenges and successes in real time, unlocking the next level of performance through optimization on the human side of the human-computer interaction loop. To accurately associate human reactions with digital events, computer and human trace data must then be synchronized, a nontrivial task in its own right.
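The sketch below shows one possible form such a situation description could take. The schema, field names, and values are hypothetical; no cross-title standard exists today.

from dataclasses import dataclass, field

@dataclass
class DrillScenario:
    # Hypothetical schema for a shareable, replayable in-game situation.
    title: str
    map_name: str
    player_state: dict                              # position, view direction, loadout
    opponents: list = field(default_factory=list)   # scripted opponent behaviors
    success_metric: str = "time_to_elimination"

# An invented example for drilling a specific close-range tracking mechanic.
scenario = DrillScenario(
    title="close-range tracking drill",
    map_name="warehouse",
    player_state={"pos": (12.0, 3.5, 0.0), "yaw_deg": 90.0, "weapon": "rifle"},
    opponents=[{"path": "strafe_left_right", "speed_m_per_s": 4.5}],
)
print(scenario.success_metric)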

Current performance-enhancing improvements to the esports interface will continue, and they should have value outside of esports. Mice and displays will continue to update more frequently and with reduced delays. Researchers will attempt to learn which characteristics of improved input and output devices are most beneficial, and which displayed information enhances performance most. Like many modern workers, esports athletes often work in teams, requiring rapid and effective communication and creating demand for better technologies that support collaboration.

As an aside, many have argued that augmented and virtual reality will be an important part of competitive gameplay's future. Yet AR/VR is certainly not considered mainstream in today's esports community. One possible reason is that AR/VR interfaces and interactions do not yet support the long-tailed development of mastery the way mice, keyboards, and game controllers do. To change this, AR/VR interfaces must support prolonged use (i.e., over multiple continuous hours), in particular by reducing nausea. When these interfaces eventually do improve their support for the development of expertise, the physicality and "naturalness" of their interactions may make them less interesting to HCI researchers studying mastery of more conventional interfaces: AR/VR experiences typically strive to reproduce real-world interfaces (as a component of "presence") and, in the limit, cease to behave any differently from them.

Which non-esports applications might benefit from new understanding, training tools, and interfaces supporting expertise? Mastery is most valued in tasks that:

  • Have a long learning curve. Simpler tasks are easier to master, giving such mastery less value. This implies that valuable mastery requires a complex task.
  • Reward continued learning. Skill acquired deep in the learning curve has little value if it is not rewarded. High-skill rewards drive growth toward mastery.
  • Create an intrinsic desire to improve. Without this motivation, skill tends to plateau once extrinsic motivators are sated. In the late stages of mastery, intrinsic value becomes necessary.

We believe that digital applications such as art, coding, computer-aided design, and media/content creation all have these characteristics and could benefit by borrowing techniques supporting mastery from esports. Other applications might include teleoperation and collaboration. On the other hand, typical productivity applications such as word processing and spreadsheets rarely require or reward continued learning and development years into their use. As other applications encourage mastery with lessons from esports, they should bear in mind that competition is central to esports. Without it, driving expertise requires other extrinsic rewards for long-term learning, or strong intrinsic motivation. Yet the lack of competition may also be helpful: Competition defines not only winners but also losers, and losing can discourage mastery.

With this caveat in mind, we advocate study of the often game- and task-specific techniques used by professional gamers to acquire expertise, as an invaluable means both of understanding interface mastery more broadly and of building a brighter future for high-skill HCI.

References

1. Chase, W.G. and Simon, H.A. The mind's eye in chess. In Visual Information Processing. Academic Press, 1973, 215–281.

2. Engelbart, D.C. and English, W.K. A research center for augmenting human intellect. Proc. of the December 9–11, 1968, Fall Joint Computer Conference, Part I. ACM, New York, 1968, 395–410.

3. Engelbart, D. The augmented knowledge workshop. In A History of Personal Workstations. A. Goldberg, ed. Addison-Wesley, 2010, 185–248.

4. Casiez, G., Vogel, D., Balakrishnan, R., and Cockburn, A. The impact of control-display gain on user performance in pointing tasks. Human–Computer Interaction 23, 3 (2008), 215–250; https://doi.org/10.1080/07370020802278163

5. Campbell, M.J., Toth, A.J., Moran, A.P., Kowal, M., and Exton, C. eSports: A new window on neurocognitive expertise? Progress in Brain Research 240 (2018), 161–174; https://doi.org/10.1016/bs.pbr.2018.09.006

Authors

Ben Boudaoud is a research engineer at Nvidia working at the interface of human interaction and practical system design. His research interests include energy-efficient embedded systems, novel interaction modalities, and high-performance HCI. [email protected]

Josef Spjut is a research scientist at Nvidia working on esports, graphics, and player performance. His research contributed to the RT Core hardware in Turing and newer graphics processing units and the Nvidia Reflex esports platform. A lifelong gamer, he is most excited by how game interfaces connect people to one another. [email protected]

Joohwan Kim is a research manager leading Nvidia's Human Performance and Experience research group based in Santa Clara, California. His current interests are in understanding and improving viewer experience of various types of displays, especially regarding esports. [email protected]

Arjun Madhusudan is a Ph.D. student at North Carolina State University working on esports and player performance studies. He is actively involved in esports experience studies with Nvidia. He also received his M.S. from NC State. [email protected]

Benjamin Watson is an associate professor of computer science at North Carolina State University. His interdisciplinary Visual Experience Lab focuses on the engineering of visual meaning, working in the fields of graphics, visualization, interaction, and user experience. Watson cochaired the ACM Interactive 3D Graphics and Games (I3D) 2006 conference. [email protected]


Copyright 2022 held by owners/authors.

