XXV.1 January + February 2018
Page: 70

Radar sensing in human-computer interaction

Hui-Shyong Yeo, Aaron Quigley


The exploration of novel sensing technologies to facilitate new interaction modalities remains an active research topic in human-computer interaction. Across the breadth of HCI conferences, we can see the development of new forms of interaction underpinned by the appropriation or adaptation of sensing techniques based on the measurement of sound, light, electric fields, radio waves, biosignals, and so on.

Commercially, we see extensive industrial development of radar sensing in automotive and military settings. Radar is often considered a long-range sensing technology: it works in all weather, offers 3D position information, operates at all times because it does not require lighting, and can penetrate surfaces and objects. At very long range, radar has been used for decades in weather and aircraft tracking. At long, mid, and short range, it has been used for autonomous cruise control (ACC), emergency brake assist (EBA), security scanners, pedestrian detection, and blind-spot detection. At very short range, radar has been employed in disbond detection, corrosion detection, and foam-insulation flaw identification. The research community has also explored radar for various purposes, including presence sensing and indoor user tracking [1], vital-signs monitoring [2], and emotion recognition. At these shorter ranges, radar sidesteps the privacy, occlusion, lighting, and limited field-of-view problems that affect vision-based approaches. It is also useful in medical applications where traditional approaches such as capacitive and galvanic skin response sensing do not work well.

Within the HCI context, Doppler radar was used as early as 1997 in the Magic Carpet [3] for sensing coarse body motion. That system, however, required signal-processing knowledge and custom hardware, presenting a high barrier to entry. The more recent development of low-cost, miniaturized, radar-on-chip devices with developer-friendly SDKs truly opens up the potential of radar sensing in HCI. Such sensing is exemplified by Google Soli [4], which tracks micro gestures (such as a finger wiggle, hand tilt, check mark, or thumb slide) and has led to increased interest in radar for gesture detection.

Project Soli

Google Soli significantly lowers the barrier to entry for the HCI community by providing a plug-and-play SDK with gesture support and software examples. Radar nevertheless presents a number of challenges and opportunities, which this article introduces as a primer for those seeking to explore radar for interaction, tangible computing, gestures, or sensor fusion.

Soli is a new gesture-sensing technology for HCI with many potential use cases. Compared with capacitive or vision-based sensing, it aims to overcome problems with occlusion, lighting, and embedded sensing, and to support 3D, distance, and micro motions for novel forms of interaction. Soli combines hardware architecture, signal processing, software abstractions, a UX paradigm, and gesture recognition, from embedded hardware through to the final product.

Soli technology is hardware agnostic, meaning the sensing approach can work with different radar chips. In fact, the team has developed two fully integrated radar chips (Figure 1): a frequency-modulated continuous wave (FMCW) SiGe chip and a direct-sequence spread spectrum (DSSS) CMOS chip. There are four receive (Rx) and two transmit (Tx) antennas. The Rx antenna spacing is designed for optimal beam forming, while the Rx/Tx spacing is designed for isolation. The original radar prototype was a custom 57–64 GHz radar with multiple narrow-beam horn antennas. In the 60 GHz band, the FCC limits the bandwidth to 7 GHz (40 to 82 dBm EIRP), which results in a range resolution of ~2 cm, less than the Microsoft Kinect's sensor resolution. Today, with a 60 GHz center frequency and 5 mm wavelength, the Soli radar has a 0.05–15 m range with a 180-degree field of view. The alpha developer kit (Figure 2) uses the FMCW version with an integrated development board that allows USB connection to a host computer.
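The ~2 cm figure follows from the standard FMCW range-resolution relation, ΔR = c/2B, where B is the swept bandwidth. A quick sanity check in Python (a generic formula, not Soli-specific code):

```python
# Range resolution of an FMCW radar is set by its bandwidth: dR = c / (2 * B).
# With the 7 GHz available in the 60 GHz band, this gives roughly 2.1 cm.

C = 3.0e8  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Smallest range difference the radar can resolve, in meters."""
    return C / (2.0 * bandwidth_hz)

print(round(range_resolution(7e9) * 100, 1))  # resolution in cm -> 2.1
```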


Figure 1. Soli radar chips with antennas-in-package. Top: 12 x 12 mm FMCW SiGe chip. Bottom: 9 x 9 mm DSSS CMOS chip. (Figure extracted from the Soli paper [4], used with permission.)
Figure 2. Exploded view of the Soli alpha hardware (not to scale; image adapted from the Soli alpha SDK). In RadarCat [7], an object is placed on top of the sensor, where raw radar signals are extracted and classified using machine-learning techniques.

Radar Signal Processing

The Google Soli team's novel paradigm for radar sensing is based on signal processing from a broad antenna beam, which delivers an extremely high temporal resolution instead of focusing on high spatial resolution. As a result, Soli can track sub-millimeter motion at high speeds with great accuracy. By illuminating the entire hand in a single wide beam (Figure 3), the Soli can measure the superposition of reflections from multiple dynamic scattering centers (e.g., arches, fingertips, and finger bends) across the human hand. One radar signal provides information about the instantaneous scattering range and reflectivity of the various centers, while taking this over time with multiple repetition intervals affords information about the dynamics of the centers' movements. An analysis of the instantaneous scattering results in characteristics that can be used to describe the pose and orientation of the hand, while dynamic movements and characteristics can be used to estimate hand gestures. Published studies have explored random forest machine-learning classifiers for gesture recognition [4] along with deep convolutional and recurrent neural networks [5]. This research and development allow the Google Soli team to propose a ubiquitous gesture-interaction language that can allow people to control any device with a simple yet universal set of in-air hand gestures.
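The range-Doppler representation at the heart of this kind of processing can be sketched with a pair of FFTs: one across fast-time samples (yielding range) and one across the slow-time chirp axis (yielding Doppler). The code below is a minimal, generic FMCW illustration on synthetic data, not the Soli implementation; all names are ours.

```python
import numpy as np

def range_doppler_map(frames: np.ndarray) -> np.ndarray:
    """frames: (num_chirps, samples_per_chirp) array of beat-signal samples."""
    range_fft = np.fft.fft(frames, axis=1)                        # fast time -> range bins
    rd = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)   # slow time -> Doppler bins
    return np.abs(rd)

# Synthetic target: one beat frequency (its range) plus a small per-chirp
# phase shift (its motion toward the sensor).
chirps, samples = 64, 128
t = np.arange(samples) / samples
frames = np.array([np.cos(2 * np.pi * 10 * t + 0.3 * k) for k in range(chirps)])

rd = range_doppler_map(frames)
print(rd.shape)  # one map per frame: (Doppler bins, range bins) -> (64, 128)
```

The strongest response lands in range bin 10 (and its mirror), matching the injected beat frequency; tracking how such peaks move over successive maps is what reveals gesture dynamics.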

ins05.gif Figure 3. Soli is designed end-to-end for ubiquitous and intuitive fine-gesture interaction. (Figure extracted from the Soli paper [4], used with permission.)

Basic radar sensing systems for long- and short-range use typically provide access to uncompressed raw data from the various channels. More complete radar systems, however, combine hardware and software pre-processing, including noise subtraction, error correction, and filtering, while further signal processing affords developers richer views of the radar signal, including image formation, clutter removal, range-Doppler maps, elevation estimation, object tracking, and target detection. Radar, as we have noted, is not a new technology; its use at varying ranges means there is an abundance of techniques, methods, and approaches that researchers in HCI might learn from and adapt.
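As one concrete example of such pre-processing, static-clutter removal is often approximated by subtracting each range bin's mean across sweeps, which suppresses stationary reflections (walls, furniture) while preserving moving scatterers. A minimal sketch, not any particular vendor's implementation:

```python
import numpy as np

def remove_static_clutter(frames: np.ndarray) -> np.ndarray:
    """frames: (num_sweeps, num_range_bins); returns the motion component only."""
    return frames - frames.mean(axis=0, keepdims=True)

static = np.ones((100, 32)) * 5.0            # a wall: identical in every sweep
motion = np.zeros((100, 32))
motion[:, 7] = np.sin(np.arange(100) * 0.2)  # a moving target in range bin 7

cleaned = remove_static_clutter(static + motion)
# Static bins are driven to ~zero; the oscillation in bin 7 survives.
```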

However, when considering the Soli, it is important to understand the Soli Processing Pipeline (SPP), from hardware to software to application. There is a further distinction between the hardware-specific transmitter, receiver, and analog signal preprocessing, and digital-signal preprocessing in software, which is also hardware specific. Hardware-agnostic signal transformations include range-Doppler, range profile, Doppler profile (micro-Doppler), and spectrogram, followed by feature extraction and gesture recognition, which can be provided to an application. The Soli SDK enables developers to easily access and build upon the gesture-recognition pipeline. The Soli libraries extract real-time signals from radar hardware, outputting signal transformations, high-precision position and motion data, and gesture labels and parameters at frame rates from 100 to 10,000 frames per second.
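The hardware-agnostic transformation and feature-extraction stages can be sketched as plain functions feeding one another. The toy features below (total energy plus Doppler and range centroids) are our own illustrative stand-ins, not the Soli SDK's actual feature set:

```python
import numpy as np

def doppler_profile(range_doppler: np.ndarray) -> np.ndarray:
    """Collapse a range-Doppler map to a Doppler profile (sum over range)."""
    return range_doppler.sum(axis=1)

def extract_features(range_doppler: np.ndarray) -> np.ndarray:
    """Toy per-frame features: total energy, Doppler centroid, range centroid."""
    energy = range_doppler.sum()
    dop = doppler_profile(range_doppler)
    rng = range_doppler.sum(axis=0)
    d_centroid = (np.arange(dop.size) * dop).sum() / dop.sum()
    r_centroid = (np.arange(rng.size) * rng).sum() / rng.sum()
    return np.array([energy, d_centroid, r_centroid])

# A single scattering center at Doppler bin 4, range bin 10.
frame = np.zeros((16, 32))
frame[4, 10] = 1.0
features = extract_features(frame)
print(features)  # -> [ 1.  4. 10.]
```

A gesture recognizer (random forest or recurrent network, as in the cited work) would consume sequences of such feature vectors at the pipeline's high frame rates.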

Radar Application Areas

Currently, the main application of Soli is centered around close-range sensing of fine and fluid gestures [4,5]. In addition, the team also suggested potential use cases in wearables, mobile, VR/AR systems, smart appliances, and IoT, as well as in scanning and imaging, wayfinding, accessibility, security, and spectroscopy. In research work, concrete examples developed by the alpha developers and shown at Google I/O 2016 [6] include material and object recognition [7], 3D imaging, predictive drawing, in-car gestures, gesture unlock, visualization, and musical applications. More recent work also demonstrated interesting use cases, such as the world's smallest violin [8], a freehand keyboard [9], biometric user identification, fluid/powder identification, and glucose monitoring [10].


The small, low-power, single-package Google Soli radar chip can be embedded in myriad consumer devices, from smartwatches to home-control stations. As a technology that can be embedded beneath surfaces and does not require light, it can be incorporated into wearables as well as objects such as cars or IoT devices, where an exposed sensor would not be permissible. Such sensing can afford a paradigm shift in how we interact, touchlessly, with computation embedded in any device. Examples of virtual tools, smartwatch interaction, loudspeaker interaction, and musical applications have also been demonstrated.

RadarCat [7] is a small, versatile system for material and object classification that enables new forms of everyday proximate interaction with digital devices. RadarCat exploits the unique raw radar signals that occur when different materials and objects are placed on the sensor. By using machine-learning techniques, these objects can be accurately recognized. The system can also recognize an object's thickness or state (filled or empty mug), as well as different body parts. This gives rise to research and applications in context-aware computing (Figure 4), tangible interaction (with tokens and objects), industrial automation (e.g., recycling, Figure 5), and laboratory process control (e.g., traceability). Radar-on-chip systems have also been demonstrated in presence sensing and breath monitoring.
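The classification step can be illustrated in miniature. RadarCat trains standard classifiers (random forests, among others) on per-object radar signatures; below, a nearest-centroid stand-in on synthetic signatures shows the train-then-classify shape of the approach. The make_signature function and the material list are invented purely for illustration; real raw radar vectors would take their place.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_signature(material_id: int, n: int = 64) -> np.ndarray:
    """Hypothetical per-material radar signature plus measurement noise."""
    base = np.sin(np.arange(n) * (material_id + 1) * 0.1)
    return base + rng.normal(0, 0.05, n)

# Train: average several noisy signatures per material into a centroid.
materials = ["aluminium", "glass", "wood"]
centroids = {m: np.mean([make_signature(i) for _ in range(10)], axis=0)
             for i, m in enumerate(materials)}

def classify(signal: np.ndarray) -> str:
    """Label a new signature by its nearest training centroid."""
    return min(centroids, key=lambda m: np.linalg.norm(signal - centroids[m]))

print(classify(make_signature(1)))  # prints: glass
```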

ins06.gif Figure 4. Four example applications to demonstrate the interaction possibilities of RadarCat, clockwise from top left: physical object dictionary; tangible painting app; context-aware interaction and body shortcuts; and automatic refill.
ins07.gif Figure 5. Three potential future applications of RadarCat, from left to right: automatic waste sorting in a recycling center; assisting the visually impaired; and automatic check-out machine.

What's Down the Road?

We were selected to participate in the Google Soli Alpha Developers program, for which we thank the Google ATAP Soli team. This gave us access at the time to the raw radar signal, and with it the opportunity to explore radar sensing and machine learning for object and material classification in novel ways. The program demonstrates what is possible for the HCI community when it considers radar-on-chip or higher levels of processed data, such as the Google Soli gestures. We suggest that avenues of research in healthcare, on- and around-body interaction, and object interaction are now opening up. Challenges with radar remain, including energy consumption, processing noisy signals, and achieving high recognition rates. The implication is that HCI researchers can use this new sensing technology to achieve more. Although radar is not a Swiss army knife solution to every problem, with sensor fusion it can form part of a rich sensing space for new forms of interaction.

Availability of the Soli developer kit is currently limited, but we hope and believe it will soon reach more developers and researchers. Indeed, at Google I/O last year, the Soli team announced a newer beta developer kit with a built-in processing unit and swappable modules. In the meantime, for $2 or less, readers can buy an alternative radar sensor module such as the HB-100 (10 GHz) or the RCWL-0516 (a good comparison video can be found on YouTube [11]). Alternative plug-and-play portable radars include those from Walabot or XeThru. High-end custom radar modules also exist but are beyond the scope of this article. Ultimately, it is the advent of small, low-cost radar units that opens up new avenues of research and exploration in HCI, bringing radar sensing into the fabric of our lives.

Acknowledgments

Thank you to the Soli team for the feedback and corrections during the drafting of this article.

References

1. Bahl, P. and Padmanabhan, V.N. RADAR: An in-building RF-based user location and tracking system. Proc. of IEEE INFOCOM 2000. IEEE, 2000, 775–784, vol. 2.

2. Adib, F., Mao, H., Kabelac, Z., Katabi, D., and Miller, R.C. Smart homes that monitor breathing and heart rate. Proc. of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, New York, 2015, 837–846.

3. Paradiso, J., Abler, C., Hsiao, K.-Y., and Reynolds, M. The Magic Carpet: Physical sensing for immersive environments. CHI '97 Extended Abstracts on Human Factors in Computing Systems. ACM, New York, 1997, 277–278.

4. Lien, J., Gillian, N., Karagozler, M.E., Amihood, P., Schwesig, C., Olson, E., Raja, H., and Poupyrev, I. Soli: Ubiquitous gesture sensing with millimeter wave radar. ACM Trans. Graph. 35, 4 (July 2016), Article 142.

5. Wang, S., Song, J., Lien, J., Poupyrev, I., and Hilliges, O. Interacting with Soli: Exploring fine-grained dynamic gesture recognition in the radio-frequency spectrum. Proc. of the 29th Annual Symposium on User Interface Software and Technology. ACM, New York, 2016, 851–860.

6. Bridging the physical and digital. Imagine the possibilities. ATAP at Google I/O 2016.

7. Yeo, H.-S., Flamich, G., Schrempf, P., Harris-Birtill, D., and Quigley, A. RadarCat: Radar categorization for input and interaction. Proc. of the 29th Annual Symposium on User Interface Software and Technology. ACM, New York, 2016, 833–841.

8. Project Soli: World's Tiniest Violin.

9. SoliType.

10. Mobile HCI 2017: Workshop on Object Recognition for Input and Mobile Interaction.

11. Radar Sensors/Switches: Comparison and Tests.

Authors

Hui-Shyong Yeo is a third-year Ph.D. student at the University of St Andrews. His research focuses on exploring and developing novel interaction techniques, including for mobile and wearable systems, AR/VR, text entry, and pen interaction, with a focus on single-handed interaction.

Aaron Quigley is the director of SACHI, the St Andrews computer-human interaction research group. His research interests include surface and multi-display computing, human-computer interaction, pervasive and ubiquitous computing, and information visualization.


©2018 ACM  1072-5520/18/01  $15.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2018 ACM, Inc.
