DEMO HOUR: Project Telepathy
Issue: XXV.4 July–August 2018
Anne-Claire Bourland, Peter Gorman, Jess McIntosh, Asier Marzo
Speech is our natural way of communicating. However, it is limited by being a broadcast process: Everyone within a certain area can hear someone else's speech. But what if we would like to target our speech toward only one person? In this demo, we combine two technologies to realize targeted communication, with potential applications in task coordination and private conversations. For detecting words, we measure the bioelectric signals produced by facial muscles: Four surface electrodes provide 80-percent accuracy in discriminating between 10 silently mouthed words. For transmitting the message, we use a phased array of ultrasonic emitters, which produces a highly directional sound beam that can be steered electronically, without physically moving the array.
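The paper describes classifying 10 mouthed words from four electrode channels but does not spell out the pipeline here. As a rough illustration of how such a classifier could work, the sketch below computes one root-mean-square (RMS) amplitude feature per EMG channel and assigns the word whose stored feature centroid is nearest; the feature choice and the nearest-centroid rule are our assumptions for illustration, not the authors' published method.

```python
import math

def rms(window):
    """Root-mean-square amplitude of one EMG channel window (a list of samples)."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def features(channels):
    """One RMS feature per electrode channel (four channels in this demo)."""
    return [rms(w) for w in channels]

def nearest_centroid(feat, centroids):
    """Return the word whose stored centroid is closest to `feat` in feature space.

    `centroids` maps each vocabulary word to a feature vector learned from
    training examples (hypothetical training step, not shown here).
    """
    return min(centroids, key=lambda word: math.dist(feat, centroids[word]))
```

For example, with two channels and two trained words, `nearest_centroid(features([[0.9, 0.9], [0.1, 0.1]]), {"yes": [1.0, 0.0], "no": [0.0, 1.0]})` would pick `"yes"`, since the first channel dominates.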
Bourland, A.C., Gorman, P., McIntosh, J., and Marzo, A. Project telepathy: Targeted verbal communication using 3D beamforming speakers and facial electromyography. Proc. of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, 2017, 1508–1515.
Anne-Claire Bourland, Peter Gorman, Jess McIntosh, and Asier Marzo, University of Bristol
Surface electrodes capture the electric signals produced by the muscles when the user silently mouths a word. A processing unit detects which word was mouthed and reproduces it using the phased array of ultrasonic emitters on the forehead.

Left: The user's head is broadly pointing to the target (green dot). Right: By electronically steering the beam, the phased array can accurately direct the sound toward the target.
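The electronic steering mentioned above is the standard phased-array principle: each emitter is driven with a phase offset chosen so that all wavefronts arrive in phase at the target point, creating a focus there. The sketch below computes those per-emitter phases; the 40 kHz operating frequency is an assumption (typical for off-the-shelf ultrasonic transducers), not a figure from the paper.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
FREQUENCY = 40_000.0    # Hz; assumed typical ultrasonic transducer frequency

def steering_phases(emitter_positions, target, f=FREQUENCY, c=SPEED_OF_SOUND):
    """Phase (radians) to apply to each emitter so its wave arrives in phase
    with every other emitter's wave at `target`.

    `emitter_positions` is a list of (x, y, z) points in meters; `target` is
    the (x, y, z) focal point. Changing `target` re-steers the beam with no
    mechanical movement.
    """
    wavelength = c / f
    phases = []
    for pos in emitter_positions:
        distance = math.dist(pos, target)
        # Advance each emitter by its travel phase so contributions align at the focus.
        phases.append((2 * math.pi * distance / wavelength) % (2 * math.pi))
    return phases
```

For a target directly broadside to a symmetric array, every emitter is equidistant from the focus, so all phases come out equal; moving the target off-axis introduces the phase gradient that tilts the beam.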