XXV.5 September-October 2018
Page: 16

How was it made? Project Telepathy

Anne-Claire Bourland, Peter Gorman, Jess McIntosh, Asier Marzo


Describe what you made. Project Telepathy is a blue-sky project aimed at overcoming some of the limitations of human speech. Speech is a broadcast process, has limited reach, and does not work well underwater. We focused on overcoming the broadcast issue by developing a wearable system capable of silently detecting a message that the wearer wants to transmit and then delivering it to a specific individual. The system measures the bioelectric signals produced by facial muscles when words are silently mouthed. To transmit the message, we used a directional speaker made of dozens of tiny ultrasonic emitters to generate audible sound within a very narrow beam.

Figure: Scanning the acoustic field generated by an array, using a microphone mounted on a modified 3D printer.

Briefly describe the process of how this was made. We divided Project Telepathy into two challenges: first, silently identifying what the user is saying; second, transmitting this message to a single individual. We then looked for technologies suitable for each part.

We constructed and evaluated the two subsystems of Project Telepathy separately. The word-recognition system was evaluated in a user study with six people to determine its accuracy in discriminating between 10 words, both normally pronounced and silently mouthed. We evaluated the directional speaker by scanning the field with a microphone mounted on a stage and validating it against the simulated directional profile. Finally, we put the two systems together.
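The scan described above can be checked against a simple acoustic simulation. The sketch below treats each emitter as a monopole point source and sums the complex pressures along a scan line, mimicking the microphone sweep on the printer stage. All parameters (8x8 flat array, 10 mm pitch, 40 kHz) are illustrative assumptions, not the actual hardware's specification:

```python
import numpy as np

# Illustrative parameters: 8x8 flat array, 10 mm pitch, 40 kHz emitters.
FREQ, C = 40_000.0, 343.0
K = 2 * np.pi * FREQ / C            # wavenumber (rad/m)
xs = (np.arange(8) - 3.5) * 0.01
emitters = np.array([(x, y, 0.0) for x in xs for y in xs])

def pressure(points, phases):
    """Complex pressure at each scan point, treating every emitter as a
    monopole point source with the given driving phase."""
    d = np.linalg.norm(points[:, None, :] - emitters[None, :, :], axis=2)
    return np.sum(np.exp(1j * (K * d + phases)) / d, axis=1)

# Sweep a 20-cm-long line 20 cm above the array, like the microphone
# mounted on the modified 3D printer.
scan_x = np.linspace(-0.1, 0.1, 101)
scan = np.array([(x, 0.0, 0.2) for x in scan_x])
amp = np.abs(pressure(scan, np.zeros(len(emitters))))

# With all emitters driven in phase, the beam peaks on axis (index 50).
print(np.argmax(amp))  # → 50
```

Comparing a real microphone scan against a profile simulated this way is a quick sanity check that the array geometry and phase wiring are correct.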

Figure: Driving boards for the directional speaker, which is made of dozens of tiny ultrasonic emitters. Clockwise from left: 64-channel driver based on an Arduino Mega; bare board before the components are soldered on; 256-channel driving board; 2-channel driving board.

Did anything go wrong? In the beginning, we decided to use "fringe" phenomena to detect the user's message. We analyzed the small electrical signals sent to the facial muscles even when words are merely read in the mind (i.e., subvocalization), but our results and previous papers deemed this option infeasible. To transmit the words, we wanted to use the microwave hearing effect, in which a modulated microwave beam can induce auditory sensations. The required power was within the safety guidelines, but we never found any paper suggesting that the effect could induce sounds beyond simple beeps and clicks.

So we opted for the "more common" technology: the wearer would silently mouth the word, and a directional speaker would transmit the message.

Figure: Hardware for capturing the facial electromyographic signals. Gel electrodes capture the signal, which is amplified by a standard breakout board. An Arduino digitizes the signals and sends them to a computer running the machine-learning algorithms.
Figure: Different array geometries for the directional speakers. Clockwise from left: flat, spherical cap, headband.
Figure: Evaluating the prototype. Two users perform different tasks while the wearer communicates messages to either of them individually.
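Once the digitized samples reach the computer, a recognition pipeline typically windows the signal, extracts per-channel features, and feeds them to a classifier. The article does not specify the features or model used, so the sketch below is a hypothetical stand-in: root-mean-square and mean-absolute-value features (both common for surface EMG) with a minimal nearest-centroid classifier:

```python
import numpy as np

def emg_features(window):
    """Per-channel amplitude features from a window of EMG samples.
    Hypothetical choice: root-mean-square and mean absolute value,
    two features commonly used for surface EMG."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    mav = np.mean(np.abs(window), axis=0)
    return np.concatenate([rms, mav])

class NearestCentroid:
    """Minimal classifier standing in for the (unspecified) ML model."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = np.array(
            [X[np.array(y) == lab].mean(axis=0) for lab in self.labels])
        return self

    def predict(self, x):
        return self.labels[
            np.argmin(np.linalg.norm(self.centroids - x, axis=1))]

# Toy demo: two "words" producing different muscle-activity levels on
# four electrode channels (200-sample windows of synthetic noise).
rng = np.random.default_rng(0)
X = np.array([emg_features(rng.normal(0, 0.1, (200, 4))) for _ in range(10)]
             + [emg_features(rng.normal(0, 1.0, (200, 4))) for _ in range(10)])
y = ["hello"] * 10 + ["stop"] * 10
clf = NearestCentroid().fit(X, y)

test_window = rng.normal(0, 1.0, (200, 4))   # a new "loud" window
print(clf.predict(emg_features(test_window)))  # → stop
```

A real system discriminating 10 mouthed words would need richer features and a stronger classifier, but the window-features-classify structure is the same.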

Was there anything new in the making process, materials, or anything else that you can tell us about? I think the hardware and software that we used for detecting words were quite standard in terms of what other research groups have used. However, the idea of employing phased arrays as directional speakers was new back then. Previous directional speakers could steer the sound only by mechanically moving the device. In contrast, our device steers the beam by adjusting the electric signal delivered to each of the tiny individual emitters that form the system; it can focus sound on the target accurately within milliseconds.
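The electronic steering described above amounts to computing one phase delay per emitter so that every wave arrives at the target in phase. A minimal sketch, again under assumed geometry (flat 8x8 array, 10 mm pitch, 40 kHz carrier, all illustrative values rather than the actual hardware):

```python
import numpy as np

# Illustrative geometry, not the actual hardware: a flat 8x8 array with
# 10 mm pitch driven at 40 kHz (a common ultrasonic transducer frequency).
C, FREQ = 343.0, 40_000.0       # speed of sound (m/s), carrier (Hz)
WAVELENGTH = C / FREQ
xs = (np.arange(8) - 3.5) * 0.01
emitters = np.array([(x, y, 0.0) for x in xs for y in xs])

def steering_phases(target):
    """One phase delay per emitter so that all waves arrive at `target`
    in phase: compensate each emitter's path length relative to the
    longest one."""
    d = np.linalg.norm(emitters - np.asarray(target), axis=1)
    return (2 * np.pi * (d.max() - d) / WAVELENGTH) % (2 * np.pi)

def amplitude(point, phases):
    """Acoustic amplitude at `point`, modeling emitters as point sources."""
    d = np.linalg.norm(emitters - np.asarray(point), axis=1)
    return abs(np.sum(np.exp(1j * (2 * np.pi * d / WAVELENGTH + phases)) / d))

# Re-focusing 5 cm off axis needs no moving parts -- only a new phase set.
target = (0.05, 0.0, 0.2)
focused = amplitude(target, steering_phases(target))
unfocused = amplitude(target, np.zeros(len(emitters)))
print(focused > unfocused)  # → True
```

Because the phases are just numbers sent to the driving boards, retargeting is limited only by how fast the drive signals can be updated, which is why the beam can be refocused within milliseconds.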

Figure: An alternative prototype that uses a chest-mounted array and a camera with a laser pointer to mark the target of the directional speaker.

How would you improve on this if you were to make it again? We would focus on the parametric speaker and forget about the word-recognition system. The directional speaker proved to be a reliable and effective piece of equipment, whereas the word-recognition system was cumbersome, not 100 percent reliable, and limited to a dozen words. I am pretty sure we would use something simple such as a small keypad to create the message.

Figure: User wearing the complete system: a phased array on the forehead and electrodes on the face.

Authors

Anne-Claire Bourland, Bristol Interaction Group, University of Bristol

Peter Gorman, Bristol Interaction Group, University of Bristol

Jess McIntosh, Bristol Interaction Group, University of Bristol

Asier Marzo, Bristol Interaction Group, University of Bristol, [email protected]


Copyright held by authors

The Digital Library is published by the Association for Computing Machinery. Copyright © 2018 ACM, Inc.
