Jules Françoise, Norbert Schnell, Riccardo Borghesi, Frédéric Bevilacqua
Describe what you made. We created two installations built around a single idea: letting people choose their own arm and hand gestures to control sound synthesis. Record your own movements; the system will learn them and map them to sounds. In the first installation, people craft specific gestures to interact with environmental sounds, for example, wind, fire, birds, and water. Similar to a Foley artist, you can “play” sounds, creating a complex sound environment with your hands.
|MO (modular musical interface) inertial sensors.|
In the second installation, you use arm gestures to control vocal sounds. In this case, you record both gestures and vocalization simultaneously. To make it more challenging, we created a two-player game based on action cards, the "imitation game." Each player must follow the suggestions on the cards for both sounds and actions. Once one player records a vocalization and a gesture, the other player must imitate the gesture, which replays the first player's recorded voice.
The idea was to create elements with which people could engage with the systems. Through brainstorming, we arrived at the idea of an imitation game with cards to guide people. We then went through several iterations to select and refine the actions and sounds, balancing ease of execution against challenging, playful cases. The final setup, including motion sensing, fine-tuning of the sound synthesis, and the positions of each physical and graphical interface, was refined through iterative testing with users.
What for you is the most important/interesting thing about what you made? Our goal was to involve people in a creative process of designing gestures to control sound. Usually, designing sonic interactive systems requires programming and expertise—we wanted to make this process playful. First, you move as you feel while listening to a sound, or while vocalizing. Then you’re in control of a palette of sounds that you can manipulate as you wish. The systems draw upon interactive machine learning to learn the associations from user demonstrations. The installation also teaches people about state-of-the-art adaptive gestural interfaces.
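The mapping-by-demonstration idea above can be sketched in miniature. The installation's actual system (built on IRCAM's motion-analysis and synthesis tools) is far more sophisticated; here, as an illustrative assumption, a 1-nearest-neighbor classifier over dynamic time warping (DTW) distance stands in for the learned gesture-to-sound association. The `GestureMapper` class and its method names are hypothetical.

```python
def dtw_distance(a, b):
    """DTW distance between two 1-D motion feature sequences
    (e.g., accelerometer magnitude over time)."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]


class GestureMapper:
    """Hypothetical sketch: associates demonstrated gestures with
    sound labels via 1-nearest-neighbor matching over DTW distance."""

    def __init__(self):
        self.templates = []  # list of (gesture sequence, sound label)

    def record(self, gesture, sound):
        """Demonstration phase: store a gesture paired with a sound."""
        self.templates.append((gesture, sound))

    def play(self, gesture):
        """Performance phase: return the sound label whose demonstrated
        gesture is closest to the incoming one."""
        return min(self.templates,
                   key=lambda t: dtw_distance(gesture, t[0]))[1]


# Usage: two demonstrated gestures, then a noisy imitation of the first.
mapper = GestureMapper()
mapper.record([0.0, 0.5, 1.0, 0.5, 0.0], "wind")   # a slow wave
mapper.record([0.0, 1.0, 0.0, 1.0, 0.0], "fire")   # rapid shaking
print(mapper.play([0.1, 0.6, 0.9, 0.4, 0.1]))      # → wind
```

In the real installations, the mapping is continuous — the sound responds to the gesture's dynamics as it unfolds — rather than a discrete label lookup, but the demonstrate-then-perform loop is the same.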
|Experimenting with the motion sensors.|
Was this a collaborative process, and if so, who was involved? The design of the installations involved several people with complementary expertise in hardware and software development, sound design, and sonic interaction design. The software building blocks of the systems—motion analysis, sound analysis, and synthesis tools—originate from several years of research and practice at Ircam. The wireless motion sensors were designed by our colleague Emmanuel Fléty for interactive music systems, meeting requirements of high accuracy, low latency, and compactness. Some of the recorded sound material was designed by Roland Cahen, originally for another installation called DIRTI, the Dirty Tangible Interface project by User Studio, to which our colleague Diemo Schwarz also contributed. For the imitation game, we developed a set of action cards associating the actions and sounds that the players had to imitate. We experimented within the team to select the set of action cards that elicited the most interesting gestures and vocal imitations; the final set was designed by Riccardo Borghesi.
|The imitation game.|
|Action cards for inspiring vocal imitations and gestures.|
What was the biggest surprise in making this? It is striking to observe how people can learn very quickly with the help of interactive sound. Imitating someone else’s gestures with accuracy is difficult, especially when the task is to reproduce the precise dynamics of the movement. Interactive sonification provides rich feedback that complements the visual modality in kinesthetic empathy. We believe that continuous sonification can help improve movement accuracy and consistency, which creates possibilities for novel applications in movement learning such as sport and dance pedagogy, and rehabilitation.
|Testing the final setup.|
Jules Françoise, IRCAM
Norbert Schnell, IRCAM
Riccardo Borghesi, IRCAM
Frédéric Bevilacqua, IRCAM
Copyright held by authors
The Digital Library is published by the Association for Computing Machinery. Copyright © 2015 ACM, Inc.