Demo Hour

XXV.4 July - August 2018
Page: 8

Authors:
Xuan Luo, Jason Lawrence, Steven Seitz, Anne-Claire Bourland, Peter Gorman, Jess McIntosh, Asier Marzo, Cheng-Te Chi, Ian Gonsher, Steve Kim, McKenna Cisler, Jonathan Lister, Benjamin Navetta, Peter Haas, Ethan Mok, Horatio Han, Beth Phillips, Maartje de Graaf

1. Pepper’s Cone

Pepper’s Cone is a simple 3D display that can be built from a tablet computer and a plastic sheet folded into a cone. By rotating the tablet about the y-axis, a user can view 3D objects naturally over 360 degrees without special glasses. The transparent conical surface reflects the image displayed on the 2D screen. The displayed image is pre-distorted, so its reflection appears perspective-correct and suspended inside the reflector. Using the tablet’s integrated gyroscope, the system adjusts the rendered image to the viewer’s relative orientation as the tablet rotates.
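The gyroscope-driven rendering loop can be sketched in a few lines. This is a minimal illustration, not the authors' code: the function names and the degrees-per-second convention are assumptions. The idea is to dead-reckon the tablet's yaw from gyroscope samples, then counter-rotate the rendered view so the reflected object appears fixed in the world as the tablet turns.

```python
def integrate_yaw(gyro_rates_dps, dt, yaw0=0.0):
    """Dead-reckon the tablet's yaw (degrees) from gyroscope
    angular-rate samples, each in degrees per second, spaced dt apart."""
    yaw = yaw0
    for rate in gyro_rates_dps:
        yaw = (yaw + rate * dt) % 360.0
    return yaw


def render_azimuth(tablet_yaw_deg):
    """Counter-rotate the rendered view so the object reflected in the
    cone appears world-fixed as the tablet rotates about the y-axis."""
    return (-tablet_yaw_deg) % 360.0
```

For example, after a quarter turn of the tablet (yaw 90 degrees), the scene is rendered from azimuth 270 degrees, so the reflected object appears not to have moved.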

Luo, X., Lawrence, J., and Seitz, S.M.
Pepper’s Cone: An inexpensive do-it-yourself 3D display.
Proc. of UIST ’17. ACM, New York, 2017, 623–633;
https://doi.org/10.1145/3126594.3126602
http://roxanneluo.github.io/PeppersCone.html
https://youtu.be/W2P-suog684

Xuan Luo, University of Washington
xuanluo@cs.washington.edu

Jason Lawrence, Google
jdlaw@google.com

Steven M. Seitz, University of Washington
seitz@cs.washington.edu

ins01.gif A plastic cone placed on top of a tablet computer reflects the pre-distorted image on the 2D display as a 3D image.

2. Project Telepathy

Speech is our natural way of communicating. However, it is inherently a broadcast process: Everyone within earshot can hear someone else’s speech. But what if we’d like to target our speech toward only one person? In this demo, we combine two technologies to realize targeted communication, with potential applications in task coordination and private conversations. To detect words, we measure the bioelectric signals produced by facial muscles: Four surface electrodes provide 80 percent accuracy in discriminating between 10 silently mouthed words. To transmit the message, we use a phased array of ultrasonic emitters, capable of producing a highly directional sound beam that can be steered electronically without physically moving the array.
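The electronic steering mentioned in the last sentence comes down to applying a small time delay to each emitter so that their wavefronts add constructively in the chosen direction. A hedged sketch for a linear array follows; it is illustrative, not the authors' implementation, and the 40 kHz carrier, 343 m/s speed of sound, and function names are assumptions.

```python
import math


def steering_delays(n_emitters, spacing_m, angle_deg, c=343.0):
    """Time delay (seconds) for each emitter of a linear array so the
    emitted wavefronts add up at angle_deg off the array normal
    (c is the speed of sound in m/s)."""
    theta = math.radians(angle_deg)
    return [i * spacing_m * math.sin(theta) / c for i in range(n_emitters)]


def excitation_phases(delays_s, freq_hz=40000.0):
    """Convert the delays to driving phases at the ultrasonic carrier
    frequency, wrapped into [0, 2*pi)."""
    return [(2.0 * math.pi * freq_hz * t) % (2.0 * math.pi) for t in delays_s]
```

With angle_deg = 0 every delay is zero and the beam fires straight ahead; a nonzero angle tilts the shared wavefront, steering the beam with no moving parts.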

Bourland, A.C., Gorman, P., McIntosh, J., and Marzo, A. Project telepathy: Targeted verbal communication using 3D beamforming speakers and facial electromyography. Proc. of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, 2017, 1508–1515.
http://www.biglab.co.uk/
https://www.youtube.com/watch?v=YfcPBa_CMaw

Anne-Claire Bourland, Peter Gorman, Jess McIntosh, and Asier Marzo, University of Bristol
amarzo@hotmail.com

ins02.gif Surface electrodes capture the electric signals produced by the muscles when the user silently mouths a word. A processing unit detects which word was mouthed and reproduces it using the phased array of ultrasonic emitters on the forehead.
ins03.gif Left: The user’s head is broadly pointing to the target (green dot). Right: By electronically steering the beam, the phased array can accurately direct the sound toward the target.

3. Wind Ceremony

Wind Ceremony represents our desire to capture the unseen. When participants pass by, the installation spins like a windmill, inviting them to “become” the wind. As it spins, slits in the metal cans in its lower part produce the sound of blowing wind, while the indicators in its upper part point in the direction the participant is heading, showing the wind direction. Invoking a ceremony of the wind, the installation’s two moving parts make the invisible wind visible.

https://mike8503111.wixsite.com/miroc/wind-ceremony
https://www.youtube.com/watch?v=XqdEk1uobyY

Cheng-Te Chi, Shih Chien University
mike8503111@gmail.com

ins04.gif The top part of the installation indicates the wind direction. The bottom part produces the sound. As the bottom part spins, slits in the cans produce the sound of the wind.

4. Tablebot (Tbo)

We are developing a class of robots called situated robots, which hide in plain sight within built environments as furniture and other objects. We anticipate that situated robots will become the universal remote controls of the near future, allowing user interfaces to disappear when not in use.


Tablebot (Tbo) is a situated robot that responds to voice commands, connecting to other objects and the Internet. Tbo’s most useful feature is letting users interact with others: It projects onto flat surfaces to create telepresence experiences that are less “Skype on a stick” and more embodied and immersive.

http://gonsherdesign.com/
https://vimeo.com/225230975

Ian Gonsher (PI), Steve Kim, McKenna Cisler, Jonathan Lister, Benjamin Navetta, Peter Haas, Ethan Mok, Horatio Han, Beth Phillips, and Maartje de Graaf, Brown University
ian_gonsher@brown.edu

ins05.gif A situated robot, Tablebot hides in plain sight.
ins06.gif Tablebot provides immersive telepresence experiences.

©2018 ACM  1072-5520/18/07  $15.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2018 ACM, Inc.
