Columns

XXIV.1 January - February 2017
Page: 20

A user, an interface, or none


Authors:
Uri Kartoun


Google and Microsoft are known primarily for three major services: search, map navigation, and email. One fundamental requirement of a high-quality software service is responsiveness. Google and Microsoft excel in providing fast responses across their services, making the user experience strikingly attractive.

User interface (UI) advances obviously make software products friendlier, more responsive, and more fun. However, such advances gradually require less intelligence from users because many of the decisions are made by a computer. Will our decision-making no longer be needed, say, in the year 2073? Given what we know today, it looks as if advances in many technologies, not only in UIs, will eliminate most of the decisions that require human participation, such as choosing the most appropriate dinner based on credit card transactions, genetic profiles, and cholesterol levels. Fundamental human needs, I assume, will stay the same, including the need to consume food, communicate with other people, travel, and respond to varying weather conditions with clothing and shelter.

It is now 2017, and the majority of the world’s population uses a desktop or mobile device to address these needs to some degree. The UI is a crucial component that allows us to use these devices effectively in making decisions, as well as in controlling actions that affect our future. To enable faster searching, in 2010 Google introduced Google Instant, a technology embedded in its search engine, with the major objective of allowing users to browse the Internet more quickly [1]. A subsequent invention from Microsoft used a ranking module to asynchronously cache and present data more efficiently [2]. On the one hand, society would likely benefit from the two technologies, mainly due to their potential to accelerate access to online resources and to increase the quality of the information retrieved. On the other hand, these technologies may directly affect users’ natural decision-making process by guiding them toward which actions to take, thereby leading them to use less of their own intelligence and judgment.
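To make the idea concrete, the sketch below shows, in Python, how a search box might rank likely completions of a partially typed query and asynchronously warm a cache with their results. It is only an illustration under simplified assumptions, not the method claimed in either patent [1,2]; the query log, ranking heuristic, and fetch function are all hypothetical.

    # Minimal sketch of anticipated-query prefetching with an
    # asynchronous cache. Illustrative only; not the patented methods.
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical log of past queries, used to rank likely completions.
    QUERY_LOG = ["weather boston", "weather berlin", "web email", "weather boston"]

    def rank_completions(prefix, log=QUERY_LOG, top_n=2):
        """Rank full queries extending the prefix by how often they occur."""
        counts = {}
        for q in log:
            if q.startswith(prefix):
                counts[q] = counts.get(q, 0) + 1
        return sorted(counts, key=counts.get, reverse=True)[:top_n]

    def fetch_results(query):
        """Stand-in for a real search back end."""
        return f"results for '{query}'"

    class AnticipatingSearchBox:
        def __init__(self):
            self.cache = {}                      # query -> results
            self.pool = ThreadPoolExecutor(max_workers=4)

        def on_keystroke(self, prefix):
            # As the user types, asynchronously warm the cache for the
            # highest-ranked anticipated queries.
            for query in rank_completions(prefix):
                if query not in self.cache:
                    future = self.pool.submit(fetch_results, query)
                    future.add_done_callback(
                        lambda f, q=query: self.cache.__setitem__(q, f.result()))

        def on_submit(self, query):
            # Serve from the cache when the anticipated query was right.
            return self.cache.get(query) or fetch_results(query)

    box = AnticipatingSearchBox()
    box.on_keystroke("weather")              # prefetch likely completions
    box.pool.shutdown(wait=True)             # let prefetches finish (demo only)
    print(box.on_submit("weather boston"))   # likely a cache hit

In this sketch, a cache hit on submission returns instantly, which is the responsiveness effect described above; a miss simply falls back to a normal fetch.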

One example of an exotic UI technology is mind reading, often depicted in movies yet barely present in the real world. There are only a few examples in which such an exotic UI technology has stepped out of the lab and become a product. For example, Caltech has demonstrated an experimental prototype in which a paralyzed man drinks beer with the help of a mind-controlled robotic arm. Outside of these exotic technologies, more common UI tools include the computer mouse, the keyboard, and the monitor. Combining these three pieces of hardware with well-designed software is a fundamental requirement of any high-quality interactive product. But imagine a computer without any input accessories such as a mouse or keyboard. Instead of typing or deleting letters and words in a text editor, you would transmit “thinking commands” to the computer, which would then magically create your content on the device. Taking imagination even further, think of a computer that would generate the text of this entire manuscript without needing me to provide any actions or thoughts.

Back to reality. As technologies advance and new products become available, traditional input devices are often replaced. One common example is the touchscreen, now ubiquitous on smartphones and tablets. New patents for intriguing UI technologies have recently been granted, and these technologies are gradually becoming an integral part of successful products. For instance, a newly granted Apple patent describes a method for using a mobile device’s fingerprint sensor as a UI navigation tool. Other exciting UI-related inventions emerging from Apple include a glasses-free interactive holographic touchscreen display.

A different type of UI allows a user to interact verbally with a computer, without any physical interactive accessories or a computer monitor. Instead, the computer is equipped with intelligent software capable of understanding spoken human language and engaging in interactive collaboration with the human. Many applications that handle unstructured data rely on deep learning, a set of highly efficient architectures that model data by applying nonlinear transformations, often applied to speech processing [3]. IBM’s Watson is one example of such a powerful machine; it can interpret unstructured medical content provided by a physician even when the content is noisy or heterogeneous.
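As a rough illustration of the principle behind deep learning [3], the sketch below stacks a few layers, each applying a linear map followed by a nonlinear transformation. It is a minimal sketch, not any production system: the layer sizes are arbitrary and the weights are random stand-ins for parameters that a real model would learn from data such as speech.

    # Minimal sketch of deep learning's core idea: stacked nonlinear
    # transformations. Weights are random here; a real system would
    # learn them from data by gradient descent.
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        """A common nonlinearity: zero out negative activations."""
        return np.maximum(0.0, x)

    def init_layer(n_in, n_out):
        """Random weights and zero biases for one layer (illustrative only)."""
        return rng.normal(0.0, 0.1, size=(n_in, n_out)), np.zeros(n_out)

    # Three layers: e.g., 20 input features -> 16 -> 16 -> 4 output scores.
    layers = [init_layer(20, 16), init_layer(16, 16), init_layer(16, 4)]

    def forward(x, layers):
        """Pass the input through successive nonlinear transformations."""
        for weights, bias in layers:
            x = relu(x @ weights + bias)
        return x

    features = rng.normal(size=20)    # stand-in for, say, an audio frame
    print(forward(features, layers))  # scores for 4 hypothetical classes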

In Lewis Carroll’s fantasy novel Through the Looking-Glass, the White Queen lives backward in time and claims to remember events that occur in the future. If there were a way to harness such a power, then all future technologies would be known in advance, and strategic decisions about developing new technologies would ultimately be more beneficial to society. But we are human beings and as such are intellectually limited. We cannot reliably predict which new technologies will be discovered in the coming decades, or to what extent advances in UIs may positively or negatively affect human intelligence. These technologies may not yet have appeared even in their creators’ imaginations. The perspectives of Jules Verne, George Orwell, and other like-minded individuals, including Google’s Ray Kurzweil [4] and Microsoft’s Eric Horvitz [5], may well prove invalid by 2073, however carefully we speculate in 2017 about what the future holds. We think we are adventurous in coming up with ideas such as colonizing Mars or becoming cyborgs. However, these predictions are trivial, expressing a predictable and natural evolutionary path for survival, often magnified by the media. We still praise Thomas Edison for his light bulb without really considering that lighting technology itself might become obsolete, given advances that allow us to see in the dark without any special equipment or electricity infrastructure. We cannot come up with nontrivial ideas because we lack the ability to define what a nontrivial idea is. We rely on our intelligence, imagination, and observations of the past to make predictions and decisions about the present and the future. These three components yield only speculation about how we and our world will look in the future. User interfaces, robots, and biomedical data technologies [6] may not even exist by 2073.

It feels as though the need to make real-time decisions is slowly becoming obsolete. However, new technologies may trigger hidden human decision-making needs not yet known to us. By analogy, we can observe the way we live in 2017 and reflect on the situation of our ancestors who lived in caves hundreds of thousands of years ago. Say our ancestors had just made a significant discovery, such as fire, or developed a new weapon that improved the efficiency of hunting. Could they have hypothesized that one day humans would make decisions about which car to purchase, which brand of refrigerator to choose, or whether to listen to classic rock, psychedelic, or gothic metal tracks on their music players? Probably not. Similarly, current technologies cannot be used to predict the future. These examples reveal a set of decisions that were not part of the caveman’s circle of thought; likewise, I believe there are other sets of human decisions still unknown to us. We think we live in a technologically advanced era, but if we could compare the present with 2073, we might conclude that our world is not as advanced as we think. In this respect, we are no different from our ancestors.


References

1. Kamvar, S.D., Haveliwala, T.H., and Jeh, G.M. Anticipated query generation and processing in a search engine. The United States Patent and Trademark Office (US Patent 7836044). Oct. 27, 2010.

2. Kartoun, U. Asynchronous caching to improve user experience. The United States Patent and Trademark Office (Publication No. US20130204857A1). Aug. 8, 2013.

3. LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature 521, 7553 (2015), 436–444.

4. Kurzweil, R. The law of accelerating returns (an essay). Mar. 7, 2001.

5. You, J. A 100-year study of artificial intelligence? Microsoft Research’s Eric Horvitz explains. Science. Jan. 9, 2015.

6. Kohane, I.S., Drazen, J.M., and Campion, E.W. A glimpse of the next 100 years in medicine. The New England Journal of Medicine 367 (2012), 2538–2539.

Author

Uri Kartoun is a research staff member at IBM Healthcare Research in Cambridge, MA. Previously he was a research fellow at Harvard Medical School/Massachusetts General Hospital. He obtained his Ph.D. focused on human-robot collaboration from Ben-Gurion University of the Negev, Israel. uri.kartoun@ibm.com


Copyright held by author

The Digital Library is published by the Association for Computing Machinery. Copyright © 2017 ACM, Inc.
