What happened to the Internet of Things? Just a couple of years ago, it was all the rage. IoT experts were highly sought after in the job market; conferences and workshops were being held in its honor; and whole research centers and institutes were formed and reformed to carry out research and development into this new and exciting paradigm of computation. Then pretty much nothing happened.
To be fair, however, the underlying idea of IoT—that internetworked things can be made smarter and thus more useful than non-connected devices because they can know things non-connected devices cannot—is still highly relevant, perhaps even more relevant today than it was a couple of years ago.
What has changed recently is not so much the primary idea itself but rather how we have come to relate to that idea: our perspective on the idea. We have started to take the connectivity of devices for granted and moved a click up the ladder of abstraction. The question now is not so much how we connect our devices but what we do with our connected devices and what that means at scale and at a human and cultural level. What does it mean to be human in a world of connected devices that are all designed to do things for us, to make our lives easier and more convenient, and to automate tasks we find boring, somehow below us, or—as is arguably not uncommon—just because they can be automated?
For me, the most exciting development in technology and design over the past couple of years is the way in which maturing interconnected devices equipped with an ever-growing array of sensors, powered by both local and cloud-based computational muscle, can be combined with various forms of artificial intelligence to achieve almost unimaginable things.
The phrase “in the future”—used by IoT researchers and others for a long time to describe a scenario in which their research will be important—is no longer needed. We are now living in the future.
If you own a Tesla automobile, you can now get out of your car and it will park itself for you. It opens your garage door, drives into the garage, and shuts down by itself. The goal for the near future is that your car can be “summoned” from anywhere to meet you, charging itself on the way and syncing its arrival time with your calendar.
In another scenario, you would be able to pull up outside a restaurant and, instead of leaving your car with the valet, have it drive off on its own to find a parking spot. Or, for the brave of heart, why not have your car do a few autopilot Uber runs while you are having your dinner? Notably, these capabilities were added through a software update, not a hardware upgrade.
Having seen this development from the perspective of research and innovation in an academic setting, as well as from the organizational, corporate, and business perspective my current role focuses on, I find it one of the most fascinating and disruptive journeys of this decade.
There is a plethora of current books that are interesting and inspiring in relation to this trend. I have been reading and enjoying The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies by Erik Brynjolfsson and Andrew McAfee, a book full of insights and—not least—rich examples of how current digital technologies are changing life, society, and business as we know them.
In The Glass Cage: How Our Computers Are Changing Us, Nicholas Carr explores the various consequences of our growing dependence on computers and the hidden costs of letting software control almost every part of our lives, especially the tendency of software automation to leave us feeling disengaged and discontented. Finally, in Superintelligence: Paths, Dangers, Strategies, Nick Bostrom goes the whole nine yards and poses the question: If machine brains one day come to surpass human brains in intelligence, should we be worried? In the same way that the survival of gorillas today depends more on humans than on the gorillas themselves, would our survival then rest on the actions of machine intelligence?
While these are all interesting books that I, in different ways, have taken great pleasure in reading, many of the questions and considerations they bring to the surface—whether human, technical, societal, or cultural—make me return to what I consider a number of classics in the field. I would wholeheartedly recommend all of the books below to anyone interested in these topics.
First, in What Computers Still Can’t Do: A Critique of Artificial Reason, Hubert Dreyfus argues that it seems impossible for disembodied machines to achieve any higher intelligent behavior. First published in 1972, the book remains surprisingly vital and relevant, even though the world has moved on and so has AI, especially in its recent focus on machine learning. Dreyfus’s book still serves as invaluable glue between philosophy and AI, pointing to their inseparability.
Second, published in 1990, Technology and the Lifeworld: From Garden to Earth by Don Ihde, a philosopher of technology, manages to lay out a prescient view on digital technology and the various and “multistable” ways in which we as humans come to relate to and with our technologies and our world. Ihde’s analysis, which predates most recent advances in autonomous systems and AI, is still highly relevant as a backdrop to the question of how we relate to these kinds of technologies that are no longer available to us only in the realm of science fiction.
Third, and finally, in Technology and the Character of Contemporary Life: A Philosophical Inquiry, Albert Borgmann, also an American philosopher of technology, argues that a key but often overlooked aspect of technology and technological development is that rather than freeing us up to pursue more inspiring and fulfilling things, it tends to control our lives and contribute to a life dominated by effortless, thoughtless consumption, ultimately making us passive. Here, Borgmann asks the still relevant question: What is the “good life”? Is it about making our life more convenient? Is the good life really about having our cars drive us to work?
Daniel Fallman is a design director at McKinsey & Company in New York City, where he helps large organizations innovate and become design led. Formerly a full professor of human-computer interaction at Umeå University, Sweden, his research interests lie at the intersection of interaction design, design theory, and the philosophy of technology. firstname.lastname@example.org
Copyright held by author
The Digital Library is published by the Association for Computing Machinery. Copyright © 2017 ACM, Inc.