Columns

XXVI.6 November - December 2019

Humanizing the experience in the era of automation


Authors:
Uday Gajendar


We are living in an age of automation, where computational intelligence guided by algorithms in apps and sensors is woven into our daily lives. What does this mean for the human aspect of experiencing such literally artificial intelligence? The challenge before us is the act of humanization itself—shaping tech interactions with humanistic qualities, like emotion, conversation, and relationship, that impart values for humane living, like trust, respect, and dignity. Doing that well requires uncovering mental models, identifying emotional drivers, and unpacking our expectations of how a smart device or predictive service should behave—and how to interact with it to achieve our goals gracefully... and safely! (Cue HAL from 2001: A Space Odyssey refusing to obey human commands.) And we must study how such expectations evolve as new forms of computational intelligence spread and become, for lack of a better phrase, invisibly powerful—thanks to overnight updates! Suddenly your electric car can drive off by itself at 8 a.m., which can be a bit jarring to discover. Astonishment can turn to distrust and fear in a heartbeat.

To help parse all this, imagine a Venn diagram showing some 20 to 30 percent overlap between humanization and automation. Examining the size of that overlap and how it evolves over time for folks in varying cultures/contexts would help us intuit how people handle expectations of automation. This leads me to Charles Eames's reply when asked about the boundaries of design. He tersely said, "Well, what are the boundaries of problems?" We now must ask: What are the boundaries of expectations and influences? When does magical possibility simply become accepted as normal in our lives, thanks to the vague notions people hold about advancements in algorithms, sensors, assistants, ad trackers, and so forth: always on, listening, learning, and nudging? We need to probe these boundaries of knowing what's happening and understanding how/when to interact—or intervene when it's all going wrong.

It's useful to know how such boundaries/overlaps differ at home versus at work versus on the go—and there are likely shifting levels of cross-directional influence of one upon the other. That is, how is human-centric thinking shaping automation and vice versa, and is it for better or worse? It's not really clear-cut, of course; indeed, the edges become fuzzier as intelligent devices and apps quietly permeate our routines, turning into familiar companions that grow to "know" us, truly inhabiting our lives. And we come to rely upon them to the point that life without them feels... empty? Difficult? Unbearable, even? Hmm...

Increasingly, devices with networked services know of each other's presence and exchange information—your inferred identity profile being one item of exchange! It's all a bit spooky, yet highly convenient. Look at Google, Siri, Alexa, Facebook, or Netflix. Think of how many times you touch each service, or merely have it running on some device in your pocket, on your desk, in your car. Its presence offers a sense of comfort. So where do we draw those lines, and are we actually becoming more automated and less humanized as those boundaries keep blurring?


Finnish architect Eliel Saarinen famously said, "Always design a thing by considering it in its next larger context—a chair in a room, a room in a house, a house in an environment, an environment in a city plan." Taking this hint, we should consider automation's scopes of impact, which may be pathways for humanizing interaction: Always design an automated service or smart device from the human's worldview, to the device's worldview, to the algorithm's worldview, out to increasing scopes of interconnected devices, people, and services, with ever-expanding rings of participants, most of them silent and passive. All the while, be mindful of what data is being passed along and of how social or contextual expectations shape experiences in a positive way, so that a person feels supported, valued, and respected—not simply a mindless zombie who exists for the sake of the algorithm's utility. Remember, every click and tap is valuable training data! A healthy symbiosis is needed, not a parasitic dependency that diminishes our own humanity. It's in our purview as HCI professionals to make that possible.

So how do we foster humanistic qualities within frequent experiences of the automated and algorithmic? By humanistic I mean deep relationships, emotional stories, rich conversations, noble aspirations—a feeling of being alive that validates our own lives. I believe this pursuit also includes human virtues that we wish—and should demand—that smart devices and apps convey: trust, fairness, politeness, honesty, diligence, even humor. These of course contrast with the vices that, sadly, too many badly designed devices and apps express: laziness, deceit, confusion, wastefulness, and stubbornness. Fostering these positive qualities is what it means to add that special element of humanity that makes any experience significant. Let's address this by talking about modalities of interaction, dimensions of intervention, and premises (or postures) of humanizing automation.


Modalities of interaction. We engage with automated services and devices in a variety of ways: voice, touch/tap (mobile), mouse and keyboard, and so on—including AR/VR modes with gamelike controllers and tactile/haptic interfaces. A few points to consider:

  • Interactions are increasingly multi- (or mixed-) modality, combining methods to invoke, verify, and complete feedback loops of commands or alerts from your phone to the device and back—as well as through some Web-based appliance panel. Or think of accessing the same service (like Netflix) across different platforms that all maintain a unified sense of state and presence, as well as a history of activity—tied to your account, which is your primary identity. More on this in a bit.
  • We should consider how seamless the shifting between modes of interaction is. For certain contexts (e.g., an ER or operating room, an industrial facility, a noisy call center), maybe it's better to have clear seams with useful friction that must be engaged.
  • Regardless, every interaction has a set of actors (human and artificial/algorithmic) that require transparency into what's happening, especially into how those interactions train a hidden intelligence, along with proper and meaningful sensory cues to shape a positive relationship or dialogue. Is there a gentle whirring sound? A graceful beep? An LED that glows softly and chimes when the learning has become more confident? (A minimal sketch of such a cue follows this list.)
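
To ground that last question, here is a minimal sketch of a confidence cue, assuming an invented device API (set_led_brightness and play_chime are hypothetical stand-ins, not any real library): ambient glow tracks the system's confidence, and a one-time chime marks the moment the learning crosses a trust threshold.

    # Hypothetical ambient cue: LED glow mirrors the system's learning
    # confidence; a single gentle chime marks the first time that
    # confidence crosses a "trustworthy" threshold.

    CHIME_THRESHOLD = 0.8  # assumed point at which learning feels settled

    class AmbientCue:
        def __init__(self):
            self.has_chimed = False

        def set_led_brightness(self, level):
            print(f"LED glow -> {level:.2f}")   # stand-in for real hardware

        def play_chime(self):
            print("* gentle chime *")           # stand-in for real hardware

        def on_confidence_update(self, confidence):
            self.set_led_brightness(confidence)  # soft glow mirrors confidence
            if confidence >= CHIME_THRESHOLD and not self.has_chimed:
                self.play_chime()                # one-time cue, not a nag
                self.has_chimed = True

    cue = AmbientCue()
    for c in (0.3, 0.6, 0.85, 0.9):  # simulated confidence over time
        cue.on_confidence_update(c)

The design point is the one-time, low-key cue: the device reports its growing confidence without demanding attention.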

Dimensions of intervention. When it comes to living with computational intelligence and automation, intervention is the apt term. It really is about intervening in something that's constantly ON in the background—invisibly and silently, yet always active and listening for signals of human engagement. The manner and frequency of interventions correlate with some intermingled threads that we must all be careful about:

  • Identity. Who are you when you engage with smart, automated entities? Well, programmatically you are your account name and login credentials. But beyond that, it's up to the system's algorithm—how it was constructed and for what purpose—and how much training data you provide via your behaviors and shared data. There is an inferred identity built up over time: a rough portrait, incomplete and finite. It's a proxy of you that's just enough to grant you the convenience you want, evolving within the confines of the algorithm's design (a toy sketch of such a portrait follows this list). Are you aware that a portrait of you is being created at all? Or will you try to manipulate it and project a certain version of yourself, in effect gaming it? How we as HCI professionals shape that awareness for consumers and creators alike is a crucial obligation.
  • Power. The ability to cause change to transpire with your intent, and knowing you actually can do that—that's power. Not being mindlessly swept along, but rather exerting force to shape things as you want them to be. There's a level of self-reflection and deliberation here, which we often forget, thanks to the convenience of silent automation, which allows us to glide through life with ease, but also with blissful ignorance. Maybe we need to be reminded now and then that we do have the power to alter course, or to simply turn it all off and live a life unautomated for a beat! We need to raise that awareness at appropriate times so that people have real power if they choose to use it, for their own benefit.
  • Control. Related to power is the ability and desire to control the service and its data with clear intent, and with a real sense of the consequences. Yes, we're often reminded by Facebook and Google (per some dialogue box that pops up when we least expect it—get out of the way!) that we can control our privacy and security settings, but (a) those screens are far too complex, with tedious toggles and verbose language, and (b) control has meaning only when it's contextualized within the activities and goals we are pursuing. Control has purpose when it's made emotionally relevant within a dialogue and with stories that invite us to take actions to make things as we wish them to be. How can an intelligent app/service stage that kind of conversation with human actors, using the modes of interaction? This requires designing situations that allow control to be respected and enabled in a way that's personally valuable.
  • Responsibility. I won't repeat the classic line from Spider-Man, but we know power and control are nothing without a real sense of responsibility for actions taken and data shared. This applies to the algorithm (and its designers) as well as to the human actor intervening with the smart service. It all goes back to knowing and understanding what's about to happen, and the scopes of impact upon the self, friends/family, the context or space, and others nearby. I'm especially thinking about smart cooking devices, self-driving cars, and automated drone-delivery drops—situations where physical spaces and real people could be affected by negligent behaviors or poorly defined algorithms. This also relates to notions of propriety: when automated suggestions should be sent and when they should be withheld. A medical chatbot texting a dire diagnosis without sensitive guidance from a human could be deeply upsetting to a patient. (Just imagine an AOL-style cheery chime: "You've got [awful disease name]!") Responsibly integrating spatial, social, personality, and emotional considerations into the computational intelligence will enable a positive, humanized experience.
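
To make the identity thread concrete, here is a toy sketch, with every name in it invented for illustration: a service's rough portrait of a person, built solely from behavioral signals—exactly the incomplete proxy described above.

    # Toy "inferred identity": every click and tap nudges a portrait
    # the person never sees unless the designer chooses to expose it.

    from collections import Counter

    class InferredProfile:
        """An incomplete proxy of a person, grown from interaction events."""

        def __init__(self):
            self.interests = Counter()

        def observe(self, category, weight=1.0):
            self.interests[category] += weight  # training data, one tap at a time

        def top_interests(self, n=3):
            return [cat for cat, _ in self.interests.most_common(n)]

    profile = InferredProfile()
    for event in ("cooking", "cooking", "travel", "news", "cooking"):
        profile.observe(event)

    print(profile.top_interests())  # ['cooking', 'travel', 'news'] -- the proxy, not the person

Exposing even a summary like top_interests() to the person, at the right moment, is one small way to build the awareness the identity thread calls for.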

Premises and postures. As smart services and devices are being developed, it's important to consider the premises of their authors. What's the anchoring belief about some intelligent, automated tool, and how does that relate to any implicit expectations for the intended audience? Is it a highly programmatic premise that this is a supertechie tool for hobbyist engineers to geek out on? Or is its premise something grounded in real human moments, such as calm assurance for a family living in a noisy or unsafe neighborhood (referring to smart home security systems)? That premise may then suggest a range of postures, from being overtly persistent and in your face to something more stealthy and out of the way, and everything in between—a range of states of presence and attention. When is it OK to be loud (but not rude), and when is it better to be quiet (but not secretive)? Much human-centered research must be applied to selecting the appropriate postures.


So, as noted in my reflections here, applying our HCI/UX knowledge to these experiences is a bit different from applying it to "dumb" software apps, websites, or hardware. There's a conversational-style exchange; there's invisible learning of behaviors and data patterns (augmented by cloud-based datasets) that changes over time; and there are fuzzy notions of what's possible (due to learning by a hidden algorithm that is indecipherable to most folks) and thus of what's really expected—for ill or for gain. The invisibility is the most striking difference, whereas in typical digital products the interface and interactions are largely self-evident in their limitations and affordances.

And let's face it, this era of automation and computational intelligence will advance further and faster—there's no escape! It's not a cause for alarm but a challenge to address: how to truly humanize such experiences. This goes back to supporting emotionally satisfying relationships that underpin good power-and-control constructs, whereby the user feels confident, knowing what's happening and how to guide things along for their own betterment. As legendary designer Massimo Vignelli proclaimed, "The life of a designer is a life of fight; fight against the ugliness." But I would amend that to say: In this automated era, it's a life of finessing the tensions between humanity and automation to preserve, protect, and progress our quality of life.

Author

Uday Gajendar (ghostinthepixel.com) has been a prolific UX designer and leader for more than 15 years, shipping designs for PayPal, Facebook, Citrix, Adobe, and others. He also enjoys coaching startups on UX fundamentals. [email protected]


Copyright held by author

