"Auto Mode disabled by user," read the medical chart. The "smart" system had been turned off from midnight to noon the following day, an act of disobedience by the "user" (me) in order to get a few hours of sleep. This is not a research project that I chose but rather one that landed on me in 2011 when I learned I was a type 1 diabetic. For four years, from 2018 to 2022, I used one of "the world's first" automated systems for delivering insulin to manage and control my blood sugar.
I identify, research, and write as a disabled cyborg. I am a cyborg not because my body is (partly) made up of machines (the insulin pump and sensor system) but rather because of my interest in cyborg knowledges, practices, and politics, which take disability into account in order to question the myth of technological perfectionism and solutionism while at the same time seeking out possibilities for more generative questioning and engagement.
→ AI systems are disabled in that they are prone to errors, failure, and biases.
→ These problems can be understood as positive attributes rather than deficits, which will allow us to better mitigate harmful social consequences.
→ A crip understanding of AI systems engages critically and generatively with a reimagining of HCI through practice-based art and design approaches.
For me, the notion of cyborg disability acknowledges that both humans and machines might be understood as imperfect, unsolvable, and, yes, even disabled. Perhaps it is not so unusual to talk about computational technologies as disabled? As the example above illustrates, it is a common colloquial and technical expression when talking about computing. For example, "Your account has been disabled." But we rarely understand disability to be a property of both humans and machines. In this piece, I take up this provocation seriously to forge new directions for HCI.
In HCI, there have been a number of examples of the use of autoethnography, autobiographical design, and first-person research as new methodologies for data collection, reflection, and design intervention. Similarly, disabled scholars within HCI often draw on their own experiences to open up new questions and write new futures for the field. Specifically, as Williams et al. write in Interactions, "Crip HCI recognizes the researcher as situated, and thus articulated within, the sociotechnical meta-contexts of society, scholarship, research, and design inquiry and practice." Here, disability can be understood as a means of "rupture," disturbing existing knowledge practices and disciplinary norms. Disabled scholars committed to a "crip" understanding of disability argue that it is not a lack of something (a normative body or an accessible society) but rather an expansion of what it means to be human: unmaking and remaking existing ideas about humanity while, at the same time, opening up generative possibilities for intervention, action, and change.
Often, I am not sure whether I am taking care of these devices or they are taking care of me.
In this piece, I use autoethnographic field notes on my own experience of disability as a type 1 diabetic, which includes my dependency on a synthetically produced hormone (insulin), a machine (my insulin pump), and an assorted arrangement of digital and analog parts, including sensors, tubing, charging cables, alcohol swabs, insertion devices, needles, and the like. I use my own observations from daily life with machines to better understand the ethical and political stakes of computing and design. In fact, I believe it is my responsibility as a researcher to pay careful attention to these experiences because disabled people have long been experimental subjects for technologies that are later deployed in the general population.
The fragile arrangement of human, machine, biotechnological, and analog things makes life possible, and also, sometimes, quite impossible. While technologies offer new affordances and modes of living, they also introduce complexities that require attention, care, and maintenance. Software updates, regulatory changes, and medical innovations offer new possibilities, but they also remove some ways of being in the world. Habits, routines, practices, and even notions of selfhood and subjectivity are reshaped around seemingly small modifications. Often, I am not sure whether I am taking care of these devices or they are taking care of me: a turn toward a more relational, more-than-human, and/or posthuman subjectivity that sees humans and machines as intimately entangled rather than as the discrete entities they have traditionally been conceived to be.
My field notes are foggy, patchy, and partial, often fragmented statements sent by e-mail to myself in the middle of the night before drifting back to sleep. For me, intimacy as method engages affective dimensions—in this case, fatigue, exhaustion, and burnout. By the glow of my iPhone, they hint at possible vignettes that can be copied into a Scrivener document and brought to life the following day, when the sun is shining in through the heavy red curtains in the living room. But, while personal in nature, they are not meant to be confessional. As the author and as a researcher, I can decide which details to share and which to hide. My purpose is not to illuminate my own life per se but rather what it means to live with machines, when your life truly hangs in the balance.
What follows is a series of abbreviated field notes—each roughly 300 words, extracted from longer passages and written over the course of the past four years—selected for the ways in which they illustrate the nature of these "intimate infrastructures" that might be, like me, understood to be disabled.
In 2013, before I adopted an insulin pump and sensor system, I often woke in the night drenched in sweat and nearly too weak to get out of bed to get a 15-ounce glass of orange juice from the refrigerator in the kitchen a few steps away or even to reach for chalky glucose tablets sitting on the nightstand. I often went to sleep hoping I would wake up in the morning (and not fall into a diabetic coma). While teaching classes, my face would go numb. While walking down the street, I would suddenly not be able to feel my legs.
While that fear of frequent severe lows is a thing of the past for me, from 2018 to 2022, I was unable to sleep through the night more than a few times a week due to the need to calibrate the sensor system in order to ensure the continued operation and accuracy of the insulin pump. It is almost ironic that the system that has nearly eliminated the frequent episodes of extreme low blood sugar that woke me in the middle of the night almost a decade ago has enforced another form of sleep interruption and deprivation. But this one feeds data to the algorithm rather than sugar to the body.
With this "smart" system, frequent sleep interruption was such a common occurrence that I was convinced I was sleeping like a sensor (in shorter patterns that mimic the system). In short, with long-term sleep deprivation leading to anxiety, irritability, and depression, I believed that the AI system keeping me alive was also ruining my life.
One morning in early December, on the final day of the semester, I got a low-battery alert. It was 11:45 a.m. and I needed to eat lunch before heading to campus to teach that afternoon at 2. I unscrewed the battery cap with a coin, a quarter to be exact. I popped the new battery in and screwed the battery cap back on. The pump, however, didn't recognize the battery and the screen did not illuminate. I glanced at the table in the corner where I had placed the pump and saw a small copper-colored piece of metal in the shape of a plus sign with little grooves and bumps. The piece had fallen off the cap.
My heart was pounding; I was stressed and afraid in an existential way. I called the company's tech support, but the soonest they could deliver a new battery cap was the next day. I considered possible fixes for the broken cap. I bought superglue at the nearest CVS pharmacy, located within a Target. Sitting on a bench in the pharmacy, I carefully dropped a dab of glue onto the battery cap and affixed the metal piece. Once the glue dried, I screwed the cap back on. Still, this made no difference.
My phone rang; it was the local diabetes educator. He had remembered my name from the original training three years before. He had a few extra battery caps and could drop one off that afternoon. At 2:45, he met me at the pharmacy. I bought a package of new batteries and inserted them into the pump along with the new battery cap. Ta-da, the screen illuminated. I reconnected my pump to my body. In seven years, this was the longest that I had been disconnected from my pump.
My heart started beating more slowly. I zipped up my coat and headed to a restaurant for lunch. As I was walking westward, away from the pharmacy, I felt a familiar buzzing under my coat. Bzz bzz. Bzz bzz. The sensation that had been such a nuisance for so many years reminded me once again that I was alive. Bzz bzz. Bzz bzz. I was reunited with my disabled cyborg identity. I unzipped my coat and took out my pump. "Calibrate now," the alert said. Calibrate now.
It was 3:30 on a Saturday morning in late August after a long and intense week of travel. I was in the bathroom, grasping the porcelain of the toilet bowl, hanging on as if my life depended on it. And, in fact, it did.
I had awoken with dangerously high blood sugar. I thought it was strange. I hadn't eaten much the day before because I was worried I would run out of insulin before getting back home to New York. About 30 minutes before I got home, on a delayed A train, my pump started its characteristic buzzing and beeping, its only means of communication with me, its human being. As soon as I arrived home, sweaty and tired, I changed the cartridge of insulin in the pump and tubing attached to my stomach. Facing a refrigerator that was nearly empty after four weeks away, I went straight to bed without dinner, too tired to make any additional effort.
At 4 a.m., I drank some water, ate some crackers, and went back to bed, administering insulin with the pump every 30 minutes and monitoring its effect on my BG [blood glucose]. I felt nauseous again and ran to the bathroom. At 5 a.m., I hypothesized that I wasn't getting any insulin at all, so I changed the tubing. At 7:30 a.m., there was still no change, and my husband went for a haircut. At 8:15 a.m., I texted him that "the numbers" were coming down. I finally fell asleep at 9 a.m.
Later that day, I removed the original "site." Sure enough, the small tube that delivers the insulin had bent and slid under the adhesive tape and never entered my skin. The humidity in the apartment had prevented the adhesive from sticking properly. I had been without insulin for nearly 12 hours for the first time in 10 years. It took me nearly a week to recover.
These field notes illustrate the powerful ways in which paying attention to your own experiences with computing and design and recording your observations can deepen your understanding of the situatedness and fragility of these systems, the ways in which they are social as well as technological, and the ways in which they become visible upon breakdown. While they challenge HCI's reliance on scientific norms such as objectivity and quantification, they also stand to contribute a rich engagement with positionality, context, and politics. I describe three kinds of failures that emerge at the intersection of social and technological systems: 1) the ways in which unpredictable and indeterminate algorithmic systems do not match up with human lives and, in fact, demonstrate their own forms of situated nonhuman actions; 2) the ways in which hardware (such as the battery cap) and software make up functioning systems, which are supported by human relations; and 3) the ways in which social, psychological, and environmental factors such as exhaustion and humidity might interact with technologies such as adhesive and insertion devices.
In considering my technologies as well as myself as disabled, I see failure rather than perfection as the default setting. As a disabled cyborg, I am aware that social and technological systems fail together, that there may not be a single place to put the blame. But, at the same time, like my crip identity, failure is not a lack but rather what it means to live with disability and what it means to live with machines.
I dwell on these failures both as critique and possibility. For example, I am currently collaborating with designers and artists on a series of more creative projects. These projects have allowed me to turn embarrassing, exasperating, and even debilitating experiences into things for thinking and creative expression. The purpose of these projects is not to solve the problems within the medical devices but rather to raise complementary questions about issues such as privacy, labor, and care, as well as the boundaries between who is considered human and what is considered machine.
While disabled scholars offer modes of rupturing disciplinary norms and accounting for diverse experiences with computational systems, artists, through their creative practices, challenge common terminologies around computing. In recent years, for example, artists and curators have proposed alternative understandings of what is meant by artificial intelligence. Maya Indira Ganesh, Pratyush Raman, Padmini Ray Murray, and the Design Beku Collective's "AI Is for Another" creates "forks and distractions in how 'AI' is being imagined and produced in the world" (https://aisforanother.net). Stephanie Dinkins's Secret Garden argues that "stories are algorithms" with an immersive installation that showcases Black women's stories (https://www.stephaniedinkins.com/secretgarden.html). With her "techno-vernacular" creativity as illustrated through algorithmically generated portraits of well-known Black leaders, Nettrice Gaskins explores the ways in which culture and making by diverse groups can expand knowledge practices, even in STEM fields, which are often deemed objective. And micha cárdenas creates public performances as "poetics" that are understood as actions, movements, ritual offerings, and "possibilities of life" for trans of color artists.
Failure is not a lack but rather what it means to live with disability and what it means to live with machines.
My experiences as a disabled cyborg have prompted the following series of what-if questions:
- What if…I redesign myself and/or my life?
- What if…I design new interfaces (fashions, coverings, etc.)?
- What if…I use my data to create art objects (e.g., sculptures, visualizations, music)?
- What if…disabled people design their own technologies?
- What if…I participate in and design new communities and/or social structures?
- What if…I create new infrastructures?
In 2015, my worries about going to the beach with a disabled cyborg body led me to reach out to Sky Cubacub of Rebirth Garments, a fashion designer who creates queer crip garments for and performances with disabled people, to design a custom-made bathing suit that accommodates my insulin pump. The piece allowed me to think more about the invisibility of diabetes as a disease in tandem with the visibility of the machines that I use to control my blood sugar.
Continuing with these creative modes of experimentation, since 2020 I've collaborated with interdisciplinary visual artist Itziar Barrio on a series of robotic sculptures (Figure 1) that use data from my first "smart" insulin pump, the one that kept me awake at night for much of the four years between 2018 and 2022. When I visited Barrio's studio in December 2020, I was struck by the ways in which the choice of materials—cement, spandex, and rubber—suggested an alternative narrative about computing in contrast to the shiny metal and glass of the latest mobile phones, tablets, and computers. I could visualize the scholarly citations that might support such a line of questioning.
|Figure 1. Robotic sculpture in progress at Itziar Barrio's studio. Concrete, spandex, lighting filters, hardware, epoxy resin, Arduino, motor, custom circuit board, and Laura Forlano's insulin pump alert data. September 2022.|
In 2019, I spent the month of July transcribing alert and alarm data on a daily basis to better understand the patterns. Here is one example, a day when I received more than one alarm per hour:
Day 27, July 27
- 12:31 a.m. Low reservoir
- 5:56 a.m. Calibrate now
- 7:01 a.m. Calibrate now
- 8:06 a.m. Calibrate now
- 8:31 a.m. Change sensor
- 1:05 p.m. Low reservoir
- 2:27 p.m. Lost sensor signal
- 2:43 p.m. Possible signal interference
- 2:59 p.m. Check connection; ensure transmitter and sensor connection is secure, then select OK.
- 5:02 p.m. Sensor connected
- 5:06 p.m. Sensor warm-up
- 7:01 p.m. Calibrate now
- 7:30 p.m. BG required
- 7:31 p.m. Alert on low
- 8:11 p.m. Alert on low
- 8:41 p.m. Alert on low
- 8:46 p.m. Low SG [sensor glucose]
- 9:06 p.m. Alert on low
- 9:06 p.m. Low SG
- 9:31 p.m. Alert on low
- 9:40 p.m. BG required
- 9:40 p.m. Calibration not accepted
- 9:56 p.m. Calibrate now
- 9:58 p.m. Change sensor
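A day's transcribed log like the one above can also be tallied programmatically. The following is a minimal, illustrative sketch (not the analysis behind the sculptures): the timestamps and labels are copied from the list above, and the script simply counts alert types and computes the average rate, confirming more than one alarm per hour.

```python
from datetime import datetime
from collections import Counter

# Day 27 (July 27) alerts, transcribed from the log above (illustrative only).
alerts = [
    ("12:31 AM", "Low reservoir"),
    ("5:56 AM", "Calibrate now"),
    ("7:01 AM", "Calibrate now"),
    ("8:06 AM", "Calibrate now"),
    ("8:31 AM", "Change sensor"),
    ("1:05 PM", "Low reservoir"),
    ("2:27 PM", "Lost sensor signal"),
    ("2:43 PM", "Possible signal interference"),
    ("2:59 PM", "Check connection"),
    ("5:02 PM", "Sensor connected"),
    ("5:06 PM", "Sensor warm-up"),
    ("7:01 PM", "Calibrate now"),
    ("7:30 PM", "BG required"),
    ("7:31 PM", "Alert on low"),
    ("8:11 PM", "Alert on low"),
    ("8:41 PM", "Alert on low"),
    ("8:46 PM", "Low SG"),
    ("9:06 PM", "Alert on low"),
    ("9:06 PM", "Low SG"),
    ("9:31 PM", "Alert on low"),
    ("9:40 PM", "BG required"),
    ("9:40 PM", "Calibration not accepted"),
    ("9:56 PM", "Calibrate now"),
    ("9:58 PM", "Change sensor"),
]

# Parse the 12-hour timestamps and measure the span from first to last alert.
times = [datetime.strptime(t, "%I:%M %p") for t, _ in alerts]
span_hours = (times[-1] - times[0]).total_seconds() / 3600

# Average alert rate over the day's span: here, slightly more than one per hour.
rate = len(alerts) / span_hours

counts = Counter(label for _, label in alerts)
print(f"{len(alerts)} alerts over {span_hours:.1f} hours ({rate:.2f}/hour)")
for label, n in counts.most_common():
    print(f"  {n}x {label}")
```

Even this simple tally makes visible what the narrative describes: calibration demands and low alerts dominate, and the demands on the "user" arrive around the clock.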
Some of this data is printed directly on the circuit board powering the sculpture, a reminder of the human labor that is required to make automated systems work. These sculptures translate alert and alarm data into subtle motions that recall the writhing of four years of sleep deprivation, and into balloons that inflate and deflate, mimicking the inhaling and exhaling of human lungs.
This research suggests a few possible directions for HCI. First, let's continue to develop modes of autoethnographic engagement. Could you live intimately with your designs for one day, for one week, for one month, for one year?
Second, let's admit that AI, like all technological systems, is disabled. Our design processes are still overwhelmingly skewed toward optimization, perfection, efficiency, success, and "the happy path." We vastly understate and minimize the ways in which these systems may fail and the ways that things might go wrong, leaving others to suffer the consequences of our actions. While making things work is difficult, it is even more difficult to acknowledge that things might not go as planned.
Could you live intimately with your designs for one day, for one week, for one month, for one year?
Based on my experiences, AI systems are disabled in the following ways: 1) There can be a specific problem with the software or hardware itself (e.g., when the battery cap broke in my earlier example); 2) There can be issues with the ways in which AI systems fit into the broader context, including social, political, economic, and environmental aspects (e.g., when the humidity caused the adhesive and tube not to insert properly in my example); 3) AI systems can literally be disabled through the refusal to use them (e.g., when I turned the AI system off at night); and, 4) AI systems can be creatively reimagined through practice-based art and design approaches that engage with disability (as well as themes such as failure). Crip HCI can embrace the disability model of computing, acknowledging that failures, breakdowns, errors, and biases are not a "problem" to be solved but rather the reality of living and working with technology. By using a crip HCI approach, we may become better at understanding the social consequences of our designs and, hopefully, doing something about them before they cause harm.
Third, crip HCI must seek to support disabled people in articulating their own knowledge about their disability as well as their interactions with computational systems.
Finally, let's engage with creative projects that allow us to reframe the meaning of computation and design. What could go wrong?
Laura Forlano is a social scientist and design researcher. She is an associate professor of design at the Institute of Design at the Illinois Institute of Technology. She is an editor of three books: Bauhaus Futures (MIT Press, 2019), digitalSTS (Princeton University Press, 2019), and From Social Butterfly to Engaged Citizen (MIT Press, 2011). [email protected]
Copyright 2023 held by owner/author