Authors:
Nadine Felber, Hamed Alavi
A common wish—in fact, maybe the most important wish—among the elderly is to live as long as possible in their own home. Home, in this context, denotes a place someone is familiar with, has control over, and that is sheltered from unwanted outside influence or scrutiny [1]. For older people, the familiarity of this space is especially crucial: They are so familiar with every step of the staircase, every door handle, and every light switch that they are able to maneuver through their home safely, even while physical and cognitive capabilities decline. One of us observed this firsthand: Her great-aunt, over 90 years old and having lived more than 30 years in her apartment, moved precisely and without incident through her living space despite having gone blind. She had achieved complete symbiosis with her home, knowing where each and every thing was located as if she could still see it.
Is AI going to ruin this relationship with one's home?
→ Older people living in smart homes experience a new kind of "home" meant to support their independence, but there are risks involved.
→ Either they rebel against the support of the smart home, or they become overly reliant on it.
→ Therefore, the smart home should also take into account their somatic self-assessment.
The increasing incorporation of sensors, actuators, and AI applications in built environments is introducing fundamental changes to our everyday experiences at home. How can we ensure that AI won't disrupt the symbiotic relations between the elderly and their homes but rather become a part of them?
Think about it: "Home" is where you feel—and are—most yourself. In the privacy of your home, you have privileged, undisturbed access to yourself. Nobody knows you like you know yourself while at home. And this is even more true for the elderly. An older person has had decades to get to know themselves and their capacities somatically (i.e., through their body)—especially in a familiar environment. AI in the smart home, however, "knows" the occupant through their data, and based on that data can make predictions about the person's capacities and the associated risks, by recognizing their patterns and habits, detecting minute changes in those patterns, and foreseeing possible problems [2].
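To make this kind of pattern analysis a little more concrete, consider the following minimal sketch in Python. The data, the window size, and the threshold are hypothetical placeholders; monitoring architectures such as the one described in [2] use far richer models, but the underlying idea of comparing a person's recent behavior against their own baseline is similar.

```python
# Illustrative sketch: flag a "minute change" in a daily activity pattern
# by comparing the most recent day against a rolling personal baseline.
# All names, values, and thresholds here are hypothetical, for illustration only.
from statistics import mean, stdev

def deviates_from_baseline(daily_counts, window=28, z_threshold=2.0):
    """Return True if the most recent day differs markedly from the
    resident's own recent baseline (rolling mean +/- z_threshold * stdev)."""
    if len(daily_counts) <= window:
        return False  # not enough personal history to judge
    baseline = daily_counts[-(window + 1):-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False
    z = abs(daily_counts[-1] - mu) / sigma
    return z > z_threshold

# Toy example: a sudden drop in movement on the last day
history = [5200, 4900, 5100, 5050, 4800] * 6 + [2100]
print(deviates_from_baseline(history))  # True in this toy example
```

Even in this toy form, somebody has to decide what counts as a "minute change" worth acting on, a decision the resident themselves never makes.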
Therefore, the most important impact AI will have on the elderly is likely to be on their self-assessment and thus their self-confidence in executing everyday activities. Ideally, data will support the somatic experience of the elderly, reaffirming their decisions. But what happens if AI undermines their self-confidence? What if its constant alerts about the risks in their daily lives eventually paralyze them? What if, in trying to help the elderly, the AI in their homes actually harms them in the long run?
To understand this impact, and the risks it carries, in more detail, let's look at the range of available AI-enabled at-home applications for older people. There are at least three types of applications with monitoring and assistive features aimed at making home a safer place for the elderly:
- Fall-detection sensors in floors, walls, or wearables (falls are the most common accident the elderly face) [3]; a simplified sketch of such detection logic follows this list
- Sleep-monitoring mattresses or wearables to detect changing patterns or insomnia [4]
- Motion sensors to track movement patterns and detect a decline in physical functioning [5]
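As a rough illustration of the first item on this list, the sketch below shows the kind of threshold logic a wearable fall detector might apply: a sharp spike in acceleration followed by a period of near-stillness. The thresholds and the assumed sampling rate are invented for illustration only; the systems discussed in [3] are considerably more sophisticated and user-centered.

```python
# Deliberately simplified sketch of wearable fall detection:
# look for an impact spike in acceleration magnitude followed by
# a period of near-stillness. Thresholds are hypothetical placeholders.
import math

IMPACT_G = 2.5      # spike above ~2.5 g treated as a possible impact
STILL_G = 0.15      # deviation from 1 g below this counts as "still"
STILL_SAMPLES = 50  # e.g., ~5 seconds at an assumed 10 Hz sampling rate

def magnitude(sample):
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def possible_fall(samples):
    """samples: list of (x, y, z) accelerations in g, oldest first."""
    mags = [magnitude(s) for s in samples]
    for i, m in enumerate(mags):
        if m > IMPACT_G:
            after = mags[i + 1:i + 1 + STILL_SAMPLES]
            if len(after) == STILL_SAMPLES and all(abs(a - 1.0) < STILL_G for a in after):
                return True  # impact followed by lying still
    return False
```

Even at this level of simplification, the detector encodes assumptions about what a fall looks like that may or may not match how the wearer experiences their own body.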
All of these features are meant to support older people in their daily life at home. However, they also interfere with the purely somatic experience and self-assessment of the elderly.
The Somatic Experience Versus the Smart Home's Suggestions
Let's briefly review what the somatic experience is: the subjective, personal assessment of how we feel about a certain activity [6]. It is the ultimate decision maker when it comes to personal action. Data is usually used to confirm feelings in scenarios such as the following: If my thermometer confirms that I have a fever, I call my boss, reassured that I am indeed ill and thus justified in not going to work. If the thermometer shows a normal body temperature, contradicting my somatic experience of a fever, I turn to other somatic experiences to make a decision. Is my throat sore? Am I coughing? Sneezing? If enough somatic markers still give me the impression that I am unwell, I will still not go to work, thus discarding the factual data of my thermometer. If my phone tells me it's 15 degrees Celsius (59 degrees Fahrenheit) outside, I might think I won't need a jacket. But when I go outside and feel cold, I'll go back inside and take one anyway. These and numerous similar examples show that our own perceptions play a crucial role in our decision making, and this remains generally true across physical, spatial, temporal, and social contexts.
The interaction between somatic experience and judgment on the one hand, and the data available to AI and the forecasts derived from it on the other, can go wrong in at least two ways: 1) The older person refuses to acknowledge the information provided by AI and relies solely on their somatic experience to judge their capacity to do a certain task or take a certain action, creating a constant conflict in their own home; or 2) the older person starts to rely too much on the available data, distrusting their somatic judgment, which creates a problematic dependency. Either way, self-esteem will be undermined, as AI is constantly assessing and challenging it.
The self-esteem of the elderly might be especially vulnerable to AI precisely because, in old age, life centers on trying to stay independent, maintaining capacities, and not getting sick or injured so as not to become a burden to others [7]. The fear of losing certain capacities makes it more likely that an older person will downplay the risks associated with specific activities and overestimate their ability to be independent [8]. For example, an older resident may have lost their balance and fallen to their knees twice in a row while reaching for a pan in the kitchen cabinet, yet continue to insist they are able to cook for themselves, as they still feel capable of reaching down for the pan, straightening up again, and putting the pan on the stove. After all, younger people also stagger and even fall from time to time. As long as the elderly feel secure in their capabilities, they don't want to refrain from activities like cooking that bring them joy and pleasure. Nevertheless, the availability of data records from the motion sensors, together with the power of risk assessment by AI, might cause the smart home to suggest to the elderly resident that they no longer bend over, and perhaps also to alert their healthcare provider and/or a relative that their frailty is increasing. The resident, however, might not welcome the consequences that could follow from this action. They judge the suggestion and the alert system as paternalistic, intruding on their privacy and curbing their autonomy in their own home. This may cause resentment toward and distrust in the technology, which in the long run undermines the very point of a smart home.
How can we design a smart home that is just the right amount of cautious and suggestive, but not paternalistic, for an elderly person? How much weight should be given to the AI's risk assessment, and should the human always have the last word, even if their behavior puts them at high risk?
Now let's turn to the second possibility, which is that the older resident slowly starts to distrust their own somatic judgment and rely too much on the smart home. This scenario may happen if the gathered data becomes very detailed and accurate, and the predictions by the AI are therefore more fine-grained and individual. Take gait as an example. An elderly person may use the bathroom several times at night, usually without thinking much about it. They may have tripped a few times, but never had an actual fall. The AI is now capable of detecting more unstable or shaky steps and alerts the resident immediately if this happens, recommending that they return to bed, as the risk of falling increases considerably with a less stable walking pattern. The resident feels unsettled, as they had not yet noticed that their gait was less stable. The risk here is that every warning by the smart home may diminish the resident's self-confidence, thus accelerating the decline of their physical capabilities. Such a situation may lead to the person eventually refusing to walk in their home before they are "cleared" by the AI. In such a case, it might have been better to let the resident be their own judge based on somatic experiences, as this would have led to more physical activity, albeit with greater risk. Our point is that exaggerated risk aversion might backfire in the long run, diminishing the capabilities of the elderly faster than they would decline without the assistance of risk-averse AI.
The solution to this issue might be—paradoxically—more data. There may well be thresholds at which risk aversion and the maintenance of capabilities are balanced, but more data is likely needed to locate these tipping points accurately.
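What searching for such a tipping point could look like is sketched below, purely hypothetically: each past alert is scored both by whether it caught a genuinely risky event and by how much it suppressed the resident's subsequent activity, and the threshold with the lowest combined cost is chosen. The event format, cost weights, and candidate thresholds are all assumptions made for illustration, not a description of any existing system.

```python
# Hypothetical sketch of a "tipping point" search: pick the alert threshold
# that best balances missed risky events against alerts that discourage activity.
# The event format and cost weights are invented for illustration.

def combined_cost(threshold, events, w_missed=1.0, w_suppression=0.5):
    """events: list of (risk_score, incident_occurred, activity_drop_after_alert)."""
    cost = 0.0
    for risk_score, incident_occurred, activity_drop in events:
        alerted = risk_score >= threshold
        if incident_occurred and not alerted:
            cost += w_missed                       # a genuinely risky moment went unflagged
        if alerted:
            cost += w_suppression * activity_drop  # an alert that curbed activity
    return cost

def best_threshold(events, candidates):
    """Return the candidate threshold with the lowest combined cost."""
    return min(candidates, key=lambda t: combined_cost(t, events))

# Toy usage: risk scores in [0, 1], activity_drop as a fraction of usual movement
events = [(0.9, True, 0.4), (0.3, False, 0.1), (0.6, False, 0.3), (0.8, True, 0.2)]
print(best_threshold(events, [0.2, 0.5, 0.7, 0.9]))  # 0.7 in this toy example
```

Whether a cost function like this captures what actually matters to an older person is, of course, exactly the kind of question that needs more data, and more dialogue with residents, to answer.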
The Ideal Scenario: A Person's Sense of Security in Line with the Smart Home's Risk Assessment
As mentioned earlier, it is the familiarity of the home that makes an older person want to stay there as long as possible. In the best-case scenario, older people will be familiar with their smart home as well, having lived in the enhanced environment for a long time, and thus the constant monitoring will have become normal to them. The relationship between the person and the smart home becomes symbiotic and no longer feels intrusive, paternalistic, or uncomfortable. It is possible that this monitoring is actually cherished—after all, many people already monitor themselves voluntarily, given the success of health and fitness trackers. Thus, an older person might feel at ease knowing that their smart floors keep a log of their gait, steps, and falls, and that their mattress records their sleeping patterns every night, because all these technologies will inform healthcare professionals and/or family members should anything of concern be detected. The relationship between the resident and the smart home might even become symbiotic beyond the "simple" monitoring features and alerts, turning the technology into an actual feature of the somatic experience of the home and thus a felt presence. Maybe the floor next to the bed heats up slightly before the person gets out of bed, "greeting" them with warm feet. Such features might create a special intimacy with the home without disturbing the resident's somatic judgment.
Conclusion: Ensuring That the Smart Home Is Pleasing, Not Unpleasant
These hypothetical scenarios shed light on some of the crucial questions and concerns that smart home designers and researchers need to grapple with: How can we find the fine line between useful advice for the elderly and paternalistic commands from AI? How much control should an older person have over their smart home environment, and how can boundaries be drawn that are personalized to the needs and wishes of the aging person? It is also important to know which of AI's analytic capacities may have detrimental outcomes for the elderly user if communicated directly to them. In what situations would it be appropriate to heed AI's advice, and when would it be better to let the user assess their capacities themselves? Research is necessary to gain insight into the values that elderly persons uphold in their own homes, and into how the notions of risk aversion, self-assessment, nudging, and personal autonomy in one's private space interact in the smart home of an older person. In addition, we need to pay attention to the somatic experience of elderly people, adapting the design of smart home applications accordingly. Such "soma design," as proposed by Kristina Höök [9], might be the right approach to capturing the unique emotions and needs someone has in their own home, ensuring that this delicate environment is disturbed as little as possible by new technology.
1. Després, C. The meaning of home: Literature review and directions for future research and theoretical development. Journal of Architectural and Planning Research 8, 2 (1991), 96–115.
2. Barbareschi, M., Romano, S., and Mazzeo, A. A cloud based architecture for massive sensor data analysis in health monitoring systems. Proc. of the 2015 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing. IEEE Computer Society, 2015, 521–526; https://doi.org/10.1109/3PGCIC.2015.114
3. Bobillier Chaumon, M.-E., Cuvillier, B., Body Bekkadja, S., and Cros, F. Detecting falls at home: User-centered design of a pervasive technology. Human Technology 12 (2016), 165–192; https://doi.org/10.17011/ht/urn.201611174654
4. Obayashi, K., Kodate, N., and Shigeru, M. Can connected technologies improve sleep quality and safety of older adults and care-givers? An evaluation study of sleep monitors and communicative robots at a residential care home in Japan. Technology in Society 62 (2020), 101318; https://doi.org/10.1016/j.techsoc.2020.101318
5. Villalba Mora, E. et al. Home mobile system to early detect functional decline to prevent and manage frailty. International Journal of Integrated Care 18 (2018), 138; https://doi.org/10.5334/ijic.s2138
6. Bedia, M.G. and Di Paolo, E. Unreliable gut feelings can lead to correct decisions: The somatic marker hypothesis in non-linear decision chains. Frontiers in Psychology 3 (2012), 384; https://doi.org/10.3389/fpsyg.2012.00384
7. Svidén, G., Wikström, B.-M., and Hjortsjö-Norberg, M. Elderly persons' reflections on relocating to living at sheltered housing. Scandinavian Journal of Occupational Therapy 9, 1 (2002), 10–16; https://doi.org/10.1080/110381202753505818
8. Tzeng, H.-M., Okpalauwaekwe, U., and Lyons, E.J. Barriers and facilitators to older adults participating in fall-prevention strategies after transitioning home from acute hospitalization: A scoping review. Clinical Interventions in Aging 15 (2020), 971–989; https://doi.org/10.2147/CIA.S256599
9. Höök, K. Designing with the Body: Somaesthetic Interaction Design. MIT Press, 2018; https://doi.org/10.7551/mitpress/11481.001.0001
Nadine Andrea Felber is a Ph.D. candidate at the Institute for Biomedical Ethics at the University of Basel, Switzerland. She currently investigates the implications of smart home technology in healthcare and nursing. Her main research focus is dignity and epistemology in relation to new technologies. [email protected]
Hamed Alavi is an assistant professor at the University of Amsterdam and a founding member of the Digital Interactions Lab. His research investigates human interactive experiences with and within built environments of the future as they embody various forms of intelligence. He holds a Ph.D. in computer science from the Swiss Federal Institute of Technology in Lausanne (EPFL). [email protected]
Copyright 2022 held by owners/authors