Ambient intelligence: the next generation of user centeredness

XII.4 July + August 2005
Page: 37

Case study

Panos Markopoulos, Boris de Ruyter, Saini Privender, Albert van Breemen

Ambient Intelligence. Since the early visionary articles and white papers that introduced Ambient Intelligence ([1], for example), this vision has been associated with a widely prophesied proliferation of devices and applications populating our physical environment. Increasingly we are led to anticipate that ambient intelligent technology will mediate, permeate, and become an inseparable component of our everyday social interactions at work or at leisure.

The social dimension of ambient intelligence is clearly an important topic for human-computer interaction (HCI) research. The field of HCI has justifiably turned its attention to issues pertaining to how users will be able to manage such complex, adaptive environments and the critical concerns that arise regarding privacy. A less-explored design problem is how such technologies can be designed to fit the social processes in which they are embedded and to blend socially in the activities of their users.

Another critical defining characteristic of the ambient intelligence vision is the use of system intelligence that will allow ambient intelligence applications to sense, adapt to, and serve human needs. Considering our interest in the social side of ambient intelligence, we argue in favor of considering intelligence beyond its narrow sense of problem solving, learning, and system adaptation, to cover the ability of a system to interact socially with people and become a socially competent agent in the group interactions it supports.


Psychologists have debated for several decades regarding the multiple kinds of intelligence that humans possess. Over the years, different theories have identified various numbers and types of dimensions to describe intelligence. Despite their disagreement about exactly which dimensions, abilities, and categories of intelligence there are, such theories are fairly uniform in their consideration of a type of intelligence that concerns the ability of a person to interact within a group, to understand, and to relate to other people. Typically, this type of intelligence includes knowledge of the self, one's own emotions, inner desires, personality, and social norms, and of how they should be applied within different situations and environments. We argue that this latter type of intelligence, broadly referred to as social intelligence, is a much-needed aspect of ambient intelligence and one that presents challenging research problems to the HCI community.

Social Intelligence. In its widest definition, social intelligence is a person's ability to "...get along with people in general, social technique or ease in society, knowledge of social matters, susceptibility to stimuli from other members of a group, as well as insight into the temporary moods or underlying personality traits of strangers" [9].

There is a large body of psychological literature on social intelligence, its definition, and even its measurement. However, there does not seem to be a single list of characteristics or behaviors that can completely and unequivocally describe social intelligence. An important reason for this is that social intelligence is manifested in very different ways according to the context at hand. Socially intelligent behaviors may range from being nice and pleasant to interact with, admitting mistakes, and displaying curiosity, to being able to read the non-verbal cues of interlocutors.

The social component of ambient intelligence brings about four critical challenges for human-computer interaction research:

  • Designing ambient intelligence systems and environments so that they can be perceived as socially intelligent. The complex nature of social intelligence means that there are multiple ways in which we can try to make a system exhibit social intelligence, just as humans can exhibit their social intelligence in multiple ways and in varying contexts. Whereas the traditional design challenge of usability is no less multi-faceted, research on interaction with artificial forms of intelligence is far from having anything like Nielsen's usability heuristics or Shneiderman's guidelines to epitomize design experience and the results of empirical research. A broad range of challenges lies ahead for HCI research. Which aspects of system behavior affect social intelligence? Does this only concern embodied conversational agents? Or does social intelligence also apply to less centralized and explicitly dialogical types of interaction? Would the same concepts and guidelines apply to implicit interactions with a system or "ambient displays" combined with tangible controls, typical in ambient intelligence and ubiquitous computing demonstrations? Research on robotic and on-screen characters has produced several related results that are mostly in line with the position of Reeves and Nass [7]: People react socially towards such artifacts, and it appears that overall the systems exhibiting more colorful human-like behaviors are typically appreciated more by test users in a range of experimental settings. One is tempted to generalize that we should be endowing systems with human-like social skills. However, as it is not straightforward to describe what a socially intelligent behavior would be for a human, it remains challenging to decide what capacities ambient intelligence systems and environments should be equipped with.
  • Designing intelligence that will support human-to-human cooperation and social interactions. Intelligence that does not support teamwork, that embarrasses users, that does not allow them to control how they present themselves, or that leads them into social blunders and social clumsiness is clearly undesirable. An ambient intelligence system that lacks this "other" and "softer" type of intelligence is unlikely to be accepted for use.
      A similar design challenge has been occupying research into computer-supported cooperative work and social computing for some time with some generalizable design knowledge, [4] for example. Most of such work concerns online interactions. Considering intelligent environments, an interesting challenge is to examine how such results extend to collocated interactions between humans where ambient intelligence is used over the longer term until it becomes an integral component of how people interact with their social network. Similar questions are currently being raised in the context of studying shared public displays, [6] for example. Further to sharing displays between co-workers, this challenge also concerns the range of domains where ambient intelligence technology may play a role, for example, the patient-caregiver relationship in the healthcare domain, the teacher-pupil relationship in an educational setting, etc. Critical to meeting this design challenge is to gain experience from actual long-term deployment of such systems, studying the way such technology is appropriated and the impact it will have on the social or professional groups that adopt it.
  • How to evaluate social intelligence? Even assuming the best intentions and intuitions of interaction designers, and researchers willing to address the first two challenges, we lack a yardstick to measure our success. Far from a new version of a Turing test that would benchmark a system against the intelligence of a human, we need, at least, to be able to verify that one design is superior to another with regard to how socially intelligent it is perceived to be. In the sections that follow we outline some work we have done in this direction in the context of home dialogue systems. As social intelligence can be demonstrated and experienced in different contexts and in different ways, it is also reasonable to expect that a set of alternative techniques may need to be invented, suitable for different interaction contexts and associated with different research methods (e.g., observation or self-report).
  • What are the benefits of social intelligence? In the sections above we have laid out an argument as to why this aspect of intelligence should not be neglected in system design. To this point the argument is merely a conjecture. Evidence is needed to demonstrate that social intelligence is perceived as such and is appreciated by users.
      Application-driven research is required to provide tangible evidence of the benefits of social intelligence to users.


The last of these challenges is a crucial one: to verify the relevance of a research program on social intelligence we first need to establish that it does provide some added value for users of ambient intelligent systems. Below we outline an investigation into home dialogue systems: interaction technologies that act as intermediaries between technologies embedded in a domestic environment and the humans inhabiting those environments. Rather than examining the effect of singular factors conducive to social intelligence, our study examined what broader benefits a more socially complex and coherent home dialogue system, one perceived as more socially intelligent, could bring to the interaction experience.

A Robotic Home Dialogue System. The home dialogue system used in our study takes the form of an "interactive cat," or just iCat. The iCat is a research platform for studying social robotic user interfaces. It is a 38 cm tall robot that is placed on a table, since it is not mobile. The robot's head is equipped with 13 standard R/C servos that control different parts of its face, such as the eyebrows, eyes, eyelids, mouth, and head position. The robot is thus capable of displaying many different facial expressions in order to express different emotions.

A camera installed in the iCat’s head is utilized for different computer vision capabilities, such as recognizing objects and faces. The iCat’s foot contains two microphones to record the sounds it hears. This stereo signal is used for speech recognition and to determine the direction of the sound source, which in turn allows the iCat to exhibit another human-like behavior, i.e., to turn towards a speaker. A loudspeaker is built into the robot for playing sounds and generating speech. The iCat in our study was connected to a home network supporting the control of various in-home devices, for example, light, VCR, TV, radio, and to access the Internet. Touch sensors and multi-color LEDs are installed in its feet and ears to sense whether the user touches the robot and to communicate further information encoded in colored light. For instance, the operation mode of the iCat, e.g. sleeping, awake, busy, listening, etc., is encoded by the color of the LEDs in the ears.
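
The sound-source localization described above, turning toward a speaker using the stereo signal from the two foot microphones, is typically achieved by finding the inter-channel delay that maximizes cross-correlation. The sketch below illustrates that generic technique; it is not the iCat's actual implementation, and the function name is ours.

```python
# Generic sketch of stereo delay estimation by cross-correlation;
# not the iCat's actual signal-processing code.

def channel_delay(left: list[float], right: list[float], max_lag: int) -> int:
    """Lag (in samples) at which the right channel best matches the left.

    A negative result means the right channel trails the left, i.e. the
    sound source is closer to the left microphone.
    """
    def corr(lag: int) -> float:
        # Correlate left[i] with right[i - lag] over the valid overlap.
        return sum(left[i] * right[i - lag]
                   for i in range(max(lag, 0), min(len(left), len(right) + lag)))
    return max(range(-max_lag, max_lag + 1), key=corr)
```

The estimated delay, together with the known microphone spacing and the speed of sound, yields the bearing toward which the robot should turn.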

The iCat can exhibit a rich set of human-like behaviors. For the iCat to be perceived as socially intelligent, these behaviors must be performed correctly, in the right social context and at the right moment. Drawing a direct analogy to social psychology, which provides insight into interpersonal social interactions, we can hypothesize a range of behaviors that a socially intelligent robot should exhibit in a one-to-one interaction with a user.


In our study, a specific interaction context was created where the iCat could be used in a Wizard of Oz fashion to aid a user in executing some tasks with some technology for the home. The study was executed in the HomeLab, a special-purpose facility for prototyping and user testing ambient intelligence technologies for the home (see sidebar).

The experiment provided a context where three of the four challenges mentioned above were addressed:

  • Based on an understanding of interpersonal social interactions, we designed and simulated socially intelligent behaviors for the iCat (challenge 1).
  • We developed an instrument to measure perceptions of social intelligence through self-report (challenge 3).
  • We conducted an experiment to assess the effect of social intelligence of the iCat on how participants experienced the interactive systems which the iCat helped them access (challenge 4).

Enhancing the Social Intelligence of the iCat with a Wizard. The following behaviors were designed and implemented as pre-programmed blocks that could be triggered by a wizard at the appropriate time and context.

  • Listening attentively: by looking at the user when he or she is talking and occasionally nodding the head.
  • Being able to use non-verbal cues the other displays: responding verbally to repeated wrong actions of the user by offering help.
  • Assessing well the relevance of information to a problem at hand: for example, by stating what is going wrong before offering the correct procedure.
  • Being nice and pleasant to interact with: by staying polite, mimicking facial expressions, for example, smiling when user smiles, being helpful, etc.
  • Paying attention to affective signals from the user: by responding verbally or by displaying appropriate facial expression to obvious frustration, confusion, or contentment.
  • Displaying interest in the immediate environment: the immediate environment being the participant and the equipment used in tasks, by carefully monitoring the person and the progress of the tasks.
  • Knowing the rules of etiquette: by not interrupting the participant when he or she is talking.
  • Remembering little personal details about people: addressing the participant by name, remembering login information and passwords if asked.
  • Admitting mistakes: by apologizing when something has gone wrong, but also when no help can be provided upon participant’s request.
  • Thinking before speaking and doing: by showing signs of thinking (with facial expression) before answering questions or fulfilling the participant’s request.
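
In a Wizard of Oz setup like this, each behavior is a pre-programmed block that the wizard fires at the right moment. A minimal sketch of such a trigger mechanism might look as follows; the block and action names are hypothetical, as the article does not describe the iCat's software interface.

```python
# Hypothetical pre-programmed behavior blocks for a Wizard of Oz operator;
# block and action names are illustrative, not the iCat's actual API.

PREPROGRAMMED_BEHAVIORS = {
    "listen_attentively": ["gaze_at_user", "nod_occasionally"],
    "admit_mistake": ["sad_expression", "say:Sorry, something went wrong."],
    "think_before_speaking": ["thinking_expression", "pause", "answer"],
}

def trigger(behavior: str) -> list[str]:
    """Return the action sequence launched when the wizard fires a block."""
    if behavior not in PREPROGRAMMED_BEHAVIORS:
        raise KeyError(f"no pre-programmed block named {behavior!r}")
    return PREPROGRAMMED_BEHAVIORS[behavior]
```

Packaging behaviors as named blocks keeps the wizard's task to recognizing the social context and pressing one button, rather than puppeteering individual servos in real time.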

In the experiment we set out to test whether this combination of behaviors would result in the robot being perceived as socially intelligent. Currently these behaviors provide a good indication of the capabilities that we would need to develop to move beyond an emotionally expressive robot to one that could be perceived as socially intelligent.

Evaluating Social Intelligence. In the absence of existing validated instruments to assess social intelligence in interactive systems, we developed a special-purpose questionnaire: the Social Behaviors Questionnaire (SBQ). The questionnaire was built up of five-point scales rating subjects' agreement with statements such as:

  • The robotic cat takes others’ interests into account.
  • The robotic cat does not see the consequences of things.
  • The robotic cat says inappropriate things.
  • The robotic cat is not interested in others’ problems.
  • The robotic cat tells the truth.

The set of scales was developed iteratively after considering similar questionnaires that are meant to assess social intelligence of humans and the range of social behaviors we would expect a computing system to exhibit. The development of the questionnaire is reported in [3]. The experiment took place at the HomeLab.
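
Scoring such an instrument typically means averaging the five-point ratings after reverse-coding the negatively phrased items. Which SBQ items are reverse-scored is our assumption, inferred from their wording; the published instrument [3] is authoritative. A sketch:

```python
# Sketch of Likert-scale scoring with reverse-coded items. The choice of
# reversed items is an assumption inferred from their negative phrasing.

REVERSED_ITEMS = {
    "does not see the consequences of things",
    "says inappropriate things",
    "is not interested in others' problems",
}

def sbq_score(responses: dict[str, int]) -> float:
    """Mean item score on a 1-5 scale, reverse-coding negative items."""
    total = 0
    for item, rating in responses.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"rating out of range for {item!r}")
        total += (6 - rating) if item in REVERSED_ITEMS else rating
    return total / len(responses)
```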

Experimental Evaluation of the Designed Social Intelligence. The experiment, which is described in detail elsewhere [3], had a two-fold aim: to verify that test participants would perceive the iCat as socially intelligent, and to assess how the social intelligence of the home dialogue system would affect the perceived quality of the other interactive systems in the environment.

The experiment involved 36 participants who were left with the iCat in the living room of the HomeLab, while the experimenter observed and controlled the experiment from the HomeLab observation station. The socially intelligent iCat (supported by a wizard as explained above) was compared against what was designed to be a "neutral" iCat.

In the socially intelligent condition, the robot talked using synthesized speech with lip synchronization, blinked its eyes throughout the session, and displayed facial expressions and the socially intelligent behaviors listed above. The wizard observed and listened to the participant, typed in responses for the iCat to utter, and initiated the pre-programmed social behaviors at appropriate moments.

In the "socially neutral" condition, the iCat did not display any facial expressions and did not blink its eyes. It talked and used lip synchronization, but the talking was not driven by the above-listed aspects of social intelligence. It responded verbally only to explicit questions from the participant. The only self-initiated help came when participants were really stuck and could not continue the experiment without it.

Participants were given the task of programming a DVD recorder and of participating in an online auction. Notifications of outbids by other auction participants were sent to the participants by email during the test. The email account had to be monitored if participants wanted to complete the task successfully. When the opportunity arose, the socially intelligent iCat offered to monitor their email account for them. In general, the iCat was there to help in many other ways as well, for example, helping participants register on the auction site, providing information on the items offered in the auction, and, if authorized, placing bids for the participants.

A set of measures was designed to test both the direct effects of the iCat's behaviors and potential implicit spillover effects, such as satisfaction with the DVD recorder. The SBQ was used to verify that the iCat was indeed perceived by participants as more socially intelligent in the socially intelligent condition than in the socially neutral condition. The results in Table 1 show that participants did perceive the robot designed to be socially intelligent as more socially intelligent than the neutral one.
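
The article reports the condition comparison in Table 1 and the full analysis in [3]. As an illustration only, a between-groups comparison of mean SBQ scores can be framed as an independent-samples (Welch) t statistic; this is our sketch, not the study's analysis.

```python
# Illustrative Welch's t statistic for comparing two independent groups
# (e.g., mean SBQ scores per participant in the two conditions).
import statistics as st

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for two independent samples (unequal variances)."""
    var_a, var_b = st.variance(a), st.variance(b)
    return (st.mean(a) - st.mean(b)) / (var_a / len(a) + var_b / len(b)) ** 0.5
```

The resulting statistic would then be compared against a t distribution with Welch-Satterthwaite degrees of freedom to obtain a p-value.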

User satisfaction with the other technology used was assessed using the USQ, an instrument previously developed in-house for assessing user satisfaction with consumer products [2]. Acceptance of the home technology was assessed using the Unified Theory of Acceptance and Use of Technology (UTAUT) questionnaire [8]. Participants who interacted with the socially intelligent iCat were more satisfied with the DVD recorder. Further, participants who worked with the neutral iCat were less inclined to want to continue working with the iCat at home. (Readers are referred to [3] for more details on these results.)

From the interviews we could generally discern a more positive attitude toward the socially intelligent robot. In response to the question, "If you had iCat at home, what would you like it to do for you?" many participants mentioned things like operating all their electronics and electrical appliances (such as lights, home heating system, household appliances, and home entertainment equipment). Many mentioned more private tasks like having their email checked for them, screening telephone calls, and Internet banking. No differences were found in the pattern of responses of the participants in the different conditions.

Our study has shown that a few social behaviors in a home dialogue system may be sufficient to remove a lot of the discomfort that is brought about as people’s domestic environments become richer in technology. Adding some thoughtful implementations of social intelligence to a perceptive robot can make the robot easier to communicate with and more trusted by users. Whereas most research on social robotic characters has concentrated on the interaction with the robot as the focus of attention, this study focused on the role of a robot as a home dialogue system. The interaction with the iCat was not the participants’ priority. Despite its background function, the iCat and the behaviors that it displayed had significant effects on satisfaction with the embedded systems, acceptance of the technology, and sociability towards the system. It seems that social intelligence is not just important for direct interaction with robotic or even screen characters, but has relevance in systems that do not necessarily have a social function.

Future Social Intelligence Research. Above all, this study demonstrates the relevance of social intelligence as a concept for studying interaction between humans and ambient intelligence environments. This relevance has been assumed, perhaps implicitly, in the context of artificial intelligence where computational characters are designed and evaluated. By showing how an increase in perceived social intelligence impacts positively upon people’s perception of a system we can argue strongly in favor of exploring the "social side" of ambient intelligence and of developing a research program to address the four challenges discussed earlier.

It seems promising for future research to explore the most effective ways to achieve social intelligence and to forge the links between the lower-level behaviors of robots, on-screen characters or, more generally, the interventions of an ambient intelligence environment, and the resulting experience of social intelligence. In setting out to design and build socially intelligent systems, the boundaries between HCI research and artificial intelligence research become, once more, very blurred, with exciting challenges confronting both fields.

A theoretical framework for social intelligence suitable for the study of ambient intelligence environments still needs to be developed. Currently we are borrowing concepts from social psychology, and we are at the early phases of developing appropriate measurement instruments, such as the SBQ. Yet this instrument is far from sufficient, as we need evaluation tools applicable to the very diverse manifestations of social intelligence discussed throughout this paper. Finally, another challenging research problem is to examine whether social intelligence itself can be a useful approach for embedding intelligence into the environment, for example, in terms of teams of cooperating agents.

We expect significant progress to be gained from the development of research programs focusing on the user experience of ambient intelligence at home, at the office, or in the care domain. An important component of such research programs is the establishment of facilities like the HomeLab, where user experiences can be created, simulated, and assessed. The study described here has addressed only a limited and well-bounded situation, where a single user experiences interaction with a single socially intelligent agent.

In progressing towards the vision of Ambient Intelligence the four challenges discussed in this article will need to be addressed in contexts of higher complexity:

  • Where the ambient intelligent environment mediates, supports, and adapts to cooperative activities and social interactions of groups of people.
  • In experimental contexts where the ability to relate to another person and to adapt the environment’s behavior to this person is more critical, for example, affective communication, healthcare, etc.
  • In cases where the ambient intelligence system can enhance the social intelligence of a person in their own social interactions, for example, enhancing personal recollection of people and situations, managing one’s reachability over different communication media, etc.

Finally, the technical challenges ahead that were circumvented by the Wizard of Oz setup should not be underestimated. An open and important challenge for the ambient intelligence research community, highlighted by our work, is the need to make systems capable of understanding and relating to people at a social level, timing and cueing their interactions in a socially adept manner.


1. Aarts, E., & Marzano, S. (Eds.) (2003). The New Everyday: Views on Ambient Intelligence. 010 Publishers, Rotterdam, The Netherlands.

2. De Ruyter, B., & Hollemans, G. (1997). Towards a User Satisfaction Questionnaire for Consumer Electronics: Theoretical Basis. Eindhoven: Natuurkundig Laboratorium Philips Electronics N.V., NL-TN 406/97.

3. De Ruyter, B., Saini, P., Markopoulos, P., van Breemen, A., (2005), The Effects of Social Intelligence in Home Dialogue Systems, to appear in Interacting with Computers, Elsevier.

4. Erickson, T., Smith, D. N., Kellogg, W. A., Laff, M., Richards, J. T., & Bradner, E. (1999). Socially translucent systems: Social proxies, persistent conversation, and the design of "Babble." ACM CHI '99, ACM Press, 72-79.

5. Markopoulos, P., (2004) Designing ubiquitous computer human interaction: the case of the connected family. In Isomaki, H., Pirhonen, A., Roast, C., Saariluoma, P., (Eds.) Future Interaction Design, Springer.

6. O'Hara, K., Perry, M., Churchill, E., & Russell, D. (Eds.) (2003). Public and Situated Displays: Social and Interactional Aspects of Shared Display Technologies. Kluwer.

7. Reeves, B., and Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places, Cambridge: Cambridge University Press.

8. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly, 27(3), 425-478.

9. Vernon, P.E. (1933). Some characteristics of the good judge of personality. Journal of Social Psychology, 4, 42-57.


Panos Markopoulos
Eindhoven University of Technology, The Netherlands

Boris de Ruyter
Philips Research, The Netherlands

Saini Privender
Philips Research, The Netherlands

Albert van Breemen
Philips Research, The Netherlands

About the Authors:

Dr. Panos Markopoulos is an assistant professor at the Eindhoven University of Technology in the Netherlands. Having worked on a wide range of topics in the field of human-computer interaction, his current research interests concern awareness systems, privacy in ambient intelligence, and usability testing with children.

Boris de Ruyter studied experimental psychology and works as a senior scientist at Philips Research in the Netherlands. His research interests concern user profiling in adaptive systems and personal healthcare applications. Since 1999 he has been leading a team that conducts research into interactive systems for realizing ambient intelligence scenarios.

Saini Privender studied cognitive psychology at Leiden University. After completing a two-year postgraduate program in User-System Interaction at the Eindhoven University of Technology, she joined Philips Research. Her research interests concern social intelligence and persuasive technologies.

Albert van Breemen studied electrical engineering and obtained a Ph.D. in agent-based multi-controller systems. He works as a senior scientist at Philips Research in the Netherlands. His research interests concern software architectures for intelligent systems.


Figure: HomeLab

Figure: HomeLab

Figure: iCat

Sidebar: HomeLab

HomeLab is a dedicated laboratory that simulates a home and is equipped with an extensive observational infrastructure, allowing the testing of innovative technologies for the home in an almost naturalistic setting. The HomeLab is built as a two-story house with a living room, kitchen, two bedrooms, a bathroom, and a study. At first glance, the home interior does not show anything special. A closer look reveals the black domes on the ceilings that conceal cameras and microphones. Equipped with 34 cameras throughout the home, the HomeLab provides behavioral researchers a powerful instrument for studying user behavior and experiences. Adjacent to the HomeLab there is an observation room from which one has a direct view into the rooms of the HomeLab.

The signals captured by the cameras installed inside HomeLab, can be monitored from any of the four observation stations. Signals can be routed to these observation stations through an "observation leader" post. The observation leader modifies camera set-ups, routes video and audio signals, and monitors the capture stations. Each observation station is equipped with two monitors and one desktop computer to control the cameras and to mark observed events. The marked events are time-stamped and appended to the video data. All captured signals and marked events are recorded by means of the four capture stations.

When setting up an experiment in the HomeLab, the researcher designs a coding scheme that lists all prototypical behaviors expected to be observable during the observation session. The observers mark the occurrence of these behaviors during the HomeLab session. A similar scheme must be created and used to support the actuating tasks of the experimenter in Wizard of Oz studies.

©2005 ACM  1072-5220/05/0700  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2005 ACM, Inc.

