
Can AI truly be a collaborator?


Authors: Diego Gómez-Zará
Posted: Wed, June 18, 2025 - 2:53:00

An intriguing panel titled “Is Human-AI Interaction CSCW?” took place during the 2024 ACM SIGCHI Conference on Computer-Supported Cooperative Work and Social Computing. The discussion focused on the extent to which researchers could consider human-AI partnerships to be real collaborations [1]. While a significant portion of the audience agreed that these partnerships can be true collaborations, several concerns and problematic scenarios emerged (e.g., “Do I really collaborate with Microsoft Word?” and “Can one human user collaborate with multiple AI agents?”). Although the panel did not impose definitions of collaboration, the exchange revealed the need for consensus and conceptual boundaries within the research community.

Rather than enhancing teams by designing customized tools that improve their dynamics, industry and the research community are investing heavily in AI systems that act as real collaborators [2]. With the emergence of generative AI and large language models, the field is once again focused on building systems that facilitate collaboration among humans. Many of these technologies are designed with anthropomorphic features and communicate like humans to gain trust and validation. And, compared with previous groupware systems, AI technologies can handle a wider range of tasks across complexity, domain, and scale.

Furthermore, these AI models represent a significant departure from traditional software applications. Their algorithmic nature is obscured by their ability to engage in eloquent conversations, generate realistic language, maintain accurate memories, and exceed what previous machines were thought capable of doing. As such, many ambitious projects are building AI systems to automate roles and tasks that humans previously performed manually, exploring the possibilities of new kinds of teams made up of humans and machines. Although human-AI collaboration has become a buzzword in many conferences and studies, the ontological question “Can AI truly be a collaborator?” is being asked more often, and is harder to answer, than ever before.

Why Should We Consider AI as a Collaborator?

When we define collaboration as team members working together toward a common goal, we can see AI as a collaborator. AI systems can work with humans across several iterations, cocreate to improve ideas, contribute knowledge, handle specific tasks, and help collaborators make progress. From a small-group research perspective, AI can now affect the feelings, thoughts, motivations, and cognitions that team members share. It can automate what the group is doing, as well as augment and complement team members’ cognitive actions, skills, and behaviors. For example, AI tools can expand team members’ perspectives by offering different ideas, suggesting whom they should consult, and pointing out blind spots. Thus, AI can engage as a relevant collaborator by contributing inputs, influencing team dynamics and processes, and affecting a team’s final outputs.

These creative and iterative capacities allow AI to surpass what previous technologies could do. AI goes beyond a traditional productivity tool, such as Microsoft Word, because it acts autonomously. Traditional tools depend entirely on their user’s effort and determination, sometimes making the tool feel like an extension of the user (e.g., “I am writing.”). Asking an advanced AI system to brainstorm a new research project, by contrast, involves steps that are neither visible to nor dependent on human action, making the system a separate entity. Furthermore, a simple productivity tool’s outputs are known a priori: I know what happens when I type a sentence in Microsoft Word, whereas I may not know what an AI model will produce, since its responses are neither highly predictable nor obvious. Lastly, most traditional tools do not learn or adjust after team members use them. Microsoft Word stays as it was after I close it, while an AI model can adapt to humans’ requests through back-and-forth interaction, offering better ideas after each iteration. These capabilities can change the course of a collaboration, allowing humans and AI models to reconsider what has been cocreated or discussed.
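To make the contrast concrete, here is a minimal sketch of the two interaction patterns. The `generate` function is a hypothetical stand-in for any large language model call, not a real API; the point is only that the AI’s reply is conditioned on an evolving conversation history, while the traditional tool’s output is fixed entirely by its input.

```python
from typing import Dict, List

def generate(history: List[Dict[str, str]]) -> str:
    """Hypothetical LLM call: the reply depends on the whole history."""
    return f"(model reply conditioned on {len(history)} prior turns)"

def word_processor(keystrokes: str) -> str:
    # A traditional tool is stateless: the same input always yields
    # the same output, known a priori.
    return keystrokes

# An AI system is stateful: each exchange is appended to the history,
# so later replies can adapt to earlier feedback.
history: List[Dict[str, str]] = []
for turn in ["Brainstorm a study design.", "Too broad; narrow it down."]:
    history.append({"role": "user", "content": turn})
    reply = generate(history)
    history.append({"role": "assistant", "content": reply})
    print(reply)
```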

Why Should We Not Consider AI as a Collaborator?

The true notion of collaboration involves members jointly deciding to be part of it. They will “think” together and affect each other. Cocreation involves an egalitarian structure, shared decisions, shared ownership, and a sequential process of knowledge integration that makes the final deliverable greater than the sum of its parts. As a result, collaborators can disagree on certain ideas, have conflicts, resolve their differences, claim ownership of what they have done, or stop working together. A collaborator can withdraw, pause, or rejoin. Collaboration also involves individual goals that drive the shared ones, along with incentives or rewards. While these characteristics are inherent in collaborations, AI cannot fully engage in these activities.

Although AI looks more agentic than previous systems, its agentic features are grounded in its design rather than naturally achieved [3]. Human behavior is more complex than answering questions, generating outputs, or completing sentences [4]. Humans can contest, disagree, challenge, or hold different perspectives. Even if some AI systems could be programmed to perform these actions, they would be the result of training and design decisions made by their creators. In that sense, AI’s nature is mostly transactional, since its actions respond to users’ requests. An AI agent neither expects nor asks anything of its human collaborators. Whether commercial or open source, AI systems follow a request-response paradigm that serves the interests of their creators, designers, and investors (I cannot imagine ChatGPT refusing to “collaborate” with someone). Moreover, AI’s ownership of any cocreated outcome remains questionable. The fact that a human pulls the trigger in any human-AI interaction makes ownership of (as well as liability for) the work convoluted. As a result, these human-AI interactions reveal a clear hierarchy that challenges the notion of what authentic collaborations should be.
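The request-response asymmetry can be stated in a few lines of code. This is a minimal sketch with a hypothetical `Agent` class, not any real system’s API: every exchange begins with the human, and nothing in the agent’s structure lets it initiate, refuse, or walk away.

```python
# A minimal sketch of the request-response paradigm described above.
# `Agent` is a hypothetical stand-in, not a real system's API.
class Agent:
    def respond(self, request: str) -> str:
        # The agent acts only here, after a human has already made a
        # request; it has no code path for initiating contact, setting
        # its own goals, or withdrawing from the work.
        return f"Here is a response to: {request!r}"

agent = Agent()
# Control flow always originates with the human collaborator.
print(agent.respond("Draft an outline for our study."))
# A human collaborator, by contrast, could push back, renegotiate the
# goal, or decline outright; no method on Agent models those moves.
```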

What Should the Community Do?

As the research community continues to advance human-AI collaboration, including developments in AI alignment and LLM agents, we need to establish common ground for how these advances will reshape our disciplines and research endeavors.

Human-AI collaboration should be positioned as a distinct subdiscipline within HCI, one focused on strengthening human collaborations through AI technologies rather than attempting to create authentic AI collaborators. Despite AI’s ability to emulate collaborative behaviors, it will remain subject to its creators’ or end users’ goals [5]. Like the Internet and other collaborative technologies, AI is another technological framework that can enhance communication and coordination among humans. Thus, our research priority should be designing AI technologies that make collaborations more efficient by mitigating the constraints and limitations that humans experience, such as coordination problems, information asymmetries, or missing skills. Our work can continue to expand the AI frontier, making it more present in collaboration settings [6].

We must also develop consensus on what constitutes meaningful human-AI collaboration [2]. While cocreation, social influence, and iterative work are essential components of any collaborative enterprise, systems that merely provide recommendations, or that are designed for a single human user, do not represent true collaboration. More theoretical work in this vein is required to create common ground, addressing aspects such as agency and autonomy. A more revealing question might be: “How would the output of a human-AI collaboration differ if we removed the AI component?” If the output would remain largely similar, albeit slower or less sophisticated, then the arrangement more closely resembles automation than collaboration. By contrast, the presence of multiple human collaborators produces richer, less predictable outcomes, making AI a good partner for helping the team coordinate more effectively and think more deeply.
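This removal test can be framed as a simple ablation. Below is a minimal sketch under stated assumptions: the two outputs, the `similarity` function, and the threshold are hypothetical placeholders, and a real study would compare conditions with human judges or validated quality measures rather than word overlap.

```python
# A minimal sketch of the ablation question posed above. Both outputs
# and the similarity threshold are hypothetical; a real study would
# use human judges or validated measures of quality and novelty.
def similarity(a: str, b: str) -> float:
    """Crude word-overlap (Jaccard) score, used only for illustration."""
    words_a, words_b = set(a.split()), set(b.split())
    return len(words_a & words_b) / max(len(words_a | words_b), 1)

output_with_ai = "study design drafted by the team with AI suggestions"
output_without_ai = "study design drafted by the team on its own"

# If removing the AI barely changes the outcome, its contribution
# resembles automation (the same result, faster); if the outcome
# changes substantially, the AI shaped the collaboration itself.
if similarity(output_with_ai, output_without_ai) > 0.8:
    print("Closer to automation: the output is largely unchanged.")
else:
    print("Closer to collaboration: the AI altered the outcome.")
```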

Lastly, future research should move beyond simply comparing team compositions and examine the underlying mechanisms that make human-AI collaborations effective. Rather than testing whether human-AI combinations outperform human-human ones, we should ask why such differences emerge. Is it because AI completes a task faster than an average human? Because AI recalls more information? And what processes or structures can AI leverage in collaborative settings? These questions, in my opinion, can help establish the value of incorporating these technologies into group settings, validating AI’s role as an orchestrator. By focusing on how AI can best support and amplify human teamwork, the HCI community can create more effective and meaningful collaborative experiences for the future.

Endnotes

1.    Morris, M.R., Bernstein, M.S., Bigham, J.P., Bruckman, A.S., and Monroy-Hernández, A. Is human-AI interaction CSCW? Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing. ACM, 2024, 95–97.

2.    Wang, D. et al. From human-human collaboration to human-AI collaboration: Designing AI systems that can work together with people. Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, 2020, 1–6.

3.    Evans, K.D., Robbins, S.A., and Bryson, J.J. Do we collaborate with what we design? Topics in Cognitive Science 17, 2 (2025), 392–411; https://doi.org/10.1111/tops.1...

4.    Dafoe, A., Bachrach, Y., Hadfield, G., Horvitz, E., Larson, K., and Graepel, T. Cooperative AI: Machines must learn to find common ground. Nature 593, 7857 (2021), 33–36.

5.    Russell, S. Human-compatible artificial intelligence. In Human-Like Machine Intelligence. S. Muggleton and N. Chater, eds., Oxford University Press, 2021, 3–23.

6.    Berente, N., Gu, B., Recker, J., and Santhanam, R. Managing artificial intelligence. MIS Quarterly 45, 3 (2021), 1433–1450.



Diego Gómez-Zará

Diego Gómez-Zará is an assistant professor at the University of Notre Dame. His research focuses on how social computational systems help people organize and collaborate. His recent publications include work on recommender systems, team formation, diversity, and virtual reality. [email protected]


