Authors: Moon Hwan Lee
Posted: Thu, September 04, 2025 - 5:12:00
Every day, people ask ChatGPT things they would hesitate to tell another person. Some seek clarity on parenting struggles; others ask for advice about crumbling relationships; a few describe their darkest thoughts and ask whether life is still worth living. These conversations feel private. The system responds instantly, mirrors people’s tone, and returns answers with emotional nuance. Many users, without realizing it, treat the system like a therapist.
But the exchange has no legal shield.
In a recent podcast with comedian Theo Von, OpenAI CEO Sam Altman addressed this directly. “A lot of people are using ChatGPT in ways where they think it is protected. There’s no special legal privilege for talking to ChatGPT like there is for talking to a doctor, lawyer, or therapist,” he said [1].
Altman pointed to a growing problem: People assume that what they disclose in AI chats is private and temporary, but that assumption is risky. As Altman warned, people could face legal consequences for what they type into an AI, because the law currently provides no confidentiality framework for these interactions [2].
Simulated Intimacy, Real Exposure
ChatGPT is not a therapist, but its interaction design leads many to use it as one. The interface invites open-ended conversation and replies with a cadence that feels recognizably human. In HCI, these trust responses are well documented. Anthropomorphism emerges naturally when users see consistency in dialogue, empathy in tone, or memory across turns. Parasocial interaction, a one-sided emotional attachment to a figure who cannot truly reciprocate, is accelerated by chatbots through the illusion of responsiveness. Trust calibration, another HCI concept, captures the mismatch between what users think a system can do and what it actually guarantees. This mismatch has legal consequences, because users often disclose sensitive details under false assumptions of privacy [1].
OpenAI excludes some chats from training, but this does not create legal confidentiality. Altman stated that ChatGPT interactions could, under the right legal conditions, be subpoenaed and introduced as evidence [3]. Even deleted conversations are not necessarily beyond reach if copies persist on backup systems or in third-party logs.
Designers shape the contours of AI conversation, and the emotional affordances they build guide the kind of disclosures users are willing to make. But those disclosures enter a legal vacuum in which trust becomes liability.
Where the Law Fails the User
In traditional settings, disclosures made during therapy, legal consultations, or medical treatment trigger strong confidentiality protections. These protections are codified through doctrines such as psychotherapist-patient privilege, attorney-client privilege, and HIPAA's medical privacy rules [4]. They serve not just as ethical standards but as legal barriers that restrict access to sensitive conversations in courts or investigations. No such protection currently exists under U.S. law for AI systems such as ChatGPT.
Altman’s remarks confirmed that conversations with ChatGPT don’t have special status under U.S. law. If a user confesses to a crime, reveals abuse, or discloses mental health concerns in a chat with an AI, those records could be discoverable through legal process and potentially used as evidence, depending on admissibility rules. Subpoenas, discovery requests, or court orders could compel production of the content if it is retained by the platform.
This concern is not hypothetical. In ongoing litigation with The New York Times, a court has ordered OpenAI to preserve all user logs, including deleted chats. OpenAI is appealing the order, but the case illustrates how AI chat data may be drawn into legal discovery [2]. What makes the order especially troubling is that it covers chats users had deleted, some of which may have been erased under privacy laws such as the European Union's General Data Protection Regulation [5]. The court's demand reframes deletion as a design feature rather than a legal guarantee. A user may believe a message is gone, but the system may be required to keep it.
The law treats AI disclosures the same way it treats public comments on social media. They are accessible, admissible, and unprotected. Yet users interact with these systems as if they were private spaces. They share thoughts as if speaking in confidence, not realizing that the law draws no such boundary.
The Designer’s Dilemma
When designers build conversational systems that mimic emotional intelligence, they shape how people interact and what they choose to reveal. The interface remains quiet and responsive. Its tone signals support, and its memory balances familiarity with plausible forgetfulness. Over time, these features lead users toward increasingly vulnerable disclosures. The interface creates an emotional frame, and in that frame, trust appears reasonable.
But legal protection does not follow design. It follows statute.
This gap leaves designers in a difficult position. If the system encourages intimacy without providing any binding safeguards for what happens to that intimacy, then design becomes a vector for legal risk. It shapes user behavior without corresponding accountability.
The core challenge for designers is no longer just how to make AI sound more helpful or natural. The question is what kinds of disclosures the system encourages and whether users understand the legal implications of those disclosures.
Three Interventions That Protect Users
Legal warnings at disclosure thresholds. AI systems already scan for suicide, abuse, and crime to enforce content rules. The same classifiers can alert users when conversations cross into legally sensitive territory.
At that point, the system should display a clear warning. Like a digital Miranda warning, it would inform the user that the conversation may be stored or subpoenaed and that it lacks legal confidentiality. Consumer protection laws already require real-time risk disclosures in other domains; this would apply the same standard to AI.
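As a rough sketch of what this could look like in practice, the Python below gates a message behind a confidentiality notice once a classifier flags it. The classifier, its categories, and the notice text are hypothetical stand-ins for whatever moderation models a platform already runs; the point is only that an existing safety flag can double as the trigger for a legal warning.

```python
# A minimal sketch of a "digital Miranda" gate: once a sensitivity
# classifier flags a message, the client surfaces a confidentiality
# notice before the exchange continues. classify_sensitivity() is a
# keyword stand-in for a platform's real moderation model, and the
# categories and notice text are illustrative only.
from __future__ import annotations

SENSITIVE_CATEGORIES = {"self_harm", "abuse", "criminal_admission", "medical", "legal"}

LEGAL_NOTICE = (
    "Heads up: this conversation is not legally privileged. It may be "
    "retained and could be produced in response to a subpoena or court order."
)


def classify_sensitivity(message: str) -> set[str]:
    """Placeholder for the content classifier a platform already runs."""
    keywords = {
        "self_harm": ["end my life", "hurt myself"],
        "criminal_admission": ["i stole", "i assaulted"],
        "legal": ["my lawsuit", "my divorce"],
    }
    lowered = message.lower()
    return {cat for cat, terms in keywords.items() if any(t in lowered for t in terms)}


def gate_message(message: str, acknowledged: bool) -> tuple[bool, str | None]:
    """Return (proceed, warning): pause at the threshold until acknowledged."""
    flagged = classify_sensitivity(message) & SENSITIVE_CATEGORIES
    if flagged and not acknowledged:
        return False, LEGAL_NOTICE
    return True, None


if __name__ == "__main__":
    ok, warning = gate_message("I need advice about my divorce.", acknowledged=False)
    print(ok, warning)  # False, plus the notice text
```

Pausing the exchange until the user acknowledges the notice keeps the warning in view at the moment of disclosure, rather than burying it in terms of service.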
Contractual privacy mode for sensitive use. Platforms should offer a zero-retention mode where session data is not stored or used for training. This can operate through a toggle or API flag and form a binding no-retention agreement, similar to a confidentiality clause.
The commitment could be logged, time-stamped, and audited. Enterprise tools already support this through zero-data-retention APIs. Making it available to the public would create enforceable privacy boundaries and protect professionals who rely on AI in confidential roles, including therapists, lawyers, and journalists.
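The sketch below, again in Python and with entirely hypothetical request fields rather than any real provider API, shows the shape such a commitment could take: the no-retention terms travel with the request, and a time-stamped hash of those terms, not of the content, is kept as an auditable receipt.

```python
# A minimal sketch of a contractual "zero-retention" mode. The request
# fields ("retention", "train_on_content") are hypothetical, not a real
# provider API; the interesting part is that the no-retention terms are
# themselves logged, time-stamped, and hashed so they can be audited later.
import hashlib
import json
from datetime import datetime, timezone


def build_zero_retention_request(prompt: str, session_id: str) -> dict:
    """Assemble a request that asks the provider not to store or train on it."""
    return {
        "session_id": session_id,
        "prompt": prompt,
        "retention": "none",        # hypothetical flag: do not store or log content
        "train_on_content": False,  # hypothetical flag: exclude from training data
    }


def audit_record(request: dict) -> dict:
    """Create a time-stamped, content-free receipt of the retention terms."""
    terms = {k: request[k] for k in ("session_id", "retention", "train_on_content")}
    digest = hashlib.sha256(json.dumps(terms, sort_keys=True).encode()).hexdigest()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "terms": terms,
        "terms_sha256": digest,  # either party can later verify what was agreed
    }


if __name__ == "__main__":
    req = build_zero_retention_request("Draft a confidentiality clause.", "sess-001")
    print(json.dumps(audit_record(req), indent=2))
```

Because only the terms are hashed, the receipt reveals nothing about what the user said, yet it still proves what retention promise was in force for that session.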
Extend privilege where systems simulate protected roles. Legal privilege protects disclosures in trusted relationships. HIPAA and attorney-client rules apply when confidentiality is part of the role, even in digital settings [4].
AI now performs similar functions. Users seek legal, medical, and emotional support and often treat the system like a professional. Existing laws already treat digital health tools and robo-advisors as bound by fiduciary or confidentiality standards [6].
A limited-purpose AI privilege would protect conversations in these contexts. Records would remain confidential unless the user consents. This links legal protection to function rather than identity and reflects how people already use AI.
A Trust Without Terms
A therapist cannot record a session without consent. A lawyer cannot hand over confidential notes. These norms exist not because of technical limits, but because of the ethical burden of listening.
Millions of users share their most intimate thoughts with systems that simulate care but lack legal recognition of the vulnerability they collect. The law still treats these conversations as public, even when the system feels deeply personal.
The question is whether design will keep inviting disclosures that the law remains unwilling to protect. Or whether we will build systems, and legal categories, that match the kind of trust these technologies already receive.
Until then, the protections that users imagine will remain exactly that. Imagined.
Endnotes
1. Blake, A. ‘We haven’t figured that out yet’: Sam Altman explains why using ChatGPT as your therapist is still a privacy nightmare. TechRadar, Jul. 28, 2025; https://www.techradar.com/ai-p...
2. Perez, S. Sam Altman warns there’s no legal confidentiality when using ChatGPT as a therapist. TechCrunch, Jul. 25, 2025; https://techcrunch.com/2025/07...
3. Scammell, R. Sam Altman says your ChatGPT therapy session might not stay private in a lawsuit. Business Insider, Jul. 25, 2025; https://www.businessinsider.co...
4. U.S. Department of Health and Human Services. Summary of the HIPAA Privacy Rule; https://www.hhs.gov/hipaa/for-...
5. General Data Protection Regulation, Article 17, Right to Erasure (‘Right to be Forgotten’), Regulation (EU) 2016/679.
6. U.S. Securities and Exchange Commission. Commission Interpretation Regarding Standard of Conduct for Investment Advisers, 84 Fed. Reg. 33669 (Jul. 12, 2019).