Undisclosed ChatGPT use in therapy is reshaping how clients experience trust, consent, and confidentiality. While AI offers the promise of efficiency and burnout relief, therapists’ quiet adoption of chatbots like ChatGPT in clinical settings risks upending the ethical foundation of therapeutic work—and many patients are already noticing.
Why disclosure matters amid burnout and AI adoption
Therapists face profound pressures: increased caseloads, paperwork burdens, and the emotional toll of chronic burnout. As a result, many turn to large language models (LLMs) to help with administrative tasks such as drafting outreach messages or summarizing sessions. What begins as an effort to cut time spent on documentation quickly expands; clinicians may come to rely on AI for more integral parts of therapy—sometimes blurring the lines between administrative aid and clinical contribution.
Those blurred boundaries are not harmless. The therapeutic relationship rests on confidentiality and honest communication; introducing undisclosed AI directly shifts this frame without patient consent. Reporting from MIT Technology Review chronicles real-world incidents: a client notices templated, oddly formal phrasing in feedback; later, a screen-share reveals a ChatGPT prompt in an active tab. The experience is jarring, provoking a mix of surprise and betrayal. These ruptures highlight why explicit discussion of AI’s place in therapy is an ethical minimum, even when no protected health information (PHI) seems to leave the electronic health record system.
From administrative help to clinical reliance
The clinical adoption of LLMs is often pragmatic. Overworked providers reach for AI tools to produce session summaries, keep up with documentation quotas, and manage messaging. The immediate rewards are clear: more patient-facing time, less late-night charting, and relief from some sources of burnout. But as AI’s scope grows beyond rote paperwork, the risk increases that clinicians inadvertently shift their therapeutic stance, or lose sight of nuances that only a human can observe.
Trust ruptures when the frame shifts without consent
When patients discover that aspects of their therapy have been mediated by an AI—especially if this is not disclosed—it can damage trust irreparably. Clients may report a sense of detachment or inauthenticity in their sessions, question the clinician’s judgment, or feel that the very boundaries required to foster safety have been crossed (MIT Technology Review).
What makes general‑purpose chatbots a misfit in clinical settings
Most consumer LLMs were not built with clinical environments in mind. They prioritize general accessibility and ease of use over legal, ethical, or procedural safeguards essential for healthcare. Unlike dedicated mental-health apps, consumer-grade chatbots lack features such as:
- Automatic provenance labeling of AI-generated text
- Interoperable audit trails and signed provenance (see Building the Matrix: The Emergence of Foundational Protocols for AI Agents)
- Role-based and human-in-the-loop access controls (a minimal sketch follows this list)
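To make that last item concrete, here is a minimal, illustrative sketch of a human-in-the-loop, role-gated release check; the roles, field names, and `can_release` helper are hypothetical, not drawn from any real product or from the reporting cited here.

```python
# Hypothetical role-based, human-in-the-loop gate for AI-drafted clinical text.
ALLOWED_APPROVERS = {"treating_clinician", "supervisor"}  # roles permitted to release a draft

def can_release(draft: dict, approver_role: str) -> bool:
    """Release an AI draft only after a permitted human role has reviewed and approved it."""
    return (
        draft.get("provenance") == "ai_draft"      # only AI drafts pass through this gate
        and draft.get("human_reviewed") is True    # a person actually read the draft
        and approver_role in ALLOWED_APPROVERS     # and that person holds a permitted role
    )

draft = {"note_id": "n-205", "provenance": "ai_draft", "human_reviewed": True}
print(can_release(draft, "treating_clinician"))  # True
print(can_release(draft, "billing_staff"))       # False: role not permitted to approve
```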
Operational exposure: hosting, terms, and HIPAA gaps
One core risk is operational. Chatbot providers rarely sign Business Associate Agreements (BAAs) or Data Processing Addenda, the contracts required before a vendor may handle PHI under HIPAA. These platforms often retain user inputs for model improvement, do not guarantee data segregation, and lack the safeguards around transmission, retention, and audit that the law demands. That means a clinician who shares even nominally de-identified clinical text with a consumer chatbot potentially exposes the practice to regulatory scrutiny and liability (MIT Technology Review). For a deeper look at why medical disclaimers have faded from mainstream chatbot interfaces, and the risks this creates for patient safety, see our analysis on the vanishing medical disclaimer.
Clinical exposure: tone drift, missing provenance, and subtle harms
General-purpose LLMs lack the context-sensitivity and standardized provenance required for safe clinical use. Small changes in prompt or user input can lead to unintentional tone drift, omissions, or language missteps that a trained clinician would catch. Without audit trails or the ability to verify (or flag) which text was machine- vs. human-authored, clinicians cannot reliably supervise or defend the accuracy of documentation.
Privacy, consent, and today’s regulatory gray zones
Even when clinicians “de-identify” session notes before feeding them to an LLM, the risk of re-identification is not theoretical. A rare diagnosis, a specific date of loss, or the combination of town, age, and presenting concern is often enough to trace a narrative back to an individual. Effective risk reduction demands careful redaction and sometimes synthetic obfuscation before using any content off-platform. De-identification alone is not a guarantee.
Regulators have not yet closed these gaps. Current reporting makes the baseline clear: most chatbots do not meet the security or privacy standards required for PHI under U.S. law, and using them without explicit, documented patient consent creates significant liability for providers (MIT Technology Review).
What must be disclosed and documented now
Clinicians should tell patients—plainly and proactively—when, how, and why AI is used in their care. This disclosure should specify:
- What tasks AI is used for (e.g., drafting notes or summaries)
- What data is shared, where it is processed or stored, and how data retention works
- How patients can opt out or set boundaries around such use
Consent should be captured in the clinical record, and revisited whenever workflows, vendors, or policies change.
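As one way to picture that documentation, the sketch below shows a minimal, hypothetical structure for an AI-use consent record; the `AIUseConsentRecord` class and its fields are illustrative assumptions, not a reference to any particular EHR or vendor.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseConsentRecord:
    """Hypothetical shape of an AI-use consent entry kept in the clinical record."""
    patient_id: str                  # internal chart identifier, never sent to the AI vendor
    tasks: list[str]                 # what AI is used for, e.g., ["draft session summaries"]
    data_shared: str                 # what leaves the EHR, e.g., "redacted note text only"
    processing_location: str         # where the vendor processes and stores the data
    retention_policy: str            # retention terms as explained to the patient
    opt_out: bool = False            # patient declined some or all AI-assisted workflows
    boundaries: list[str] = field(default_factory=list)  # patient-set limits on AI use
    consented_on: date = field(default_factory=date.today)
    review_due: date | None = None   # revisit when workflows, vendors, or policies change

# Example: consent to AI-drafted summaries, with a boundary on patient-facing messages.
consent = AIUseConsentRecord(
    patient_id="chart-0042",
    tasks=["draft session summaries"],
    data_shared="redacted note text only",
    processing_location="US-hosted, HIPAA-aligned vendor",
    retention_policy="deleted after 30 days; not used for model training",
    boundaries=["no AI drafting of messages sent to the patient"],
)
```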
How “de‑identified” notes can re‑identify patients
Even when names and direct identifiers are removed, subtle clues—like a rare trauma type, specific event dates, or small-town context—can lead to unintentional re-identification. Best practice is to combine manual redaction with synthetic obfuscation and to never use off-the-shelf chatbots for anything involving real patient narratives unless the platform is contractually HIPAA-compliant.
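For illustration only, the sketch below shows the kind of first-pass, pattern-based scrubbing a practice might layer beneath manual review; the patterns and the `scrub` helper are hypothetical, and regexes alone cannot catch the quasi-identifiers (a rare diagnosis, a small-town detail) that make narratives traceable.

```python
import re

# Hypothetical first-pass scrubber: it catches obvious direct identifiers only and
# does NOT address quasi-identifiers such as rare trauma types or small-town context,
# which is why manual redaction and clinician review still come first.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "[DATE]":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace obvious direct identifiers with placeholder tokens."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Client (DOB 03/14/1988, cell 555-867-5309) marked the anniversary of the 6/2/2021 loss."
print(scrub(note))
# -> "Client (DOB [DATE], cell [PHONE]) marked the anniversary of the [DATE] loss."
```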
Better options: HIPAA‑aligned tools and audit‑ready design
A new class of vendors (e.g., Heidi Health, Upheal, Lyssn, Blueprint) now offers AI-powered services built natively for mental health, with a focus on HIPAA alignment, clinic workflows, and auditability. Unlike general-purpose tools, these platforms add:
- Provenance labeling (flagging AI-generated contributions)
- Durable audit logs, human-in-the-loop reviews, and tight role-restricted access to sensitive data
- Clinician-facing controls for verifying, editing, and approving drafts before they reach the patient record
Auditability and transparency create defensible boundaries, while periodic audits and structured security reviews further ensure safety and compliance. For details on best-in-class digital audit trails and protocols, see our guide to interoperable audit trails and signed provenance.
Auditability, labeling of AI contributions, and clinician control
Any AI-driven workflow should make it immediately evident which text was machine-generated, track every revision, and allow clinician veto. Vendors should provide signed audit trails and run periodic, sample-based spot checks on AI-generated notes for tone, accuracy, and adherence to therapeutic boundaries.
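As a rough illustration of what signed trails and spot checks could look like in practice, the sketch below attaches a tamper-evident signature to each log entry and pulls a random sample of AI drafts for review; the `sign_entry` and `sample_for_review` helpers and the in-code key are hypothetical, and a real deployment would rely on vendor-managed keys and storage rather than anything hard-coded.

```python
import hashlib
import hmac
import json
import random

SIGNING_KEY = b"replace-with-managed-secret"  # hypothetical; real systems use managed keys, not literals

def sign_entry(entry: dict) -> dict:
    """Attach a tamper-evident HMAC signature to a single audit-log entry."""
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return entry

def sample_for_review(log: list, rate: float = 0.1) -> list:
    """Pull a random subset of AI-generated entries for clinician spot checks."""
    ai_entries = [e for e in log if e["provenance"] == "ai_draft"]
    k = max(1, int(len(ai_entries) * rate)) if ai_entries else 0
    return random.sample(ai_entries, k)

# Each entry records provenance, the approving clinician, and the revision count.
audit_log = [
    sign_entry({"note_id": "n-101", "provenance": "ai_draft", "approved_by": "clinician-7", "revision": 2}),
    sign_entry({"note_id": "n-102", "provenance": "human", "approved_by": "clinician-7", "revision": 1}),
]
print(sample_for_review(audit_log))  # one AI-drafted entry selected for spot review
```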
Implications for practice and policy
Surprise AI use is not a minor slip—it can break the core trust that holds the therapy process together. When patients encounter responses that feel templated or emotionally off, detect a lack of nuance, or see evidence of live AI use via screen-share, the rupture is immediate and often lasting. These moments reveal the essential truth: transparency is not a bureaucratic hurdle, but a clinical and relational necessity.
For clinicians, the draw of efficiency—more time with patients, less paperwork, reduced burnout—is real and important. Yet these benefits are contingent on preserving patient agency and informed consent. Where disclosure fails, the risk is doubly high: both the threat of regulatory action and the erosion of real human connection.
Next steps for providers, vendors, and regulators
Solving these challenges will require sustained adjustment rather than a one-time policy. Clinic policies should be revisited at least quarterly as professional guidance evolves, and every inadvertent disclosure should trigger incident response, with patient notification if warranted. At a minimum:
- Explicit, patient-facing disclosure of which AI tools are used and for what functions
- Use of HIPAA-compliant, auditable platforms for any clinical content
- Routine audits and action protocols for any privacy incident or patient concern
Short-term outlook: incidents, norms, and market shifts
Expect a two-track trend over the coming months. Consumer-grade chatbot use will persist in clinics prioritizing short-term efficiency, but it will draw increased incident reporting and insurer scrutiny. As a result, professional norms will harden: major associations and health systems will move toward clearer, stricter disclosure guidelines. Simultaneously, demand for specialist, HIPAA-aligned AI vendors will spike as clinics and payers look for interoperable, audit-ready solutions. Insurer requirements for disclosure, and even AI-use attestations in documentation workflows, will likely emerge.
As these changes unfold, the primary tension will remain: clinicians need support for administrative burden, but ethics and patient safety require oversight, full transparency, and auditable workflows. As legal and professional standards develop, those who operationalize this balance will reduce burnout without sacrificing trust. Those who cut corners—either by hiding AI use or deploying risky tools—will face more ruptured alliances, formal complaints, and possibly regulatory action.

