On February 12, 2026, a US federal court ruled that documents generated using an AI tool have no legal privilege protection. The implications for mental health practitioners using AI for clinical documentation are immediate and serious.
On February 12, 2026, Federal Judge Jed Rakoff of the Southern District of New York ruled that 31 documents a defendant had generated using Claude (an AI tool built by Anthropic), and later shared with his defence attorneys, are not protected by attorney-client privilege or the work-product doctrine.
The ruling is straightforward. An AI tool is not an attorney. It has no licence, owes no duty of loyalty, and cannot form a privileged relationship with the user. Sharing case details with an AI platform is, in the court's analysis, legally no different from discussing your legal situation with a stranger.
What made the ruling decisive was the AI provider's own privacy policy. At the time the defendant used the tool, Anthropic's terms expressly permitted disclosure of user prompts and outputs to governmental authorities. There was no reasonable expectation of confidentiality --- not because the defendant was careless, but because the platform's terms of service said so.
As digital assets and IP attorney Moish Peltz of Friedman Kaplan Seiler & Adelman LLP noted in his commentary on the ruling: "Every prompt is a potential disclosure. Every output is a potentially discoverable document."
The defendant could not fix the problem after the fact, either. Sending AI-generated documents to his lawyers did not retroactively make them privileged. That principle has been settled law for years. It simply had not been tested with AI until now.
Why This Matters for Mental Health Practitioners
This ruling was about legal privilege. But the logic applies directly --- and arguably with greater urgency --- to therapist-client confidentiality.
If a court can compel disclosure of AI-generated legal documents because the AI provider's privacy policy permits it, then the same reasoning applies to any clinician using a generic AI tool for clinical documentation. The privacy policy of the platform you use determines whether your data can be compelled by a court. Not your professional ethics code. Not your intention. The terms of service.
Most therapists who use AI tools for clinical work experience the interface as private. It feels like talking to a confidential advisor. You type your session notes. You get a structured SOAP note back. The interaction feels contained and personal.
It is not. Unless the platform's terms explicitly prohibit disclosure, you are inputting clinical content into a third-party commercial system that retains your data and reserves broad rights to disclose it --- including to government authorities, in response to legal process, or as required by law.
The Stakes Are Higher for Clinical Records
Attorney-client privilege is significant, but mental health records carry a different kind of weight.
Clinical notes contain trauma histories, abuse disclosures, substance use patterns, suicidal ideation, forensic risk information, and diagnostic impressions that follow clients through their lives. Disclosure of this material can cause:
- Stigma and discrimination --- in employment, insurance, housing, and relationships
- Custody loss --- mental health records are routinely subpoenaed in family court proceedings
- Re-traumatisation --- forced disclosure of trauma narratives to opposing counsel, employers, or family members
- Professional consequences --- fitness-for-duty evaluations that use AI-generated notes as evidence
- Criminal exposure --- substance use disclosures or risk assessments taken out of clinical context
Mental health records are among the most frequently subpoenaed documents in family court, custody disputes, fitness-for-duty evaluations, malpractice claims, and insurance disputes. A therapist who generates clinical documentation using a platform whose privacy policy permits third-party disclosure has created a discoverable record on infrastructure they do not control, governed by terms they almost certainly did not read.
The Privacy Policy Problem
Judge Rakoff's ruling turned on a specific fact: the AI provider's privacy policy permitted disclosure to governmental authorities. This was not a hypothetical concern. It was the deciding factor.
Most generic AI platforms --- including ChatGPT (OpenAI) and Claude (Anthropic) --- include terms of service that reserve the right to disclose user data in response to legal process, to comply with applicable law, or to protect the rights and safety of the provider. These are standard terms for consumer-facing technology products. They are also fundamentally incompatible with the duty of confidentiality that mental health professionals owe their clients.
The issue is not whether you trust the AI company. It is whether a court can compel them to hand over your data. If their privacy policy says they can disclose to authorities, the answer --- as Judge Rakoff has now confirmed --- is yes.
This Was Predictable
ConfideAI's AI Documentation Standards paper, "Using Generative AI for Mental Health Documentation: Why Public Tools Fail Professional Standards and What Safe Use Requires", identified this exact vulnerability before the ruling. The paper examined why generic large language models fail professional standards for confidentiality in clinical contexts, including:
- The gap between perceived privacy and actual data handling in consumer AI platforms
- Re-identification risks even when clinicians attempt to anonymise clinical input
- Professional indemnity implications when practitioners use tools that do not meet professional standards for data protection
- The equity gap that leaves independent practitioners --- who cannot negotiate enterprise agreements --- most exposed
The Rakoff ruling is the first major judicial confirmation of what the paper argued: that using consumer AI tools for privileged or confidential professional work creates discoverable records on infrastructure with third-party disclosure rights. The ruling addressed legal privilege specifically, but the underlying principle --- that a platform's terms of service determine whether your data is protected --- applies equally to clinical confidentiality.
What Should Practitioners Do?
The Rakoff ruling means that every mental health professional using AI for clinical work, regardless of which tool, needs to understand the following:
1. Read the Privacy Policy
Before you use any AI tool for clinical documentation, read the provider's terms of service and privacy policy. Look specifically for language about:
- Data retention --- how long are your prompts and outputs stored?
- Third-party disclosure --- does the provider reserve the right to disclose your data to law enforcement, government authorities, or in response to legal process?
- Training --- is your data used to train or improve AI models?
- Subpoena response --- what is the provider's stated policy when they receive a subpoena or court order?
If the privacy policy permits disclosure to authorities, then anything you input --- including clinical content --- is potentially discoverable.
2. Treat Every Prompt as a Potential Record
Judge Rakoff's ruling makes clear that AI-generated documents are discoverable. For clinicians, this means every prompt you send to an AI tool and every output it generates is a document that could be compelled in legal proceedings. Clinical notes, treatment plans, case formulations, risk assessments --- if they were generated on or processed through a third-party AI platform, they may not be protected by therapist-client confidentiality in the way you assume.
3. Know Your Professional Obligations
Most psychology and counselling licensing boards require clinicians to take reasonable steps to protect client confidentiality, including when using technology. Using a consumer AI tool whose privacy policy explicitly permits disclosure to authorities may not meet that standard. Review your licensing board's guidance on electronic records, cloud storage, and third-party processing.
4. Evaluate Your AI Infrastructure
Not all AI tools are built the same way. Consumer chatbots are designed for general use and governed by consumer privacy policies. Purpose-built clinical tools may offer architecturally different privacy protections. The question to ask is not "does this company promise to protect my data?" but rather "does the architecture make disclosure structurally more difficult?"
How ConfideAI Approaches This Problem
ConfideAI was built specifically for mental health practitioners, and its architecture is fundamentally different from the consumer AI platforms at the centre of the Rakoff ruling.
What ConfideAI does:
- Hardware-secured processing: All AI inference occurs inside Trusted Execution Environments (Intel TDX), where clinical content is isolated in hardware-encrypted memory during processing. This is not a policy promise --- it is a constraint enforced in the processor silicon itself.
- No clinical content on ConfideAI servers: ConfideAI operates a secure gateway that authenticates users and routes requests to NEAR AI's TEE infrastructure. Clinical content is not stored, logged, or retained on ConfideAI's own servers.
- Cryptographic attestation: Each interaction is verified by cryptographic proof that processing occurred inside a genuine hardware-secured enclave, not on a standard server (an illustrative sketch of this kind of check follows this list).
- Purpose-built for clinical workflows: 20+ evidence-based documentation templates covering the full episode of care from referral to discharge, with multi-orientation support. This is not a general-purpose chatbot repurposed for therapy notes.
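To make the attestation point concrete, here is a minimal, illustrative sketch of the kind of check a client can run before any clinical content leaves the practitioner's machine. The report format, field names, and placeholder values are hypothetical; they are not ConfideAI's or NEAR AI's actual interfaces.

```python
# Illustrative sketch only: the report format, field names, and values below are
# hypothetical and do not reflect ConfideAI's or NEAR AI's actual interfaces.

# A known-good measurement (hash of the audited enclave build) that the provider
# would publish out of band.
EXPECTED_MEASUREMENT = "9f2c..."  # placeholder value

def attestation_is_acceptable(report: dict) -> bool:
    """Decide whether an attestation report justifies sending clinical content."""
    # 1. The report must describe a genuine hardware TEE (e.g. Intel TDX),
    #    not an ordinary virtual machine.
    if report.get("tee_type") != "tdx":
        return False
    # 2. The enclave measurement must match the published, audited build,
    #    so the client knows exactly which code will see the data.
    if report.get("measurement") != EXPECTED_MEASUREMENT:
        return False
    # 3. A real client would also verify the report's signature against the
    #    hardware vendor's attestation root of trust; that step is omitted here.
    return True

# Usage: refuse to transmit anything if the check fails.
if not attestation_is_acceptable({"tee_type": "tdx", "measurement": "9f2c..."}):
    raise RuntimeError("Attestation check failed: do not send clinical content.")
```

The point of the sketch is the ordering: the proof that the right code is running inside a genuine enclave is checked before any data is transmitted, which is what makes the protection structural rather than contractual.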
This architecture gives practitioners a stronger starting position than any generic AI tool currently offers independent clinicians. The privacy protections are structural, not just policy-based --- which is precisely the distinction that mattered in the Rakoff ruling.
What ConfideAI does not claim:
ConfideAI does not claim immunity from legal compulsion. No technology provider can guarantee that. Court orders and subpoenas operate within the legal system, and no architectural design overrides a valid court order. What ConfideAI can offer is an architecture where the risk vectors that sank the defendant in the Rakoff ruling --- a provider privacy policy that expressly permits disclosure to authorities, data retained on consumer infrastructure with broad third-party access rights --- are structurally reduced.
ConfideAI is also developing fully isolated per-practitioner infrastructure where the platform provably never has access to clinical data. This is a future capability, not yet available.
The Bigger Picture
The Rakoff ruling is not an isolated event. It is the first judicial articulation of a principle that will shape how every profession that relies on confidentiality --- law, medicine, psychology, counselling, social work --- engages with AI tools.
The core problem, as Peltz identified, is the gap between how people experience AI and what is actually happening. The conversational interface feels private. It feels like talking to an advisor. But unless the platform is specifically built to protect your data architecturally, you are inputting confidential information into a commercial system that reserves broad rights to disclose it.
For mental health practitioners, the stakes are not abstract. They are the trauma histories, the abuse disclosures, the suicidal ideation assessments, and the diagnostic impressions that your clients trusted you to protect. The question is no longer whether AI tools are useful for clinical documentation --- they clearly are. The question is whether the tools you use are built to protect the confidentiality your clients expect and your profession demands.
This post is for informational purposes only and does not constitute legal advice. Mental health practitioners should consult their own legal and professional advisors regarding the use of AI tools in clinical practice, record-keeping obligations, and confidentiality requirements in their jurisdiction.
ConfideAI's AI Documentation Standards paper, "Using Generative AI for Mental Health Documentation: Why Public Tools Fail Professional Standards and What Safe Use Requires," is available at confideai.ai/resources.
Sources:
- Judge Jed Rakoff, United States District Court, Southern District of New York, ruling dated February 12, 2026
- Moish Peltz (@mpeltz), digital assets and IP attorney at Friedman Kaplan Seiler & Adelman LLP, commentary published February 12, 2026
- ConfideAI Research, "Using Generative AI for Mental Health Documentation: Why Public Tools Fail Professional Standards and What Safe Use Requires"