AI in Legal Practice: What Heppner v. US Means for Lawyers, Mediators, and Arbitrators
For the sake of transparency and self-protection, let me start by saying I am not an attorney, and I am certainly not offering legal advice.
What follows, however, should unsettle every legal professional reading it. In February 2026, a federal judge in the Southern District of New York handed down a ruling that many of us in the legal and ADR community had been quietly dreading. Not because it was surprising, exactly. More because it confirmed what some of us already suspected: that the casual use of AI tools in legal practice has been creating exposure we cannot see and cannot easily undo.
The case is United States v. Heppner (25-cr-00503-JSR). Judge Jed Rakoff, one of the sharper legal minds on the federal bench, described it as “a question of first impression nationwide.” He wasn’t exaggerating.
What Actually Happened
Bradley Heppner, a financial executive facing federal fraud charges, used Anthropic’s publicly available version of Claude to prepare defense strategy materials. He later shared those materials with his attorneys. Sounds reasonable, right? He was trying to be a good client.
The FBI seized his devices during a search warrant. Thirty-one AI-generated documents, all containing legal strategy. His lawyers argued privilege. The government disagreed. Judge Rakoff sided with the government for three distinct reasons that every practitioner needs to internalize:
- Claude is not an attorney. Communications with an AI tool are not communications with counsel.
- No reasonable expectation of confidentiality. Heppner used the free, public version. Anthropic’s own privacy policy explicitly states that user inputs may be used for model training and disclosed to third parties, including government regulatory authorities.
- Not created at the direction of counsel. Heppner acted on his own initiative, which meant the Kovel doctrine (which can extend privilege to certain non-attorney agents acting under lawyer direction) simply did not apply.
The court ordered disclosure to the prosecutor. The materials were not protected.
Why This Is a Major Problem
The privilege question may be the lesser concern. Think about what it means that an AI company's servers contain a log of everything you typed. Every contract clause you asked about, every litigation strategy you explored, every settlement range you tested.
Those logs exist. They are stored on servers in locations you do not know, across jurisdictions whose laws you have not read, potentially reviewed by human employees whose vetting you cannot audit. Under U.S. law, the Stored Communications Act (18 U.S.C. § 2703) establishes the baseline legal process by which the government may compel disclosure of stored electronic communications from service providers.
In a separate and equally instructive case, the CEO of Krafton used ChatGPT to brainstorm ways to avoid paying $250 million in contractual earn-out to game developers. He deleted the conversations. They surfaced anyway as central evidence in litigation. Deletion from your interface does not mean deletion from their servers.
For lawyers, this is a competence and confidentiality problem. For mediators and arbitrators, it is arguably more serious. Your confidentiality obligations run to all parties simultaneously, not just your own client. Feeding case details, financial terms, or party disclosures into a public AI tool may expose one party’s private caucus information in ways that no attorney in the room would have any right to do.
The Silicon Valley Arbitration and Mediation Center’s Guidelines on the Use of AI in Arbitration (April 2024) are unambiguous: an arbitrator should not input any information about an arbitration into an open AI system.
What the Court Left Open (and Why It Matters)
Heppner is not a ban on AI in legal work. Judge Rakoff himself noted that had counsel directed Heppner to use Claude, the outcome might have been different. Claude might have functioned, in his words, “in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege.”
A separate federal decision issued the same week, Warner v. Gilbarco (E.D. Mich., Feb. 10, 2026), upheld work product protection over a pro se plaintiff’s AI-assisted materials, reasoning that AI platforms are “tools, not persons” and that disclosure to software is not disclosure to an adversary.
So, confusion reigns, and the law is genuinely unsettled. What is settled is the risk framework:
- Consumer AI tools, used without attorney direction, without confidentiality protection, and without documented workflow supervision, will not be protected.
- Enterprise tools with zero-retention contractual guarantees, used under attorney direction with documented protocols, have a much stronger case.
The difference between those two scenarios is not AI. It is everything around the AI.
Practical Actions: What to Do, What to Avoid
Stop doing these things
Do not paste client names, matter details, financial terms, or any personally identifiable information (PII) into any consumer AI platform. This includes the free tiers of Claude, ChatGPT, Gemini, and similar tools. ABA Formal Opinion 512 (July 2024) is blunt: without informed client consent and robust security guarantees from the AI provider, using consumer AI tools for client matters is extremely risky.
Do not assume that deleting a conversation protects you. The Krafton case should have ended that assumption. Logs persist. Backups exist. Training datasets may already contain your input.
Do not let junior staff or clients use AI tools on matters without a written policy governing that use. A 2026 survey found that 75% of lawyers were using AI but only 25% had received formal training on the ethical implications. That gap is a liability gap.
Start doing these things now
Update your engagement letters. They need to address AI use explicitly — yours and your client’s. Warn clients that communications with public AI platforms are not confidential and are potentially discoverable.
Implement a de-identification step before any document leaves your local environment. Tools like PII Anomalyzer are built specifically for this workflow. The software runs entirely on your local machine with no cloud access. It identifies and replaces personally identifiable information with semantically consistent tokens that preserve the analytical utility of the document. The real data never leaves your device. The mapping that allows you to re-identify outputs is stored locally, under your control. When you submit the de-identified version to an AI tool, the AI has no identifiable client data to store, subpoena, or inadvertently train on. This is not a workaround. It is the architecturally correct solution to the problem Heppner created.
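To make the token-mapping idea concrete, here is a minimal Python sketch of the general de-identification pattern. This is purely illustrative and is not PII Anomalyzer's actual implementation; the function names, the bracketed token format, and the example entities are all hypothetical. A real tool would detect identifiers automatically rather than take a hand-built list.

```python
def deidentify(text, entities):
    """Replace known identifiers with consistent placeholder tokens.

    `entities` maps each real value (a client name, an amount) to a
    category label. The same value always maps to the same token, so
    cross-references within the document remain analytically useful.
    Returns the cleaned text plus a token-to-value mapping that stays
    on the local machine and is never sent to the AI tool.
    """
    mapping = {}   # token -> real value (kept local, under your control)
    counters = {}  # per-category counter for numbering tokens
    for value, category in entities.items():
        counters[category] = counters.get(category, 0) + 1
        token = f"[{category}_{counters[category]}]"
        mapping[token] = value
        text = text.replace(value, token)
    return text, mapping


def reidentify(text, mapping):
    """Restore real values in AI output using the locally stored mapping."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text


# Only the de-identified text would ever leave the local environment.
doc = "Acme Corp owes Jane Doe $2.4M under the 2023 earn-out."
clean, mapping = deidentify(
    doc, {"Acme Corp": "PARTY", "Jane Doe": "PARTY", "$2.4M": "AMOUNT"}
)
# clean is now "[PARTY_1] owes [PARTY_2] [AMOUNT_1] under the 2023 earn-out."
```

The design point is the round trip: because the mapping never leaves your device, the AI provider's servers hold only placeholder tokens, which is exactly the property the Heppner ruling makes valuable.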
Document your AI use. Keep a record of what tool was used, for what purpose, that no confidential matter details were submitted in identifiable form, and that the workflow was conducted under attorney supervision. That documentation becomes your defense if a privilege challenge arises.
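One low-friction way to keep that record is an append-only log with one structured entry per AI interaction. The sketch below is a hypothetical schema in Python; the field names and the JSON Lines file are my own illustration, not drawn from any bar rule or court standard.

```python
import json

# Hypothetical AI-use log entry. Note that the entry itself is kept
# de-identified: it references a matter number, never the client name.
entry = {
    "date": "2026-02-14",
    "matter_ref": "M-0042",  # internal matter number, not identifiable data
    "tool": "enterprise AI assistant (zero-retention contract in place)",
    "purpose": "summarize de-identified deposition excerpts",
    "identifiable_data_submitted": False,
    "supervising_attorney": "JS",
}

# Append to a JSON Lines file: one JSON object per line, easy to audit later.
with open("ai_use_log.jsonl", "a") as f:
    f.write(json.dumps(entry) + "\n")
```

Because each entry records the tool, the purpose, the absence of identifiable data, and the supervising attorney, the log maps directly onto the elements a privilege defense would need to show.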
For arbitrators specifically, disclose AI use to all parties at the outset of any proceeding. Get written consent. Be specific — generic references to “transparency” are not enough. Parties need to know what tools are in play and how confidentiality is being handled.
Conclusion
The legal profession has navigated technological disruption before. Email was once a confidentiality concern. Electronic discovery reshaped litigation practice entirely. Each time, the profession adapted — not by avoiding the technology, but by building protocols that made responsible use possible.
AI is no different. The practitioners who will be most exposed are not those who use it, but those who use it carelessly — without understanding what happens to the information they type, and without the procedural guardrails that distinguish professional use from consumer use.
The Heppner case is not the end of AI in legal practice. It is the beginning of a period in which the standards for responsible use are being defined in real time, in courtrooms, ethics opinions, and institutional guidelines. The question is whether you are ahead of that standard or behind it.
The author is not providing legal advice. Practitioners should consult applicable state bar ethics opinions and institutional guidelines governing their specific practice area and jurisdiction.