Robert Bergman

Privacy Regulations for Users of AI Models: A Free Practitioner's Field Manual

privacy · compliance · AI · field manual · HIPAA · GDPR · CCPA

Every conversation I have with a compliance officer in 2026 lands in the same place. They know the rules are changing. They know AI is making the rules harder to follow. What they want, and what almost nobody is publishing, is a single working reference that explains the landscape, the AI-specific pressure points, and what to actually do on Monday morning.

So I wrote one. It is called Privacy Regulations for Users of AI Models: A Practitioner’s Guide for Compliance-Driven Professions, and there is a free PDF download at the bottom of this post.

This post is the short version of why it exists, what is in it, and who I think will get the most out of it.

Why This Field Manual

The compliance landscape in the United States has fragmented to the point where it is genuinely difficult to know whether a given AI deployment is lawful. We have 20 comprehensive state privacy laws on the books. We have HIPAA, GLBA, FCRA, FERPA, COPPA, and the FTC Act doing the heavy lifting at the federal level. We have GDPR, the EU AI Act, and PIPL reaching into U.S. firms through customers and vendors. And we have state-bar opinions, NIST frameworks, ISO standards, and regulator enforcement actions piling up faster than any one practitioner can read them.

Then somebody on your team pastes a settlement memo into ChatGPT to summarize it. Or a nurse drafts a discharge note in a public model. Or a financial advisor uses an unauthorized AI tool to score a client’s credit risk. Each of those is a privacy event with a paper trail, and the rules that govern them were written before the tool existed.

I wrote this manual for the people who have to answer for that. Compliance officers, general counsel, managing partners, healthcare administrators, financial firm CCOs, mediators, and anyone else whose job description now quietly includes “decide what AI we use and what we tell employees not to do with it.”

It is not a legal treatise. It is a field manual.

What Is Inside

The manual runs about 60 pages across four parts and six appendices. The structure is deliberately modular. You can read it cover to cover or skip to the part that matches your role.

Part I: Foundations. A brief history of U.S. privacy law from Warren and Brandeis through the sectoral era, a timeline of major regulations from 1970 to 2026, and a chapter on why the state-by-state patchwork creates operational friction that AI deployment makes sharply worse.

Part II: The Regulations in Depth. Federal sectoral laws (HIPAA, GLBA, FCRA, FERPA, COPPA), California’s CCPA and CPRA, the 20-state comprehensive-law landscape, and the international regimes (GDPR, EU AI Act, PIPL, LGPD) that reach U.S. firms through their data flows whether they intended it or not.

Part III: AI-Specific Legal Exposures. This is where the manual pays for itself. Three chapters cover cloud-based servers and privacy law, public AI models (training data provenance, prompt and output risk, enterprise versus consumer tiers, agents and tool use), and a verified set of enforcement actions from the Italian Garante’s OpenAI ban to the Mata v. Avianca fabricated-citation sanctions and California’s Sephora settlement.

Part IV: Practical Guidance. Industry-specific playbooks for legal practice (ABA Formal Opinion 512, privilege preservation, e-discovery), healthcare (HIPAA BAA checklists, PHI de-identification, state health-data laws like Washington’s MHMDA), finance (Safeguards Rule, SEC and FINRA, FCRA, AML, NAIC), and mediation (the area least covered in the AI privacy discussion, with the sharpest confidentiality risks). The final chapter walks through how to build a minimum-viable AI privacy program: data inventory, DPIA, vendor due diligence, AUP, training, monitoring, incident response.

Appendices. A master comparison table of all 34 regulations covered in the manual (the single most useful artifact for most readers, photocopy-ready), HIPAA Safe Harbor identifiers, an example acceptable-use policy for AI tools, a vendor due diligence questionnaire framework, a glossary, and a source list.

Who It Is For

Four audiences map cleanly onto Part IV’s playbooks:

  • Legal practices weighing whether to use generative AI for drafting, research, or document review without waiving privilege or violating ABA confidentiality duties.
  • Healthcare organizations that need to deploy AI tools without breaking HIPAA or running afoul of Washington’s My Health My Data Act and parallel state laws.
  • Financial firms dealing with the technology-neutral Safeguards Rule, FCRA exposure on AI-assisted credit decisions, and the NAIC’s AI System Program requirements.
  • Mediators and ADR professionals who face genuinely sharp confidentiality risks when mediation communications meet public LLMs, an area with almost no published guidance.

If you are not in one of those four categories but you sit on the compliance side of any organization adopting AI, the foundational chapters and the master comparison table will still earn back the time it takes to read them.

The Honest Caveats

A few things this manual does not pretend to be.

It is not legal advice. It does not create an attorney-client relationship. Before you act on anything in it, you should talk to counsel licensed in the relevant jurisdiction. That is in the front matter and I mean it.

It is a snapshot. Privacy and AI regulation move faster than any printed reference can keep up with. The statutes, regulations, and enforcement postures cited were current as of early 2026. A few have probably already shifted by the time you read this. The structure is more durable than the specifics.

It is U.S.-centric. International regimes appear because they reach U.S. firms, not because the manual treats them comprehensively. If your primary obligation is to GDPR or PIPL, you will need to supplement.

Why We Made It Free

We make a desktop application called PII Anomalyzer that detects and redacts PII across documents, entirely offline, with nothing transmitted to the cloud. Several chapters of the manual end up in the same place: if you can find sensitive data in your files and remove it before it leaves your environment, most of your compliance obligations get tractable. That is a useful thing for us to publish even if no reader ever buys our software, because the operational discipline outlives any specific tool.
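The minimize-and-redact discipline is simple enough to sketch in a few lines. This is a toy illustration, not the product: the pattern names, placeholders, and sample text are mine, and real PII detection needs far broader coverage (names, addresses, medical record numbers, contextual checks) than three regular expressions.

```python
import re

# Illustrative patterns only; production PII detection covers many more
# identifier types and uses context, not just surface shape.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

memo = "Reach Jane at jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
print(redact(memo))
# Reach Jane at [EMAIL REDACTED] or [PHONE REDACTED]; SSN [SSN REDACTED].
```

The point of the sketch is the order of operations: the document is scanned and scrubbed locally, before anything is pasted into a model or uploaded to a vendor, which is what keeps most of the obligations in the manual tractable.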

It also feels honest. The compliance officers I talk to are paying enough lawyers and consultants already. A free PDF that explains the landscape clearly is a better marketing artifact for what we actually sell than another whitepaper full of stock photography.

Get the Field Manual

Download the PDF (1.8 MB)

No email required. The link is the link.

Optional

Want updates on new privacy posts?

Drop your email in the prompt that appears when you click the download button above, and we'll let you know when the next field-manual edition or a major regulatory shift lands. No more than once a month. Unsubscribe anytime.


If the field manual is useful, forward it to the colleague who has been quietly fielding your AI compliance questions for the last six months. If a section misses the mark, tell me so the next revision is sharper than the first.


Looking for a tool that fits the workflow described in Chapters 9 and 11? PII Anomalyzer detects 55+ types of PII across PDF, Word, Excel, and scanned images, runs entirely on your desktop, and never transmits documents to the cloud. The minimize-and-redact discipline the manual describes is exactly what the application is built for. See pricing or start a 7-day free trial.

Robert Bergman is CEO of Southwest Management Technology and Next Level Mediation, and the author of Privacy Regulations for Users of AI Models (First Edition, 2026).