After Heppner: Building the AI Governance Framework Your Corporate Legal Team Needs
On February 10, 2026, Judge Jed S. Rakoff of the Southern District of New York issued a ruling that should be required reading for every in-house legal team in the US. In United States v. Heppner, the court held that documents created using generative AI were not protected by attorney-client privilege or the work-product doctrine. The defendant had used a public-facing AI platform independently, outside the direction or supervision of counsel.
AI tools have become a fixture in legal departments everywhere—used to draft memos, synthesize research, assess regulatory exposure, and much more. But Heppner draws a sharp line: if your team is using AI without the proper structure, those outputs may be fair game for opposing counsel, regulators, and government investigators.
The good news is that the ruling also points toward a solution. Here’s how in-house legal teams can build an AI governance framework that preserves privilege, manages risk, and still captures the efficiency gains that make these tools valuable.
1. Understand What Heppner Actually Prohibits
The court's logic rested on two familiar privilege pillars. First, attorney-client privilege requires confidentiality—and sharing information with a public-facing AI platform is functionally the same as sharing it with a third party. Second, work-product protection requires that materials be prepared at counsel's direction or under their supervision. When an employee independently queries an AI tool to analyze legal exposure, neither condition is satisfied.
Critically, the court rejected the argument that privilege could be retroactively conferred by later sharing the AI-generated documents with lawyers. The privilege analysis turns on how the materials were created, not what happened to them afterward.
2. Audit Your Current AI Use Before You Build Anything New
Before drafting a single policy, legal ops leaders need a clear picture of how AI is actually being used today. That means asking hard questions across the department:
- Which AI tools are team members currently using, and are any of them public-facing consumer platforms?
- Are lawyers directing and supervising AI use, or are paralegals, analysts, and business partners running queries independently?
- What categories of information are being input into these tools? Is confidential client data, litigation strategy, or regulatory analysis being processed externally?
- Does the platform retain inputs for model training? What does the privacy policy actually say?
This audit isn’t just a formality; it’s the baseline for everything that follows, and it may surface practices that need to be corrected immediately.
3. Establish a Two-Tier Tool Architecture
Not all AI use carries the same risk, and your governance framework should reflect that. A practical approach is to distinguish between two categories of tools and use cases.
The first tier covers approved enterprise AI platforms—closed systems with strong contractual privacy protections, no data retention for training, and robust security certifications. These are the tools your team should use for anything touching confidential legal matters, regulatory exposure, litigation strategy, or privileged communications. Vet these platforms carefully and document your due diligence.
The second tier covers general-purpose or consumer AI tools. These may be appropriate for lower-stakes tasks—public research, drafting generic templates, summarizing publicly available materials—but should be categorically prohibited from handling anything that could implicate privilege or confidentiality. Make that line explicit in your policy, and make sure everyone understands where it falls.
4. Embed Legal Oversight Across the Organization
This is the element Heppner underscores most forcefully. The work-product doctrine requires that materials be prepared at counsel’s direction, which means legal oversight can’t be nominal or after the fact. And it can't stop at the legal department. If anyone in the business is using AI to assess legal exposure or handle information that could implicate privilege, the legal team needs to be involved from the start.
In practice, this means lawyers should define the scope of any AI-assisted legal analysis before it begins—whether that work is happening inside or outside the legal department—review and direct the prompts used in sensitive matters, actively engage with the outputs, and maintain a record that reflects their supervisory role throughout. When an attorney directs the use of an AI tool in this way, applying legal judgment at each step, the case for privilege protection is considerably stronger.
5. Write the Policy, Train the Team, and Document Everything
Three components are non-negotiable.
Written policy. Develop a clear, specific AI use policy for the legal department. It should identify approved tools, prohibited use cases, required oversight procedures for sensitive matters, and consequences for non-compliance. Vague principles aren’t enough—the policy needs to be operationally specific.
Ongoing training. Legal professionals and their support staff need to understand not just what the rules are, but why they exist. Walk through the Heppner facts. Explain what privilege waiver actually means. Make the stakes concrete. Legal professionals are best positioned to lead that training—but the audience needs to be company-wide. Training that explains the reasoning behind a policy is far more likely to produce durable compliance than a checklist alone.
Documentation. For any AI-assisted work that touches a sensitive legal matter, maintain records of legal direction, the tools used, and the supervisory steps taken. If privilege is ever challenged, this documentation may be the difference between protection and disclosure.
The Bottom Line
Heppner is not a reason to stop using AI. It is a reason to use it thoughtfully. The efficiency gains are real—but so is the legal exposure when AI tools are deployed without the structures that privilege law requires.
In-house legal teams that put these governance structures in place can capture the speed and analytical power of generative AI while keeping their most sensitive legal work protected.
Litera’s AI solutions are built with governance in mind. Contact our team to learn how Litera helps legal teams capture AI's efficiency gains while maintaining privilege protections.