Executive Summary

  • Independent, unsupervised use of generative AI to analyze legal exposure may not be privileged. A federal court held that a defendant’s AI prompts and outputs relating to a criminal investigation of his conduct were not protected after they were seized pursuant to a search warrant.
  • Platform terms matter. If an AI provider reserves rights to retain, train on, or disclose user inputs, courts may find confidentiality—and therefore privilege—compromised.
  • Structure AI use under counsel’s direction. The ruling leaves open whether counsel-directed enterprise AI use on a secure platform with strong confidentiality terms may be treated differently. Governance and process may be outcome-determinative.

On February 10, 2026, U.S. District Judge Jed Rakoff of the Southern District of New York issued a bench ruling holding that a defendant’s use of generative AI to analyze legal exposure is not protected by the attorney-client privilege or the work product doctrine. The decision has important implications as clients and non-lawyers increasingly use generative AI tools to assess legal risk, despite AI companies’ disclaimers that their tools do not provide legal advice. Use of public AI tools creates significant privilege risks because these tools often lack confidentiality protections and are typically not used under counsel’s direction.

Judge Rakoff’s Ruling

In United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Oct. 28, 2025), a federal securities fraud case against a former financial services executive, defendant Bradley Heppner used a third-party generative AI tool, Anthropic’s Claude, to input prompts about the government’s investigation and his potential legal exposure. The prompts included facts he had learned from counsel, and the platform generated written responses.

On November 4, 2025, agents arrested Heppner and searched his Dallas residence, seizing numerous electronic devices. According to defense counsel, approximately thirty-one AI-generated documents, consisting of the defendant’s prompts and the platform’s outputs, were located on the seized devices. Defense counsel asserted privilege over those materials, arguing that the defendant created them to prepare for discussions with counsel and later shared them with his attorneys. Counsel conceded, however, that the defendant prepared the materials on his own initiative, not at counsel’s direction.

The government moved for a ruling that the materials were neither privileged nor protected work product. The court granted that request.

The Court’s Core Conclusions

Judge Rakoff held that the AI documents were not protected by the attorney-client privilege or the work product doctrine because, among other reasons, they were not prepared by or at the direction of counsel, and the defendant had no reasonable expectation of confidentiality in his AI prompts and outputs. Key arguments raised by the government and considered by the court included the following:

  • AI Platforms Are Not Attorneys. Attorney-client privilege protects confidential communications between a client and counsel made for the purpose of obtaining legal advice. The AI documents were not communications with an attorney, nor were they created to obtain legal advice from an attorney. Indeed, when asked about legal matters, Anthropic’s Claude warns users to consult a “qualified attorney.” Accordingly, the court treated independent querying of an AI tool as research activity, not as a privileged communication.
  • Confidentiality Was Not Preserved. Sharing inputs and outputs with a consumer AI platform that reserves rights to retain, train on, and disclose user data indicates that those communications are not confidential, and confidentiality is essential to privilege. Claude is publicly accessible and is trained on a variety of underlying sources, including data collected from users’ prompts and outputs. Claude’s terms also warn that user data may be disclosed to “governmental regulatory authorities” and “third parties.” These features undermined any claim that the communications were made in confidence. Because Claude is a retail, publicly accessible program, the court did not reach the question of whether the analysis would differ in a “closed” enterprise environment designed to protect confidentiality.
  • Work Product Requires Attorney Direction. The work product doctrine protects materials prepared by or at the direction of counsel in anticipation of litigation. Because the defendant acted independently, the AI materials did not qualify, and Heppner’s later sharing of the AI output with his counsel did not retroactively confer protection. As the government’s motion put it: “[I]f the defendant had instead conducted Google searches or checked out certain books from the library to assist with his legal case, the underlying searches or library records would not be protected from disclosure simply because the defendant later discussed what he learned with his attorney.” The government acknowledged, however, that the analysis “might be different” if “counsel [had] directed the defendant to run the AI searches.”

Implications for Companies and Executives

Executives and compliance leaders increasingly use generative AI tools to analyze legal and regulatory exposure, organize facts, and test strategic decisions. This ruling suggests that, absent careful structuring, those interactions may not be privileged—and may become discoverable in later proceedings.

Three practical points follow:

  • Independent AI use can create discoverable material. Using AI to think through legal exposure or regulatory issues, even in preparation for speaking with counsel, may generate non-privileged documents.
  • Enterprise governance matters. If platform terms permit retention, training, or disclosure to regulators, privilege claims may fail; procurement and governance decisions should therefore weigh litigation risk alongside cybersecurity and privacy.
  • Structure and process may be outcome-determinative. This decision did not address counsel-directed use on a secure enterprise platform under strict confidentiality terms, but that distinction could matter, particularly where counsel structures and supervises prompts in real time as part of litigation preparation.

Practical Guidance

Treat AI as a highly capable but potentially disclosure-prone tool, not as a trusted legal advisor.

Carefully consider confidentiality before using AI tools. For example, assess whether the tool is a closed, enterprise program designed to protect client data and geared toward privacy. Users should also understand how providers train their AI models: whether training draws only on a closed set of documents supplied by a single client or on data collected from all users. Finally, keep in mind that publicly available, “retail” AI programs may not offer the same confidentiality protections as internal or closed-system tools.

Companies should consider:

  • involving counsel before using AI tools to analyze legal or regulatory exposure;
  • establishing formal protocols governing AI use in investigations and litigation;
  • reviewing AI platform terms for confidentiality and disclosure provisions; and
  • declining to upload privileged communications without structured oversight.

Looking Ahead

Courts are unlikely to expand privilege doctrines simply because AI tools are sophisticated or widely used. Traditional requirements—confidentiality, attorney involvement, and preparation at counsel’s direction—remain the touchstones.

As AI becomes embedded in corporate governance and compliance functions, privilege preservation will depend less on the technology itself and more on how it is used. For boards, executives, and compliance leaders, this ruling is an early reminder to structure AI use with the same care applied to any other sensitive legal communications.