On February 10, 2026, U.S. District Judge Jed Rakoff of the Southern District of New York issued a bench ruling holding that a defendant’s use of generative AI to analyze legal exposure is not protected under attorney-client privilege or the work product doctrine. See When AI Isn’t Privileged: SDNY Rules Generative AI Documents Not Protected. On February 17, 2026, Judge Rakoff issued a written opinion confirming the bench ruling and adding important analysis. This client alert outlines what the written opinion adds on confidentiality, work product, and waiver, and details the practical implications and open questions it leaves.

The court, confronted with “a question of first impression,” succinctly frames the issue and its answer:

“[W]hether, when a user communicates with a publicly available AI platform in connection with a pending criminal investigation, are the AI user’s communications protected by attorney‑client privilege or the work product doctrine? For the reasons that follow, the answer is no.”

Key Takeaways

  • Independent, unsupervised use of generative AI to analyze legal exposure may not be privileged or protected work product, and sharing these materials with counsel after generating them does not retroactively confer privilege.
  • Platform terms can defeat confidentiality. The court relied in part on Anthropic’s privacy policy to find that the AI materials were not privileged.
  • Entering privileged content into a consumer AI prompt risks waiving privilege.
  • Counsel‑directed use of a secure enterprise AI may be treated differently, but the opinion does not address that scenario.

Factual Background

In United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Oct. 28, 2025), a federal securities-fraud case against a former financial-services executive, defendant Bradley Heppner used a third-party generative AI tool, Anthropic’s Claude, entering prompts about the government’s investigation and his potential legal exposure, including facts he had learned from his counsel. The platform generated written responses.

On November 4, 2025, agents arrested Heppner and searched his Dallas residence pursuant to a search warrant, seizing several electronic devices. According to defense counsel, roughly thirty-one AI-generated documents, consisting of the defendant’s prompts and outputs, were located on the seized electronic devices. Defense counsel asserted privilege over those materials on the grounds that they were created to prepare for discussions with counsel and were later shared with counsel. Defense counsel conceded, however, that the materials were prepared by the defendant on his own initiative, not at counsel’s direction.

The government moved for a ruling that the materials were neither privileged nor protected work product. On February 10, 2026, Judge Rakoff ruled that they were not protected, and he confirmed that ruling in a written opinion issued on February 17, 2026.

Attorney‑Client Privilege Did Not Apply to the AI Documents

The opinion, quoting Second Circuit precedent, explains that attorney‑client privilege protects “communications (1) between a client and his or her attorney (2) that are intended to be, and in fact were, kept confidential (3) for the purpose of obtaining or providing legal advice.” See United States v. Mejia, 655 F.3d 126, 132 (2d Cir. 2011). Applying this test, Judge Rakoff held that the AI exchanges were not protected by the attorney-client privilege because they failed to satisfy at least two of the test’s elements.

First, the AI documents were not communications between Heppner and his counsel or an agent of his counsel but were, instead, communications with a third‑party AI platform. And even if the AI documents were “more akin to the use of other Internet-based software, such as cloud-based word processing applications . . . [no fiduciary] relationship exists, or could exist, between an AI user and a platform such as Claude.” In short, these were not the type of communications that privilege, which is interpreted narrowly in the Second Circuit, is intended to protect.

Second, “the communications memorialized in the AI documents were not confidential.” The opinion anchors this conclusion in the platform’s privacy policy, which allows collection of users’ inputs and Claude’s outputs, permits use of that data to train Claude, and reserves the right to disclose such data to third parties, including the government, “even in the absence of a subpoena compelling [Anthropic] to do so.” In light of those terms, the court held that Heppner lacked any reasonable expectation that his prompts and Claude’s responses would be confidential. Judge Rakoff also distinguished cases protecting confidential client notes prepared for counsel because, here, the client first shared the “equivalent of his notes with a third-party, Claude.”

Third, in a “closer call,” the purpose of Heppner’s communications with Claude was not to obtain legal advice. And Claude itself states that it “can’t provide formal legal advice.” Importantly, the court also noted that if “counsel [had] directed Heppner to use Claude, Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protections of the attorney-client privilege.” But Heppner had instead acted “of his own volition,” and his later sharing the AI documents with his counsel did not retroactively confer privilege.

Work Product Protection Did Not Apply to the AI Documents

The opinion separately addresses the applicability of the work product doctrine to the AI documents. As summarized by the court, the work product doctrine provides qualified protection to materials prepared by or at the direction of counsel in anticipation of litigation. According to Judge Rakoff, the purpose of the doctrine is to protect defense counsel’s strategy. Accordingly, the court found that the AI documents were not protected by the work product doctrine because (1) they were neither prepared by nor at the direction of Heppner’s counsel, and (2) as defense counsel conceded, the AI documents did not reflect defense counsel’s strategy when created. The opinion expressly declines to adopt a broader view that would protect a client’s independently created materials simply because litigation was contemplated.

Implications Beyond Privilege and Work Product

By tying its analysis to the AI tool provider’s privacy policy and the user’s assent, the opinion applies a familiar third‑party disclosure framework: voluntarily sharing sensitive information with a platform that retains, trains on, and may disclose that information defeats a reasonable expectation of confidentiality. The court also cited another recent SDNY decision noting that AI users lack substantial privacy interests in conversations “voluntarily disclosed” to a publicly accessible platform that “retains [them] in the normal course of its business.” See In re OpenAI, Inc., Copyright Infringement Litig., No. 25‑MD‑3143, ECF No. 1021, at 3 (S.D.N.Y. Jan. 5, 2026).

This framing will shape discovery disputes over AI‑mediated content and should inform corporate policies for handling sensitive data. It also has implications for other legal contexts that turn on confidentiality or reasonable expectations of privacy, including internal‑investigation protocols, regulatory safeguards for nonpublic information, and contractual confidentiality obligations (such as NDAs) that may be breached if protected information is shared with AI platforms whose terms permit retention or disclosure.

Waiver Risks When Clients Incorporate Counsel’s Advice into Prompts

Sharing privileged content with a third-party consumer AI tool can waive privilege, just as sharing it with any other third party would. The court underscored that even if a client inputs privileged information learned from counsel, sharing it with a platform like Claude waives privilege, especially in light of Anthropic’s privacy policy, which warns that users’ prompts and Claude’s outputs may be shared further. Entering attorney emails, memoranda, or other privileged content into a consumer AI prompt thus risks both waiver and the creation of discoverable prompt/output records that adversaries may obtain through seizure, subpoena, or civil discovery.

Practical Recommendations After the Written Opinion

Ultimately, AI platform terms may determine whether privilege survives a challenge. A consumer tool with broad retention, training, and disclosure rights is a poor vehicle for privileged or work‑product‑adjacent tasks. By contrast, an enterprise deployment that contractually disclaims training on customer data, restricts retention, and limits disclosure—combined with documented counsel direction and supervision—materially strengthens privilege and work product claims, even if it does not guarantee protection. To preserve privilege and minimize discovery risk, organizations should:

  • Use enterprise AI tools that contractually bar training on customer data, restrict retention, and limit disclosure.
  • Involve counsel and document when non‑lawyers use AI at counsel’s direction; privilege log entries for AI-generated materials should indicate when they were created at counsel’s direction and identify contractual terms supporting confidentiality.
  • Prohibit entering privileged or sensitive information into consumer AI tools; treat prompts and outputs as potentially discoverable records and manage them accordingly.
  • Update AI procurement, policies, and training to align with the opinion’s focus on platform terms and user assent.

Open Questions After the Written Opinion

The practical steps above can reduce risk, but important questions remain about what companies must do to preserve privilege, including how closely counsel must direct AI use, how courts will treat waiver and dual-purpose materials, and whether different platform terms or jurisdictions will yield different outcomes. Those responsible for AI governance should keep the following unresolved questions in mind when setting policy, negotiating vendor contracts, and preparing for privilege challenges.

  • Counsel‑directed, enterprise deployments. How much direction, documentation, and control are required for an AI platform to function as an agent of counsel, such that privilege or work product attaches?
  • Scope of waiver. When privileged content is pasted into a consumer AI prompt, will courts find narrow, document‑specific waiver or broader subject‑matter waiver?
  • Dual‑purpose work product. How will courts treat mixed business‑and‑legal use cases when AI prompts are created by in-house attorneys who also serve business functions?
  • Platform and jurisdictional variability. How will differing platform terms (e.g., no‑training/no‑retention enterprise options) and legal authority influence outcomes in civil cases, regulatory inquiries, and other jurisdictions?

Conclusion

Judge Rakoff’s written opinion cements two points. First, confidentiality rises or falls on the platform’s terms and the user’s agreement to them. Second, work product protects lawyers’ mental processes and materials prepared by or at their direction; client‑generated AI content produced independently is unlikely to qualify. For executives and compliance leaders, the guidance is straightforward: use enterprise AI with strong confidentiality terms, route sensitive AI use through counsel, and prohibit entering privileged material into consumer tools.