IP Alerts

SDNY Rejects Privilege and Work Product in the Context of Consumer Generative AI Use

March 10, 2026


In a decision of first impression, the U.S. District Court for the Southern District of New York, in United States v. Heppner, No. 25‑cr‑503 (JSR), held that a criminal defendant's written exchanges with a generative artificial intelligence (AI) platform, specifically Anthropic's "Claude" tool, were not protected by the attorney-client privilege or the work-product doctrine.

The materials at issue were generated by the defendant using a publicly available version of Claude. The district court granted the Government's motion to compel production of approximately thirty‑one documents reflecting the defendant's exchanges with the consumer-grade AI tool, notwithstanding the defendant's contention that the materials were prepared to facilitate legal advice and later shared with counsel. Although the documents were created after the defendant had received a grand jury subpoena and retained counsel, they were generated on the defendant's own initiative and without direction from his attorneys. Importantly, they were also generated on the public, consumer-grade version of Claude, whose privacy terms, according to the district court, gave users no expectation of privacy.

Applying settled Second Circuit law, the district court first concluded that the attorney‑client privilege did not apply because the AI documents failed to satisfy the basic elements of a privileged communication. Most fundamentally, the communications were not between a client and an attorney. The district court emphasized that Claude is not a lawyer and cannot have any attorney‑client relationship with its users, observing that recognized privileges depend on a “trusting human relationship” with a licensed professional owing fiduciary duties and subject to discipline—conditions that cannot exist between a user and a public AI platform. On that basis alone, the privilege claim failed.

The district court further held that the communications lacked the requisite confidentiality. The defendant's use of the consumer version of Claude was governed by a privacy policy explicitly permitting user inputs and outputs to be retained by the platform, used to train the model, and disclosed to third parties, including governmental authorities, in connection with litigation and potentially even absent a subpoena. In light of these disclosures, the district court concluded that the defendant could not have had a reasonable expectation of confidentiality in his communications with the platform.

Additionally, the district court determined the communications were not made for the purpose of obtaining legal advice within the meaning of the privilege. Because counsel did not direct or supervise the AI use, the relevant inquiry was whether the defendant intended to obtain legal advice from the AI itself. The district court noted that Claude expressly disclaims providing legal advice and instructs users to consult licensed attorneys. The fact that the defendant later shared the AI outputs with counsel did not retroactively transform non‑privileged materials into privileged communications, a proposition foreclosed by longstanding precedent.

The district court likewise rejected application of the work product doctrine. While acknowledging that the documents were created in anticipation of litigation, Judge Rakoff held that work product protection is aimed at safeguarding the mental impressions and strategy of counsel, and generally applies only to materials prepared by or at the direction of an attorney or the attorney’s agent. Here, the defendant acted on his own volition, was not functioning as counsel’s agent, and the documents did not reflect counsel’s strategic thinking at the time they were created. Extending work product protection to such materials, the district court reasoned, would undermine the doctrine’s core purpose.

The Heppner decision underscores that the use of generative AI does not alter traditional privilege doctrines and confirms that exchanges with consumer-grade AI platforms carry no expectation of confidentiality. Because client‑generated AI materials may be treated no differently from non-confidential disclosures to any other third party, with potentially significant consequences for litigation strategy and information governance, ensuring careful use of secured AI platforms is critical.

For more information on this topic, please contact Fitch Even partner Steven M. Freeland, author of this alert.

Fitch Even IP Alert®

Partner

Steven M. Freeland

Steven M. Freeland practices in all areas of intellectual property law, focusing primarily on the development, protection, and management of intellectual property. Steve assists clients with sophisticated patent portfolio management and the prosecution of complex patents, helping them to manage their patent assets using strategies tailored to further their business objectives.