
Court Rules That Use of Third-Party AI Does Not Waive Work Product Protections

Learn how the Morgan ruling protects pro se litigants who use AI in litigation and what it means for lawyers using third-party AI tools in civil cases.

Almost daily, judges issue rulings on AI tools in litigation. Many of these orders are warnings about the negligent use of AI in motion practice. Then there are the gems, like Morgan v. V2X, Inc., that truly stand out for access to justice.

Much has been written about the criminal case of United States v. Heppner, where a federal court in New York held that there was no work product protection for a criminal defendant who did his own legal analysis with an AI application because his lawyer was not involved in the use of the AI tool. The Morgan case stands in opposition to the Heppner decision: in a civil lawsuit, the work product doctrine codified in the Federal Rules of Civil Procedure applies to PARTIES, not just their attorneys. A pro se litigant has the right to claim that their use of an AI tool is protected by the work product doctrine. That means lawyers' use of AI is, too.

The Case History

Magistrate Judge Maritza Dominguez Braswell entered the AI-in-litigation debate by ordering a pro se litigant in an employment case to disclose which AI tools they had used. The ruling came down to this: the work product doctrine under Federal Rule of Civil Procedure 26(b)(3) protects a pro se litigant's use of an AI application, but the litigant still had to disclose the name of the AI application they used on confidential information subject to a protective order. Morgan v. V2X, Inc., Civil Action No. 25-cv-01991-SKC-MDB, 2026 U.S. Dist. LEXIS 67939, at *1 (D. Colo. Mar. 30, 2026).

This might not sound as earth-shattering as Gideon v. Wainwright, the Supreme Court case that held that criminal defendants who cannot pay for their own lawyers have the right to have the state appoint attorneys on their behalf, but it is huge for pro se litigants who take on an army of lawyers. Judge Braswell zeroed in on the fact that a person representing themselves in a civil lawsuit has the same rights and responsibilities under the Federal Rules of Civil Procedure as a represented party. Rule 26(b)(3) states in relevant part:

...a party may not discover documents and tangible things that are prepared in anticipation of litigation or for trial by or for another party or its representative (including the other party's attorney, consultant, surety, indemnitor, insurer, or agent)...

USCS Fed Rules Civ Proc R 26(b)(3).

The lawsuit was an employment case with claims of hostile work environment, racial discrimination, and retaliation for protected activities. The plaintiff sought discovery about the defendant's insurance policy, which prompted a motion to compel. The defendant was concerned with their confidential information subject to a protective order being uploaded to plaintiff's unknown AI application. The plaintiff argued that prohibiting his use of an AI application would create "an unfair 'technological gap' by barring a pro se litigant from using modern analytical aids while Defendant's firm maintains its own proprietary AI and cloud-based systems." Morgan, at *3.

Judge Braswell framed the issue as follows:

AI is forcing litigants and courts to confront difficult questions about how and to what extent longstanding protections will apply when parties use AI to assist them in the litigation process. In particular, courts are beginning to wrestle with practical questions surrounding confidentiality, work product, and privilege. This dispute raises two such questions: (1) to what extent will work product protections apply to a pro se litigant's use of AI, and (2) to what extent should a protective order expressly restrict the use of AI?

Morgan, at *6.

Judge Braswell held that the work product doctrine applies to a pro se litigant's use of AI, because the Rule's "plain language broadly refers to things prepared in anticipation of litigation by any party, language that would 'seem to include material created by a party before retaining a lawyer as well as a party who never actually hires an attorney.'" Morgan, at *7 (quoting Jennifer A. Gundlach & Zeus Smith, Expanding the Federal Work Product Doctrine to Unrepresented Litigants, 30 GEO. J. ON POVERTY L. & POL'Y 53, 62 (2022)). Stated plainly, courts have routinely interpreted the Rule to apply to a pro se litigant's work product. Morgan, at *8.

Given that pro se litigants are held to the same standard as represented litigants, they are also afforded the same protections of the work product doctrine. Morgan, at *10.

Addressing the Heppner Decision

Judge Braswell addressed the elephant in the courtroom: United States v. Heppner, 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026). The Heppner decision involved a criminal defendant who used an AI tool without the direction or knowledge of his attorney. The Heppner court held that there was no work product protection for the defendant's AI research in Claude because his attorney was not involved. The reasoning behind the Heppner court's ruling included Claude's terms of use, the fact that an AI application is not a lawyer, and that the defendant did not use Claude at the direction of counsel.

Judge Braswell explained that the Heppner decision was not binding on the court. Furthermore, Heppner was a criminal case; Morgan was a civil case governed by the Federal Rules of Civil Procedure, and Rule 26(b)(3) protects parties, not just counsel. Finally, the Heppner defendant acted without his lawyer. There was no such gap between client and counsel in Morgan, because the party was both litigant and advocate. Morgan, at *10-11.

Work Product and AI Applications

As for work product passing through a third-party AI system, the court held that is not the same as sharing mental impressions with opposing counsel. The court found that ChatGPT, Claude, and Gemini's collection of data for training purposes does not on its own eliminate privacy expectations or waive protections, noting that "even though AI use technically 'discloses' information to a third party," it is highly unlikely that information would ever fall into the hands of an opposing party. Morgan, at *11.

Judge Braswell explained that nearly all electronic interaction passes through third-party systems. Just because a party uses Gmail does not mean they forfeit all rights to confidentiality and privacy. Id.

Judge Braswell explained that virtually all data passes through a third-party system, from smartphones to search engines. Case law has held that email subscribers have a reasonable expectation of privacy of their emails stored by internet service providers. United States v. Warshak, 631 F.3d 266, 268 (6th Cir. 2010). Furthermore, the US Supreme Court has held that a person's reasonable expectation of privacy in data is not automatically extinguished by the data being held by a third-party intermediary. Carpenter v. United States, 585 U.S. 296, 310-16 (2018).

Based on the above, the court explained that there is an arguably stronger privacy argument in the context of modern AI because the applications are designed to engage with the end user. Their features include simulating empathy and engendering trust that feels genuine. Morgan, at *12-13.

Considering the case precedents and features of AI, the court stated, "AI interactions do not automatically compromise work product protections."

All of that analysis went to the main issue of the case: does a pro se litigant get the protections of the work product doctrine for AI? Yes. However, while a pro se litigant is entitled to work product protection for their use of an AI application, that does not give them the right to conceal which product they are using. The court held that the defendant's request to know what AI product was used was reasonable. Morgan, at *16.

Protective Orders for Use of AI on Confidential Information

The court resolved the issue of confidential information being uploaded to an AI application by amending the protective order to state the following:

No party or authorized recipient may input, upload, or submit CONFIDENTIAL Information into any modern artificial intelligence platform, including any generative, analytical, or large language model-based tool ("AI"), unless the AI provider is contractually prohibited from: (1) storing or using inputs to train or improve its model; and (2) disclosing inputs to any third party except where such disclosure is essential to facilitating delivery of the service. Where disclosure to a third party is essential to service delivery, any such third party shall be bound by obligations no less protective than those required by this Order. In addition, the AI provider must contractually afford the party or authorized recipient the ability to remove or delete all CONFIDENTIAL information upon request. A party intending to use AI that it contends meets these requirements must retain written documentation of these contractual protections.

Morgan, at *20-21.

The court's order was not meant to limit a pro se litigant's use of AI. The point was to ensure that confidential information would not be entrusted to platforms lacking the safeguards stated in the protective order. Morgan, at *22.

Logikcull Insight

The promise of AI applications to help level the playing field between a solo lawyer and a litigation team cannot be overstated. The same holds true for a pro se litigant trying to have their day in court. However, the rules of professional conduct, ethical obligations, and the duty to protect confidentiality must be maintained with any tool used by anyone in court.

In an age of constant stories about the improper use of AI, it is heartening to see a court recognize the potential of AI to help an advocate analyze the facts of their case.
