U.S. Judge Orders Lawyers To Sign AI Pledge, Warning ‘They Make Stuff Up’

DALLAS — A federal judge in Texas is now requiring lawyers in cases before him to certify that they did not use artificial intelligence to draft their filings without a human checking their accuracy.

U.S. District Judge Brantley Starr of the Northern District of Texas issued the requirement on Tuesday, in what appears to be a first for the federal courts.

In an interview today, Starr said he created the requirement to warn lawyers that AI tools can generate fake cases and that he may sanction attorneys who rely on AI-generated information without verifying it themselves.

“We’re at least putting lawyers on notice, who might not otherwise be on notice, that they can’t just trust those databases. They’ve got to actually verify it themselves through a traditional database,” Starr said.

In the notice about the requirement on his Dallas court’s website, Starr said generative AI tools like ChatGPT are “incredibly powerful” and have other uses in the law, but they should not be used for legal briefing.

“These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations,” the statement said.

The judge also said that while attorneys swear an oath to uphold the law and represent their clients, the AI platforms do not.

“Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle,” the notice said.

Starr said on Wednesday that he began drafting the mandate while attending a panel on artificial intelligence at a conference hosted by the 5th Circuit U.S. Court of Appeals, where the panelists demonstrated how the platforms made up bogus cases.

The judge said he considered banning the use of AI in his courtroom altogether, but he decided not to do so after conversations with Eugene Volokh, a law professor at the UCLA School of Law, and others.

Volokh said Wednesday that lawyers who use other databases for legal research might assume they can also rely on AI platforms.

“This is a way of reminding lawyers they can’t assume that,” Volokh said.

Starr issued the requirement days after another federal judge in Manhattan threatened a lawyer with sanctions over a court brief that included citations to bogus cases generated by ChatGPT.

Attorney Steven Schwartz of Levidow, Levidow & Oberman said in a sworn statement filed last week that he “greatly regrets” relying on the AI tool and was “unaware of the possibility that its contents could be false.” Schwartz did not immediately respond to a request for comment.

U.S. District Judge P. Kevin Castel will hold a June 8 hearing on whether Schwartz should be sanctioned.

Starr said that while the New York case was not the motivation behind creating the requirement, it did prompt him and Volokh to put the finishing touches on it.

He also said he and his staff will avoid using AI in their work altogether, at least for now.

“I don’t want anyone to think that there’s an algorithm out there that is deciding their case,” Starr said.

REUTERS

By JACQUELINE THOMSEN
