In a cringe-inducing court hearing, a lawyer who relied on A.I. to craft a motion full of made-up case law said he “did not comprehend” that the chatbot could lead him astray.
Mr. [Steven] Schwartz, who has practiced law in New York for 30 years, said in a declaration filed with the judge [recently] that he had learned about ChatGPT from his college-aged children and from articles, but that he had never used it professionally.
He told Judge Castel on [June 8] that he had believed ChatGPT had greater reach than standard databases.
“I heard about this new site, which I falsely assumed was, like, a super search engine,” Mr. Schwartz said.
ChatGPT and other large language models do not, in fact, search a database of verified sources; they produce realistic-sounding responses by predicting which fragments of text are statistically likely to follow a given sequence, based on a model that has ingested billions of examples of text pulled from all over the internet.
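To see why such a system can sound authoritative while being wrong, consider a drastically simplified sketch of the underlying idea: a toy word-pair model written in Python. Real systems use enormous neural networks rather than simple counts, and the corpus below is invented for illustration, but the principle of generating text by predicting the next word from observed statistics is the same.

    import random
    from collections import Counter, defaultdict

    # Toy corpus standing in for the billions of examples a real model ingests.
    corpus = "the court filed the motion and the court denied the motion".split()

    # Count which word follows which -- a crude stand-in for the
    # statistics a large language model learns during training.
    follows = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follows[current_word][next_word] += 1

    def pick_next(word):
        # Sample the next word in proportion to how often it followed
        # this one in the training text: plausible continuations,
        # with no notion of whether the result is true.
        candidates = follows[word]
        words = list(candidates)
        weights = list(candidates.values())
        return random.choices(words, weights=weights)[0]

    # Generate a short "response" one word at a time.
    word = "the"
    output = [word]
    for _ in range(6):
        word = pick_next(word)
        output.append(word)
    print(" ".join(output))

A model built this way produces fluent-looking sequences because it has seen which words tend to go together; nothing in it checks whether the output corresponds to a real case, a real citation, or anything true at all.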
Irina Raicu, who directs the internet ethics program at Santa Clara University, said [recently] that the Avianca case clearly showed what critics of such models have been saying, “which is that the vast majority of people who are playing with them and using them don’t really understand what they are and how they work, and in particular what their limitations are.”