A lawyer recently found himself in hot water after using ChatGPT, an AI language model, to draft an affidavit that included fabricated legal cases. The lawyer, Steven Schwartz from the personal injury law firm Levidow, Levidow & Oberman, admitted during a sanctions hearing before a New York judge that he had been “duped” by the AI tool, as reported by Inner City Press.
During the hearing, Judge P. Kevin Castel questioned how Schwartz had missed the fake cases, describing ChatGPT’s contributions as “legal gibberish.” The affidavit, prepared by Schwartz and his colleague Peter LoDuca, was for a lawsuit involving an individual who claimed to have been injured by a serving cart on a flight. It was later discovered that the court filing contained six bogus court cases with fabricated quotes and citations.
Schwartz attempted to defend himself by claiming that he thought ChatGPT’s outputs were mere “excerpts” and that he believed the cases simply couldn’t be found on Google. The judge, however, pointed out inconsistencies in the filing and dismissed the content as legal gibberish.
Peter LoDuca, the other lawyer involved in the case, distanced himself from the research used in the affidavit, stating in court that he should have been more skeptical. LoDuca expressed regret for the incident and vowed that such a mistake would never happen again.
The sanctions hearing concluded without Judge Castel issuing a decision regarding possible penalties for the lawyers. The law firm’s attorneys were not immediately available for comment following the hearing.
This incident serves as a cautionary tale about the pitfalls of relying solely on AI tools for complex legal work. While AI language models can assist with legal research, legal professionals must exercise due diligence and verify the accuracy and authenticity of the information these tools produce. Human oversight and critical thinking remain essential to preserving the integrity and reliability of legal proceedings.