This Lawyer Is In Hot Water After ChatGPT Cited Non-Existent Cases In His Legal Filings

A lawyer’s reliance on AI in legal proceedings has taken a disastrous turn, highlighting the risks of using automated tools without proper verification. Attorney Steven Schwartz, a seasoned practitioner at Levidow, Levidow & Oberman, now faces a potentially career-altering predicament after using ChatGPT, an AI chatbot, to prepare his legal filings.

The case in question, Mata v. Avianca, involved a passenger suing the airline over a knee injury caused by a serving cart during a flight. As part of the proceedings, Schwartz submitted a brief opposing Avianca’s motion to dismiss, citing a series of prior court decisions. Unbeknownst to him, the chatbot had supplied a list of nonexistent cases, including Varghese v. China Southern Airlines, Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines.

Avianca’s legal team and the presiding judge quickly discovered that none of these cases could be located. In an affidavit, Schwartz admitted to using ChatGPT to supplement his own research and said he had been unaware that the chatbot could fabricate the information it provided. ChatGPT, like any AI language model, generates responses to user prompts without independently verifying their accuracy, and it can present invented cases, complete with plausible-looking citations, as confidently as real ones.

This unprecedented situation has prompted the judge to schedule a hearing next month to consider potential sanctions against Schwartz. The court must now grapple with the implications of a lawyer submitting a legal brief built on fictitious court decisions and citations.

This incident serves as a stark reminder of the limitations and potential dangers of relying solely on AI, especially in a high-stakes field such as law. While AI tools can be valuable resources for legal professionals, their output should always be checked through thorough human research and verification against primary sources.
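Part of that verification step can be systematized. The sketch below is a hypothetical illustration in Python (the regex, function name, and sample text are assumptions for this example, not part of any real filing workflow): it extracts citation-like strings from an AI-drafted passage and emits a checklist so a human researcher can confirm each one in a primary source such as Westlaw, LexisNexis, or PACER before anything is filed.

```python
import re

# Illustrative "trust but verify" sketch: pull citation-like strings out of
# an AI-drafted brief so a human can confirm each one against a primary
# source before filing. The pattern below covers only common federal
# reporter formats (e.g., "925 F.3d 1339", "516 F. Supp. 2d 1033") and is
# a deliberate simplification, not a substitute for legal research tools.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"
    r"(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?)"
    r"\s+\d{1,4}\b"
)

def citation_checklist(draft_text: str) -> list[str]:
    """Return the unique citation-like strings found in the draft."""
    seen: list[str] = []
    for match in CITATION_RE.findall(draft_text):
        if match not in seen:
            seen.append(match)
    return seen

if __name__ == "__main__":
    # Hypothetical AI-drafted passage; this citation is one of the
    # fabricated ones from the actual brief.
    draft = (
        "As held in Varghese v. China Southern Airlines, 925 F.3d 1339 "
        "(11th Cir. 2019), the limitations period is tolled..."
    )
    for cite in citation_checklist(draft):
        print(f"[ ] Verify against a primary source: {cite}")
```

Even a crude pass like this would have flagged each fabricated citation for manual lookup, which is the point: a tool can narrow the work, but a person still has to do the verifying.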

As the legal community works out how to integrate AI into its workflows, it becomes crucial to establish guidelines and safeguards that ensure the reliability and integrity of AI-generated information. Legal practitioners must exercise caution when using AI tools and remain accountable for the accuracy of the information they present in court.

The Mata v. Avianca case underscores how urgently the legal profession needs to confront the ethical and practical ramifications of AI. It is a cautionary tale that should compel legal professionals and decision-makers to carefully assess the use of artificial intelligence in courtroom procedures and to create protocols that prevent similar incidents in the future.
