Artificial intelligence has handed the legal sector another high-profile setback. A Wyoming federal judge confronted attorneys from Morgan & Morgan and the Goody Law Group after they cited nine nonexistent legal cases in litigation against Walmart and Jetson Electric Bikes. The lawsuit concerned a hoverboard fire, but the real flames came from AI-generated falsehoods.
While drafting a motion, the attorneys used an in-house AI system that fabricated judicial decisions, fabrications that were soon exposed. Once the error came to light, the lawyers moved to withdraw the faulty filing and submitted an apology in a follow-up document, admitting their deep frustration with the situation and committing to reassess how they use AI.

AI systems such as ChatGPT can produce convincing but false information when they cannot supply a valid answer, a failure mode commonly called hallucination. The legal team did not fact-check the AI-generated research before filing. As this latest case demonstrates, a growing number of attorneys are facing discipline for relying on unverified AI-generated legal research.
The judge must now decide on an appropriate response, with options ranging from monetary fines to outright exclusion from practicing law. The lead attorney sought leniency, apologizing in court to all affected parties and expressing sincere remorse. The episode adds one more warning sign about AI in the legal system: in critical professions such as law, placing absolute trust in AI can destroy a professional career.