The New York Times has filed a lawsuit against tech giant Microsoft and AI developer OpenAI in a legal maneuver that could reshape the intersection of artificial intelligence and journalism.
The lawsuit, filed in the U.S. District Court for the Southern District of New York, contends that Microsoft, a key investor in and provider of cloud computing technology to OpenAI, and OpenAI itself owe “billions of dollars in statutory and actual damages” for the “unlawful copying and use” of The Times’s content. The newspaper asserts that the AI models developed by the defendants, notably GPT-4, directly compete with its content and limit its commercial opportunities.
The New York Times acknowledges the potential benefits of AI, stating that it “recognizes the power and potential of GenAI for the public and journalism.” However, it emphasizes that journalistic material should be used for commercial gain only with permission from the source. The lawsuit alleges that Microsoft and OpenAI failed to seek such consent.
OpenAI, responding to the legal action, expressed surprise and disappointment, noting ongoing productive conversations with The New York Times. The AI developer emphasized its commitment to respecting the rights of content creators and voiced hope for a mutually beneficial resolution.
The legal proceedings involve Susman Godfrey, the litigation firm known for representing Dominion Voting Systems in a high-profile defamation suit against Fox News. The lawsuit against Microsoft and OpenAI echoes a separate case in which author Julian Sancton and other writers accused the companies of using copyrighted materials without permission to train AI models, including ChatGPT.
Concerns among media publishers about AI models utilizing their content without proper authorization have grown, with generative AI tools such as ChatGPT producing outputs that sometimes closely resemble the source material. Publishers fear potential impacts on traffic and revenues as AI-generated content competes with original journalism.
The lawsuit highlights instances where GPT-4 allegedly produced altered versions of New York Times articles. The Times argues that the AI’s reproduction and modification of its content limits the newspaper’s commercial opportunities and infringes on copyrightable expression. Examples cited include removing product links from The Times’s Wirecutter reviews, impacting potential referral revenue.
As the legal battle unfolds, it brings to the forefront the intricate challenges and ethical considerations surrounding the use of AI in journalism, questioning the boundaries of fair use and the need for collaborative frameworks between technology developers and content creators.