OpenAI, the maker of ChatGPT, has raised concerns about the proposed AI legislation in the European Union (EU) and hinted that it could leave the region if the company is unable to comply. The EU's planned legislation would be the first of its kind to specifically regulate artificial intelligence, potentially requiring generative AI companies like OpenAI to disclose the copyrighted material used to train their systems.
According to OpenAI CEO Sam Altman, the current draft of the EU AI Act is overly restrictive, though he is optimistic that the restrictions will be revised. Critics argue that AI companies exploit the work of artists, singers, and performers by training their systems to mimic those creators' output without due acknowledgement.
Altman has expressed concern that some of the proposed safety and transparency requirements would be technically infeasible for OpenAI to implement. Despite these reservations, he remains optimistic about AI's potential to create jobs and reduce inequality.
In an effort to address the risks associated with AI, Altman met with UK Prime Minister Rishi Sunak and leaders from other AI companies to discuss the need for voluntary measures and regulation. The discussion focused on managing risks such as disinformation, national security threats, and the potential dangers posed by super-intelligent AI systems.
At the G7 summit in Hiroshima, world leaders emphasized the importance of international cooperation in regulating AI and creating trustworthy systems. The European Commission aims to develop an AI pact with Alphabet, Google’s parent company, before any EU legislation takes effect.
Thierry Breton, the EU's industry chief, likewise stressed the necessity of international collaboration in regulating AI. He met with Google CEO Sundar Pichai to discuss the development of a voluntary AI pact and the need for transparency. Silicon Valley veteran Tim O'Reilly suggested that a good starting point would be mandating transparency and establishing regulatory institutions to ensure accountability.
As discussions on AI regulation continue, it is evident that finding a balance between fostering innovation and protecting the rights of creators and users is crucial.