Since its public debut last year, ChatGPT has been hailed as a game changer, with users bombarding the bot with requests to write code, essays, letters, articles, and jokes.
The free tool generates content in response to a prompt, anything from articles and essays to jokes and even poetry, and its widespread appeal has been accompanied by worries about plagiarism and copyright.
With a remarkable 100 million users in only two months, one might imagine that the software's developers would be confident in their decision to make it available to the public. According to a TIME interview with Mira Murati, OpenAI's chief technology officer, who oversaw the product's development, they were not.
She believes AI technology should be regulated because it could be exploited by “bad actors.” Murati said the company did not anticipate such enthusiasm for its “child” when it was released.
ChatGPT, like other AI-powered technologies based on a language model, may “make up facts,” she added.
However, its popularity has raised ethical concerns, Murati said: such tools “may be misused or used by bad actors,” raising questions about how to regulate the technology globally.
When asked whether firms like OpenAI or governments should be in charge of regulating the technology, Murati stated, “It’s important for OpenAI and companies like ours to put this into the public consciousness in a controlled and responsible way.”
She emphasized, however, that the company will need all the help it can get, including from regulators, governments, and everyone else.
“It’s not too early to regulate it,” she said.