
OpenAI CEO Sam Altman Says Bias Is Inherent In The ChatGPT Database

Photo: Sam Altman speaks onstage at the Vanity Fair New Establishment Summit, Yerba Buena Center for the Arts, San Francisco, October 6, 2015. (Michael Kovac/Getty Images for Vanity Fair)

During a candid conversation with Lex Fridman, Sam Altman, OpenAI’s CEO, discussed several topics, including the controversial question of whether GPT, the language model, is “too woke” or biased.

Altman acknowledged that the definition of “woke” has shifted over time and ultimately agreed that the model is biased and will likely remain so. In the interview, published on Saturday, he noted that OpenAI had significantly improved the model between GPT-3.5 and GPT-4, and he expressed appreciation for the critics who recognized those advancements.

However, he also acknowledged that much more work remains. Altman addressed Elon Musk’s criticisms of OpenAI’s AGI safety research, expressing sympathy for Musk’s concerns while urging him to pay more attention to the hard work of actually making AI safe.

 “Elon is obviously attacking us some on Twitter right now on a few different vectors, and I have empathy because I believe he is understandably so really stressed about AGI safety. I’m sure there are some other motivations going on too, but that’s definitely one of them,” he said.

“I definitely grew up with Elon as a hero of mine. You know, despite him being a jerk on Twitter or whatever, I’m happy he exists in the world, but I wish he would do more to look at the hard work.”

Altman explained that AGI and AI are two different things: AGI would be capable of understanding or learning any intellectual task a human can, while today’s AI systems excel only at specific tasks. He also discussed the likelihood of various AI risks.

Altman added that many past forecasts about AI’s potential and its safety issues have turned out to be inaccurate. He also spoke about “jailbreaking” in relation to OpenAI’s goal of giving people control over the models within broad bounds.

“It kinda sucks being on the side of the company being jailbroken. We want the users to have a lot of control and have the models behave how they want within broad bounds. The existence of jailbreaking shows we haven’t solved that problem yet, and the more we solve it, the less need there will be for jailbreaking. People don’t really jailbreak iPhones anymore,” said Altman.
