Sam Altman, CEO of OpenAI, envisions a future in which people whose jobs are replaced by powerful AI tools won’t necessarily need to earn money. However, this vision is predicated on living in a turbo-charged capitalist technocracy.
“In the next five years, computer programs that can think will read legal documents and give medical advice,” Altman wrote in a 2021 post called “Moore’s Law for Everything.” In another ten, “they will do assembly-line work and maybe even become companions.” Beyond that time frame, he wrote, “they will do almost everything.” In a world where computers do almost everything, what will humans be up to?
Altman believes that within the next decade, AI will be capable of performing a wide range of tasks, potentially rendering humans obsolete in many fields. While Altman foresees the creation of new jobs along the way, the nature of these jobs remains uncertain.
OpenAI, under Altman’s leadership, has developed advanced AI programs such as ChatGPT and GPT-4, which mimic human conversation and display remarkable capabilities. As other tech giants like Google and Meta enter the AI race, fears are growing that humans will be outpaced, particularly in the workplace.
Altman proposes universal basic income (UBI) as a solution, providing a guaranteed income to supplement wages or sustain livelihoods. UBI has been tested in various forms and gained attention during the COVID-19 pandemic, addressing concerns about job losses and precarious work.
However, recent research offers a different perspective on the necessity of UBI. OpenAI’s working paper suggests that while AI will impact jobs, humans will likely continue to work alongside AI systems, with certain roles being less exposed than others.
The integration of AI into the economy will depend on factors like data availability, regulations, and power dynamics. Altman’s UBI proposal could be seen as an attempt to shape the future according to his vision, benefiting existing tech giants and limiting alternative possibilities.
The idea of UBI as presented by Altman highlights a power dynamic where the masses become shareholders in the wealth generated by AI mega-corporations. It raises concerns about power imbalances between workers and employers in a technocratic world dominated by a few profitable companies.
When he was asked in January whether OpenAI planned to “take the proceeds that you’re presuming you’re going to make someday and . . . give them back to society,” Altman demurred. Yes, the company could distribute “cash for everyone,” he said. Or “we’re [going to] invest all of this in a non-profit that does a bunch of science.”
The scenario Altman describes parallels the current situation of social platform moderators, who perform low-wage roles monitoring and correcting AI algorithms that filter out toxic content. These workers are often overlooked and subject to harsh conditions, highlighting the hidden labor behind AI-driven systems.