Sam Altman Wants OpenAI To Scale To 100 Million GPUs – Worth $3 Trillion

Sam Altman, the CEO of OpenAI, is not known for modesty, and his latest statement may be his most audacious yet. Altman recently said on X that OpenAI expects to have “well over 1 million GPUs online” by the end of 2025. For comparison, Elon Musk’s xAI runs its potent Grok 4 model on about 200,000 Nvidia H100s. Even though OpenAI intends to deploy five times that amount, Altman considers it insufficient. “Very proud of the team,” he wrote, “but they better start working on figuring out how to 100x that lol.”

The “lol” doesn’t make it a joke. Altman has consistently advocated extreme compute scaling. Earlier this year he acknowledged that OpenAI had to halt the release of GPT-4.5 because it literally ran out of GPUs, a startling problem for a well-funded, Microsoft-backed company. Compute has since become the top priority. The construction of mega data centers and sweeping infrastructure partnerships signals a strategic push that looks more like a national utility build-out than ordinary corporate growth.

At current market prices, Altman’s goal of scaling to 100 million GPUs would cost roughly $3 trillion, and it is not immediately achievable: Nvidia lacks the manufacturing capacity, and the energy requirements would be enormous. Even so, OpenAI’s Texas facility, already the world’s largest AI data center, draws about 300 megawatts and is expected to reach 1 gigawatt by 2026, roughly enough to power a city. Grid operators in Texas are already struggling to keep up.
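The headline figures can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes a unit price of about $30,000 per H100-class GPU and a draw of about 700 W per GPU; neither number appears in the article, but the price assumption is what makes 100 million GPUs come out near $3 trillion:

```python
# Back-of-envelope check of the article's figures (unit price and wattage
# are assumptions, not numbers from the article).
GPU_COUNT_2025 = 1_000_000          # "well over 1 million GPUs" by end of 2025
TARGET_GPUS = 100 * GPU_COUNT_2025  # Altman's "100x that" aspiration

PRICE_PER_GPU = 30_000  # assumed ~$30k per H100-class GPU
total_cost = TARGET_GPUS * PRICE_PER_GPU
print(f"Hardware cost: ${total_cost / 1e12:.1f} trillion")  # → $3.0 trillion

WATTS_PER_GPU = 700  # assumed per-GPU board power, H100-class
power_gw = TARGET_GPUS * WATTS_PER_GPU / 1e9
print(f"Power draw: ~{power_gw:.0f} GW")  # → ~70 GW
```

Under those assumptions, 100 million GPUs would draw around 70 gigawatts, which puts the article's point in perspective: even the 1-gigawatt Texas facility would cover only a sliver of the target.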

OpenAI is also not putting all of its eggs in Nvidia’s basket. Microsoft Azure remains its cornerstone, but Oracle partnerships are expanding its physical infrastructure, and reports of Google TPU use point to diversification. OpenAI is even exploring custom silicon, joining companies like Meta and Amazon in designing proprietary chips.

Altman’s projection is about setting the horizon, not realism. This year’s one million GPUs is the new baseline, not the endpoint. Whether 100 million GPUs is ever feasible is almost beside the point; what matters is that OpenAI intends to find out.
