Sam Altman said artificial intelligence could eventually be delivered as a metered utility, similar to electricity or water, with users paying for access based on how much computing power they consume. The concept reflects a broader shift in how technology companies are positioning large-scale AI systems: as infrastructure services rather than standalone software products.
Speaking at the BlackRock Infrastructure Summit in Washington, DC, Altman said companies developing advanced AI models are moving toward a model in which intelligence is delivered on demand through usage-based pricing. In this approach, AI services would be measured and billed in units known as tokens, which represent the amount of data an AI system processes during a request or response, according to Business Insider.
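The metering model described above can be sketched in a few lines. This is an illustrative example only: the function name and per-million-token rates are hypothetical and do not reflect any provider's actual pricing.

```python
# Illustrative sketch of usage-based token billing. Rates are assumed
# placeholder values, not any real provider's prices.

def estimate_cost(input_tokens: int, output_tokens: int,
                  rate_in_per_million: float = 2.50,
                  rate_out_per_million: float = 10.00) -> float:
    """Return the dollar cost of one metered request.

    Tokens are the billing unit: each request is measured by how many
    tokens the model reads (input) and how many it generates (output).
    """
    cost = (input_tokens / 1_000_000) * rate_in_per_million
    cost += (output_tokens / 1_000_000) * rate_out_per_million
    return round(cost, 6)

# A request that sends 1,200 tokens and receives 350 tokens back:
print(estimate_cost(1_200, 350))
```

Under this kind of scheme, a user's bill scales directly with the computing work their requests consume, which is what makes the electricity analogy apt.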
Under this model, access to AI would depend on available computing capacity. Large language models require extensive computational resources to operate, relying on specialized chips and large-scale data center infrastructure. Altman said that if companies cannot build sufficient compute capacity to meet demand, AI services could become expensive or limited in availability.
The concept of AI as a utility reflects the increasing scale of infrastructure required to support modern machine learning systems. Running advanced AI models involves large clusters of graphics processing units and other specialized processors distributed across data centers. These facilities require substantial electrical power and cooling capacity to maintain continuous operation.
Industry analysts note that demand for AI computing has expanded rapidly in recent years as businesses integrate machine learning into software development, data analysis, and automation systems. Technology companies are responding by investing heavily in computing infrastructure capable of training and running large models.
At the 2026 Consumer Electronics Show, AMD CEO Lisa Su said that global AI workloads could require more than ten yottaflops of computing capacity within the next five years. A yottaflop is one septillion (10^24) floating-point operations per second, a scale many times larger than the computing capacity used by AI systems only a few years ago.
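A quick back-of-the-envelope calculation puts that figure in context. The comparison point, roughly 10^18 operations per second for today's fastest exascale supercomputers, is the only assumption added here:

```python
# Scale comparison for the projection quoted above.
# One yottaflop = 10**24 floating-point operations per second;
# exascale supercomputers (the fastest machines today) run at ~10**18.

YOTTAFLOP = 10**24
EXAFLOP = 10**18

projected = 10 * YOTTAFLOP  # the "more than ten yottaflops" projection

# How many exascale machines would that equal?
print(projected // EXAFLOP)  # 10,000,000 -- ten million exaflop systems
```

In other words, the projected demand equals roughly ten million of today's top-tier supercomputers running simultaneously.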
Meeting that demand requires significant expansion of data centers and supporting energy infrastructure. Large AI data centers can consume electricity comparable to that used by small cities, raising concerns about power grid capacity and long term energy supply.
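The small-city comparison can be made concrete with rough arithmetic. Both inputs below are assumed ballpark figures, not numbers from the article: a 100 MW facility is a common size for a large AI data center, and an average US household draws roughly 1.2 kW of continuous power.

```python
# Rough, illustrative arithmetic for the city comparison above.
# The facility size and household draw are assumed ballpark figures.

DATA_CENTER_MW = 100      # assumed large AI data center
AVG_HOUSEHOLD_KW = 1.2    # ~10,500 kWh/year, a rough US average

households = (DATA_CENTER_MW * 1000) / AVG_HOUSEHOLD_KW
print(round(households))  # on the order of 80,000 households
```

A single facility at that scale draws about as much power as a city of tens of thousands of homes, which is why grid capacity has become a planning concern.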
The need for additional computing resources has also created competition for hardware within technology companies. Engineers developing AI applications often draw on limited pools of graphics processors, which are used both to train new models and to run existing ones for customers.
Executives across the technology sector have identified energy supply as a potential constraint on future AI growth. During a podcast interview earlier this year, Elon Musk said that electricity generation could become a limiting factor in scaling AI infrastructure globally.
OpenAI has also outlined large-scale investment plans aimed at expanding computing capacity. The company has discussed committing substantial funding toward new data center construction and infrastructure development over the coming years.
Altman said the long-term objective is to move beyond a situation where computing resources limit access to AI systems. If companies can expand infrastructure fast enough, AI could become a widely available service delivered through large-scale computing networks, similar to other modern utilities.
