The artificial intelligence boom has a familiar story. Demand is exploding. Tech giants are racing to build larger models. And cloud providers are pouring billions into data centers.
But the reality on the ground is more complicated. The next phase of AI may not be limited by software or algorithms. It may be limited by electricity.
Take Oracle (NYSE:ORCL). The company sits at the center of the AI infrastructure race. Its cloud division is growing rapidly, fueled by demand from model developers and enterprise customers. Oracle recently reported cloud infrastructure revenue growth of 66% year over year. Contracts tied to future revenue also surged, with remaining performance obligations exceeding $523 billion.
Yet there is a twist. Even as demand accelerates, data center expansion is becoming harder to execute. Power availability, land access, capital intensity, and supply chains are emerging as real constraints. The AI boom is not just a software revolution. It is also a massive physical buildout.
And the physical world moves slower than the digital one.
Below are four forces that help explain why the AI gold rush may soon collide with a power wall.
AI Data Centers Consume Gigawatts Of Electricity
Artificial intelligence infrastructure runs on electricity. That sounds obvious, but the scale involved is easy to underestimate.
Modern AI data centers are measured in megawatts rather than racks of servers. Oracle recently delivered roughly 400 megawatts of new data center capacity in a single quarter. At the same time, its massive AI supercluster in Abilene, Texas, is deploying more than 96,000 Nvidia GB200 GPUs.
That kind of infrastructure requires enormous energy input. A large AI data center can consume as much electricity as a mid-size city. Multiply that by dozens of facilities and the numbers become staggering.
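The back-of-envelope math behind that comparison is simple. The sketch below, in Python, uses the article's 96,000-GPU figure but assumes the other inputs — per-GPU power draw, a power usage effectiveness (PUE) ratio for cooling and networking overhead, and average household demand — none of which come from Oracle's disclosures.

```python
# Back-of-envelope estimate of AI data center power demand.
# GPU_COUNT comes from the article; every other input is an assumption.

GPU_COUNT = 96_000       # GPUs in the Abilene supercluster (from the article)
WATTS_PER_GPU = 1_200    # assumed draw per GB200-class GPU, in watts
PUE = 1.3                # assumed power usage effectiveness (cooling, networking, losses)
KW_PER_HOUSEHOLD = 1.2   # assumed average household demand, in kilowatts

gpu_load_mw = GPU_COUNT * WATTS_PER_GPU / 1_000_000
facility_load_mw = gpu_load_mw * PUE
households_equivalent = facility_load_mw * 1_000 / KW_PER_HOUSEHOLD

print(f"GPU load:      {gpu_load_mw:.0f} MW")        # ~115 MW
print(f"Facility load: {facility_load_mw:.0f} MW")   # ~150 MW
print(f"Equivalent to ~{households_equivalent:,.0f} households")
```

Under these assumptions, a single cluster of that size draws on the order of 150 megawatts, roughly the demand of over a hundred thousand homes, which is how a facility ends up compared to a mid-size city.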
This is why electricity has become one of the most important variables in the AI buildout. Oracle executives have said they evaluate land and power availability before signing infrastructure contracts. If the grid cannot support a facility, the project simply cannot move forward.
This is a sharp contrast with the early days of cloud computing. Back then, compute demand grew quickly but infrastructure scaled steadily. AI has accelerated the cycle. Training large models requires massive bursts of compute power, and inference workloads run continuously.
The result is a new reality. The success of the AI economy now depends on the capacity of regional power grids.
And power grids do not scale overnight.
Power Grids & Infrastructure Move Much Slower Than AI
Artificial intelligence can improve dramatically in months. Power infrastructure cannot.
Building a new transmission line or upgrading a substation often takes years. Environmental approvals, regulatory reviews, and engineering work slow the process. Even when funding exists, construction timelines remain long.
That mismatch is beginning to matter.
Technology companies are planning data centers that require gigawatts of capacity. Utilities must determine how to deliver that power without destabilizing the grid. In many regions, the answer involves new generation capacity or major upgrades to transmission networks.
These projects take time. They also require coordination between utilities, regulators, and local governments.
The result is a bottleneck that rarely shows up in AI headlines. Engineers may be able to design the next model quickly. But the infrastructure that powers those models expands at a slower pace.
Oracle’s approach reflects this reality. The company has said it accepts new contracts only after confirming key inputs such as land, power availability, supply chains, and construction capacity. In other words, the company does not build first and hope infrastructure appears later.
This cautious process may look conservative in the middle of an AI boom. Yet it highlights an important truth.
AI scaling is increasingly constrained by physical infrastructure rather than demand.
AI Infrastructure Is Becoming One Of The Most Capital-Intensive Tech Buildouts
Another overlooked factor is cost.
AI data centers are expensive to build. The combination of specialized chips, advanced cooling systems, networking equipment, and power infrastructure pushes costs into the billions.
Oracle’s numbers illustrate the scale. The company spent about $12 billion on capital expenditures in a single quarter, much of it tied to cloud infrastructure. Management also indicated that planned spending could rise significantly as demand continues to grow.
These investments are tied directly to revenue-generating equipment installed in data centers. GPUs, networking hardware, and storage systems represent the core of the infrastructure.
Yet the economics of AI buildouts remain complex. Some customers bring their own chips. Others lease hardware through suppliers. Vendors may even rent computing capacity rather than sell it outright.
These arrangements help spread capital costs across multiple participants. They also reflect the sheer scale of the investment required.
Even for a company the size of Oracle, infrastructure expansion requires careful financial planning. Management has emphasized that future expansion will occur only when projects meet profitability targets and capital remains available on favorable terms.
In other words, the AI boom is not only a technology story. It is also a massive capital allocation exercise.
AI Demand Is Surging Faster Than Infrastructure Can Be Built
Despite the constraints, demand for AI infrastructure continues to grow at an extraordinary pace.
Oracle reported cloud revenue growth of 33% year over year. Cloud infrastructure revenue rose even faster, climbing 66%. GPU-related revenue jumped 177%, reflecting the rapid adoption of AI workloads across the platform.
The company’s backlog provides another signal. Remaining performance obligations reached more than $523 billion, rising sharply as large customers sign long-term infrastructure contracts. Many of those agreements involve hyperscale deployments tied to AI training and inference.
Demand is not limited to model developers. Enterprises are also exploring new AI applications that analyze private data stored in corporate databases. Oracle believes this market could eventually exceed the scale of public-data model training.
This surge in demand creates a challenge. Cloud providers must expand capacity quickly while maintaining profitability and operational discipline.
Data centers cannot be built instantly. Construction timelines, equipment delivery, and energy availability all shape deployment schedules. Even after a facility is completed, servers must be installed and configured before customers can begin using the infrastructure.
The digital world can scale rapidly. The physical world still follows its own timeline.
That difference may shape the next phase of the AI economy.
Final Thoughts
The AI boom is reshaping the technology landscape, and Oracle sits near the center of the infrastructure buildout. Cloud revenue is growing quickly, AI workloads are accelerating, and the company has accumulated one of the largest backlogs in enterprise technology.
At the same time, the physical constraints behind AI expansion are becoming more visible. Electricity availability, capital costs, supply chains, and grid infrastructure are emerging as key variables in the pace of deployment.
From a valuation perspective, Oracle's multiples have moderated in recent months. The stock currently trades at around 9.05x enterprise value to LTM revenue, roughly 21.3x LTM EV/EBITDA, and a trailing price-to-earnings ratio of 28.7x. These levels are lower than earlier peaks, reflecting both market volatility and the heavy investment phase tied to AI infrastructure.
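As a sanity check on how those multiples fit together, the snippet below works backward from a hypothetical LTM revenue figure. The two multiples are from the text above; the revenue input is a placeholder for illustration, not Oracle's reported number.

```python
# Illustrative valuation arithmetic: multiple = enterprise value / fundamental.
# The multiples are from the article; the revenue input is hypothetical.

EV_TO_REVENUE = 9.05   # LTM EV / revenue (from the article)
EV_TO_EBITDA = 21.3    # LTM EV / EBITDA (from the article)

ltm_revenue_b = 60.0   # hypothetical LTM revenue in $B (placeholder, not reported)

implied_ev_b = EV_TO_REVENUE * ltm_revenue_b
implied_ebitda_b = implied_ev_b / EV_TO_EBITDA
implied_margin = implied_ebitda_b / ltm_revenue_b

print(f"Implied EV:            ${implied_ev_b:.0f}B")
print(f"Implied LTM EBITDA:    ${implied_ebitda_b:.1f}B")
print(f"Implied EBITDA margin: {implied_margin:.0%}")
```

One detail worth noting: the implied EBITDA margin is just EV/Revenue divided by EV/EBITDA (about 42% here), so it does not depend on the revenue placeholder at all.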
Whether those investments translate into sustained returns will depend on execution. The opportunity is large, but the buildout requires time and capital. For now, the story of AI may be as much about power grids and construction timelines as it is about algorithms and software.
Disclaimer: We do not hold any positions in the above stock(s). Read our full disclaimer here.