The latest shock in global energy markets is hitting at an awkward moment for artificial intelligence infrastructure. Oil moved from around $71 to above $100 a barrel in a matter of weeks, European gas jumped too, and those swings are now feeding through into electricity costs just as the world’s largest technology companies commit hundreds of billions to new data‑center build‑outs.
The strain is landing on something many AI teams still assume will keep pace: a power supply that’s becoming more expensive, harder to secure and nowhere near as easy to scale as the models built on top of it.
That is already visible in the markets adding capacity fastest. In northern Virginia, still the world’s largest data-center market, large facilities needing more than 100 megawatts of power can face waits of up to seven years for grid connections. Europe is running into the same problem in different ways, with power availability and grid connection delays now determining which projects advance and which ones pause.
Vacancy across European data centers is expected to fall to a record low of 6.5% by the end of 2026, even with more than 750MW of new capacity being added, because bottlenecks in the electrical grid are limiting how much supply can be brought to market.
The impact isn’t limited to build timelines. As the International Energy Agency put it in its recent report on energy and AI, there is no AI without energy. A typical AI-focused data center already consumes as much electricity as 100,000 households, and the largest facilities under construction will use far more.
Once power has less spare capacity and prices rise, the effect doesn’t stay contained to the utility bill. It changes the economics of a site and the assumptions behind long-term capacity plans drawn up on calmer energy expectations.
Too Much in Too Few Places
Much of the AI market still sits with a handful of providers in a small number of regions, all drawing on power systems that take years to expand. This leaves a large share of capacity exposed to the same constraints at the same time. When one of those regions comes under strain or prices move rapidly, the fallout doesn’t stay local. It affects where projects get built, how long they take, how capacity is sourced and what it costs.
Supply chains have seen this pattern before, whether in shipping, semiconductors or any market where too much capacity ends up sitting behind too few chokepoints. Concentration can look efficient while conditions are stable, but it carries more risk than it appears to once disruption hits. AI infrastructure is starting to show the same weakness: too much compute is still clustered in places where the same energy and delivery pressures land at once.
Europe is arriving at a fairly clear conclusion. The issue isn’t only where AI capacity sits, but what sits underneath it. Power, grid access and infrastructure now do a lot to determine how secure that capacity really is. If those pieces are outside your control, a large part of the risk is too. That is why the sovereignty conversation is moving beyond data and into energy, location and who actually controls the systems keeping compute online.
That is starting to show up in how the region is responding. France is using its nuclear-backed electricity base to strengthen its case for AI infrastructure, while Germany is pushing data-center operators to think harder about energy sourcing and grid readiness. At EU level, the expansion of AI factories and regional compute capacity points the same way. More supply matters, but so does reducing dependence on a narrow set of vendors, regions and energy conditions.
What Resilience Looks Like Now
Supply chains learned during the COVID-19 pandemic that single-supplier dependency can look efficient right up to the point where it fails. AI infrastructure is running into a similar problem. Microsoft entered 2026 having said it was on track to spend about $80 billion on AI-enabled data centers and infrastructure in fiscal 2025, even as power constraints continued to limit how quickly new capacity could come online.
McKinsey estimates that meeting global data-center demand could require $6.7 trillion in capital by 2030, with $5.2 trillion of that tied to AI workloads. The build-out is huge. So is the exposure if too much of that capacity remains concentrated in the same few places, under the same few constraints. The point is not that hyperscalers stop mattering. It is that they are exposed to many of the same energy and build-out pressures as everyone else.
A more resilient AI supply chain spreads both vendor risk and energy risk across a wider base. It doesn’t depend on one provider, one geography or one set of power conditions holding steady while demand keeps climbing. Capacity drawn from a broader mix of data centers, enterprises, universities and independent operators across multiple regions carries a different risk profile from capacity concentrated inside a few tightly clustered hubs. In supply-chain terms, that is diversification. It gives buyers more room to keep projects moving when prices move, delivery slows or one part of the market comes under strain.
That kind of flexibility matters quickly when the market comes under pressure. If compute can be sourced from a wider set of locations and operators, disruption in one region doesn’t automatically travel through the whole system in the same way. The companies that diversify compute procurement earlier will have more room to maneuver than the ones still assuming capacity will keep arriving through the same handful of channels.
Procurement Will Have to Change
For companies buying compute, the old assumption was simple enough: if the budget was there, capacity would be there too. That was easier to believe when supply felt deep enough and the underlying constraints stayed mostly out of view. It looks less reliable once energy, grid access and regional concentration start to affect how much capacity can actually be counted on.
That changes what good procurement looks like. Cost still matters. So does performance. But buyers also need to think harder about dependency, geographic spread and how much risk sits behind any single source of capacity. A broader mix of providers, locations and operating conditions gives them more room to respond when one part of the system becomes less dependable.
The AI race is often framed around models, chips and capital, the things you can see and count. The harder part sits underneath them. The Iran crisis didn't create this exposure; it made it easier to see. Compute may still be sold as a technology story, but underneath it is behaving much more like a supply chain.
Gaurav Sharma is chief executive officer of io.net.