
February 26, 2026 | 5 minute read
AI doesn’t run on the cloud. It runs on power and networks.
Jean-Philippe Avelange
Chief Information Officer
Why the AI boom is quietly turning connectivity into critical infrastructure
Artificial intelligence has become the favorite abstraction of our time. We talk about models, parameters, and “the cloud” as if intelligence materializes out of thin air.
It does not.
AI runs on electricity, physical infrastructure, and increasingly fragile assumptions about how data moves around the world. What looks like a software revolution is, in reality, a stress test of power grids, data centers, and global networks never designed for this level of intensity, concentration, or volatility.
And it is becoming a geopolitical contest measured in megawatts, concrete, and kilometers of fiber.
China is accelerating nuclear expansion, experimenting with subsea and space-based data centers, and investing in orbital compute research. The United States is extending nuclear plant lifespans, exploring small modular reactors, and funding space-based data processing. Both nations are racing to secure the physical foundations of AI dominance.
We’re all well aware of the technology race, but we need to talk more about the infrastructure race. Power determines where compute can exist, sovereignty determines where data may reside, and, most importantly, networks determine whether intelligence can function across both.
Below are five realities shaping AI at scale in the real world, not on a slide.
1. The era of “free” efficiency is over
For more than a decade, the technology industry enjoyed an extraordinary run of efficiency gains. Data center energy consumption stayed relatively flat even as demand exploded. Better chips, better cooling, better utilization. Everyone relaxed.
That era ended quietly around 2018.
Global data center electricity consumption reached roughly 460 terawatt-hours in 2022. AI acceleration is pushing that figure sharply higher.
Efficiency itself is not the relief valve.
AI chips are vastly more efficient than a decade ago. But cheaper, faster compute does not reduce total energy use. It increases it. This is the Jevons paradox: when a resource becomes more efficient to use, total demand for it expands.
The same dynamic applies to networks. As AI becomes easier to deploy, traffic patterns grow denser, more concentrated, and more latency sensitive. Data no longer flows predictably. It spikes between specific regions, clouds, and data centers, often across borders.
Efficiency did not remove the problem. It accelerated it.
2. Power availability is reshaping the geography of AI
AI workloads require power that is abundant, stable, and continuous.
China is building new nuclear capacity and expanding grid infrastructure to sustain long-term AI growth. It is also piloting underwater data centers that use seawater cooling to reduce energy demand and thermal impact.
In the United States, hyperscalers are investing in small modular reactors, extending nuclear plant lifespans, and colocating compute near energy generation to bypass grid bottlenecks. Government agencies and private firms are exploring orbital data processing to reduce terrestrial power and latency constraints.
These efforts may sound extreme. They are responses to an extreme demand curve.
Data centers are now built where power exists, not where users live.
When compute moves, traffic follows.
New AI hubs often sit farther from enterprise locations and traditional interconnection points. Data travels farther, crosses more networks, and relies on fewer critical routes.
Power decisions are now network decisions. Enterprises do not control where AI capacity is built, but they inherit the connectivity risk that comes with it.
3. Infrastructure timelines are colliding with AI urgency
A hyperscale data center can be constructed in one to two years. High-voltage transmission lines can take a decade. Cross-border fiber routes and landing stations take years to permit and deploy.
The mismatch is already visible.
In parts of London, new data centers face grid connection delays until the end of the decade. Ireland has restricted new connections. Similar constraints are emerging across Europe, North America, and Asia. Subsea routes face congestion risks and geopolitical chokepoints.
At the same time, organizations are told to move faster with AI.
That contradiction cannot be solved with better software. It requires infrastructure and network architectures designed to flex around constraints rather than collapse under them.
IDC research shows that 94% of organizations have networks that limit their ability to support large AI initiatives. Flexibility, scalability, and resilience remain the largest gaps. The issue is not ambition. It is readiness.
4. Sovereignty is fragmenting the AI landscape
AI will not run in one global cloud.
Data residency rules are tightening. Governments are defining trusted compute zones. Strategic industries face localization mandates. Cross-border data movement is restricted or monitored in many jurisdictions.
This creates a fragmented infrastructure reality:
- AI workloads must operate within regulatory boundaries
- Data must remain within national or regional borders
- Performance expectations remain global
AI systems must function across sovereign environments that cannot be treated as a single fabric.
5. Networks are becoming the strategic glue
Traditional enterprise networks were designed for predictable human workflows and regional hubs.
AI breaks those assumptions.
Training workloads generate massive east-west flows between data centers. Inference workloads demand consistent low latency across clouds and regions. Traffic concentrates along fewer high-capacity corridors. Failures that were once minor can halt entire pipelines.
The challenge is no longer moving packets. It is moving data across jurisdictions, power grids, and regulatory boundaries without breaking performance or compliance.
Connectivity shouldn’t be treated as the “plumbing” anymore. In an AI-driven world, networks are part of the production system. They connect fragmented infrastructure into a functioning whole.
The uncomfortable conclusion
AI is often framed as a virtual revolution. In reality, it is a profoundly physical one.
The US and China are competing on a global, even orbital, scale to secure an advantage in the AI race. But to win, infrastructure strategy needs to become technology strategy.
Enterprises cannot control where compute is built or how regulation evolves, but they can control whether their networks are designed to operate across power constraints, geopolitical boundaries, and shifting data flows.
Because in the end, AI does not run on hype.
It runs on power and on networks resilient enough to carry the weight.
Jean-Philippe Avelange is Chief Information Officer at Expereo. With over 20 years of telecom IT experience, he focuses on cloud solutions, digital transformation, and agile methodologies. Starting as an IT manager at Capgemini Telecom, he has worked on complex information system architectures throughout his career. With an eye for business, he founded InovenAltenor and Avelto and worked as an independent IT consultant before joining Expereo in 2017.
