‘Nobody’s Ready’: AI’s Rapid Rise Is Outpacing Our Infrastructure

Opinions vary when it comes to the future of artificial intelligence as a replacement for human workers, or its reliability as a source of information. But when it comes to the question of whether our existing infrastructure is prepared to handle the speed at which the technology itself is growing, the answer is consistent across the board.

“No, I don’t think we’re ready for this,” says Charles Yeomans, the CEO of Atombeam, a software company that specializes in reducing the size of data.

“Not at all — nobody’s ready for AI,” says Christophe Girardier, the CEO of Glimpact, a digital platform that assesses the environmental impacts of products and companies.

“Most certainly not,” says Jon Guidroz, the senior vice president for Aalo Atomics, which builds modular nuclear plants to power AI data centers.

The numbers appear to back that up. Goldman Sachs predicts that AI will drive a 160% increase in data center power demand by 2030, requiring an estimated $720 billion in upgrades to global power grids to accommodate that growth. In the U.S., where power demand growth has been roughly zero for the last decade, demand is expected to grow by as much as 2.4% per year between 2022 and 2030, with nearly 40% of that growth coming from data centers alone. Over the same period, Goldman Sachs calculates that the share of power consumed by data centers will rise from roughly 2% to 4% globally, and from 2% to 8% in the U.S.
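
Those figures hang together. As a rough sanity check, the short Python sketch below (an illustration only, assuming the 2.4% is annual growth and normalizing 2022 U.S. demand to 1.0) reproduces the “nearly 40%” share of demand growth attributed to data centers:

```python
# Back-of-the-envelope check of the Goldman Sachs figures cited above.
# Assumptions: 2.4% annual U.S. demand growth over 2022-2030, and data centers'
# share of demand rising from 2% to 8% over the same period.

annual_growth = 0.024
years = 2030 - 2022

total_2030 = (1 + annual_growth) ** years   # total U.S. demand, with 2022 = 1.0
dc_2022 = 0.02                              # data centers' share of 2022 demand
dc_2030 = 0.08 * total_2030                 # 8% share of the larger 2030 total

print(f"Total demand growth: {total_2030 - 1:.0%}")                        # ~21%
print(f"Data center load multiple: {dc_2030 / dc_2022:.1f}x")              # ~4.8x
print(f"Data centers' share of growth: {(dc_2030 - dc_2022) / (total_2030 - 1):.0%}")  # ~37%
```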

As Guidroz points out, the growth in power demand from data centers is moving faster than utilities can plan. That’s necessitated a search for what he describes as a “clean and firm” alternative energy source, leading tech giants like Microsoft, Google and Amazon to explore nuclear power as a solution.

“Nuclear is particularly attractive because it’s a true base-load carbon-free power,” he explains. “The question is can we grow nuclear in a way that can meet data centers’ footprints and needs?”

Tech companies are betting that we can. Google is planning to spin up advanced nuclear reactors at three U.S. sites by 2035; Amazon has signed agreements to develop four advanced small modular reactors (SMRs) in Washington state; and Microsoft is looking to restart Pennsylvania’s Three Mile Island nuclear power plant. Guidroz sees SMRs in particular as “a game changer,” as they cost less than large reactors, can be built faster, and take up less physical space compared to other forms of clean energy like solar and wind.

Even so, Girardier is skeptical that nuclear power is a viable solution.

“Nuclear is very expensive,” he says, especially as construction and labor costs have risen dramatically since nuclear power stations first came online in the 1950s and 1960s. The United States’ own history with nuclear power is also rife with failed projects that were scrapped due to ballooning price tags, including several plants in Washington state planned in the 1980s that were canceled after estimated construction costs increased from $4.1 billion to $24 billion, and two reactors that have sat unfinished in South Carolina since 2017, after estimated costs rose from $9.8 billion to $25 billion.

Building and maintaining a nuclear power plant is a heavy lift from a process perspective too. Nuclear energy is highly regulated — as is its main source of fuel, uranium — and that creates a maze of red tape in the form of permitting requirements, environmental impact assessments, and stringent quality assurance testing that can delay projects for years.

On top of that, finding a clean energy source is just one piece of the puzzle, Girardier says, citing Glimpact data showing that AI’s carbon footprint from emissions accounts for just one-third of its overall environmental impact. Another 22% comes from the use of resources such as rare earth minerals and metals for AI chipmaking, while 21% comes from fossil fuel consumption, along with 7% from water used to cool AI servers.

So, what is the solution? The stakes are certainly high enough to demand one, with technology consultancy Accenture predicting that carbon emissions from AI data centers could increase 11-fold by the end of the decade, and account for 3.4% of total global CO₂ emissions, more than the entire share produced by the aviation industry. And while rising emissions, water scarcity, and strained power grids all represent questions in need of an immediate answer, Yeomans believes those issues can be addressed by changing the very nature of how AI models are built. 

“You have to look at it and say: There’s something that’s got to give here,” he says. 

Yeomans describes how AI agents in platforms like ChatGPT are built from large language models (LLMs), which are trained on datasets containing trillions of words so that they can respond to questions and prompts in seconds. LLMs excel at summarizing large blocks of text and answering basic questions, but they have also been known to fabricate facts and quotes, struggle to understand context, and reproduce whatever biases are present in their training data. They also require mountains of processing power, lack the adaptive long-term memory of humans, and tend to perform better when trained on larger and larger datasets, meaning their power demands only increase as they’re upgraded. Essentially, Yeomans says, “they’re really, really smart parrots.”

But as ChatGPT has been trained on ever-larger datasets, the improvements from each subsequent version have been diminishing.

“This total brute force approach stopped working,” he says. “We’re reaching a plateau in which the incremental improvement is almost zero.”

It’s better to make AI agents smarter rather than larger, Yeomans asserts, so that they can learn and adapt in real time instead of relying solely on enormous training runs that eat up processing power and strain power grids. That, he says, could drive down the amount of computing power needed to answer a question, since the AI would pull from a growing knowledge base rather than reprocessing vast swaths of its training data every time. Metaphorically speaking, it’s the difference between going to Wikipedia to look up a word every time you hear it, and remembering the word because you’ve come across it before.
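
That metaphor maps loosely onto the familiar programming idea of caching, or memoization: pay for a lookup once, then reuse the result. The sketch below is only an illustration of that idea, not Atombeam’s or any lab’s actual architecture, with a hypothetical look_up_definition function standing in for an expensive model query:

```python
from functools import lru_cache
import time

def look_up_definition(word: str) -> str:
    """Hypothetical stand-in for an expensive operation, e.g. a full model query."""
    time.sleep(0.5)  # simulate heavy computation
    return f"definition of {word!r}"

@lru_cache(maxsize=None)
def remembered_definition(word: str) -> str:
    """Same lookup, but the answer is remembered after the first call."""
    return look_up_definition(word)

# The first call pays the full cost; repeat calls are nearly free.
for _ in range(3):
    start = time.perf_counter()
    remembered_definition("parrot")
    print(f"{time.perf_counter() - start:.3f}s")
```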

The good news is that Chinese AI company DeepSeek says it has made progress toward more efficient AI. The company claims its model uses 10 to 40 times less energy than comparable U.S. technology, and one estimate published in a research paper in Nature found that DeepSeek requires 11 times fewer computing resources than equivalent technology from Meta. But there’s a catch: U.S. companies are reluctant to tie their proprietary data to a Chinese company to boost their own AI models, especially given concerns over TikTok’s potential sharing of sensitive user data with the Chinese government.

What Yeomans finds concerning in the near term is that the prevailing approach to AI architecture among U.S. tech companies remains centered on making models bigger, and by extension more resource-intensive at virtually every level. He predicts that the technology will eventually catch up, but warns that until it does, the current path simply isn’t sustainable.

“It’s simply not tenable to say this is going to go on ad infinitum — it doesn’t work,” he says. “AI has to evolve, and it’s got to evolve into something that’s a lot more powerful than LLMs ever could be.”
