Saturday, October 12, 2024

Anthropic CEO goes full techno-optimist in 15,000-word paean to AI

Anthropic CEO Dario Amodei wants you to know he’s not an AI “doomer.”

At least, that’s my read of the “mic drop” of a ~15,000-word essay Amodei published to his blog late Friday. (I tried asking Anthropic’s Claude chatbot whether it concurred, but alas, the post exceeded the free plan’s length limit.)

In broad strokes, Amodei paints a picture of a world in which all AI risks are mitigated, and the tech delivers heretofore unrealized prosperity, social uplift, and abundance. He asserts this isn’t to minimize AI’s downsides — at the start, Amodei takes aim at (without naming names) AI companies overselling and generally propagandizing their tech’s capabilities. But one might argue — and this writer does — that the essay leans too far in the techno-utopianist direction, making claims simply unsupported by fact.

Amodei believes that “powerful AI” will arrive as soon as 2026. (By powerful AI, he means AI that’s “smarter than a Nobel Prize winner” in fields like biology and engineering, and that can perform tasks like proving unsolved mathematical theorems and writing “extremely good novels.”) This AI, Amodei says, will be able to control any software or hardware imaginable, including industrial machinery, and essentially do most jobs humans do today — but better.

“[This AI] can engage in any actions, communications, or remote operations … including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on,” Amodei writes. “It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.”

Lots would have to happen to reach that point.

Even the best AI today can’t “think” in the way we understand it. Models don’t so much reason as replicate patterns they’ve observed in their training data.

Assuming, for the sake of Amodei’s argument, that the AI industry does soon “solve” human-like thought, would robotics catch up to allow future AI to perform lab experiments, manufacture its own tools, and so on? The brittleness of today’s robots implies it’s a long shot.

Yet Amodei is optimistic — very optimistic.

He believes AI could, in the next 7-12 years, help treat nearly all infectious diseases, eliminate most cancers, cure genetic disorders, and halt Alzheimer’s at the earliest stages. In the next 5-10 years, Amodei thinks that conditions like PTSD, depression, schizophrenia, and addiction will be cured with AI-concocted drugs, or genetically prevented via embryo screening (a controversial opinion) — and that AI-developed drugs will also exist that “tune cognitive function and emotional state” to “get [our brains] to behave a bit better and have a more fulfilling day-to-day experience.”

Should this come to pass, Amodei expects the average human lifespan to double to 150.

“My basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years,” he writes. “I’ll refer to this as the ‘compressed 21st century’: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century.”

These seem like stretches, too, considering that AI hasn’t radically transformed medicine yet — and may not for quite some time, or ever. Even if AI does reduce the labor and cost involved in getting a drug into pre-clinical testing, it may fail at a later stage, just like human-designed drugs. Consider that the AI deployed in healthcare today has been shown to be biased and risky in a number of ways, or otherwise incredibly difficult to implement in existing clinical and lab settings. Suggesting all these issues and more will be solved roughly within the decade seems, well… aspirational, in a word.

But Amodei doesn’t stop there.

AI could solve world hunger, he claims. It could turn the tide on climate change. And it could transform the economies of most developing countries; Amodei believes AI can bring the per-capita GDP of sub-Saharan Africa ($1,701 as of 2022) to the per-capita GDP of China ($12,720 in 2022) in 5-10 years.

These are bold pronouncements, to put it mildly — although likely familiar to anyone who’s listened to disciples of the “Singularity” movement, which expects similar results. To Amodei’s credit, he acknowledges that they’d require “a huge effort in global health, philanthropy, [and] political advocacy,” which he posits will occur because it’s in the world’s best economic interest.

I’ll point out, however, that this hasn’t been the case historically in one important aspect. Many of the workers responsible for labeling the datasets used to train AI are paid far below minimum wage while their employers reap tens of millions — or hundreds of millions — in capital from the results.

Amodei touches, briefly, on the dangers of AI to civil society, proposing that a coalition of democracies secure AI’s supply chain and block adversaries who intend to use AI toward harmful ends from the means of powerful AI production (semiconductors, etc.). In the same breath, he suggests that AI — in the right hands — could be used to “undermine repressive governments” and even reduce bias in the legal system. (AI has historically exacerbated biases in the legal system.)

“A truly mature and successful implementation of AI has the potential to reduce bias and be fairer for everyone,” Amodei writes.

So, if AI takes over every conceivable job and does it better and faster, won’t that leave humans in the lurch, economically speaking? Amodei admits that, yes, it would — and that, at that point, society would have to have conversations about “how the economy should be organized.”

But he offers no solution.

“People do want a sense of accomplishment, even a sense of competition, and in a post-AI world it will be perfectly possible to spend years attempting some very difficult task with a complex strategy, similar to what people do today when they embark on research projects, try to become Hollywood actors, or found companies,” he writes. “The facts that (a) an AI somewhere could in principle do this task better, and (b) this task is no longer an economically rewarded element of a global economy, don’t seem to me to matter very much.”

Amodei advances the notion, in wrapping up, that AI is simply a technological accelerator — that humans naturally trend toward “rule of law, democracy, and Enlightenment values.” But in doing so, he ignores AI’s many costs. AI is projected to have — and already has had — an enormous environmental impact. And it’s creating inequality. Nobel Prize-winning economist Joseph Stiglitz and others have noted the labor disruptions caused by AI could further concentrate wealth in the hands of companies and leave workers more powerless than ever.

These companies include Anthropic, as loath as Amodei is to admit it. (He mentions Anthropic only six times throughout his essay.) Anthropic is a business, after all — one reportedly worth close to $40 billion. And those benefiting from its AI tech are, by and large, corporations whose only responsibility is to increase returns to shareholders — not better humanity.

The essay seems cynically timed, in fact, given that Anthropic is said to be in the process of raising billions of dollars. OpenAI CEO Sam Altman published a similarly techno-optimist manifesto shortly before OpenAI closed a $6.5 billion funding round.

Perhaps it’s coincidental. Then again, Amodei isn’t a philanthropist. He, like any CEO, has a product to sell. It just so happens that his product is going to save the world (or so he’d have you believe) — and those who believe otherwise risk being left behind.


