Thoughts on Dario Amodei's essay Machines of Loving Grace
The "Compressed 21st Century" beckons...
Kia ora Memia whānau,
Back on the ground here in Aotearoa after 9 weeks away… great to be back home, not least for the coffee!
Machines of Loving Grace
A quick weekend recommendation for an inspiring read: Machines of Loving Grace - How AI Could Transform the World for the Better, a thoughtful and optimistic essay by Anthropic founder and CEO Dario Amodei. It contains some deep insights from one of the very few people in the world with a daily close-up view of the frontier of AI.
(It also offers a more subtle, considered counterpoint to the zero-sum “win AGI or bust” position of Leopold Aschenbrenner, or to the vague proselytising of Sam Altman: “the future is going to be so bright that no one can do it justice by trying to write about it now”.)
Amodei articulates his generally optimistic vision of how rapidly accelerating AI technologies may soon yield radically different futures that most people today wouldn't consider possible: in fields such as biology to solve health, economics to solve poverty and governance to solve peace.
(But these opportunities are not without concurrent risks or challenges... Anthropic will continue to mostly focus on addressing AI risk day to day…)
Three key concepts which stood out for me:
The "compressed 21st century":
"...my basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. I’ll refer to this as the “compressed 21st century”: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century."
“Powerful AI” — a better term than “AGI”.
(Summarised:) He defines “powerful AI” as an AI model that is smarter than a Nobel Prize winner across most fields, capable of performing complex tasks autonomously, and able to interface with the world like a human working virtually. This AI can be replicated millions of times, operates 10-100 times faster than humans, and can work independently or collaboratively on tasks, essentially functioning as a "country of geniuses in a datacenter."
“Marginal returns to intelligence”:
“Economists often talk about “factors of production”: things like labor, land, and capital. The phrase “marginal returns to labor/land/capital” captures the idea that in a given situation, a given factor may or may not be the limiting one …I believe that in the AI age, we should be talking about the marginal returns to intelligence, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high.“
He goes into detail on five areas that he judges to have the greatest potential to directly improve the quality of human life:
Biology and physical health
Neuroscience and mental health
Economic development and poverty
Peace and governance
Work and meaning
On the future of work:
“in the short term I agree with arguments that comparative advantage will continue to keep humans relevant and in fact increase their productivity, and may even in some ways level the playing field between humans. As long as AI is only better at 90% of a given job, the other 10% will cause humans to become highly leveraged, increasing compensation and in fact creating a bunch of new human jobs complementing and amplifying what AI is good at, such that the “10%” expands to continue to employ almost everyone....Thus, I think that the human economy may continue to make sense even a little past the point where we reach “a country of geniuses in a datacenter”.
However, I do think in the long run AI will become so broadly effective and so cheap that this will no longer apply. At that point our current economic setup will no longer make sense, and there will be a need for a broader societal conversation about how the economy should be organized.“
Exploring the question of “meaning”:
“… I think it is very likely a mistake to believe that tasks you undertake are meaningless simply because an AI could do them better. Most people are not the best in the world at anything, and it doesn’t seem to bother them particularly much. Of course today they can still contribute through comparative advantage, and may derive meaning from the economic value they produce, but people also greatly enjoy activities that produce no economic value. I spend plenty of time playing video games, swimming, walking around outside, and talking to friends, all of which generates zero economic value. I might spend a day trying to get better at a video game, or faster at biking up a mountain, and it doesn’t really matter to me that someone somewhere is much better at those things…The facts that (a) an AI somewhere could in principle do this task better, and (b) this task is no longer an economically rewarded element of a global economy, don’t seem to me to matter very much.“
Two criticisms of this essay (common to nearly all the Silicon Valley discourse I come across):
(1) An absence of any mention of — and hence, apparent blindness to — our planet’s collapsing biodiversity. (He briefly mentions climate change… but only in the context of AI accelerating CO2 absorption solutions…) Any techno-optimistic, AI-enhanced future must surely involve placing value on the wellbeing of other biological life and ecosystems, not just humans? (Or am I just projecting here…)
(2) He is similarly blue-pilled to Aschenbrenner on a US-led strategy of China AI containment, dressed up as:
“…an “entente strategy”, in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would …use AI to achieve robust military superiority…“
Personally, living in a country with strong ties to both US and China and an intuitive aversion to any major global military disequilibrium, fully open source AI research and supply chains would be my alternative preferred strategy to achieve similar objectives. However, I take his point about there being “no strong reason to believe AI will preferentially or structurally advance democracy and peace”…
Nonetheless, Amodei concludes with a rousing call to arms:
“…it is a world worth fighting for. If all of this really does happen over 5 to 10 years—the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights—I suspect everyone watching it will be surprised by the effect it has on them. I don’t mean the experience of personally benefiting from all the new technologies, although that will certainly be amazing. I mean the experience of watching a long-held set of ideals materialize in front of us all at once. I think many will be literally moved to tears by it.“
Highly recommended that you take the time to read this one (and/or get your LLM of choice to help you summarise and distil the main points).
(Memia reader Dan Fowlie has created the obligatory NotebookLM podcast: https://notebooklm.google.com/notebook/260f4093-8260-487a-a517-89d8764d6d30/audio )
Enjoy the rest of your Sunday!
ngā mihi
Ben
The idea of a "compressed 21st Century" is a bit strange. An analogy might be that the invention of the microscope led to a "compressed 500 years", allowing advances in medicine in 250 years that would otherwise have taken 500. But that's not really the case, because the microscope was necessary for the advances: they wouldn't have happened without it. The counterfactual is wrong. Just as manipulation of big biological data requires AI. It's just another tool that provides new affordances.

But anyway, he's still very upbeat about all this "Powerful AI" that is on the way (again, in 5-10 years), compared with Apple's recent conclusions: https://arxiv.org/abs/2410.05229 (Has Apple not released serious AI because they're not happy with how present-day AI functions, or does not function?)

100% agree with your comments Ben about the environment/ecological blindspot of almost everyone working in tech. But I guess that's OK, because the AI coming in 5-10 years will 'solve' this.