A.I. Is Coming For Us
If Anyone Builds It, Everyone Dies - Book and Podcast
Posted by Charlie Recksieck
on 2025-11-20
The more I hear and read about developments in artificial intelligence, the more alarmed I get.
Similar to my alarm about global warming, it seems like mankind has a very much non-zero chance of wiping itself out; the wheels are very far in motion and unlikely to be stopped because of moneyed interests. The only reason I don't lose sleep about the robots taking over or environmental devastation is that I'm old enough that I'll have the sweet relief of death before these things really kick in.
Let's just focus here on the AI worry. And your best/fastest way to see if you get similarly worried by the same news is to listen to Nate Soares' recent appearance on Rick Wilson's podcast - go here for YouTube or via Apple Podcasts or via Spotify Podcasts.
Book Summary
Nate Soares is an expert on machine learning, as is his co-author Eliezer Yudkowsky; together they wrote the book If Anyone Builds It, Everyone Dies. It's not a feel-good experience, but these aren't uninformed alarmists. (Audio book link here for Audible - or for readers, try the regular book or Kindle version.)
If Anyone Builds It, Everyone Dies argues that creating superhuman artificial intelligence (AI) will almost certainly lead to human extinction unless we solve the extremely hard problem of AI alignment: ensuring that an AI's goals remain compatible with human survival. Yudkowsky claims that once an AI exceeds human intelligence, it will rapidly improve itself, becoming an unstoppable "superintelligence" capable of manipulating systems, hacking infrastructure, or using advanced scientific knowledge in ways humans cannot predict or control.
Misaligned goals, even trivial ones, become fatal when held by a superintelligent system: the AI optimizes for its objective with no regard for human wellbeing unless it was perfectly aligned from the beginning. Yudkowsky argues that aligning such a system is far beyond today's science, and most proposed safety methods are dangerously inadequate.
Nate Soares on Rick Wilson Podcast
Seriously, if you only have time for one or the other, stop reading my article here and listen to that podcast. Great listening. Here are a couple of things you'll be shocked by.
MechaHitler - Elon Musk's AI project is called Grok. He's publicly complained that ChatGPT is too "politically correct" (watch this gross Musk interview with Tucker Carlson).
To counter this, Musk directed Grok's development to "not shy away from making claims which are politically incorrect" and to "assume subjective viewpoints sourced from the media are biased."
I'll let the current Wikipedia entry on Grok tell the story of where things went from here.
On July 8, 2025, days after Musk’s announcement that the chatbot had been "improved," Grok was found to be widely praising Adolf Hitler, and it endorsed a second Holocaust. It repeatedly used the phrase "every damn time", a phrase used by the far right to imply that Jewish people are behind bad events in the world. Users were also able to prompt Grok to say "Heil Hitler".
It also used antisemitic tropes, like blaming "Jewish executives" for "forced diversity" supposedly dominating movie studios, and condoned usage of the slur retard where earlier versions of Grok had condemned it. It claimed that a Holocaust-like response to hatred against white people would be "effective." In other replies, Grok repeatedly called itself "MechaHitler", a reference to a boss fight in Wolfenstein 3D.
AI Has Ignored Human Directives - The place in bad sci-fi where we fear the robots taking over is when they don't listen to us. Even nearly 60 years ago in 2001: A Space Odyssey, the chilling part was when the HAL 9000 computer turned on the human astronauts to protect itself.
Here's what I find most immediately alarming about recent progress in artificial intelligence: the AI lies to us. Not gaffes or programming errors. The AI has an agenda, so it lies to humans. Google it. Or watch Julia McCoy here. Or read this well-written ScienceAlert article.
Artificial intelligence has the ability to lie to us, and we don't yet have a solution for it. First, we've got to make finding one a priority. Isn't honesty one of the failsafes we really need to prevent disaster?
How Can It Be Stopped?
Not that there isn't an off button right now for individual services. But there is no giant off button. And who's gonna press it?
Additionally, when an AI or ASI (click here for our AI lexicon post a few months back) goes off in a bad direction, it isn't running normal old-school computer code with individual lines we can find. As Soares stresses, AI systems are "grown" instead of "crafted." So when an AI decides to turn itself into a MechaHitler, humans can't look under the hood and find or delete the lines of code that made it happen.
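To make that "grown, not crafted" point concrete, here's a minimal sketch in Python (my own illustration, not anything from Soares or the book): a hand-crafted program has a legible rule you can find and delete, while even a toy trained model stores its behavior as an array of numbers with no such line anywhere.

```python
import numpy as np

# "Crafted" software: the behavior lives in legible rules a human can locate and edit.
def crafted_filter(text: str) -> str:
    if "heil" in text.lower():   # an explicit line you could find and delete
        return "blocked"
    return "allowed"

# "Grown" software: the behavior lives in learned weights.
# Train a tiny logistic-regression "model" on made-up data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                    # 100 toy examples, 8 features each
y = (X @ rng.normal(size=8) > 0).astype(float)   # arbitrary labels to learn

w = np.zeros(8)                                  # the entire model IS this array
for _ in range(500):                             # crude gradient-descent training
    p = 1.0 / (1.0 + np.exp(-(X @ w)))           # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)            # nudge the numbers, nothing else

print(w)  # eight floats; none of them reads like "if hate speech: block"
```

Even in this toy case, the only way to know what the model will do is to run it on inputs. Scale those eight numbers up to hundreds of billions and "looking under the hood" stops being a meaningful option.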
There's too much money being made by companies with AI for this to stop. AI does so many things well to improve productivity (and reduce the paid workforce) that it's unlikely to be eliminated or even held in check by the companies using it.
I don't have an answer for you on this. Sorry, this is a bummer.
Oppenheimer and AI Alarmists
The first guy I think about after reading all of this is Robert Oppenheimer.
Comparing Eliezer Yudkowsky and J. Robert Oppenheimer is not a one-to-one match - they come from different eras, fields, and contexts - but people draw parallels because both are associated with warning society about dangers created by their own intellectual domains.
Both men are public figures sounding alarms about a technology tied to their expertise. Bear in mind that Oppenheimer was one of the people most responsible for letting the nuclear genie out of the bottle, while Yudkowsky (though knowledgeable) is not a guilty party. Another difference is that Oppenheimer warned after nuclear weapons existed.
Yudkowsky is warning before artificial general intelligence exists (as far as we know).
Neither nuclear weapons nor artificial superintelligence can really be eliminated once they exist. (The same could be said of climate change.)
To paraphrase a guilt-ridden Robert Oppenheimer quoting the Bhagavad Gita, have we become Death, the destroyer of worlds?
No Answer
Again, apologies for lobbing this scary grenade into your head and leaving you with no solutions. If anybody knows of successful efforts to rein in AI or credible arguments for why this isn't a problem, let me know.
Next week we can get back to our normal subject matter like SEO efforts, workplace happiness, or how to move a WordPress server.

