It Could Happen Here - The First Anti-AI Firebombing
Episode Date: April 22, 2026
Garrison and Robert talk about Daniel Moreno-Gama, the 20-year-old charged with attempting to kill OpenAI CEO Sam Altman. We discuss how he became an AI doomer, his belief that artificial superintelligence will wipe out humanity, and his Substack promoting IQ nationalism.
Sources:
https://www.justice.gov/opa/media/1435876/dl
https://sfdistrictattorney.org/texas-man-charged-with-two-counts-of-attempted-murder-and-multiple-other-felonies-in-connection-to-incendiary-destructive-device-thrown-at-russian-hill-residence/
https://x.com/mehran__jalali/status/2042755218819961048?s=20
https://morenogama.substack.com/p/ai-existential-risk-is-real
https://www.businessinsider.com/sam-altman-molotov-attack-suspect-daniel-moreno-gama-houston-2026-4
https://sfstandard.com/2026/04/12/sam-altman-s-home-targeted-second-attack/
https://www.wdsu.com/article/atf-suspected-molotov-cocktail-starts-fire-tesla-new-orleans-service-center/71025308?utm_campaign=snd-autopilot
https://www.sfgate.com/bayarea/article/altman-sf-attack-crisis-parents-22208428.php
https://podcasts.apple.com/us/podcast/sam-altmans-attacker-in-his-own-words/id1839942885?i=1000761713169
See omnystudio.com/listener for privacy information.
Transcript
This is an IHeart podcast.
Guaranteed Human.
Hey there, folks. Amy Robach and T.J. Holmes here.
And we know there is a lot of news coming at you these days from the war with Iran to the ongoing Epstein fallout, government shutdowns, high-profile trials.
And what the hell is that Blake lively thing about anyway?
We are on it every day, all day.
Follow us, Amy and T.J. for news updates throughout the day.
Listen to Amy and T.J. on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts.
On a recent episode of the podcast Money and Wealth with John Hope Bryant,
I sit down with Tiffany "The Budgetnista" Aliche to talk about what it really takes to take
control of your money.
What would that look like in our families if everyone was able to pass on wealth to the people
when they're no longer here?
We break down budgeting, financial discipline, and how to build real wealth, starting
with the mindset shifts.
Too many of us were never, ever taught.
If you've ever felt you didn't get the memo on money,
this conversation is for you. To hear more,
Listen to Money and Wealth with John Hope Bryant
from the Black Effect Network on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcast.
I'm Daniel Jeremiah.
And I am Greg Rosenthal.
I know that, Greg.
We're teaming up on 40s and free agents,
the podcast that owns the NFL off season.
This is where teams are built.
Free agency, combine, pro days, trades.
Every move matters.
From my draft boards and mock drafts.
To my vaunted top 101 free agents and how rosters come together.
Quarterback movement.
Surprise signings.
We'll tell you what it means and who really wins.
Open your free IHeart radio app.
Search 40s and free agents and listen now.
American soccer is about to explode.
The World Cup is coming.
Ramos sending on to earn.
I'm Tad Ramos.
I'm Tom Boca.
On our podcast Inside American Soccer, you'll get the real storylines.
the biggest decisions and the truth about the U.S. national team.
It wouldn't be a huge surprise if our team ends up in the quarterfinals or potentially a great run into the semifinals.
Listen, Inside American Soccer with Tom Bogart and Tabramos on the iHeart radio app, Apple Podcasts, wherever you get your podcast.
Welcome to It Could Happen here, a show about things falling apart.
Between attacks on Sam Altman's home, a Molotov at a Tesla office, and a warehouse fire causing over half a billion dollars in damages,
This past week or so has been a little snapshot of the Cool Zone.
I'm Garrison Davis.
This episode I'm joined by Robert Evans to discuss one of these events.
To be very clear, we had nothing to do with either of them.
The way you introduced them sounded a little bit like between, you know, going to Sam
Altman's house twice, it's been quite a busy week for us here.
A busy week for us here.
Just wanted to be extra clear.
Extra clear.
No, no, the cool zone just relating to, you know, the state of American society and where it's going.
Yes.
But as is typical of these sorts of events, the reality and motivations of attacks like these may not be as clear-cut or as, like, epic based praxis as one might want to imagine.
Yeah, this is not a gimley situation, you know, it's weirder than that and stupider.
It is weirder.
And this episode we're going to talk about the Sam Altman attacker, who is a lot
weirder than what you might have expected with a philosophical worldview downstream from
the original inspirations behind the Zizians and even the intellectual interests of Luigi
Mangione to a certain extent.
Ah, God, yes, that's right.
Our dear sweet friends, the rationalists.
Ah, man.
The alleged Altman attacker was a college student from the Houston area, whose interest
in the risks of AGI, artificial general intelligence, turned into an obsession, which earlier this year
turned self-destructive. But let's go through the actual events that happened in San Francisco a few
weeks ago. At around 3.30 a.m. on April 10th, a 20-year-old college student named Daniel Moran
O'Gama allegedly threw a Molotov cocktail toward the home of OpenAI CEO Sam Altman,
hitting the top of the security gate on the driveway leading to Altman's residence.
Moreno-Gama did not get past this security gate.
About an hour and a half later, though,
Moreno-Gama showed up outside the San Francisco headquarters of OpenAI
and tried to use a chair to break into the building through the glass doors.
He was stopped by security personnel and allegedly told them
that he came to the headquarters to burn it down and kill everyone inside, according to the federal affidavit.
When he was arrested, officers allegedly recovered incendiary devices, a jug of kerosene, and a blue lighter,
as well as what the FBI has described as an anti-AI document.
Mm-hmm. Okay.
We currently do not have a copy of this document.
It's only described in the criminal complaint,
but this looks like it was a three-part manifesto
allegedly authored by Moreno-Gama,
and the first part was titled, Your Last Warning.
In which he states, he, quote, killed slash attempted to kill, Sam Altman.
And also writing, quote, if I'm going to advocate for others to kill and commit crimes,
then I must lead by example and show that I am fully sincere in my message, unquote.
The document then lists the names and addresses of various investors, board members,
and executives of AI companies as a sort of target list.
The second part of the document was titled,
Some More Words on the Matter of Our Impending Extinction.
Great.
And this section discussed the purported risk that AI poses to humanity,
and we'll get into some of those beliefs a little later on.
But the third part of this three-part anti-AI document
was a letter addressed to Altman, quote,
If You Make It, and reads in part,
If by some miracle you live,
then I would take this as a sign from the divine,
to redeem yourself, unquote. Now, like I said, we do not have a copy of this manifesto in full,
though the affidavit says that Moreno-Gama appears to have emailed similar versions of this document
to people at his former college in Texas. But as of Monday, April 20th, this document is still not online.
But like a lot of zoomers, he does have an online footprint made up of posts from Instagram,
Discord, a Substack blog, and even a podcast interview, where Moreno-Gama discusses his anti-AI views.
It's always sad when something terrible happens to a fellow podcaster.
You know, I just, I have a broad and deep pan podcaster solidarity.
Class solidarity, yeah.
All podcasters are good.
All of them.
Every last one of them.
famously no wrongdoing has ever come behind the microphone of a podcaster.
No, no, no.
It's a special place.
Now, back in January, back when Moreno-Gama was just 19,
journalists found him through some of his posts on an anti-AI discord server,
and he was asked to be interviewed for this podcast about AI called The Last Invention.
Our colleague Ed Zitron was also interviewed for this podcast, actually.
Now, he was interviewed because of his posts weighing the possibility of using violence
to stop the development of AI.
Now, in this interview, he says,
he grew up in the suburbs his whole life and, quote,
grew up quite close to the internet.
And claims that he's been online every day starting at nine years old.
Moreno-Gama explained how his political worldview had largely been shaped by YouTube,
specifically debate videos on YouTube, like Ben Shapiro style.
And these sorts of debate videos are what originally exposed him to views critical of AI.
He says he first heard about AI, though, when ChatGPT came out when he was a sophomore in high school, and first thought it was, quote, the greatest thing on earth because it would allow him to cheat on school, essentially. But after watching
videos debating the risks of AI and the possibility of artificial general intelligence, and the potential threat posed by artificial superintelligence, Moreno-Gama's views started to sour on AI. At first, he was a bit skeptical of these AI-critical debates, but eventually became convinced of the AI doomer arguments and became an acolyte himself.
He started arguing in YouTube comments and talking with friends and family about the danger of
AI. He describes himself getting annoying and, quote, a bit autistic about this, leading to his mom
suggesting he join an advocacy organization. He joined a group called Pause AI in 2024, which is an AI safety advocacy group that organizes online as well as some in-person protests. And he was also part of a Discord server called Stop AI.
His username on both Discord and Instagram was
Butlerian underscore Jihadist,
in reference to the crusade against AI in the Dune novels.
Yeah.
On Instagram, his account had a collection of Instagram stories saved
about the threat of AI,
including a meme about living in a Venn diagram of the Matrix, Terminator,
and idiocracy.
One of these Instagram stories was a picture of a hockey-stick graph showing the length of coding tasks that AI can do and how that's increasing, with the caption, quote, if we do nothing very soon, we will die, I'm sure of that, unquote. Another story contains screenshots of articles, posts, and studies proclaiming that artificial general intelligence or the quote-unquote singularity is already here, captioned, quote, being right all the time fucking sucks when it's about the worst things imaginable, unquote.
Yeah.
So if his concerns about AI started around summer of 2024, by the end of 2025, those concerns grew existential and he started spiraling.
Yeah.
Yeah.
There's a post he made on the Pause AI Discord from November 6th, 2025, writing, quote,
we owe it to everyone who came before us and to ourselves and to everyone we know and love
and everyone who might exist someday to be stronger than that and at least die fighting if it comes to
that unquote. A few weeks later he wrote quote, we are close to midnight, it's time to actually
act. To this a moderator on the server replied, advocating violence in any form is grounds for a ban.
This all seems like a pretty natural progression if you're following the kind of things that the LessWrong crowd believes. That's the website run by a guy named Eliezer Yudkowsky, who is like the patron saint of the rationalists, a huge chunk of whom have become AI doomers.
Like, Eliezer's book that came out a year or two ago is called If Anyone Builds It, Everyone Dies, or something like that.
Yeah. And I'm surprised more of them haven't so far. Like, I think it probably speaks to people not actually believing it as much as they claim to, like, online. But, like, if someone truly believes
the stuff that crowd is saying about how, like, basically the creation of an evil God is inevitable
that will seek to purge the world of humankind, like, of course you do this. It's a really
natural progression. Like, unfortunately, what you have to pair that with is, like, believing all of the hype about AI, right? If you believe that AGI is imminent, that it's on the
way, like if you are like me and I don't believe we're anywhere close to AGI, if that's even
possible, then yeah, you could believe the stuff the rationalists believe and not think that
you need to take immediate destructive action. But if you literally believe that these companies
are on the verge of birthing an evil God, what else is there to do? Yeah, and that's the exact
thing that Moreno-Gama ends up writing about on his Substack. Yeah. And this is the fullest picture we have of his views, from the Substack, because we don't have this manifesto, but he did write at length
about AI, and he's writing about a lot of the stuff that you're talking about here. And we'll
discuss that writing more after this ad break. Canadian women are looking for more. More for themselves, their businesses, their elected leaders, and the world around them. And that's why we're thrilled to introduce the Honest Talk podcast. I'm Jennifer Stewart. And I'm Catherine Clark.
And in this podcast, we interview Canada's most inspiring women.
Entrepreneurs, artists, athletes, politicians, and newsmakers, all at different stages of their journey.
So if you're looking to connect, then we hope you'll join us.
Listen to the Honest Talk podcast on I Heart Radio or wherever you listen to your podcasts.
I feel like it was a little bit unbelievable until I really start making money.
It's Financial Literacy Month, and the podcast Eating While Broke is bringing real conversations about money, growth, and building your future. This month, hear from top streamer Zoe Spencer and venture capitalist Lakeisha Landrum-Pierre, as they share their journeys from starting out to leveling up. If I'm outside with my parents
and they're seeing all these people come up to me for pictures, it's like, what? Today now,
obviously, it's like 100%. They believe everything. But at first, it was just like, you got to go
get a real job. There's an economic component to communities thriving. If there's not enough
money and entrepreneurship happening in communities, they fail. And what I mean by fail is they don't have money to pay for food.
They cannot feed their kids.
They do not have homes.
Communities don't work unless there's money flowing through them.
Listen to Eating While Broke from the Black Effect Podcast Network on the IHeart Radio app, Apple Podcasts, or wherever you get your podcast.
Hi, I'm Bob Pittman, chairman and CEO of IHeart Media, and I'm kicking off a brand new season of my podcast, Math and Magic, stories from the Frontiers of Marketing.
Math and Magic takes you behind the scenes of the biggest businesses and industries while sharing insights.
from the smartest minds in marketing.
I'm talking to leaders from the entertainment industry
to finance and everywhere in between.
This season on Math and Magic, I'm talking to CEO of Liquid Death Mike Cessario, financier and public health advocate Mike Milken, and Take-Two Interactive CEO Strauss Zelnick.
If you're unable to take meaningful creative risk and therefore run the risk of making horrible creative mistakes, then you can't play in this business.
Sesame Street CEO Sherry Weston, and our own chief business officer, Lisa Coffey.
Making consumers see the value of the human voice and to have that guaranteed human promise behind it really makes it rise to the top.
Listen to Math and Magic, stories from the frontiers of marketing, on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
When you listen to podcasts about AI and tech and the future of humanity, the hosts always act like they know what they're talking about and they are experts at everything.
Here at the Nick, Dick, and Paul Show, we're not afraid to make mistakes.
What Coogler did that I think was so unique.
He's the writer-director.
Who do you think he is?
I don't know.
You meet the president?
You think Canada has a president?
You think China has a president?
The law crusade.
God, I love that thing.
I use it all the time.
I wrap it in a blanket and sing to it at night.
It's like the old Polish saying, not my monkeys, not my circus.
It was a good one.
I like that snake.
It is an actual Polish saying.
It is a great thing.
Better version of Play Stupid Games,
win stupid prizes.
Yes.
Which, by the way, it wasn't Taylor Swift who said that for the first time.
I actually thought it was.
I got that wrong.
Listen to the Nick, Dick, and Paul Show on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.
Okay, we are back.
The most in-depth piece of publicly available writing by Moreno-Gama,
explaining his anti-AI views,
comes from a post on his substack blog,
dated January 6th, 2026. This article outlines his belief that AI poses a quote-unquote existential risk to humanity.
And I think this essay was the first thing I saw that really demonstrated that his opposition to AI
is not like based on fears of AI disrupting the economy, contributing to a loss of jobs or, you know,
risking like labor rights for workers, but the belief that AI will become like a superior race
and wipe out or enslave humanity. That is the standpoint that Moreno-Gama is coming from.
Gotcha.
The belief that AI will, quote unquote, lead to human extinction, he says, is based on two ideas.
The first being the rapid progress in artificial intelligence, the rapid technological development
that we've seen the past few years and continuing right now.
For evidence to this claim, he references claims from AI companies themselves that fully automated
AI researchers at, like, an intern level are coming soon, including claims from the guy at Anthropic, who says that he expects these models by 2026 or '27, saying, quote,
the capabilities of AI systems will be best thought of as akin to an entirely new state
populated by highly intelligent people appearing on the global stage, a country of geniuses
in a data center, unquote.
Yes, a whole country worth of geniuses.
All living in your computer.
Cool.
What about like school shooters and stuff?
Like, what about a country full of like psych?
Like, what are the, any, any group of geniuses is going to have like some genius pedophiles.
And like, right?
Like, if they're actually genius, that does imply the capacity for like various different like illnesses and quirks that cause all sorts of wild behavior.
year. One would assume, unless you think that AI is immune to that, but then could it really be
intelligent? Yeah, I mean, a version of that idea is kind of what Moreno-Gama believes: these things, if real, and if they do become, you know, superintelligent, then they might not really have the best interests of humanity at heart. Right. Because they will be interested in
self-preservation. Part of, like, how he gets to this idea that it is an existential threat is by using all of this kind of marketing hype. Yeah. That is being pushed out by AI companies. Well, yeah, that's the thing. Like, that is a logical thing you could infer from the shit being said by Sam Altman and his crew, right? Yeah, and others, like Dario Amodei at Anthropic, and Elon Musk saying that AGI is maybe already here or that the next Grok model will be AGI. These are all things that Moreno-Gama was referencing, like, on Instagram and in other places online. Now,
the second reason that he believes that AI will usher in human extinction is because AI is not aligned
with the interests of the people creating it or with best human interests in general.
And for evidence, he refers to instances of AI models allegedly lying, cheating on tasks,
or blackmailing their own creators, specifically in studies.
He cites a 2025 anthropic study on agentic misalignment, which he characterizes as demonstrating
that, quote, most of the current AI models are willing to blackmail and even kill people
if it ensures their own survival, unquote.
This may be the first terrorist attack I've heard of
inspired by media created by the people they're attacking.
Like, not media that was, like, designed to make them attack people
or, like, carry out acts,
but specifically by the propaganda being put out by the companies
that they're radicalized against.
Yes.
Like, that's very strange.
The media put out to raise the stock price of a company.
Radicalized a guy to take a shot at Sam Altman's house. Or two people to throw a bomb or something, I forget which way it went, but yeah. Yeah, first him throwing this Molotov, then days later, two people fired shots.
First was Molotov, right? No, I mean, it's interesting, right? Because, like, these studies are
basically doing, like, linguistic exercises with these models and getting, you know, large language
models to say or to threaten certain things for various reasons, usually, like, their own survival.
Right. And these are kind of interesting studies that these companies are doing, but they're doing them with the intent of trying to align these models better. That's why the study is on agentic misalignment, because it's trying to tweak these things to be more friendly to consumers. And, like, getting an LLM to say that it will kill or blackmail in order to ensure its own survival is different from the LLM being able to do that, right? That is a big jump, and there's not much currently that facilitates that jump.
Moreno-Gama writes, quote,
ignore for a second these models current limitations
or questions on how truly intelligent or conscious
these models may actually be.
The truth is, all these nuances are completely irrelevant
to my argument.
There are only two questions we should be concerned about
at this moment.
Is it willing to kill to preserve itself
and is it capable of doing so?
These signs indicate that AI is willing
and becoming potentially capable
of doing both of these things
and that is all that matters, unquote.
That's really where this argument rests,
is that even if these models aren't currently intelligent,
even if they can't currently kill,
the fact that they could in the future
is enough to stop any further development of these models.
Right. That's his argument.
Okay.
And he writes that AI will only become a larger threat,
the more we improve it,
and that AI, quote,
will graduate from an active threat to individuals
to an existential threat to humanity.
I estimate the probability of AI causing human extinction
to be nearly certain, unquote.
That's the thing, because, like, there is a massive threat that the new, what is it, mythos upgrade to Claude that's just about to come out actually does represent to individuals and to society, which is that it's going to supercharge fraud even more, which is already up by something like a trillion dollars a year, and that ruins people's lives.
Fraud and cybersecurity.
Yeah, fraud, I mean, fraud often as a result of cybersecurity,
of, like, its ability to penetrate.
And that's really bad, and it doesn't imply the, like,
skynet devastation of the entire world in its biosphere,
because that would be stupid.
There's no reason.
And also, computers don't have access to, like, the nuclear arsenal.
But, like, AI absolutely will enable assholes
who want to scam a bunch of people out of their money to do that better.
Like, that's a problem.
We should probably stop that.
He writes that, quote,
We are dead if we do not act now.
So what does acting now entail?
For starters, stopping all construction
of new data centers,
these are the brains of these models
dictating their physical limitations.
Second, stopping all research
and beginning downscaling
of these data centers,
closing them down
while still keeping them monitored, unquote.
He also proposed striking a deal with China to, quote, stop the AI race,
and to create international treaties akin to Cold War Nuclear Weapons Treaties or Post-Cold War Treaties.
And finally, he advocated that people will need to take strategic action,
which could include sharing information about AI, campaigning, protesting, saying, quote,
although doing nothing is akin to suicide and a disgusting amount of negligence, unquote.
In that podcast interview from around the same time this AI article was published, this is January 2026, Moreno-Gama said,
quote, before we even think about violence
we need to exhaust all our peaceful means
first unquote
which he says includes protesting
and sharing information
but the hosts asked him
about posts he had already made about
quote unquote Luigiing CEOs
and he says
that he didn't really mean that as
a threat, that it was more rhetoric, it was hyperbole, and answered no to a question on if it would
be wise to try to kill Sam Altman, saying, quote, one person is not going to do that much of a dent.
I understand the frustration with a person that might advocate for that, but it is not practical,
it's not worth it. It's almost all risk, no reward. People may feel that way, but not too many
people would do it, unquote. Wow. Okay, man.
Great. I mean, that's, yeah.
Though when asked if we will continue to see AI development going in the direction that it's moving now,
and if so, if he believes that we have to stop the extinction of the human race,
by whatever means necessary, Moreno-Gama just replied,
I'll say no comment.
Okay, man. Well, I mean, yeah.
later saying that he would, quote, only advocate for violence as the final solution, and that he later realized what he had said and tried to de-emphasize the final solution part. But, great guy. That is kind of his ultimate sentiment. Around this time he was considering violence. He was toying with advocating for it publicly on Discord and, you know, in this podcast, but still kind of had some, like, reins on that. Yeah. But it was something in his mind as a sort of, like, final solution to this problem.
This is why it's so fucking irresponsible to push these ridiculous claims about like the power
of this technology and what it's going to be able to do and how smart it's going to be.
And in part because like it makes it hard to actually look objectively at the situation.
And if Moreno-Gama had looked at what's actually been happening with data centers, he would see that, like, more than half of the recently announced projects have been either stalled or halted, and there's been tremendous success on the local level, in, like, counties, and most recently the entire state of Maine, passing laws against the construction of data centers.
Like, yeah, there's actually been a lot of success in fighting the building of new data centers.
If someone wanted to have a positive impact on this, there's a lot of room right now to make
that fight even more effective as opposed to doing like stupid bullshit that you would only
want to do if you had convinced yourself that we were like literally moments away from judgment day.
And he used to be involved in this sort of action and this sort of organizing around that kind of stuff when he was doing stuff with Pause AI. Yeah. But it was really in the past few months where he started to, like, spiral in this very, like, doomer direction. Like, you know, he was already very critical and very doomer about what AI could do. But the sort of intense, like, immediate existential threat that it posed is something that he was really developing at the end of last year and the start of this year. And, you know, this is in part because of the sort of environment that he was immersed in. Yeah. And we'll talk more about that environment after this ad break.
Okay, we are back.
Moreno-Gama had a bunch of other writing on his Substack, which gives us a bit of a closer look at his politics and the sort of information ecosystem that he exists in, beyond just the AI question.
In this AI article that he wrote or published in January, he recommended that people read Eliezer Yudkowsky's book, If Anyone Builds It, Everyone Dies.
Mm-hmm. Thanks, Eliezer.
Do you want to, like, I guess briefly give some background on, like, who this guy is?
Well, he wrote a rationalist Harry Potter. Like, he's the guy who started a website called LessWrong.
Yeah.
which was about basically like logic puzzles and trying to like optimize your thinking and your
responses to behavior with like Bayesian analysis.
Yes.
And he's kind of branded himself as an AI expert.
He's not like a coder or anything and he's not like an expert on machine learning.
He doesn't have any qualifications.
But he's, like, an expert on, again, game-theorying how a superintelligent AI would have to act.
And he generally makes very dire conclusions that are all pretty much based in, like, Terminator or Horizon Zero Dawn, if you want my honest opinion of Eliezer Yudkowsky.
Yeah, yeah.
And his irresponsibility is largely down to him being a dummy.
And he's definitely part of what radicalized this guy.
But the fact that you have Open AI and Anthropic and a number of other people,
like a lot of like folks like fucking Elon Musk,
but also just a lot of like popular public intellectuals, quote unquote,
and their podcast and shit, talking about all this like,
Yep, we're moments away from super intelligent AI that's going to be able to do everything.
Everyone's losing their jobs.
None of it's like, we won't need people doing anything.
If that weren't all over the fucking place, Eliezer would sound a lot less convincing.
Sure.
No, I mean, like, the media environment around, or, you know, the sort of online communities around the rationalists are interesting, because you have a lot of them who are AI doomers, like Yudkowsky, but a lot of them are also AI accelerationists, right?
A lot of the sort of West Coast, you know, parts of the Zizians were kind of like this.
Yeah, there was a splinter, yeah.
Yeah, there's this kind of splintering around people who maybe even believe some of the, you know, existential claims around AI, but believe that developing it is then the best way to kind of get out of that crisis.
And this creates an interesting dynamic among, you know, rooms full of these rationalists or post-rationalists.
And in that podcast interview, Moreno-Gama says that it was videos of Yudkowsky, debate videos on YouTube, that originally exposed him to his work.
And in other posts on his Substack, Moreno-Gama also mentions Yudkowsky's work.
As part of Moreno-Gama's other interests, which include writing on pseudo-spiritual philosophy, he writes about, quote, the ultimate tree of reality or the tree of ultimate reality, the abolition of man, and genealogy of being, and the warrior and the martyr.
And on February 28th, 2026, he posted, quote,
an analysis of political extremes,
which goes over some of his political philosophy,
which relates to like rationalist arguments
or some rationalist arguments around like IQ.
In this essay, Moreno-Gama primarily described himself
as a consequentialist and critiqued leftism
for being trapped in an idealized world,
like a, quote,
schizophrenic patient who attempts the same
zealous plots over and over again
without hesitation.
This essay defends discrimination
as a justified means
of reacting to inequalities
and claims that such statements
are only controversial
because of a, quote,
natural emotional resistance
to intrinsic judgment, unquote,
which he says has nothing to do
with the factual truth of certain claims,
like, quote,
East Asian people are on average, more intelligent than black people, unquote.
Okay.
Mm-hmm.
Now, Moreno-Gama argues that the problem with right-wingism, as he puts it, is that it has no boundary.
Its constant scaling and outward expansion inevitably leads to self-consuming defeat.
Quote, it goes from being about preserving the best of human qualities to being deeply
anti-human and producing zero winners, unquote.
This sort of refers to Carl Schmitt's fascist writing on internal conflict being externalized by the establishment of a border, which expands to push out an increasing number of enemy groups.
Instead, Moreno-Gama proposes what he calls a sustainable form of rightist discrimination
by establishing a sedentary floor for movements slash radical policy suggestions
instead of an always rising idealistic ceiling.
So, for example, instead of deciding that a certain IQ score should be required to vote,
he advocates setting a concrete, unchangeable floor by, quote, limiting voting to certain people who pass critical thinking and civics tests, unquote.
Ah, mm-hmm.
Yeah.
Yes.
So somehow determining a critical thinking and civics test is less arbitrary and less prone to arbitrary changes than deciding a certain IQ score.
I wonder what kind of critical thinking he's going to be interested in.
I wonder what kind of IQ, you know?
Yeah.
Well, yeah, these people always break down to the same thing.
I mean, yeah, I mean, this essay in particular gets really contradictory. He'll say certain things and then later on basically argue the opposite.
It's very, it's very disordered thinking.
I mean, this is, this is the work of, like, a 19-year-old who was in, like, a mental spiral, leading to him traveling across the country to firebomb Sam Altman's house.
Yeah.
Like, this is not the product of, like, you know, a logically ordered mind, despite how a rationalist might, you know, perceive themselves as such.
Yep.
Now, at the end of this essay, he advocates for, quote, ending mass migration and initiating mass deportations.
He says that this is necessary because, quote, nations have a right to preserve their ethnic
identity and low-skill immigration saturates the job markets of these countries, making jobs
which could once earn a living wage become unlivable, increasing the amount of value-draining
people in society by both importing them and undercutting low-skilled natives. Generally,
whiteness in these countries is a decent correlative to some of the things I value, unquote.
Mm-hmm. Some of them, okay. Now, Moreno-Gama isn't white, and he says that he opposes white supremacy, but he does this by saying, like, you know, it's not actually about whiteness. It's that whiteness correlates to certain things I value, like high IQ.
whiteness correlates to certain things I value, like high IQ. And that, that's how he,
tries to justify it in his head. And rather than establish an explicitly white supremacist state,
something he claims to oppose as race is an imperfect metric to discriminate effectively based on
traits, according to him, rather he advocates for, quote, the most effective type of discrimination,
evaluating the possibility of IQ slash merit-based nationalism, unquote. Basically, that's
having a country where citizenship is determined by IQ. And again, this contradicts his previous claim, where he advocates against requiring a certain IQ to vote, instead having a critical thinking and civics test, but then advocates for a country in which citizenship is determined by IQ, and usually citizenship is the factor that determines if someone can vote.
So this is just one example of this sort of contradictory writing in this essay.
Now, Moreno-Gama writes that the only problem with this IQ nationalism
is that it would create a, quote, brain drain across the third world,
leading to worsening conditions in third world countries
and thus even more illegal immigration.
Because people with high IQs are going to then be able to immigrate and gain citizenship to first world countries, right? In a United States where citizenship is determined by high IQ, people with high IQs from around the world would then all just move to the United States.
Of course.
So to solve this problem, he says that he rejects, quote, classical eugenics and extermination, unquote, in favor of what he calls ethical eugenics in the third world.
Oh, oh, ethical eugenics.
Ah, good.
I'm glad someone figured it out.
And this ethical eugenics is to promote,
quote, IQ growth genetically, unquote.
So the whole basis of this article
is his belief that IQ is genetically determined,
not determined by class.
And he never interrogates this idea.
All of the statements that he makes in this article are based on the idea that IQ is genetically determined, that it's not determined by education and class.
It is primarily or almost solely genetically based,
thus ethical eugenics to create high IQ babies,
which he thinks will solve the problems of the world.
Yeah.
Yeah, we never tried that before, for sure.
Like there haven't been generations of times
in which that was attempted that all ended in disaster and mass death.
nah, they just didn't
get it right
because they didn't put ethical
in front of it.
No, those were classical eugenics
Robert.
They forgot to put ethical
in front of it.
That was the big,
ah, what a tragedy.
They were one word away
from greatness.
So, yeah, that is,
that is the other piece of writing
that I think is worth
expounding on
to get a more full sense
of kind of where this guy's head is at,
right?
This is not a leftist
Antifa Super Soldier
firebombing Sam Altman's home.
No.
That isn't to say what happened isn't interesting. But I think, you know, like you said, this is the first quasi-terrorist attack inspired by the sort of rhetoric that these companies are producing themselves to boost their own stock price.
Yep.
I mean, I literally
just saw on Reddit earlier today. The title from the actual, like, post was CEOs make
shocking predictions about AI. Huge job losses are coming soon. 20 to 30 percent, 50 percent
unemployment within the next two to five years. And when you trace it back to its source,
Dario Amodei of Anthropic, just basically quoting some statistics he found from estimations
by like Axios and Fox News.
Yeah.
And talking on some fucking podcast, scaring the shit out of people.
Like, it's, it's every day.
Like, of course some people are going to react like this.
The other reason I wanted to talk about this, the second half of this, this sort of IQ and, like, you know, rationalist stuff, is because this is just another instance of, you know, public acts of political violence, I think, done by people downstream from the rationalists.
You know, Luigi is
a part of this. The
Zizians are also a part of this.
It's like an extended network. This type
of thought does keep producing
acts of public violence like this, and that is
an interesting thing to chart.
On April 14th,
Moreno-Gama's public defender said in court
that he has a, quote, history of
autism and mental health illness,
and that his actions, quote,
appear to have been driven by an acute
mental health crisis. His parents
released a statement that same Tuesday saying, quote,
Our son Daniel is a loving person who has been suffering recently from mental illness crisis.
We have been trying our best to address these issues and get him effective treatment,
and we are very concerned for his well-being.
Unquote.
He currently is facing federal and state charges, including attempted murder.
That is all I have on this for now.
Well, I mean, this isn't going to stop happening. Like, these won't be the last attacks like this. Uh, I haven't seen a big push in the media or from, like, elected leaders to talk about, like, anti-AI sentiment as, like, a terrorist threat yet.
Yeah, that hasn't really seemed to pick up yet.
And I haven't seen this yet be blamed on, like, leftist stuff. I have seen it be blamed on, like, the anti-AI thing.
Yeah.
Uh, which, you know, it is part of. Like, some of the anti-AI movement are people who literally believe it's like a demon god that's going to destroy things. But I'm interested, as there are more of these, as, you know, this kind of stuff continues to happen, in what form that takes and, like, how it actually looks when this starts to hit politics in a big way.
Yeah, US Attorney Craig Missakian said, referring to this case, that they are going to treat it as an act of domestic terrorism.
Yeah, I mean, it is. Like, it is. He was trying to do terrorism. Like,
his goal specifically was to cause changes in policy.
Yeah, but you're right.
Like, I haven't seen them refer to this from the sort of political lens.
Like, there's been statements from, you know, other U.S. government officials referring
to that warehouse fire as, you know, being motivated by anti-capitalism and, like,
threatening our way of life, threatening the capitalist way of life, which is how they
referred to that warehouse fire in Ontario, California.
Right.
I've not seen them specifically kind of lay out, like, any anti-AI sentiment as a motivating factor of terrorism.
Yeah.
Though I'm sure they will quite soon.
Right.
Yeah.
Between the shots fired at the home of that city councilman in the Midwest over his vote in
support of a data center.
Over data centers.
As we see more incidents kind of like that, as we see stuff like this, I think it's very
likely that they will add anti-AI sentiment to the list of common recurring motivating factors
of this sort of domestic violent extremism.
Yep.
All right.
Bye, everybody.
It Could Happen here is a production of Cool Zone Media.
For more podcasts from Cool Zone Media,
visit our website,
coolzonemedia.com,
or check us out on the IHeart Radio app,
Apple Podcasts, or wherever you listen to podcasts.
You can now find sources for It Could Happen here
listed directly in episode descriptions.
Thanks for listening.
On a recent episode of the podcast Money and Wealth with John Hope Bryant, I sit down with Tiffany "the Budgetnista" Aliche to talk about what it really takes to take control of your money.
What would that look like in our families if everyone was able to pass on wealth to the people when they're no longer here?
We break down budgeting, financial discipline, and how to build real wealth, starting with the mindset shifts.
Too many of us were never, ever taught.
If you've ever felt you didn't get the memo on money, this conversation is for you. To hear more, listen to Money and Wealth with John Hope Bryant from the Black Effect Network on the IHeart Radio app, Apple Podcasts, or wherever you get your podcasts.
I'm Anna Navarro, and on my new podcast, Bleep with Anna Navarro.
I'm talking to the people closest to the biggest issues happening in your community
and around the world.
Because I know deep down inside right now, we are all cursing and asking what the bleep
is going on.
Every week I'm breaking down the biggest issues happening in our communities and around the
world.
I'm talking to people like Julie K. Brown, who
broke the explosive story on Jeffrey Epstein in 2018.
The Justice Department, through, we counted, four presidential administrations, failed these victims.
Listen to Bleep with Anna Navarro on the IHeart Radio app, Apple Podcasts, or wherever you get your podcast.
American soccer is about to explode.
The World Cup is coming.
Ramos sending on the Army Stewart.
I'm Tab Ramos.
I'm Tom Bogert.
On our podcast, Inside American Soccer, you'll get the real.
storylines, the biggest decisions, and the truth about the U.S. national team.
It wouldn't be a huge surprise if our team ends up in the quarterfinals or potentially a great run into the semifinals.
Listen to Inside American Soccer with Tom Bogert and Tab Ramos on the IHeart Radio app, Apple Podcasts, or wherever you get your podcasts.
