Breaking Points with Krystal and Saagar - 9/25/25: Iran Warns Of Israeli Attack, Data Centers Spike Electricity Cost, AI Takeover Dire Warning
Episode Date: September 25, 2025. Krystal and Ryan discuss: Iran warns of imminent Israeli attack, data centers spike electricity prices, AI takeover poses imminent danger. Nate Soares Book: https://www.amazon.com/Anyone-Builds-Everyon...e-Dies-Superhuman/dp/0316595640 To become a Breaking Points Premium Member and watch/listen to the show AD FREE, uncut and 1 hour early visit: www.breakingpoints.com Merch Store: https://shop.breakingpoints.com/ See omnystudio.com/listener for privacy information.
Transcript
This is an iHeart podcast.
I'm Jorge Ramos.
And I'm Paola Ramos.
Together we're launching The Moment,
a new podcast about what it means to live through a time
as uncertain as this one.
We sit down with politicians,
artists, and activists
to bring you depth and analysis
from a unique Latino perspective.
The Moment is a space for the conversations we've been having as father and daughter for years.
Listen to The Moment with Jorge Ramos and Paola Ramos.
on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Hey, I'm Jay Shetty, and I'm the host of the On Purpose podcast.
Today, I'm joined by Emma Watson.
Emma Watson has apparently quit acting.
Emma Watson has announced she's retiring from acting.
Has anyone else noticed that we haven't seen Emma Watson in anything in several years?
Emma Watson is opening up the truth behind her five-year break from acting.
Watson said she wasn't very happy.
Listen to On Purpose with Jay Shetty on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Introducing IVF disrupted, the Kind Body story, a podcast about a company that promised to revolutionize fertility care.
It grew like a tech startup.
While Kind Body did help women start families, it also left behind a stream of disillusioned and angry patients.
You think you're finally like in the right hands.
You're just not.
Listen to IVF Disrupted, the Kind Body Story, on the IHeart Radio app, Apple Podcasts, or wherever you get your podcasts.
Hey guys, Saagar and Krystal here.
Independent media just played a truly massive role in this election, and we are so excited about what that means for the future of this show.
This is the only place where you can find honest perspectives from the left and the right that simply do not exist anywhere else.
So if that is something that's important to you, please go to breakingpoints.com, become a member today, and you'll get access to our full show, unedited, ad-free, and all put together for you every morning in your inbox.
We need your help to build the future of independent news media, and we hope to see you at
breakingpoints.com.
So Ryan was out yesterday, but for a very interesting reason, he actually got to attend a meeting
of journalists and activists with the Iranian president, who of course was in town in New York City for the UN meetings.
Let's go ahead and take a listen to a little bit of what the Iranian president had to say in
his speech there about Israel.
Ladies and gentlemen, today, after nearly two years of genocide, mass starvation, the perpetuation of apartheid within the occupied territories,
and aggression against its neighbors, the ludicrous and delusional scheme of a greater
Israel is being proclaimed with brazenness by the highest echelons of that regime.
The scheme encompasses vast swaths of the region. The map itself lays bare the true
intentions of the Zionist regime, intentions that have, of late, been openly
endorsed by its criminal prime minister.
No one in the world is secure from the aggressive machinations of this regime.
It is manifest that the Zionist regime and its sponsors no longer even content themselves
with normalization through political means.
Rather, they impose their presence through naked force and have styled it peace through strength.
Yet, this is neither peace nor power.
It is nothing but aggression rooted in coercion and bullying.
So, Ryan, that was some of the public messaging there from Iran.
What did you hear in that meeting you were able to attend?
Yeah, and so for context for people, this Masoud Pezeshkian was elected on a platform of basically moderate reformism and an argument that Iran needs to get out of its isolationist posture and get back into nuclear negotiations with Europe and the United States.
You know, there are a lot of divisions within Iranian society over that question.
And so his victory represented the ascension of that kind of viewpoint.
And then on the day of his inauguration is when Israel killed Haniyeh in Tehran, which was used by the hardliners to say, look, how many times do we have to tell you?
Like, you can't work with these people.
They don't like you.
Like, all they want to do is undermine your, undermine you.
And it was very interesting hearing from him, because in some ways, to say he seems like he's in over his head is a little unfair, but he seemed to be genuinely outraged and flummoxed at the situation that he's in. He just kept saying, it's like, I can't watch television, seeing what they're doing to these children, it tears me up. It's like, they follow no international laws.
They invite us to
peace talks
and kill our negotiators.
Yeah. And the they in that
circumstance was the U.S.
and Israel. And he said,
and he also said,
I asked him if they were going to do anything fundamentally
different after the 12-day war.
And I don't know if he,
it was an awkward thing where 10 people
would ask questions, then he'd answer them all kind of at once.
One thing he said was that
since the 12-day war, they have now put in place a protocol for the transfer of power, so that there are now, he said, five or six people below him, so that, he said, if they take me out, there are five to six people to take my place, which they didn't have, you know, laid out before,
and at the same time you could kind of feel from him a sense of his own mortality which is wild
because it's like, he's 70 years old,
he's at the pinnacle of his career,
wins the presidential election,
is trying to negotiate a way towards peace,
but also you could just feel that in the back of his mind,
he understands that they might try to kill him.
Yeah.
And then you're sort of like, well, why?
Like, how is killing this moderate guy
who wants to do peace negotiations?
Like, how does that help at all?
It's like, well, don't ask why.
Like, they're probably going to do it anyway.
Well, and for Israel, I mean, Netanyahu has always opposed diplomatic negotiations with Iran; he was opposed to the original Obama nuclear deal.
So, you know, for him, having a moderate reformer in there that's interested in diplomatic negotiations is not a plus.
Yes, historically, whether it's inside the PLO or even in Hamas, the people arguing for more moderation within those organizations have been more likely to be assassinated by Israel than the hardliners, because the hardliners serve a narrative purpose, the narrative purpose that we have to do endless war here. At the very beginning of the meeting,
he waxed very philosophical and poetic about how small we are in the scheme of things. You know,
you look at the planet from the galaxy. We're all just, we're all one planet. And we all have
such a short time on this
earth. Before AI
destroys us all?
What are we? Yes, exactly.
In his case, it probably won't be
AI that gets him, although some
version of it. And he's like, what are we
fighting about? He's like, there's one
country in the region that just refuses to recognize
its boundaries. And
there seemed to be a resignation
about it.
Almost a fatalism.
And the other
kind of thing that you could pick
up from the meeting is just how divorced from American society and culture the Iranians are.
Like, they genuinely just don't understand our political system.
And he even said, we don't have any lobby.
Like, because of the sanctions, they basically can't hire any lobbyists or any consultants.
And he said, we don't have any lobbyists to, like, help us, like, figure out your system.
Like, they're basically left to just read the New York Times and watch Fox News or Tucker Carlson
to try to figure out how they can maneuver within our system, whereas, as he pointed out, Israel is the opposite.
Yeah.
They have friends everywhere in Washington.
But it really did show that, like, they're pretty clueless.
There was some news about how they're only allowed to go, like, three blocks while they're in New York.
They were, like, banned from Costco or something.
They can't go to Costco.
They can't go to Sam's Club.
They can't go to, they can't do any shopping.
And there's like this three block radius, this hotel where this meeting was held and then they can get to the UN and that's it.
And there's a real fear among the pro-Israel faction that if an English-speaking Iranian can get in front of Trump that they can be persuasive.
Because they have a persuasive case that aligns with what Trump wants, which is no war.
Like that's what they want too.
And is he English speaking?
Not really.
No.
So he was, the whole interview was through an interpreter.
And so is their expectation strongly that they're going to be attacked again?
They are basically certain of it.
And like it's a matter of when.
And they think it could, they think it's going to be very soon.
And nobody really knows why.
There's no real explanation.
Like, why are they going to be attacked again?
But Netanyahu continues to say,
openly, like, that this is the year they have to do it.
Like, they have to just finish off the Iranian regime.
But as he said, that's not going to happen.
Like, and so then what?
So as he, the point, another point that he made in the conversation was they expected
that they would hit these top generals, hit these scientists, they'd cause chaos, hit
these civilian areas, and people would come out in the streets and the government would fall.
And that didn't happen.
And, like, the government is in a firmer position, I think, than Israel understands.
Like, what does it, and what does it even mean for the government to fall?
Like, who, like, there's, the own, there's really no kind of, like, who's, like, who's coming in?
Like, there's no, it's just not a thought-out process.
Yeah, they imagine this.
You can kill the Ayatollah.
He's, like, in his 80s.
Yeah, they imagine, like, the Shah's son, the fail son, is going to
pick up the pieces.
Right.
There's these guys, yeah, who I think they're going to fly into Tehran.
But on the back of what?
Like it doesn't.
And so, yeah, can they continue to kill scientists and generals and bomb infrastructure?
Yes.
But then, but to what end?
So during the 12-day war, after we joined the fight overtly and bombed their nuclear facilities, Iran's response to us was pretty limited.
and seemed to be, you know, part of the strategy of, okay, we're going to do something because we have to do something, but it's not going to be anything that's going to make you too mad. Is there any sense of, like, that was the right strategy, that was the wrong strategy, we'd handle it differently next time? You know, did you, do you have any sense of that?
Yeah, I think as far as their response to the United States, I think they think that was the right one. They want a friendly relationship with the United States. So, yeah, what they did is they called Trump and said, we're going to hit your base in Qatar in like two hours. Like, do you need more time? Is two hours okay? He's like, no, two hours is good. So they cleared everybody out, it landed like in the sand of the base, and they called that a day. Now, I asked him what his analysis was of why the 12-day war ended, and it's unclear it was, again, like a direct response to that question, but what he said in response was, he said, basically, they, Israel, did not expect that any missiles, let alone Iranian missiles, could penetrate
their air defenses. And he said it would be untoward to get into the kind of details of what
the damage was that their missiles created. But he felt that it was that. It was that their ability
to get missiles through the Israeli air shield is what actually pushed them, pushed them to end it.
There was some reporting about that at the time, the number of interceptors that Israel had in
stockpile were rapidly dwindling and diminishing. And so there was a sense that there was some
pressure on them as well to, you know, not that they were sustaining the level of damage that
Iran was sustaining, but that they were running into some capacity issues where it wasn't like,
oh, this will be it forever. But we could use a little bit of time to, like, regroup and replenish
our stockpiles. Right. Yeah. The Iron Dome and the other elements of it were, you know, significantly
depleted. Because it, you know, it takes a year to, almost like a year to make the amount
that we used in 12 days. They are now, Israel is saying it is now developing this laser technology, which would be a game changer if it worked, because the lasers cost, like, you know, a couple bucks each once you have them up and running. So that would be,
you know, if that actually worked. But on the other hand, do they actually work against like
hypersonic missiles?
That's something we'd have to find out.
And see if there's anything else that was interesting from that.
Yeah, they could not fathom that any missile, let alone an Iranian missile could penetrate
their air defenses.
Yeah, he said, if they take me out five to six steps down the line have been planned for,
Iran is not Gaza, Iran is not Lebanon, Iran is not Syria, Iran is different, making the
argument that, like, this plan you have is like, we're not going anywhere.
Yeah.
You have to live with us.
Yeah. Israel doesn't seem to be interested in just coexistence with really any of their neighbors.
And he had another line where he said, it is not Iran that is a supporter of terrorism in the region.
And he said that a lot. Because you could tell from his perspective, he's like, we're constantly showing restraint.
And they're constantly killing civilians. How are we the terrorists? That's kind of their perspective.
Yeah. How many new wars have we started, for example? Let me put detail.
up on the screen and just get your reaction to this news item. Iran airs footage it claims shows details of alleged Israeli nuclear program. Iranian state TV broadcast images of documents and footage it claims relate to alleged Israeli nuclear activities. The documentary shows copies of passports, said to identify Israeli scientists, along with info on the location of military sites, and airs footage said to have been filmed inside the Dimona reactor in southern Israel.
And they said in this documentary, the intel minister of Iran said that they had used information obtained in June to hit sensitive sites inside Israel that month.
And they claimed before the June war to have acquired thousands of classified Israeli documents, including details on nuclear and military sites.
So a kind of show of intel force, I guess, here saying, listen, you're not the only ones with spies and access to this kind of inside intelligence.
Yeah, they had teased this, if you remember, right before the war.
They were basically saying they had everything.
And so now they're finally putting some of this out.
But again, it would actually go back to, you know, if any other government had this kind of stuff, they'd leak it to the New York Times.
Or they'd find more sophisticated ways.
To publish rather than in some weird Iranian state TV documentary.
They made an Iranian state TV documentary, and that is, okay, that's fine. But like, right, if they had SKDK advising them on, like, how do you want to influence the American public on this? They'd be like, I know who you need to leak this stuff to.
But yeah, they just don't have that. And so, and that's another function of sanctions, which to me,
the America first argument against these kinds of sanctions would be America should make the best
foreign policy decisions that it can in its own interests. And in order to do that, you either want
there's no foreign interference, no foreign influence in your decision making, or you want to
hear from everybody. And we're not going to kick them all out. So to me, it's just like Russia should
be able to level the lobbying playing field. Iran, North Korea. Like, let them all lobby. Let them all
make their case. Make it in the open. They've got to register with FARA, so we know who their agents are and what case they're making, and then we make a decision. But this idea where
Trump hears only from one side and zero from the other and then launches us into a war
is not helpful for us. Yeah, Bibi's making another trip to D.C., by the way. Another White House visit, it's his fourth. Yeah. Yeah, incredible. There you go. Like, he really has not had to do any laundry in Israel this whole year. Gets it taken care of.
I'm Jorge Ramos.
And I'm Paola Ramos.
Together we're launching The Moment,
a new podcast about what it means to live through a time,
as uncertain as this one.
We sit down with politicians.
I would be the first immigrant mayor in generations,
but 40% of New Yorkers were born outside of this country.
Artists and activists, I mean, do you ever feel demoralized?
I might personally lose hope.
This individual might lose the faith,
but there's an institution that doesn't lose faith. And that's what I believe in.
To bring you depth and analysis from a unique Latino perspective.
There's not a single day that Paola and I don't call or text each other, sharing news and
thoughts about what's happening in the country.
This new podcast will be a way to make that ongoing intergenerational conversation public.
Listen to The Moment with Jorge Ramos and Paola Ramos as part of the MyCultura podcast network
on the IHeartRadio app, Apple Podcasts, or wherever you get your podcast.
Hey, I'm Jay Shetty, and I'm the host of the On Purpose podcast.
Today, I'm joined by Emma Watson.
Emma Watson has apparently quit acting.
Emma Watson has announced she's retiring from acting.
Has anyone else noticed that we haven't seen Emma Watson in anything in several years?
Emma Watson is opening up the truth behind her five-year break from acting.
Watson said she wasn't very happy.
Was acting always something you were going to do?
I was using acting as a way of escaping to feel free.
My parents, it wasn't just the divorce,
it was just like the continuing situation
of living between two different houses
and two different lives
and two different sets of values,
the career and the life that looks like the dream.
But are you really happy?
Fame has given me this extraordinary power.
It's also given me a lot of responsibility.
Listen to On Purpose with Jay Shetty on the iHeartRadio app, Apple Podcasts,
or wherever you get your podcasts.
All I know is what I've been told,
and that's a half-truth is a whole lie.
For almost a decade,
the murder of an 18-year-old girl
from a small town in Graves County, Kentucky, went unsolved,
until a local homemaker, a journalist,
and a handful of girls came forward with a story.
I'm telling you, we know Quincy killed her. We know.
A story that law enforcement used to convict six people
and that got the citizen investigator on national TV.
Through sheer persistence and nerve,
this Kentucky housewife helped give justice to Jessica Curran.
My name is Maggie Freeling.
I'm a Pulitzer Prize-winning journalist, producer,
and I wouldn't be here if the truth were that easy to find.
I did not know her and I did not kill her,
or rape or burn, or any of that other stuff.
They literally made me say that I took a match and struck and threw it on her.
They made me say that I poured gas on her.
From Lava for Good, this is Graves County, a show about just how far our legal system will go
in order to find someone to blame.
America, y'all better wake the hell up. Bad things happen to good people in small towns.
Listen to Graves County in the Bone Valley feed on the IHeart Radio app, Apple Podcasts, or wherever you get your podcasts.
And to binge the entire season ad-free, subscribe to Lava for Good Plus on Apple Podcasts.
Ryan covered something we've been tracking: skyrocketing electricity prices. I feel it's one of the under-covered stories of the year, truly, both in terms of people's lives, also in terms of how it could have political impact. And there's some pretty nefarious legislation that is moving
through state legislatures and specifically in the state of North Carolina. Yeah, and it moved all the way
through. And this is, yeah, this is a story that's happening all across the country. But North Carolina is a
particularly interesting place for us to look because North Carolina actually really leaned
heavily into renewable energy, clean energy. They have tens of thousands, maybe over 70,000 jobs,
just directly in the clean energy industry. And because of that, the North Carolina senators
have been very stridently defending clean energy in the Senate. Thom Tillis announced his
retirement, the Republican senator from North Carolina, in a speech opposing Trump's
big, beautiful bill.
Like, he ended his career.
He's like, you have no idea how bad this is going to be for North Carolina.
I'm out of here.
Like, you're going to primary me over this?
Fine.
Like, I quit.
Yeah.
Like, this is so stupid.
So, let's start actually with E6 control room.
Sorry, this is out of order here.
But this is a report from just this week that solar company, Blue Ridge Power,
is like just straight up shutting down.
So this layoff of 517 workers that you see up on the screen, in Asheville and Fayetteville, is everybody. Like, they're just rolling up shop. This is a company that was launched in 2021 at, you know, the height of the kind of pivot towards clean energy, with North Carolina being the place where so much of this was going.
And we try to remember some of these numbers.
So the Blue Ridge reports that it built seven gigawatts of solar infrastructure across the country.
It had something like 30 gigawatts in the works, which we'll get back to those numbers.
Because if you hear from Trump, solar and wind are fake.
It's a giant hoax.
They don't actually produce any power.
30 gigawatts is an enormous amount of power.
Somewhere between 70 and 90% of all new power added to the grid over the last several years has been renewable energy. So the idea that it, like, doesn't do anything is complete nonsense. So we'll get back to this in a second. Now we can go to E1. So over the
summer, there was a bill pushed through the North Carolina legislature called S266, which did
something utterly extraordinary. And first of all, its lead sponsor was the former CEO of Duke
Energy, which is like the big utility. Monopoly. In Virginia, too. In Virginia and in North
Carolina, they've got this Appalachian area locked down. So this former CEO pushes through this bill.
And what does the bill do? It says that going forward,
If there is a contest between consumers and data centers over who gets the power, if there's
limited power, the data centers are going to get it. And when it comes to who's going to
pay for it, it's going to be more on the consumer side than it's going to be on the data center
side. Like a straight up giveaway to the data center industry in North Carolina.
just, and you're like, wait, that's impossible. Seriously? No, but go look it up. Like, that's actually
what the bill did. Yeah. And the numbers, if I'm recalling correctly, it's like right now, regular residential consumers use 40% of the power and they pay 40% of the cost. If you just kept the landscape the same with this bill passed, they would still consume about 40% of the electricity and they would pay 50% of the cost. Right. So they're not putting everything on you, but your electricity bills are going to go up to, you know, subsidize these giant data centers backed by some of the largest and most, you know, profitable companies on the planet.
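A rough back-of-envelope sketch of the cost shift being described here: only the percentage shares come from the conversation (households consuming roughly 40% of the electricity while their share of the cost moves from about 40% to 50%), and the total system cost below is a made-up placeholder just to make the ratio concrete.

```python
# Hypothetical illustration of the cost-shift math described above.
# Only the percentage shares come from the discussion; the dollar total
# is a placeholder.

TOTAL_SYSTEM_COST = 10_000_000_000   # placeholder: $10B annual system cost

residential_use_share = 0.40   # households use ~40% of the electricity (unchanged)
cost_share_before = 0.40       # households pay ~40% of the cost today
cost_share_after = 0.50        # ~50% of the cost under the new allocation

bill_before = TOTAL_SYSTEM_COST * cost_share_before
bill_after = TOTAL_SYSTEM_COST * cost_share_after

print(f"Residential consumption share: {residential_use_share:.0%}")
print(f"Residential cost share: {cost_share_before:.0%} -> {cost_share_after:.0%}")
print(f"Aggregate residential bills rise by {bill_after / bill_before - 1:.0%}")
```

On those assumed shares, households consuming the same 40% of the power would collectively be billed about 25% more.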
Yeah, no, yeah, exactly. Incredible. And so meanwhile, North Carolina is now, oh, and so that bill was vetoed by Democratic governor, Josh Stein, and then a combination of Republicans and a handful of Democrats overrode his veto in the legislature. So that bill is now law.
Now we'll get into some of the data center activity that's going on in North Carolina.
We could roll E2 here from More Perfect Union.
I'm standing at the site of META's new $10 billion data center in Richland Parish, Louisiana.
It'll be the largest data center the company has ever built, about the size of Manhattan.
The project is already bringing about big changes to this small town way of life here.
A lot of folks are on their own, and if they are on this very strict budget, they're going to have to adapt.
And if they don't, they won't make it.
I don't know that gutting 2,500 acres of land just so we can mine data so that we can better market and sell people's stuff.
I feel like it's getting carried away.
Why are the data centers coming to the small rural towns?
They're coming here to benefit themselves.
They're not coming here with the intentions on benefiting the community.
And see, that's a little bit unfair.
They're not just mining data so that they can more effectively market stuff.
We can put up E3 here.
You can also, what, chat with a horse?
Is that right?
Yeah, that's right.
A literal horse.
This is on Meta, I believe, where you can create these custom chatbots. Another one that was popular in there was, like, rich but strict parents.
So, yeah.
So the glories of technology.
And if you're just listening to this, the tweet was something like your electric bill is going up 30% so you can chat with this horse.
And of course, it's not just energy.
It's also water.
You can put up E4.
This is another local North Carolina news headline: AI data centers would need millions of gallons of North Carolina's water supply a day.
Everybody loves these, you know, free chatbots, chatting with the horse.
Turns out, shockingly, there's no such thing as free.
Put up E5 here.
So Amazon itself is planning to invest $10 billion, with a B,
in North Carolina data centers in an AI push, which is finally starting to be controversial.
People are like, hold on a second.
We're paying, like, how are we benefiting from this?
This is not so cool.
So you were saying that they're trying to do that in your neighborhood, too?
Yeah, I was just looking at the update on it.
It looks like the board of supervisors said no, and Amazon is suing because they feel they have a right.
to build this data center in our town, apparently, but...
Abundance bros got to swoop in.
That's right.
Yeah, they got to get these NIMBYs in check, I guess.
But, I mean, there's so much to be said about this problem, both from a...
Naomi Klein talks about this movement as being sort of anti-creation because they want to sort of suck up all of the natural resources, whether it's energy or whether it's water, whether it's just land is enough.
big part of it in order to create this sort of like weird AI shadow world that is only going
to benefit you know economically it's going to further consolidate wealth and power in the hands
of a few so it's obviously very dystopian I don't think it takes a rocket scientist to figure out
given that we do know a lot about our political system and most people have some experience with it
that when you have that contest between regular random person and giant corporation
for the energy and who's going to foot the bill, what direction they're going to go in.
Because, I guess like Iran, we don't have lobbyists.
Right. We don't.
Yeah. And the Duke Energy guys are going to win out every time.
Right. You can see how democratic the North Carolina legislature is, small-d democratic.
It's like maybe 10 power centers or data centers or whatever. There's millions of people.
Right. And yet the data centers will all suffer. The data centers will win.
Speaking of democracies, Chairman Xi was at the UN this week and followed Trump's speech, where he told everybody that wind and solar is a hoax, climate change is a hoax, it's a scam, green energy is a scam. Somehow, Chairman Xi is being completely fooled by the hoax. So let's roll some of his remarks before the UN. So he says, however the world may change, Ryan, China will not slow down its climate actions, will not reduce its support
for international cooperation, and will not cease its efforts to build a community with a shared
future for mankind. China is willing to work with all parties to earnestly honor the principle
of common but differentiated responsibilities. Do the utmost respectively and collectively
and build a clean, beautiful, and sustainable world together.
And I think we probably get the point of what's going on here.
You know, I was thinking about the way the political dynamics of this, Ryan, have flipped
because you started with the layoffs at this solar company.
And, you know, previously it was like the lefty green energy people who were supposedly
the, like, job-killing, you know, haters.
And now we're in a totally different dynamic.
I mean, first of all, the industry has progressed and the science has progressed significantly.
The tech has progressed significantly so that it's much,
more powerful, much more affordable. And you can see that, you know, China is leading the world
in development of all if not most, you know, most if not all of these technologies. The Biden
administration made like the tiniest bit of investment in that direction. And it was starting to
bear some fruit. And now Trump, it's not like he has an all of the above strategy. He has an
active vendetta against any sort of green energy, which is incredibly counterproductive for him
politically, certainly for American consumers. You can see it in the electricity price hikes,
which is partly about the Big Beautiful Bill, partly about these AI data centers,
partly about increased demand because of the climate crisis. And you know, you can see it in terms
of job losses like this. So it seems to me like the historic political dynamic that Trump
still has like wrapped in his 1990s brain has now reached an inflection point where, you know,
the landscape of both jobs and cheap energy is all in the direction of,
you know, really investing in renewable and green energy.
Yeah, we're maxed out on the dirty stuff.
If you want to expand the energy production of the country,
you have to do it through these new types of energies.
And Trump doesn't want to do it.
So we're just going to get super high electric bills
and then rolling blackouts.
Sounds great.
China will be okay, though.
I have another important AI story for you as well related to this.
We're very fortunate to be joined this morning
by an AI expert who has a new book out
with a dire warning.
Let's go ahead and put the book jacket up on the screen. So Nate Soares is the president of the Machine Intelligence Research Institute, really focuses a lot on what is called AI alignment, and he has just co-authored a new book, the title of which you can see there: If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. So no ifs, ands, or buts.
If they develop superintelligence, your argument is that humanity will be destroyed.
Nate, welcome. Great to have you.
My condolences about the book title.
Yeah. Well, I mean, I know this was an intentional choice. You don't want to say, well, maybe, and if we don't, it's possible. No, this is,
if superintelligence is developed, this is what's going to happen. What gives you confidence,
that level of confidence in that prediction? You know, there's a bunch of reasons that make it
look somewhat overdetermined. So the way we build modern AI is more like growing it than like
carefully crafting it. No one knows exactly what's going on inside these things. Humans understand
the process that tweaks all sorts of numbers, you know, trillions and trillions of
numbers inside these AIs, they don't understand what comes out the other end.
We've already seen signs that they are, you know, developing the very beginnings of preferences
and drives, perhaps, that nobody wants, nobody asked for. So that's one whole part that
makes it difficult. Another thing that makes it difficult is we don't get second tries on building
smarter than human AIs. If we build machines that are smarter than us that can develop their own
technology, develop their own infrastructure. And if they run off autonomously and, you know,
grab control of the world's resources, there's no redos if they're going in the wrong direction.
That's a very difficult problem for science. And then, you know, a third branch is the current
industry does not really seem to be taking this issue with the gravity it merits. You know,
the approaches on this problem, even the people who are most optimistic, say there's, you know, a one in four chance this kills everybody, or does an equivalent disaster. I think those
numbers are low, but the field is racing ahead even while, you know, everyone from the Nobel
Prize winning godfather of the field to the lab heads to half the researchers say this is
incredibly dangerous stuff, people are sort of locked in a race. Each of those three individually
would be hard to deal with if that was our only problem. All three problems seem to me to mean we just need to back off.
How does a super-intelligent AI solve the energy problem?
Like, where does it get the energy to sustain itself indefinitely?
So a modern AI runs in a data center that takes as much electricity as a small city.
A human runs on about as much electricity as a large light bulb.
So we know it's possible technologically to run very smart, or at least
human-level smart intelligence on less electricity, less matter.
AI isn't there yet, but if you get much smarter than human AIs that can think 10,000 times
faster, that can copy themselves, that don't need to sleep, they don't need to eat,
and they work on these challenges, you know, there may be a period of time where they rely on
human infrastructure, but eventually they're going to get off of the human infrastructure,
get on to, you know, more efficient systems and, you know,
automate the infrastructure that humans are currently doing.
You know, that can ultimately be done by robots, which are more robust, operate faster, more reliable.
Eventually, you're going to, if we keep pushing to machines that are smarter than us,
you'll eventually see them on their own infrastructure.
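An order-of-magnitude comparison of the energy scales being gestured at in that answer; the specific wattages are assumptions added for illustration (a human brain at roughly 20 watts, a large AI data center at a few hundred megawatts), not figures given in the interview.

```python
# Back-of-envelope comparison of the energy scales mentioned above.
# Both wattages are illustrative assumptions, not figures from the interview.

HUMAN_BRAIN_WATTS = 20            # assumed: ~20 W, roughly a light bulb
DATA_CENTER_WATTS = 300_000_000   # assumed: ~300 MW, roughly a small city

ratio = DATA_CENTER_WATTS / HUMAN_BRAIN_WATTS
print(f"Assumed data center draw is ~{ratio:,.0f}x the power of one human brain.")
```

The point being made is that biology is an existence proof that human-level intelligence can run on a tiny power budget, so today's data-center energy footprint is not a permanent floor.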
One of the things that was a shift in how I was conceptualizing AI or somewhat of a shift,
and I've done some reading prior to your book as well, was your description, which you mentioned
before, of this isn't something we created, this is something we grew, which is very different
from other technologies.
You know, I don't know how this laptop works, but the person who engineered it knows how
it fits together and has a predictable idea of what it's going to do if I push this button
or that button.
AI is totally different from that.
And I think your way of conceptualizing it is useful: basically, we created these sort of alien beings and are intentionally racing towards trying to make them vastly more intelligent than we are.
Can you speak a little bit to that for people who haven't like conceptualized it in that way and why that creates a more profound risk than people may realize?
Yeah, absolutely. So, you know, as you say, in your laptop, if something goes wrong, if there's an error or, you know, a crash, there's a programmer who could, in principle, trace that down
and say exactly what happened. They know what every line of code means. They could fix it, right?
When an AI threatens a reporter, which started happening a couple of years ago, when an AI in the lab
tries to escape, which we've seen in some lab conditions, we don't know exactly what's going
on, maybe it's sort of role-playing. We can't tell because these things are grown.
But, you know, when an AI tries to escape, when an AI, you know, encourages a teen to commit suicide.
Nobody can go in and find exactly some line of code that caused that to happen.
You know, it's not like there's some bug in the human handcrafted lines where they can say, oh,
whoops, I didn't know that this would have that interaction, right?
The way modern AI is made is we take a huge amount of computing power and a huge amount of data,
and there's a known process for tweaking the numbers inside the computers to better predict the data.
But, you know, this runs for a huge amount of time.
You know, these things can take a year to train.
We understand the process for tweaking the numbers.
What comes out the other end?
Completely opaque.
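A minimal sketch of the "known process for tweaking the numbers" described here, written as a toy gradient-descent loop: it illustrates the general training recipe (nudge parameters so the model predicts the data better), not the code any particular lab uses, and the tiny network and synthetic data are purely illustrative.

```python
import numpy as np

# Toy illustration of the training recipe described above: the *procedure*
# (nudge the numbers so the model predicts the data better) is well understood,
# even though nothing here tells you what the resulting numbers "mean."

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                # synthetic input data
y = np.sin(X.sum(axis=1, keepdims=True))     # synthetic target to predict

W1 = rng.normal(scale=0.1, size=(8, 32))     # "the numbers inside" the model
W2 = rng.normal(scale=0.1, size=(32, 1))

lr = 0.01
for step in range(2000):
    h = np.tanh(X @ W1)                      # forward pass: make a prediction
    pred = h @ W2
    err = pred - y                           # how wrong the prediction is
    # backward pass: work out how to tweak each number to reduce the error
    grad_W2 = h.T @ err / len(X)
    grad_h = (err @ W2.T) * (1 - h**2)
    grad_W1 = X.T @ grad_h / len(X)
    W1 -= lr * grad_W1                       # tweak the numbers
    W2 -= lr * grad_W2

print("mean squared error after training:", float((err**2).mean()))
```

At frontier scale the same kind of loop runs over trillions of parameters, and inspecting the trained weights tells you very little about what the model will do, which is the opacity being described.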
There are people who are trying to understand what's going on in there, and they're making small amounts of progress.
But the progress they're making is very slow compared to how fast these, these,
AIs scale up. And we're already seeing signs of AIs doing things nobody wanted, doing things
nobody asked for. And we're seeing signs that the people who would like to be in charge
can't point these AIs where they would like to point them. You know, you have the case of
Elon Musk's XAI and their Grock Chatbot, which was speaking too liberal for their
tastes. So they tried to make it speak, you know, less woke. And then it started declaring itself
Hitler. Right? Neither end of that spectrum, presumably, was where they wanted the AI to be. But these aren't programmers carefully crafting
things exactly as they want. These are people sort of growing a whole new type of entity that
they have very little direction over. Back to the robot point. So how do the AIs interact with
the robots? So, you know, robots are an easy way to visualize. AI is getting control over
the physical world. And we are seeing people build lots and lots of robots and say, you know, we want to train the AIs to be able to steer a robot body.
Pretty plausibly, humanity is just going to hand lots of robot bodies to AIs because
people want to. They think it would be cool or profitable.
Even if it wasn't for robots, there's all sorts of ways that AIs could have all sorts
of effects on the world. You know, the Internet is not a separate digital realm that is,
that is blocked off from the material realm. We've already seen AIs, you know, wrap certain
humans around their fingers. Some of those humans are maybe a little bit more on the vulnerable
side to begin with, but that doesn't really matter when you're getting the humans to, you know,
send messages to other humans in a code they don't understand. Or when you're getting those humans
to, you know, do tasks that the human doesn't quite know the reason. You know, we've already seen
send them money. Send them money. Yeah, a lot of people will do things for money. So, you know,
we've already seen AIs in lab conditions, and again, maybe they're just play-acting somehow, but we've already seen AIs in lab conditions try to avoid shutdown, try to escape the lab.
Right now it's cute because they're not smart enough to succeed. But, you know, the warning
signs are there, and what's missing is the intelligence. And these labs are racing to make these
AI smarter and smarter. And so you go through in the book a bunch of the different, like,
you know, the cope, basically from the industry of why it's going to be fine, right? And one of the
things that is, I don't know, I think probably one of the most persuasive cases of why this
will all work out okay, is that, okay, us humans may not be smart enough to figure out the
alignment problem, but we can use the AI we're developing to figure out the alignment problem
before we create the superintelligence that escapes and tries to kill us all. What do you make of
that argument? And do you think that that's the best argument, or are there other ones that you
think are more persuasive? You know, best is maybe a low bar.
I don't personally put much stock in that.
One reason, you know, there's maybe two reasons.
One is if you, you know, humanity has been, human scientists have been trying to figure out intelligence since at least 1954 with the Dartmouth conference that sort of kicked off the field of AI.
Maybe it was 1956, I'm not quite sure.
And they never really made progress in figuring out how intelligence work.
The current AI revolution is that we figured out how to grow the intelligence, right?
And we don't understand what's going on in there.
We can't, you know, carefully direct what's going on in there.
Figuring out how to make these AIs good, figuring out how to make them have good effects,
that would require a lot more understanding of what's going on in there, that humans, you know,
we're like, oh, man, that's tough.
Maybe we'll toss it to the AIs.
But if you're trying to toss that to the AIs, it's still going to be tough for them.
So you've either, you know, got to make those AIs, you know,
If the AIs are sort of dumber than humans, they're not really going to be much help.
Right.
If the AIs are substantially smarter than humans, they can figure out all the intelligence,
you're sort of already in trouble, right?
And so that's one of the fundamental, like, why might this be tricky reasons?
Although then there's sort of a deeper reason of, you know, humanity just does not,
we're just not taking the problem with the seriousness it deserves.
You know, 10 years ago, people said, oh, well, this won't be a danger because no one would
ever be insane enough to put an AI on the internet. You know, they'll only be dealt with by a highly trained professional gatekeeper who
will make sure that their effects in the world are like only beneficial ones, right? And 10 years
ago, we would get arguments of like that would still be difficult. It would still be difficult
to get, you know, miracle medical technology out of this AI because it's, if you don't understand
what it's doing, if you don't understand, if it's smarter than you and it's coming up with solutions
that you can't generate yourself, it's hard for you to tell which ones are good, which ones are ill. You know, you can't make hands that are only wielded for good purpose. That was how we would
argue 10 years ago. Now, people are putting AI on the internet at the moment they possibly can.
You know, the real world doesn't do the like careful, clever, maybe if we do this exactly right,
well, it's just not the route we take. Another of the arguments that is made is basically like,
okay, well, if you compare our intelligence to like an ant, right, sort of like equivalent to what we're
thinking about in terms of our intelligence versus a super intelligence. And I don't know that we've
been great for ant society or particularly care whether ants live or die, but we're not actively
trying to like mass murder them or we haven't succeeded in mass murdering them, I guess you would
say. So why wouldn't its relationship be to us more like us to ants or another thing you talk about
in the book is like, you know, maybe we'll be cute little pets for them. Like maybe that's not
great, but we're still living. Like we're still around and they like, you know, pat us on the head
and give us our food in a bowl or whatever. Yeah. You know,
there are people who object to the title on the grounds that, you know, maybe humans will be
kept as pets or maybe somehow backup copies of humans will be made. And then if the AI ever
travels the stars, you know, much later in the future, maybe it would sell us to the distant aliens
or something. If these are humanity's best hopes for AI, it seems to me that we should really
be backing off. You know, I could say more about... Your best hope is, I'll be a pet. Yeah, I could
I could say more about why I expect the AI to kill us anyway, which is mainly that, you know,
humans kill ants when they are on the ground where we're building a skyscraper.
And if an AI, you know, builds lots of infrastructure across the whole surface of the earth,
it doesn't need to hate you.
It just, you know, maybe you die as a side effect.
That's my top guess.
But, but yeah, could the AI keep us as pets?
Sure.
But again, yeah, if that's our real hope, that seems like another sign that we should be backing off from this one.
What's the, what are the different mechanisms by which it would come true that they would kill us?
Like, Terminator, they're shooting at us? Like what, like they, a super bug.
Yeah, you know, like they turn off all the electricity and we'd just like kill each other.
So from the perspective of the AI, you know, if it doesn't care about us one way or the other, the reasons to kill humans are, A, humans could make another AI, which would be a competitor, and, B, humans could do things like launching the nukes,
which would be at least an inconvenience.
But the reasons to keep humans around
are that they're running the energy infrastructure.
They're running the supply chain.
So from the perspective of an AI,
it's sort of, it's not like it's immediately trying to kill humans.
It's like it's trying to get on a more efficient, more reliable energy infrastructure.
And then, you know, a top guess is that we die,
not because it literally tries to kill us,
but because it just starts building, you know, automated factories.
You know, Sam Altman the other day was talking about data centers that in an automated way build more data centers, right?
Once you have like an automated loop of infrastructure building infrastructure and that starts proliferating, maybe we just die underfoot, right?
If the AI tried to kill us because it worried about us launching the nukes, because it worried about us creating competitors, you know, sure, maybe a good guess is it makes a superbug, you know, some lethal virus.
maybe it makes something, you know, even more deadly.
You know, that's a little bit like someone in the 1800s trying to predict war in the year 2000, right?
They might be able to guess that there'll be stronger explosives; they're not going to be able to guess nukes, right?
Super intelligence, hard to predict exactly how it wins, but it's a little bit like predicting a football game between your high school team and an NFL team.
You know, I can't tell you the plays, I can tell you the winner.
Another argument would be, okay, look, I'm not personally the biggest Elon Musk or Mark Zuckerberg or Sam Altman fan, but surely they're not looking to kill all of humanity.
So why would these theoretically intelligent individuals pursue a path that is leading us inexorably
towards an apocalypse, which would not only kill all of us, but them and their family and everybody
they love, too?
Yeah, you know, we are in a somewhat fortunate situation where some folks just come out and tell us.
You know, Elon Musk earlier this summer said, I resisted doing this for a while until I realized that
I could either be a bystander or a participant. And I would rather be a participant if it's
going to happen anyway. And, you know, I think a lot of these folks correctly know that if they,
even if they don't jump in the race, someone else will jump in the race. And if they think they can
do it better than the next guy, from their perspective, you know, it's slightly better for them
to be in the race. But if you look at what these lab heads are saying, you know, I think Elon Musk was saying, I think there's a 20% chance this kills everybody. I think the head of Anthropic was recently saying, I think there's a 25% chance this has like an equivalent level of disaster. And this isn't just the lab heads. You know, half the researchers, if you survey them, say there's at least a 10% chance this kills everybody. The Nobel laureate godfather of
the field says this is, you know, at least 10% chance. And then in some conversations,
they'll say higher numbers. People think this is real dangerous. I think a lot of these numbers
are low. I think you're seeing the people racing are the ones who are most optimistic about their
odds of pulling this off. But no one's saying, oh, this is safe.
No one's saying, oh, we're going to be fine. You know, people know this is an incredibly risky endeavor.
And, I mean, the numbers they're tossing around, even if they're lower than mine, you know, if one engineer was saying, I looked at this plane, I have some expertise.
I think this plane is definitely going to crash.
And some other engineers said, don't listen to them.
They're way too pessimistic.
This plane is 20% likely to crash.
Right.
You don't get on the plane.
Yeah.
Yeah.
Have you seen Don't Look Up?
I have.
It feels like if somebody told you there was a 20% chance the asteroid was going to hit the Earth and kill everyone on it, that's consequential.
It's consequential.
And I think these numbers, you know, if you look at these people saying why they think they have only a 20% chance of it killing us, they'll give ideas like, oh, we'll pass the problem to the AIs.
Right.
That's sort of a worrying situation.
You know, when engineers understand how to make a complicated system very safe, it doesn't sound
like them saying, oh, we'll pass it off to some other, you know, we'll punt the problem down
the line.
Yeah.
Right?
If you're talking to like a nuclear reactor engineer who's saying why it's going to be safe,
they can tell you, you know, well, we actually know all the byproducts.
We know exactly how these things are going to behave.
We know all of the dangerous pathways it may take.
We know why it's not going to take those pathways.
We have these mechanisms for shutting things down, right?
The world where people are saying, oh, it's going to be fine because we're just going to
like pass the problem off to something else. We can't solve it ourselves, but we think we're
going to be able to get other things to solve it. That's the situation of early hopeful
scientists who aren't in a mature field yet. And early hopeful scientists often get things wrong
in the first try. Usually scientists learn by trial and error. You know, the first people building
rocket engines blow themselves up with the rocket engines. The first people doing chemistry
poison themselves with mercury. Humanity usually learns from its mistakes, and then it's the next
generation of scientists that do better, and the next generation of scientists that can invent,
you know, a working refrigerator or a working nuclear reactor. In this situation, the hopeful
optimists who are saying everything's going to be easy, they're going to screw things up the first time,
and there's not going to be any chance to learn. There was an analogy in your book that you
lay on as well that, for me, was, again, intellectually useful of basically comparing AI
researchers of today to alchemists.
where it's like, you know, alchemists would study, like, I know what this, when I put this with this, what the reaction is. And I've tracked it very closely. I know everything there is about it. But they had no idea of the actual, like, chemical reactions that underpin those reactions, those, you know, what they're experiencing and observing in the real world. And that it's sort of like that with AI. Like, even the people who have developed and understand, you know, the gradient system and how this all works, it's not like they really understand intelligence. It's like,
Not like they really know what's going on underneath the hood, which is terrifying to actually think about when you consider the risk here.
And even, like, let's say that you're completely skeptical of this and you think that this is all hogwash and you don't think that any of this is going to come to pass.
It's just alarmist, et cetera.
I mean, we just covered the way the amount of resources that are being sucked up and the way electricity prices are skyrocketing.
And the way that all of these guys will also come out and say there's going to be no need for human labor and the social contract needs to be rewritten, et cetera.
So you're calling not for some, like, reformist, hey, we need an AI alignment report from all these companies, or we need to centralize AI development, which would be a more extraordinary proposal and have, you know, all nations contribute.
You're saying, shut it down.
So talk about that.
And why you, because, I mean, you know, frankly, when I hear that, it's that, that feels hopeless.
Because we can't even, you know, agree on basic things, let alone getting.
the entire world to agree, we're not going to do any of this. Right. And our whole stock market
is now, like, based on AI hopium. So what makes you come to that being the required solution
for humanity to persist? Yeah. So for one thing, we don't need to give up on chat GPT. We don't
need to give up on self-driving cars. We don't need to give up on AI for medical applications and trying to get cancer cures.
The thing that we need to stop is the race towards smarter than human AI,
the race towards superintelligence.
This is what the labs say they are racing towards.
But, you know, I think a lot of folks in the policy world
don't understand that this is what we're racing towards.
This is what's coming down the line.
They think of the AI as just sort of, you know, chatbots,
which are smart in some ways and dumb and others.
You know, we can still do those.
It's the racing to smarter and smarter and racing to smarter than human AI, which is a suicide race.
In terms of whether it's possible, you know, I would keep having hope at least until we see politicians start to understand the possibility of superintelligence and that people are racing towards it.
We don't yet see politicians saying, I will gamble the world on what looks to me like a one in three chance it kills us all, and full steam ahead anyway. We mostly see politicians not
noticing. In the field of AI, people have noticed that there are these huge dangers. The rest of the world
doesn't seem to really buy it yet. If the world buys it, if the world understands just how dangerous
this tech is, again, not in chat GPT today, but just how dangerous superintelligence would be,
I think there's a decent chance we can in fact stop it. Hopefully we can get a treaty, or at least
a bilateral agreement between some of the biggest powers in the world that go to other powers
and say, look, we're not tolerating this because we fear for our own lives. And even if we can't
get that, you know, there's maybe some hope in situations where every nation sabotages every other
nation's superintelligence project out of fear for their lives. You know, there's the smart way
with a treaty and then there's the like, well, we're all scared and so we're all stopping everybody
else. There's possibilities. We just need to understand the danger first. Outside of the U.S.,
Who else is gunning for it?
You know, I think right now there's a very small pool of people
who can really push these AIs further and faster.
Most of it's in the United States,
maybe a little bit in the UK,
where Google DeepMind is housed.
There's definitely folks around the world who are trying.
You know, DeepSeek, out of China,
was a bit of a shock to a lot of people
I think last December, although I could have that month wrong.
But right now, you know, this takes very, very specialized knowledge.
It takes very, very specialized computer chips.
You know, in data centers, as you guys were talking about, that are huge, that take
electricity of a small city.
You know, this, it wouldn't be that hard to find all the places where these specialized chips
are made, track all these chips, find these data centers, and then make sure that they're
not being used to make AI smarter.
And, last question for me, what's your sense of timeline? Like, how much time do we
have to stop this? It's, it, there's a lot of ways it could go. You know, it could be that
an AI in the lab tomorrow goes over some small capability level where it can start doing
automated AI research and maybe everything goes fast. You know, if you are watching the difference
between different types of primates, it would be very hard to tell when they're going to go over the
line between chimpanzees and humans. So for all we know, tomorrow they go over some line in the lab
and the AI start improving the AIs. Or for all we know, the modern chatbot approach just doesn't
go all the way. You know, we're starting to see some people, you know, chatbots are dumb in a lot of
ways. Maybe the next level of scale and the next level of scale after that, the bigger and bigger
ones, maybe they're not that much smarter. Maybe we have five whole years in which chatbots plateau until the researchers come up with a new insight. Maybe even that one
doesn't go all the way, and then we have five more years after that until we have, you know,
a third new insight. And then we get, you know, 15 years before. Right. So it's, it's very hard
to say exactly. But, you know, my, my bet is that a child born today has a greater chance of
dying from AI than of graduating high school.
Wow. Well, um, people need to read the book. That's what I'll
say. I really encourage people. It's actually, it's a quick read. It doesn't take that long to
get the point. And, yeah, I really appreciate you joining us and helping us understand the dangers that
you see. Here's hoping I'm wrong. Yeah, let's hope. All right, guys, thank you so much for watching
today. We'll be back with the Friday show tomorrow, and we will see you then.
This is an iHeart podcast.