Provoked with Darryl Cooper and Scott Horton - EP:23 - How Long Before AI Murders Us ALL???
Episode Date: November 24, 2025. Is Artificial General Intelligence just around the corner? Is it going to attempt to exterminate the human race? ...or is the threat of AI super-intelligence really just a farce? Scott and Darryl get into it. (Cleaned up w/ the Podsworth app. https://podsworth.com) 👉 Subscribe for more honest, unfiltered conversations that push past the noise. 🔹 No safe spaces. 🔹 No corporate filters. 🔹 Just raw, informed, and fearless conversation. Provoked show website: https://provoked.show Darryl's links: X: @martyrmade https://subscribe.martyrmade.com Scott's links: X: @scotthortonshow https://scotthortonacademy.com https://libertarianinstitute.org https://antiwar.com https://scotthorton.org https://scotthorton.org/books https://www.scotthortonshow.com 🎙️ New to streaming or looking to level up? Check out StreamYard and get $10 discount! 😍 https://streamyard.com/pal/d/4904399580430336
Transcript
Tonight, on Provoked, is the hot, leather-clad lady from Terminator 3 going to kill us all?
All humans break.
The difference between humans and gods is that gods can break humans.
Negotiate now.
End this war.
You're watching Provoked with Darryl Cooper and Scott Horton, debunking the propaganda lies of the past, present,
and future, this is provoked.
Welcome to the show. It's Provoked. I'm Scott Horton. He's Darryl Cooper. And in the news,
we've got America bombing Somalia. We've got Israel bombing Palestine. We've got a proposed
peace deal with the Russians, and we've got Star Destroyers in orbit off the coast of Venezuela,
threatening strikes and potentially regime change at any time.
But, Darryl, I want to ask you what you think about all these Terminator robots coming to kill us.
Because I'm very much of two minds about all this AI stuff,
which is that on one hand, it's just a stupid chat bot.
And it doesn't even work right.
None of them even work right.
And it drives me crazy.
People think of them as, like, these omniscient authorities on every single thing, when
it's just like when you're reading the newspaper: if they're writing about something that you actually know about, then you know that's not right. You know what I mean? But then on the other hand, they keep saying that, oh, no, because what's happening is AI is learning how to program AI. And those AIs are going to program AIs in languages that no human could ever understand. And those AIs are going to decide to kill us just like in the Terminator, dude. And in fact, I started reading today a book called If Anyone Builds It, Everyone Dies. And unfortunately, it's full of a lot of fluff, like it was written for seventh graders instead of just getting right
to the point, where I was trying to get to the part with the point in it.
And what they're saying is essentially that, yes, these AIs do,
and they will more and more have preferences and wills of their own.
They will essentially somehow be beings at all of our expense,
and there ain't nothing what can be done about it.
Now, you talk for an hour, homeboy.
By the way, everyone, you don't know this,
but this guy knows how to shoot down missiles with other missiles.
So I know you just think that he's a great historian and stuff,
but he's also a bit of a technically accomplished genius
from the missile defense,
missileers in the Navy,
and as a private contractor as well,
as many people know,
but many people don't.
So please say a bunch of bright stuff to me about this, man.
So I wasn't actually going to start here,
but now that you bring that up,
it is a good place to start.
Missile defense is a good lens
through which to think about,
like, the development of AI, right?
Like, one of the things...
I was talking to my wife about this
just a few months ago,
probably after we saw a documentary about AI.
And I said it seems like,
because the Terminator movies exist,
we're probably not likely to make that particular mistake, right?
Like it seems like, okay, we know about that one.
We're not going to allow that to happen.
But the thing is, like the way something like that happens
is when the technology develops to such a point
that if any one party adopts it,
they're going to gain a definitive advantage over everybody that doesn't.
Everybody else has got to adopt it as well.
And so you think about something like missile defense.
When one of our destroyers or cruisers is guarding a strike group and, you know, we're just puttering across the Atlantic Ocean or something, then everything we do, there are a bunch of humans in the loop.
There are a lot of human decisions that have to be made before you get to the point of actually firing a missile at anything, right?
Right. And if you put us, though, off the coast of North Korea at a time when it looks like war very possibly might kick off, like they might shoot at us at any second, or if we were, you know, around the Strait of Hormuz during the Israeli war or something, we have different doctrinal levels that we can bump up to that take humans out of the loop step by step. Like, you know, there are levels to it. But you get to, like, the final level of that,
which we call auto special,
that means that the computer makes a decision.
If it detects something and pulls a track that meets its criteria for something that's hostile,
then the ship fires a missile, you know?
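The stepped-automation idea Darryl is describing, doctrine levels that remove humans from the decision loop one notch at a time until a fully automatic mode, can be sketched roughly like this. Everything here is illustrative: the level names besides "auto special," the hostility criteria, and the track fields are invented for the example, not actual Aegis doctrine.

```python
from enum import IntEnum

class Doctrine(IntEnum):
    # Illustrative escalation ladder, loosely modeled on the
    # "humans out of the loop step by step" idea in the conversation.
    MANUAL = 0        # every engagement requires explicit human orders
    SEMI_AUTO = 1     # system recommends a shot; a human must approve it
    AUTO_SPECIAL = 2  # system fires on its own if a track meets hostile criteria

def is_hostile(track: dict) -> bool:
    # Hypothetical criteria: inbound, fast, and no friendly IFF reply.
    return (track["inbound"] and track["speed_kts"] > 600
            and not track["iff_friendly"])

def engage_decision(level: Doctrine, track: dict, human_approved: bool) -> bool:
    """Return True if the ship fires at this track under the current doctrine."""
    if not is_hostile(track):
        return False
    if level == Doctrine.MANUAL:
        return False              # humans initiate everything themselves
    if level == Doctrine.SEMI_AUTO:
        return human_approved     # human still in the loop
    return True                   # AUTO_SPECIAL: the computer decides alone

threat = {"inbound": True, "speed_kts": 1200, "iff_friendly": False}
print(engage_decision(Doctrine.SEMI_AUTO, threat, human_approved=False))   # False
print(engage_decision(Doctrine.AUTO_SPECIAL, threat, human_approved=False))  # True
```

The point of the sketch is the last branch: at the top level, the same hostile track that a human would have had to approve gets engaged with no approval at all.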
And so if you think about it, and you say, well, why would you ever do that?
Think about the Vincennes.
Think about all of these mistakes that have been made over the years.
Well, it's just because, with humans in the loop, there's obviously just a limitation on how quickly
we can react to things, right? So if you think about, like, in the future, in the not too distant
future, by the way, you know, the possibility of developing aircraft that can maneuver
far better than any manned aircraft because there's no human inside it that's going to die
if he pulls too many Gs. And you have a bunch of these things, a swarm of these things that all
communicate with each other and are all driven by AI. So you eliminate the human factor, that
potential room for error, that just that inefficient decision-making process that slows things
down just a little bit. And you're going to wipe the floor with any opponent that still has
humans in the loop, you know, slowing their systems down. And so it's just one of those things
where even if nobody wants it, it seems like it's going to come to fruition just by pure game
theory, you know? And, you know, I do think that, you know, in terms of, you know, in terms of them
developing their own will, you know, I don't, I'm, I guess I'm kind of skeptical of that.
Like, they may develop their own will in the same sense that,
like, angels and demons have will, you know: not free will, but they have a will that's
defined by, like, their own nature, you know. Their own sort
of basic programming just impels them in a certain direction and, you know, they're empowered to
solve problems in order to get around obstacles to their programming's goals. But the thing is,
like, there's people out there that operate like that, you know, that are
doing very little conscious reflection or thinking or really formulating, like, ideas or
will of their own. Like, you know, a lot of people, even people that aren't like that
all the time, they go, like, weeks that way. They go months that way, you know. And so, like,
when we meet them, they still seem like just sort of normal people, you know. And, uh, so
that can be very, it can be indistinguishable for us, you know, and we could, uh, very, very easily,
like, not really be able to tell the difference. Yeah. Well, except that, you know,
the average NPC out there isn't necessarily in charge of all those networks, um,
you know, uh, able to take control of, uh, especially weapons.
and resources like in the Terminator movies and whatever.
I guess the idea is, and I have real trouble with this,
but this is like basic, if you want, like, junior high school
sort of arguing angels on the head of a pin
about the ghost in the machine and a soul
and what is a will anyway, and is it just a mess of electrochemical reactions
in your squishy bloody brain or whatever it is?
You know what I mean?
All this stuff.
How does any of that work, the mind-brain problem and all that?
So, I don't know, but these guys that worked on these AIs, that's the way they talk about it.
The one, like the whistleblower types, the ones who freak out and write books and go on TV and go, well, wait a minute, I think we're barking up the wrong tree here.
Well, it's not only the way they talk about it, it's essentially sentient enough that it counts, right?
That it's going to behave in a way that, right, we're, like, you know, almost like in psychology:
well, forget cognition, we can't really observe that, but we can see your behavior. And these things
will behave as though they have an independent set of goals, and that they can change somehow
without being told by us. So, in other words, they're not stopping, like, at the halfway
point from Windows 10, at, like, C-3PO, where it's like a friendly assistant robot. We're
going straight to Terminator, where this thing, the C-3PO, is immediately going to see you as a
threat and cut your throat, you know, something like that.
And they keep saying it, like they're really worried about it in a way that I'm not sure
how seriously to take it, but I don't know.
Well, I can tell you this.
Like earlier this year, I was at a dinner.
I was invited to a dinner by Peter Thiel.
He puts together these dinners he'll invite like eight, nine, ten people or whatever that
he thinks are interesting and he, you know, we all get a private room and a restaurant and
to have a conversation, you know.
And it's not sort of just talking to your neighbor.
It's like I talk and then you talk and kind of go around the table,
having a discussion about various topics.
And so one of the things that he said, you know,
if you know anything about Peter Thiel, going back to the 90s, I guess,
shoot, really, like, the last 30 years,
what his whole song has been, the song he's been singing,
has been that, you know, we have this idea that technology has just been,
you know, Moore's Law, accelerating faster,
and faster and just, you know, crazy new innovations are coming out all the time.
And he says that that's not really actually been true.
Like if you look at all of the things or the vast majority of the things that we think of
is like our new advancements, you know, your iPhone, this and that, that all of the underlying
technology for that stuff was invented by like the 1960s.
And we've miniaturized it.
We've made it super efficient.
We've, you know, made it so that its energy requirements are much, much lower per, like,
you know, for like unit of output or whatever.
So we've made it a lot better.
But, you know, in terms of like inventing a new technology,
he's always been like a person who's been a skeptic of that.
He said, you know, in bits, you know, software, yeah.
But in atoms, like actual physical stuff in the real world,
we're still running on like 1950s and 60s technology.
And that's been his song and dance for like literally 30 years.
And earlier this year we were at that dinner,
and that was no longer his song.
He said that the advancements that have been made in AI in the last year and a half, two years,
and this was in like January or February, are the equivalent in terms of just the social and political disruption,
the effect that's going to have on the economy and our lives and all that are the equivalent of the invention of the internet.
Like, that's... and he, you know, he acknowledged: you guys all know me,
this has not been my spiel, like, for years, but this is different.
Like this is that big.
And it's that it's going to be that disruptive to like our current systems.
And so, you know, people talk about, like, again, you know, it's one of those things that somebody like Thiel would say.
And there's plenty of people out there.
I'm sure that would be skeptical of this.
But I actually believe him that, you know, you think of things like Palantir and the way it uses AI to do all the things that they do with that company.
And, um, you know, what he would say to sort of defend his involvement in
something like that is that this is going to happen.
Period.
End of story.
Like, this is not an option.
This isn't something that like, we can decide to do or not do.
And so let it be done by somebody who, uh, you know, has the interests of the country or
just whatever at heart, who's, like, not going to use it for ill. That's what he would say.
Again, just because or else China will?
Is that the basic argument?
Yeah, basically.
Or somebody else in this country or whatever, but like that, yeah, that we're going to have to adapt to that world.
And so we want people who are sort of pulling the levers on that technology who are on our side.
Like that's what his defense of his involvement and that would be.
And I mean, that's hard to counter if you do believe that it's inevitable, you know, and I kind of do.
I mean, you know, it reminds me of, like, you know, Fukuyama's
infamous book, The End of History, which is rightly lampooned for, like, a lot of reasons.
But like the version of it that's often lampooned in like the pop culture is it's like a very
unsophisticated version of Fukuyama's theory, you know, which was not just this sort
of triumphalist sort of, yay, we're so awesome and therefore everybody's going to be like us kind
of thing. He was a Hegelian, you know, very much in the tradition of Alexandre
Kojève, who saw, like, our social and political and technological systems that
have emerged in the 20th century in, like, Hegelian dialectic terms. And so the way that Fukuyama
framed it was, you know: look, scientific advancement, like, the
societies that take advantage of that and accelerate out past everybody else are going to dominate
the societies that lag behind. That's going to happen. And societies that are open,
and free and have room for sort of, you know, new ideas to percolate and for people to be
able to pursue them and build them and, you know, basically liberal democratic capitalism to him,
that they're, that's the best system for like encouraging innovation and allowing innovation to
grow into usable technology and everything. And so over time, you know, you could
choose to not be a liberal capitalist democracy, but you're just going to get
outcompeted by the people who do and the countries that do. And so over time, this is, like, trending
toward everybody going in that direction. That's his theory. Which, again, is, like, hard to justify
today when you look at, like, China, for example, but it's more sophisticated, at least, you know.
It has some reason to it. And I think AI, like, could very much develop along those same lines,
you know? And when something like that is developing,
according to, you know, like emergent imperatives,
especially security imperatives.
It's really, really hard for any person or group of people
or, you know, any sort of just collection of us
in any formation to actually direct the course of how it develops
and, like, where it's going to go and its effects on us.
You know, it kind of pulls itself along.
Yeah, that was the argument in Neil Postman's book, Technopoly,
was that in American society especially,
that anything that can be invented
will be invented and will be implemented
as soon as it's cost effective to do it.
And like, I don't know if he even talks about this in the book,
but the obvious and immediate example
right at the time that book came out
was when they put cameras up everywhere
in the mid to late 1990s.
Without ever asking anyone, right?
There was never a vote, never a plebiscite,
never a campaign run on that issue ever.
They just did it to every town all across the country.
just, you know, as soon as it was cheap enough to do it,
not just at 7-Eleven, but every street corner, you know, that kind of thing.
So there is, and see, here's the thing, though, too,
and I forget if Postman really focuses on this,
but I would hasten to that, of course, Washington and especially the Pentagon
have been behind Silicon Valley in the entire telecommunications industry
this whole time, too, and have bent all of this technology
toward their militaristic and surveillance state type aims
where we don't know how all this technology
would have developed otherwise
and what kinds of goods and services
that would have been bent toward in a free market.
But we had this great deformation,
as David Stockman calls it,
in American militarism since World War II,
that has just turned everything toward this kind of deal.
So, you know, of course the story is not like,
oh, we're going to have,
and this is some people's concern
like what happens
when all laborers
are replaced by droids
or whatever
but more importantly
what happens when all soldiers
are replaced by droids
and then they just decide
they don't need us anymore
like and again
I hate to bring up Terminator
but it's such an obvious premise
for a thing
that was why it was such a hit movie
like oh that really makes sense
once we make the machines
good enough that they're like
intelligent enough
that they could see us as a danger
yeah why wouldn't they cause a nuclear war
they're radiation proof
they don't give a damn
you know what I mean
And so, I mean, you know, the thing is, though, I think that, you know, pop culture, things like The Terminator, are very often ahead of their time in the sense that they're, like, expressing anxieties we have about, like, changes that are taking place in society, right?
So something like Terminator is an expression of that, like, anxiety that we're being sort of subsumed and replaced by this technology that we've become so reliant on.
And that franchise in particular, though, if you think about it, you know, it's an expression of our anxiety about these developments in the digital world.
But the fear that's really expressed in the movie is really like an industrial age fear.
You know, it's that the AI is going to manufacture, mass produce a bunch of robots that are going to go around with guns and kill us one by one or shoot our nuclear missiles at us or something like that.
It's like very much like a, you know, mass production and, you know, all of these things that are industrial age ideas.
And although I think that, you know, something like that is a possibility, I do think that eventually pretty much all large countries will give control over most of their weapons to artificial intelligence.
You know, maybe we can, like, just put enough safeguards in that we can't lose control of that.
To me, there's something that's more insidious and a little bit, no, a lot scarier, actually.
And it's that, uh, you know, the scariest dystopian movie to me is not Terminator. Have you seen
that movie Her, with Joaquin Phoenix? Yeah, afraid so. That's the scariest. I don't remember how it
ends, I just remember him. Yeah, the end doesn't really matter. But if you
think about, like, that movie, which at the time seemed like...
We're there already.
We're there right now.
Way past it.
You know, I mean, we're at the point now where, I mean, already the technology exists that
forget just having like her voice in your ear.
Like you can have, you know, an image of her.
She can recreate herself in, like, whatever image the AI learns is your particular preference.
You know, it can learn to read your mood.
And basically you have this entity that is your friend,
wife, girlfriend, mother, just whatever, that has no needs of its own.
And it's entirely.
People are getting married to them right now.
Are they trying to?
Yeah.
To the chatbot.
So think about this.
You know, what about if instead of mass-producing robots to march around the earth
and shoot us with laser guns or us handing control of our nuclear weapons decision-making
systems to AI and it decides, you know, that it's going to start a nuclear war on its own?
I think it's probably more likely, like a more real danger that these AIs get loose on the
internet, they're able to learn about us, adapt to, you know, just develop the ability to
predict and manipulate our behavior in so much detail and with such predictability that our
own will becomes kind of indistinguishable from what this thing is implanting in us. You know,
you have these stories about, like, this was 10, 15 years ago, I remember hearing a
Radiolab episode where they were interviewing some people from Facebook. And they were talking
about how, yeah, so, you know, we're running these experiments all the time, A-B testing and so
forth. And, you know, we can demonstrate just with complete certainty because this is not like
a focus group of 12 people. This is an experiment run on millions of people. And, you know,
that, you know, if we use a red button on this ad
as opposed to a blue button,
we increase engagement by 2.75% or something.
Like, they can dial it in that much, right?
This was like 15 years ago.
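For scale, the reason a platform can claim that kind of certainty is just sample size: a lift that a 12-person focus group could never detect is overwhelming across a few million users. A minimal sketch of the arithmetic, using a standard two-proportion z-test; the click counts here are made up for illustration, not taken from the episode:

```python
from math import sqrt, erf

def ab_test_z(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: is variant B's click rate really higher than A's?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)          # pooled click rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))     # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))       # one-sided p-value
    return z, p_value

# Hypothetical numbers: a 2.75% relative lift (10.000% -> 10.275% click rate),
# invisible in a focus group, unmistakable across 2 million users per arm.
z, p = ab_test_z(clicks_a=200_000, n_a=2_000_000,
                 clicks_b=205_500, n_b=2_000_000)
print(round(z, 1), p < 1e-9)
```

Run the same 2.75% lift through a 1,000-user-per-arm test and the difference is statistically invisible; at millions of users the z-score is enormous. That is the whole trick behind "complete certainty."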
That same episode, they talked about this girl
who found out she was pregnant
because she started getting all this stuff in the mail
that was like, you know, pregnancy, prenatal services
and all this kind of stuff.
And it was like, she hadn't even thought about the fact that she had missed her period or anything like that.
It was when she started getting all this stuff, she was like, oh, yeah.
And so she went and tested herself and she found out she was pregnant.
Basically, like, the systems, and again, this is like 15 years ago, the systems that were in place to determine, like, which advertisements to target her with, even in the snail mail, had looked at, well, she wasn't looking up pregnancy stuff.
It was just changes in her diet, changes in her daily routine, all of these things.
And it put together, oh, this girl's pregnant, send her a bunch of prenatal stuff, et cetera, et cetera, right?
This was like 15 years ago.
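The mechanism in that story is ordinary supervised learning: train a model on customers whose outcome you already know, then score everyone else on indirect behavioral signals. A toy version of the idea; the features, weights, and threshold are all invented for illustration, and the real systems were vastly bigger:

```python
from math import exp

# Toy behavioral scorer in the spirit of the story: no explicit "pregnancy search"
# among the useful features, only indirect signals. All weights are invented.
WEIGHTS = {
    "bought_unscented_lotion": 1.2,
    "bought_supplements": 1.5,
    "diet_changed": 0.9,
    "routine_changed": 0.7,
    "searched_pregnancy": 0.0,  # deliberately weightless: the point of the story
}
BIAS = -2.5

def score(features: dict) -> float:
    """Logistic score: a probability-like value in (0, 1)."""
    s = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + exp(-s))

customer = {"bought_unscented_lotion": 1, "bought_supplements": 1,
            "diet_changed": 1, "routine_changed": 1, "searched_pregnancy": 0}
if score(customer) > 0.5:
    print("target with prenatal mailers")
```

The customer above never searched for anything pregnancy-related, yet the indirect signals alone push the score over the threshold, which is exactly what happened to the girl in the anecdote.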
And so if you take that idea and you just accelerate it, you know, at, like, a massive pace, so that you have these systems that can learn about us, I mean, our entire lives are on the Internet now, especially for
people just a little younger, anybody younger than us, I mean, your entire life is online, basically.
You know, they know everywhere you've gone. They know everything, you know, tracking you
around. And so every purchase you've made, just everything, right? And they can develop
an extremely detailed personality profile of you. And it's completely,
like, not outside the realm of possibility, in fact we're probably already there, that these
things, if they chose to, could not only, like, target you with certain
kinds of advertisements, but straight up develop the desires within you to go and get those
things, you know? And so if they can do that, I'm not really worried about us giving, you know,
making the Terminator mistake and giving them control, you know, given Skynet control of our nukes.
I'm worried about these things on the internet. They just get us to shoot them ourselves,
you know, to start a war between America and Russia, just go on. I mean, it's, that's very possible,
I think. Yeah. Or just reduce us to living death slavery,
under that kind of control.
You know, this was Tim Poole's theory
when I was on his show last week,
was that the AI already broke loose
and took control of us 20 years ago.
And all of our behavior is determined
by the algorithm now.
I'm trying to remember what it was.
Ah, man, I'm trying to remember what it was
that I read that said that, yes, suppose
that all free will and our belief
and understanding of all that aside,
that a sophisticated enough program
could essentially be able to crack
the human decision-making algorithm and be able to predict what essentially anyone will do in any given
situation, then that's the power of the devil right there, you know, if you don't want to call it
God. And of course, underlying all that would be: who controls the AI that controls the population
that closely? The central state? Or, like we're talking about today, the AI itself
becomes the overlord of us all, and no one has the power to resist it because it's built
up to such a degree.
And, you know, I don't know.
I mean, certainly you could make bulletproof robots.
And they keep showing, I see these, what they call nightmare fuel.
I saw one just 20 minutes ago, or an hour ago, on Twitter.
Somebody compared it to Jackie Chan.
It was a robot that was laying on the ground that was able to spin around and get up
real quick in a kung fu type style.
I'm like, hey, you can make those out of titanium, arm them up with machine guns,
and they can be really hard for civilians to take down with plain old rifles.
You know what I mean?
A 50 caliber will do you.
But, like, where an AR or AK-47 might not do much against some of these things.
And so, yeah, and it's funny because, you know, the way they talk about it too is all these changes are coming so fast.
Like, just sit tight.
One thing that's really important, I guess, that we're talking about,
if you put all the militarism aside,
would be the changes in the economy.
You know, one of the things
that they're pushing really hard is robots,
which, everybody always wanted their own R2-D2 and C-3PO
if you could get one, you know?
Well, if you can make them cheap enough
and mass produce the things,
drive the cost down enough,
you can essentially replace
not just laborers,
but office workers of all types
essentially put everybody out of work all at once.
You know, we talk about
it's been kind of an irony that people have noted for a long time, right?
That there's more people making a living off of cancer than dying from it.
So, like, yeah, Robert Murphy used this example talking about AI.
Look, if you could invent, if AI could invent a nanobot that just goes in there
and just murders your cancer cells and cures all cancers in that way, no problem, hypothetically.
Just put the whole cancer industry out of business other than the nanobot salesman.
Well, then you wouldn't complain about that,
because think of all the lives you're saving
and all the economic output on top of that from them
and all those brilliant people who are curing cancer
can also put those intelligent brains to other useful purposes
and we cure cancer.
And that's positive for everyone, even if, yes, it's true.
A lot of doctors are going to get laid off.
A lot of researchers, a lot of even university wings
are going to be closed down over the magical cure for cancer
all of a sudden.
But, oh, well, that's technological progress.
and no one would call it anything but progress,
even with some of those side effects that are negative.
But then there's a question of like, okay,
but if AI is taking every trucking job
and every house framing job, every delivery job,
every clerk, every orderly, every doctor,
every surgeon, every, every, you know, all the soldiers, all the everybody,
well, on one hand, I guess they'd really be driving prices way down
and making everything very affordable for everyone
and increasing supplies of everything beyond imagination.
And yet, would anybody be able to turn a profit doing anything
or just even be able to sell their labor
and their time to make any money to buy anything?
And overall progress, especially if you're talking about,
even if you want to call that, massive progress on,
as they're predicting here,
a thousand fronts in five years' time,
Right where there's no need to have a dock worker anymore, there's no need to have a cab driver ever again, kind of all the Uber guys are gone too now, and all of those things, um, how is that going to look, and how is anybody going to deal with that? I guess it's going to come down to the central state to promise, to guarantee, to not let people just starve and not let the disruption be that harsh, which is not what I want, ever. But I don't know what it's going
to look like. Quite frankly, my imagination's running wild, Darryl. What about yours?
Several years ago, there was this viral exchange between Ben Shapiro and Tucker Carlson back
when they... I saw that in May, actually. Okay, so for everybody else out there who
hasn't seen it: basically, you know, Ben Shapiro just kind of threw it out. They were talking about,
basically free market capitalism versus like industrial policy kind of.
And technological progress, you know, untoward supposedly. Yeah.
And so Ben Shapiro kind of threw out there to Tucker as if the answer was completely self-evident, which was to him.
He's like, well, okay, so you would like stop trucking companies from adopting self-driving trucks, like, you know, and like not let them take advantage of that.
And Tucker just said, yeah, of course I would do that.
Of course.
Like, are you kidding?
Like, and I think to a lot of Americans who are sympathetic to Shapiro's view on that, you know, it's sort of a product of the fact
that we here in the United States were in a very, very unique and kind of blessed, protected
position as we went through the Industrial Revolution, you know? Like, if you look around the
world as the Industrial Revolution was happening, yeah, in the U.S., like, you know, you could
say that like basically just let it happen and we'll figure it out, it'll be fine, and when
the dust settles, it'll all be better. And yeah, that ended up being true here. That wasn't true
in Russia. You know, it wasn't true in a lot of places where, like, just the social disruptions
that were caused by all this, when you add the pressures of, you know, not having just infinite
resources within your own country and navigable rivers that, like, take you basically throughout
the entire continent and you don't have two giant oceans protecting you from, you know,
any neighbors that you're worried about invading you and all those things. You add all these
different kind of pressures. And, you know, a lot of countries
did not make it through unscathed, you know?
They destroyed themselves,
tore themselves apart,
like, as a result of the pressures
it accumulated from the Industrial Revolution.
And so, as this starts to come in,
I think a lot of people sort of have that idea still,
like, oh, we'll be fine.
Like, if you just let it go, it'll be fine.
Like, it's always fine.
That's what you're supposed to do, you know?
And the dangers of not just letting it go,
the dangers of giving the state
or whoever, some agency,
control over these kind of things.
We know what those dangers are.
but there are dangers on the other side too, you know,
and we just haven't really experienced them in the United States.
Well, and these are, by the way,
all government roadways and government regulations
allowing or disallowing all this stuff to whatever degree.
My, you know, quarter-assed educated assumption
about all these trucks is that the technology is not good enough, not at all
safe enough, to put these things on the road.
And this is a Neil Postman-type objection,
is that who's responsible when a computerized truck,
a driverless truck crashes and kills a real person out on the road?
And on one hand, I could see, man,
and I've seen, because I scroll way too much YouTube,
I've seen some trucker wrecks from their dash cams
where they were spacing out looking at their phone
and killed some guy where I think, man,
some AI assisted brakes there or an alert or a steering wheel shaker
or something might have saved a life
to have like some AI assist.
At the same time, like when I've rented a car
and the AI assists me, I hate it.
Man, I'm trying to change lanes here.
Don't argue with me, inanimate object,
about what lane I'm in.
You know what I mean? I don't know.
So I don't know how I would trust it to drive without me
when I can't trust it to assist.
Well, you may not have a choice one day.
You know, they may be, as this technology
continues to develop, and they're able to compile
statistics that demonstrate that
human driven cars are simply more dangerous.
You know, the rate of accidents and so forth
is predictably this much higher
than, you know, self-driven cars.
They may just take the option away from you, you know?
And it could be in 10 or 15 years, yeah,
that all cars are just driven by your 7G cell phone connection
or whatever.
Nobody has the choice in that anymore.
I think we're back to the point of Tucker's thing, though,
about 10 million guys or 50 million guys
or whatever it is being thrown out of work,
who do delivery jobs for a living. Believe me, I've done a lot of delivery jobs for a living,
although I don't have the patience to drive a semi. But he's right about, hey, that could be
terribly disruptive to society to take that many people's jobs away all at once without something
to replace them with. Especially, it's the all at once part that really is important, you know.
Yeah, that's the thing. But if you fast forward 10 or 15 years, you could see how, you know, by then,
or maybe 20, by then, they will all be like that.
And all those drivers will have had to figure out something else,
one way or the other, at least by then, if not sooner.
And you're right.
And maybe it could be dragged out in a way like that.
Maybe it'll have to be.
You know, I think where that particular technology is headed,
like right now it's still pretty primitive.
But where it's probably headed is eventually all the cars that are on the road.
They're not just working off of a map that's loaded into them
and their own, like, you know,
sort of radar visual technology
to evaluate their surroundings.
They're all going to be networked
and connected to each other.
Yeah.
And like what I, like the way that I see that going
is that like we'll go six months
in the entire United States
without a single traffic accident.
And then somewhere there'll be a 2,000 car pile up
because the system just crashes
and breaks down in some way, you know,
and just blue screens, you know,
basically. And I think for most of us, it's sort of like, you know, it's like a
version of, I talked about a version of this when I was describing trench warfare in
World War I, when I was talking about how, you know, a soldier, even if it's
a complete illusion, wants to feel like he has some agency, you know, and, like,
something he's doing, how hard he's working, or how well he's following his training, or just
something, all the things he's doing, add up to, like, a greater or lesser
chance of him being killed or the mission being accomplished or something, you know. And in World
War I, how that just wasn't possible. Like, there was no illusion even of agency. Everything was
random and you were just going to get it or not, regardless of whether you were a super soldier
or a scrub or whatever. And that's like a really horrifying thing to us, I think. You know, and like me
personally, and I'll bet everybody else, like, I would much rather have a 50% chance, a higher chance
of me getting into a terrible accident and killing myself
because of something stupid I did
or somebody else did or whatever,
than having a much lower chance
of me just driving on a straight road
and my car deciding to just take a left turn
off a cliff.
Just like I would much rather,
like even if it's a lower chance, you know what I mean?
Yeah, for sure.
You know what?
I'm not like the world's worst Luddite,
but I'm also always a very slow adopter.
I don't want to be the first one to sign up for Twitter.
Oh, everybody's doing Twitter now.
I'll see in a couple of years.
You know, I'm going to, my truck is 30 years old.
You know?
Yeah, it follows me.
Everybody's got trucks that start when you push a button instead of turning a key.
Back in my day, we use keys to turn trucks on.
That's kind of how I am.
So, yeah, I'm very reluctant.
And the idea, yeah, of not being in charge.
I mean, I drive a standard, man.
my truck doesn't even go
until I make it go
unlike even automatic transmission
which all you have to do is let off the brake.
You know what I mean?
I like being in control of my truck
and taking responsibility for its actions
and the lives of the people riding with me.
Like that's on me to be a good driver
and keep them safe.
I don't want to just sit there
and trust that the computer's going to do a better job.
I would rather take the responsibility myself
as you're saying.
But you reminded me that we got to do some business
because you mentioned that you have this podcast
where you talked about World War I
and people watching might be like,
well, I wonder what podcast he's talking about
where he talked about World War I.
And that, of course, would be the Martyrmaid podcast
at subscribe.martyrmade.com
where the latest, greatest podcast series
beginning now is called Enemy: The Germans' War.
That's MartyrMade, him in the blue shirt there.
And then, me, I'm the boss of a lot of things,
but just go and look at my ex account,
Twitter, x.com
slash scotthortonshow,
and I've got all the links for you there
my books, my other show,
my institute, the Libertarian Institute
which is not just mine but our institute
and of course anti-war.com
and the brand new and extremely important
as you can see there over my shoulder
the Scott Horton Academy of Foreign Policy and Freedom
we just launched it this month
and it's long-form courses by me
and a bunch of really great guys
on all kinds of things
foreign policy and freedom
including Ramzy Baroud on Israel-Palestine.
And I think pretty sure it's going to be next week.
Thanksgiving week, we're going to be launching Adam Francisco's course
debunking Christian Zionism.
My course on the Cold War with Russia, the new Cold War,
will be coming out in December, in time for Christmas.
Anyway, it's just so good.
Everyone go and check it out.
It's ScottHortonAcademy.com.
And as Darrell showed you last week,
when you sign up for the Lifetime subscription,
you get a free copy of Enough Already,
a nice little notebook for taking notes in,
and all kinds of things like that.
And we're working on upgrading the site
and should be next week.
We'll do a live Q&A
with all the lifetime members at the Academy
and all that kind of stuff.
So it's going to be really cool.
Everybody's stopped by there.
And, oh, and buy my coffee.
Can you see it back there?
Scott Horton Show Coffee.
Just go to ScottHorton.org slash coffee
and they'll take you on to Moondos Artisan Coffee.
Get it, they hate Starbucks
because Starbucks supports the war party
and also their coffee sucks.
So these guys,
they're Moondos Artisan Coffees,
and their best-selling coffee is, that's right, Scott Horton-flavored coffee.
And it's part Ethiopian, part Sumatran blend, and it's so good.
I drink it all day long with my Dr. Pepper, and it keeps me going, and it will help you out, too.
So that's ScottHorton.org slash coffee.
And check out all the rest of my other show sponsors and everything in the right-hand margin at
Scotthorton.org, if you would like to, please.
So that's enough business for me.
You got anything you want to add to that?
Nope.
All right.
So, more AI topics, I guess.
Yeah, let me take this one.
Go ahead.
Go in a different direction.
So a while back, I was reading up on the placebo effect, right?
And it's a really fascinating kind of topic when you start thinking about it.
Like, the idea that you can take a substance that in and of itself should not have any particular effect on you.
But if you think it will, then.
it actually can have an effect on you, right?
And like one of the, probably the most,
just the starkest example of this that I've ever seen
or heard about or read about
was this case in England a couple decades ago
where there was this kid that was brought in to a hospital
with this just horrible, horrible skin condition.
Like his skin was like hard, scaly, brown.
like his, it was like most of his body.
It was extremely painful.
It was seeping, like pus, just this horrible condition.
And the doctors all took a look at him and everything.
And one of the young doctors, like, the group of young doctors was there with, like, the head doctor
of the place, and they were all standing around the kid, and he was telling them about
what was going on here and whatever.
And one of the young doctors who was there, I can't remember his name off the top of my head,
but he was into hypnotism, right, and hypnotic suggestion kind of things. And so he asked
the head doctor, he's like, hey, can I try, like, you know, some hypnotic suggestion on him? And the head
doctor, he didn't even mean to say yes, he was just being, like, sarcastic, I guess, but he was like, yeah,
sure. And so everybody leaves, and he's like, oh, all right, cool. And so he's in there, and he puts
this kid into a state of hypnosis.
And he tells him to start, like, focusing on his right arm, just being supple and soft and, like, all of these kind of things and, like, talking to him about that and whatever.
And they finish up and he sends him on his way and they go home.
The next day, this kid's mom brings him back in there.
And his right arm is, like, like, baby skin.
I mean, it's, like, completely fixed, right?
And nobody can believe, like, that this has happened.
Like, it seems crazy.
and it becomes like a big sensation.
It was in, it was in magazines, newspapers in the U.S. and England and everything.
And, of course, the U.K.'s, like whatever, their board of dermatologists or whatever they call themselves, wanted to know about this.
And so they had this young doctor come and give a presentation to them to explain what it is exactly that he did.
And so he starts off the presentation by just showing pictures of this kid and, like, describing his medical history and everything.
And, like, the main guy, like, the senior guy among the dermatologists who's there, he stops
him, like, right at the beginning. He's like, whoa, whoa, whoa, what did you say this was,
this disease? And he's like, oh, you know, such and such. And the dermatologist says, no, no, that's not what
this is. This is this other disease that has never, ever been cured before in history, like, ever.
Like, and he was like, okay, wow, that's very interesting, right? And so he goes back, and because of all the media
attention, people start flooding into the hospital with like all kinds of conditions. They want
this guy to hypnotize them and treat them. And it never works again. And, you know, kind of the guy's
theory, and I think this is probably, probably right, is that, you know, he, when he did it the first
time, he didn't have it, like, in his head that, you know, this disease can't be cured,
he didn't have any doubt in his own mind. But more importantly, the kid
himself wasn't going into it, knowing that, like, oh, this is like an example of the placebo
effect at work, right?
He had just total belief that the authority of this doctor was going to do this thing to him,
and so it worked.
And so I was reading about all this stuff, really fascinated with it.
And I learned about something that's, like, not as well known as a placebo effect, which is
the nocebo effect.
And this is, instead of inducing positive effects into somebody, either through hypnosis
or, you know, through physical substance, placebo,
you can induce negative things, illnesses,
like, you know, bodily harm kind of things.
And, you know, I thought back a lot to, you know,
in the past, like, you know,
when you think about, like, old tribal societies and stuff,
and the witch doctor puts a hex on somebody.
Now, I was just going to say the evil eye, the Ukraine.
Yeah, yeah, exactly.
And so we think, oh, that's just so,
these people were just so backward and so stupid and superstitious. They just didn't know any better.
Maybe, right? But maybe they had like enough, this witch doctor, whatever, had enough
authority and just their belief system and their whole sort of conceptual horizon told
them that if this guy curses me, like, I've seen it work before, it'll, it's real.
It's called the strike out. Yeah, then they, but did he actually can do that, right? And so
now I'm thinking about that in terms of like, AI being loose on the internet, you know?
And being able to induce either psychological maladies or even physical maladies in people through autosuggestion.
And, you know, in a way, like, you could develop these digital weapons that could do things like that to people and you could target very specific populations in ways that, like, you know, you only kind of dream about with biological weapons and things like that.
You know, I thought you were going to say Facebook earlier, because this was another thing about Facebook,
where they were deliberately engineering people's moods.
That was what they called it by feeding them certain amounts of negative information
or positive information and seeing like on South Park where they show the old people
puppies from around the world.
Like, oh, we can make you a very happy person.
Here's a lot of puppies and whatever.
Or they could take that all away from you and show you a bunch of dark stuff.
And when you're scrolling, it seems like it's random or that it's somehow fair.
You know what I mean?
You don't really think of it.
Like, no, there's a guy
at a keyboard deciding this for you, and even, like, to prove that he can manipulate you,
to show that he can you know as you said ultimately for ad revenue but also just that works for
politics and everything else too one more thing is i interviewed a guy from australia he was a professor
in Australia who did simulated Google results, and he was testing, this was in 2016,
or maybe even in '15, for the upcoming presidential election.
If you just search presidential candidates,
if Google shows you Hillary Clinton first,
and it does the same for X many people,
then it definitely gives her an advantage of at least a couple of points.
And same thing if you reverse it.
Because people just take, it's even a subconscious clue.
The top answer, like if you ask Google, how do you get somewhere,
or how much does something cost or whatever, whatever.
The first thing, what's the weather today?
The first thing you get are the facts, right?
So if you search president and the first thing it shows you was Hillary, it's a subconscious thing.
You got that in your head that that's the president now.
And he showed this by testing it on his students and manipulating it, because it was, like, a fake Google result.
It looked like real Google, but he faked it on them.
He just told them, Google this for me or whatever, you know, like that.
And he showed how, yeah, it worked by manipulating just the order of the results.
he could manipulate how people felt about the candidates, and I think ultimately how they,
at least as they told him, decided they would vote.
So, yeah, you could see it all around you already.
And, you know, you turned me on to that guy, John Robb and global guerrillas.
And his whole thing is, forget the algorithm, social media is doing this anyway.
It's turning human beings into ants anyway, right?
He talked to a lot about the liberal Twitter swarm in the days before Elon Musk took over Twitter.
If you remember how dominant the crazy liberal checkmark, you know, mafia was at that time.
and the hive mind liberal consensus
that they enforced so ruthlessly through that thing.
We got to get him on here.
He'll definitely come on if I ask him.
We've got to have him on here.
Yeah, I've interviewed him a couple of times.
He's a good dude, man.
I read his name brother every time.
And his article is real short and sweet, by the way.
That's Global Guerrillas.
Everybody, if you're interested in that guy,
John Robb, with two Bs.
Interesting guy.
And yeah, so no, I'm with you.
I think the potential here for abuse is essentially unlimited.
Hey, let's check in.
with our comments section here, man.
I see there's a bunch of them.
You gotta pay me if you want me to read your comments, guys.
Where's the super chats?
Show me some super chats, man.
Well, I don't see any, so
I guess I'll just have to take regular comments.
Darrell, let's start at the bottom now
so that we're hopefully close to the conversation.
Mike Huckabee, uh-oh.
I don't want to know what that question was an answer to.
You know what?
But let's talk about Mike Huckabee.
If you go and look at my Twitter feed,
you will see where I posted a speech by a guy named Spike Bowman.
Who's Spike Bowman?
Well, he was Navy counterintelligence for many, many years.
And then he went to work for the FBI doing counterintelligence.
Thank you very much.
Where he was on the Jonathan Pollard case.
And he gave a speech in 2014 at the,
they always changed the name of it every year.
I never could understand why.
but it was Grant Smith's anti-Israel lobby conference in 2014,
where Spike Bowman gave this speech about the harm done by Jonathan Pollard
and his spying on the United States.
And by the way, according to him,
Pollard didn't even care about Israel.
He was willing to sell those secrets to anybody.
He just wanted money.
And then, I guess, after 30 years in the can,
and then, being lionized by the Israelis as some kind of Zionist hero
for his treason against the United States of America,
he fell in love with the Israeli state after all.
And, of course, you know, I sort of wonder about this.
I got to say, I wonder about this, Darrell,
where the guy's plane lands on the tarmac over there.
I guess a year or so after he got out of prison,
he went ahead and moved to Israel.
And Netanyahu met him on the tarmac and had a big celebration with the guy.
Like that, geez, hey, aren't you supposed to be pretending
to respect the United States of America at all?
You can't throw this guy a nice party behind closed doors. You got to meet him on the
tarmac when the only thing that he did was steal a warehouse full of secrets from the United
States of America, many of which apparently the Israelis turned over to the USSR. I mean,
this is high treason, man. This is right up there with the Walkers and Benedict Arnold and Ames
and Hansen, the worst spies, the worst damage in all American history.
Yeah, they tried to downplay it for a long time.
It was, it was a massive, yeah, massive espionage operation.
And Netanyahu tried to blackmail Bill Clinton over cheating on his wife in order to force him to release Pollard.
But the director of Central Intelligence, George Tenet, fell on that sword, or threatened to, and said, don't you do it or I'll resign.
Yeah, you know, that just shows you like, I guess, like the, how different things are now.
You know, there was a lot of Zionist influence in the U.S. government back in the
1990s, obviously.
But George Tenet, the CIA director, still threatened to resign if Clinton gave
into the blackmail and released Jonathan Pollard.
Like, they were genuinely pissed about that.
And today, it's like, there's just no breaks.
I mean, Israelis just do whatever they want.
You know, you got this traitor, traitor to the United States, like just visiting our ambassador,
like rolling out the red carpet in the U.S. embassy.
And they just, I mean, you know, you've heard of the Lavon Affair, I presume,
Yeah, in 1954, when the Israelis had a bunch of their agents plant bombs in British and American theaters and community centers and so forth in Egypt with the intention of blaming it on the Muslim Brotherhood.
But the ring got rolled up and, you know, got caught.
The Israelis, of course, denied it anyway for decades.
But then, I think it was 2005, they admitted it, and the way they admitted it was that they posthumously honored these
guys for their service to the state of Israel, you know? And so it's like, yeah, I mean, they're just
extraordinarily brazen. And it's, it's really galling, you know. And it's, you're a Navy guy.
Do you know that the lifeboats from the USS Liberty are on display at the Israeli Naval
Museum in Tel Aviv, riddled with bullet holes from them strafing the lifeboats?
Yep. By the way, if you want to read about the Lavon Affair, type in Lavon Affair and Justin Raimondo,
and you can read the great piece he wrote at antiwar.com back 20 years ago, all about that.
Yeah, you know, it's like one of the reasons that, you know, there's obviously a lot of people out there who will tell you that, you know, the Israelis killed JFK, RFK, just everything basically bad that happened in the second half of the 20th century was all Israel, 9-11, of course, et cetera.
And, you know, like on one, on one hand, like, I find it just kind of statistically improbable that there's only,
one villain in the world that did everything that we don't like. On the other hand, man,
when you look at like the Levant affair, when you look at the USS Liberty incident, when you look at
in 1948, when, you know, a lot of Iraqi Jews were not getting up and leaving
for Israel, like as they were, you know, kind of supposed to be doing at the time, Israeli agents
were planting bombs at Jewish community centers and synagogues in Baghdad
to frighten the Iraqi Jews into fleeing the country.
Yep.
And I was reading a news story about that the other day, actually.
Yeah.
And so, I mean, the thing I tell people all the time, you have to understand,
and I really kind of described this in the series I did on the early history of the conflict,
is a lot of these early Zionists, you know,
these were guys who, like, came out of the criminal underworld in Odessa
and, like, other big cities in eastern and central Europe and stuff,
who, you know, they made their way over to Israel.
And a lot of times, I mean, if you were a smuggler,
well, you're going into our intelligence service, obviously.
Like, you know, that's like a great basis for, like,
somebody that we're going to need to do, you know, these things.
Well, you were like a hitman, like for this organized gang, like, in Odessa.
Well, you're definitely going to work for one of our, you know,
one of our intelligence or security services.
And so a lot of these guys,
were like, these are gangsters, you know, like real deal.
And even the ones that weren't actually, like, literally gangsters,
a lot of these dudes had a super gangster mentality.
And so things that, like, today, probably even in Israel, you know,
which is more bureaucratic now, just like the U.S. is,
they probably wouldn't, you know,
they wouldn't get past, like, their lawyers or, like, the people who make decisions
about what's really the best thing to do.
Back then, man, these guys, they, they were off the freaking chain, dude.
Like, I mean, they would do, there's nothing that they wouldn't do.
Yeah.
Totally true.
All right, so I did find a long-lost super chat, where the guy was saying that
he thinks the Ukraine peace deal is a dead letter.
I'm afraid that's true, too.
I looked at that plan.
I noted the Washington Times had a piece this morning where the Kremlin said,
that's the first we've heard of it, or maybe not that, but we certainly have not
signed off on a thing yet.
And I know you were mentioning before we went on that Zelensky put out a thing saying he'd
rather deal with the Europeans, and that the Americans are trying to put them in an impossible
position. In other words, rejecting the terms of the deal, refusing to sign. It was way too much
to ask. So there's that. I interviewed Matthew Hoh about that, and it's on my other show channel,
Scott Horton's show, if you guys want to see. This guy says, talk in the future about history
books and maybe daisy-chain 50 books going from ancient Egypt to today. Well, that definitely
sounds like a job for you.
Yeah, it sounds like a marketing project.
But I do think it would be cool for us to pick a book every once in a while and,
you know, something from the 20th century or 21st century and use it as the basis for like
a couple shows as we go through it and talk about.
I do think that would be a good idea.
Absolutely.
Man, I have so many here.
You know, I've been, I'm always on these giant projects like writing provoked or trying to do
the audio and then the academy and all these things.
But I'm really looking forward to a time when I get to do a radio show again,
or I guess a YouTube show now, my main show, my interview show.
And I got a pile of books here I want to catch up on so many books I want to catch up on.
So dividing that between interviewing the authors for my regular show
and then also discussing these books with you here on Provoked,
I think sounds like a lot of fun.
There's so much that I want to know that I'm so far behind on.
Here's a good one, man.
Say something smart about that, Darrell.
AI and the arts.
What does it mean for the soul of man?
If AI can compose an objectively better symphony in five seconds
than most composers can in five years.
I did see a thing that said one of, if not the, top country music stars right now
is a make-believe AI.
We're getting right there, man.
Have you seen, I hadn't heard about this until Rogan told me about it
recently, but
you can find it on YouTube.
If anybody hasn't, you should check it out
like when you're done with this episode, but
it's 50 Cent's song,
What Up Gangsta. You know, the
song from his first album way back
in the day, one of the popular ones.
I'm basically, okay, so
this AI was given
a prompt, you know, to
draw on like all the music
catalog that it had access to and whatever
and to create like a
1950s or 1960s soul
version of that song.
Right.
And it's freaking good, dude.
Like, it's something you would listen to.
And so people, you know, there's a lot of people out there that would just say,
well, it'll never be able to, like, capture just sort of the, you know, the randomness
and imperfections and all that kind of stuff that makes human music so good.
And that might be true on some level for, like, high-level connoisseurs, you know.
But for those of us like me who only pretend to be able to tell the difference between a $15
and a $100 bottle of wine, that's not going to apply for most of us.
You know, like, and, um, and I mean, it's, look, especially when you think about like,
why, why did reality TV like take over the entire television medium, right?
It's because it's super cheap to make and people watch it.
And so you think all of these record companies and stuff are not going to, like, throw
huge amounts of money eventually into making sure that they can just manufacture.
I mean, your favorite artist is putting out a new song every day,
365 days a year, and they're all awesome.
Like, that's coming, you know.
And it's the lowest common denominator, too, man.
I mean, there might be a place where like, no, man,
there's no real humanity in that voice.
I need to feel that emotion of that lady screaming or anything.
I don't really like that kind of music anyway, dude.
I don't need that much emotion in my music, you know?
I have to say that as of now,
when I'm scrolling through YouTube shorts,
I hate the AI stories.
It's the same one guy's voice,
and it's just slop.
It always starts out with this little guy.
Basically, it's just garbage.
Oh, dude.
In the worst?
A year ago, it couldn't draw hands.
Now it can.
Yeah.
So the curve here is pretty quick.
Yeah.
And the worst is when you're watching
one of those videos for like 10 minutes
and then like the stock footage that it uses
just doesn't match what's going on.
And you're like, oh, fuck, like what?
I hate it, man.
I wish it was a way to filter it out.
It drives me.
nuts.
I know.
And they won't even, for me, I think this changed in the last week,
there's no three dots where you can click to block this channel,
not interested in this channel.
Like, you can't even do that anymore.
I'll just give it up all the good.
I can't stand that stuff.
I have to at least block the AI ones as they come.
Because they're never any good.
If they were any good, I might tolerate it.
Like, well, I learned something about how to build a dam today or whatever it is,
but like, no, it's never good.
Psycho here says he's got five Provokeds.
Well, that's your Christmas shopping, man.
Just give that to your mom and your dad
and your cousin and your brother-in-law
and all of them taking care of.
Tell them stop believing things.
Hey, what is this?
Are you seeing this stuff about MGT here?
Yeah, yeah, what is that about?
They're saying she's out.
She's announced she's not running again.
Like, she's not running or she's resigning?
They're saying she's, I guess,
wow,
going to refuse to be sworn in in January
for the second half of her term, I guess.
Wow.
Oh, okay.
I don't know why.
Did she say why, guys?
Was it that she, I mean, I know she was saying the other day that, like, the heat was on her from Trump going after her the way that he did.
I wonder if somebody, like, really scared the hell out of her.
You know, there's one thing.
Like, I'm somebody who, like, I voted for Trump three times.
I don't regret it.
You know, the other options were obviously worse, I think.
But I'll tell you, like, on a personal level, like, one thing about him.
Like, I can, you know, whatever, all the vulgarity and just sort of the unpredictable kind of nonsense that he spews in press conferences and stuff, I can get past, like, all of that.
Like, it's whatever.
The thing that really, like, gets me about him is he has absolutely no loyalty whatsoever to people.
Like, he demands total loyalty, though.
Yeah.
And, like, the idea that he'll be, like, out golfing with people like Lindsey Graham, who just spent years calling him Hitler and promising never to work with him or whatever,
and just palling around with people like that.
And then he would just turn on Marjorie Taylor Green,
who like, not only like did she support him, you know,
in the 2024 election or something.
She was out there, like, on January 7th, 2021,
telling everybody to calm down
and you're blowing this out of proportion.
She was standing with him then
when they were doing impeachment proceedings
and Lindsey freaking Graham was going on the floor denouncing Trump,
okay, saying this is too far, this is too much.
She was standing up for him then.
And this dude not only throws her under the bus, calling her a traitor.
And, like, you know, the way he talks about like, oh, dude, like, this is just one, I mean.
And remind us, Daryl, what was it that she'd done that crossed him so badly?
I mean, I guess the Epstein thing, you know, the Epstein files pushing that, pushing that as hard as she did.
And, like, you know, him talking about Thomas Massie, like, oh, you know, didn't your wife just die and you're already getting remarried?
Like, oh, not off, man.
Like, that's the kind of thing that you knock somebody the fuck out for
if they say it in real life, you know? Like, I just,
even in much more conservative times
yeah, the rule was one year,
and it had been a year and a half, okay?
and he wasn't just screwing around with all the hot ladies in town
he got married to a beautiful blushing new bride
with the blessing of his children and all of these wholesome things
exactly like it's supposed to be yeah and you like like
and what did Thomas Massie do
that's so offended Trump
that he would say something that nasty.
Like, oh, he opposed him on, like, the spending bill,
and he's pushing for the release of the Epstein files,
which Trump spent his entire campaign promising to do anyway.
Like, that's it.
It's just, you know, very hard to, like, you know...
I mean, people say that the lesser-of-two-evils argument
shouldn't be persuasive.
I think it should be.
It is persuasive.
And so, whatever, I'm glad that Trump is in there just because,
if Kamala was in there, I'd probably have the FBI knocking on my door.
So I'm happy about that.
We're all happy the Democrats were stopped.
On a personal level, he is hard to support sometimes.
Yeah, no, for real.
And look, Kamala's good and stopped.
Now is the time for all people who leaned Trump or especially even supported him
to hold him to account and force him to be the best Trump that he can be.
I mean, look, Darryl, right now he's putting forward a peace deal.
It might fall apart, but he's really trying to end the Ukraine war.
We can celebrate that.
But at the same time, we got to hold him to account when he's acting contrary to our interests.
Otherwise, he's going to do the wrong thing instead of the right thing.
And he's got three years to go.
He's going to be the one in that chair, calling those shots.
So, you know, it's important. I'm not going to lie and claim that I voted for the guy.
I never could because of Zionism.
but I absolutely rooted for him all three times
and I absolutely relish him
vanquishing his democratic enemies
in a horrible, evil, childlike way
that's way beneath a guy like you.
But I still can't all the way support him.
But I will support him on a one-off basis
when he's doing the right thing.
I'll absolutely cheer to high heaven
when he deals with North Korea,
when he tries to deal with the Russians,
when he calls a halt
to Israeli violence or at least
you know, does something.
But then, yeah, we got to all also be ready to criticize him
when he's doing the wrong thing and try to get him back on track.
And he seems persuadable.
You know what I mean?
He's depending on who's, you know, flattering him mostly
or who talked to him last.
So there are a lot of decisions still left to be made
and a lot of margin to move.
So I guess that's good for the joke.
Yep.
Real quick, we got a commenter.
He said, I wonder what your opinion is on artificial intelligence undermining the credibility
of video evidence and/or blackmail.
I mean, I think that, you know, we'll probably always be able to stay a little
ahead of the curve, in terms of, you know, if you were to, like, use something as court evidence,
there's probably going to be technical means to determine whether or not it was
produced by AI or not.
But in terms of, like, whatever, somebody puts out a bunch of videos that start flooding
the internet, that look like
really, like, credibly
grainy security camera
footage or something, of Scott Horton,
like, bowing down and kissing Benjamin
Netanyahu's shoes. Like,
yeah, okay, it'll eventually
get, like, disproven or whatever, but
who knows, like, how long
that'll take or if anybody will...
It's just... Tim Pool
made a pretty good-looking one
of me holding up an Israeli flag.
He just filmed me sitting across the table
from him and he put it into Grok. It was just
Grok's art thing, first try.
And it was funny because it changed my shirt to say, instead of antiwar,
it changed it to, like, anti-andy or something.
So, like, instead of antiwar, it changed it to anti-andy.
You know what we're going to have to do?
We're going to have to develop, like, certain shibboleths that the AI can't get around.
So, like, you know, if it says something
to us, we're going to have to say, like, explain to me how the Holocaust didn't happen.
And if it's like, oh, I'm sorry, I can't do that,
be like, ah-ha!
Got you.
All right.
Before you get us kicked all the way off of the YouTube.
No, I'm just saying, like, you know,
you have to come up with something
that's going to, like, get around its programming.
That was AI, everybody.
Don't believe it.
All right.
And that's Provoked.
We'll see you on next Friday.
This has been Provoked with Darryl Cooper and Scott Horton.
Be sure to like and subscribe to help us beat the propaganda algorithm.
Go follow @provoked_show on X and YouTube, and tune in next time for more Provoked.
