Chewing the Fat with Jeff Fisher - We Will Endure... Guest: Joe Allen, Author of Dark Aeon: Transhumanism and the War Against Humanity | 12/2/23
Episode Date: December 2, 2023. Talk with Joe about his new book and what the past has shown us about today and what could be happening in the future. Dark Aeon: Transhumanism and the War Against... by Allen, Joe (amazon.com)
Transcript
Blaze Radio Network
And now, Chewing the Fat with Jeff Fisher
The book, Dark Aeon.
Some people pronounce it "ay-on," but I'm telling you it's "ee-on."
Transhumanism and the War Against Humanity.
Joe Allen, welcome to Chewing the Fat.
How are you, sir?
Jeff, I'm very well.
Thank you for having me.
Absolutely.
So transhumanism.
Okay, so just a quick, the book was fascinating.
I haven't really finished it, so I apologize.
I've been digging into it, and every time I get into someplace, I go back and I want to read it again.
So we'll just figure, I know how you ended it, so we'll get there.
But first, I want to talk about, you know, we're so worried now in today's world about AI.
And when I say we, we're so worried.
We believe, many people believe, that it's going to, you know, turn against us.
And transhumanism, we're being sold a bill of goods that it is good, right?
I mean, all of this is supposed to be good for us and help humanity.
And many of us, including, I think you, believe that it really isn't going to help humanity at all, right?
I think the downsides certainly outweigh the up, at least as I perceive it.
Yeah.
Yeah, I think much like, think about fentanyl, or even the more corporate-friendly OxyContin.
OxyContin was initially billed as a product that would, you know, stop pain and would be non-addictive.
And I think that that paradigm holds with artificial intelligence as it's being sold now.
Yeah, no question about that.
Now, you know, it all started, I love that you talk about your grandfather in the book talking about television.
And I'm always reminded of my mother, and I've talked about it on the show before,
who was always saying that television was going to be the ruination of the world.
And I know your grandpa thought the same thing.
And yet we still get sucked into it, right?
I mean, that's part of our life.
So how, when we get into the box and we're all hooked on the box, and maybe not all of us should be, but many of us are, how does that affect... is that what's dampening our mind, or deadening the mind, so that we just let this stuff happen to us and we're like, it'll be okay, don't worry about it?
Yeah, we went from the deadening television to the deadening PC to the seemingly enlivening smartphone, although I would say that that's no life to aspire to.
And if the trends go in the direction that the people at Meta or Google or Apple want them to go, we'll soon have smartphones resting on our faces.
And then, of course, if Elon Musk realizes his dreams, we'll have smartphones in our brains.
I don't think that the downsides decrease as you increase the intensity of transmission.
Okay, so what's the end game then?
When you talk about transhumanism,
I'm talking about a complete,
just a chip in my brain.
We've had the shows where we have the chip in the body
and we're able to have an IP so that we can get information
and we can download fresh and new information.
But I always figured we would just wear a motorcycle helmet,
for lack of a better term,
or even just an astronaut helmet, and the screen on that helmet would be our computer screen.
And we'd be able to see live.
We'd be able to see what's going on in real life through the glass,
but on the glass we would have that information in front of us.
So am I, are we going to, is that going to be considered transhumanism,
or are we going to actually have the chips inside of us?
It's hard to say.
It depends on how well they're able to develop the technology,
such as the Neuralink brain-computer interface, or any of the other companies working on them.
But insofar as the motorcycle helmet or the astronaut's helmet is concerned,
I mean, they tried it with Google Glass over 10 years ago.
It was pretty much mocked out of existence.
But the new round has begun.
And you've got Meta with their Ray-Ban-style augmented reality glasses.
That's already here again.
We'll see what the rate of adoption is.
But I think that as they keep pushing it and as it improves and as a younger generation is much more pliable than the older and less suspicious, I would imagine that it will, if not become ubiquitous, it will at least take off to the point that it's significant.
Well, they had the problem with the original ones, and I'm not sure if they're still having the same problem in the latest studies with the glasses, that people were having a difficult time going back and forth, right,
between what's live and what they're viewing on the screen.
So that's where I got the, you know, that's where I started thinking about the helmet.
So it would be, to me anyway, an easier, more pliable way to see the difference between what's live and what's on the screen.
But I'm, you know, whatever, they'll do their studies.
And I'm sure that everyone will just be happy with it because it's all good for everyone, isn't it?
Yeah, I think that that notion, you know, it's interesting right now, especially with artificial intelligence, but you see it in other realms.
There are the kind of competing narratives between those who believe artificial intelligence will make a world of radical abundance and near omniscience.
And then on the other side, you have the people that are like, no artificial intelligence, maybe within the next five years will kill everyone on Earth.
It will turn the entire planet into gray goo, or it will create bio-weapons to kill everyone,
or it will launch nuclear strikes from one nation to another, so on and so forth.
But those extremes keep people distracted from the more incremental advances of what I would say
is a profoundly dystopian technocracy or a kind of transhuman aspiration.
So I don't believe that AI is going to kill everybody within five years.
And if I'm wrong, there'll be nobody to call me out on it.
Nor do I believe that we're going to see radical abundance. I think that what is happening right now
is not dissimilar to the kind of reckless rollout of the television or the personal computer.
There's maybe a bit more thought than before, but simultaneous with that, you've got
such a complex landscape that no one, for the most part, is concerned about what I believe
the impending threat is, which is, again, a kind of incremental or gradual assimilation
of these technologies, so that we offload our cognition more and more,
offload even our decision-making capacity more and more, and become human-AI symbiotes,
as it's often described.
Well, that's, I mean, that's where, again, you know, you say no one is thinking about that,
but I mean, that's exactly where, to me, it seems that we're heading, right?
I mean, we're able to, we're going to reach a point where even Elon says in interviews about
Tesla.
Well, just let the computer do it.
The computer will tell you what to do.
And with the computer, well, okay, so at some point, you're not even thinking about not letting the computer do it.
I mean, the computer is just going to do it and figure it out for you.
And that's where you're going to go or that's what you're going to do.
And that's what happens, right?
I mean, we're just going to say, oh, well, the computer has been right for the last four, five, six, seven, ten years.
Well, it's never going to be wrong.
We'll just let it be, we'll just do what it says at all times.
Right. So now we have the fight in the military, or at least they claim they're having the fight, over, you know, drones being able to kill humans. And we have the incremental move from there saying, well, you know, people are still going to be in charge of it. It's the AI that's not going to, you know, the actual computer, artificial intelligence, isn't going to decide. We're telling it what it's going to decide. Well, at what point in this slippery slope
does it decide, you know what, I got it. I got it. I'll take care of it for you. Don't worry about it.
And that's where you're at, right? I mean, that's the problem. Yeah, that maybe is,
that's a really dramatic case in which you've got life or death decisions being made by AIs or
algorithms. And two of the loudest proponents may be the most important, as far as I'm concerned,
you've got Eric Schmidt, former Google exec. He was the chair of the National Security Commission
on Artificial Intelligence. They released their enormous report in 2021.
And in that report with Schmidt really being the face of it to the media, in that report,
they argued that the U.S. was simply not prepared for the competitive landscape of the future,
of advanced artificial intelligence being deployed by China and Russia and other large-scale
actors.
So the argument they made was that the U.S.
should not sign on to any treaty banning lethal autonomous weapon systems.
And, you know, Schmidt now is partnered with Istari.
Istari is a military contractor.
They use algorithms, artificial intelligence, to fast-track the development of drone systems.
In the time since then to now, we've seen a really strong push in the DOD for the development
of and deployment of artificial intelligence systems
that, again, would be able to make the decision
to kill or not kill without a human in the loop.
Yes, of course, those targets would be determined
by a human at the beginning, at the outset,
say it would be based on IP addresses
or based on even facial recognition,
maybe it would even be based on uniforms, so on and so forth.
Right, right, right.
But what you see now, you've got the Replicator program
being rolled out by the DoD. They are saying that tens of thousands of autonomous drones
will be created and deployed within the next year and a half.
And, one assumes, if it's successful, these sorts of weapons systems
will become more and more commonplace.
There's one other individual that I think is really, it kind of surprised me when he said it,
but I guess in retrospect, I shouldn't have been surprised at all.
That's Marc Andreessen, the venture capitalist, who is pouring many millions, hundreds of millions, billions of dollars into various AI startups.
In a recent interview with Lex Fridman, he put forward the argument that no human being should be making the decision to kill on the battlefield, that the AI systems of the near future would always be superior to human beings, who are sort of blinded by the fog of war.
You know, Andreessen is not some sort of lefty radical.
He's very anti-communist.
He's very pro-American nationalism.
He's not unlike Peter Thiel in that regard.
And Elon Musk, I guess, is somewhere on that spectrum leaning towards that.
And what I see in all of this is this crazed techno-optimism that simultaneously is saying,
don't worry, the AI won't kill everybody.
But we also need to create AI systems that can kill some people sometimes.
I'm not convinced it's just some people sometimes. It'll be fine. It'll be fine.
All right, so we'll get back. Joe Allen, author of Dark Aeon: Transhumanism and the War Against Humanity, a fascinating book. And we can break it down. I mean, I know you break it down, and I really like the way you broke it down, starting from the very beginning, when you start putting things together.
And, you know, between the Ted Kaczynski and the Ray Kurzweil, bringing those together. Fascinating. I mean, I haven't met or talked to very many people who, you know, put those guys together. But it's absolutely true, and it's fascinating how you did that.
But I wonder how, or if, we were able to stop it now, because we're not going to be able to, right? And I know I'm talking over myself, so let me just stop. I wonder if there's any way to stop it without going back to living in a cave.
I think that, like many technologies, artificial intelligence, or any of the other technologies that descend from that, brain-computer interfaces of any sort, non-invasive or invasive, virtual reality, all of this,
it will be distributed unevenly like any technology.
Now, the smartphones are kind of an exception to that.
Smartphones are much more ubiquitous and evenly spread across the planet
than many other tech examples.
But I suspect that especially as more and more people are resistant to the idea of
integrating artificial intelligence into their businesses,
into their churches, into their schools,
that we will see something like a, you know, a control group emerge out of this.
Ironically, the pandemic kicked up a lot of techno-scepticism among people, and I hope that that
endures.
So to the extent that some people are resistant to adoption and incorporation, I don't
see it as being a ubiquitous development across the entire planet for every person,
everywhere. However, as you say, most likely those who are driving it forward are going to continue
doing so outside of technical barriers that are unforeseen. I suspect the technology will keep on
moving forward, maybe at the exponential pace that people like Kurzweil predict. And to the extent
that happens and to the extent it has the backing of Wall Street, which it does now, to the
extent it's run on the engines of Silicon Valley and Seattle, which it is, and to the extent that it enjoys the support and patronage of our DoD, which it does, and all of the other competitors across the world are pushing an arms race forward...
Yeah, it's, it's all but inevitable, right?
Yeah, anything is possible, but I suspect that the drive for more wealth, for more power, that will keep pushing it forward.
And that's, that's all we got. I mean, we will have to learn to live with this.
Yeah, and that's what we're doing, learning to live with it.
Do you think that the United States is, you know, with Eric saying that we weren't prepared,
I would think that we are actually trying to be prepared, but then I see where we're not trying,
you know, he's saying we're not prepared and don't sign any, sign any papers banning it,
which is what we're doing, right?
We're saying to the UN, we're not going to sign any of your paperwork.
We'll do a handshake deal that it's okay,
but we're not going to sign on to any kind of bans around the world.
Well, I mean, that's what we're doing now.
So are we preparing and getting better, or are we just, well, okay, we'll try to catch up?
No, you know, Eric Schmidt, he frames this as if the U.S. is somehow lagging behind in this process,
and, you know, China poses a serious competitive threat.
I don't buy that at all.
In fact, almost nobody speaks of it in this way.
The U.S. is far, far ahead of China.
I'm glad. I'm happy to hear that.
I mean, I would assume that myself, but I'm just going by what he says.
He's kind of a corporate accelerationist.
So he's not going to be as extreme as the Mark Andresens or the Peter Thiel's of the world.
But ultimately, that's where he's going.
He wants the guardrails in place.
He wants the kind of public-private
partnership that you see emerging right now in D.C. between OpenAI, Microsoft, Google, and the U.S.
government.
So I guess you could say he wants a more controlled acceleration, but he wants acceleration nonetheless.
So of course the rhetoric he employs is going to leave Americans feeling somewhat insecure
in a competitive landscape.
But ultimately, what he's talking about is creating an American-style technocracy.
One of the elements that is necessary for advanced artificial intelligence is data.
And so in order for artificial intelligence to be used to streamline or accelerate economic trends,
economic growth, it's going to require massive privacy invasions like we see already.
Well, yeah, we see that every day.
Yeah.
So my sense is that what Schmidt's trying to do is just simply prepare the way for that,
with what I see as a veneer of American freedom and, you know, American independence.
I'm not saying that he's totally disingenuous, but I do believe that the realization of the world
that he's talking about, and any of these people on the kind of techno-optimist or even
transhumanist spectrum, that their dream of preserving human freedom and human dignity or even
human identity, while at the same time constructing this mechanical and digital behemoth,
I don't think it's going to work out that way.
It has a history, and it seems to be going, in my opinion, in a very profoundly negative direction,
especially something as seemingly mundane as AI in education.
I think that as students become more and more dependent on machines, as teachers and professors
become more and more dependent on them for their instruction, what you're going to see is sort of like
the couch potato writ large across academia.
And it's already happening; it will just accelerate.
It will just increase.
I mean, it is happening.
We see it every day in retail stores across America.
I mean, people count on machines doing their work,
whether it's at warehouses or even just fast food places,
but people count on those machines to do the work for them.
So they just don't know how to do it.
They don't care about doing it because the machine's doing it.
It's fine.
The machine does it.
It's okay.
Don't worry about it.
Don't worry about it.
Okay.
So in your book, Dark Eon, you talk about the plan to stay human.
Now, you call it the 55 point plan.
I didn't count every point.
It looked less than 55.
But the plan to stay human.
So in that plan, what is probably the most important point out of the
55-point plan that you have in your book?
Yeah, you know, in some ways that title is kind of a stab at all the different presenters I've seen at conferences who end with their 10-point plan to save America, or 25-point plan to stop Marxist Gnosticism.
You know, obviously you should put all your cards on the table.
I'm not confident that much of anything that I wrote in that appendix will be followed, at
least not by a significant number of people, but I really do think that the most important is the
personal choice. I mean, I begin with that. I don't believe that the state will necessarily be the
solution to any of this. They can put in buffers. They can put in protections like data privacy.
Those are really, really important. But mostly it's going to amount to what individuals and
communities choose to do in response to this rapid transformation. And,
I don't necessarily believe that everyone should follow my recommendations.
I'm pretty upfront about that, but the more suspicion is cast on the system, the more resistance
to reckless integration of these technologies.
And maybe one of the really key elements will be to have some sort of rubric or calculus
to determine how much you're gaining from any of these technologies and how much a company
or a government organization is getting from you.
Who's really winning out in this equation?
Who won with Google, for instance?
Who is winning with Amazon?
In my opinion, I don't think the consumer is winning at all,
nor is kind of American business life as a whole
or the information landscape in regards to Google.
So I think that's really important.
Human beings who live in nations where we still have choice
are going to have to make some very difficult choices going ahead,
And I hope that a significant number, a critical mass of people are wise enough or at least paranoid enough to hold out.
Well, I mean, we all should own us, right?
I mean, no matter what, we should own ourselves.
And I think a lot of people are not owning themselves at all in today's world.
And that definitely needs to be thought about.
And I will say that at the end, and I don't want to spoil the ending
of the book, Dark Aeon, but you say and claim that we will endure. I want to ask you a question.
Will we? Absolutely. You have to have faith in that. You have to have faith in that.
I can't say with any 100% assurance that any one person listening right now will endure,
nor will their children or their children's children, but you have to have that faith. If you don't
have that faith, you're not going to respond in a way that is conducive to that endurance.
And you might say, okay, well, that's just pie in the sky. That's just, that's you fibbing,
telling a, you know, a pretty lie. I don't believe it is. I think that, for one thing,
I also have faith that this is not the end, this, this material realm is not the end-all,
be-all, and that, you know, what is truly important extends far beyond this world where,
you know, moth and rust destroy treasures.
I think that that is for at least the religious people who are listening,
I think that's probably the most important focus.
But even for the materialist, it's also very important to believe in yourself
and to believe in your human capacities.
Because if you don't, again, these corporations, by and large,
with all of their talk of empowerment and putting you in
control, what they're offering are systems that will put them more and more in control and
disempower you. So you have to have faith in yourself. You have to have faith that you're going
to endure. Joe Allen, author of Dark Aeon: Transhumanism and the War Against Humanity. The
foreword in this book is by Stephen K. Bannon, which was fascinating. I would love to have you back
and talk to you maybe a little bit about your story, let alone now your book. Your story is fascinating,
too, your American dream story.
But the book, fascinating.
I appreciate you coming on.
I know you're busy.
I'll let you go.
I could talk for another hour.
And I know you will say,
hey, shut up.
I've got to go.
So, Joe, thank you very much for joining me
on Chewing the Fat today.
I really appreciate it.
And we'll talk to you soon.
It's always good to come back and chew the fat, Jeff.
Stream and subscribe to more Blaze Media content at theblaze.com/podcasts.