QAA Podcast - AGI Is a Conspiracy Theory (E347)
Episode Date: November 7, 2025

Have you been having fun with the newest slate of AI tools? Have you been doing research with GPT-5? Coding your projects with Claude? Turning pictures of your friends into cartoon characters from the... Fairly Odd Parents using the image editing tool Nano Banana? Are you impressed with what they can do? Well guess what? You’re only impressed with them because you’re basically a naive child. You’re like a little child with an Etch A Sketch who is amazed that they can make crude images by turning the knobs, oblivious to greater possibilities. At least, that’s the impression you get when listening to tech leaders, philosophers, and even governments. According to them, soon the most impressive of AI tools will look as cheap and primitive as Netflix’s recommendation algorithm in 2007. Soon the world will have to reckon with the power of Artificial General Intelligence, or “AGI.” What is AGI? Definitions vary. When will it come? Perhaps months. Perhaps years. Perhaps decades. But definitely soon enough for you to worry about. What will it mean for humanity once it's here? Perhaps a techno utopia. Perhaps extinction. No one is sure. But what they are sure of is that AGI is definitely coming and it’s definitely going to be a big deal. A mystical event. A turning point in history, after which nothing will ever be the same. However, some are more skeptical, like our guest today Will Douglas Heaven. Will has a PhD in Computer Science from Imperial College London and is the senior editor for AI at MIT Technology Review. He recently published an article, based on his conversations with AI researchers, which provocatively calls AGI “the most consequential conspiracy theory of our time.” Jake and Travis chat with Will about the conspiracy theory-like talk from the AI industry, whether AGI is just “vibes and snake oil,” and how to distinguish between tech breakthroughs and Silicon Valley hyperbole. 
Will Douglas Heaven https://bsky.app/profile/willdouglasheaven.bsky.social How AGI became the consequential conspiracy theory of our time https://www.technologyreview.com/2025/10/30/1127057/agi-conspiracy-theory-artifcial-general-intelligence/ Subscribe for $5 a month to get all the premium episodes: https://www.patreon.com/qaa Editing by Corey Klotz. Theme by Nick Sena. Additional music by Pontus Berghe. Theme Vocals by THEY/LIVE (https://instagram.com/theyylivve / https://sptfy.com/QrDm). Cover Art by Pedro Correa: (https://pedrocorrea.com) https://qaapodcast.com QAA was known as the QAnon Anonymous podcast. The first three episodes of Annie Kelly’s new 6-part podcast miniseries “Truly Tradly Deeply” are available to Cursed Media subscribers, with new episodes released weekly. www.cursedmedia.net/ Cursed Media subscribers also get access to every episode of every QAA miniseries we produced, including Manclan by Julian Feeld and Annie Kelly, Trickle Down by Travis View, The Spectral Voyager by Jake Rockatansky and Brad Abrahams, and Perverts by Julian Feeld and Liv Agar. Plus, Cursed Media subscribers will get access to at least three new exclusive podcast miniseries every year. www.cursedmedia.net/ REFERENCES Debates on the nature of artificial general intelligence https://www.science.org/doi/10.1126/science.ado7069?utm_source=chatgpt.com Why AI Is Harder Than We Think https://arxiv.org/pdf/2104.12871 AI Capabilities May Be Overhyped on Bogus Benchmarks, Study Finds https://gizmodo.com/ai-capabilities-may-be-overhyped-on-bogus-benchmarks-study-finds-2000682577 Examining the geographic concentration of VC investment in AI https://ssti.org/blog/examining-geographic-concentration-vc-investment-ai Margaret Mitchell: artificial general intelligence is ‘just vibes and snake oil’ https://www.ft.com/content/7089bff2-25fc-4a25-98bf-8828ab24f48e
Transcript
[Theme music]
If you're hearing this, well done.
You've found a way to connect to the Internet.
Welcome to the QAA podcast, episode 347.
AGI Is a Conspiracy Theory.
As always, we are your hosts, Jake Rockatansky.
And Travis View.
Listener, have you been having fun with the newest slate of AI tools?
Sometimes.
Have you been doing research with GPT-5?
Not officially.
Coding your projects with Claude,
turning pictures of your friends into cartoon characters.
from The Fairly Odd Parents using the image editing tool, Nano Banana.
Are you impressed with what they can do?
Well, guess what?
You're only impressed with them because you're basically a naive child.
You're like a little child with an etch-a-sketch who is amazed that they can make crude images
by turning the knobs oblivious to greater possibilities.
Because according to tech leaders, philosophers, and even governments, soon the most impressive
of AI tools will look as cheap and primitive as Netflix's recommendation algorithm in 2007.
Soon the world will have to reckon with the power of artificial general intelligence, or AGI.
What is it?
Definitions vary.
When will it come?
Perhaps months, perhaps years, perhaps decades.
But definitely soon enough for you to worry about.
What will it mean for humanity once it's here?
Perhaps a techno-utopia?
Perhaps the extinction of humanity.
No one is sure.
What they are sure of is that AGI is definitely coming and it's definitely
going to be a big deal, a mystical event, a turning point in the development of humanity,
after which nothing will ever be the same. At least that seems to be the consensus. Others are
more skeptical, like our guest today, Will Douglas Heaven. Will has a PhD in computer science
from Imperial College London and is the senior editor for AI at MIT Technology Review. He recently
published an article based on his conversations with AI researchers, which provocatively calls
AGI the most consequential conspiracy theory of our time.
Will, thank you so much for joining us to talk about this.
No, thank you.
It's good to be here.
Yeah, it was a great, a great article.
Yeah, definitely, yeah, made me sort of, like, rethink the kind of, like, you know, rhetoric that's coming out of the AI space right now.
Yeah, it made me feel a little foolish because I, you know, like many of you, I have a group chat with a handful of friends.
And there's a lot of AI in there, you know, of us turning each other into various things, various squids,
creatures, you know, all sorts of stuff. And I did, after finishing the piece, feel kind of like
I was, you know, just like kind of playing in a sandbox with, you know, a shovel and bucket.
There's a lot of AI everywhere these days. Yeah, of course. I mean, but that's, I mean,
it's funny that you talk about playing. I mean, so much of what we've seen is just a lot of fun,
like the sort of the gimmicky stuff we've seen, which, I mean, maybe we'll get into this,
but, you know, the vision that we're sold of utopia and solving the world's problems and
what are we getting? We're getting sort of, you know, cute little, uh,
Studio Ghibli generators and, you know, erotic chatbots.
Yeah.
It's like the new wave of, remember when the Snapchat filters came out?
And at least for people in my age group, the elder, aging millennials, you know,
we thought the Snapchat filters were so fun and like, wow, it can put a face right on top
of yours and like it mimics your expressions.
And oh, look, now it's on grandma and grandma's a raccoon.
You know, I remember that we were looking at that in the same, with the same whimsy, I feel like, that is accompanying these, these little AI apps nowadays.
Yeah, yeah.
And now we have Sam Altman barbecuing Pikachu.
Yeah, I want to get into like the, like, really interesting sort of like conversations you've had with researchers for this.
But before we do, like, could you help me understand, like, what broadly you think is the difference between, like, the kind of, like, AI tools that consumers might be familiar with, might use
to do, like, you know, research or Studio Ghibli or that kind of stuff.
And this hypothetical AGI.
Sure.
So, look, I'll do my best here because, I mean, there are a lot of good faith people
who genuinely think they're sort of, they're building this technology.
And I think the difference between what they're aiming towards and what we have today,
I mean, the clues in the word, right?
So it's the generality.
So even the best, the best sort of tools we have today are really, really good at one thing.
You know, they're really good at generating images or generating video.
Chatbots are, like, sort of getting towards being more general, and I think that's why the sort of
the excitement about AGI has ramped up a lot in the last few years. You know, you can talk to
them and they can talk back at you about anything. But, you know, it's not hard to push a chatbot
and break it, make it say something really dumb. I mean, I don't think anybody would seriously
trust them to do something, you know, really serious. You wouldn't trust it with your health or
your money or, you know. But what we're aiming for is an AI that you really could just ask to do anything. Anything you would sort of ask
a reasonably capable person to do, you know: do your taxes, you know, help run your family
logistics, you know, run a business. I mean, and these are real examples. I mean, like a lot of
people in the field, you know, imagine building an AI that can, you know, go out and earn your
company billions of dollars. So it's, it's the idea of an AI that can basically do what a smart
person can across the board, not just in sort of these niches. And I see these advertisements all
the time, you know, along with, I'm in, I guess, the exact right age group where, you know,
I've joked about on the show before, it's like they're sending me the balding medications,
the shoes that make you look taller. And the other suite of ads that I get are from these kind
of like rise and grind bros or bras. You know, it seems to be men and women are very interested in
pushing this. And they basically are like, are you over 35 and not using AI to optimize your life? Sign up for this course and we'll take you through these 30 different AIs to help you, like, become whatever it is, you know. And it usually has to do with, you know, making your business successful, giving you the body that you want, you know, all these things that we see online that we really crave. And this seems like a new grift. It's like, hey, if you're not using all of these, the suite of tools, you're left behind, you know. Join me. Join me. Join my seminar.
Yeah. No, no, I'm nodding along to that. Yeah, the grift side of this is enormous. You know, rewind a few years and, you know, these same people were shilling for NFTs or whatever.
Right, right. You know, the industry pivoted, I think. And I speak to a lot of people, you know,
I speak to a lot of founders of startups. And it's the same company that was doing crypto stuff a few years
ago. But now, oh, everything's swung and everything's now chasing AI.
Yeah, I read an article in science that collected statements from
tech leaders related to AGI, and I want to read some of them here because I think they're
interesting.
So OpenAI's stated mission is, quote, to ensure that artificial general intelligence benefits
all of humanity.
Google DeepMind's company vision statement notes that artificial general intelligence has the
potential to drive one of the greatest transformations in history.
AGI is mentioned prominently in the UK government's national AI strategy.
The U.S. Department of Commerce's National Artificial Intelligence Advisory Committee's charter says that it should advise the president on progress towards artificial general intelligence. Microsoft researchers claimed that evidence of sparks of AGI was present in GPT-4, and Sam Altman, CEO of OpenAI, called GPT-5 a significant step along the path to
AGI. So it's like, this is quite a collection. Like, yeah, it's like major world governments, you know, tech CEOs of, like, yeah, multi-billion dollar companies, and, you know, researchers. They're all working on the presumption that this AGI thing is real and is definitely 100% coming. I mean, why should we doubt all these people, you know?
Yeah, why should we doubt these people? I mean, yeah, maybe we're wrong. Maybe we should
pack up and go home. That is exactly what, you know, has been bugging me, you know, for all these
years covering this industry and what sent me down the rabbit hole, because it never used to be
like that. Ten years ago, the idea of AGI, the idea that you could make an AI that really could do everything that a human could, was ridiculous. And even when
Open AI was founded, you know, just less than a decade ago, you know, the sort of the swagger and
the ambition of this new company that was, you know, its mission statement from the start was to
build AGI. You know, it really made them stood out because no one else was actually saying
that that seriously. And there's no accident to that, right? This is a new company that was coming
out. And we're just going to, we're going to take a big swing at this, like in this concept. But yeah,
Over the years, I think, like, a couple of things at least have happened.
Like, one, you know, if one company is saying they're going to make AGI, then you've got to say you're making it too.
Otherwise, like, what's the point of you, right?
So AGI just became, you know, the thing over the horizon that the best AI companies in the world were chasing.
And this was just a name that, like, one guy made up. He was like, oh, you should call it AGI.
Like, it doesn't really come from anything other than that guy.
I can't remember his name, but you talk about him in the piece.
Yeah, so I think you're talking about Shane Legg.
Yes, Shane Legg.
I mean, that sort of the origin of the term is kind of fun.
I mean, there's this guy Ben Goertzel, who's a really lovely, sweet guy, but, you know, he will present himself as being sort of, you know, on the edge of things.
It's just what he's drawn to.
He's drawn to these, you know, fringe ideas.
I mean, he's been in the AI field for ages.
So, like, back in the mid-2000s, he was, like, a sort of influential figure in this fringe community
that was interested in making an AI that could do these sort of human-like things.
And like, it's important to say that even though this modern concept of AGI is maybe, you know, at most 20 years old,
the ideas that it's built on go way back, you know, back to the 1950s when people first started
talking about artificial intelligence, you know, those early pioneers wanted to build a machine
that could do the things that people could. So those ideas have been bubbling around for a while.
But it was only in the mid-2000s that this guy, Ben Goertzel, wanted to put a name to
the stuff that he and his sort of colleagues in his fringe community were working on.
And he turned to like a former colleague of his called Shane Legg, who put forward this term,
you know, AGI.
Let's call it artificial general intelligence.
You know, it's like AI, but it's broader, it's bigger, it's more general.
It's like my generalized anxiety.
Broader, bigger, not specific.
100%.
And the amazing thing is, Shane Legg went on to co-found DeepMind, now Google DeepMind, you know, one of the biggest AI companies in the world. So Shane Legg took this term AGI and all the concepts behind it into DeepMind. It's probably
important to, I mean, some of your listeners may know this, but like whether or not they do, like,
just as a sort of a point-of-fact footnote, after these guys, Ben Goertzel and Shane Legg, came up with
the term AGI to sort of name this ambitious set of ideas, it sort of emerged that there was
another figure who had used the term AGI in a book back in the '90s. And so this guy's often, you know, sort of given credit for first coming up with the term, but it died and disappeared.
And it wasn't until the mid-2000s that AGI as a label for all of this sort of really took off.
We were talking a little bit before the show and before we were recording.
And I mentioned, like, one thing that I really liked about your piece. I'm really interested in conspiracy theory, especially when we're talking about conspiracy theories in this kind of, like, less conventional sense, when it's sort of being promoted by people who are otherwise very respected and credentialed. And I like contrarian takes. And this one definitely has both those elements.
You have this great quote in the piece that I screenshotted to read because I thought it was so just, like, tight and easy to understand.
You write:
Every age has its believers, people with an unshakable faith that something huge is about to happen, a before and an after, that they are privileged, parentheses, or doomed to live through.
And I think this captures the conspiracy mind so well because there are two sides of it, you know, especially as we've seen over the last, you know, five or so years, is that there are people who,
who believe in conspiracies from both angles, right?
That it's either this amazing, it's going to usher in this amazing golden age of prosperity
and wealth, or it's going to be an apocalypse.
Both are a conspiracy, but it's just kind of like, pick your pill, pick your flavor.
And I thought you presented that in a really like, like, easy to understand way in the piece.
Yeah, no, yes, thank you.
But yeah, you're totally right.
This, I mean, I should say, like, the idea of even treating AGI as a conspiracy theory.
Like, at first, I was only half serious when I first started thinking about it, right? Because obviously, AGI isn't a conspiracy. Like you were saying earlier,
Travis, like, this is what, you know, the biggest, richest companies in the world tell us
sincerely they're going to build. But when you start to look at it, things like this just pop
out, these parallels. Like, what we're being told is that this is a sort of a savior-like technology
that's going to get rid of all the world's ills. It's going to make us more prosperous. It's going to
cure disease. It's going to help us solve climate change, you know, or maybe not, right? Because
that's where it flips on a dime, right? Because if you have a technology that really could be that
powerful, then of course, you know, are us feeble humans going to be able to control it? And if we
can't, then, you know, that's the end of us. It's all part of the same belief system, the sort of
the flip between boom and doom. And it's, you know, it's been presented to us like that, you know,
in popular culture for, you know, decades and decades. Think back to, you know, HAL from 2001, right? This
malevolent artificial intelligence or even better Skynet from the Terminator, you know,
this evil artificial intelligence.
These ideas are out there.
And what's far less common, actually, if I'm just kind of digging into my video library
in the back of my head, is AIs that are benevolent, that actually are going to help, that are
going, I mean, the only one I can really think of off the top of my head, and I think this is
a bad comp, and you guys might not even know what I'm talking about.
But the old 80s Disney film Flight of the Navigator, where you have an artificial
intelligence who is running the ship who ends up being a good character and getting and getting
David back home. But, to be fair, you know, I can't think of too many examples where
the AI in the movies or the book is something that is good and something that is going to actually
bring about this thing that all of these Silicon Valley guys are saying that it's going to
this new age of, you know, this golden age. Yeah, I hadn't thought that through. But yeah,
as you were speaking, I mean, some positive examples. If I had to think of some, you've got WALL-E, right?
Right. And, uh, right, okay. Yeah, WALL-E's good. Yeah, Johnny Five. I mean, maybe that's dating.
Yes, Johnny Five is good.
Yes. These guys are, like, sort of, they're played for comic relief, right?
Right. And they're weapons, right? Johnny Five was a weapon that went rogue. You know, it was the opposite. It was built to be a weapon, but actually it became this kind of goofy guy.
Yeah, I think, yes, it's just, it doesn't make good drama, does it? Like, the idea of a genuinely beneficial, beneficent, all-powerful AI that just basically solves all our problems, that's pretty boring.
It's boring. And does it get people to spend? I don't want to go and watch a society that's got it better than I've got it here. I want to go to the movies and see somebody who's got it worse than I do, so I can leave the movies feeling good.
Yeah. I think, I mean, obviously we've seen, since ChatGPT came out, what, like, three years ago now,
and sort of the world woke up to what, you know, present-day AI could do. We've seen that sort of
that wave of doomerism really take off. I think that just like really, that grabbed the imagination
because it's exciting. It's exciting to be scared, right? Yeah, and it's exciting. At least,
you know, personally, it's exciting to think about living in an apocalypse where all of a sudden
your credit cards don't matter anymore. Your job doesn't matter anymore. All the video games
you've played, it turned out to be training. I think a lot of people, and I hear my friends, you know,
being like, oh, I'm preparing for the end of the world. Oh, yeah, you know, it's my apocalypse mobile or whatever. Myself included. Like, Travis included, we both have, like, kind of, like, off-road vehicles that he probably needs, but I don't. And it's like, that's fun to think about. It's fun to think about this system collapsing, because all of our current problems are now solved, which oddly enough is what they're saying the AI is going to do.
Yeah, you know, it's pretty normal for, like, I guess, like, tech companies to talk about their product in very grand, world-changing terms. Like, Facebook talks about community and connectivity and, like, they're bringing the world together. And these were, like, I think, overblown promises. But, like, you know, it's the general rhetoric of, like, you know, startups generally. It's like, in order to convince investors to, like, you know, put up their hard-earned money and convince people that you're a worthy bet, you have to make these big promises of, like, why you're so significant and stuff. But it feels like the AI companies are taking it to a whole new level. Like, they're making, like, extraordinary promises. They're, like, they're saying this is going to be the last invention, you know? This is something, you know, more profound than the printing press in terms of how it's going to change humanity. It just is worrying how, like, just incredibly overblown the rhetoric is on the potential impact of what they're building.
And the stock market's buying it, right? I mean, big time. What we're seeing now is utterly unprecedented. Like, the money flowing into these companies, the valuations we're seeing. And it's nearly all riding on this promise that is really quite vague and hand-wavy.
There's something that's stuck in my head, like when, I don't know if you guys watched it,
but when, you know, when Sam Altman came out and they were announcing that deal with
Nvidia for, you know, just nuclear power stations worth of, you know, computer chips and energy.
And Altman said something like, you know, now we don't have to choose between curing disease
and giving everyone free education, you know, we can do it all.
And this is what they're sort of telling us, that this near-future all-powerful AI can do those things, just, you know, educate the world for free and cure disease.
But if we try and stop them, then we would have to choose.
It's on us if we sort of somehow don't support open AI in this mission,
that, oh, you know, we tried to do both, but we couldn't.
We could only cure your disease.
We can't help you kids.
There's something that really turns me off about this really, I was going to say, subtle rhetoric.
It's not subtle at all.
But the way we're sold this technology is laughable.
And yet, as I said, like the stock market is buying it.
And what's so funny and interesting to me, you know,
I'm thinking about as you're talking about this,
it's like they're saying, hey, these, you know,
Nvidia chips, they're going to, yeah, do nuclear power plants.
They're going to do all this.
But all I'm actually seeing, like, in my own life is that, like,
they can give you a couple extra frames on your computer games
because, like, you have a kind of a, you know, shitty processor or whatever.
And, like, the AI is basically adding in, you know,
where your game would be choppy otherwise.
Like, that's what I'm seeing on my screen.
They're like, yes, this is going to solve world hunger. And, oh, nuclear power plants, they'll be able to talk to one another, oh, everything.
But all I'm really seeing is, like, I'm getting 10 to 15 extra frames per second in, like, Battlefield or something.
Like, that's what it feels like the application is.
And there's nothing wrong with that, but.
Yeah, I was going to say, what's wrong with that?
I mean, I often think, I mean, don't get me wrong.
Like, this is an amazing subject to cover because it's, it is so wild.
And, like, the stuff that's going on and people are saying is really off the charts.
But a lot of me thinks, like, what's really wrong with just, like, having better frame rates? Let's all, like, sort of come back down to earth and sort of, you know, treat this technology as if it were a normal technology that just made lots of little things a little bit better.
Yeah, that's a really, really good point. Why does it have to be
the thing that saves the world? And I think that gets into, and I know, I know Travis has got some
questions about this, about the people who are pushing these kinds of narratives. But yeah,
why isn't it just enough? Why, why can't the stock market rally on the fact that like somebody with
like a slightly older GPU can play the latest games at higher frame rates.
That's going to sell cards.
I mean, I got a 4000-series card so that I could, you know, use the DLSS technology.
That's a really good question. I think it speaks to an illness amongst the wealthy in Silicon Valley, and all of us maybe, which is that it's not good enough to solve small problems.
We need to convince people to invest in something that's,
much, much bigger.
Like, like you said, a way better movie, you know,
something that's not just kind of boring and utilitarian, I guess.
Right.
And we want to believe, like Fox Mulder says.
Yeah.
You know, another way that, sort of, the AI companies talk about their product, in a way that's different from, I guess, previous, sort of, Silicon Valley giants, is that they keep talking about AGI as this goal, this endpoint. It's something that we are working towards, that is going to happen.
We don't know when.
And it's this different than like, I guess, like, to return to Facebook.
They talk about generalities.
So it's like, we're going to make the world more connected, we're going to build communities.
These are sort of, like, vague goals, I suppose. Like, they aren't talking about, we're building the ultimate community at one time and it'll change the world then.
I mean, one consequence of that is that they keep pushing back when AGI is going to happen.
And you discuss how this is sort of like this mirrors conspiracy theory talk in your piece.
And like the big prophecy, the big event is always happening soon, always in the perpetual near future.
I mean, we see this a lot in QAnon, where, like, the big storm of, like, arrests is always going to happen.
They first predicted it in 2017; that didn't happen.
So they pushed it back and back and back.
And they always have an excuse about why it didn't happen.
I mean, but this is a little strange behavior coming from these, like, well-credentialed AI researchers and these big money tech firms.
I mean, how does this talk kind of manifest, this belief that this AGI thing is, we're just on the cusp.
We're getting a little closer.
It's going to happen in a year now.
I mean, how does this manifest in the space?
If I can just interject really quick, you say it perfectly in this passage from the piece that I screenshotted, and it'll be a perfect way to tee this up. You write: you have to admit, it all sounds a bit tinfoil hat. If you're building a conspiracy theory, you need a few things in the mix: a scheme that's flexible enough to sustain belief even when things don't work out as planned, the promise of a better future that can be realized only if believers uncover hidden truths, and a hope for salvation from the horrors of this world.
Right. Yeah, I mean, there's a lot there. Well, I would guess that, you know, the people building this technology, like, on the inside, maybe think about it one way. I mean, because these are the scientists and the engineers that, you know, know exactly what they're building. And yet, you know, they will still believe that, you know, given this thing in front of me, we're two years away, three years away, 10 years away, whatever, from building AGI. You probably have to separate them from, you know, just the rest of us who are just, you know, following along, sort of the AGI stans
that want to believe. So I have sympathy for people who are just following along and are told these
amazing things are going to happen. You know, why wouldn't you sort of get excited by that and sort of
and not think about it too critically? And then, you know, when it doesn't happen, you think,
oh, well, you know, maybe next time I'm going to sort of, you know, keep my faith. But the people
actually building it, yeah, I scratch my head. I mean, I talk to these guys a lot and, you know,
there's a very large spectrum of opinion. We're talking about, you know, the AGI believers here,
But I don't want to give the impression that that's sort of the majority of the field.
I mean, there are people who really, really push back against this sort of this overhyped talk.
But the people building this technology who genuinely believe AGI is coming.
Like you mentioned in the intro, there was a Microsoft paper a little while back called, you know, Sparks of AGI,
where they played around, you know, the scientists played around with an earlier version of GPT-4 and were just blown away by what it could do.
And really, I think, just got over-excited and, you know, wrote this academic paper and put it out there, you know, saying that what they'd seen within this model was, you know, sparks of AGI.
And I think what was going on there is that even the sort of the insiders, the scientists, the engineers, building this technology were not prepared for it getting as good as it did.
Like, so we're laughing about the whole notion of AGI.
But, like, just take a pause and think, like, the chat GPT and all the models that have come, you know, since are incredible.
Like it's, this is stuff that people didn't think we'd see so quickly, you know, five years ago.
And that's true even for the people building it.
And I think they were just blown away by how good this tech had got.
And so they thought, wow, if it's got this good, this fast, then, you know, just sort of, you know, project that on a few years.
And we are going to have this awesome human-like intelligence.
But the other crucial thing that I think has happened in the last few years is that the AI we now have, we interact with by talking to it, by typing to it in natural language. And I think even if you try really hard not to, it's difficult to
not get that sort of, you know, hair on the back of your neck feeling that you're talking to
something, right? That I just think we're so hardwired to see some kind of intention, some kind
of intelligence behind the language that it's been spat back at us, that even if we know better,
that we just feel there's something more there than there actually is. And I think that plays
plays into all this a lot that we just we sort of give we give these systems the benefit of the doubt that
they may be smarter than they are related to that there's a massive problem with how we
evaluate these models you know it's now a bit of a joke you know a new model comes out and you know
there's sort of there's a leaderboard of you know my model can do this better than your model and
it's it's sort of it's almost like a new release of an iPhone every few months where you know
this iPhone is slightly got a slightly better camera it's got a shinier case and stuff so what all these
evaluations do is sort of, you know, they make the model do a bunch of tests. You know, maybe it's like
how good is it at generating code? How good is it answering sort of math problems? And they're
trained to do those things. So when they do very well at them, you think, oh, my model is broadly
intelligent because it can solve a math exam. But again, I think that's confusing the models for,
you're treating them as if they were people. Like if I sat a math exam and I did really well in it,
then you'd probably think that, oh, he's a smart guy. It's like a, it's a proxy for,
my broader, my broader intelligence. But with these models, if it passes that particular math
test, all it tells you is that it's passed that particular math test, you shouldn't then,
you know, project more onto it. So there's, I think there's a real mess with how we evaluate
these models, how we think about them, and all of that allows this AGI myth to sort of take
hold and be more persuasive than it ought to be. I want to mention, yeah, recently I read that
there's a team led by the University of Oxford that carried out a systematic review of 445 benchmarks for LLMs across major machine learning conferences.
And they assessed how well the benchmarks adhere to the concept of, I guess, construct validity, that is, whether a benchmark truly measures the abstract phenomenon, like reasoning, robustness, or safety, that it purports to measure.
And they found that basically, so only about 16% of the benchmarks reported uncertainty estimates
or statistical tests.
I guess the point of the review is that the majority of the benchmarks that people are using to evaluate the, I guess, real abilities of these things aren't really; they're measuring proxies that don't actually evaluate the core thing that they're trying to measure. So even when we talk about how impressive and powerful these things are, we still don't even have something really concrete that we can use to evaluate how much these things are improving, or how much they are, I guess, you know, so to speak, being taught to the test, being able to pass the test without having a real kind of more impressive sort of abstract reasoning ability.
Yeah, yeah, that's it.
That's it exactly.
And because we don't have, I say we, you know, the industry, the academic field does not have a good grip on exactly what today's AI can do, it leaves the floor wide open for claims about, you know, how good they're therefore going to get.
Yeah, and it's a way better story that, like, you invent this thing and it goes out of control and it's up to you to figure out the way to, you know, safeguard society from it or use it for good. It's not nearly as interesting a story for a rich guy, anyways, you know, who sold a couple companies and, you know, doesn't have to worry about money, to instead be like, I invented this thing and you get better frame rates, and, um, you know, it's integrated into all these other apps and it makes it easier to edit audio and stuff.
Yeah, it can clean up your... Yeah, you want to make little videos. It's fun. You know, you can kind of make a little video. You know, you want to make your own Tom and Jerry cartoon, but put your friends' faces on the animals. Like, you can do that and it's fun. It's, like, boring, I feel like, for a guy like Sam Altman or any of these guys.
Yeah. If you talk in more modest terms, you can't build a trillion-dollar company, though.
Right, exactly. But why? And this goes right back to Will's point. It's like, why not? Why can't you build a trillion-dollar company on the fact that, like, hey, you got a shitty processor? Well, guess what? Our new tech is going to get you 20 more frames. Or, you know, you're editing something? Well, our new tech is going to be able to pull all the transcripts for you already, so you can put it on a timeline. Like, yeah, why isn't the little convenience stuff enough?
I think that speaks to something bigger about our society, and about how it's more fun for us to think, even if it's a doomer scenario, like, oh man, here I am, like, in the early days of Skynet. Am I going to fight for Kyle Reese, or am I going to, like, become, like, a capo for the Terminators? You know, you're not going to go down in history as somebody who changed the world if, you know, you make a better frame rate, no matter how many people might want that.
Yeah, the really interesting thing is like just the massive stakes, like, of the way they talk
about it.
Like, this is, this is the most consequential kind of like, you know, invention of our time is
going to change the world in sort of unpredictable ways.
And this is why they have, like, a range of predictions about the ultimate consequences, and that range goes from, like, total tech utopia to the absolute destruction of humanity. And like you mentioned, even the concept of AGI isn't universally held. There's increasingly a lot of pushback on the belief that this is a sensible goal for companies. But there are some big names who talk about it, and talk about it in these big existential terms. For example, there's the British-Canadian computer scientist Geoffrey Hinton, often called the godfather of AI, who is as credentialed and as decorated as anyone in the field, and he predicts that the coming superintelligence, which he believes is a certainty, will replace humans.
Fantastic.
Yeah, this is the good stuff.
Nearly all the leading researchers believe that we will get superintelligence.
We will make things smarter than ourselves.
There's a very good chance that'll happen in the next 20 years, maybe 50%.
It's coming quicker than I thought.
After a while, the superintelligences just get fed up with the fact that we're so incompetent and just replace us.
They may keep us around for a bit, and they'll certainly keep us around to keep the power stations running for a while. But they would take over. They would run things. I've talked to Elon Musk about this. He thinks they'll keep us around as pets, because the world will be more interesting with people in it.
I mean, that's the plot of the Matrix, right?
Yes. Yeah. That's, yes. So we create superintelligence, the superintelligence suppresses humans, then the superintelligence enslaves humans, essentially. That's, yes, that's the Matrix.
I interviewed Geoffrey Hinton
a couple of years ago. So he was retiring from Google and he wanted to make an announcement as he retired. Like, you know, he is an honorable guy. He didn't want to dump all over Google while he was still an employee. But the week he stepped down, you know, he went public with these fears. And he told me at the time that he was just going to spend a couple of weeks, you know, putting it out there. And it kind of amuses me that he's basically been on a non-stop press junket ever since. I think he absolutely loves the new, you know, the new role he has as this doomsayer of the industry. I think it's fascinating that he has come out and, you know, had a sort of second career in his sunset years as, you know, this guy going out and doing all this fearmongering. He told me that he was essentially surprised at how good AI had got in such a short time.
And I think this goes back to what we were saying about there being something weird going on with language models, that when you talk to this stuff, it gives people, I think, even like Geoffrey Hinton, who knows how the tech works inside out, the sense that there's more there than there is. But also it's the conclusion, you know, the sort of logic to his argument, that, you know, even if you accept, like he does, that this technology is going to become far smarter than we are, for him that automatically means that it's going to turn on us, that it's going to, you know, keep us as pets. But says who? This is just the stuff of science fiction.
Yeah, you know, it sounds like, yeah, they're, like, assigning, I don't know,
like more kind of like, I don't know, feelings of affection or feelings of hostility, which are
like something beyond mere cognition, to these AI systems. And there's a paper that I read while researching. I think it was called "Why AI Is Harder Than We Think," or something to that effect. And it talks about, so, one of the fallacies that they think causes people to overestimate how easy AI is going to be is the fact that, I guess, like, intelligence or knowing in humans is embodied. It's, like, something that's part of our more complex nervous system, rather than mere sort of disembodied cognition and thinking, sort of a brain-in-a-box kind of thing. But it's very strange that people like Hinton are sort of assigning embodied kinds of knowing and feeling to these AI systems, like, already, which... I don't think people have really developed a path to that, as far as I know, at least.
No, no.
And I mean, even people are split on that. I mean, we said already that there's no firm definition of what AGI is, but, you know, even among people that are convinced it's coming, some people think it is just going to be, you know, like a brain in a vat, well, a brain in a laptop.
Other people think that you're going to have to have, like you say, like a body and a robot, because intelligence doesn't exist without the sort of interaction with the world.
Part of me wonders if, like, us humans are just very impressed by video, still. Because it seems like this belief that AI is on the kind of course that, you know, its creators believe it's headed towards, whether that's a utopia or a dystopia... It's one thing that we've all seen, right? Like the Will Smith spaghetti video from the early days, and then you see what it looks like now, generating video of human beings. And I think that just the fact that the AI has gotten better at generating video, it's like any conspiracy theory, where, like, you take one small piece that's real and true and impressive, and then you use that to say, well, if this is possible, then this is possible. And if that's possible, then this is possible. And then you get to your, you know, your super grand conspiracy theory.
But to me, so much of this, you know... Because if you go to the LLMs, like even the latest ChatGPT, like, on the off chance that I'll try to use it for, like, research, I end up doing more work, because ChatGPT will spit out something and I'm like, that doesn't sound real. That sounds like it made up, like, a Reddit community that doesn't exist. Now I have to go and, like, fact-check the AI, and now I'm doing even more research. And it's like, that still isn't great. But what is great is the video generation. And part of me wonders if, like, we're all just so, like, screen-bound that, like, that's enough for us, that we're like, oh wow, well, look at Will Smith five years ago and look at him eating spaghetti today. Like, this thing's going to take over
the world.
Yeah. I mean, video has got a lot better really quickly. A lot. Yeah. And it's, yeah, it's amazing. For most purposes, it's, you know, it's near-perfect. And, you know, video as a medium is just extremely popular.
We're so familiar with it, so conversant with it. So seeing a machine sort of turn our thoughts
into a near-perfect video is, it's truly awesome. And like, you know, the technology is truly
awesome. But it's a video generator. It's not going to save the world. Yeah, exactly. It's going to
destroy the visual effects industry most likely as studios just are greedy and they want to spend less.
I don't think that AI actors will, like, make as big a splash as they're saying until we have a generation of kids who grow up with only AI actors, so that they don't really have anything to compare it to and they're perfectly happy to watch those characters.
I think that that could be, you know, potentially the future.
But most immediately, I think, at least from my peers in entertainment, the fear is that this is going to decimate the visual art industry.
It's funny, yeah. It's like they can't even, like, sell it in terms of, like, saying, well, this is a new revolution in entertainment technology, the same way that, like, I guess, like, sound was for film, right?
Right, right. It's a new leap in entertainment. It's going to change how it's produced and it's going to change what consumers expect from their entertainment. But, like, that's huge. That's, like, a multi-billion-dollar industry you're disrupting. But that's not enough. They need this, right, spiritual element, this sort of, like, history-defining element, in order to sell their product. Like, they could go to the effects houses and say, we have this new tool that's going to make it so much easier for your artists to create even more of what they want on a smaller budget. Like, it could have been a multi-billion-dollar industry as a tool for artists to use, as opposed to this thing that's going to inevitably replace them. Because people are... You know, me personally, I'm just that cynical, and I think that the studio systems are that greedy. But, like, even then, why wasn't that enough? And I think it speaks to this culture of, like, Silicon Valley.
They want to be gods. Because if you create the super... And you talk about this in your piece, Will. It's that, like, if you can be the guy who created the superintelligence, if you can create HAL, right? Then, like, you're a god, in a weird way, you know, in a way. And I think that's... Nothing less is good enough for these guys who have achieved every kind of material success.
I've said this before.
I'm on this kick that I think these guys have conquered
what they believe is the material world,
and they want to conquer the spiritual world through technology.
Yeah, I like that.
There's something weird going on as well.
Like, I mentioned this in the piece as well.
Like, there's a lot of parallels with sort of New Age thinking, which sort of peaked around the '80s and '90s, you know, that if we could only sort of access our inner powers, you know, humanity could transcend itself and, you know, we'd all sort of float up into the sky with great smiles on our faces.
There are some aspects of that, too, in the sort of stories that we're told about, you know, what AGI is going to do to us.
But the kind of sad thing, in a way, is that it's no longer, you know, humans that are going to, you know, save themselves.
We have to look to a machine to do it for us.
And I don't know.
I feel like there's something, there's a lot to unpack there.
Yeah, very cynical.
It's a very cynical outlook.
And when you have these kind of, like, massive, you know, egos at the top of this sort of ladder, you gotta, you gotta wonder, like, what happens when it doesn't pan out? I mean, I think
then you're, you know, I mean, whatever, they'll find a way to grift. You know, all of these
bubbles burst at some point and, um, yeah, I don't know, yeah, we'll be catapulted into the
next thing. As, you know, you talk about it's like at first it was the computer, then it was the
internet. Now it's AI. What's the next, you know, what's the next big dot com bubble going to be?
Yeah, I think we're primed to, you know... When we're told, you know, the next big thing is going to be AGI, we just sort of, you know, nod along and say, okay, cool, when is it coming?
Yeah, I get roped into this every year with the video games. It's like ray tracing. Now it's like Nvidia Remix. They're like, oh, you can play GTA 4, but with, like, real sun paths, uh, god rays. I'm totally messing that up, and you're a computer science guy, I'm embarrassing myself. But that's, like, you know, that's how I experience it as a consumer. It's just, it's always like, hey, there's a new piece. But then you're like, okay, sweet, like, I'm gonna enable ray tracing, and then your frames go all the way down to zero. But wait, there's an AI that can bring your frame rates back. And you're just, you're held hostage by these technologies, when all you want to do is, like, you know, get spawn-camped for 45 minutes.
I was thinking about the broad
economic implications of, like, what you're saying and how you're describing, you know, the AGI kind of talk that's coming out of these industries. And, like, yeah, you're not alone. Like, Margaret Mitchell, who is a pioneer of AI ethics, she's a scientist and researcher with the AI platform Hugging Face, has described artificial general intelligence as just "vibes and snake oil," which, you know, I think sounds right to me. But I think it's concerning in light of the amount of money that's being poured into the AI space. I looked it up: according to the State Science and Technology Institute, 40% of venture capital dollars for deals under $100 million are going to AI startups. And presumably a lot of that money is riding on the bet, or the promise, that these AGI-like dreams could be made real.
That just speaks to, I mean, to get into the bubble talk, that's just very, very concerning: the idea that, like, all of our hopes and dreams for future technology, all of the capital, is flowing directly into the hope that this thing is going to be real, when, you know, some very astute people are calling it just snake oil.
Yeah, Meg Mitchell is great.
I'm very much with her on the vibes and snake oil.
But, again, we're back to the cynicism.
That's all you need these days, right?
That's all you need in politics.
That's all you need to sell your company.
Yeah, maybe that's less of an indictment than she intended it to be. If it's, like, well, if it's vibes, if it's good vibes, if people are feeling the vibes, then, you know, you could pour billions and billions and billions of dollars into it and, like, see a return, you know, and maybe.
I think this is just part of something that human beings do.
Maybe that's why we're so intent on the robot saving us or destroying us.
Because, like, you go back to, like, you know, Aesop's fables, you got Jack and the Beanstalk, right?
You know, this guy, he gets fucking housed on the road.
He's going to sell the family's cow, or whatever.
I can't remember what he's going to do, but he's got to save the family.
He's got the cow.
And this, like, guy appears on the road, and he's basically like, hey, man, for, like, four magic beans, like, I'll take your cow. And who knows what the beans will do? Plant them, you'll see.
And he gets home, right?
And he gets home, and his mom is like, what the fuck, dude?
You know, that's real life.
The real life is you get home and the beans aren't magic, right?
And your wife's mad at you.
Your mom's mad at you.
Your partner's mad at you because you fucked up and you believed something that was ridiculous.
You believed in a conspiracy theory.
But the story is that he plants the beans in the ground and it grows.
And I think that is the American dream is that you can plant some regular, ordinary beans in the ground.
And it can take you up to the giant, you know, to the giant's castle.
And I don't see anything different between that and what these guys are doing.
And it's just like, they're telling you what the beanstalk is.
You know, you're planting the seeds.
And then they're like, well, the beanstalk is the video getting better.
The beanstalk is your frame rates getting better.
The beanstalk is, it's going to manage your finances.
I just feel like we're stuck doing this, especially in America.
Well, I mean, what I think is going on... I mean, Jake, you clearly, you clearly want better video games.
Clearly, clearly. I have two, I run on two issues.
You want to know, why aren't the AI companies giving me that? Like, I mean, there's a lot of reasons, right? But, like, one is, if they did that and you didn't think it's very good, then you'd say, like, you know, you haven't given me that, you've failed. But if they can keep selling you something that they don't have to deliver on, then they can just do that all the time, right? The AGI is always going to be something that's just around the corner.
And the technology they're making, I mean, people are finding, like, amazing uses for it. But like you were saying earlier, they're not building video generators to solve a problem in Hollywood. They didn't build chatbots to solve a problem that any business had, or any, you know, any of us actually had. You know, they built this stuff because it was cool and they could. And then once it's out there, you know, the rest of us have to sort of, you know, figure out what to do with it. And I think, you know, that's what's powering this industry. They're making stuff and throwing it out there. And, you know, they're just saying, you know, the stuff's going to get better and better and better until one day, boom, AGI.
But, you know, that's not a deliverable product. There's not, like, a spec sheet that they are building a product towards and then, you know, selling it to Jake, and then have Jake go on the internet and rant about it, you know, because that would be failure. And they know so well that that is failure, that if the people turn on you online, you know, forget about it. You might have to disappear for a couple years and come back with something that's, like, totally...
Yeah, I mean, you bring up a great point, which is that so much of the tech world now is just so enveloped in artificial intelligence. We were never asking for any of this shit.
We didn't ask for the internet, honestly.
Nobody was going, ah, I'd love to be able to, like, click on people's profiles and see what they ate for lunch and see who... I really want to know what my Aunt Barbara ate for lunch.
You know, it's like, oh, I want to know what color Uncle Vic thinks he is after he took this Facebook quiz.
We didn't ask for email or any of that stuff.
They just invent it and then go, you need it.
Well, it was like, well, the internet was originally invented by governments to share data more rapidly.
And then it was sort of developed by scientists and researchers, academics who had a desire to share information more frequently.
And then eventually in the 90s, people got the idea that if you turn it into a consumer product, you can make a lot of money.
And it feels like they want you to think of it as an intelligence, right? Because you could easily just come out with, like, I don't know, Adobe could be like, we have a new update, and the update allows you to drag your video a little bit further and we'll, you know, we'll sort of, like, auto-generate some frames to help you out. Like, it could have just been kind of marketed as, like, a feature, as opposed to an intelligence. And I think that's, you know, that's really the crux of this whole thing: how far will this intelligence grow, and what are we to do about it, I guess?
OpenAI had no idea what it had on its hands when it released ChatGPT. I mean, it took the company by surprise as much as it did everybody else. I mean, they'd, you know, they'd been tinkering away at this technology for a while. You know, ChatGPT was, like, the slickest version of it. The key thing that ChatGPT did that previous language models hadn't was this sort of back-and-forth dialogue, so you could chat to it. Because, you know, with models beforehand, you could sort of, you know, say, here's the first line of a story about, you know, unicorns prancing on clouds, finish the story. And, you know, you'd get the story. It was the back-and-forth chatting that ChatGPT brought that really sort of gave people chills. But OpenAI,
I've spoken to people there a bunch of times.
They had no idea how this was going to take off.
And they've been sort of scrambling to capitalize on that success ever since.
What they do is astonishingly expensive.
And so any way they can spin their tech into products, they obviously will.
I wonder if you could speak about, in light of all this, what you think would be a better, more productive way, I guess, for people, and maybe especially, like, tech journalists, to talk about AI advances and the goals of these AI companies. Because, like you mentioned, these products, sometimes they are very spectacular and impressive, and they're doing things that were unimaginable still five years ago. I mean, you could count me among the people who, like, sort of scoffed at the first generation of AI image generators. And I thought, boy, they got a lot better much faster than I thought they would. But, you know, at the same time, it's like, how do you, I guess, how do you balance the acknowledgment of the advances and their potential impacts on consumers and the economy and all these things without buying into this AI hype, you know? I don't know, I think it's a very serious challenge for people who cover this sphere to, you know, walk that fine line between the acknowledgment of the advances, the acknowledgment of the impacts it might have, and the many interesting use cases they could have, without, you know, talking like these founders, you know, talking about how AGI is upon us, as either utopia or, like you said, the apocalypse.
Yeah, this is something that we think about and talk about all the time.
Within the broader media, I mean, there's a lot of people that are just boosters, who just get excited and sort of, you know, re-up the hype that the companies themselves are putting out. But there's also a lot of people who are just the cynics, you know, who are just sort of the opposition to that. And I think neither gets it right, because you need to recognize that the technology that has been developed is amazing, and the genuine applications that it's potentially going to have, you know, are amazing. And hopefully we're going to see, you know, more good ones than bad ones. But, you know, since this is in the hands of, you know, basically internet companies, who knows. But, you know, walking that path between, you know, the hype and the cynicism is difficult. And so you've always got to make sure that you're cheerleading when it's justified and you're really pushing back when you think something is overhyped.
But, I mean, in general, and specifically on AGI, I mean, it's just taken for granted now that the industry is on a path to AGI, whatever that is.
I mean, even thinking of it as a destination is nonsense, because it's not a thing, right? There's not going to be one day when it's, like, we've made AGI, here it is. But I think we should really stop taking that for granted. Like, the sense that this near-future technology is inevitable, I think that really needs to be pushed back on. Like, you know, says who? It's only the people building it who are telling us it's inevitable.
And there are enormous costs involved in building it, you know, not just the obvious sort of financial and environmental costs, but, you know, the potential harms, you know, increasing inequality and the massive upheaval for jobs.
And education is something that, you know, gives me some personal dread. The idea of being an educator now gives me shivers.
Not to mention the environmental costs and, you know, all of that stuff.
Yeah.
Nobody is going to support a company like Open AI building all these data centers and the power stations to power them if all they're making is, you know, slightly better technology than we have already.
I mean, I wish that we could just think, you know, AI is going to be like, you know, as big as the internet again.
The internet has done amazing things.
But I think even that is not enough.
You have to sell this idea that we are going to change the world.
And therefore, any costs along the way are worth it.
Taking advantage of a shitty world is really unfair.
You know, because if everything was awesome and, you know, some snake oil salesman came up and he was like, I'm going to change the world.
You know, and most people were like, we like it the way it is.
Yeah, the other thing is that I think the reason why a lot of people buy into it is just because, I feel like for a lot of people, myself included, AI research is such a black box. And that is something people who are much better at math than me go and do, and then they come out of the black box and they tell us scary things are on the horizon. And gosh, I have no reason to doubt them. And so you just kind of, like, go along with it.
Yeah.
And even to the people building it, it's still largely a black box.
I mean, the engineers making these models do not fully understand how they work, how they do the amazing things that they do.
And whilst there's still that sort of mystery to them, you can sort of choose to believe that there's more potential inside this tech than there maybe is.
Yes.
As long as there's a mystery, there's somebody who's got, like, a narrative that solves it.
And people are going to, people who are uncomfortable with a mystery are going to gravitate towards one narrative or another.
Yes, we've been speaking to Will Douglas Heaven.
Yeah, go read this article.
I'm going to put the link in the show notes.
Read that.
Yeah, MIT Technology Review.
It helps, it helps color a lot of coverage of AI, I'll say that.
So, yeah, Will, thank you so much for joining us today.
No, thank you.
I had fun.
Where can, is there anywhere you can direct our listeners to find more of your stuff,
more of your writing?
Yeah, just go to technologyreview.com.
We have a bunch of stuff that goes up every day.
I mean, we also do really cool biotech stories and energy stories.
So yeah, it's a good place to get all your tech news.
Cool.
Go check it out, folks.
We'll put the link in the show notes.
Thanks for listening to another episode of the QAA podcast.
You can go to patreon.com slash QAA and subscribe for five dollars a month to get a second episode every week, plus access to our entire archive of premium episodes.
We've also got a website. That's qaapodcast.com.
Listener, until next week.
May the superintelligence bless you and keep you as pets.
We have auto-keyed content based on your preferences.
Right now, you know, I know, presumably everybody knows, no great secret: Musk and Bezos and Ellison and Altman and others are putting hundreds and hundreds and hundreds of billions of dollars into AI and robotics, correct?
Correct.
Okay.
Now, does anybody really believe
that these guys are doing it
in order to improve life for the average American?
Zero people believe that.
Statistically.
It's funny that you say that.
I was at Davenport, Iowa.
Those four guys don't even know.
I was in Davenport, Iowa, a couple of months ago, and we had a few thousand people out at a rally. So I said, you know, I said what I just said now, they're putting in all this money.
Raise your hand, thousands of people there.
Raise your hand if you think AI and robotics is going to help the working class of this country.
In a room with several thousand people, two hands went up.
So people understand, you know, and now what are their goals?
What are they trying to do?
And this is where it really becomes kind of creepy.
in my view, and I don't claim to be the world's greatest expert.
But what you're going to see with AI and robotics
is the displacement of millions and millions of people
from the jobs that they have.
You know, I want to see manufacturing rebuilt in America,
but for a worker, it's not going to mean anything
if robots are doing the work.
And we want to see young people start their own small businesses, et cetera.
But it's going to be incredibly hard
when we see more concentration of ownership
And when entry-level jobs are going to be done by AI, so you're looking at a revolution,
a huge economic transformation, cultural transformation of our society.
Who is determining what's happening?
Do you have much say in it?
No.
You've got a handful of people who are really determining the future of the world.
That's scary stuff.
No, no.
