Factually! with Adam Conover - Is AI Really Inevitable? with Glen Weyl
Episode Date: July 28, 2021
Everywhere you look, some pundit is claiming that AI is soon to replace humanity. But is that really inevitable - or are we choosing to make it happen? Technologist Glen Weyl joins Adam to discuss why he believes AI isn't really a technology - it's an ideology. Learn more about your ad choices. Visit megaphone.fm/adchoices See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Transcript
You know, I got to confess, I have always been a sucker for Japanese treats.
I love going down a little Tokyo, heading to a convenience store,
and grabbing all those brightly colored, fun-packaged boxes off of the shelf.
But you know what? I don't get the chance to go down there as often as I would like to.
And that is why I am so thrilled that Bokksu, a Japanese snack subscription box,
chose to sponsor this episode.
What's gotten me so excited about Bokksu is that these aren't just your run-of-the-mill grocery store finds.
Each box comes packed with 20 unique snacks that you can only find in Japan itself.
Plus, they throw in a handy guide filled with info about each snack and about Japanese culture.
And let me tell you something, you are going to need that guide because this box comes with a lot of snacks.
I just got this one today, direct from Bokksu, and look at all of these things.
We got some sort of seaweed snack here.
We've got a buttercream cookie. We've got a dolce. I don't, I'm going to have to read the
guide to figure out what this one is. It looks like some sort of sponge cake. Oh my gosh. This
one is, I think it's some kind of maybe fried banana chip. Let's try it out and see. Is that what it is? Nope, it's not banana. Maybe it's a cassava
potato chip. I should have read the guide. Ah, here they are. Iburigako smoky chips. Potato
chips made with rice flour, providing a lighter texture and satisfying crunch. Oh my gosh, this
is so much fun. You've got to get one of these for yourself. And get this: for the month of March,
Bokksu has a limited edition cherry blossom box, and 12-month subscribers get a free kimono-style
robe. And while you're wearing your new duds, you can be learning fascinating things
about your tasty snacks.
You can also rest assured that you have helped to support small family run businesses in
Japan because Bokksu works with 200 plus small makers to get their snacks delivered straight
to your door.
So if all of that sounds good, if you want a big box of delicious snacks like this for yourself,
use the code factually for $15 off your first order at Bokksu.com.
That's code factually for $15 off your first order on Bokksu.com.
I don't know the way
I don't know what to think
I don't know what to say
Yeah, but that's alright
Yeah, that's okay
I don't know anything
Hello everyone, welcome to Factually, I'm Adam Conover
So happy to have you join me on the show
once again as I talk to an amazing expert and learn all the crazy shit that they know
that I don't know. Let's jump right into today's show. You know, there's this narrative
that we're all hearing now that soon super powerful, human-replacing, godlike artificial intelligence is inevitably going to come.
And so we need to get ready.
We got to prepare ourselves for the advent of the AI super being on planet Earth.
You hear it all the time.
You know, Andrew Yang ran an entire presidential campaign on the idea that AI is coming soon.
So we need to start just giving people money because there aren't gonna be any jobs
for humans left to do once AIs
are driving all our forklifts and whatnot.
Elon Musk is constantly making headlines
by saying AI is coming soon
and we're unleashing the demon and we must beware.
And more and more, we see big promises about AI
and almost every consumer product under the sun,
from self-driving cars to our cell phones to, I don't know, like Photoshop has AI crap all over it now.
But is it really a guarantee that this version of AI, a super intelligence that is going to replace humanity at every human activity,
is that really inevitable?
Is that something that is destined to happen? Or is it a choice? Is it something that we are choosing to make manifest?
Well, on the show today to answer that question, we have Glen Weyl. He's a researcher who has one
of the greatest titles I have ever heard. He works for Microsoft and his title is Office of the Chief Technology Officer,
Political Economist and Social Technologist,
or Octopest.
I'm sure there's a story behind that title
and no, I did not ask him about it.
My mistake.
You'll have to go Bing it to find out the answer.
But without further ado, please welcome Glen Weyl.
Glen, thank you so much for coming on the show. It's my pleasure, Adam. Okay, let's jump right into it. You wrote a piece
for Wired a little while back entitled AI is an ideology, not a technology. It's a very provocative
title. What do you mean by that? And what do most people misunderstand about AI, the way that we use it
today in our daily discourse? I mean, I think the reality is most people don't know what AI means
exactly anyways. Everyone has a pretty vague notion of it. Most people literally just think
the robots are going to kill us. That's like, most people will say that and then that's about it.
Yeah, well, I would say probably most people don't even have the robots-are-going-to-kill-us idea.
They just think it's some weird techie word.
And then I think what the people who are actually building and investing in it are pursuing is quite different from the usual description that people superficially give
of it. I mean, a lot of people say, oh, wow, there's this amazing stuff. Like there's these
neural networks that are, like, recognizing people. Or sometimes, you
know, companies will just refer to pretty much all
digital technology as if it's AI or something like that.
You use the example that a couple of years ago, like something that, you know, would
make one image look like another image or image recognition technology, that sort of
thing.
Five, 10 years ago, we would have just called that image processing.
Like, oh, there's some fancy image processing like Photoshop.
And now we call it AI. Like, wow, you know, Instagram has AI to make me
look like a bunny rabbit or, you know, can identify your face using AI or that kind of thing. And it's
just become a different way of labeling the same old technology to some extent.
Yeah. And in fact, what used to be called AI, which was these logical systems that would diagnose you,
it would ask you a bunch of questions, that used to be called AI. That stuff's no longer called AI,
and anything that's cool and new just gets called AI. So that's not a very useful definition.
That's basically just calling all technology AI. And in fact, the people who are actually
building these things and investing in them, that's
not what they're trying to achieve.
They have a very particular vision, which is that they want to create autonomous systems,
systems that operate independent of a lot of people's oversight or involvement, and
that achieve intelligence that's human level on a lot of different things.
And that vision, that that's what we're trying to achieve,
I claim is not actually technology. It's not anything specific.
It's like an ideological vision of the future.
It's a particular set of ideas one has about how technology should be used.
It's a goal that one would have for technology.
It's like if I was going to say, I don't know,
everyone should all ride Segways all the time
and no one should walk anywhere.
That's not like a statement about technology.
That's a statement about how I think technology should be used.
Yeah, another way of putting it is like going to Mars is not a technology.
Going to Mars is like something you want to do, right?
Yeah.
And like you might invent some technologies to do it.
And like, in fact, this is a great example.
Think of the Apollo mission.
The Apollo mission was not a technology.
The Apollo mission was a goal.
We wanted to go to the moon.
And like we invented all sorts of stuff along the way.
A lot of it was like really useful in all sorts of other applications, like GPS,
which nobody thought had anything to do with it, and which came indirectly out of the Apollo project. But like, you know, going to the moon was the goal. And like,
AI is like a goal set. It's something that we're all trying to achieve
together. It's not a specific technology.
Yeah. What a strange thing. The strangest thing about it is that it's so often
talked about as being inevitable. I mean, one of the candidates for the Democratic nomination
a couple of years ago literally ran on AI is inevitable. It's going to replace all of our jobs,
and so we need to start cutting people checks because no one is going to be able to work
anymore because AI is going to do it for us.
And my reaction to that was always like, why would that be inevitable?
Like, why would that be?
What math problem, what math equation says that that's going to happen?
Like, that doesn't seem, as you say, that sounds like an ideology about like, that seems
like something you make happen.
It doesn't seem like something that naturally follows
the progress of technology.
Yeah, I mean, I think most claims
about what's inevitable in technology are like wrong.
And in fact, there's like a long history
of people who want to make something happen
saying that it's inevitable.
Like the way-
Oh yeah, that's what you do
when you want to make something happen.
That's like every political candidate says, well, our energy will overwhelm
the opponent and, you know, we are destined to win, et cetera.
Like, if you're in any kind of conflict and you want something to happen, you say
it's inevitable, that the forces of nature are on your side.
And this comes back to Marxism, uh, and ideology.
Like Marxism is the, like the original example of that, right?
It's that the inevitable course of history is that, you know, communism will supplant capitalism.
Yeah.
And we're just, you know, helping it happen in the right way or something like that.
And that's exactly what's going on.
That's ideology, not science.
Exactly.
So, okay, that's starting to make more sense to me why that statement, okay, AI is going to take over and do XYZ instead of humans.
Well, when someone says that, they're not making a scientific prediction.
They're making a statement about here's what they want to happen and what they are going to make happen.
Is that your argument?
Yeah. Well, let me give you an example. So the canonical, like the definition of success in AI is usually this thing called the Turing test. So a Turing test is where there's a
person sort of on IM or whatever, talking to a computer and a human. And if they can tell which
one's the computer and which one's the human, the machine loses. And if they can't tell which one is which, then the machine wins.
This is a very famous test.
And by the way, I always thought this test was kind of bullshit.
So look, the Turing test is really a three-person contest.
There are two people involved and one machine involved.
So if the machine does really well, it can win.
But if the people do really badly, the machine can also win.
Like if the person is completely drunk and like can't tell the difference between anything, then the machine wins.
And if the person who's participating in it has been like indoctrinated by their society to behave exactly like a machine, then the machine wins too.
And so like the real question in the Turing test to me is not like, well, you know, is this an interesting test, but rather
like, why are we taking the side of the machine in this? Like, do we want a world where people
are so stupid that they can't tell the difference between machines and people or where people are so
like robotic that nobody has any use for humans anymore
because they're just as bad as robots.
I would say our goal set, if you want to present that problem,
should be to make sure we create a world where people are interesting and rich enough
that no one's ever going to mistake them for a machine.
Wow.
I mean, it's a really good point that like the Turing test leaves out like the human
at the other end.
Like it's a famous thought experiment, right?
Or criteria set up by Alan Turing.
And there's a lot of, it's fun to think about.
And it's like, I think a very important idea.
But yeah, it leaves out the context of who the person is.
Here's the story I was going to tell.
When I was in college, I had read about the Turing test
and for fun, I took Eliza, which is the,
I'm sure you know what Eliza is, but for everybody else,
it's like the very-
It's a really interesting historical story
about that, by the way,
which we'll come back to in one second.
Oh, I would love to hear it.
Well, Eliza is the very simplest
like conversational program ever. It's from the sixties. And basically you go, hello, Eliza. And it says,
how do you feel today? And then you're like, I'm feeling sad. And then it just sort of plugs in,
how do you feel about feeling sad? Like, it's very, very, very simple. And so what I did
was I took it and I created a new instant messenger account, which tells you what year
this was: it was AOL. I was using AOL Instant Messenger.
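(To give a sense of just how simple this kind of program is, here's a minimal sketch in Python of the Eliza-style template substitution being described. The specific patterns and responses are illustrative stand-ins, not Weizenbaum's actual script.)

```python
import re

# A couple of illustrative reflection rules -- hypothetical, and far smaller
# than the real ELIZA's script of pattern/response templates.
RULES = [
    (re.compile(r"i'?m feeling (.+)", re.I), "How do you feel about feeling {0}?"),
    (re.compile(r"\bwhy (.+)", re.I), "How do you feel about why {0}?"),
]

def eliza_reply(message: str) -> str:
    """Match the message against each pattern and plug the captured
    text back into a canned response template."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Tell me more."  # fallback when nothing matches

print(eliza_reply("I'm feeling sad"))
# -> How do you feel about feeling sad?
print(eliza_reply("Why are you speaking gobbledygook to me?"))
# -> How do you feel about why are you speaking gobbledygook to me?
```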
I started messaging my friends from a new account with Eliza.
And despite how rudimentary this thing was,
they all believed it was a person instant messaging them.
And they got so angry at it. They were like, you're not making any sense.
Like, why are you speaking gobbledygook to me?
Like why you say, and then the program would reply,
how do you feel about why are you speaking gobbledygook to me?
But they would still engage with it. And I was like,
fucking Eliza passed the Turing test.
I tricked all these people into thinking it was a real person.
And I know that's not like technically how the Turing test works,
but I'm like, so I understand your point about that. Like, the
idea of who the person is on the other end is left out of this experiment.
Well, and who the person that they're comparing it to is left out of the experiment too, right?
And the thing that's interesting about Eliza is that Eliza was not actually created by like an AI
person. So one thing that people forget about, and this is again, the, you know, the issue of
ideology and the history of it and so forth, is that people have been really since
the beginning of computer science, in fact, the founders of computer science, arguing against
AI as a good way to go, saying that this was not like a good set of goals, that this didn't make
any sense, et cetera. And in fact, Eliza was created by one of the people who was in that camp
as like an illustration of how stupid the Turing test was.
So it was actually created to do precisely what you did with it, not to like be a therapist.
Wow. Well, and the thing is, I very quickly felt bad. Like I did it like two or three times. And
then I realized I was upsetting my friends, because they were in an argument with something that they didn't understand, you know, and under false
pretenses. And I was like, why am I doing this to people? I shouldn't do this.
So I think it actually had its intended effect on me.
Yeah, exactly.
You said that a lot of people at the beginning didn't feel that AI was an appropriate set of goals to work towards.
How do we end up in a situation now where it's like seems to be one of the main goals of like an entire sector of our economy is to pursue AI?
I mean, every possible like, you know, every company, every tech company is devoting massive amounts of resources to AI.
Consumer products are coming out that all have AI in the name.
And we as consumers really want to believe in these products.
We get products that say AI on them, and we're like, oh, yeah, it's working.
It's smarter than I am.
It seems like we've all swallowed the pill.
Why did that happen? I think that an important thing to understand,
and this you get from studying political history
and the history of ideologies,
is that what determines the success of an ideology
is almost never its effectiveness,
like whether it actually does good things in the world.
It's drama.
So think about,
there's a guy named Henry George,
who probably almost none of your listeners
have ever heard of.
Oh, I actually think more people have heard of him than you'd expect,
but for those who don't, tell us.
So Henry George was this amazing guy.
He had, like, the best-selling book
in the English language for 30 years.
He was, like, the founder of the center-left in the United States.
In fact, the progressive, the term progressive comes from the title of his book, Progress and Poverty.
And he was like the first person to run on like a real center left platform for a political office in the U.S.
And in fact, one of the guys he beat was Theodore Roosevelt.
And like that's where Theodore Roosevelt got his progressive ideas from.
And he's just like an incredibly influential, amazing guy.
He's mostly forgotten today.
Mostly forgotten today.
But if you want to say, who was it that, like,
played the biggest role in inspiring the stuff that became the New Deal, became, like,
the post-war settlement that kind of made the world that worked for a while, I would say Henry George is probably, like, one of the most
important people. But he's totally forgotten. Why? Because, like, his stuff actually made some sense.
And, like, it actually, like, did stuff. And it actually kind of worked. And as a result,
it kind of just got incorporated into institutions in a variety of ways. And it just sort of faded
into the background, you know? Whereas Karl Marx,
you know, he had this like apocalyptic vision of like the, you know, end, you know, like the
clash of this and that and whatever. And it didn't work. And like everywhere it was tried,
like things went really badly and whatever. But because it was so-
Well, they'd argue it wasn't, they'd argue it just wasn't tried right.
Well, exactly. But that's
the point, right? You know, it's like, okay, it wasn't done right, you know? Like,
the thing that never actually works, and that is always, like, tantalizingly out of reach
and in imagination, but apocalyptic, and either it's going to do something amazing and
bring in utopia or it's going to destroy everything: things with that character are, like, great ideologies, you know?
Yeah. But things that just, like, make the world work better, you know, they just make the world
work better, and then everyone forgets about them. And there's this guy, J.C.R. Licklider.
Licklider was the founder of the five computer science departments that were the
first ones in the world. He was the program officer at the defense department who gave out
all the grants that created the computer science departments. And those became the first five nodes
of what became the internet. So he was like, if you want to talk about someone who shaped the
actual technologies that people actually use today, he's probably the most important person.
But you've never heard of him.
You've heard of Turing.
You've heard of Minsky.
But you've never heard of J.C.R. Licklider.
And the reason is, his stuff just worked, you know, like actually got stuff done.
He just didn't care about, like, you know, big ideologies, big visions. He just cared about
actually making things work. And so we kind of forgot about him. But his whole vision
was man-computer symbiosis and computers as a way of communicating between people rather than
as a computational device to replace people and so forth. And, like, all the founders of, like, the actual tools that we use today were, like, followers of Licklider's.
But, you know, Marvin Minsky and the AI people, they had this, like, imagination-capturing vision.
Mostly people thought it was horrible, but, you know, a few
weird people thought it was great. And that's sort of like the same thing that happened
with Marxism. You know, it just came to be the ideology that everyone talked about mostly because
people were scared of it, but partly because people were attracted to it, you know?
And it's fun to talk about. And, by the way, there's a lot of value in
talking about those ideas. But it's like brain Velcro. You know, it's like you can spend
a lifetime, like, pulling it apart and dissecting it. Or look at, I mean, honestly, we talk
about Elon Musk too much on this show. But, you know, as he was doing a lot, you
know, five, six years ago, before everyone sort of
caught on to the scheme a little bit, he would go on at a
conference and be like, oh, I think we're all living in a simulation. And everyone goes, oh! What does
this mean? And, you know, it's like the comment that launched, like, 10,000 podcasts. Or
the same thing about, like, you know, I think we should be very careful, like, with AI we are unleashing the demon, that kind of thing.
It's like, it's a, it's a very hot thing to say.
Well, Donald Trump does this a lot too, right?
You know, like his whole thing was like, when he doesn't want you paying attention to like
X, he says something really inflammatory and that polarizes.
But the thing is when something's inflammatory and polarizes, it drowns out sense-making.
You know, it drowns out your ability to, like, actually work on the real problem.
Because everything gets distracted by, like, for and against on whatever this inflammatory thing is.
Yeah, and it's, in the case of, like, you know, AI or are we living in a simulation?
Sometimes the inflammatory thing is like also completely hypothetical.
It's a philosophical question more than it is a real one.
So so what are the real issues about AI or technology that are being drowned out that that we should be talking about instead?
What is being missed?
Well, I think what's being missed is that when we have an ideology that says,
we've got to do this crazy, amazing thing, be smarter than any human being.
And we've got to do it in an autonomous way. Some system has to do it without any humans being
involved, which is what AI is saying. The way that you get that done is by putting as many resources as possible inside of some
veil, sort of like in The Wizard of Oz.
You've got the curtain, and then you put as much stuff inside of the curtain as possible
because that's the way that you make whatever's inside of the curtain super awesome, amazing,
ultra powerful.
Right?
And you want to have as few people behind the curtain as possible, because the fewer people that are behind the curtain, the more it seems like there's no one behind the
curtain. So what you basically do is you concentrate an enormous amount of resources in the hands of a
very tiny set of people. And that is just like bad. Like if you want to call
it communism, you can call it communism. If you want to call it ultra capitalism, you can call it
ultra capitalism, whatever version of it is. It's this thing of a huge amount of
power going to a tiny set of people. And that's just like really not a good way to like make our
future work. And like, we're seeing it all over the economy. We're
seeing it all over our politics, what that's doing to us. And we just have to get past that.
We need to have a vision of what progress looks like that actually empowers different sets of
people and doesn't just concentrate all that power in this fake autonomous system.
Yeah. And concentration of power is
one of the biggest problems, maybe the biggest problem, in human society right now.
Capitalism is becoming hyper-concentrated. Geographically, power is becoming concentrated
in specific regions and on the coasts rather than in the rest of America. In business,
monopoly capitalism. I get all that,
absolutely. But you're saying that this AI ideology, when somebody says, AI is going to take over, we're going to build systems that run autonomously without human intervention,
the effect of that is actually, well, some human is intervening, just only like a couple now. Like,
all the power is being concentrated in a couple of people. Am I getting it right?
Exactly.
Wow, that makes sense to me. I mean, when you look at, because so many of the AI systems that
we have, like, you know, face recognition for law enforcement, you know, are entirely about,
uh, you know, casting a wide net over a lot of people and concentrating the power to determine who's who and who went where in the hands of a very small number of law enforcement agencies or companies.
Well, and there's all this discussion about AI bias, and that's an important discussion, and addressing particular biases is important and so forth.
But the thing is that the systems are going to be biased. Like there's no such thing as an unbiased
system. That's actually the fundamental problem. There is no unbiased system. It's
not like bias is a problem to be fixed. Like, people are biased,
and systems that are designed by people are going to be biased. The question is, who determines what the bias is, you know?
And who has the power to figure that all out?
Safiya Noble talked about this on our show, I believe.
Oh, she's great.
That's great that you had her on.
That's great context for this.
Yeah, she is terrific.
And go listen to that episode in our archive
if you haven't, folks.
Yeah, she talked about how, well, people have biases
just to a greater or lesser degree,
but it's part of being human.
And when people design a system, they embed their own biases within it.
Yeah. And the thing is, what we need to seek is not no bias. What we need to seek is a distribution of power over the digital systems we have. So we actually have a pluralistic society where people can have their different communities and so forth. And that's what's being undermined by this myth of autonomy, because the myth of autonomy is making us believe that there's just this neutral, independent thing that's outside of anyone's control that's just causing things to happen and therefore deflecting all the
responsibility from the people who actually are designing the systems. Yeah, wow. This is making
a lot of sense to me. And it is a different vision of technology. Like the other vision of
technology you talked about, where technology is, you know, something that is meant to help
individual people as a tool. That's the techno-utopia I grew up in, in the nineties, you know,
the personal computer era, the early internet,
everyone can buy a personal computer and make their own website, you know,
and communicate with their family or, you know, like you can do your tax,
you can make, make a spreadsheet or whatever, you know,
it's designed as a tool for a human to use. But AI is the opposite. It's,
oh, no, this isn't a tool that somebody uses. It just sort of exists
and it happens to you. It's something that is done to you that someone else controls and implements.
Well, what you said about the 90s was not a coincidence. That stuff was all directly designed
by Licklider and his buddies as a response to what they thought was the problem with the AI direction for technology.
So there's a guy named, in fact, my family is involved in this in kind of a funny way.
My dad worked at a lab at Stanford that was working on AI.
And in fact, my dad was co-founder of what was arguably the first AI startup in the 80s.
And he worked down the hall from a guy named Doug Engelbart. And Doug Engelbart was the inventor of
the mouse and of the graphical user interface. And he basically, like all the stuff that you
associate with personal computing, like came out of his work. And he was sort of, you know,
rivals with my dad's lab, because they were pursuing
the AI thing. And he had an opposite approach, which he called augmenting human intellect.
And they were working on this opposite problem. And, you know, you saw what bore fruit. And in
fact, that experience ended up changing my dad's mind about what was the right thing to pursue
after having lived through that whole revolution
with the personal computers.
But that lesson was not one that our society learned.
We came back to the AI thing
because we didn't have that personal experience with it.
Well, we learned it for a while,
but you know what else this reminds me of is,
you know, I'm a fan of science fiction literature,
you know, and I used to
read, like, old Isaac Asimov stories and stuff like that. And when I read
these in the 90s and early 2000s, I was really struck by the version of a computer from
that era. It's always called, like, Omnivac, and it's like a giant computer that you talk to.
Like, there'd be, you know, science fiction short stories where there's one enormous computer and everybody has, like, a little teletype to it where they can ask it
questions. And it's, you know, like, oh, great Omnivac, tell me the answer to this or that.
And it's like, the answer is 42. It was referenced in Douglas Adams. But as a kid, I was like,
well, that's stupid. That's not how, like, Isaac Asimov was dumb. That's not what computers turned
out to be. They turned out to be like, you have your own little computer that you can do whatever you want
with. It's not some super intelligence. But when you're describing this to me, I'm like, oh, that
actually was in the air. That was what some people were trying to build. And they just ended up
losing to all of our benefit. But now the people who want the one giant supercomputer that we're
all praying to like a God, those people are back,
basically. Yeah. And under the title of inevitability of AI and whatever, we're all
funneling all the resources of our society into the hands of the people with that bizarre, scary,
apocalyptic vision.
Yeah. Wow. Okay. Let's take this moment for our break. Cause I have a lot of
momentum that I want to ask you more about, and I don't want to start a new line of questioning
before the break. I want to go read some ads for car insurance or whatever, and come back
and keep grilling you. Cause this is wonderful.
Can I read, um, one poem as we leave?
No guest has ever asked if, before the break,
they can read a poem. So I'm going to grant it because I really like this request. Read the
poem, please.
So this, I think, captures the spirit of what we should be doing with technology.
This is from Taiwan's digital minister, Audrey Tang. It's her job description. She says,
when we see the internet of things, let's make it an internet of beings.
When we see virtual reality, let's make it a shared
reality. When we see machine learning, let's make it collaborative learning. When we see user
experience, let's make it about human experience. And whenever we hear that the singularity is near,
let us remember the plurality is here. That was beautiful. On that note,
we're going to go to break.
We'll be right back with more Glen Weyl.
Okay, we're back with Glen Weyl. I've read a couple ads before that.
You read a poem, which is much more nourishing than an ad, frankly.
I'm curious why you mentioned that this poem was written by a Taiwanese person, someone working for the Ministry of Information.
She's the digital minister of Taiwan. Digital minister of Taiwan.
Why? I know that Taiwan has come up in your work.
Tell me about why Taiwan is an inspiration for you.
We need more than anything not to talk more about AI and how stupid it is, etc.,
but to show people a better way.
People need to start talking about and focusing on what actually matters.
And Taiwan is the society in the world that I think is most effectively showing a different way of doing things. And this woman has an incredible life story, but really encapsulates both in her life story and in the work she's done there, I think everything that we should be aspiring to in the way that we
design our digital society. So what is it that they're doing in Taiwan that is so wonderful?
So they have a participatory democracy platform that more than a quarter of the citizens of the country are monthly active users on,
where people figure out consensus-oriented solutions to major policy problems,
participate in hackathons, and upvote solutions to water pollution
or issues that they're having with mask delivery, etc.
So they've actually managed to create like an infrastructure where, rather than us wasting
all of our time screaming about, like, you know, whatever the latest divide in American
politics is at the national level, people are actually participating in a more concrete way
in finding digital solutions to the problems that they face. And that has become, rather than like
the polarization that goes on in Facebook or Twitter, the focus of the digital culture in Taiwan.
Wow. Now, first of all, I want to say that that sounds, at first, very
Pollyanna-ish. And I'm like, well, hold on a second. How do you really get people to log on
and work in a participatory-culture kind of way? Aren't they just going to fight with each
other? But it does make me think about how, you know, organizations, platforms like Facebook and Twitter are specifically designed in ways that
like breed division and breed unproductive relationships. And they're done that way
because it benefits the people that own the companies to have those arguments, to have that
misinformation, to have that rancor, et cetera. That's why the platforms exist that way.
And they don't have to. Like, we could build a technology that does not do that to us, and that gives us, like, a fruitful way to interact with each other that actually
serves us better as a tool.
Well, and we know that that's possible, because
anyone who's ever participated in a thoughtfully mediated conversation... If you work for a company that has decent management practices, you probably will have gone through trainings that taught you how to facilitate a conversation.
And it's like, this is not like some magical thing.
Like, there's huge volumes of like management practices about how you have a respectful, meaningful, inclusive
conversation. Now, the question is, can we scale that to platforms where there's millions of people
participating? And I think the answer is, if we wanted to, we could focus on building capacities
that like actually sort of do in a scalable or AI way, you know, those functions that facilitate human collaboration, consensus
building, et cetera. But if instead what we do is we say, oh, we've got this set of incentives to
just sell ads to people, and now let's just throw an optimization engine at it and do it in the
smartest way possible for that given goal, then you're going to get the information ecosystem
that we have, you know?
Yeah. Well, look, an example of that is maybe Wikipedia, a shining example in American, or
really worldwide, internet culture, but it started in America. Like, you know, Wikipedia,
you know, there's technology behind it, but the technology hasn't changed that much since,
you know, 2003. It's really like a set of values and a set of community standards that facilitate discussion and, you know, conflict resolution.
And Wikipedia has its problems.
It has, you know, a very non-diverse volunteer base and, you know, et cetera, et cetera.
But that it's created a resource that we think of as being a technology resource,
but could only have been created by people.
Like, it could never have been created by a machine.
Yeah, well, the thing is that, like,
everyone is like, oh, Google, blah, blah, blah,
AI, blah, blah, blah.
But if you actually look, you know,
there's been some economists who've tried to study this,
computer scientists and economists.
They find that something like 40% of the value
that people get out of internet
searches comes from Wikipedia articles.
Of course it does.
So like you go to Google and whatever, and we're like, you know, they're worth, I don't
know, a trillion whatever dollars.
But like most of the actual value there, as opposed to the crap, is coming from something
that's built on a completely different set of principles, not around AI optimization,
not around profit maximization, but around like building thoughtful community. Imagine we could scale that process
and we could have a thousand Wikipedias rather than a thousand Google features. Think about how
much better of a world we'd have. And that's what they're doing in Taiwan.
And putting it that way, by the way, it's too much fun to rag on the AI. I know you said we shouldn't just do it, but it is too much
fun to do it because, you know, so much of Google, just their search product over the last 10 years
has been like, we use sophisticated algorithms to give you the answer you want before you even
search for it. Like we put the answer right there on the screen using AI, blah, blah, blah. It's
just scraping Wikipedia.
It's just like half the time when you search something on Google,
it's literally they've just pulled a fact from Wikipedia
and made it bold on the front page of your search results.
And half the time you got to click through anyway to go.
Well, there's a great XKCD cartoon,
which I don't know if you've ever seen, Adam,
but there's a stop sign and it says,
in order to complete your website registration, please identify whether there's a stop sign in this photo.
Our autonomous vehicle is approaching the stop sign, so please do it in the next three seconds.
Right, right.
Well, yeah, there's that.
I mean, you log into a website and it asks you to identify what's a bridge and what's a stop sign.
You're like, I'm helping out some fucking AI somewhere that is being trained on me.
Like these things are ultimately these are built on humans one way or another, I think, is the point.
Like these systems that we build are always it's always going to be humans all the way down at the end.
And the question is whether we conceal that and undermine the dignity and participation and voice and agency of the people involved and only give voice and agency to the geniuses who like, you know, create the system.
Or do we recognize all those people, realize that it's their actual thing, lift them up and magnify their voices?
Yeah.
Yeah.
Well, that would have been a great end to the podcast, but we have another 25 minutes to go.
So let's keep exploring those ideas. That was just a wonderful, that was like a wonderful concluding line.
And you could even say it again at the end if you want.
But, uh, like, are there any positive examples of things that are called AI, even frivolous ones, that you think, well, hold on a second, this is like another way
to do it, this is a good way to do it? Or, with that entire term, are you like, I just would love it if
nobody ever said it again?
I mean, I don't think that the terminology is useful, but there are definitely technologies
that fall under it, or that use the same set of techniques, that do better things.
So let me give you the example again from Taiwan.
So they have a system there called Polis.
What does Polis do?
It's a Wikipedia-like structure, but where there's sort of active moderation using some statistical
techniques, things that people would usually call AI. So what happens is that anyone, like,
imagine we're talking about, like, you know, gay marriage or something. People can enter in,
here's what I think about gay marriage, right? And then people can say, do I agree or disagree with someone else's statement?
And based on their responses and the language and the thing, you can then cluster these together.
And you realize, OK, well, there's this population of a million people, but there's only really like 10 opinions, more or less.
Like, everyone's saying one of those 10 things.
And then you can actually identify which of these is the most articulate way of saying
it based on what people are voting for, right?
And then you can read those 10 statements, right?
People can, you can't listen to a million people, but you could read those 10 statements,
right?
And then you can have people say things again.
But then you can score the next things, not based on how many votes you get, but how diverse across the groups from the first time are the votes that you got.
So that you actually get points if you manage to get support from people who are coming from
different things. We're not just repeating and digging ourselves into the same position,
but we're actually creating new positions that cut across the existing divides.
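(For the technically curious, here's a rough sketch in Python of the kind of diversity-weighted scoring Glen is describing. It assumes voters have already been clustered into opinion groups, and the particular scoring rule, taking the minimum agreement rate across groups, is an illustrative stand-in, not Polis's actual algorithm.)

```python
from collections import defaultdict

def consensus_scores(votes):
    """votes maps statement -> list of (voter_group, agreed) ballots,
    where voter_group is the opinion cluster a voter was assigned to.
    Returns a score per statement that rewards cross-group support."""
    scores = {}
    for statement, ballots in votes.items():
        tallies = defaultdict(lambda: [0, 0])  # group -> [agrees, total]
        for group, agreed in ballots:
            tallies[group][1] += 1
            if agreed:
                tallies[group][0] += 1
        # Score = the worst agreement rate across groups, so a statement
        # only does well if it draws support from every cluster, not just one.
        scores[statement] = min(a / t for a, t in tallies.values())
    return scores

ballots = {
    "s1": [("A", True), ("A", True), ("B", False)],  # popular with group A only
    "s2": [("A", True), ("A", False), ("B", True)],  # support from both groups
}
print(consensus_scores(ballots))  # {'s1': 0.0, 's2': 0.5} -- s2 wins
```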
I see.
And if you iterate a system like that, you can get pretty quickly to at least some rough notion of consensus on most issues. And it's come up with some really brilliant things. So like one example,
take the gay marriage case, is that they went through a system like this. And in Taiwan,
a very traditional Confucian society,
there's this notion that when two people get married,
their families automatically get married as well.
Because, you know, there's this whole notion of extended families
and the Confucian tradition and so forth.
But the thing is, like, a lot of the younger generation
don't really believe in that,
and they want to just be able to marry.
But then the older generation say, oh, well,
do we want to be forced into this extended family relationship that goes through, you know,
a gay marriage? And so what they ended up coming to using this type of a thing was that they
actually separated out the marriage of the individuals and a separate contract that was
signed by the extended families. And so that gave the freedom to, you know, a gay couple to marry, but it also gave
the freedom to the family to say, look, we don't want to be joined together at this moment, or we
do. And that's the sort of win-win solution that these types of processes can lead you to if you
have the right incentives. And so this is like a technologically assisted
sort of method of what, policymaking,
where it says, hey, let's poll people
in this specific way, let them participate
and see what we can come to.
But it's designed, it's technology designed
to form consensus rather than divide folks as a tool.
Yeah, but the thing is there was AI in there, but you almost probably lost track of the fact that there was even AI in there.
Because the thing is, the focus was not on the AI.
The focus was on helping people build consensus, and then we built whatever tools we needed to do that, right?
Yeah.
And that's how I want to see technology be used, not let's get it to some human capability or whatever,
and then figure out what to do with it. Because usually the easiest thing to do if you've built
something to imitate a human ability is to unemploy the people whose ability you're imitating.
You know, whereas if you say, no, our goal is to help people reach consensus. Now, maybe you'll
unemploy a few facilitators, but like, that's not the main thing that's going to happen. The main thing that's going to happen is you're
going to get a more cooperative society. And it's not saying no one ever gets unemployed by
technology, but if technology is built with the goal of imitating human capabilities,
it's probably pretty likely that it's going to replace a lot of people. Whereas if it's built
with the goal of facilitating like some kind of human cooperation, it might employ some people.
Probably the main thing it's going to do
is facilitate the cooperation.
You know what I mean?
Right.
We can choose the goal
of the technology that we create.
We don't need to create technology
that is going to harm humans.
Like we don't,
we can just not create technology
that's going to put humans out of business.
We could create technology
that is going to help humans become better,
help humans do more.
And of course there might be unintended consequences. And I'm not saying
you can predict everything, and of course you need to worry about that. But why set up the goal
as being replicating every human capability? Why not set up the goal as being fostering a diverse,
pluralistic democratic society where people cooperate with each other and hear each other's
perspectives in a way that can lead to reasonable consensus.
Yeah.
I think that's a very wonderful vision.
That's a very wonderful vision.
But we still somehow seem drawn to the AI model in so many ways.
Like people seem to want to believe it, you know?
Like, we've talked
about on this show before the way that we tend to trust technology more than we
should, you know, the example of people following their Waze directions into a lake, right? Or even
just, you know, I know so many people in LA traffic who religiously follow Waze, even though
I do not believe it gets you anywhere any faster. Right. But the fact that an app, because you're making left turn after left turn,
there's no way.
You gotta wait five minutes
every time you make a left turn.
You're wasting time.
Just go straight.
Like, it doesn't matter
if there's slightly less traffic on that street.
But the fact that a piece of technology is telling them
this will be the fastest way
relieves anxiety from them, right?
Because now they're not worried
that there's a faster route somewhere.
People seem to like to be told that. Or even, I don't know, this is the most
trivial example, but something that really bothered me as a comedy writer was that for a
couple of years, there was all this stuff, all these posts on Twitter. I taught an AI to write
a Seinfeld episode and here's what it spit out. And then, you know,
there would be a post of a couple pages of, like, fake Seinfeld episode.
And it was like, oh, it's so funny.
Cause look how stupid the computer is. Right.
And I was looking at this,
knowing a little bit about AI text generation and knowing a lot about comedy
writing, no fucking computer wrote this. All right.
Like a person wrote this, the person who made this post wrote the thing.
Maybe they used a program as basically magnetic poetry, kind of, where they
let it generate and then they picked and chose their best things, right? But at the end of the
day, a human did it, but we all love to imagine that a computer did. And so everyone just talked
about it as though that's what happened, even though it didn't, right? We seem to have some
attraction to this idea. Well, and it goes way back historically, way, way back. So there's a
great piece by Edgar Allan Poe in the 1830s, in which, well, he basically says that, like,
everybody's obsessed with this thing called the Mechanical Turk. So the Mechanical Turk was this,
this, you know, person hiding under a chessboard.
And he was, you know, claiming that it was playing chess, right?
And, I'd have to find his phrase exactly,
but he says that, like, you know,
everyone, all the great technical geniuses think
this is the greatest thing that people have ever created,
is this thing that's totally independent of human agency
and is playing chess.
And of course,
it was just a person hiding underneath the...
A little man making the machine work, yeah.
Well, and the thing that's so funny is that then Amazon Mechanical Turk was named after this.
Yeah.
They couldn't have been more self-consciously aware of what was going on. And yet, nowadays, everyone thinks that everything that's done by Mechanical Turk was just like some magical thing that came out of the machine,
rather than paying any attention to the people who actually do the work.
And in fact, my colleague Mary Gray wrote a book, Ghost Work, which was all about that.
And there's a great story by E.M. Forster from the turn of the century called The Machine Stops that I really recommend to everyone.
It's like 10 pages long and it's like one of the best science fiction things ever written.
It like totally anticipates the Matrix and everything else afterwards.
And it's basically about us going down the path of turning more and more over to a machine and the consequences that
follow from that. But yeah, it's a perennial attraction, precisely because it's so repulsive,
I think. It's sort of like the thing where when you get vertigo and you're on the edge of a cliff, really what you're
afraid of is you'll throw yourself off, not that you're going to fall off. And that's somehow how
this is. It's like, there's something so apocalyptic about this vision that we can't
resist pursuing it, you know?
Well, yeah, that is a really striking point. I mean, if you are in a world where you're like, hey, I'm going to get up and go for
a walk
cause my Apple Watch told me to, and Waze knows the best directions,
if you're having the experience of it feeling pleasurable to renounce your
humanity in some way and take orders from a machine, it might naturally follow:
well, we're all just
going to do this to the grave. Like, well, of course we're going to do this to ourselves.
But like, we don't have to. I mean, we have the experience of using technology for our own
benefits and knowing, hey, this is helping me or this is not helping me. I can use it or not use
it. And I think what makes the difference, and I think this is very clearly,
you know, pulled out by history, and what I think has a chance of making the difference here,
is a sense of a threat, a real threat that we have to rally against. So you think about like,
what got us out of the 1930s, it was really the threat of, you know, fascism. And why is Taiwan the place where this is happening?
Probably not much of a coincidence, right? China's right there, and they need to show that liberal democracy can work. So people are willing to devote their time to defend their way
of life. And the other place that's working incredibly well is Estonia, right on the border
with Russia, you know, constantly facing that threat. So those are the cases where this really works. And I think
that can give us a reason to be optimistic, because I think increasingly, especially after
COVID, a lot of people in the West are feeling like, can we compete with the Chinese system?
And I think the Taiwan case pulls that out really effectively. And so I think we have a chance of
rallying people around that shared sense of purpose to, you know, remember that freedom isn't free and that, you know, we actually have to take on republican responsibilities, not capital-R Republican, but what it is to be part of a republic, if we want to avoid losing our republic. You know, famously, they asked, you know,
I think it was Franklin, what kind of government have you given us after the
Constitutional Convention? And he said, a republic if you can keep it.
Well, I love your optimism. Why did I say it that way? I love your optimism.
But I have to press you a little bit, because you said, uh, you know, that competition can
really spur change. But I hear all this talk currently of an AI arms race with China, with, uh,
you know, the authoritarian China, not, uh, the Taiwanese country that claims that mantle, um,
and it specifically seems to be goading us, whenever we hear that, to create the
authoritarian version of AI, the version that, you know, tells humans what to do, that operates
without us, that replaces us, because, oh, that's what the Chinese are going to do.
So we got to build it first.
And so it seems like that arms race is pushing us in the wrong direction or the
direction that's opposite to what you lay out? Yeah, well, and the threat of fascism in Germany created the New Deal and so forth in the U.S.
and it created Stalinism in Russia. So obviously the same stimulus can lead to different outcomes.
I think, you know, we have a moment of choice. We have a moment of opportunity,
but we also have a moment of great peril. And so the question is, how do we meet that opportunity? And I think the story of Taiwan,
which is a case of actually successful society using technology to actually overcome
that exact threat, and how are they doing it successfully? That's what we need to tell people.
That's what we need to look to. I think if there's a single thing in the world
I could snap my fingers and make happen
it would be to have a really
compelling documentary
about the experience in Taiwan
or maybe a biopic or something like that.
You gotta think bigger than a documentary
coming out. If you could change one thing about the world,
you'd be like, I want there to be a documentary?
and it'll air at Tribeca and then it'll get picked up by Showtime.
You can do more.
If you got genie powers, you should just try to solve climate change first.
But I don't want to do more, because I think that, with democracy, when you use genie power, the first thing that you should think about is making sure that you don't wish for something ambitious. Because if you wish for something ambitious, you might well get what you
wished for. But the documentary, if you do that on a monkey's paw, well, the worst that happens
is it's like a bad documentary. People are like, I didn't like the documentary
very much. But at least, you know, you didn't, I don't know, turn into a cat or get a Hitler elected or something like that.
Exactly.
I mean, that's the problem with AI, you know, as people often describe it, is always that, well, it's going to be too powerful and we'll wish for the wrong thing and it will destroy the world or something like that.
Yeah.
And that's a real problem.
And that's a reason why I would never wish for something that is well beyond
the sort of power that's reasonably allocated to me in society.
Like I would want any wish that I make to be one where
other people wouldn't be like,
oh shit,
he wished that,
you know.
They would be like, okay, fine, whatever, you know?
You shouldn't make a wish that gives you, like, authoritarian power over other people, basically.
You should make a wish that gives you a chance to participate in a democratic conversation and
persuade other people around to your view, not a wish that just, like, changes things. You know what I mean?
Wow. I've never thought of this egalitarian philosophy of
wish granting. That, like, if you wish, I wish I was so wealthy and famous and,
uh, lived infinitely long, well, you're turning yourself into a god, and other people
might not like that. That's an unethical wish. You should be thinking about a wish that's like, I hope everybody is able to have a fair share, a fair say, in their community.
But even like something like that, well, then what is fair say in your community?
Like I would say like maybe here's a message that I wish everyone could hear.
Or like here's a story that I wish people could partake of or not,
you know what I mean?
So those are
the sort of wishes that I have.
Well,
you have that wish as regards the people who listen to this podcast,
who are listening to your voice right now,
you can make them learn something.
So what is your, you know,
takeaway and message for them, especially when they are listening to another podcast or reading the paper about, oh, AI, this is how it's going to change the world, the claims that are normally made, right?
How should we take those?
And what should we try to cultivate ourselves instead?
Learn about Audrey Tang and what they're doing
in Taiwan. Don't let people talk about AI and the inevitable thing and how we're locked into this
and that without pushing back against them and asking them whether there's actually a
basis, whether there's actually scientific consensus. You know, treat those
things critically, and think about the type of future that you want for technology and where
you see that actually happening, something that you want, and focus on those things. And
and try to push back on the, you know, cataclysmic end of the world type scenarios. And instead,
think about, you know, the way I think about it is there's this term in philosophy of religion
called eschatology. That's like the end of the world. Like, how's the end of the world going to
come? And then there's a term in biology called ecology, which is there's a bunch of different
stuff. It's all interacting with each other, et cetera. Like try to think more of an ecology than an eschatology.
Don't think of like the one thing that's going to be the end of it all. Think about how we get
more and more richness and diversity and cooperation and so forth. And this is something
that we can do. That was a wonderful ending, but I'm just gonna make my own comment. This is
something that we can do in our own lives using technology.
Like one thing that really inspires me
is there's so many people who still use technology
in that wonderful way from the 90s
where they use it to empower themselves
and empower their communities.
You know, whether it's like,
I love that now there's this trend of people
who make a podcast for like their friends,
for a small community, right?
Or who build a software tool that helps them and the people in their community do something like that.
Mutual aid societies are something that I love and the people who've been building the tools for
those. And I think what we, you know, if you want to ask, what should we do on the big picture public
policy level? I think the thing we need to do most of all is empower and scale those best community-oriented
things, not take some sweeping action. You know what I mean? Like, we need to lift up
all of the things that are actually doing the good work at those community levels.
Yeah, that is such an optimistic vision, and I really appreciate it. You gave me a lot to
think about and connected a lot of topics that we've had on this show
before.
A lot of names that have come up in the past in a really exciting way.
So thank you so much, Glen, for coming on the show.
Where can people find out more about your work?
Where can they support what you do?
Check out RadicalxChange.
It's a global social movement of people trying to do all
this kind of stuff. And we've got a paper coming out that's tentatively titled How AI Fails Us.
That'll probably be out in a month or two with a whole bunch of people from all different walks of
life, including many of the top experts in AI, all sort of trying to do this pushback
together in a coordinated way along some of the messages I talked about today.
Amazing. Thank you so much for being here, Glen.
My pleasure.
Well, thank you once again to Glen Weyl for coming on the show. If you want to pick up his
book, you can get it at factuallypod.com slash books. That's factuallypod.com slash books. Once again, I want to thank our
producers, Chelsea Jacobson and Sam Roudman, Ryan Connor, our engineer, Andrew WK for our theme song.
Hey, don't forget about the fine folks at Falcon Northwest for building me the incredible custom
gaming PC that I'm recording this very episode on. You can find me online at adamconover.net or at Adam Conover, wherever you get your social media. Thank you so much for listening.
And by the way, please remember, tell a friend or family member about the show if you enjoyed it.
It really does help us out a lot.
Until next time, we'll see you next week.
Thank you so much for listening. That was a HeadGum podcast.