Tech Won't Save Us - Generative AI is Not Inevitable w/ Emily M. Bender and Alex Hanna
Episode Date: May 22, 2025

Paris Marx is joined by Emily M. Bender and Alex Hanna to discuss the harms of generative AI, how the industry keeps the public invested while companies flounder under the weight of unmet promises, and what people can do to push back.

Emily M. Bender is a Professor in the Department of Linguistics at the University of Washington. Alex Hanna is Director of Research at the Distributed AI Research Institute. They are the authors of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want.

Tech Won’t Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon.

The podcast is made in partnership with The Nation. Production is by Kyla Hewson.

Also mentioned in this episode: New York Magazine reported on the consequences of increasingly widespread use of ChatGPT in education.
Transcript
There's this inevitability narrative that we hear from the tech companies all the time,
and that is a bid to steal our agency.
And so, you know, Alex said it can be inspiring to take a principled stand.
And one part of that is to say, no, I don't believe you.
If you say, well, this is here, we have to learn to live with it.
No, right?
There's nothing inevitable either in the development of something called AI or AGI
or in the sort of continued presence of this tech in our lives.
Hello and welcome to Tech Won't Save Us. I'm your host, Paris Marx. And this week,
I have two fantastic guests joining me on the show.
Alex Hanna is Director of Research at the Distributed AI Research Institute, and Emily M. Bender is a professor in the Department of Linguistics at the University of Washington.
They have a new book out called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want.
It sounds like exactly the type of thing that we'd like to talk about on this show. Alex and Emily have been on the show in the past, of course. Alex for the series that we did called Data Vampires, looking
at data centers and things like that. And Emily to talk to us about AI and her input on that as
a computational linguist. Their new book is a great primer for people who are looking to understand
what is actually going on with the AI hype that we've been seeing over the past few
years, all of the problems with it, and what the actual goals of these people are. And importantly,
they also look at how we can push back against these things, right? Both as individuals,
but also in collective ways through government and through other means. And I wanted to make
that a big part of our discussion, but we started off by talking about, you know, some of the bigger
ideas of the book around AI hype, but also a number of the things that we have been seeing
occurring at the moment that I wanted to get their input on as people who have been paying
such close attention to everything going on with AI, not just for the past couple of years,
but even before the hype around ChatGPT and generative AI and the types of things that
have been motivating the tech industry and investment since 2022.
I have to be honest, I really don't think that there is much more context that I need
to give you for this episode.
It's just a really great conversation about where we stand right now with AI and how we
might want to push back against the things that we're seeing in order to limit the harms
that we're seeing come from it, but also the broader futures that these tech billionaires
are trying to bring into being. So with that said, if you do enjoy this conversation, make sure to
leave a five-star review on your podcast platform of choice. You can share the show on social media
or with any friends or colleagues who you think would learn from it. And if you do want to support
the work that goes into making Tech Won't Save Us every single week, so we can keep having these
critical in-depth conversations on important issues, not just affecting tech, but affecting
all of us, you can join supporters like Maddie from Toronto, Alex from Edmonton, Vladimir from Brazil,
Brian from Houston, Abe from Grand Rapids, Violet from Pennsylvania, Matt from Maryland,
and Jan from Prague by going to patreon.com/techwontsaveus where you can become a supporter as well. Thanks so much and enjoy this week's conversation.
Emily, welcome back to Tech Won't Save Us.
Delighted to be here again. Thank you.
And Alex, it's great to have you on the main show, not just one of our series as you were last time you were on.
Thanks for having us, Paris.
Absolutely. I really enjoyed the new book, The AI Con. I feel like if you have someone in your life
who is like, what do I need to know about AI? What is the problem with this? This is the perfect
book to be like, pick this up,
read it. It has all the things you need to know in there. It's really concise, but also still has all the detail that you need to actually understand what is going on with this industry and why you
should be concerned about these products and, you know, what these CEOs and things are actually
doing here. So I really applaud you on the book. And so my first question was really like, how did
you decide that you wanted to put
this together?
How did you decide we need to write this book about AI to really start to dispel these myths
that are so persistent?
Oh, I'd say it actually starts with our podcast.
So we run Mystery AI Hype Theater 3000, which is an accidental podcast.
If you listen to the first couple episodes, you can tell that we did not know we were making
a podcast.
And, you know, we're doing the work in that podcast of trying to pop the bubble of AI hype.
And it's a lot of fun. We still do it. It's roughly twice a month and we do different topics each time. But it started feeling like we could reach a larger audience if we could
construct the argument and put it in one sort of neat package.
Yeah. And so we were kicking the idea of a book around
and got approached by an agent.
And we're thinking, well, yeah,
we could really think about a way
to tell a very coherent narrative.
And I think one of the things that we appreciate you naming
that it's pretty accessible,
because the idea is let's start kind of from zero
or start from a place of,
I've heard about this thing called AI and really explaining what these technologies are, explaining that AI is not one thing.
I've been trying to demystify everything that just has sparkles in front of it.
Yeah, I was really mad when they started stealing the sparkle emoji and making it seem like it was AI.
I know.
It's a fun emoji, right?
It's a great emoji.
And whenever I see something that has them, now I have like a, you know, a moment, it's like a jump scare.
I'm like, Oh no, but I want it to be good.
Absolutely.
So the book, I feel like one of the big focuses in the book is AI hype, right?
And you know, the, the effect that AI hype has on our understanding of this, how it's able to be
sold to the wider public in a way that hides some of the harms that come with this technology.
So why focus on that so much? And how do you assess the state of AI hype in May of 2025 when
we're speaking and when your book is coming out? When we finished the manuscript in September of
2024, there was this urgency. It's like, we really wish that we could have had the book on the shelves right when we finished writing it. I'm
sure all authors do like, I'm done. How come nobody can read it? Right. And I remember saying,
it will be a better world if this book is not needed in May of 2025. And unfortunately,
we are not in that better world, right? The hype continues. And why hype? I think part of it is that it really draws on our expertise. So I'm
a linguist, Alex is a sociologist, and we can look at how people talk about things, how they sell
things, how the large language models play on our linguistic competence as people and sort of open
up a possibility for hype. Sociologically, what does that do in the world? I think it gives us
an interesting angle that we don't see represented so much. There's other fantastic books that people should
be reading about the AI industry and the harms that are being done, but I think hype was an
open space we could move into. Yeah, I would say that one of the kind of handles of hype
from a sociological perspective is that it gives you a way to talk about the political economy of the AI industry. It gives you a way of talking about, well, these are the promises of the industry,
and here are the institutions that are surrounding it,
as well as the massive amounts of investment going into it, right?
It's kind of fascinating, too, here in May of 2025, I had just reskeeted or quote-skeeted this study that IBM had done of CEOs. And there were some really flooring statistics in that study. The one thing I posted to Bluesky on it, I mean, there's many different parts of it which are flooring, but the one that I quote-skeeted was
surveyed CEOs report only 25% of AI initiatives have delivered expected return on investment over
the last few years, and only 16% have scaled enterprise-wide.
And then the really flooring part of that, if you go further down, is basically, I think,
something like around 68% of those firms still plan to go full steam ahead.
So we're still within the era of AI hype. And unfortunately, or maybe fortunately for
us, unfortunately for the world, the book is still very relevant. It's always wild to see those stats,
right? Like it's pretty clear it's not paying off, but we're still going to keep pursuing these
sorts of things. I remember there was one last year where they had surveyed a bunch of employees
and a bunch of like management or like C-suite people or whatever. And the employees overwhelmingly were like, this makes me do more work. Like it's not very
efficient. It's not working very well. And like most of the managers were like, it's great that
we're implementing these things. It's very effective. Like there was this like total
disconnect there. Right. But I want to double down on that second part of my question, because
I feel like there have been multiple moments, even like you're talking about, you know, end of last summer and whatnot, where we were talking about, oh, is the
bubble finally going to burst? Is the hype finally exhausted? Are we finally waking up to what is
going on here? And, you know, it's going to start collapsing and we'll see what kind of comes on the
other side of it. But it feels like again and again, it has managed to continue limping on. So I wonder
how you assess the moment that we're in right now in terms of AI, the support that exists for it,
how it is continuing to maintain that interest in that investment.
Yeah. I mean, it's interesting because I think what it is, is it kind of signals that this is a bubble, not a bubble-shaped thing, you know? And it makes me think about, well, what's going to happen here
and what happens after AI? And, you know, we spent a little bit of time at the end of the book
thinking about when the bubble bursts, however it bursts, because it might burst slow, it might
burst fast. It might have these people, you know, these people tossing good money after bad, which then becomes bad money, and then there's what is left behind in that. And that's the kind of
data center growth and energy and waste that's been generated by that, the carbon that's put
into the atmosphere, as well as the other air pollution and small particulates. There's the
jobs that have been completely decimated,
the fields that have been decimated.
The thing is, I don't really know.
I mean, it's hard to sit here from our chairs
and try to play industry analyst,
but it's like, what kind of shape does this look like?
And I mean, I think there's either probably going to be
one of two ways, maybe one
of four ways. I'm not sure what the second axis is. It's either the Uber model, which is, well,
we're just going to keep on tossing money at this. And at least one day, this is going to return the investment. And that's after a lot of capital has been destroyed, but there's going to be some
kind of sustainable path where it's, I don't know,
you have a few data centers and that's going to help students cheat on homework or whatever,
or it'll be the big explosive kind of way, which may be less plausible, but maybe there's something
that really disrupts it. And there's a real emperor has no clothes sort of moment, but it's
hard to speculate at this point. And you ask, why is it still
limping along? Why was last summer not the end of the bubble? And again, not an industry analyst,
but from where I sit, it looks like it's a question of just how much money is there already,
and maybe something of a sunk cost fallacy on the part of the venture capitalists.
And the promise, and it comes back to the hype, right? The hype says, if we can do this thing,
if we can actually build the thing we say we're going to build, it is going to be infinitely valuable.
And so somebody whose job it is to maximize profit, who believes that hype, like,
why would they not invest, I guess? No, it's a really good point. And Alex,
you mentioned what happens after the bubble burst. I want to come back to that a little bit later
while we have a discussion about some of the stuff you discussed near the end of the book.
But there are a few other things I want to dig into before that.
Obviously, this boosterism is very much closely associated with the hype that you're talking
about here, right?
And I feel like again and again, unfortunately, we see this boosterism not just coming from
people at the top of these companies and venture capital firms who you would
expect to be boosters of this technology, right? They're hoping to make a lot of money from it and
to kind of spread it into the world. But we also often see that from a lot of media, right? And a
lot of reporting on this. What role does that play in helping to maintain this idea that AI is the
next big thing, that it is transforming the world in the way that some of these tech folks want us to believe? And how does it distort the conversation and the
public understanding? I think every time a journalist uses the phrase artificial intelligence
uncritically, right? They say, oh, there's a new AI-powered tool that does X, Y, and Z, that misinforms the public. And I think about, for example, teachers who are now faced with lots
of pressure to bring AI into the classroom so that their students are trained for the workplace of the future, right? As if that's what education is
about. And that's another sad topic. But I think that if we had more critical journalism, and there
is, you know, wonderful journalism, constantly pointing people to your podcast, Paris, and really
appreciate the work at 404 Media, and people like Karen Hao, whose fantastic book is coming out.
So there is
wonderful critical tech journalism, but there's also a lot of credulous access journalism.
And one of the things that I hope we can do with this book is help people sort of navigate to the
critical journalism and know what they're looking for so that they can continue to get good
information from the people who are asking the tough questions.
Yeah. And I mean, the credulity comes a little bit, and I'd be interested in a bit of a history of tech journalism, but so much of tech journalism has been all about access or has been all about just, ooh, look at this fun whiz-bang thing. And so, unfortunately, you know, you have your folks like Kara Swisher, who was like,
Elon Musk was great until he lost his mind.
Yeah, he was great until he called me an asshole.
And now he's not great.
And stop returning my calls.
Exactly.
Yeah, like that man lost his mind a while ago or, you know, was dyed in the wool, someone that you shouldn't trust.
And you've had a number of other folks.
And we talked, I think, before we got on the call about Kevin Roose and Casey Newton. And Kevin Roose has been this person that I think has really been brought in to the kind of, he's been kind of dumbfounded by all these different kinds of outputs of chatbots.
You know, I think there was at one point where you had this conversation with,
what was the Bing chatbot, Sydney? And then at some point he had said it had tried to get him
to leave his wife. And I don't think he's recovered from that because everything about that has really been
kind of flooring.
And I mean, it's been this kind of thing that it's just granting the tech billionaires so
much more grace than you would grant any kind of area of research that you didn't know anything
about.
Like, how would you explain this?
I mean, there's a lot of hand-waving that's going around.
And, you know, we saw this interview with a CBS show they did with Demis Hassabis.
And the way I was reading the transcript, they started off with, you know, he grew up and he was a chess prodigy at 12.
And, you know, and then they had this interview where he, you know,
said it was going to cure cancer and cure all illness in 10 years. And you can't let people
just get away with saying this complete batshit like that. Right. Yeah. It's just like, wow. Oh
my God, the future is going to be so incredible. You don't have to follow through on it, but it
sounds so great. Right. But no, I think that those are all really important points,
right? Because the public really relies on media in many senses. Obviously, we've seen this erosion
in trust with some media over the past number of years, but to a large degree, people do still look
to media and journalism to get an understanding of what is going on in the world. And when they
approach this and get these takes on AI and technology that are overwhelmingly boosterish, that are
overwhelmingly repeating the press statements of these tech companies that doesn't actually help
the public understand what is happening, but just tells them what the companies want them to believe.
And that's not how you have an informed discussion as you're trying to promote and encourage us to
have in this book, right? I feel like another piece of this, right? We look at the journalism
piece, but another piece is how these discourses play out within the tech
industry themselves. And in the book, you talk about the doomers and the boosters. We've talked
about the boosters, but, you know, there's these doomers as well who promote this notion that AI
is going to be so powerful and is a big threat to us. Can you talk to us about how these narratives
that in some senses might seem to be in conflict with one another actually work together to promote this notion that AI is
huge and powerful and going to change our lives in all these ways?
So in a nutshell, the AI boosters say AI is a thing, it is imminent, and it is going to be
all powerful and solve all of our problems. And the AI doomers say AI is a thing. It is imminent. It's going to be all powerful and it's going to kill us all.
And saying it that way, you can see very quickly that it's actually the same position.
And it's just, what turn do you take at the end? And the really frustrating thing
is that the booster versus doomer discourse sets it up as these are the full range of possibilities.
You're at one end, the other, or somewhere in between, that is it. And coming back to journalism, a lot of the journalism sets it up that way too.
And what we say, I have been saying for a long time and say in the book is, no, that's actually
two sides of the same coin. It is one small part of the range of possible ways of seeing things,
and it is not a reality-based part of it. Yeah. And so we talk in the book a lot about
these different, I mean, we don't go into the different kind of flavors of doomerism.
We kind of gesture to the TESCREAL bundle: the transhumanism, extropianism, singularitarianism, cosmism.
Okay, now we're to the R-E-A-L: rationalism, effective altruism, and long-termism.
And you've had Timnit on the program a few times.
Respect for having it memorized so well.
It's hard.
It doesn't exactly roll off the tongue to actually spell out the acronym.
I finally figured out what extropianism means, actually. From Adam Becker's book,
More Everything Forever, it's extropy as the opposite of entropy.
Oh, I'm learning things every day here. I've got that book on my big pile of books too.
And so we talk about getting into that and, you know, the different flavors and sort of, you know, the kind of thing that's interesting is that you have Yudkowsky as kind of this person who is like kind of this true believer, this person that is, somebody on Bluesky called him Big Yud, and I thought that was kind of great.
So you have this kind of towering figure, the people who are like really getting
high on their own supply. Then you have people who also have the kind of mixed incentives,
you know, like Elon Musk, who's, you know, a long-termist, says he's a long-termist, but also,
you know, is of course just trying to make money and, you know, has status and power aspirations
too. And so there's kind of a mix, you know, it's all the same effect kind of at the end,
whether you're kind of a strong long-termist or a weak long-termist, but it does lead to this
kind of doomerism and underlines the same bottom line that AI is inevitable. It's a thing we need
to believe in. It's a thing that we should put our trust in and it's just a matter of aligning
it correctly. And I feel like one of the ways that this becomes really concerning, which of course you wrote about in the book, is when you see this kind of booster-doomer dichotomy or relationship
then be the one that defines discussions in policymaking, right?
Whether it's in the US government or other parts of the world, where all of a sudden
that is the frame through which the conversation is happening.
And like you discuss it, it very effectively shapes that conversation in a
favorable way.
Yeah, 100%. I mean, we saw this. We started the book with an anecdote about Schumer's AI Insight Forums, and especially focusing on the ninth one in which he asks, what is the P(doom) and what is the P(hope)? P(doom) is the probability of doom and P(hope) is the probability of hope. So P, parentheses, doom, close parentheses.
I'm sure this is language that Chuck Schumer is using all the time in his everyday conversations.
Yeah. Chuck Schumer is, he's just walking down the halls of the Dirksen office building in DC and asking, hey bro, what's your P(doom)? And so, you know, this got some reporting too,
there was like very credulous reporting in like the SF Standard or whatever
saying the hottest question around Silicon Valley is like,
what is your P doom? And Oh my God. Yeah. But I mean, anyways,
on the policy element, I mean,
that became a frame through which to have conversations during that Insight Forum. And then the output of all those Insight Forums was that the whole thing focused on innovation or national security and whatnot, and had very little to say about
present-day harms. Yeah, it's a real concern, right? And when you think about present-day
harms, I feel like this is something that we are increasingly talking about. It's something that
all of us talk about a lot when it comes to AI, but I feel like it's slowly making its way more
into the general conversation. And I wanted to pick up on a story that was published just recently
in New York Magazine talking about how generative AI is increasingly kind of moving into education,
you know, being used by students and educators and even administration. And I wanted to ask,
because you've both been involved in these
spaces for a while, what you make of how generative AI has moved into education in the past couple of
years and the effects that it has had there. So I got to speak at the PAC-12 Dean's Conference
last spring, back when there was a PAC-12. So this is deans from universities up and down the
West Coast. I think we, not just the US, we had some folks from Canada. And I was brought in to talk about AI. And I got to say to these deans, look,
the reason this is your problem now is not that there has been some enormous breakthrough in the
technology, but rather there's a lot of money and people are trying to push it on you because they
want to start recouping some of their investments. Like that's, that's all this is. And the second thing I said to them is the only use of something like ChatGPT at a university is as a contrast dye test to understand where more resources are needed. When a student has turned to ChatGPT, it is either that the instructor has not motivated the importance of doing the homework, and so why bother? Or that student doesn't have enough resources. They haven't had enough background to do the assignment. They don't have enough time because they're also working full time or whatever it is. But basically, if you've got staff turning to ChatGPT, again, there's some lack of resources there. And that is sort of the only useful use
case from a university's point of view, in my mind. We talk a lot to K-12 educators who see this too. We had a wonderful episode of our podcast with Adrienne Williams talking about ed tech.
And, you know, she has a long history in the classroom watching how basically the stuff
is just getting pushed on students.
And the students of the wealthiest families in the U.S., at least, I'm not sure what's
going on in Canada, are in schools with no screens in sight, right?
People who really have resources, who want the best,
as we all do for our kids, are steering far clear of this. And just in general, I think that anytime
someone is using a chatbot in the context of any educational assignment, it's a missed learning
opportunity. I mean, universities have really fallen flat in the response to this. Either they have made deals with the companies,
or they have, I think they mentioned in that New York Magazine article, like, you know,
they've sort of said, okay, maybe not, but like, you decide, but they haven't really provided any
strong guidance for faculty, and it's been sort of a missed opportunity. And so, in the worst cases of it,
it's kind of like, well, you as a professor should basically just throw up your hands and
declare bankruptcy, and you just have to grade this as if someone had written it. And that is
kind of the worst thing to do, because now you're putting the onus on the professor to try to think about this and try to think about what it is in the context of their pedagogy.
There's also been turns to become just intensely carceral and just turn all instructors into cops and saying that you need to police this really heavily.
And both of those are terrible strategies. And I think that there really needs to be strong leadership from, you know, universities,
but where universities haven't pushed back, some leadership has come from some education
unions where they're like, okay, this is where we can support our members and support a really
supportive learning environment.
So for instance, in California here, Cal State, the Cal State system had one of the largest contracts.
So Cal State has, I think, something like half a million students across all their campuses, largest system in the US.
And they ended up with a $16 million deal that they had with OpenAI for just a site license for a year, not even beyond that.
So huge waste of money. Cal State, like every system in a budget crisis,
and the California Faculty Association said, okay, slow down. No, we're not actually going to have
this. We need to push back because we're not, why are we giving all our money? Why aren't we
reinvesting in kind of pedagogy and thinking about what it would mean to really invest in
our instructors and our students? I want to add another worst case scenario, which we've seen reported a few times,
is especially in hybrid classrooms like I teach, where you've got some students in person and some
online, we've seen a couple of universities proudly announcing that they have some AI
students joining the class.
Oh, Jesus.
And the reporting on that is usually appalling, right? Just sort of talking about them as
students instead of text processing surveillance machines being added to classes. Other cases where you get faculty sort of proudly
training chatbots on their own works and then putting those chatbots up as if they could be
teaching assistants for the students. Yeah, lots of terrible ideas.
Yeah, really. I hadn't even heard of those ones. Yeah, that's terrible. When I was reading that
story and as I've been
seeing stories about this come out more and more, one of my concerns is really around
critical thinking and what this actually does to the broader society. And it's also not to say that
this is a brand new thing with AI or generative AI, but it feels more like the continuation of
a process of reducing education, reducing
critical thought within society more broadly. And this feels like it comes after a long list
of developments over many decades that have further kind of degraded that. And it just
seems like a very kind of scary indication of where things are going.
Yeah, for sure. I mean, as a sociologist, my tendency in this book was to historicize. And so just taking the adage of many social scientists, always be historicizing. And so the onslaught of LLMs in the classroom has really been part of this process of the increasing casualization of the university, the increasing digitization of the university,
the increasing surveillance nature of the university. So in the cases where the university,
you know, at one point, at least in the US, where we have this private public system,
whereas, you know, I remember speaking to someone at a student power conference once,
and they're from Montreal. And we just
explained the notion of a private university to them. And they were like, someone could just start
a university. And it just, you know, it is a thing that boggles the Quebecois mind.
Yeah. In Canada, generally, like the notion of private universities are pretty,
I don't think there are many up here at all. Like it's mainly public. Yeah. Yeah. Yeah. It's, it's, it's, it's, it's completely outside of what is, what is considered
to be kind of a legible institution. And so in this place, you had private investment in universities, where many flagship universities now have 8% or less of their funds actually
coming from the state where most of the funds are coming
from donors or tuition or X, Y, Z, other place. And that's been happening for years, for decades.
And we have, I think, a stat in the book. I don't know if it's in the main text or in the body of
the book or in the footnotes where it talks about the decrease in tenure, the increase in casual
labor instructors and lecturers and adjuncts, the rise in debt financing and debt servicing
as being a huge part of expenditure. I mean, this is the next line of the casualization of
the university in which universities are always telling their instructors and department heads to
do more
with less. And so it's not really surprising that this is the next point. I mean, the last kind of
thing we had was MOOCs. Everybody is going to be on Coursera or whatever, you know, Andrew Ng is
going to replace your machine learning 101 class. But I mean, it's the next line of this. Now
everybody's going to have a personalized tutor,
and that's how they really shine that turd.
And if we could drop the scarcity and austerity mindset and think in terms of what would it be like if we wanted to sort of richly invest in education, there's so much that we could do.
And I have a favorite passage from the book that I'm not going to read the whole thing of, but
this started as a newsletter post that I authored, just in frustration at all of these
universities putting out their AI statements about how we're going to be the leader in AI,
we're going to invest in it throughout. And it's like, what if just one university would say
instead, and this was a point of an edit in the book where I had, so the introduction is,
imagine if just one two or four year college put out a statement along the following lines. And the original version of
that had O-N-E and then parentheses, you know, digit, numeral one. And Alex is like, this joke
isn't going to fly. So that's out of the book. But the imagined university statement ends with,
we recommit to our mission of training students to be critical thinkers across disciplines,
who can consider sources of information and locate them within their context. And I would love to see that, or something along those lines, where, you know, we really inhabit education for what it is. And that to me sort of speaks to this larger thing that we can do in resisting AI
hype, which is to really make vivid what it is that we all do in our lives and in our work.
For some people, art is work, but not everyone, right? And when education is sort of held up as,
oh, a chatbot can do it, it completely devalues what it is that we do as educators. And that
same thing happens over and over and over again in all of the fields that are supposedly automatable via AI. It's important to recognize or to aspire to
want to promote that critical thinking ability in as many students as possible, but also to promote
that notion that people should be able to assess the technologies in front of them or the different
tools and systems in front of them to see what is working for them and what is not to be able to
properly do that. Before we get to those kind of broader ways to address, you know, the AI
hype and the harms that come with it, I wanted to ask you both a broader question. You know,
I asked about education and the rollout of generative AI there, but what are your kind
of biggest concerns that you've seen with the AI rollout over the past couple of years?
There's so many different things. I mean, generally, the kind of areas that I'm most worried about are just the question of how
it is being used to displace work or, you know, and being promised to do a whole bunch of different
things in different forms of labor. In the book, we talk a lot about journalism and science and medicine. And I
think the last one is probably the one I am the most worried about. I think although social
services are kind of a close second, medicine worries me the most because I think that's where
there's probably the most excitement and the most money. Medical AI startup is a phrase that I've seen way too much,
but I mean, that has real immediate effects, right? And it's sad that at this point we have
more than one story where someone was speaking with a chatbot, it said one thing or whatever led them to kind of a psychiatric crisis and they took their own life. And that's very upsetting. I mean, that happened with a character.ai bot. It happened with this bot that was developed by Chai Research. And we mentioned this in the book, I think at the outset of chapter four, that's a pretty alarming thing.
There's more of those forming, not fewer, right? And we're seeing these not just in mental health,
but also in skilled nursing, in different kind of clinical settings. It's not a thing that
professionals, nursing or medical professionals want. These are things that people in National
Nurses United
have done so much political education about.
They're like, we don't want these in our offices.
My sister's a nurse practitioner
and she's like, I don't want this stuff here
to be with me in the clinician's office, right?
I feel like the United States
is probably gonna be like an innovator
with scare quotes around innovator on that
because of the private healthcare system, right? I feel like even before the generative AI moment, you already saw
Amazon and a number of these other tech companies trying to find their kind of niche in healthcare
to tap into these profits. And so unfortunately, it's no surprise to see them trying to use
generative AI for that as well. Yeah. Just to add to that for a moment,
I think that you're right that the disaster that is our healthcare system makes it ripe for this. But I also think that those with
nationalized healthcare are not necessarily safe because it can look like cost cutting. And you
were just telling us before we started recording that you have a new minister for AI in your
national government. And so like, you know, what's going to come? And I worry for the UK as well on
those lines. Oh, and I would just say, I completely agree with you. I don't think we're free from this at all.
Yeah, unfortunately. But so, yeah, I mean, healthcare is one really concerning area. I
have a hard time ranking these things. Another area that is super concerning because it is,
I think, invisible to much of the public is the use of so-called AI systems in warfare.
And I forget where I came across this line, but somebody was saying, well, I mean, surely autonomous weapons,
no one wants war, but as long as we have war, then wouldn't autonomous weapons be good because they
keep people out of harm's way? I'm like, who is being kept out of harm's way, right?
That was a potential question that someone was...
Oh, it's an upcoming podcast. We haven't done that one yet, I think.
I think there was something about that in one of Andreessen's kind of manifestos as well.
That seems very likely. Yeah. I mean, also very dehumanizing,
just the people that the autonomous weapons are attacking are not considered human.
Exactly. The people the weapons are pointed at are... Yeah. So, you know, there's an awful
lot of stuff there and there's a really big potential for sort of displacing accountability. And so there's some great reporting in +972 Magazine talking about how Israel in their war on Gaza created this like target identification system. And the people behind it are like, this is great because it allows us to generate targets faster. So kill more people, and oops, if we get the wrong
person, well, the computer told us to, kind of a situation. And I think a lot of that is less
immediately visible and often bound up in, well, it's national security. We have to do this to
stay safe. And also bound up in the long history of AI, where the start of the project was very
much military funding. Yeah, no, it's a very important piece of this, and one that I feel like doesn't get near the
attention that it deserves. And certainly, as you say, we've seen that great reporting on how
AI is being used to justify increasing death in Gaza, and I'm sure the West Bank as well.
And I have no doubt we're seeing that in other areas of conflict around the world too.
And we're just going to see that expand as these tools are continuing to be promoted in that way. I want to pivot to what you
discussed near the end of the book, because this is something that I thought was really interesting
and it's probably something that we haven't discussed as much on the show. And this is how
we think about addressing AI hype and these broader harms. And you kind of go through multiple layers of
potential points to target these particular systems. And so I wanted to start by talking
about regular people, right? I think there's often a recognition, and I talk about this on the show a
lot, that to really address these things, there need to be broader structural changes, right?
You need policy to target these things. Individual actions aren't always going to be
able to take it on. But I feel like on the flip side, that doesn't mean that there's nothing individuals can do to try to push back against this, to try to educate themselves, to try to think about what they're doing. So I wonder on that front, what do you think that individuals can do when they're thinking about AI hype and the rollout of these tools? One of the strategies is just around asking questions, right? If someone is encouraging you to
use an AI system, encouraging you to buy an AI system, maybe you are actually in a decision-making position, or maybe you are doing some community advocacy. Tawana Petty, for example,
is an amazing leader who's managed to push back on various facial recognition technologies.
And we can all ask questions like this privately and in public. So when we have some proposed
automation or it's been in place and we want to fight back against it, the first question is, okay, what's being
automated, right? What's the input, what's the output, and can you reasonably connect them to
each other, right? So if the input is audio and the output is text, and in the audio there was
someone speaking and the text is meant to be a transcription of what they said,
that is a reasonable task. It might not work equally well, and there's lots of things to
worry about when it goes wrong, but that's a reasonable thing. In contrast, you sometimes
see these research projects where people claim that they can tell from a picture of someone's
face if they're a criminal or not. There is no sensible connection between input and output there.
And so that is a place where you can say this does not make sense. You can also look at how the systems are being described, right? If
you've got this anthropomorphizing language, then that is a really clear sign that they are being
sold as doing more than they can actually do. And that's sort of a place to push back. You can ask,
how was it evaluated? Do we know how well the system works in some test conditions? And then
how do those test conditions relate to how it's actually being used in this environment where I'm considering it or it's being used on me
or whatever. You can ask who's benefiting, who's harmed, and if something goes wrong,
what's the recourse that's in place? And these are all questions that we can all ask and use them to
make informed decisions, but also help other people, if we are not the decision makers,
make informed decisions.
And so that is sort of the serious work.
But the main message that I want to get across
is you don't have to be the kind of person
who can build the system to ask these questions.
And oftentimes the relevant expertise
doesn't sit with the people who build the systems,
but it sits with the people who are existing
in the environment where the automation is happening.
But you can also have fun, right?
Right. And so, I mean, in that case, there's the different kinds of things for the individuals to do. There's the thing that we like to call ridiculous praxis, which is a thing that came
out of the podcast in which if there's something that makes no sense, that is absolutely ridiculous
on the face of it, you should make fun of it.
And I know you do this quite well online, Paris, and I'm sure listeners of your show do as well.
But I mean, if it just makes absolutely no sense, make fun of it. If somebody posts a silly,
ridiculous, cheap-looking AI-generated photo, troll the hell out of them. I would even do it for things that ostensibly look good, like this Studio Ghibli.
Ghibli.
I said it in a way on another podcast, and Emily trolled me.
It was Ghibli, Studio Ghibli.
So Studio Ghibli, I mean, make fun of them.
Say, hey, do you know how much energy it took to generate that?
And do you know that this many people did that? You know how much water that probably took?
I mean, it's worth really digging in. And also because it is not good quality. I mean,
it's what Tressie McMillan Cottom at the New York Times said. I know she had a column on it, but she posted on Bluesky, like, it's mid, it's bad looking, you know? I mean, it's effective.
I mean, there's a cultural weight to making it uncool, to making it seem out of fashion.
There's also strategic refusal.
I mean, just not using it.
I mean, if it's, I know there's pressures to use it from all corners, whether you're
a student or a worker or whatever, but if you make a principled stand, I mean, it does have an effect and it can
inspire and I think relate to other folks to do it. And then I think there's those collective
solutions. I mean, we talk about policy in the book and we're in a terrible policy environment
in the US right now. And the states are a little better, but, you know, we have our own battles in California and elsewhere, including this thing that might sneak itself in, I don't know when you're going to publish this podcast, but there's a thing that's under consideration in the U.S. House right now where Republicans want to prohibit states from enacting any kind of AI regulation for 10 years. And they're trying to sneak that in through the budget
reconciliation process, which means they don't have to get the necessary votes. They don't have
to invoke cloture. They'd only have to get 51 votes. And so there's those policy solutions.
But the collective ones, I think, are very powerful. So what are collective ways in which
we can push back, whether that's through our unions,
through our communities, through our schools and our clubs and whatnot?
What are ways that we can really lean into relationships and really make a stand and
say, we're not going to rely on these technologies in our cultural production, in our operations
work, and really pushing back can be very, very effective there.
And so I think the same way that people are relying on each other for things like
mutual aid and kind of mutual support, I mean, we can think about ways of doing that
as kind of consumers and workers.
Yeah. And I feel like given the state of things in the United States right now, that is probably
a really good path to try to go down, especially seeing politically and legislatively what might
be occurring. The ability to kind of get people together to use unions and other organizing
structures to push back against this seems really opportune. I just wanted to add that on the sort
of individual but also inspiring people around us point that there's this inevitability narrative
that we hear from the tech companies all the time, and that is a bid to steal our agency.
And so, you know, Alex said it can be inspiring to take a principled stand. And one part
of that is to say, no, I don't believe you. If you say, well, this is here, we have to learn to live
with it. No, right? There's nothing inevitable either in the development of something called
AI or AGI or in the sort of continued presence of this tech in our lives. And to sort of claim that space is really important,
especially as you're both saying, in this context where our policy environment is so terrible.
Yeah, really important point. And I was just going to pick up on a couple of the other things that you were saying, like in terms of being able to ask the questions, that's where that kind of
ability to have that critical thinking comes from, right? You need to know what to be able to ask,
and you need to be kind of trained to get an idea of how to interrogate these sorts of things.
And then, of course, on the strategic refusal, people are sometimes surprised when they hear I've never used a chatbot, or at least not willingly.
Sometimes I've had to engage with these customer service chatbots because that was the only option that I had, at least to try to get to someone human.
But I read how other people assess them and I don't feel that I need to go and do my own little personal assessment too. Uber is another company that I've never used despite how common
that is in society. So people can do these things and I don't think I'm changing the world by doing
that, but I just feel that that is a decision I have personally made. I'm totally with you. I'm
not using chatbots. I do my very best not to consume synthetic media. I sometimes get trapped,
like people will send me something and I don't realize quickly enough. And then I'm like, I've heard some of that, I don't want to listen. Or when NotebookLM came out from Google, there were a couple of journalists who each wanted me to comment on one of those fake podcasts.
And so I had to listen to them. But for the most part, I will not willingly consume synthetic
media. And people are surprised. And they like
to say, well, how can you possibly judge? I'm like, well, I can tell you a lot about how these
things work and about how we work with them. I don't have to subject myself to it to make those
arguments. Yeah. And I mean, I wrote a newsletter post that was something like, I hate having to
look and try to discern whether something is AI slop or not, it is not a useful skill so far.
It's useful insofar as this is something that I cannot trust, but I'm so upset that there's so much of this here on the web.
Now I have to use cognitive energy to discern whether this is an AI-generated image. And I have to then change my opinion of the brand because they don't respect me enough to actually put some effort into this or to hire somebody to do this.
You know, same thing with texts. I mean, you can kind of discern it after reading long enough. And I think this happens a lot when we get lots of podcast pitches, as I'm sure you
do too. And I'm like, this reminds me of synthetic text.
This seems like somebody did not do their work here.
Yeah, I'm sure so many of those pitches are generated and just sent out en masse.
I have no doubt about it.
We need to boycott Coca-Cola now, right?
For using generative AI in its ads and forcing that on so many of us.
I wanted to move to talk about legislation as well, right?
Because you talk in the book about how there are existing laws on the books that we can use,
but there's also new legislation that can really help to challenge AI hype in important ways. So
I was hoping you could talk a bit about the legislative front and how you think about that
in this moment, given what is happening in the United States. Just to back up a little bit, there's a few tropes that come from the tech lobbyists to the general public and to policymakers
that I think it's really important to push back on. The first one is tech is moving so fast,
policymakers can't possibly keep up. And related to that, this is something that only the people
who build the tech understand. That's absolutely false, right? The point of policy is to protect rights,
and that's not moving so fast. I mean, in some cases, we would like it to move a little bit
faster. But the question is not, you know, how do we regulate this technology? But how do we
protect rights, given changes in the environment that are due to the technology? And that starts
with how do we use the policy that we already have? And under Lina Khan, the US FTC was doing
some fantastic work, sort of predicated on the idea that their role is to regulate business activities. And if those
businesses are using automation to do those activities, that doesn't change anything.
So there's a lot that can be done with existing regulation. Another one of the tropes is that
this is a race. Usually within the US, it's framed as a race between the U.S. and China, very xenophobic
framing.
And if we hamper ourselves with regulation, then we're not going to win the race.
So that is one to push back on.
And then finally, there's this trope that regulation stifles innovation, which is connected
to the second point there.
And that's absolutely false, right? Regulation harnesses,
shapes, guides innovation towards the goals of the regulation. And the only thing that it stifles is
the ability to just like amass all the money, which kind of is the point, right? And you can't
use innovation to refer to that and be honest at the same time. So I think there is scope for protecting rights more thoroughly. I think the status of privacy law in the US is really not where it needs to be. And there's principles of data minimization that I would love to see made stronger, and to set it up so that anybody who produces a chatbot is actually accountable for the output
of that chatbot. Because a lot of our issues with this come from the fact that nobody has
accountability for what's coming out of the chatbot in most cases. There was the Air Canada
situation where Air Canada actually got held accountable for their chatbot. But for the most
part, that's not well established. I don't know if you saw, but there was a recent story, I think,
in the Financial Times that insurers are now offering a new type of insurance to companies when their chatbots make errors or mistakes or say things that end up costing them a lot of money.
Right.
Yeah.
Which is interesting.
I guess I like that it suggests that the company should be responsible for it.
But I think there's also interesting questions about, like, how do we allocate accountability? So if you think about the instances of the fake mushroom hunting books that get posted on Amazon and people actually
getting injured because they've followed that, it's like, okay, so how do you allocate the accountability among the chatbot producer, so, you know, whichever system was used to produce the thing in the first place, the person who used that to produce the book and put it on Amazon, and Amazon itself? Like, you know, sort of where is the accountability across those three sites?
I think that's an interesting place to be working in.
Yeah.
The state of policy in the U.S. is pretty rough.
I mean, on the federal level, it's pretty much stopped, right?
On the state level, there are a few proposals.
So, I mean, there have been some things. In the last California legislative session, there were some things that were targeting things like non-consensual deepfakes.
There was something that SAG-AFTRA was promoting in terms of preventing likenesses from being used that didn't prevent them outright, but that put some guardrails around that. An Assemblymember, Bauer-Kahan, has a bill called AB 412, which would at least allow copyright holders to make a request of AI model developers to get notice of whether any of their copyrighted works were in the training data.
And they actually have to respond within seven days.
If they don't respond, then they can file a civil suit.
So there's something.
I mean, it is something on the copyright front, which is, again, a pretty weak mechanism.
But it's kind of the only one in which artists have been able to file any kind of grievance with companies.
And you're probably quite aware of the suits that many people have taken, including the New York Times and the large lawsuit by Sarah Silverman and Jodi Picoult and Game of Thrones guy whose name I'm forgetting
right now.
George R.R. Martin, sorry.
But at the same time, you have this attack on the legislative front that I mentioned
earlier that the Republicans are trying to force through the budget bill.
There's also attacks on the California Privacy Commission right now, where they've gotten rid of the person who was the executive director and have installed someone who's much more
middle of the road, much more industry friendly.
And so you've got these things happening.
And so anything that's targeting AI particularly has been sort of defanged.
But as Emily said, there are existing laws on the books. And it's going to be kind of a matter of whether there are people in regulatory agencies who are willing to enforce them.
Yeah. You know, obviously I agreed with all of the proposals and arguments that you made
around legislation. The one that really stood out to me was thinking about not just data rights,
but minimization of data collection, right? Because we know that these systems rely on all of this data that has been created and that they
have kind of scraped off of the internet to create these models. And I think that's a real point to
think about pushing back on is just how much data these companies can collect, can create,
can store on us. And trying to push back against the power that comes with that is one thing that I
think would be a great thing to look more at into the future. But we're running close to the end of our
time. So I just wanted to kind of end off by asking you a broader point. You've put the work
into this book. You've been hosting your podcast and being really vocal critics on AI for the past
few years as we have entered this new hype cycle around generative AI. And so I was wondering,
you know, what you most want people to take away from reading this book, you know, once they pick it up.
That's a great question. And I want to have sort of a pithy answer to it. But I think that I want
people to take away that we all have agency here and that it doesn't matter if you are not a tech
developer. It doesn't matter if you feel like you're bad at math and computers.
Your expertise is still relevant.
You still have something to say.
And you still have, you know, ground to stand on for your own refusal and your own decisions.
I mean, my thing is that AI is not inevitable.
It's a tagline that we also have at DAIR.
And I think it's the one that this, you know, this book re-inscribes and provides a little bit more meat on how to push back.
And so I want folks to go away and know that
amongst the horrors of the world, AI surely is not inevitable, and as part of one's own kind of political action, you can incorporate this pushback as well.
It's been great getting to know both of you
the past few years.
I'm really excited for people to pick up the book, to read it, to engage with it. I know
it's going to be helpful for so many people in getting a good perspective of the critical view
on AI. Thank you both so much for taking the time to come on the show. I really appreciate it.
Thank you, Paris.
Thanks, Paris. It's so great to talk with you again.
Alex Hanna and Emily M. Bender are the authors of The AI Con. Tech Won't Save Us is made
in partnership with The Nation magazine and is hosted by me, Paris Marx. Production is by Kyla
Hewson. Tech Won't Save Us relies on the support of listeners like you to keep providing critical
perspectives on the tech industry. You can join hundreds of other supporters by going to
patreon.com/techwontsaveus and making a pledge of your own. Thanks for listening and
make sure to come back next week.