Tech Won't Save Us - Will AI Kill Your Job? w/ Brian Merchant
Episode Date: September 4, 2025
Paris Marx is joined by Brian Merchant to discuss whether the AI bubble is about to burst and how bosses are deploying AI tools to kill jobs and degrade work. Brian Merchant is the author of Blood in the Machine and writes a newsletter of the same name. Tech Won’t Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Kyla Hewson.
Also mentioned in this episode: Brian has a series called AI Killed My Job with existing entries on tech workers and translators. Brian encourages listeners to check out the work being done by the National Writers Union, Translators Against the Machine, and Lucile Danilov at Loc’d and Loaded. You can contact Brian directly by emailing aikilledmyjob@pm.me. A Stanford paper published this week explores the effects of AGI on employment.
Transcript
Amodei, Altman, all of the AI CEOs, they do want to create the impression that, like, a great disruption is coming.
That makes it easier for them to sell more automation software, right?
Like, it's a product in the array of products that they're selling: enterprise AI automation.
Hello and welcome to Tech Won't Save Us, made in partnership with The Nation magazine.
I'm your host, Paris Marx, and this week my guest is Brian Merchant.
Brian is the author of Blood in the Machine and also writes a newsletter under the same name
where he's recently been publishing a series of essays, articles, under the name AI Killed My Job.
Now, you might also remember Brian from a podcast we were doing together earlier this year called
System Crash, or just from hearing him on the show before we were actually doing that show,
in the past number of years. But since Brian has been talking to so many workers about the impacts
that generative AI and the rollout of those tools in various companies in many different sectors
are having on their professions, on their work, on basically the sectors that they work in,
I figured it was a good moment to have him back on the show so we can discuss, you know,
not just the effects that generative AI is having on work, but also this broader narrative
of AI that we've been seeing over the past number of years, these questions that many people are
posing now about the state of the AI bubble and the AI hype that we have been experiencing
and whether we're finally starting to see it deflate, you know, I think it's still very much
an open question and we'll have to see where that goes. But also to try to understand, you know,
what the longer term consequences of generative AI might be, even if this bubble does eventually
burst, that doesn't mean the technology is going to disappear. There are
still kind of remnants of the metaverse out there. Of course, cryptocurrency has turned into a
political force. So what might happen with generative AI in the future? So I think there are a lot of
interesting discussions that we have in this show. And of course, you know, you'll hear that we get on
pretty well when we're talking together because we've known one another for years. And we also
hosted a show together for quite some time. So I have little doubt that you're going to enjoy this
episode with me and Brian where we dig into these issues that we've both been paying so much
attention to and writing so much about over the past number of years, but also to look specifically
at the work that he has been doing recently. So if you do enjoy this episode, make sure to share
the show on social media or with any friends or colleagues who you think would learn from
it. If you want to support the work that goes into making Tech Won't Save Us every single week,
so I can keep having these critical in-depth conversations that help you better understand the technologies
that pervade our lives and the tech industry that pushes them on us, you can join supporters
like Christopher from Stockholm, Lavinia in Geneva, Switzerland, and Steve from Indy by going to
patreon.com slash tech won't save us where you can support the show as well. Thanks so much and
enjoy this week's conversation. Brian, welcome back to Tech Won't Save Us. I have to say,
it's just, it's such an honor to be here. I'm just such a fan. I'm listening to the podcast
for so long. I was just hoping one day that I would get this invite to join the show and spend
some time with you, my favorite podcast host, Paris Marx. I appreciate all this high praise coming
from you, a person who I don't know very well, coming onto the show like this.
Right. Practically strangers who didn't spend most of a year talking weekly for hours at a time.
Yeah, I was just talking to myself, as we both know, right? Just with an AI filter on my voice.
That's right. It has never been proven or disproven. So whoever it was in the Tech Won't Save Us
community that put that theory forward, I just say you haven't been proven wrong.
Definitively. I might not exist. Yeah. So, oh, it's good to talk again, Paris.
It's good to... don't you miss this?
Oh my god, absolutely. You know what I do miss, like, for what, nine months, eight months or something, we were, like, chatting every single week, and every now
and then I'm like, man, haven't heard from Brian in a little while, wonder how he's doing.
I know. It was a nice way to, it was a nice way to process all the shit that was going on.
Unfortunately, it was also a lot of work producing these things.
It was my, you know, first time podcasting.
So now I know what goes into the sausage a little bit.
More respect for the podcaster community.
Yeah, more respect.
And I'm sure there's some overlap between, you know,
Tech Won't Save Us and System Crash or maybe like a 100% overlap.
So to all those who've been asking us if we're going to come back, or saying, we miss the show, I just say, we thank you so much for the kind words and for reaching out, and we still don't know. It was honestly just, like, we both have so much going on. And Paris, what, are you halfway through writing a book, or how far are you?
Almost done... about a bit past a quarter, I guess. Yeah, slowly chipping away.
The first quarter is the hardest quarter.
Yeah, and then the ball's rolling. It's like you've got momentum going downhill.
Totally. Like we were saying before, you know, we got on the pod, like, I'm feeling good about the momentum and where things are going. Hopefully people are going to like the book.
That's no small thing.
Anyone out there who's tried to write a book, it can be very daunting. You're just, like, wading
through a swamp of words and it's hard to corral them into any meaningful sort of shape or
direction. And so I am not kidding when I say that first quarter is probably the hardest quarter.
Even just getting into it entirely, like, just trying to get myself in the headspace to be able
to start writing, was, you know, daunting in itself, right? And took me weeks.
Yeah, and now is when I should probably say that I am ghostwriting the whole thing.
So I, you know, it's really Paris on the podcast, but it's-
Brian, we weren't supposed to tell anybody that.
But it's really me writing the books.
So I'm writing Paris books.
He's being me on the pod.
I'm getting to pretend to be stressed out.
Yeah.
Well, you do.
But no, it's like you're saying, it's great to have you back on the show.
You know, now that we're not doing system crash, I'm sure you'll be making more regular
appearances back on this show as we talk about what you're up to and, you know, these big issues that we're both talking about, now that, you know, we're not talking every single week.
Yeah, if my, you know, if my schedule permits it. I'm pretty busy these days.
You're right, sorry. I know you're in really high demand, and, you know, just little old Tech Won't Save Us might not be able to catch someone like Brian Merchant too often, but we appreciate when we can get your time, you know.
Of course. No, always, always a pleasure to be here.
But you have been doing a ton of writing and reporting. And, you know, you were doing it while we were doing
System Crash as well, on AI and the broader effects of these things. And you have this great
series that you have been writing called AI Killed My Job. And so I wanted to talk to you more
about that. But if we're talking about AI, I think the big thing that we have to start with is
obviously all of this discussion that we've been having about an AI bubble for the past little while,
right? I think it's pretty well established that these AI companies are overvalued, that they are
making claims about their products that are not really supported by what the products are actually
doing. And it feels like in the past few weeks, we have reached this point where it's kind of been
like, okay to acknowledge that there is an AI bubble and to question whether that bubble is
finally going to burst in the near future. You know, we've seen some difficulties in the stock
market. Different companies have been pulling back on certain initiatives and things like that.
Obviously, we saw Sam Altman come out and basically acknowledge that AI is in a bubble. So I wonder
what your vibe is at the moment and how you're feeling about where this kind of AI market,
where this AI bubble, where the AI hype is in this moment.
There is a lot going on, and I think there are a few major developments that have sort of
changed the conversation, perhaps permanently.
Number one is that when GPT-5 came out, which was this long-awaited,
mega-hyped product from OpenAI, that was supposed to be sort of
like the next incarnation of almost AGI, or artificial general intelligence.
It was supposed to be this amazing transcendent moment.
Yeah, supposed to be this, like, massive leap that Sam Altman has been talking about for, like, months and months and months, for two years almost, because, you know, it was GPT-3 when ChatGPT first came out and kind of made its first splash, and in short order, they went to GPT-4.
And then sort of everybody in the AI community was like, okay, well, GPT-5 is going to be the one, because from three to four the performance of the models was much better, and it just kind of felt more like an actual artificial intelligence in terms of a product and interactivity and all that.
And then with five... you could kind of, now in hindsight, it's pretty clear that the question constantly haunted them:
like, is this going to be enough?
And it seemed like the answer was always no.
So as they would iterate and, like, release new models, they started to get into, you know, the point-fives and then, like, the letters, and then 4o, and then Orion or whatever.
And it was a very incremental progress.
And that sort of complicated matters, we can see now, because if you release an incremental product update and then say, this is, you know, the next step toward AGI, then people are bound to be disappointed, which is exactly what happened.
So my sense is that Altman and his C-suite at OpenAI were kind of just like,
well, we've got to pull the trigger sometime.
It's only going to look worse if we wait another year or whatever.
And it was, like, kind of this safe distance from when they secured their last round of
mega funding from SoftBank.
And it was just like, okay, maybe we can just
release it now and maybe we can get away with it. And they couldn't, right? It was like users had
already sort of baked in a host of assumptions. Other users were quite sort of addicted already to the
previous iteration of the product. And it was, on the terms that OpenAI set for itself, sort of just
a failure. And that's what I think is important. Because some people are saying, like, oh, like,
it's amazing. And the critics are, you know, being too harsh or whatever. But I'm, I'm judging
this by the terms that OpenAI set out for itself.
And you can look back at Sam Altman's own comments, which he published on his blog just
in February, where it's like, we're getting close to AGI, right?
Like, it's in the air.
Like, it's going to be very close.
And then what happens after the launch, six months later, of GPT-5?
Suddenly, AGI is not really a useful term anymore.
It's not a super useful term to quote Sam Altman.
I was like, are you kidding me?
Because you have been banging this drum, and maybe it's not super useful for you right now
because you're going to be criticized about it, but it has been incredibly useful to you as a
fundraising tool, as something to tout as you go to Microsoft or go to prospective enterprise clients
and say, AGI is around the corner, we're building it, give us $10 billion or, you know,
invest in this next round, or, you know, buy a suite of GPT for business or whatever.
And so that disappointment, I think, finally solidified the fact that the level of improvement is not going to continue.
I mean, we can debate the actual sort of benchmarks, or how well the model did in this context, or what it's good at.
But the bottom line is that, like, you know, critics, folks like Gary Marcus most notably, probably, have been talking about
how just scaling, which is just feeding more and more data into the systems, into the models,
had hit a limit.
And now it's pretty clear that he was right about that.
And that means that whatever else happens, it doesn't mean that, like, you know, AI isn't
going to be able to do interesting or different things, but it means this model where you're
just getting more and more and more data, getting the LLMs to train on more and more and more
data and then to produce output based on just more, more, more, more, that ethos has
sort of reached its limits. And there's going to have to be new interjections of symbolic reasoning
or different configurations. There's going to have to be something else. And so that, I think,
has permitted the business press, the tech press to sort of take stock of what's actually
happened on the ground so far. And I think it's also worth noting that into this sort of environment
came this study from MIT that showed that 95% of businesses that have adopted AI have,
you know, essentially struggled to do so and have not shown major gains.
And so you have like, well, the business case is iffy.
The model improvement is iffy, has slowed down.
And the future all of a sudden seems very uncertain because as listeners of this pod know that like this is an incredibly capital intensive technology where it's not just like, oopsie, this didn't work.
Let's try something else.
It's like you have already sort of baked in massive contracts with data centers, with cloud
compute providers, chip purchases from Nvidia, where like it really, really matters.
Because again, it was all predicated on scale.
It was all predicated on scale.
And so you have to take a hard look at the through lines.
And so now we're at this sort of cloudy moment where it's like, oh, wait a minute, Meta, which
was just, like, a month ago or even weeks ago paying $100 million signing bonuses to get AI researchers away from OpenAI, is going,
like, actually, maybe we're going to pause our superintelligence team, as they're calling it.
Yeah, we need to reorganize in this moment.
We need to reorganize.
I think that what you're saying is so important to understand, right?
Because the big assertion around generative AI for so long has been like, it's on this
exponential curve like so many of these other like tech products, right?
And so if you have GPT-5 come out and it's not showing that, then all of a sudden, like, the whole thing that this whole boom, that this whole market, that this whole like supposed business and business venture is built on is being called into question because these products are not actually getting so much more powerful on this exponential curve.
It's like, okay, you had this moment where it came up, but now it looks like we're on the S-curve that we've seen with, you know, AI for so long, where you're going to have this advancement and it goes up, but then it plateaus again
for a long time until you, you know, maybe 10 years down the road or something, there's this
next development or this next kind of series of research that results in this next level of
advancement. And then on the narrative side of things, it's like, like you were saying, you know,
you have this MIT study, you have just the general things that these companies have been saying,
the way that GPT5 comes into all this. But then you also have these increasing like stories from
employees talking more about how AI is not making them more productive, is not making things
better. And I think the big thing, like the past couple months has really been like just the
growing wave of these stories about the mental health consequences, about people committing
suicide after having talked to chatbots and gotten really concerning dialogues from
them, where the chatbots are basically egging on their suicidal ideation. And, you know, there's even this
story about this guy who's, like, high up in OpenAI who is apparently kind of having mental
health consequences as a result. I don't know the best way to describe it, but, you know,
it feels like kind of the stories about the health and human consequences of this technology
are just growing so rapidly that, you know, more and more people are like, what is going on here?
Yeah. One thing that has always been sort of unique about the AI boom is that it has sort of
required all this forward motion, all these promises of AGI and things like that to sort of overtake
any critical backlash or introspection.
Because it's always been there.
From the beginning, this is a technology
that has never been...
sort of, a majority of people surveyed, for instance,
have never said, I love this technology.
From the beginning, and up until recently,
you know, Pew has done polling,
TechEquity has done polling in California.
There's lots of polling,
and time and time again,
you find that consumer sentiment
and worker sentiment is more negative than positive.
People are more concerned than excited, by significant margins, over AI, and they have been.
And that's always been, as even some industry insiders have pointed out, kind of the risk of touting this technology as so powerful, right?
Like the doom hype was, I think, taken as a tactic because it was working for a while.
But now, as you're saying, we might see some of that sort of backlash come, because it used to be like, well, if this technology is so powerful that, like, it's going to take over the world,
at least it'll help me sort of replace my workers,
or at least it'll be sort of addictive to users.
So we all better invest in it.
And then if that pitch can't bear any fruit,
if it turns out like, well, actually,
it's just like a semi-successful automation technology
that is only useful in a few key contexts
and with a lot of oversight and work,
and we have to hire other people to make sure that the AI works,
and then all of a sudden that whole calculus is thrown out of whack, and those criticisms can then sort of shine through and take up more of the
space. And I think you're right. I think we'll start seeing that happen more, and, you know,
those criticisms are very real. And that's also not to say that, you know, those criticisms haven't been more
developed and become sharper over the years as we have more data to point to. More kids whose
lives have been ruined by, you know, AI addiction. More educators just completely exasperated by the way
that AI has sort of taken a wrecking ball to the classroom.
And then, yeah, what we can talk about, which is labor.
Before we pivot to the labor question, can I just ask you one final thing on this AI bubble?
And then we'll get into your series in the work that you've been doing on that.
I wonder how you feel about the state of that bubble at the moment.
Because for me, I feel like there are certainly questions in this moment, right?
Questions that are being asked much more publicly.
There is clear evidence of the vulnerabilities and the problems and the kind of lies that this market, that, you know, the valuation of this technology, was built on. I think there's still energy and incentive to try to keep this bubble inflated, not just from investors. But for me, I always look at how the technology has kind of become this geopolitical football, where you have all these countries trying to pretend that they are going to be leaders on AI too. And I feel like even if these vulnerabilities are becoming clearer, these issues with the narrative that the valuations were built
on, I still think it's entirely possible that the bubble remains inflated, at least to a certain
degree, because of that kind of geopolitical aspect, the aspect that is beyond the business
case. But I wonder how you kind of feel about where that stands at the moment.
Yeah. This is always kind of been my sense as well that I think for a number of reasons,
AI is essentially at this point, you know, too big to fail. And I, you know, I've actually,
I've had some really, really good arguments about this with folks like Ed Zitron, who, you know,
can also persuasively make the case that there's a house of cards quality, especially to a lot of the
companies, and that once, you know, things start going south and investors pull out, you know,
a lot of what the AI companies are doing is unsustainable. And I, you know, I think that's,
that can be persuasive too. But we've already seen precisely what you gestured towards, which is that
we're already in this new kind of era where there is a new sort of formation of Silicon Valley
and the federal government in the U.S. especially, this new sort of Silicon State here
that is still much more maybe insulated from a lot of like sort of, you know, the market activity
and that the state can do a lot to prop up a company, as we're seeing, right?
Like, we're seeing, like, the state is taking an actual stake in Intel, for instance.
It's making weird deals to exempt Nvidia from export controls.
And it has, you know, close relationships with a lot of the AI companies and their, their architects and their executives.
I will say, it was interesting to see the exemption that Nvidia got.
And then, like, Howard Lutnick basically turned around and made some comment on, like, how they were going to
have to treat China differently or whatnot.
And China, like, immediately was like, yeah, we're discouraging anyone buying these chips
at all, even if they're now available.
Even if they're for sale.
Yeah.
Yeah.
Which is to say that, you know, I think, so that's on one layer: the state,
whether it's through direct contracts, whether it's through actually taking a stake in a company,
which is, I mean, that's interesting.
That's not really something that I necessarily would have put on my bingo card,
seeing, like, Trump want a 10% stake in Intel or anything else.
But now, you know, we have to understand, like, how interested the state is in AI.
This is a point I think we made on system crash, but I'll make it again here.
And that is just like, everybody should be asking themselves why it is that the one technology,
for all intents and purposes, the one non-overtly military technology that the Trump administration is
interested in is AI, right?
He's cutting subsidies for clean tech, gutting electric vehicle supports, pulling the rug out
from under, you know, health sciences and investments in vaccines and things like that.
And yet here we're like AI.
We're pro AI.
And that's because it's so well suited to be a technology of control, of domination, of surveillance.
It is, it can produce shitty propaganda that the White House can put on its Twitter feed.
It can, you know, be pumped into government agencies in hopes that it can do the jobs of fired public servants, and all the while sort of concentrate control under a fewer number of officials and sort of allies.
And it can be a tool, you know, at least the way that Silicon Valley has pitched it, for military might on the geopolitical stage, as you said, whether it's to conduct
sort of hacking and malware attacks or, you know, help guide more conventional weapons or do
target selection, as we've seen the IDF do in Gaza. So the state, at least right now, is also
quite invested in AI. The American state is. So that's one factor. The second factor is like
whether or not this is comparable to the dot-com boom of 20, 30 years ago,
or AI is in any way, shape, or form, the next internet or anything.
I do not think that it is, of course.
But that's what the industry is treating it as.
This is their idea.
Everything is the next internet.
Crypto's the next internet, Web 3, AI is the next internet.
But more than crypto, more than the metaverse, more than NFTs, there has been a convergence
on this, and investment in this idea that has kind of made it the only game in town.
And that's not to say they can't, you know, scatter to the wind and pivot away or try to afterwards.
But that's number two.
Silicon Valley, I think, is too sort of invested in this idea and propping it up.
So I think we'll see some interesting things happen if and when that bubble or rather when that bubble does start to burst.
I think it will burst.
What happens next will be the interesting thing, whether there will be government intervention, or how the companies will react, and the scale
of that bursting.
And number three, and I think the dark horse factor here, is just that it's such an alluring
idea for the clients of this technology to have a tool like AI that can automate
away labor and surveil the smaller workforces that remain, in theory.
It's a much more appealing pitch than crypto was, where, you know, if you're,
whatever, Walmart, you're looking at crypto and going,
like, how does this... I don't care, you know, how does this affect me?
But, you know, there's a reason why, like, almost every organization has been, like,
how do we do AI?
Like, how do we get AI?
Every, like, CFO in the world has been like, all righty, bring on the AI.
You know, let's cut labor costs here.
So I think those three factors are going to make this uniquely sort of resistant to a bubble.
It also might make it all the more cataclysmic if and when that bubble goes full burst.
No, I think you've put that so well
And I think it pivots us really well to start talking about labor
But there's one thing I want to tell you
Before we start talking about your labor reporting
And what you've been hearing from workers
And what you're seeing
You know, obviously on system crash
We used to talk a lot about what's going on in Canada
And you know Canada has a new AI minister
I'm sorry, remind me, what is that? Like a city in Europe?
Yeah, that's your 51st state.
Don't you remember?
Oh, that's right
Yeah, that's right.
Not Washington, D.C.
It's Canada.
But so the AI minister gave an interview the other day, and he was like, there's this
bill from the old parliament that I needed to understand.
And so I ran it through Google Gemini, and I had Gemini make a 15-minute podcast for me
about the bill to explain it to me, and I listened to it on the car on the way to work.
And it was fantastic.
Let me pull it up and let you listen to it.
It was great.
And I was like, man, tech policy in Canada.
is so fucking screwed if this is what we have.
It's bad up here, man.
It's bad.
Not as bad as down there, I know, but, like, it's not good.
I think that's explicitly... like, there's a Bloomberg profile of, like, Satya Nadella, I think.
And that's what he said that he did.
He would, like, download books into chatbots that he could then talk to and ask questions about.
It's just, like, it's so deranged.
Like, what even is that piece of information anymore?
it's just it's like already been you know regulated and processed through the mass of human
knowledge into something that like could probably not even be discernible like as the book anymore
so you're just like you're just talking with nothing it's just like someone might as well be just
like blowing hot air on your face like it's just so ridiculous but yes you know the UK Canada
like I continue to be astonished by the openness and the eagerness that you know states around the
world have, you know, not all, not all. There's plenty. There are some good, you know, exceptions
who have basically said fuck off. So, but yeah, it's Canada screwed, UK screwed. We're obviously
screwed. Man, the Anglophone world. We're, how a mess. It's unfortunate. But yeah, let's
pivot and talk about your labor series because, you know, I think you set us up really well to get
into it. You know, you have this series, AI killed my job where a bunch of people have been sending
in their stories about how they're seeing AI affect their professions, their workplaces,
what the effects of that are. Of course, you've published two pieces, you know,
kind of directly telling those stories so far. And, you know, I know you have more kind of
in the pipeline as you're going through more of these things. And for me, it's been really
fascinating to read through that and to see the types of things that people are saying about
their workplaces, about their work. Of course, you know, the ones that you've published so far
on tech workers and on translators. But even then, I think there are so many things that just feel
so much more like broadly applicable potentially of the things that they are talking about,
even going from some of the things that you were just saying.
So I guess to start, like, how did you decide to do this series?
And was there anything that you were surprised about when you started to get these stories
coming in from people telling you about what was happening in their work lives?
Yeah.
Yeah. So, you know, I'll also just add a note to say that I think that the talk of the AI bubble, and some of the
mythology and the more sort of pie-in-the-sky-ness of the AGI conversation beginning
to evaporate, kind of allows us in a lot of ways to see generative AI and the generative
AI tools being sold by these companies for what they are, which is either, you know,
socially mediated kind of entertainment products like chatbots that people talk to,
or it's enterprise workplace or personal automation.
It's sort of, you know, souped up productivity software.
And so, you know, with that in mind, I mean, that's kind of how I've always approached
generative AI, as you know.
That's, I think, to me, the most useful way to look at a technology like this that is
being sold as something that is going to disrupt the workplace or transform work
or, you know, beget a jobs apocalypse, in the words of
some of these AI CEOs, is to just, like, look at history and look at all the times when similar pronouncements have been made and other technologies have, you know, entered into working life.
And so the best way to do that is just look at the material conditions on the ground and who's doing the introduction and the adoption and how it's changing.
So the AI killed my job series came about because I spent a lot of time talking to workers.
and I have since the beginning.
And part of that is just because that's sort of, like, where my beat has naturally been.
Before it was AI, I was talking to Uber drivers, talking to Lyft drivers, and Amazon workers,
and trying to understand what was happening on the ground, on the other end of Jeff Bezos's
or Travis Kalanick's pronouncements about the transformation of the workplace or the future of work or whatever.
And it seemed especially acute to me that during the AI boom, so few people were really just going right to the workers.
So like, okay, great, Dario Amodei from Anthropic says that, whatever, 10% of all jobs are going to be gone, maybe half of all jobs for young college grads.
So, okay, great, he's a CEO selling a product.
So what's actually happening?
Like, where is the technology actually, like, hitting the pavement?
and that's usually you can find that out by talking to the people that these tools have been thrust upon
or that are using them voluntarily or that are parts of organizations.
And when you say, when you talk about the tools there, do you mean the AI products or do you mean the executives?
Sorry, I couldn't help myself.
They're both tools in their own way.
Yeah, I'm finally reading Why We Fear AI by Hagen Blix and Ingeborg Glimmer, and an important point that they make is that, you know, these companies and the AI salesmen are all just being motivated by the same capitalist forces that are animating the whole industry. In a sense, they have to say, you know, like, our product is going to put even more people out of work than yours. And then it becomes a sort of arms race to see who can scare people more. But I digress.
Another point that I want to make without digressing too much before we talk about the workers
is that talking about AI and labor replacement or a job's crisis is so fraught.
Because on the one hand, Amodei, Altman, all of the AI CEOs, they do want to create the impression that, like, a great disruption is coming.
That makes it easier for them to sell more automation software, right?
Like, it's a product in the array of products that they're selling: enterprise AI automation, productivity software.
And I think there is a tendency, even on the left, to sort of, like, push back completely, saying, like, this stuff is bullshit.
It sucks.
And giving that idea any credence is just, like, playing into these corporate narratives.
And then some of, I think, the better critiques, like from Aaron Benanav, who's great, look at the sort of middling results of previous industrial, you know, mass-scale job scares and say that it just isn't borne out. I do want to caution against minimizing
too much. And I think that's part of this project: in no way are we going to see anything like a mass-scale jobs apocalypse. It's not going to happen. The AI is not suited to do enough
jobs. It requires too much oversight. It's too expensive. But that said, there are still a lot
of use cases where management can either use it as a tool, use it as leverage to immiserate
or, yes, sometimes even replace workers or freelancers, especially where there are workers in
more precarious conditions. And so I do think we want to be careful about swinging the pendulum
too far the other way, because I have been talking to probably hundreds of workers at this point. And that's obviously not, you know, a meaningful sample size if you're looking at the global economy or the American economy. But it's enough for me to get a sense of what I think is
happening in particularly sort of vulnerable industries where executives can use AI maliciously or
aggressively to cut costs. And in a lot of cases, it can still be very pernicious in the way that it is
used to sort of reshape a job or to take away parts of a job that people think are meaningful
and replace it with, like, button pressing. Or where, like, your job used to actually be to translate the text, for example; now, because some, you know, person in middle management was susceptible to a pitch from some tech company, the translation is outsourced to a machine, but you still need a human to go over the output and correct it. And as I heard over and over in my survey of translators, that job was sometimes just as time-consuming, but it's just far less fulfilling. Like, you're not actually doing the translation,
which is considering meaning
and considering context and place and person
and painting a picture of a game
or a piece of art or prose
and then translating that.
Instead, you're taking the automated output
and trying to see if it lines up
because somebody somewhere on the supply chain
got convinced that that's more effective and it can save the firm a few bucks.
So there are a lot of impacts like that that are still rolling out and that I'm hearing about.
So, yeah, I really just wanted to hear from the workers.
I guess that's a long-winded way of saying that, like, amid all this corporate Silicon Valley AI speak, I wanted to hear: how's it playing out on the ground, for the people who have to deal with this stuff every day?
And, yeah, I decided to sort of separate it by industry for now.
I might do other things as I move along.
I started with tech workers because they're in a very interesting place.
It's one of the more interesting ones because, you know, management at a lot of tech companies is the most sort of gung-ho and aggressive about deploying AI.
And then so it becomes interesting to see like, you know, the disparity between an AI loving
executive and a senior software engineer who really knows what they're talking about and is
just going like, I can't believe we have to use this stuff or, you know.
And so a lot of great stories came from that one. A few where it was, you know, we think that the AI boom sort of convinced
our executives to close down a department or to fire me as part of layoffs, as part of an AI-first strategy or something. But more often than not, yeah, it was like, I work for Google. I've
worked here for a long time and they're automating sort of like the AI generated coding process
and they're just injecting it directly into our code base. And I think that's a disaster
waiting to happen. That was a really interesting one. You know, stories like that. I completely hear
where you're coming from with that kind of distinction between what jobs are being destroyed and kind of
what jobs are being transformed and often transformed in a way that is degrading them, right? Making them
more precarious, lowering the pay, making the work more frustrating to have to deal with, I guess, maybe. Like, I think that these are important things to understand. And when the executives are
just coming out and talking about productivity and replacing jobs, like,
the actual material consequences of that can be abstracted, right?
But when you go in and actually talk to the workers, you can see what is happening there.
And my view has always been in part informed by like the last AI wave.
I've talked about this in the past that the actual job destruction is often minimal
and is often focused on particular tasks.
That's my view.
And what we see much more of is the use and kind of the weaponization of these technologies
by bosses, management, and executives,
you know, things that you've, of course,
written plenty about through your career
in order to try to change the work
to make it so that workers have less power
so that they're being paid less
so that they have fewer abilities
to really intervene in the work process.
And I feel like this was something
that really came out in some of those stories
that the translators and the tech workers
were talking about,
where it really felt to me like some of them were saying,
like, you know, in the translators' case, that with OpenAI and with LLMs, the quality of the translation has not actually gotten significantly
better, but it seemed more like the hype of the past couple years provided a justification
for a lot of these companies to adopt and roll out these tools and change the profession of
a translator in a way that wouldn't have been justifiable in the past. But because of the hype,
it was now okay to do it even though the quality wasn't there. And that to me seemed like a really
significant kind of bit of detail to come out of these things that you were talking about. So I wonder
how you reflect on that piece of things after, and the way that executives in particular have been
able to take advantage of this after talking to the workers about how they have seen it actually
play out in their professions. That's absolutely correct. As I put in a previous piece,
one that actually helped spur this project, where I spoke to a laid-off Duolingo contractor after that company pivoted to AI. And this was at the same time that the DOGE house-clearings were in full effect. So I wrote a piece that argued that the real AI jobs crisis is sort of the cultural logic that it allows executives to embrace and to impart onto their organizations.
But it's less that, you know, AI can actually do any one of those jobs of the civil servants that have gotten laid off.
It's just, it provides sort of like the window dressing, the cover, the idea, like the futurity, necessary to at least sort of gesture towards this concept of replacement, that there's going to be the same level of functionality even after these people are gone when it's just really what management wanted to do anyways.
And I think there are some cases.
The Duolingo case is interesting because I think this is just one of those guys; it really seems like he either genuinely believes in the hype, or maybe he had been itching to get rid of all of his contractors for years anyways.
I mean, we can't know.
He certainly seems very credulous about the capacities of AI, or at least he did, until everybody
sort of revolted and started pushing back.
But I think that's a lot of it.
But there is this layer.
So, like, I also don't want to minimize the experience of the people that have said, like, my work is gone, right?
Like, my work is gone.
It's dried up.
Like, I, that cultural cover allowed my boss to select the good enough option, which is sort
of, you know, auto-generated code in the case of the tech workers that, you know,
in some cases can be good enough.
Again, if you have somebody, maybe you can hire somebody who's less expensive to sort
of spot check that output.
Or in the case of translators, maybe it's just like, you know what, consumers of this particular
like Japanese video game
that we're putting out,
maybe they don't need a good translation
or maybe this is good enough.
They can still get the gist and they'll still,
you know,
and so it facilitates like those tradeoffs.
And it does sort of,
again,
provide cover to management to make these decisions.
And because, like, ultimately, you've just got to remember: AI in these contexts is an automation technology.
At the end of the day,
it's just like it's what management chooses to
do with it. And usually that is just, again, yeah, squeeze, surveil, control, or replace tasks or jobs that management thinks it can. So it's going to be deployed in those same contexts
that automation technologies always have been. There's nothing particularly mystical or, you know,
befuddling about it when you actually get down into the details. As such, it still stands to be a
pretty potent force because it's been imbued with these properties, right? Because that cultural logic has become powerful enough. And I think that, going back to what we were talking about at the
top about the bubble, one really interesting thing to see will be whether or not that sort of
wipes away some of the eagerness to use this as an automation technology. Is it going to be like,
oh, like actually we were overzealous on this. Maybe we, maybe it can't do everything that
we were sold on it being able to do. And now we have to change tack.
or there's a potential fork in the road where it's like, well, we've sunk all these costs
anyways. Everybody's just going to have to deal with subpar output, subpar cultural products,
subpar customer service experiences. And AI is going to win the day because it's a little bit
cheaper and we've already bought the enterprise contracts. Yeah, or in the case of a country like the UK, apparently they're looking to just buy an OpenAI subscription for, like, the whole government or something. Like, it's wild. But I think that's really interesting, right? Because as I was reading, like, the translation piece in particular, I was also thinking about how I have seen this being used in, like, other parts of the world as well. Like, I spoke at an event in Amsterdam earlier this year where they were using basically AI, like, live translation of speakers, and I was like, this is weird. Like, who knows what that thing is, like, claiming that speakers are saying when they're on the stage. But again, like, it's in place of where in the past you would have someone actually, like, doing that, right? Or maybe you just wouldn't have it at all. I don't know. And then I was speaking to some publishers who operate outside the English language. And they were, like, frequently using ChatGPT for, like, correspondence to English speakers and stuff like that. I don't know. I, like, gave them a bit of shit for using ChatGPT. But I also, like, kind of understood it. Like, you know, if English is not
your first language, that makes it a lot easier to potentially converse with people
outside of that. And so, like, I feel like I've been picking up on a lot of how I'm seeing
people using these tools and normalizing these tools and, you know, not super comfortable
with it, obviously. But, you know, sometimes I feel like it's a bit more prevalent than I
expected it to be. And that makes me wonder, like, what kind of the, you know, say, post-bubble
burst kind of use cases of this technology are going to be. I mean, it is pretty pervasive.
It's being used by hundreds of millions of people every week.
There's a pretty big user base, I mean, which is going to open up a whole other can of worms
because a lot of those use cases are extremely unhealthy and concerning.
I think it was, like, Harvard Business Review that did a survey of, like, the most common AI uses. And right at the top was, like, therapist.
People were, you know, treating it as an AI therapist.
And, you know, that's just, like, red flags just kind of came tumbling out of the sky for me on that one.
But yeah, it is terrifying.
So it is. And look, like, I think you don't have to deny that there are some genuinely, like, interesting contexts and use cases.
I remember when, like, you had computer vision that could, you could, like, take a picture
of, like, a road sign in a foreign country, and then it could translate that.
And, you know, that was something where you could try to ask somebody what it meant, but a lot of the time, you know, maybe you can't find somebody who speaks the same language. So there are, like, utilities where people are finding it useful. There's also people just, like, falling into the trap because it's so much easier for them to do this than, I mean, famously, homework, right? It's so much easier to just, like, have ChatGPT generate answers for you than to actually do it. Oh, I definitely know people who turn
to it to answer like any number of questions instead of just having to think about it themselves
or even turn to Google as maybe they would have done in the past. And now it's just ChatGPT instead, right?
Yeah, I mean, absolutely. I think some of those use cases will be, like, filtered out. Some of the mass automation stuff will eventually be filtered out.
I mean, I think in other cases we're going to be stuck with hard questions about, like, what we're going to fight for. Because there's that famous line about how AI and technology were supposed to, like, automate the dirty work, doing laundry and dishes, and give us time to do art and music, and instead it's automating art and music and forcing us to spend more of our time doing the grunt work. And, like, the impact on creative industries is something that's just going to have to be
negotiated over, fought against. Same with, I think, translation. Like, do we value translators? I think we do. I do. Like, doing this piece really has underlined my sense of the importance of this work. And in this sense, I'm really grateful for having done this piece and really thinking about, like, oh yeah, how many translated works, you know, books, have I read over the years? A lot. You know, like, how many translated documents? A lot of them. Now recognizing sort of the art and the labor and the toil that goes into that process, and really
spending time with folks
and their stories
who love that,
who love the act
and the art
of taking something
that somebody else said,
thinking about it,
contextualizing it,
and then making it accessible
to a whole other culture,
creating an intermediary
between cultures.
This stuff is so important.
I mean, it probably wouldn't
make the top 10 list
of things most people are concerned about
in AI or automation.
But now,
you know, this, the prospect of automating that process feels incredibly sad to me, you know,
instead of actually, you know, humans putting cultures in touch with one another and negotiating
those meanings together, discussing them, and sort of ensuring that things are as best accounted for,
the details, the nuances, the color, the, you know, you name it. It's all intact. You know, I feel like
something really stands to be lost if we just automate these processes and it's like in one
side out the other and we just have these like tubes of content production that are going each way.
And I want to take this opportunity to shout out some of the groups that are kind of standing up and trying to fight against this. And I would love to spend more time talking about them. There's a group with a name after my own heart, Translators Against the Machine. They're a group
that's sort of gathering stories and data about what it's like to work in translation right now
in order to sort of build solidarity and to fight the encroachment of tech companies into
their professions because it is really important, I think, what they do.
And it is one of these areas that I think Silicon Valley companies do stand to sort of grind
away, you know, whether just for a few extra enterprise automation contracts or just as sort of like
a thoughtless byproduct of this rush to build and release these products.
And there's also, if you're a translator who's worried about or interested in organizing
around the impacts of AI, the National Writers Union has a translator's organizing committee,
and you should check them out at nwu.org slash chapters slash TOC.
So there are folks who are out there doing some stuff about this.
And I think it's a space, like I said, that it's not going to go away.
If the AI bubble bursts, there are still going to be these automation products that are widely available and in use.
And you're still going to have executives and clients who still will want to use them.
And it's going to have to be, it's going to be a fight as it is with, I think, you know, people in the
arts professions, illustrators, copywriters, graphic designers, screenwriters, who
already, you know, who won the first round of their fight, you know, and there's going to be
many more.
And finally, I would shout out the work of Lucile Danilov, who's a translator who's written a lot about games localization and has a website called Loc'd and Loaded that you should check out if you're a translator interested in this stuff.
Awesome.
Yeah, we'll put those in the show notes so people can more easily find them, instead of having to remember what you were saying there. But I think that's fantastic that you laid those out. And I just wanted to pick up on a couple things that you were saying, right? Like, you know, when we think about the ways that these technologies are rolling out for language and translation, it immediately brought to mind what I heard from Māori speakers and people who advocate for that Indigenous language in New Zealand, and how they're worried about, you know, how AI will continue to hamper efforts to kind of, you know, renew, restore, enliven that language and keep kind of the older pronunciations and things alive, as it just becomes jumbled and treated as this translation of English and related more to English rather than what it previously
was. But also, you know, as someone who is from like an officially bilingual country, I think
having that kind of back and forth, that proper translation and, you know, the understanding of the
context between the two languages, even if you are a monolingual person and are trying to engage with
like the whole of French and English culture in Canada. And, you know, I believe there were a couple
people in the article who were from Canada who were kind of talking about things like this. Like,
I think if instead of having these translators who can translate that context, who can actually, like, have the meaning there, we're just, you know, replacing words, that's going to be such a huge loss for, you know, a country that still has kind of linguistic divides and identity issues around language and things like that, right? I think
it potentially harms some of those kind of like national unity questions there as well,
like, you know, these kind of bigger issues that we talk about. And, you know, obviously you guys
in the States have, you know, English and Spanish. It's a bit different up here where it's like,
you know, officially bilingual on the government level and things like that, right? And, you know,
just to close off our conversation, I wanted to ask you, you know, you have been talking to so
many of these workers. You have been writing about, learning about speaking to workers for so long
about this, but I wonder after doing this project, AI killed my job after hearing so many
stories from people about this latest wave. You know, has this changed how you assess the impact
of AI on work after doing this for so long? Like, what has been kind of the main takeaways that
you've had from this experience? You know, I wrote a year and maybe even a year and a half ago,
like, before I was really even fully doing the newsletter and would just kind of, like, jot out some thoughts on it occasionally, I wrote a post. I think it was called Understanding the Real Impacts of AI on Jobs, or something like that. And I was just randomly going over it again because it popped up when I was going through my archives, looking for something to link to in a recent piece. And pretty much everything that I predicted would happen has more or less been borne out so far. The fact that, you know, it's really going to be a
question of management using AI as a tool to sort of cut labor costs when possible to
concentrate their power or gain control in an organization, to use as leverage, which we've seen happening to some extent for sure. And I also, you know, didn't think from the beginning that we were going to see a jobs apocalypse either, that it would be sort of this mass unemployment event.
And, you know, no, I think I've been a little surprised by, or at least a year or two
ago I would have been surprised at sort of like the pervasiveness at how many corners
that companies have been determined to just ram AI into. Part of that's just, like, FOMO, that everybody's saying AI, AI is the buzzword. Like, if I'm a middle manager and I don't find a way to, like, have some kind of an AI program, my boss is going to think I'm stupid, so I better get it in there. I've been a little
surprised by the extent to which, like, in education, a lot of the teachers have been adopting AI in certain contexts. So, like, I feel like there are case-by-case instances where I'm a
little bit surprised by a certain use case. And even when it seems like it's obvious that this
isn't a great idea. And as that MIT study found, 95% of the time it's just, like, not going to generate any real savings or advantages for your firm or your institution.
So, yeah, the zealousness maybe a little bit.
So I should also say that the project is ongoing.
We have at least four more installments to do.
And so if you, dear listener of Tech Won't Save Us, have had AI kill your job in any way. And I should say, like, I'm still to this day ambivalent about the name, because AI Killed My Job, it's supposed to be, you know, AI has, like, changed, transformed, made unpleasant, immiserated your job, in whatever way, blanket. I don't mean to give the impression that, like, AI is an autonomous force that's killing jobs. That's the antithesis of everything I'm about. It's supposed to sort of help explode that myth of AI as a sentient thing. But it wouldn't be as catchy if all the additional context was in there, you know. Well, I tried. I was like, you know, the AI that my boss bought from an enterprise AI company killed my job.
So it just wasn't flying.
So if AI has killed your job or you know somebody who's dealing with AI in the workplace or who has seen their work fall off,
it's aikilledmyjob@pm.me. It's a Proton Mail account.
I'm particularly interested because the next two installments are going to be health care workers.
So if you're a nurse, if you're working as a therapist, if you're working in a hospital, in admin, if you're a health care worker, I would love to hear from you.
And secondly, I'm looking at illustrators and graphic designers, artists, people whose work has been impacted by the rise of services like Midjourney or DALL-E, which is not just ChatGPT but the image generation side.
But there are going to be more installments after that; those are just likely to be the next two. So I would love it if you have one to share. We'll throw that in the show notes too. Or Paris will. I'm no longer in control of throwing things in there. We'll see. Maybe I'll put it in the show notes now. That's great, Brian. I think it's so important that you're doing this. I think it's shed such important light on, you know, this facet of what we've been seeing with this AI hype over the past few years, and it really kind of grounds it for us, right, so that we can actually feel the tangible effects that this is having, which, as you say,
is not always destroying a job, but can still have, you know, massive, terrible repercussions for
people's lives, people's work, how people are living in this world, right? And often it doesn't
get near the attention that it deserves, especially when we see these statements from these
executives absolutely everywhere. So I think it's a great series, and I've really been enjoying it
so far and can't wait to hear and read the next installments coming. Installments. Thank you.
Well, it's certainly been really eye-opening to me to talk to all these workers and hear their stories, hear your stories.
And I'm so grateful to all of the workers and translators, tech workers, and everybody still to come who has submitted stories, answered my questions over email and, you know, started bringing to light these issues and the reality of AI on the ground.
So thank you to everybody who's participated so far.
Absolutely.
And Brian, great to speak to you, as always.
Thanks for coming back on Tech Won't Save Us after such a long period where you weren't here. But that was, of course, because we were talking every week somewhere else. We were doing our own show. Oh, Paris, it's always good to be here, you know that. Always a pleasure. I'm sure I'll see you again before long.
Brian Merchant is the author of Blood in the Machine and writes a newsletter of the same name. Tech Won't Save Us is made in partnership with The Nation magazine and is hosted by me, Paris Marx. Production is by Kyla Hewson. Tech Won't Save Us relies on the support of listeners like you to keep providing critical perspectives on the tech industry. You can join hundreds of other supporters by going to patreon.com slash techwontsaveus and making a pledge of your own. Thanks for listening, and make sure to come back next week.