Big Technology Podcast - Will Artificial Intelligence Take Our Jobs? A Conversation with Kevin Roose, NYT Columnist and Futureproof author
Episode Date: March 9, 2021

Artificial intelligence may take our jobs, or it may give us new freedom in the workplace. The debate is heating up as this technology is starting to enter the workplace in a real way. New York Times columnist Kevin Roose returns to Big Technology Podcast for a bonus episode for a discussion on what's actually happening based on the research. His book on the subject — Futureproof: 9 Rules for Humans in the Age of Automation — comes out today. You can order Futureproof here: https://www.kevinroose.com/futureproof If you liked this episode, would you be willing to rate us? ⭐⭐⭐⭐⭐
Transcript
The Big Technology podcast is sponsored by MediaOcean.
Looking for a job in big tech, you might want to take a look at MediaOcean.
They were just named by Ad Age as the number one place to work in advertising technology.
Go to mediaocean.com slash big tech to learn more about the company and check out their careers page.
MediaOcean is building the mission critical platform for omnichannel advertising.
If that sounds cool, or if you just want to find out what those buzzwords mean, head to MediaOcean.com slash big tech and browse their job listings.
And big thanks to MediaOcean for supporting the big technology podcast.
All right, everybody, hello and welcome to a bonus emergency podcast edition of the Big
Technology Podcast, a show for cool-headed, nuanced conversations of the tech world and beyond.
Joining us today is a friend of the show and the first repeat guest in big technology podcast history.
He is a New York Times columnist and author of Future Proof, nine rules for humans in the age of automation,
which comes out Tuesday, March 9th. Kevin Roose, welcome back to
the show. I'm so delighted to be the first repeat guest. What an honor. Thanks for having me.
It's great to have you back. It's really great to have you back. We talked originally about
getting you on to talk about your book and then we ended up having you on with Mark Ledwich for a debate
about your YouTube stories. And now the book's coming out and I was like, all right, we got to get
you on again so we can have the discussion we originally planned. So I appreciate you being willing
to come on and being generous with your time. Of course, anytime. And I was especially excited to talk to
you given that you were the only witness to the conversation, to the story that is
in the introduction to the book and can corroborate that that actually happened. Yeah, so let's
start with that. You and I were at a party in San Francisco and then we hear about something called
a boomer remover. What's a boomer remover? Yeah, so correct me if I'm getting any of the
details of this wrong. This was pre-pandemic, obviously, when parties were still a thing.
And you and I were, I don't know, at this party in some loft in San Francisco.
And this guy comes up and he introduces himself.
And he says, you know, I'm the founder of this startup and we make this AI software.
And he started talking about it.
There's something to do with factories and streamlining production.
And then he says, yeah, we call it the boomer remover.
And I remember, like, you and I sort of looked at each other.
We were like, that's strange.
Why would you call it the boomer remover?
And he started explaining, like, yeah, this software allows companies to basically get rid of their overpaid middle managers who just hang around and aren't productive anymore.
And it allows us to automate those jobs and give that money back to the company.
And like, this is the kind of conversation.
Like, I'm sure you have conversations with people in the Valley about technology.
and AI. And there are kind of two kinds of conversations. There's the ones when people are listening
and there's the ones when they don't think anyone's listening. And so, you know, companies like that
will say, oh, our software, you know, we don't replace people. We just sort of help them or we give them
new abilities or we take away the bad parts of their jobs. But this guy was basically saying, he was
coming right out and saying, no, like we are trying to replace people in workplaces all over the
world. Right. Like thrilled to get people fired.
Yeah, like, he didn't see anything wrong with it. Which, again, this is not a rare sentiment among the executives that I've talked to. It's just a rare sentiment for someone to say in public.
Right, right. You know, they say these things to themselves all the time. And you use it to lead off your book, Futureproof, about how automation is starting to come into the American workforce, or the global workforce. So why did you pick this topic, and what's it going to do to our workforce overall?
Well, I was on another podcast the other day, and the host asked me if this was a self-help book.
And I thought about it for a little bit because obviously that has some weird connotations.
But I thought, yeah, it literally was a self-help book.
I was worried about my own job and my own future.
You know, when I was a junior reporter, I was writing, I was covering Wall Street.
I was writing these kind of formulaic corporate earnings stories.
And I was seeing AI and automation start to do those jobs.
There were these algorithms that news companies were implementing to do that job.
And I've seen so many of my friends and other reporters lose their jobs in part because, you know, they've been automated.
And so I wanted to know what to do. And as I was talking to people, sources in Silicon Valley, and asking them what they thought, I found that the conversation was oddly polarized.
It was like either people thought that AI was amazing and was going to make all of our lives better and was going to have no side effects for people and their jobs.
or they thought it was terrible and dystopian and robots were going to take all the jobs
and we would be basically enslaved by Elon Musk's robot army or whatever.
And so I wanted to figure out like what is the truth?
What is actually happening with AI and automation?
So I went out, I interviewed a ton of experts, researchers, academics, practitioners, engineers.
And I tried to make it a practical book that would help me futureproof myself and my career,
could also help other people future-proof their own lives.
And so, you know, you talk about those two poles.
You know, this is going to destroy everyone's jobs.
It's going to make everything utopia.
Where do you fall on the question?
Well, I call myself a sub-optimist.
And that's because I'm largely optimistic about the technology itself.
I think, like, AI is, you know, a general-purpose technology.
It's being used all over the world for all kinds of things every day.
And most of those things, you know, are very good. Like, AI can help with drug discovery and vaccine production. It can
help with climate change. It can help, you know, with all kinds of implementations in the health
and medical industry. But I'm less optimistic about the people who are using this technology
and who are implementing it. Like the guy at the, you know, like the boomer remover guy. I mean,
there are a lot of people who are not using this technology to improve people's lives,
but are using it to put people out of work to, you know, make their jobs worse and more,
more mundane and to kind of control people and to bring about outcomes that I don't think are
positive. So I'm, I guess I'm somewhere in the middle. I think if it were, you know, zero to 10
scale where like zero is like, you know, everything is great. We have nothing to worry about.
And 10 is like red alert. Everything's going horribly wrong. I would be like a six or a seven.
Right. And you push back a little bit on this idea that we've been through technological change and it's led to an increase of jobs and
an increase of wealth. And you say that might not be the case for AI. Why is that?
Well, this is the typical argument that the optimists make, right? We've always had new technology
that's destroyed jobs. You know, we don't have people operating elevators anymore. You know,
we don't have ice salesmen. We don't have lamplighters. But when those technologies came in and
were replaced by new technologies, we got many more jobs. So,
the people who operated the elevators, you know, got jobs dispatching elevators in big, you know,
apartment complexes or something like that. But what we see when we look back at history is that
there's actually a lot more to it than that. For starters, periods of technological change are
very, very disruptive in people's lives. So, you know, when we talk about the original industrial
revolution, not only were there labor riots and really harsh working conditions, these sort of
Dickensian scenes at these awful factories where people were being packed in and, you know,
in really unsafe and unsanitary conditions. But it also took a long time for workers to actually
see the fruits of the new technology. It took about 50 years by some estimates from the time
that the machines started coming into factories until workers' wages started to rise.
So, yeah, we ended up, you know, society didn't come to a halt because of these new machines.
But for a lot of people who lived through that period of technological change, it wasn't a good time for them.
And I think that's sort of what we see happening now is in the aggregate, you know, we have a lot of unemployment now.
It's not related to automation.
Most of it's related to COVID.
But we also see these other effects of automation.
People's jobs, you know, becoming more closely surveilled and less human, and people, you know, becoming automated out of parts of their jobs that they actually enjoy.
So I think the people who say, you know, this always happens.
This is just a part of a story we've heard before really aren't paying attention to what makes this time different.
Yeah. Okay. Now I'm going to push back on you.
Yeah. Go for it.
Look, I mean, I've covered this stuff too. And, you know, one of the things that I've seen,
and maybe I'm wrong, and actually, I'm glad we're having this conversation because we can hash it out.
But, you know, this is the way that I look at it. We had the industrial economy, where people didn't have any time to actually give their ideas to management. Management would say, make widgets. They would make widgets, pull the lever. They spent all their time on what I call execution work, right? Work executing someone else's ideas. Zero percent of their time on idea work. Right? We moved to the knowledge economy, and all of a sudden people's ideas matter. All of a sudden, management is like, we want to hear what you think, we want you to use your knowledge
in the workplace. But there are so many arduous tasks that people do that in most companies,
they spend almost all their time on execution work still. They're not pulling levers.
They might be, you know, moving data from spreadsheet to spreadsheet or, you know, filing the same
report over again. Or, for instance, take your reporter example, you know, people who are combing
through earnings reports trying to make sense of that. And they don't have the time to either come
up with new ideas for the company or new ideas how to make their job better or new ideas to like, you
know, in a reporter instance, you know, go deeper on a story, spend more time on the phone
with sources. I mean, I think it's inarguable that our work days are filled with so much
crap. I mean, there's even a full book on it called Bullshit Jobs where people just sit
around and do nothing because the system requires them to do it. So don't we end up in a much
better workplace if we allow the machines to take care of those type of tasks and we can
focus on the idea work, which is sort of the core of the economy today?
That certainly can happen, and that's the best-case scenario, and I hope that that's what happens. But if you look through history, you don't see just examples of that happening. You see examples of people's jobs being made much worse, in some cases, by automation. One example I talk about in the book is this famous strike at GM in the 1970s at the Lordstown plant in
Ohio. This was supposed to be the most technologically advanced auto plant in the world. It had all
these robots. It was this brand new thing. They spent tons and tons of money creating the
state of the art factory. And they opened it up and the workers hated it. They were excited to
work with the machines, but they were not excited about how the machines became kind of like
bosses in a sense. They were slaves to the machine rather than using their own judgment and
creativity. So we see things like that happening today at places like Amazon. I mean, Amazon is
the most automated, probably the most automated company in the world in terms of just the
sheer number of processes that they've been able to turn over to machines. And yet they have
hundreds of thousands of workers who are not happy. They are essentially taking directions
from machines. They are kind of the endpoints of the automation where everything they do is
tracked and dictated to them by an algorithm. They wear bracelets that track their motion. If they don't meet their packing targets or their rate, you know, a machine can automatically generate the paperwork to fire them.
So that's inside of fulfillment centers.
Yeah.
So that's an environment where you, you would think that, you know, well, they have all this automation.
So isn't their job becoming more creative and flexible?
But no, it's actually, it's actually going the opposite direction.
It's becoming much more machine-like.
But, I mean, just to talk about the fulfillment center example, it does seem that there are machines that are doing some of the work they would do before.
Like, for instance, when someone was in the warehouse and their job was to pick an item out of one of these shelves and then, you know, send it off.
What they would actually have to do is move through the warehouse, go out and pick it and then drop it off into a bin that would send it out to shipping.
But now, with the robotics, the shelves come to them. So actually that eliminates that part of their job. They're still picking. And wouldn't it be fair to argue that the actual arduous part, the stuff that you're talking about that's so sinister, right, the tracking, has little to do with automation? It's more just, like, workplace monitoring technology. And can't we separate that out from the fact that the shelves come to the picker
now inside the fulfillment center.
Well, I think what you're talking about is a kind of automation that is like an automation
of management, actually.
So it's not actually the pickers who are being automated because the technology to do that
is still a ways away.
It's like the people who used to be called supervisors.
They've been automated.
And that job has been turned over to machines.
And I think that's, you know, that's part of what's happening in a lot of white collar
workforces is the kind of middle management layer is being automated away. And I think,
you know, to take your point, like, I don't think this is entirely a bad thing for workers. I think
that AI can be a wonderful tool for helping people, you know, do the parts of their jobs that they actually enjoy and sort of get rid of some of the drudgery. But it can also
introduce new kinds of drudgery. There's a, there's a great book called Ghost Work by two
technologists, Mary Gray and Siddharth Suri, and they talk about all of the kind of low-end, mundane
labor, generally very low-paid labor that is used to make AI systems function. So people who are
tagging images, who are, you know, who are labeling, you know, data sets, the kind of, you know,
the kind of mundane work that goes into producing AI and the systems that rely on AI. And so I think
we have to be very careful that we're not just sort of looking at our own experiences with automation,
but also looking at how it's transforming labor more broadly. No doubt. Yeah, that's one of the things
that I heard at Amazon, where, like, more often, you know, people become auditors of the technology
as opposed to, you know, folks that do things themselves, which means the technology does make a lot
of these decisions. And it's interesting when you hand it to them because then you start to get
into this whole new world where, yeah, those machines are making the calls.
Yeah, there's a chapter in the book, one of the, there are nine rules that I've sort of
come up with for people to future-proof their own jobs and their lives. And one of them
is about exactly this. It's rule number five. It's called don't be an endpoint. And the endpoints
in software engineering are kind of the places that you hook up one system to another. You know,
If you have one program that needs to call data from some other program, you would use the
endpoint to do that.
And in our world today, there are a lot of people who are functioning as basically human endpoints.
They take data from one system and they plug it into another system.
And those kind of jobs are really precarious and really in danger.
And so I think there are more of those jobs than we care to admit.
And by the way, it's not just low paid workers in retail and warehouses.
there are many doctors who have complained that their workplace automation has turned them
into essentially endpoints, that they plug in patient data to a computer and the computer
spits back out a recommendation and they follow the recommendation and they track their time
and they feel like minders of machines rather than people who get to spend time doing the
work of interacting with patients, the real human part of that job.
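Kevin's rule borrows a real software term, so it may help to make it concrete. Below is a minimal, hypothetical Python sketch (the record fields and unit conversion are invented for illustration) of the glue work a "human endpoint" performs: reading records out of one system, reshaping them, and keying them into another.

```python
# A minimal sketch of "endpoint" glue work: pulling records out of one
# system and entering them into another. The two "systems" here are
# stand-ins (plain dictionaries); in a real workplace they might be an
# electronic health record and a billing platform reached over HTTP.

def transfer_patients(source_records, destination):
    """Copy each record from the source system into the destination,
    reshaping fields along the way -- the repetitive step a human
    endpoint would otherwise perform by hand."""
    for rec in source_records:
        destination[rec["patient_id"]] = {
            "name": rec["name"],
            # Convert weight from pounds to kilograms for the target system.
            "weight_kg": round(rec["weight_lb"] * 0.453592, 1),
        }
    return destination

source = [
    {"patient_id": "p1", "name": "Ada", "weight_lb": 150},
    {"patient_id": "p2", "name": "Grace", "weight_lb": 132},
]
dest = transfer_patients(source, {})
print(dest["p1"]["weight_kg"])
```

Work that reduces to a loop like this, whoever currently performs it, is exactly the kind of job the chapter argues is most precarious.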
Yeah, but I guess you could also, and I hear that, but, like, you could speak
to most doctors, and you talk about the amount of time they spend on paperwork versus the amount
of time they spend actually talking to patients. The amount of documentation necessary has become totally outlandish. And I don't know. I wonder if we would benefit, especially in the
medical field from technology, automation technology appropriately applied. You know, it is interesting, at the end of the book that I wrote about this stuff, Always Day One, I talk a little bit about the case for thoughtful invention and how, like, we have this technology at our fingertips and we can do amazingly powerful things with it. But we often just tend to lose
the thoughtfulness because people are so drawn to the sweetness of invention. And it seems that
that's, you know, a big part of what you're arguing here is like let's have a manual for the way
that we handle this stuff. Yeah. And for executives and people who are in charge of implementing
this technology to actually listen to workers, to listen to their experiences, to hear whether
it's making their jobs better or worse, whether it's making them happier or unhappier. There was a great book I read about the history of electricity and the moment in the early
20th century when factories started to become electrified. And, you know, people who run factories
had assumed this would be great news for the workers of the factories, they would love this
because, you know, you can have a machine lift the heavy piece of steel. You don't have to like break your
own back, you know, lifting the heavy piece of steel. And so they were mystified when workers
started sort of complaining about the machines and the electricity in the factories. And it turns
out that what they hadn't thought about was the fact that a lot of the sort of workers really liked
the kind of downtime that it offered them. They liked the kind of, you know, the heavy lifting.
They liked the camaraderie of like working alongside other people in a factory doing the same
task. And to them, electricity just, it made everything more stressful. It made the pace faster. It made
the bosses impose more draconian, you know, rates and requirements on them. And so it made their job
a lot less fun. The parts of it that they enjoyed, you know, were stripped away along with the sort
of old way of doing things. Okay. After the break, I want to talk a little bit about who controls
technological progress, whether it is us or whether it is just a fact of life. So let's take a quick
pause, and we'll be right back after this and get into that here on the Big Technology
podcast. And we're back for the next part of the Big Technology podcast here with Kevin
Roose. He's a New York Times technology columnist, also the author of Futureproof, Nine Rules for Humans in the Age of Automation. It's an introduction he's going to start getting used to
because I'm sure you're going to be doing lots of podcast interviews as this book takes off.
Let's talk again about who makes these decisions, whether it is us or whether it is the technology,
because it does seem that humankind just has this obsession with invention and progress.
And I found it interesting in your book that you argue that it's up to us to decide, you know,
how we're going to structure the way that we use this technology.
But it also seems that every time that we have, you know, a leap, whether it's automation or nuclear power or even something in the health field,
we can't help ourselves.
Humans don't seem to be able to help ourselves.
and we just go ahead and we create it and we deal with the fallout afterwards.
And it does seem sometimes that like, you know, technological progress is inevitable.
Even when it comes to the fact that we invented the nuclear bomb, like, I mean, you know,
there's arguments to be made that that might have prevented more deaths in the long run.
But we made, you know, we didn't really stop and think about whether we should be creating
something like this.
We went ahead and did it.
And even one of the founders of the bomb himself was like, you know, we didn't think about
the repercussions, the sweetness of invention was too great and we couldn't turn away.
And so, like, I wonder, like, you're coming out with this idea saying, actually, we do have
control over the stuff.
We have agency over the stuff we develop.
I'd love for you to unpack that a little bit.
Yeah.
So one of the things that I found really troubling and really, I thought, wrongheaded in talking
with people in the tech industry about AI and automation, is this idea that AI or any other technology just happens to us. Like it's a, you know, force of nature
and not something that we decide on. And often the reason that the sort of human role in deciding
these things is sort of covered up or omitted from the narrative is because it, you know,
it allows people to sort of dodge responsibility when things go wrong. But if you, you know,
if you look at what's happening in the economy right now, I mean, people love to talk about
robots taking jobs or jobs being automated. But it's executives who are automating those
jobs. They are deciding, you know, I want 20% fewer people in the accounting department or I want
to take off, you know, 30% of costs in our billing and accounts payable department. And so that's
why I'm going to automate these jobs. They don't have to do that. There's no law that requires you to automate work as soon as it's, you know, economically feasible to do so. And I would take that across every technological innovation. And to your point about nuclear weapons,
you know, we actually, I think that actually proves that we can control the deployment of
technology. You know, we may not be able to control, you know, to put things back in the bottle
once they're invented. But, you know, we've had more than half a century of nuclear peace now. You know, we've managed through extreme, you know, political sort of brinksmanship
and activism to contain the use of nuclear weapons,
to prevent them from causing mass destruction.
We've done this with lots of things through the centuries
that we invented and then thought,
wait a minute, we probably shouldn't use that,
at least in the way that we had designed it
because that could hurt a lot of people.
So I think we're seeing the same debate today
with things like facial recognition,
where the technology is there.
There are unscrupulous vendors that are going around selling facial recognition software
to law enforcement agencies and other people who might misuse it.
But then there's also a movement of activists who are trying to stop it and to make it
illegal in certain contexts, and it's working.
So I think that's a case in which we actually are containing a technology because we've
decided as a society that it produces more harms than benefits.
So do you think that humans will actually decide to put the brakes on automation? And does that leave us with a workplace where people end up just doing more
of this arduous work? Like if the endpoint is just to keep them, you know, to keep them in a job,
is that sort of what we do, where we have, like, you know, more of these, we put the brakes on
automation. We have more of these jobs where it's like, well, you're going to come in and do a lot
of drudgery work. But we're not going to hit the automate button because of the labor impacts
that might happen. Where does it go?
Well, I think that there are a couple options.
I think one is, and I think it largely depends on how management decides to implement
automation.
I think if they do it in a way that is thoughtful, that consults with workers and involves
them in the process, that retrains people whose jobs are being automated to do other
things, and crucially, that shares the gains of automation with workers.
You know, if your company is 20% more productive because of automation, how much of that extra 20% marginal profit is going to workers and how much of it is being kept for executives?
So I think if executives do this thoughtfully, this can go smoothly and this can be great.
I mean, our workplaces are safer and better now than they were 100 years ago by many times.
And a lot of that's because technology has taken away a lot of the most dangerous and backbreaking work.
And so this can be really good, but it depends on how it's implemented.
If executives just automate jobs with no regard for the human toll, if they don't,
if they hoard all the gains of automation for themselves and don't share it with workers,
then yeah, there's going to be a backlash.
There's been a backlash every time in history that that's happened from the Luddites
in the Industrial Revolution to the strikes of the 1970s in the auto industry.
Yeah.
And where are we now?
because we have talked a lot about automation in your book.
You've mentioned that automation is well underway.
But as far as I can tell, there hasn't been a wide-scale loss of jobs due to automation.
So aren't we at a point where some of these fears do seem to be overblown given that, you know,
we have robotic process automation, which is in your first chapter.
Also in mine, I think we both saw that this was kind of interesting.
We have AI, and yet we don't have people now, like, lining up in the unemployment lines trying to, you know, find a way to get government benefits because AI put them out of work.
Well, we do have people in the unemployment line now. It's just because of COVID and not because of automation. Not AI. And so when I started writing this book,
which was before the pandemic, this was a real worry in my mind. It's like, what if all this is
just a theoretical worry and the reality is that, you know, people aren't being put out of work
in mass numbers. Now people are out of work in mass numbers. And so the question is how many of those
jobs are going to come back. And if you talk to a lot of, I got off the phone with an economist
this morning who, you know, who thinks that there's going to be a huge transformation in the
economy. It's not just, you know, automation, but for example, you know, business travel.
Will that come back? Are people, you know, are people still going to get on planes 200 days
a year? You know, who's going to pay the full freight at hotels during the week in big business
centers. The loss of commuters into downtown urban areas is going to be a huge hit to the
local economies there. And there actually has been a lot of automation of many of the jobs that
we used to do. One example is what's been happening with restaurant delivery and food delivery
more generally. We don't think of those jobs as being automated because there's still a human
bringing the thing to your door. But all the logistics, the sort of dispatching layer has been
totally automated by these companies like DoorDash and Postmates and Instacart. And so those jobs,
you know, those jobs used to be done by humans in a lot of these cases. And now they're not. So I think it remains to be seen what the unemployment picture is going to look like
after COVID. I'm sort of, I'm curious to see that as well. But I think we're already starting to
see the effects of things like RPA in the corporate world. And there's been some interesting
sort of numbers coming out of consulting firms and things like that about how many companies
are actually accelerating their automation plans because of the pandemic.
Yeah, I think we should take a minute just to talk about robotic process automation.
But I did enjoy like before we do, I did enjoy a few of the examples that you gave in the book
of stuff that's been automated during COVID. For instance, FedEx now has, I think this is right,
FedEx now has automated systems that are processing envelopes and sorting them.
Are there any others you want to throw out there?
Yeah, meatpacking plants have gotten a ton more automated during the pandemic.
There has been lots of demand for sort of security and cleaning robots, so the little things
you would use to clean an office building or a grocery store aisle or something like that.
Demand for those has been exploding.
And those are just the sort of hardware robots.
There is also the sort of entire world of software robots that, you know, some of which we would
categorize as robotic process automation or RPA, which is, by the way, the most boring world
imaginable, but one of the most important, I think.
And as you probably found when you were doing your book, like, people just don't talk about
this in part because it's incredibly boring.
Well, let's talk about it.
And we'll do our best to make it interesting.
So you spent some time with a company called Automation Anywhere. Why don't you tell us a little bit about what they do, and maybe try to wrap in an explanation of robotic process automation.
Sure.
So from what I understand, robotic process automation is basically the lowest level of software
automation.
It is not, you know, the most sophisticated AI coming out of Google and Tencent and Baidu.
It's not, you know, it's not the cutting edge of AI, but it's a category of software that
basically automates common business tasks.
So, you know, something an RPA bot might do is to, you know, take, you know, a digital image of a check for a bank and read the numbers off of it and enter those numbers into a database.
It could, you know, convert information from one, you know, CRM product to something else.
I mean, these are like the back office functions that, you know, corporations need to do every day just to operate.
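To make that concrete, here is a toy Python sketch of the check-processing example: pull numbers out of a semi-structured record and enter them into a database. The record format and table schema are invented for illustration; commercial RPA products like UiPath or Automation Anywhere actually drive application UIs and OCR engines rather than parsing tidy strings.

```python
# A toy sketch of the RPA pattern described above: extract the account
# number and amount from a check-like text record and insert them into
# a database. The "ACCT ... AMOUNT ..." format is made up for this example.
import re
import sqlite3

def process_check(raw, conn):
    """Read the account number and dollar amount out of one record
    and insert a row into the checks table."""
    match = re.search(r"ACCT\s+(\d+)\s+AMOUNT\s+\$([\d.]+)", raw)
    if match is None:
        # Real RPA deployments route unreadable items to a human queue.
        raise ValueError(f"unreadable record: {raw!r}")
    account, amount = match.group(1), float(match.group(2))
    conn.execute("INSERT INTO checks (account, amount) VALUES (?, ?)",
                 (account, amount))
    return account, amount

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checks (account TEXT, amount REAL)")
process_check("ACCT 001234 AMOUNT $89.50", conn)
process_check("ACCT 005678 AMOUNT $1200.00", conn)
total = conn.execute("SELECT SUM(amount) FROM checks").fetchone()[0]
print(total)
```

The point is how little intelligence this requires: it is pattern matching and data entry, which is why RPA can have a big labor-market impact without being cutting-edge AI.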
And so RPA is, you know, there are a lot of companies.
that are doing it. Automation Anywhere's one. There's one called UiPath. That's a big one.
There's Blue Prism. There are a number of firms like these. And they're growing like crazy.
They're so big. Yeah. People have no idea how big these companies are. They don't talk about them.
It's amazing. I was like, I went to a UiPath conference in maybe 2018 and or 2017, one of those two years.
And I was like, oh, I hope this story holds for the book that's coming out in 2020 and nobody covered it.
It's amazing. No one pays attention to it.
They are the most important companies that no one has any idea exist.
I agree.
And they are having a huge success automating the back offices and middle offices of Fortune 500 companies.
You know, one consultant or one analyst who studies this sector said they're basically building bots to do what Harry in the back office did.
And so there are a lot of...
Cubicle jobs. Yeah, there are a lot of cubicle jobs.
And it's not just rote data processing.
They're also now getting into things like sales forecasting, things that typically take a little
bit of judgment when humans do them, but that you can now automate.
So these firms are super successful.
They're making a lot of money.
No one knows they exist.
They're not going to win any awards at AI conferences because the technology isn't that
sophisticated, but frankly, it doesn't need to be to have a huge impact on the labor
market.
Yeah, UiPath told me that their software can even write new hire letters and termination
letters when the time comes. Yeah. And, you know, their sort of defense of what they do is
exactly what you said at the beginning of this interview: that their bots allow, you know,
workers to focus on higher value work. They take away the more mundane parts of their jobs.
But that's not actually how this ends up playing out. I mean, at a lot of these companies,
they implement RPA and then two quarters later, they lay off a bunch of people. And so
that's not universal. I'm sure some companies that do this are actually growing their staff. But in a lot of cases, the entire point of implementing RPA is to be able to, you know, cut your payroll.
Yeah. And when I was at the UiPath conference, there was a banking executive, we were just kind of at the buffet, and, you know, he's filling his plate with, like, fried cauliflower or something tasty. And he looks at me and goes, oh, yeah, we're doing this, you know, for FTE reduction, which means full-time employee reduction.
And I was just like, whoa, he just came out there and said it: they're here to put this technology into place to fire people.
The one thing I do wonder is whether corporate Darwinism, and this is really my belief, that corporate Darwinism will end up clearing out the companies that use automation, robotic process automation, AI, just to fire people.
Because, you know, you talk about like the S&P 500.
You know, companies used to last 67 years on the S&P 500 last century.
Now they last 15 years.
So you could live your whole existence with one idea back then.
Now you need, you know, maybe three or four to last the same amount of time.
And I've always felt that if you're going to be automating, put the people
whose tasks you automate on higher value tasks,
so you can reinvent or create new things and then buck the trend of that 15 years
and actually live longer because you are creating new things.
And I think a company like Amazon, like we talked about, does this in the
front office, right? They have like a lot of their white collar job functions automated and
they put people on new things. So I'm kind of curious whether you think, you know, those who do
automate to reduce full-time employees are, you know, getting a short-term gain and then
long-term pain or whether, you know, companies can be successful doing this stuff and, you know,
they need to be stopped.
Well, I think it's, it's some of both.
I mean, I think there will be companies that, you know, that reduce their head count
basically to nothing and succeed.
I mean, one, one example that I talk about in the book is, you know, one of the
largest lenders in China now is this, this company called My Bank.
And their signature product, their signature loan product, they call 310, because it basically takes three minutes to apply for a loan, one second for an algorithm to approve it, and zero humans.
So they are fully automated. And this is not an old bank that converted into an automated bank. This is a totally new bank. They have only a few hundred employees, and they're outperforming companies that have tens of thousands of employees with a more manual process.
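My Bank's actual underwriting model is, of course, proprietary; but as a toy illustration of what a "zero humans" approval step means in practice, a few hard-coded rules can stand in for whatever scoring the real lender does. Every name and threshold below is invented.

```python
from dataclasses import dataclass

@dataclass
class Application:
    monthly_revenue: float  # e.g. derived from the applicant's payment history
    months_trading: int     # how long the business has operated
    prior_defaults: int

def approve(app: Application, amount: float) -> bool:
    """One-second, zero-human decision: toy rules standing in for
    whatever model the real lender uses."""
    if app.prior_defaults > 0:
        return False
    if app.months_trading < 6:
        return False
    # Lend at most three months of revenue.
    return amount <= 3 * app.monthly_revenue

print(approve(Application(10_000, 24, 0), 25_000))  # True
print(approve(Application(10_000, 24, 0), 40_000))  # False
```

The point isn't the rules themselves; it's that once the decision is a pure function of data the bank already holds, the marginal cost of each approval is effectively zero, which is why a few hundred employees can outcompete tens of thousands.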
And so I think the typical image that people have of automation in the workplace is like
you come in one day and there's a robot sitting in your seat, and your boss goes, I'm sorry,
Alex, your job has been automated.
We'll no longer need your services here at the Big Technology Podcast.
But really, that's not how this works.
What happens most of the time is that, you know, an old company that relies more on manual
services is out competed and replaced by a company that's much, much smaller in terms of its headcount,
but is able to accomplish a similar amount of work.
Okay, that's fascinating. Let's talk about this a little bit, because what do you think
the old companies are supposed to do now? Let's say they're leading the economy
and they're in the driver's seat to make the decisions that you're talking about: consult with
workers, be more thoughtful. Then a new, smaller company comes along, starts to challenge
them, and does what they do with, you know, fewer people because of automation.
So what do companies do, right? If the choices are keep the old way and get killed by
the competition using automation, or use automation yourself and try to get in line with what
your more nimble competitors are doing. I guess I'm trying to
figure out, yeah, how does this work? Yeah, I mean, I think that's sort of a false choice. I mean,
I don't think the choices are, you know, automate and survive, or don't automate and die.
There are, you know, companies that are succeeding with very little automation.
One of the fascinating things that I heard when I was reporting this book, there was a talk given
by the head of AI research at Facebook, Yann LeCun, and he talked about how sort of in the future,
all of the things that are done by machines will lose their value,
and all the things that are done by humans will gain value.
So he used the example of, you know, like a flat screen TV versus a ceramic bowl.
So like a flat screen TV is a complex piece of machinery, you know, hundreds of parts.
It's got transistors and lasers and rare earth metals and stuff like that.
And it's all assembled and packed by machines.
It's basically, you know, end to end a product of robots.
And as a result, like, it's very cheap.
Like, you can get a pretty good flat screen TV for a couple hundred bucks.
But if you want to buy like a nice ceramic bowl, like technology that's been around for thousands of years from a talented artisan, like that's going to cost you probably more than the flat screen TV because what you're paying for is the human touch.
You're paying for the part that makes people feel connected to another person.
And so I think this is a good blueprint for businesses as they look toward the future: if your business is just selling people things as cheap as possible, then yeah, you're going to
want to automate, because that's the path to survival for you. But there's another path,
which is that your business can be about making people feel things and making people feel a connection
and providing something that is more like a service than a product. And I think that's the other
direction that we're seeing companies go in is trying to build an emotional connection with
customers through what they're selling so that they're not subject to the same deflationary pressures
as the highly automated firms.
Yeah, I like that. And so that sort of goes along with my feeling, which is that if you use automation to remove some of the execution work, invest in
ideas, invest in human relationships. And that's when you're probably going to be in a good
spot. Do we have time to go through your nine rules quickly?
We can if you want to. It's your show.
All right. Let's do it right after the break. We'll be back right
after this. Hey, everyone. Let me tell you about the Hustle Daily Show, a podcast filled with business,
tech news, and original stories to keep you in the loop on what's trending. More than two million
professionals read The Hustle's daily email for its irreverent and informative takes on business
and tech news. Now, they have a daily podcast called The Hustle Daily Show, where their team of
writers break down the biggest business headlines in 15 minutes or less and explain why you should
care about them. So, search for The Hustle Daily Show and your favorite podcast app, like the one
you're using right now.
And we're back for the third segment of the Big Technology Podcast here
with Kevin Roose of The New York Times, author of Futureproof: Nine Rules for Humans in the
Age of Automation. Let's go to the rules. Kevin, we can just go quickly through the rules, but I'm
kind of curious how you think people should be approaching the world of AI that we've discussed
and how to make sure that they're not going to end up on the wrong side of automation.
So why don't we just do a rapid fire run through the rules, and then we can wrap it up and
let you get on with your day. Sure. I'll read them off and you tell me which ones you want to
dive into. We don't have three hours, so I won't go into all of them in depth. But number one,
be surprising, social, and scarce.
Number two, resist machine drift.
Number three, demote your devices.
Rule four, leave handprints.
Rule five, don't be an endpoint.
Rule six, treat AI like a chimp army.
Rule seven, build big nets and small webs.
Rule eight, learn machine age humanities, and rule nine, arm the rebels.
So you tell me where to go deeper.
I would kind of like to do each one as quickly as we can.
Sure.
We already did, don't be an endpoint.
So we'll skip that.
Let's just go quick to the first one and tell us a little bit about what it is.
Yeah, so rule one, be surprising, social, and scarce.
This is basically what I got from talking to AI experts and asking them the question,
what can humans do better than computers?
And what are they likely to be able to do better than computers for a long time?
And these are the three categories:
surprising, social, and scarce.
So surprising would be like work that involves lots of edge cases and weird scenarios and changing
rules and lots of variables.
Social would be work that makes people feel things, these kinds of empathy-based jobs.
And scarce work would be work that involves sort of rare combinations of skills, people
who are exceptional in their fields.
Oops, sorry, I hit my mic there.
Surprising, so you're on your way.
Yeah, exactly.
Rule two is resist machine drift.
So this is about sort of feeling out of control of our own choices because of the influence
of algorithms.
So I think one takeaway from the book, and one thing that I've really come to believe
is that the more human we are, the more likely we are to succeed in the automated future,
which leads into rule number three, demote your devices, which is not about sort of using your
phone less.
It's not a phone detox chapter.
Which you've done.
Which I've done.
I did do a phone detox.
But this is about taking control back over our digital environments and our own choices,
and really sort of, you know, becoming less subjugated to the sort of algorithms that run
the world.
Does that mean, like, be aware of, like, what the news feed and stuff like that are doing
to you?
Be aware of what the news feed is doing to you and carve out space for yourself and for figuring
out what you actually like.
You know, one thing that I've found in the research is that basically, at the very
time when we need to be the most human and be developing our human skills, some of these products,
these social networks, these recommendation algorithms are actually sort of depriving us of our
humanity. They're making us more homogenous, more predictable. They're sort of taking away some
aspects of our humanity. Rule four is leave handprints. And that's sort of the one I referred to
earlier about sort of the value of work that is obviously human, that is, you know, that makes people
feel a connection to you as a producer.
Yeah, this is the ceramic bowl rule.
Then there's don't be an endpoint, which is about this, you know, category of jobs that
involves sort of passing things between machines or interfacing between software
programs, and that's a really dangerous place to be.
And that's like moving data from spreadsheet to spreadsheet, which just seems like a
pretty precarious spot in today's economy.
Automation anywhere can handle that pretty well.
Exactly. Rule 6, treat AI like a chimp army. This is a metaphor about sort of the dangers of over-automating, of giving too much authority to AI within the context of an organization. So there are lots of examples about, you know, how unsupervised or inadequately supervised AI can totally wreak havoc in your business and your life. There's an example of...
Yeah, I was going to ask for a story about this. I'm sure you have some good ways.
Yeah, there's a really actually sad story in the book about this guy named Mike Fowler.
He is an Australian entrepreneur.
And he, do you remember, like, in 2014 when, like, you could see all those shirts that were advertised on Facebook that were, like, just algorithmically generated, like, combinations of words?
So it would say, like, you know, kiss me, I'm a podcast host named Alex, you know, from the Bay Area.
I have that shirt.
And it would just be like total algorithm, you know, fodder.
And so Mike Fowler was one of the early movers in that space.
He built an algorithm that could generate custom T-shirt slogans and automatically list them online for sale.
And it worked really well.
And then one day he woke up and his phone was blowing up.
And he had all these messages and requests for comment from reporters.
And it turns out that he had screwed up his algorithm.
He had forgotten to take some words out of the word bank.
And so his algorithm was producing shirts with messages like keep calm and hit her or keep calm and rape a lot.
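Fowler's code was never published, so this is only a toy sketch of the failure mode, with harmless made-up words standing in for the offensive ones. The whole bug comes down to one missing filter step over the word bank.

```python
import itertools

TEMPLATE = "KEEP CALM AND {verb} {noun}"
VERBS = ["HUG", "LOVE", "PUNCH"]  # "PUNCH" should never have been in the bank
NOUNS = ["A CAT", "A FRIEND"]
BLOCKED_WORDS = {"PUNCH"}         # the filter step that was forgotten

def slogans(apply_filter):
    """Generate every verb/noun combination, optionally dropping any
    slogan whose verb contains a blocked word."""
    out = []
    for verb, noun in itertools.product(VERBS, NOUNS):
        if apply_filter and any(w in BLOCKED_WORDS for w in verb.split()):
            continue
        out.append(TEMPLATE.format(verb=verb, noun=noun))
    return out

print(len(slogans(apply_filter=False)))  # 6 -- includes the ones you'd regret
print(len(slogans(apply_filter=True)))   # 4 -- the forgotten filter, restored
```

With a word bank of thousands of entries, nobody reads every combination before it goes on sale, which is exactly the "unsupervised chimp army" problem Kevin's rule is about.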
And so he ends up like Microsoft's robot Tay.
Exactly, exactly.
You know, it was this fun chatbot for kids and then it started saying all these Nazi things basically overnight.
So, sorry, go ahead.
No, I think that's a great example.
And that's something that I think illustrates the risk of giving too much authority to AI before it's ready to handle it.
Yeah, good rule.
Cool.
Rule 7 is about building big nets and small webs.
This is about sort of the systemic changes that are going to make it easier for people
to survive this disruption in the economy.
So big nets would be things like UBI.
Are you in favor?
I am.
I think what we're finding now during the stimulus is that, you know, actually
giving people money directly is good for the economy and that it helps people
stay afloat. There are also other, you know, solutions. In Sweden and Japan, they have these
kind of job councils that catch people when they get laid off due to automation. And then the
small webs are the kind of, you know, community-based groups that help people through periods of
transition, whether it's unions or worker collectives, even religious groups or neighborhoods. I think
those are really important for people.
Right. We've definitely lost a lot of that community fabric that held us together, religion and community groups. And it seems to be a real reason
why we have such an angry society right now is because those support groups that used to be
there for folks that they could lean on and find meaning from have dissipated. And I think it's
also like, I'll just go off on one small tangent about this, which is that I think it's also
part of why people are so scared of AI and new technologies. We don't
celebrate them as community events. We don't look at the positives they can do. I mean, one
story that I love that I found in the course of researching the book is that in the
1920s and 30s, when the rural parts of America were getting electricity for the first time,
towns would have these festivals. They would have these like parties when their town got
electricity for the first time. And, you know, the Boy Scouts would play taps, and
they would have a mock funeral for, like, a kerosene lamp, where they would bury the lamp, because
the days of lamps were over. And it was sort of this amazing community time to come together
and celebrate the good that these technologies were doing. And we just don't do that anymore.
You know, we're not having ticker tape parades in the street for the people, the scientists who made
the COVID vaccine, for example. And I think that's something we should be doing.
Love it. Rule 8 is about what we should teach people, what we should teach ourselves, what we should
teach kids to prepare them for work and life in an automated society. I think we've been doing
this entirely wrong. For years, we've been telling people, go major in STEM, learn to code,
become as productive as possible, optimize your time, basically be as much like a machine as you can.
I think it's a really dangerous message. Memorize, spit back. That's been the education system.
Exactly. And so I think many more schools and organizations are starting to think about how do we actually
teach people the things that are going to make them stand out from machines. Things like,
you know, like social, you know, emotional skills, ethics, you know, morals. So there's some of that
in there, too, some suggestions for what we should teach people. And then chapter nine, or rule nine,
arm the rebels, is just sort of about this possibility of engaging with these technologies,
of, you know, of fighting back against things that feel unfair or dangerous to us, of not
letting the C-suite executives at big companies make all of the decisions about how this technology
is implemented and how the gains from it are distributed. I think, you know, in every era of
technological change we've had in the last, you know, three or four hundred years, the progress
has come not from executives and factory owners, you know, implementing stuff from the top
down, but from workers and people who are grappling with it, who are trying to make it more fair,
who are, you know, agitating for changes that will make their lives better.
And so I think we really have to engage with AI and automation in a way that we're,
most of us are not doing right now.
Yeah.
No, I agree.
I think those are nine great rules.
No, no, everybody go buy the book.
I'm midway through.
I think it's terrific.
Okay, as we wrap up, a couple quick questions for you to speculate on.
On the current trajectory we're going, do you think we end up with mass AI driven unemployment?
Yes.
Wow. Okay. Do you think UBI becomes a thing in society?
I hope so, but I suspect not. I suspect it'll be more like a wealth tax or some other form of redistribution.
And do you think the impacts that we're going to see from this stuff are like two, five or ten years away?
I think more like five than two, and, yeah, I think five feels like about the sweet spot.
Yeah.
All right.
So we have some time to prepare.
Everyone go out and prepare by buying Futureproof: Nine Rules for Humans in the Age of Automation.
Kevin, thanks again for joining us as our first repeat guest here on Big Technology podcast.
We'll have to have you back again.
Thanks, Alex.
Really a pleasure.
Okay, great.
And good luck with the book tour.
I know this is the beginning.
So I'm excited to follow it as it continues to go on.
Your nine rules seem like a pretty good keynote talk for all the big tech conferences that will start to resume. Yeah, for sure.
We'll see. Cool, man. Great stuff. Good to talk to you. Thank you. You too. Thanks everybody for
listening. We'll be back on Wednesday with a new episode, Twitter focused. So make sure to stay for that
one. If you're new to the show, please subscribe. We do this every Wednesday with a bonus like this
one here and there. And if you've been listening for a while and want to rate us, we would definitely
appreciate that. Thanks again for listening. It's been a pleasure having you here on the big technology
podcast. We will see you sometime soon. Thanks again.