Pivot - Future of Work: AI
Episode Date: April 3, 2024
In the third and final part of our Future of Work series, Kara and Scott chat with Susan Athey, who teaches The Economics of Technology at Stanford Graduate School of Business. They take a deep dive into AI, discussing how it will impact work as we know it, and whether all the doom and gloom is justified. Follow Susan at @Susan_Athey. Follow us on Instagram and Threads at @pivotpodcastofficial. Follow us on TikTok at @pivotpodcast. Send us your questions by calling us at 855-51-PIVOT, or at nymag.com/pivot. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Support for Pivot comes from Virgin Atlantic.
Too many of us are so focused on getting to our destination that we forget to embrace the journey.
Well, when you fly Virgin Atlantic, that memorable trip begins right from the moment you check in.
On board, you'll find everything you need to relax, recharge, or carry on working.
Lie-flat private suites, fast Wi-Fi, hours of entertainment, delicious dining, and warm, welcoming service that's designed around you.
Check out virginatlantic.com for your next trip to London and beyond,
and see for yourself how traveling for business can always be a pleasure.
Hi, everyone. This is Pivot from New York Magazine and the Vox Media Podcast Network.
I'm Kara Swisher.
And I'm Scott Galloway.
And this is our special three-part series on the future of work, where we look at the business and technology trends that will shape the workforce, employment, and the very nature
of work. Today, we're going to do a deep dive on AI and how it will impact work as we know it.
Some numbers to get us started. 56% of U.S. workers report using generative AI to complete work tasks, according to a survey from The Conference Board.
22% of U.S. workers worry that technology will make their jobs obsolete, according to Gallup.
Only 26% of companies have established AI policies.
We're going to talk through all this with our guest, Susan Athey.
Susan is a professor of the economics of technology at Stanford Graduate School of Business.
She's also the chief economist of the antitrust division at the U.S. Department of Justice,
but is talking to us today about her work at Stanford because there's a lot going on at the Justice Department.
Welcome, Susan.
Hi, it's great to be here.
All right.
First of all, there's a lot of doom and gloom talk out there when it comes to how the workforce will transform due to AI.
I think it's the single biggest question I get.
I'm on a book tour right now where people ask about it, and they're worried
about it. They don't have a lot of information, but there is some merit to it. A recent survey
found that 44% of companies expect some layoffs to occur in 2024 due to new AI capabilities.
As Scott always talks about, CEOs are always looking for efficiencies and it makes sense
if they can find them. Where do you fall? Are we too worried
or not worried enough? I happen to be in the middle. I think Scott is probably in the middle
too. But where do you fall? Yeah, I think I'm also in the middle. I think there's a lot of hot
takes that are pretty extreme. So at one end, there's utopia, where our biggest problem is how
to find meaning when we don't need to work anymore. And, you know, drones are dropping things at our doorstep
and, you know, we're reprinting our food.
But at the other end, you know, there's some kind of dystopia.
And there's a lot of different versions of that dystopia, actually.
You know, robot wars or other things.
But, you know, even if we imagine sort of peacetime,
you know, you may have a highly capital-intensive world in an economy where a few get rich and the rest become irrelevant.
If you don't need workers, they lack political power, and we don't do a good job with redistribution, and there's mass unemployment, which then, of course, can lead to unrest.
So, you know, those are pretty extreme,
although science fiction kind of gives you some ideas of what to imagine. But my own view is much more in the middle. One thing, the utopia is a bit unrealistic because it leaves out the economics
and politics of how everything gets done, like how do resources get allocated and who actually
governs us. But the dystopia doesn't
seem imminent either because, you know, there's so many bottlenecks and constraints on the path
to universal adoption of a new technology. We still have to fax stuff. And so, there's just a
lot of frictions on the way to, you know, being able to achieve mass adoption. So I'm really more focused on short-term worries,
like how do we help people make the transition as certain jobs are likely to be displaced and
how to include more people in prosperity. Nice to meet you, Susan. So when you think about
technology and its impact or new technologies, whether it's automation or different agri-farming
technologies, it generally follows the following curve. There's some short-term job
destruction and then those efficiencies and that capital or those profits are redeployed
and you end up typically with net job growth. And I don't see why this technology would be any different.
What's different here? Shouldn't this result in net job growth eventually?
So, I mean, it's certainly possible that you can end up in a more capital-intensive economy.
And so, you know, there's no necessary reason that it has to go one way or the other.
But in the end, you know, there's a lot of things
where humans can be productive, and they even can be productive in a world of robots. So if you
think about taking care of older people or taking care of younger people, you know, that's a nice
example because it tends to sort of scale with the size of the population, and we're going to
have young people and we're going to have old people. And there really can be a pretty low productivity ratio of humans looking after other humans. So that just suggests to me that the marginal product of humans isn't going
to go down. And in fact, you know, we may find ways to augment humans so that they can be productive
longer or be productive with less knowledge.
So, for example, for elder care, more focused on the well-being side of things rather than just,
you know, the physical care. So, I think, you know, I agree with you that we just can't
necessarily imagine what the new equilibrium looks like, but it seems hard for me to imagine that there aren't things for humans to do. And also, until the AI solves electricity and figures out how to make a lot more chips, we will still have constraints on the resources that are used on computing.
Leading into Scott's question, what are, say, the positive features it would bring to the workforce that we're overlooking? There have been a lot of AI professionals doing the doomsaying.
So one thing is that it can be stressful in a job if you're worried about making a mistake.
Also, certain physical aspects of a job can be challenging. And a lot of jobs require a lot of
training to avoid making mistakes. One thing that AI can do is it
can help you be good at a job with a little bit less upfront training, and it can also avoid
mistakes. Yeah, one reason that seniors want to stop doing their jobs is if they start having
some memory problems or worry about overlooking things, that's very stressful for them. So they
don't want to do a bad job or hurt someone in their job. Like they want to leave work, they might want to do childcare, but then they're
worried they might, you know, forget something or forget the kid in the car.
Like the child.
Exactly. But the same with elder care that requires the same kind of attention to detail.
But in the end, if drugs can be dispensed automatically, and if you have some kind of,
you know, safety monitoring going on, then it can be possible to really reduce or eliminate some of those risks and then give people the chance to not just do something that's a fun hobby that keeps them busy, but something that really contributes to society and helps keep them engaged.
So an aide, you're talking about like an aide.
Now, you run a lab at Stanford with a thesis
that technology can benefit humans.
Tell us about that work
and as it relates to AI in the workforce.
First of all, AI is a general purpose technology.
So what that means really broadly
is that the same innovations can be used
for lots of different purposes.
And it could be distributed around the world
and shared at relatively low
cost. So the premise of our lab is that we will collaborate with social impact organizations.
The organizations have a relationship with some end customers. They could be patients in a hospital,
they could be people who need counseling, they could be workers who need transitioning or
students learning. The organization has that relationship with the end consumers.
And then we try to take advantage
of all the students we have at Stanford
and all of the great technological capabilities
that we have to build things
for those social impact applications.
And the idea is once you build them,
they can be shared and spread.
So more broadly, when you just think about it,
someone who understands the capabilities of AI,
people have outlined so many potential threats. It becomes sentient, income inequality,
job destruction. There's just so many different threats that people have outlined.
Is there one threat, Susan, that you think is the most ominous that we should be focusing on
from a regulatory standpoint and that academics should be modeling out ominous that we should be focusing on from a regulatory standpoint
and that academics should be modeling out?
What should we be doing
to potentially prevent a tragedy of the commons
if there's a threat here?
So I think there's a number
of individual tactical threats,
and then there's some
that are maybe a bit more systemic.
So starting with the jobs,
we have never been good as a society at
following through with the redistribution that can make everybody better off. So international
trade is a great example. Econ textbooks tell us why that's good, why that can make everybody
better off. But often, if people are attached to a location, individual humans are
made much worse off. And there's a lot of evidence about the negative impacts of job displacement.
I've done some of this research myself. And often, the people in the worst locations and the worst
industries, if they lose a factory, they can be very badly off even 10 years later, while the people
who are more educated and mobile are able to move locations or find new jobs.
So what we see here is that over the last 10 or 15 years, we haven't seen as much productivity
benefit of all of the computing advances in the numbers, but we have seen a lot of firms
lay the infrastructure.
So they are in the cloud now.
They're using software as a service.
And that makes it much faster for them to adopt a certain technology if it comes quickly.
So think about human customer service agents.
If there's software as a service and firms are already kind of plugged into that software as a service for their current humans, you could see many firms all at once kind of replacing their humans.
And so those people, especially if there are
areas of the country where they put the call centers because labor was cheap, because there
weren't a lot of other jobs or in certain countries specialized in this, they could get hit hard all
at once. And that can be very disruptive. And in the political environments we have, not just in
the U.S. or around the world, it can be very difficult to do the redistribution that might, you know,
help everybody share in the benefits because somewhere else, you know, there may be lots
of benefits, although it might be accruing to a smaller number of people if it's a capital
intensive replacement.
It might be the software engineers who are making the improvements.
So that's like an economy-wide thing.
And then there's a bunch of tactical issues as well.
I think one, you know, the disinformation, misinformation is a really big issue. We need people to be invested in democracy in order to, you know, go through any transition. We need people
to think about hard problems. And if we have hard problems with trade-offs, you know, and then people are just
kind of being polarized in the process, we won't be able to have the kinds of societal discussions
we need. And then, of course, there's all the security threats. Those are also, you know,
a bit scary. And we've never been very good at investing to prevent problems. We're good at reacting.
But some of these problems may come so fast that we are kind of left flat footed.
So we're going to go on a quick break.
And when we come back, we'll talk more about AI's impact on the workforce, including which industries you think will be most transformed in the next five years due to AI.
Fox Creative.
This is advertiser content from Zelle.
When you picture an online scammer, what do you see?
For the longest time, we have these images of somebody sitting crouched over their computer
with a hoodie on, just kind of typing away in the middle of the night.
And honestly, that's not what it is anymore.
That's Ian Mitchell, a banker turned fraud fighter.
These days, online scams look more like crime syndicates than individual con artists.
And they're making bank.
Last year, scammers made off with more than $10 billion.
It's mind-blowing to see the kind of infrastructure that's been built to facilitate
scamming at scale. There are hundreds, if not thousands, of scam centers all around the world.
These are very savvy business people. These are organized criminal rings. And so once we
understand the magnitude of this problem, we can protect people better. One challenge that fraud fighters like Ian face
is that scam victims sometimes feel too ashamed to discuss what happened to them. But Ian says
one of our best defenses is simple. We need to talk to each other. We need to have those awkward
conversations around what do you do if you have text messages you don't recognize? What do you do
if you start getting asked to send information that's more sensitive? Even my own father fell victim to a, thank goodness,
a smaller dollar scam, but he fell victim and we have these conversations all the time.
So we are all at risk and we all need to work together to protect each other.
Learn more about how to protect yourself at vox.com slash zelle. And when using digital payment platforms,
remember to only send money to people you know and trust.
Scott, we're back with our special series
on the future of work.
We're talking to Stanford professor, Susan Athey.
So we're going to talk about healthcare in a second,
but because I think it's probably the one
that's going to be most transformed,
but what industries do you think off the top will be most transformed?
So one thing to look at, first of all, is just the industry of AI.
And we have a lot up in the air right now about how that's going to shake out.
So we do need to be aware of how concentrated that industry is going to be and whether there's going to be a good
environment for startups to be able to create services. And we even see it in my lab at
Stanford. We're building services that can be used for social impact, but that requires tools.
AI as a business.
AI as a business, but AI, of course, transforms everything around it. We are seeing some of the earliest adopters being, say, software as a service firms that
serve a lot of customers.
And so the ones that get ahead in AI can have a higher market share.
And all of the things, all the infrastructure around it will be transformed as well.
Right.
So that's as a business.
So healthcare is a hot topic, obviously, when it comes to AI disruption. The market for AI in healthcare is projected to reach over $170 billion by 2029, but 60% of Americans say they would be uncomfortable with a provider relying on AI. Speaking of hot topics, you did a trial using digital counseling to help patients choose contraceptive methods. Talk about the overall thing, and then what you did there and how it went.
So in the developing country context, it can be very difficult to recruit enough nurses that have a lot of education and experience. So this was a digital assistant for the nurses to guide
patients through a counseling session. It made sure that the patients were able to express their concerns about side effects as well as their desires. And then it provided a ranking of options.
And we compared a method where the app provided a ranking versus where the patients just led the
discussion. And we found that when the app provided a ranking that was responsive to what the patients wanted, the patients spent more time evaluating the options and had higher satisfaction and
were more educated about their options.
But interestingly, the nurses also liked it, and they felt that they learned from using
the application because it's not very satisfying to counsel people when you're not sure you're
giving them the very best information for them.
And it can be hard to hold in your head all the different combinations of side effects
and concerns people want.
And so the application sort of helped them do a better job helping their patients.
And so I think that's a general trend.
And again, in the developing context or in places where resources are tight, you can potentially get better information to people.
But crucially, there was still a human in the loop who could interpret all of it and
could answer questions and help people feel comfortable with the information that they
were getting.
And also, the patient was more engaged as a result of this and was more participating
in their decision making.
And I think that's a
much more likely short-term thing because it's really hard to get technology to avoid errors. And if you're in a high-stakes environment like healthcare, errors can still be very costly. So having an assistant to the provider to make better choices is something that seems eminently likely.
So somewhere there's going to be a lot of change.
And what about for teachers? The stats are pretty staggering. 60% of educators use AI in their
classrooms already. You specifically experimented with teaching children to read using news feeds,
no less. Tell us about that. Yeah, so we worked with an educational application
that was sort of like a Netflix or a TikTok for stories for reading. And when we first started
working with them, they had humans curate the news feed to pick stories they thought would be
interesting. But we built a recommendation system based on the students' past behavior and found, you know, 50% increases in the amount of stories the students read.
And then we also use gamification to try to get them excited about it at the beginning and show that the students continued reading afterwards.
And what I take from that is, like, look, the commercial sector has figured out how
to get you hooked, you know, but a lot of that is detrimental.
It's doom scrolling.
It doesn't really make you happy. But we can potentially use those same kinds of tactics for good, for education, and to help you develop positive habits.
So you touched on K through 12, Professor. I think a bunch of us have been talking about
and waiting for the impending disruption of higher ed. And I've just been
shocked how resilient and static it is. I mean, I don't know about GSB, though I think it's the same there, but you walk into Stern and, I mean, you could be walking in in 2000 or 2023; the classroom environment just hasn't changed that much. The curriculum hasn't changed that much.
Do you see any sort of disruption coming or is this all, I mean, it just feels like so far,
it's been this fortress where the walls hold onto the business model. Do you think that AI is going
to change higher ed? So I had a colleague that actually fine-tuned an AI on all of his course
materials. And so it can answer questions about the course materials,
but only the course, well, as much as possible,
focused on the course materials.
And he said that it cut way down on the email questions
during that semester of the course.
So I do think that there's a lot of ability
to get sort of basic repetitive questions answered
in a more customized way. We're also, I led a study
at GSB of how AI could be used, and we felt like we're generally in the early stages. There's been
a lot of experimentation in terms of, you know, how do you integrate it rather than outlaw it
while still getting people to learn the concepts.
One example of that is coding syntax.
So there's some people who are CS students and they need to learn to code.
But we need MBA students, we need business people to be able to think about how coding works because there's going to be digitization in every single industry going forward.
But MBAs aren't super excited about learning lots of syntax
and you waste all your time just teaching details.
And now the code reviews can help you with the syntax.
And so that can help people move much faster
and get to the interesting stuff, the thinking part,
and not spend as much time on the syntax part.
But there's a downside because, you know,
they can also more easily just skip over it without
thinking at all. And so, I think we're going to be really challenged as professors to really change
the way we assess students so that we ensure that they still get the conceptual part, but don't get
bogged down with the part that may be less important in the future, like missing commas
and where the curly braces go is just not where it's at, you know, basically starting now.
What do you advise students in terms of trying to prepare for an AI future?
Outside of taking courses in AI and spending time with different LLMs, do you think it's
going to impact the way, I don't know, the skills we
emphasize in terms of preparing for a more AI-enabled future? I think logic is going to be
very important. The AI is good when you break down a task into a part that a robot would be
especially good at. It's often like a repetitive task, and it often involves something where you can measure success easily. And measuring success is hard. And so it's often going to be the case that we
can put AI on something where we can measure success. Thinking about how to measure success
and thinking beyond short-term clicking type measures about what success really looks like
requires a lot of logical thinking. It also requires thinking about
what sometimes people call second order effects or equilibrium effects. So, you know, for example,
say the AI helped people get jobs by making portfolios. Like if everyone made those same portfolios,
maybe they wouldn't be so effective in getting jobs because part of what was signaling that you
were a good worker was that you figured out to make the portfolio. But if everybody does it, it no longer has the signaling value. That's kind of equilibrium
thinking that is required to anticipate what happens when you put things in. And also in just
being creative in terms of how do you measure success if clicks and eyeball scrolls and stuff
is not enough to understand success. So that kind of thinking,
you know, it becomes more important, not less important. That's a complement to AI; it's not substituted by AI. All right. So in summary, should the average worker be worried
about AI taking their job? People must ask you this all the time, right?
You just say it depends? Or what do you say to them beyond the students, beyond people who are currently like, I'm getting replaced? I mean, I think if your job is to,
you know, create images and sell them or, you know, write ad copy or send repetitive emails
to your customers and handwrite them, you know, that doesn't seem like that's going to last very long. Now, what takes its place
may be managing systems that do those things or measuring systems that do those things, but there
may be less of those jobs. The people who are in those jobs may be more productive. But then,
as I mentioned earlier, jobs may open up that previously had big barriers to entry, big training requirements, while those jobs might
become, you know, more possible for people to transition into. But as I mentioned before,
the big concern is we are terrible at transitions. We are terrible at helping people through
transitions, especially at the lower end of the income distribution. So, what is possible
is not the same as what we're going to actually choose to do.
And it could, in fact, affect people on the higher end, lawyers.
Absolutely. I mean, you see this also already, just research, document searches. But, you know,
we used to have lots of paralegals, you know, go through stacks of documents.
Bates stamping.
Yeah. But now paralegals do keyword searches.
That's going to get changed. But actually, we still use paralegals. It's interesting, though,
we may use people in different ways. And some new lawyers are not getting the experience of,
you know, reading the documents by hand. Like, you can get by only doing keyword searching and never just picking up the documents one by one and sort of seeing where your creativity takes you.
Yep.
That's a really good point.
You still have to read it.
Oh, not really.
As long as you can get a summary of it by AI.
Anyway, Susan, thank you so much.
We really appreciate it.
Okay, Scott, that's it for the final part of our three-part series on the future of work.
Read us out.
Today's show was produced by Lara Naiman, Zoe Marcus, and Taylor Griffin.
Ernie Untertide engineered this episode.
Thanks also to Drew Brose and Neil Severio.
Nishat Kurwa is Vox Media's executive producer of audio.
Make sure you subscribe to the show wherever you listen to podcasts.
Thanks for listening to Pivot from New York Magazine and Vox Media.
You can subscribe to the magazine at nymag.com slash pod. We'll be back next week for another breakdown of all things
tech and business.