a16z Podcast - a16z Podcast: Automation, Jobs, & the Future of Work (and Income)
Episode Date: May 23, 2016
There's no question automation is taking over more and more aspects of work and some jobs altogether. But we're now entering a "third era" of automation, one which went from taking over dangerous work to dull work and now decision-making work, too. So what will it take to deal with a world -- and a workplace -- where machines could be thought of as colleagues? The key lies in distinguishing between automation vs. augmentation, argue the guests on this episode of the a16z Podcast, IT management professor Thomas Davenport and Harvard Business Review editor Julia Kirby, who authored the new book Only Humans Need Apply: Winners and Losers in the Age of Smart Machines. But the argument isn't as simple as saying humans will just do the creative, emotionally intelligent work and that machines will do the rest. The future of work is complex and closely tied to the need for structure, identity, and meaning. Which is also why linking the discussion of things like "universal basic income" to the topic of automation isn't just unnecessary, but depressing and even damaging (or so argue the guests on this episode).
Transcript
Hi, everyone. Welcome to the a16z Podcast. I'm Sonal, and I'm here today with two guests. We have Tom Davenport, who is a professor at Babson College and a research fellow at MIT, and Julia Kirby, who is an editor at Harvard University Press and a contributing editor to Harvard Business Review. But the reason we have them today on the podcast is because they have a book that's just coming out called Only Humans Need Apply. And the subtitle is Winners and Losers in the Age of Smart Machines, which is a topic we talk a lot about as software eats the world.
Welcome, Tom and Julia.
Thanks, Sonal. I'm glad to be here.
Nice to be here.
The best place to just kick off is you guys have this section in your book where you talk about this ode to the AI spring.
And I think that's a really important place to start because both listeners on this podcast and people who have been following the world of artificial intelligence for a long time, often talk about their scars from the contrasting AI winter before that.
Do you want to talk briefly about that?
As you suggest, we've gone through various cycles in this space.
And this is probably the most spring-like spring we've ever had in the sense of interest in the technology, the number of firms that are adopting it.
One of the things that always fascinates me is that even during winter, there were a lot of things kind of quietly happening.
I mean, 10 years ago, I wrote an article saying that automated decision-making was really percolating its way through lots of insurance companies and banks and so on for underwriting and credit issuance.
and so on. But now, I think big data and analytics probably gave a lot of impetus to the
topic, and everything is in full flower all over the place. Yeah, one of my former colleagues,
Brian Arthur, wrote this really compelling piece called The Second Economy. And his idea is that
there's, I'm actually only now putting it in the context of the AI winter, but this entire time that
we've been waiting for AI to have its moment, it's this collection of things that have already
become automated in our lives that are invisible to us every day, like down to checking in
at the bank teller to checking in at the airport. I mean, there's so many ways that automation has,
to your point, percolated and permeated into our lives. I think the question that's top of
mind, though, for people is that no matter how we approach it as investors or researchers or
observers of the phenomena, I think what really people care about at the end of the day is how it
affects their jobs. And the reality is, and we've talked about this a lot on the podcast,
especially lately, because we just came out of a D.C. series where the theme that came up on every single podcast was the realities of the job market, and how people, especially in the U.S., can adapt to this world. And even before that, what are the realities of the job market as software and automation eats the world?
The big fear, I guess, right now, and it's justified, is that a whole kind of set of us who thought that our jobs, our livelihoods were kind of immune to this encroachment
of automation are now having to rethink that confidence, you know. So we've invested a lot of
time and money in gaining those college degrees and advanced degrees so that we can do, you know,
this sophisticated knowledge work. We thought that that meant, you know, we're not going to be
like those assembly line workers or even frontline service workers in, you know, fast food
settings who might be seeing their jobs gobbled up by automation and even by
computers. But with the advent of cognitive technologies, we're now seeing machines capable of
doing decision-making. So you could see this as sort of three waves of automation. That first
machines came along and they automated the dangerous work. And then computers came along and they
started to automate some of that dull work like transcriptions, etc. Now we're at the point
where they're taking over decision-making. And the scary part is that it's hard to see what is the
higher ground that you can move to as a human and still be able to add value in a workplace.
That notion of a higher ground. I mean, first of all, I think it's hard for anybody in this
economy. I mean, we have to always think about knowledge workers, but this idea now that
nobody is safe from automation is a compelling one. And so when you guys think about where
is the higher ground that you can go to as this flood of automation comes in, how do you guys
think about that? I mean, what is the reality? I mean, I don't think it's enough to say,
let's just give people skills training and better STEM. I mean, these are all realities that
we need to adjust to, of course. But what now? What can people do? How should they think about this?
We generally think that since there's no higher ground to which humans can retreat,
then they have to find common ground with the machines that are going to be their colleagues.
And so a lot of our book is around this idea that we'll need to augment their capabilities and have
them augment ours, rather than finding a set of activities that humans will always be better at than
computers. We just don't think those will necessarily exist, but at least for the foreseeable
future, there will be ways that we can add value to what they do and work with them as
colleagues in a whole variety of fields. And I think that augmentation-oriented future, we think,
is by far the most likely one for how AI travels through the occupational world.
And that's especially true because of the fact that when computers move into a workplace,
they never take anybody's entire job, that what they do is they take away certain tasks.
And all of us have, you know, a certain percentage of our time that is spent on, yes, it's knowledge work,
but it's very codifiable, rules-based knowledge work that, you know, given the right algorithm, could
be turned over to a computer because it doesn't involve a lot of ambiguity or creativity.
And so it's those tasks that are getting chipped away from jobs today.
So the reality is that every knowledge work job is going to see this encroachment of smart machines into the workplace.
And it's just a readjustment that people have to make to figure out, okay, what do I do in this
equation, what does the machine do, and how do we make best use of what both parts of that
equation can do best? So before we talk about how people can engage, because I think that's a really
important question, I do want to pause for a moment on this concept that you guys are both
reinforcing of augmentation versus automation outright, because I think it's really important,
because it's a difference between reacting to something that just sort of happens to you versus
proactively thinking, okay, let's expect this, let's treat it as a given. Let's just say, like,
machines are going to be our colleagues, machines are going to automate parts of
our jobs, whether you're a knowledge worker and they automate certain parts of it or you're
a worker where they automate huge chunks of it. I think that's a really important idea.
And it reminds me of the original notion that Doug Engelbart had of augmenting intelligence
and really thinking about computers, the mouse, as an extension of us versus something that
competes with us, which I think is a reality of our lives. I mean, I think people treat
their smartphones already as an appendage literally. So there's a little bit of that.
already. But can you guys talk a little bit more about the difference between augmentation and
automation and why that's so important? Yeah, you know, I think there's always been this tension
about AI, whether it would augment us or fully automate us. And Engelbart was certainly
among the original thinkers about the augmentation idea. Unfortunately, because automation
is a more, I don't know, dramatic and attention-getting scenario, we tend to keep coming back
to that. So probably everybody listening to this has seen that Oxford University study
suggesting 47% of U.S. jobs are automatable. It doesn't say anything about when that might
happen or, you know, which tasks will be taken over in jobs by machines. Because as Julia said,
it's never whole jobs. It's just tasks within jobs. So what we've tried to do
in this book is to find various ways that humans can either do things that computers don't do
very well and probably won't for the foreseeable future, or, as I was suggesting earlier,
work alongside them almost as colleagues or, you know, instead of being a supervisor of a human
or a set of humans, you be a supervisor of a machine and really treat them as co-workers with a
certain set of skills and also a certain set of shortcomings.
Let's talk about those ways of engaging then.
I mean, concretely, what are ways that people can engage and adapt their readiness for jobs or existing jobs for this world?
The kind of core augmentation job, as we see it, is we call it stepping in.
That's really treating a smart machine as your daily colleague, kind of monitoring its performance day to day, maybe improving it a little bit.
You know, this has been going on in some industries for a while.
In insurance, for example, not the most exciting industry, but one of the earliest that used
AI, particularly rule-based systems to a substantial degree, there were underwriters who would
underwrite the policies that got spat out by a computer and would improve rules if it looked
like they weren't working. And, you know, this goes all the way back to the industrial revolution
where when factories were installing new textile machinery, for example, somebody had to
configure it, fix it, educate people on how to use it effectively, and so on. So stepping in,
we think is a very key job. There's also stepping up, and if you think about the archetypal
stepping up job, it's a managerial role, like a hedge fund manager who, even though all the
trading in a hedge fund might be done by a machine, the hedge fund manager sort of looks at the entire
portfolio and monitors how well it's going and do we need more or less automation and do we need
substantial change or has the world changed? And maybe this particular set of algorithms or
rules or whatever isn't really appropriate anymore. And so you turn an automated system off.
And then finally, in this, you know, working with machines category, there's what we call
stepping forward. And that's developing the intelligent technologies of the present and future,
not only writing the code for them, but also marketing them and supporting them.
And we know already that big companies like IBM are hiring thousands of people to do this.
And I think there's every reason to expect that as these technologies take hold, almost every vendor will have some level of cognitive capabilities.
And a lot of people will need to be hired and employed for that purpose.
You know, all these are kind of moves that you can make vis-a-vis the machines that are now in your workplace sharing your workload.
So what is it that you're doing that they're not doing?
A couple other ways that people can step, to use our stepping kind of motif, is you can step aside,
which would mean that you are now banking on all the stuff that is so uniquely human that it's not going to be programmed into machines.
So, you know, that may be creativity, maybe complex communication, dealing with ambiguity, it may be humor, it may be taste.
These are things that just computers are not very good at.
And it may be because, you know, it takes a sort of a human to know a human.
And when you have human customers, a lot of times you need to, you know, you need to just be simpatico.
So there are a lot of parts of work that are going to rely on just very human strength.
So one example of this is in financial advising where now you have the robo advisors.
And these are great algorithms for figuring out how your investments should be allocated
across a portfolio of different things with different risks and different returns.
And at any point in your life, whatever your goals are, there's an optimal allocation of that.
Well, computers are really good at figuring out what that is,
and it's very hard for the human mind to keep up with them on that.
So we talked to a financial advisor and asked him,
does this worry you about the future of your profession?
He said, definitely I hear the footsteps behind me.
And already I feel like I'm spending more of my time
being almost like a psychiatrist to my clients,
telling them what the machine says.
But our point is that is extremely valuable,
providing that handholding that the client needs.
And that is not the part that the computer can do.
So that would be stepping aside.
I mean, you're not leaving your job.
You're just focusing on the parts of the job that require the human touch
and leaving the parts of the job to the computer that it can excel at.
And then the last kind of way of stepping would be to step narrowly,
which is to focus on an area where you're really in such a niche area that there is no
compelling economic case to be made for putting it in silico, for creating an algorithm to do
it. In silico, I've never heard that. It's probably because it's an area of new discovery
and where, you know, there's not so much demand for it that a human can't serve much of the
demand for it or a small set of humans. So, for instance, in scientific inquiry, you're always
looking for the thing that hasn't yet been discovered. And you're kind of going into narrower and
narrower and narrower niches. And that is a very viable strategy for human work, because only after
that new territory has been discovered will it eventually move into the realm of automation.
A few weeks ago, I included this article in our newsletter about how machine learning is being
used to discover drugs from lab notebook notes. And it's super fascinating,
because it's an example of a discovery that humans would have actually ignored.
And I think it's really interesting because I used to embrace this idea that as computers automate more and more of our lives, humans can actually be more creative and we can do more of these narrow and interesting things in our skill sets, the psychological, the emotionally intelligent things.
And I definitely think that's true.
But I also think that there's just this concurrent move where we're seeing a new kind of creativity coming out of machines that we haven't even begun to explore yet.
I mean, so far, they're just doing things that are versions of human activities, like, you know, algorithms that are painting like Van Gogh, or, you know, that's the obvious case.
Or deep learning art.
I just went to an art show a few weeks ago where there was art being generated out of deep learning algorithms and it looked very cliche, but it was still an interesting beginning.
I think it's not so black and white anymore.
I'd love to hear your thoughts on what happens as these worlds become muddy, because I think there's a potential that computers can actually get more creative as well.
I was talking about this issue last night with my
son, who's visiting me from L.A. He's a TV comedy writer. And we had put an example in the book
of a joke that a computer had created. And he said, oh, this is so lame. It's so obviously
programmed. Even the category of joke has obviously been programmed in. So, you know, I agree that we
both agree that there are more of these kinds of creative things starting to happen. But they're pretty
far behind humans so far. And I think at some point there will probably be just a human
preference for human created art and humor and so on just because it's human.
Humans as the artisan. It's like, oh, it's artisanal, not AI. It's
actually human-crafted. I mean, it's a relief to hear you say that. I mean, I'm only bringing
it up because even the example of hedge fund that you brought up earlier, Tom, I mean, I was thinking
of the show Billions. But one of the funniest things about it is that beyond all the things that
can quantitatively and computationally happen that a computer could easily do, there's a huge
broker network of information where the people are actually hubs of information flows.
I'm not even thinking the way a computer could do it, where it would find signal in the noise.
It's like you're saying, Julia, where there's this human interaction with information.
Like, there's a funny scene where one of the characters gets some insider trading knowledge,
which he didn't get busted for because technically he made it sound like he was just helping
a farmer's daughter.
I mean, if you haven't seen the show, this probably means nothing.
But the moral of the story for this purpose of this podcast is that there is information
and it's got layers and layers of intuition built into it.
And I think it's tough to tease apart sometimes which parts are human and which parts are machine.
Yeah, I mean, in that show, I've only seen some of the episodes.
But, you know, there's obviously a lot of psychological calculation going on.
Is this person telling the truth?
Is their confidence, you know, just posturing,
or is it really real?
And I think, you know, think about a poker game,
you could obviously get a computer
to easily figure out the different hands of poker
and what to bet under certain circumstances,
but the whole thing about looking in your opponent's eyes
and figuring out whether they're bluffing or not,
I think that's going to be tough for a machine to do for a while.
Let's talk now then about something
that I think a lot of people link to the AI discussion
for better or worse.
I don't fully subscribe to the link,
but a lot of people make this link
between the future of automation and the future and this conversation about the need for
universal basic income, that people should have a fundamental basic amount of money to live on.
And the ideas behind it are complex and varied. I personally think, I mean, I'm more into the
idea of thinking about insurance for people versus subsidizing a baseline, but that's a whole
other conversation. The conversation that I want to focus on today about universal basic income
is it's linked to automation and that some people argue that having a universal
basic income can actually change the way we think about work because if you don't have to worry
about basic necessities and survival, you can then think about work as this creative act. And that's
why they tend to link it sometimes to this notion of automation and creativity. And I'd love to hear
your guys' thoughts, especially having written this book. Right. Universal basic income, I mean,
the link that people make with it is a really, I guess, depressing link. So I would say, first of all,
it's kind of an unnecessary link and it's sort of a damaging way of thinking about things.
Why do you say that? Because I think that'll be really controversial for some of our listeners
who are very pro that link. Well, of course, the reason people make the link is because they
believe that automation means that there will be much, much less for humans to do. And that
somehow we have to provide for human livelihood because jobs won't do it anymore. And so we
just fundamentally reject that premise. And that's kind of what our entire book is about. It's showing
that, in fact, there are lots of ways that humans are still going to be able to and be required
to add value to what computers do in the workplace. So we don't at all foresee a future with no need
for human employment, where all work can be given over to smart machines. So if you reject that
premise, then you don't have to think so hard about, well, then how do you provide for people's
livelihoods? So that's why it's unnecessary. The reason that I think it's actually a damaging
way to think about things is that it denies the fact that actually work is really important to
people. It's really part of the human condition. I mean, being part of something bigger than
yourself, aligning your efforts with other people's, being compensated for your efforts. This is all
really important to identity formation and a sense of being worthwhile. And, you know, it's easy to say,
oh, well, we would all just have the level of self-discipline required to do all that,
even if our pay wasn't contingent on it, or even if there was no link between effort and
reward in the world. But that's simply not what we have ever seen. And what we know is that we
actually appreciate having, you know, not just the source of income, but the source of structure in
our lives, the meaning that it gives to us. And, you know, work is a good
thing to have. Even dogs like work. I mean, some dog feeders that I've had for my dogs,
you know, put these little saddles on the dogs because they like to feel like they have
something important to do and carry around. So, I mean, we do believe that it's certainly possible
that there will be some job loss on the margins, but because of this belief in the importance
of work for, you know, meaning and life satisfaction, we'd argue for guaranteed work,
which would be compensated rather than guaranteed income alone. And as Julia suggests,
there haven't been a whole lot of experiments around the world. We're probably going to see one in Switzerland within a couple of weeks, when there's a vote on a guaranteed basic income of only about $2,500 a month. But in the experiments thus far, it appears that instead of doing, you know, highly meaningful activities, people just watch more TV. So clearly, I don't think that would make for a terribly satisfying life, spending your days watching television.
Totally throw off our productivity numbers too, which is apparently a big deal. Yeah, no, I think that's right. The key point to me on the UBI topic, the universal basic income topic, is that incentives are just completely misaligned. And I think that's why to me the insurance angle is really interesting, because it's similar to your notion of guaranteed work: the incentives are more aligned with how people are driven. I will say, however, about this notion that that's how we've always done it, that people find meaning in work and there's a requirement and a drive for structure in work. But I do think that
the way things change does surprise us and we don't know how our lives will be as
even more of it is automated. We could actually find ourselves very surprised by how the nature
of work changes. Well, I think it'll be very interesting if Switzerland votes this in. So we'll have
one of the world's largest experiments on what people do when they can live at least at a very
low level without having to work. It'll be interesting to see whether it lowers the desire to pursue
work, and what people do in their, quote, leisure time.
It will be interesting. And it's always going to be an early case study. But just like with all
studies, when we talk about countries like Sweden and Switzerland, they're smaller and
less diverse than an extremely heterogeneous place like the U.S. or India or even
China when you think about just layers of culture and tradition and people's relationships.
So it'll be super fascinating to see how these play out as it cascades into other regions.
Two quick last questions then in the context of this world of augmentation versus automation
and for this world where only humans need apply.
What happens to the nature of the firm, the organization?
And how does this change management science?
I mean, you guys are both affiliated with MIT and Harvard.
And so I'm curious for the management school thinking of what happens on this.
That's right.
So we know all the answers on these management topics because we hang out with the really smart people in Cambridge.
Well, we believe that organizations will continue to exist and, in fact, may bring more of their work back in-house than they have over the past couple of decades, that automation may take back a fair amount of the work that was distributed through outsourcing.
And we think that it's very important for an organization from the beginning to say augmentation is our preference and our objective
here, in part because pursuing automation tends to be kind of a race to the bottom in many ways.
It sort of lowers everybody's costs, but it also lowers everybody's margins, and everybody
ends up doing kind of similar, not so innovative things. Letting people know that augmentation
is what we're doing, and ideally saying, you know, we're not going to lay off people just because of
automation, frees up everybody in an organization to think about what they could do with these new
technologies and potentially liberates them from the tedious work that we all still have to do.
Well, it's funny because you talked about, you know, the scholars in Cambridge and one of the
things that we notice a lot with startups and entrepreneurs is that the management, I mean,
there's basic fundamentals that are true of any company, you know, that have to do with profit
and loss and running, you know, good financials and running a good organization and culture and
HR. They're just best practices. But there are these things where I wonder if even the emotional
component, and this references the point you made earlier, Julia, will change management
science. Because will it focus more on emotional intelligence versus, like you said, the outsourcing
model and thinking of efficiencies and cost-benefit analyses? Like, does it change management science
in that way? Where are we in that evolution? Is it still too early to see? Yeah, no, I think you're
absolutely onto something there. I mean, we've always known that management is kind of a synthetic
discipline. It draws on engineering. It draws on psychology. It draws on sociology. And
I guess you might say it has drawn on engineering much more in the past in terms of, you know,
what is the optimal workflow and how could we possibly do this more efficiently, which probably
means bringing in more automation and getting rid of some of this expensive wetware.
But maybe now we'll see it move much more to drawing on psychology and really thinking about,
okay, now if my true source of competitive advantage is the human element here and how well
leveraged it is by these augmenting technologies, then what does it mean to make somebody more
human than they were before? How could I enable the people who work in my organization to draw
on the part of them that's really going to give me a competitive advantage because we're now
working with some stuff that isn't easy to put into code and therefore won't be in our
competitors' hands by next week? I just want to say thank you for joining the a16z Podcast, and I think
people should just read your book, Only Humans Need Apply. Thank you, Thomas and Julia.
Thank you. Well, thank you.