Big Technology Podcast - Wait, The Robots Didn't Take Our Jobs? — With Erik Brynjolfsson
Episode Date: June 2, 2022. Erik Brynjolfsson is the director of the Stanford Digital Economy Lab and a professor at the Stanford Institute for Human-Centered AI. He joins Big Technology Podcast for a discussion of why our fears that artificial intelligence would take human jobs haven't yet come to fruition. We also cover how humans and AI can work together and how AI is changing work already. Stay tuned for the second half, where we discuss the latest on robotic process automation and address why we're working at all in the age of machines. Check out Prof. Brynjolfsson's paper: The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence
Transcript
LinkedIn Presents
Hello and welcome to the Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond.
And we are coming to you for one more episode from Davos.
We've had a series of them.
And we're here in collaboration with the Web3 Foundation and Unfinished.
It's been a heck of a week: five shows in two weeks.
I'd love to hear what you think about it.
So please send feedback to big technology podcast at gmail.com.
Our guest today is Professor Erik Brynjolfsson.
He is a professor and senior fellow at Stanford University,
director of the Digital Economy Lab there,
and co-author of a great book, The Second Machine Age,
which I recommend you pick up.
Eric, welcome to the show.
Pleasure to be here, Alex.
So the way I came across your work was I was in Amazon headquarters
reporting for my book, Always Day 1,
talking about how AI and corporations,
will mesh and how that changes work.
And I'm having a conversation about it with Jeff Wilkie,
who is the CEO of Consumer Worldwide there
before he stepped down last year.
And he said, you've got to read Erik Brynjolfsson's book.
So I read it, and I think that your work on the way that AI and work combine
is really fascinating, and I'm excited to have you here.
Well, that's great to hear.
I'm a huge fan of Jeff Wilkie's. We first met back when he was at MIT, and he's responsible
for a lot of Amazon's success. So I'm glad he liked the book.
Yeah, and Wilkie, by the way, for listeners and viewers, he was responsible for running the entire Amazon retail operation, among other things. So he was basically running the retail side of the Amazon business while Jeff Bezos was CEO, and then eventually left. So the study of AI and work, it's really interesting. It's nascent. And it's not quite a technology study, not quite a sociology study, not quite a labor study, but kind of falls squarely in the middle of all three. What got you into it, Professor?
Well, you know, since I was a kid, I was a fan of science fiction and always saw and imagined the way that technology could change the world. And, you know, I read Isaac Asimov's Foundation series, where he made up this profession of psychohistorians who can understand the great sweep of history and these mega trends.
And I thought that was kind of cool.
So I think that kind of got me into economics.
And I was debating, you know, in college and afterwards, whether to go more into AI or more into economics. I did actually take a number of AI courses and taught a course on AI right after I graduated.
I was hoping I could do both of them simultaneously when I went to MIT.
It turned out the two groups didn't really talk to each other that much.
So I ended up doing the economics track, but focusing on how technology was changing the world.
And ever since, I've been thinking about, okay, when I look at a change in the economy, to what extent can technology explain that?
Or conversely, to what extent can a change in technology lead to changes in the economy?
And what can we do to shape those changes in a way that lead to better outcomes?
Was there a moment when artificial intelligence and machine learning started to come onto the radar for you and made you think it was worth studying in greater depth?
Well, pretty early, honestly.
I mean, definitely by high school, I was reading, you know, The Mind's I, and Douglas Hofstadter, and Patrick Winston had a book on artificial intelligence.
And, of course, if you read Isaac Asimov, you know, I, Robot goes back to the 50s and 60s, when he was writing some of those stories.
So that's been something that, like generations of people, I thought was important.
For me, I also just more broadly was looking at digital technologies.
And a lot of my work in the 90s was about the Internet and search costs and information goods.
You know, things made out of bits versus out of atoms have very different properties.
So trying to understand the economics of information.
But always keeping an eye and doing occasional work on AI in particular.
And then I think around 2010 or thereabouts, when a lot of people were worried about what was happening to wages and work and productivity, I started diving in more closely and saw the role of AI.
I took a ride in a Google self-driving car.
I think it was 2011, actually.
We drove up Route 101 up to San Francisco and turned around.
And I was like, wow, this is, seems like it's almost ready for prime time.
The highway was driving fine.
They had the human take over when we turned around, at the cloverleaf, to come back down.
But I said, okay, well, they'll figure that out soon.
And so that was something where I think I was a little prematurely optimistic about how rapidly the cars would be rolling out.
But really, since the early 2010s, I have been focusing more and more on machine learning and AI, because we've seen this revolution driven by deep learning and supervised learning systems that have been able to solve a lot of problems they previously couldn't.
Right now, I don't think it's controversial to say that we're living in the most advanced
moment for machine learning and artificial intelligence, especially as it's applied in the workplace.
For sure.
We've never had anything like this, and it's accelerated extremely fast.
However, the rumors that AI was going to take people's jobs and lead to mass unemployment haven't come true.
Right.
Andrew Yang ran a political campaign saying we need universal basic income.
Turns out we didn't.
We're as close to full employment as you can get in the U.S.
That's right.
And artificial intelligence doesn't seem to be threatening jobs.
Now, maybe I'm making some assumptions here, but can you square those two ideas?
Yeah, no, absolutely.
And I love Andrew Yang.
He's a super smart guy.
I've had some good conversations with him.
And I'm very flattered that he cites our work in some of his campaign literature, et cetera.
And I appreciate that.
But, you know, we actually, in the book, we were not pushing for universal basic income,
and we did not predict mass unemployment.
Some people didn't read the book that carefully and maybe, you know, saw that we were talking
about some big changes coming.
But we always talked about what we call the great restructuring, not the great mass unemployment, or the great resignation even. The reality is that while AI is very powerful, human-level, even superhuman in certain specific narrow tasks, we're still very far from artificial general intelligence, you know, AI that can do the broad set of things that humans can do. In fact, I did a study that was published in Science
with Tom Mitchell of Carnegie Mellon University where we looked at about 18,000 specific tasks that
humans do in the economy. O*NET actually publishes a list of about 950 occupations, where each of them has about 20 or 30 tasks. So it adds up to about 18,000 tasks. And we evaluated them on a set of criteria as to whether machine learning would likely be able to do them. We call this the suitability for machine learning rubric. What we found was that in most jobs, there were some tasks that machine learning could do better than humans, or would be able to if you applied the technology.
It's kind of a gold rush now to do that.
But in not a single occupation, we looked at all of them, did we find that machine learning could run the table and do all of the tasks?
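The occupation-level analysis he describes can be sketched as a toy computation. To be clear, this is a hypothetical illustration, not the actual O*NET data or the rubric scores from the Brynjolfsson-Mitchell study: the occupations, tasks, scores, and threshold below are all made up.

```python
# Toy sketch of the task-level analysis described above (hypothetical data).
# Each occupation is a bundle of tasks; each task gets a made-up "suitability
# for machine learning" (SML) score between 0 and 1. The point: even when some
# tasks score high, no occupation has ALL of its tasks scoring high.

occupations = {
    "radiologist": {
        "read medical images": 0.9,
        "counsel patients": 0.2,
        "administer conscious sedation": 0.1,
    },
    "cashier": {
        "scan items": 0.95,
        "handle disputes": 0.3,
    },
}

SML_THRESHOLD = 0.7  # hypothetical cutoff for "machine learning could do this"

for job, tasks in occupations.items():
    automatable = [t for t, score in tasks.items() if score >= SML_THRESHOLD]
    fully_automatable = len(automatable) == len(tasks)
    print(f"{job}: {len(automatable)}/{len(tasks)} tasks suitable for ML, "
          f"fully automatable: {fully_automatable}")
```

With data like this, every occupation has at least one high-scoring task and at least one low-scoring one, which mirrors the finding that machine learning never "runs the table" on a whole occupation.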
You know, for instance, people often talk about reading medical images and radiologists being put out of work.
I've heard a number of machine learning people talk about that.
And it's true that machine learning can read medical images very well, in some cases, better than humans to detect cancer or other anomalies.
But there are 26 tasks that radiologists do, according to the O*NET taxonomy.
And for most of them, machine learning is not very helpful.
It's not helpful in counseling patients and comforting them when they get a diagnosis, or coordinating care with other physicians, or setting up the machinery. One of the tasks is "administers conscious sedation." I'm like, I don't think I'd want a machine to be administering sedation to people. And that's even one of the jobs that is in some ways more vulnerable. Many other jobs, like, you know, a carpenter, are even further from having machine learning do their tasks. That said, if you look at the subset of
tasks where machine learning can help or take over for humans or augment humans, it adds up to
about a trillion dollars' worth of work, even if you're being most conservative. And so there's a lot of restructuring
and reorganization that will happen in the economy. But for quite a while, there'll still be
demand for human labor. And so I've always focused more on how can we redeploy, retrain people
as some tasks become less important, other tasks become more important. In short, there's no
shortage of work that needs to be done. If you look around the economy, you know, people taking care of the elderly or children, or cleaning the environment, or even art and science that only humans can do, at least with existing technology. So we need people
to be working on all of those areas. And we're pretty far from saying that there's nothing
left for people to be working on.
Do you think that the fear of artificial intelligence taking our jobs was overblown, or is overblown?
Well, so let me nuance it. It's taking certain tasks. It's not leading to mass unemployment, but it is shifting the demand for different types of skills. And so there are places where there's less demand and there's more demand, and wages in some areas are depressed when a machine can do some of the jobs, especially some of the repetitive, rote kind of work. So middle-skill routine information processing work, a lot of that has been depressed, and median wages have been stagnating in those areas. So there are effects on demand, but it's not like mass unemployment. It's more what's happening to wages and income and income inequality that have been the drivers. And I do think it's worth thinking about those and taking steps in terms of education, retraining, maybe income support, progressive taxation, to help cushion that and help smooth the transition to new kinds of jobs. But, you know, it's not the kind of thing where we throw up our hands and say, well, there's nothing these people can do, let's just give them UBI.
You have a paper out called The Turing Trap, which talks a little bit about this. And one of the things that surprised me in reading it was that you very concretely drew the line from automation and machine learning to bigger societal problems, like concentration of wealth among fewer people, income inequality, societal unrest.
Do we have any evidence that that's happening now? Because, you know, I always thought that the link between what machine learning was doing in the workplace and these bigger issues wasn't as firm. And it seems like, from your research, there's actually a bigger link than I was imagining.
There's a very strong link between technological advances, especially information technology, and changes in the wage structure, depending on how broadly you define it.
Now, AI and machine learning is sort of the leading edge of that, and there's a new set of things that are being affected by that.
But going back for 20 or 30 years, there's a mountain of research, some of which I did, by people like Larry Katz, David Autor, Daron Acemoglu, that has documented the link between changes in uses of technology and changes in the wage structure.
There are really three big forces that are affecting it.
One is technology.
Another is globalization and trade.
And the third one is, you know, government tax structure and those things.
Of the three, most economists would say that technology is the single biggest one.
Larry Katz, who runs the Quarterly Journal of Economics and who's, you know, a leading labor economist, has looked at many of these issues and says that it's not even close. So in that sense, there is strong evidence. Now, you go into
specific technologies and then you need to look at particular case studies, and then there are a number
of studies where technology has had beneficial effects, negative effects in different ways.
But the broad story is that technology can certainly move the wage structure. And going back to
what I described earlier, that work on suitability for machine learning, that's a little bit more
forward-looking. That looks at the tasks that potentially could be done by machine learning.
And if you run the math through, if some percentage of those tasks are done by machines,
then there will be less labor demand for those kinds of tasks, more labor demand for
complements to them, and that will shift the demand for skills significantly.
Our analysis that we did in a follow-up paper with Daniel Rock of Wharton suggests that, as with
the previous wave, the next wave of technology also will disproportionately affect some of the
lower and middle-skilled jobs compared to the higher-skilled jobs. You know, cashiers and
bookkeepers are more likely to be in the affected part, but also airline pilots who are among
the higher-paid ones will also be somewhat affected. Radiologists, I mentioned. But on average,
the effect could be to increase inequality depending on how the technology is used.
One of the examples that I've heard that I think is pretty illustrative, or illustrative, I don't know how you say the word, is the accountants. You know, lots of families would have their own family accountant, you know, back in the day.
And an accountant was a pretty good middle-class job.
And you could do that work.
And now they've moved to TurboTax, and the benefits are accruing to Intuit.
Yeah.
So there's an example of some of that.
But I want to come to one of the main points from the Turing Trap paper, which is that there are different ways of using technology.
Technology can be used to imitate and replicate and automate what humans are doing.
It can also be used to complement and extend or augment what humans are doing.
And both of those can be very profitable.
Both of those are successful strategies, and there are definitely some things we would love to have automated away.
If a job is dirty or dangerous or dull, you know, let's go ahead and have machines do it.
But most human progress has actually not come from that kind of automation, from replacing what we're doing.
And I give a little example, or hypothetical thought experiment, in the paper.
If you go back to ancient Athens, suppose that somebody had invented a miraculous set of robots that automated every job in ancient Athens, but only the jobs in ancient Athens. So you just automated what they were already doing, whether it was making clay pots or, you know, tunics, or burning incense for sick people. All of that could be automated. I think it's pretty clear that, you know, even having lots of free clay pots and burning incense, their living standards wouldn't be all that high. And it's true, they wouldn't have to work at all. So there'd be zero labor. They'd have a lot of leisure. But ultimately, their living standards would be nowhere close to ours. They wouldn't have iPhones or jet planes or penicillin or COVID vaccines. So most progress since that time has not come from taking
existing tasks and simply automating them. It's come from using the technology to allow us to do
new things. And a second important point I make in that paper is that when you do use the technology to automate tasks, not only are you not making the pie as big as it could be, not really raising the level as much, but you're also shifting things around. As you mentioned earlier, it leads to more of a concentration of wealth among capital owners. When you use technology to automate what a worker is doing, there's less demand for the worker, there's less labor income, lower wages, maybe even zero wages, and more income for the capital owner.
That leads to more concentration of wealth.
Capital is more concentrated than labor.
Conversely, if you use the technology to augment people,
that is, use the technology to allow them to do new things they couldn't do before
and complement what they're doing,
then that tends to raise wages and create more widely shared prosperity.
In the Turing Trap article, I argue that, as I said earlier, while both of those strategies can in principle be profitable, right now there are excess incentives to focus on substitution and automation, and not enough to focus on complementing humans. Whether you're a technologist trying to, you know, pass the Turing test (the title of the paper is an homage to the Turing test, which was originally called the imitation game), trying to make a machine that imitates humans, that is focusing too much on substitution; or whether you're a manager looking to take the labor out of your factory or out of your process by replacing each worker with a machine, that's focused on substitution; or whether you're doing tax policy and you give significantly lower tax rates to capital owners as opposed to labor, which is what the U.S. currently does. It wasn't always like that. In 1986, they were even. So in each of those cases, the thumb is on the scale
towards automation and substitution. And I think that's a mistake. At a minimum, we should have a
level playing field and just do whichever one works better. And I think you could even go further
and say, if we're going to put the thumb on either side of the scale, I think we should push
more towards augmenting rather than substitution. And that's the message of that paper is we need to
rethink how we're using these technologies. And ultimately, I hope technologists and managers
and policymakers will think harder about how we can use the technology, not only make the pie
bigger, but to create shared prosperity through this complementing strategy. Professor Erik Brynjolfsson is with us. He is a professor and senior fellow at Stanford University, where he directs the Digital Economy Lab. We'll be back right after this.
Hey, everyone. Let me tell you about the Hustle Daily Show, a podcast
filled with business, tech news, and original stories to keep you in the loop on what's
trending. More than two million professionals read The Hustle's daily email for its
irreverent and informative takes on business and tech news. Now, they have a daily podcast called
The Hustle Daily Show, where their team of writers break down the biggest business headlines
in 15 minutes or less and explain why you should care about them. So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
And we're back here on Big Technology Podcast with Professor Erik Brynjolfsson from Stanford.
He wrote The Second Machine Age, a great book that you should check out.
Professor, very interesting hearing you talk about the way that machine learning can change work.
When I watched what Amazon had been doing, I always felt that they were working toward the automate side versus the augment side.
And I want to give you a chance to respond to this.
But it seems to me that they automated a lot of work in the retail organization to make room for those retail workers to end up building new products. Amazon Go, for instance, came out of a group that had been in the retail organization, under Dilip Kumar, who led pricing and promotions. All that stuff got automated. When he finished spending a year and a half working as Jeff Bezos's technical advisor, he then went ahead and created the Amazon Go store, which is now core to the company's strategy.
Right.
And so I always thought,
okay, you automate the execution work, work that's keeping the business running. You make room
for idea work, anything involved in building something new, and that's sort of how you make
progress. But it's the augmentation side of it is a different wrinkle that I hadn't considered.
Yeah. And let me be clear. There are places where automation is great, and I've visited
those Amazon distribution centers, and some of that work is pretty boring and routine,
and it'd be great if a robot could take over some of that. And this is even on the white-collar side of the business: people ordering the products and handling stocking. Routine work like that. I mean, I think it's actually a fortunate coincidence that the kinds of things most people don't really like doing, repetitive, boring kinds of work that don't involve much creativity, machines are pretty good at. The kinds of things most people prefer, creative work or interacting with other humans, you know, the human touch and relationships, that's stuff that the machines are not very good at. So we have a kind of a nice division of labor forming, at least with current technologies, where we can have people focus on things that they actually like a little bit more.
But, you know, Amazon's an interesting example. Or the Amazon Go Store in particular, I wish them
success with that, and it's great that they're pushing the envelope on that. But I want to do
another thought experiment, which is, you know, Amazon's about a $2 trillion company now, a super
successful company. And back in the 1990s, Jeff Bezos was looking at bookstores and thinking,
hey, we can use technology to change the way bookstores run. If he had been uncreative,
he would have walked into a bookstore and said, you know, that cashier, we could automate
the cashier. We could put a robot where the cashier is and check people out with a robot
cashier. And, you know, maybe that would have led to a little labor savings. I don't know.
It would not really have moved the dial much in terms of, I'm sure there wouldn't have been
a $2 trillion company if that's simply what they had done.
And in many ways, it was a lot harder.
I mean, even today, Amazon Go trying to do something like that is, you know, beta technology.
Instead, what he did was he looked at existing technology and said, you know, we can do things entirely different now with the Internet.
We don't need a physical cashier.
We don't need physical stores.
You know, we can have people order through a browser, et cetera.
And that much more creative way of using the technology created a lot more value for all of us, including for Amazon shareholders eventually.
And I think that's a good example of how simply trying to take the existing process and automate what people are doing with machinery is uncreative and usually not something that makes a big impact.
Managers who are able to think more broadly to do things differently and combine the components in new ways are able to usually create a lot more value.
And so Amazon's a great example of that.
And that's one of my messages when I teach at the business school to my MBA students
is try to think a little bit more creatively about how to use the technology.
Isn't there a darker side to all this?
I was listening to Amazon's shareholder meeting last week,
and it was striking to me that there were shareholder proposals coming through from workers
who felt that they were being managed by robot managers who were tracking their time off task
and, you know, assigning them goals and firing them in some cases.
Is that the type of automation we want?
It doesn't sound like it.
You know, I'm not familiar with that particular set of what they're doing with Amazon in that category.
But the broader point is spot on, which is that these technologies can be used to make the pie bigger, to create more shared prosperity,
create more freedom and well-being, or the technologies can be used to concentrate wealth,
to reduce freedom, to make people a lot, most people worse off.
There's no law of economics that says that everyone's going to benefit from these technologies
automatically.
And I think it's really important for us to think about what our values are.
What are we trying to achieve with these technologies?
Our technologies today, as you said at the opening of the podcast, are more powerful now than
they've ever been. And almost by definition, that means that our tools are able to change
the world more than they ever could before. So we have more agency. Our decisions make a big
impact. There's only so much somebody could do with a spear or a rock, you know, a thousand years
ago. But today, you can literally, like, change the world in quite dramatic ways if you use the
technology. And that could be done in a way that makes the next 10 years the best decade we've
ever had or in a way that makes it one of the worst or the worst. So I really put a lot more agency
on us and I think it's really important that we think carefully about how we use these technologies
and think about what our values are, what kind of world we want to build. I don't buy the idea
that, you know, a technologist's job is just to make, you know, technologies that work and leave it
to other people. Everybody has to be thinking about what their values are when they build
technologies. I have a couple more questions for you. I want to know what you think about
robotic process automation. Because a couple of years ago, this is especially when I was writing
my book, it seemed like it was the hottest thing in the world. Companies like UiPath were raising massive valuations, talking about putting a robot at every desk, giving every employee their own robot. They're struggling pretty badly right now. Was the technology not there? Was it that organizations couldn't implement it well enough? What happened in that situation?
Well, just to make sure your listeners know, robotic process automation generally doesn't refer to the kind of physical robots that we think of.
Right.
It's referring to, you know, information robots, like ones that will automate filling out a form.
And there's a huge amount of white collar work that's like incredibly boring and routine.
We talked about that where, you know, insurance forms or medical forms, whatever, all need to be filled out and processed.
And the idea of robotic process automation was to automate a lot of that.
And, you know, like I said, these dirty or dull, boring jobs can be automated, and more power to them. I think that RPA was a bit overhyped,
and so there's a bit of a bubble. There's definitely some value there. But in the end, it's what I
said earlier. It's kind of focusing on taking what people are already doing and automating that.
Often they'll literally like fill in forms that were made for humans. And so you're just,
it's like my cashier example. You're not really thinking broadly enough. What you probably want to do
is have the information systems connect to each other at a much more fundamental level,
not going through this form that was designed for humans.
And I think it's just not being creative enough about how data systems can communicate with each other.
So, yeah, it automates a bit of human labor,
but it isn't really reimagining work the way I was calling for.
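The contrast he draws, driving a human-facing form versus connecting systems at a more fundamental level, can be sketched roughly like this. The functions, field names, and data are all hypothetical, not any real RPA product's API.

```python
# Illustrative contrast (hypothetical functions and fields, not a real RPA
# product's API). RPA-style automation fills in a form that was designed for
# humans, field by field; a redesigned integration passes structured data
# between systems directly.

claim = {"patient_id": "12345", "procedure_code": "A100", "amount": 250.00}

def rpa_style_submit(claim):
    """Mimic a human: fill each field of a form built for people."""
    form = {}
    form["field_patient"] = claim["patient_id"]       # type into box 1
    form["field_code"] = claim["procedure_code"]      # type into box 2
    form["field_amount"] = f"${claim['amount']:.2f}"  # reformat for the form
    return form  # the receiving system must then re-parse these strings

def direct_exchange(claim):
    """Skip the human-shaped form: send the structured data as-is."""
    return claim  # e.g., a structured payload to the other system's API

print(rpa_style_submit(claim))
print(direct_exchange(claim))
```

The RPA path works, but it locks the integration to a form made for people, with lossy reformatting on both ends; the direct path is the "more fundamental level" of connection he is calling for.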
You know, I'm struck by your example of, it was Athens, ancient Athens.
Right, digital Athens.
Oh, yeah, yeah, right, yeah.
Well, it's interesting, because our economy right now, a lot of the production of goods, you know, the way that we used to think of production, is being done with a lot of capital and little labor, a lot of automation, very efficient factories. And our economy is, like, services, or maybe it's entertainment. Do you ever think about, like, what the heck we're doing? I mean, what is our economy, actually? If we can build everything we need, you know, with very little labor, what exactly are we all working for?
I think that's exactly the question we should all be asking ourselves. And, you know, policymakers, managers, economists like me, we can make GDP higher and higher. We can reduce the labor content more and more, you know, therefore increasing productivity. And if we're just making more, you know, flat-screen TVs, you know, to what end? I'm not a philosopher, but one thing I would look to is Maslow's hierarchy of needs. You know, there are some basic needs, like food, safety, clothing, shelter. And then there are some intermediate needs, like relationships and status. And then there's self-actualization. And I think that's not a bad template for how we should
think about progress. There's still a lot of people who don't have enough food, clothing,
shelter. So let's take care of those needs. I'm glad to see that absolute poverty has decreased
tremendously over the past 30 years, actually even faster than the UN development goals called for.
So, hooray for technology and the way that we've made progress on that. But there's still a lot of work to be done.
But in wealthier societies like ours, you know, we need to think about, okay, how are we going to get meaning and what do we want to spend our time on?
Do we want to spend time, you know, on Facebook debating politics?
Do we want to, you know, be gossiping with each other?
Do we want to be in virtual reality, playing video games?
You know, what are the things that we want to do with this time and leisure that have been freed up? What are the things that matter to us? I would hypothesize that, you know, philosophy will start becoming more important as people step back and think, okay, you know, as I said earlier, we have
these powerful tools. Our values matter more now than they ever did before. We're not just having
to spend 12 hours a day scraping out a living, you know, by getting some grain to grow in the
ground. We can now have the luxury of thinking beyond that. Last question for you.
Yeah. I think, you know, we're here at Davos. And again, we've been doing two weeks of these shows. So folks, bear with us. The Davos references will end next week. But I haven't seen anyone here busier than you. I've seen you walking back and forth on the promenade the whole week. You must have been speaking with corporate leaders. Do you think they are interested in augmentation of labor? Or do you think they're mostly trying to figure out, hey, how do I automate as many jobs as I can, especially as we head into an economic downturn?
You know, there's some of each. I think a big misconception about Davos is, you know, there are plutocrats and all the wealth and power that's here. But the people who come here really do seem sincerely interested in changing the world to make it better. So the ones I end up talking to are asking me, how can we create more widely shared prosperity? You know, how can we do it in a way that's sustainable and helps the environment? How can we address some of the challenges of diversity, equity, and inclusion? And I think they're sincere when they say they want to work on those things, and they put them forward, addressing poverty. I could go through the whole list. And they want to use the technology to make the world better, by and large. There's a lot of inertia and forces that go in the other direction. So it's not easy. But I'm heartened by how many smart people are here. I just had a breakfast meeting this morning with, well, I can say his name,
Frank McCourt, because we're here in the Web 3 area. And he made a lot of money in real estate
and other areas. And he's taking his energy to try to create a decentralized social network
protocol that allows people to own their own connections, their own data, and have more
freedom to interact with each other and deal with some of the misinformation.
I think he's doing it mainly because he wants to try to make the world a better place as he sees it,
not because he's trying to concentrate more wealth and power in his own hands.
And there's a lot of people, technologists and others, that have as their agenda to do that.
One of the things I love about being out in Silicon Valley is I run into a lot of mission-driven companies and individuals who are pretty far up Maslow's hierarchy of needs, I guess.
Most of them probably don't need to work anymore for food if they didn't want to.
But they work pretty hard.
I work pretty hard, because we're energized by the idea that we can make a difference in the world.
And we have these powerful tools now.
And this is an inflection point in society where the choices we make the next five, ten years could put us on a very different path depending on whether we make the right choices.
So it's worth spending the time and energy to do what we can to move things in the right direction.
Professor Erik Brynjolfsson, thank you so much for joining us. It's been a pleasure.
Thanks, everybody, for listening. It's been a heck of a run here at Davos. We are now officially
coming to a close. I really appreciate you listening to all the shows, so please give us
feedback. And stay tuned for next week's show. We're going to go back on a normal schedule. We'll be
talking about tech news. Thank you to Simon Hipkins from Key Pictures for doing the audio and the
video. You can check out the video on my YouTube page. Thank you to LinkedIn for having me as part of your podcast network.
Thank you to Unfinished and the Web3 Foundation again for this great collaboration here.
And thanks to all of you, the listeners. We will see you next week.