Freakonomics Radio - 257. The Future (Probably) Isn’t as Scary as You Think
Episode Date: September 1, 2016
Internet pioneer Kevin Kelly tries to predict the future by identifying what's truly inevitable. How worried should we be? Yes, robots will probably take your job -- but the future will still be pretty great.
Transcript
When you try to envision the future, what do you see?
Do you see a grim picture?
A world, perhaps, in which humans have become marginalized?
Where technologies created to help us have gained the upper hand?
Open the pod bay doors, Hal.
I'm sorry, Dave. I'm afraid I can't do that.
The film industry takes a rather dim view of the future, doesn't it?
Indeed. I can't think of a single Hollywood movie about the future on this planet that I want to live in.
Kevin Kelly was a pioneer of internet culture, a founding editor of Wired magazine.
His vision of the future wasn't so bleak.
In the 50s and 60s, when I was growing up,
there was a hope of everything after the year 2000.
And that's the future that I remembered.
In a new book called The Inevitable,
Kelly tries to see whether his youthful optimism squares with the technological realities of today and tomorrow.
Short answer? Yes.
I think that this is the best time in the world ever to make something and make something happen.
All the necessary resources that you wanted to make something have never been easier to get to than right now.
So from the view of the past, this is the best time ever.
And it's getting better.
Artificial intelligence will become a commodity like electricity, which will be delivered to you over the grid called the cloud.
You can buy as much of it as you want. And most of its power will be invisible to you as well.
But, of course, it could all go wrong.
We've never invented a technology that could not be weaponized.
And the more powerful a technology is, the more powerfully it will be abused.
This is Freakonomics Radio, the podcast that explores the hidden side of everything.
Here's your host, Stephen Dubner.
Kevin Kelly, a writer and thinker, has what he calls a bad case of optimism.
It's rooted in the fact that, unless progress suddenly stops and goes away,
the probable, statistical view of it is that it will continue.
Kelly envisions a world where ever more information is available at any time,
summoned by a small hand gesture or voice command,
a world where virtual reality augments our view of just about everything,
where artificial intelligence is seamlessly stitched into our every move.
Most of AI is going to be invisible to us. One of the signs of the success of a technology is that it has become invisible.
So invisible that without our even knowing about it, AI will read our medical imaging and approve our mortgages.
It'll drive our cars,
of course, and perhaps become our confidant. So I think our awareness of it, for the most part,
will be as a presence in our lives. And we'll take it for granted very, very quickly,
in the same way that we take Google for granted. People don't realize. I try to stress to my son,
you know, when I was growing up,
you couldn't have your questions answered.
You didn't ask questions because there was no way to get them answered.
And now we just routinely ask dozens and dozens of questions a day
that we would never have asked back then.
And yet we just sort of take it for granted.
And I think a lot of the AI will be involved in assisting us in our schedules, our days, answering questions as a partner and getting things done.
So think of it as a GPS for your life.
In the way that you kind of set your course in the GPS and then it's going along and it's telling you how to go.
But oftentimes you're overriding it and it's not bothered by that.
It's got another plan right away.
And then you change your mind.
It's, oh, no problem.
I've got another one.
I have another schedule here.
I'll do this over here.
I'll get this ready for you.
I'll make this reservation.
I'll buy this thing.
No problem.
You change your mind.
Well, I'll send it back.
No problem.
Kind of like having a presence that is anticipating and helping your life.
I think that's what it looks like, even I would say within 20 years.
So we've done quite a few episodes of Freakonomics Radio that address the future,
especially when it comes to the interface between technology and employment.
The idea of whether there will be, quote, enough, end quote, jobs for people, whatever enough means.
You write that, and I'll quote you,
the robot takeover will be epic, which I'm sure will scare some people. And that even
information-intensive jobs, doctor, lawyer, architect, programmer, probably writers and
podcasters too, can be automated. So even if this is what the technology can and wants to accomplish, it strikes me that the political class may well try to stymie it.
I'm curious your views on that.
Yeah, I mean, there was this really great survey poll that Pew did.
Basically, they asked people how likely they thought that, you know,
50 percent of the jobs would be replaced by robots or AIs.
And something like 80 percent of people thought that was likely.
And then they followed this up with how likely they thought it was that their own job would be replaced.
Like, nobody believed that their job would be.
And it was across the board.
And I did the same exact survey, actually, in a crowd of people who came to my book party, 200 people.
We had instant polling devices, and I asked the same thing.
It was exactly the same pattern.
Everyone believes that most of the jobs will be replaced,
and no one believes that their job will be replaced.
And I think it's actually neither.
I think most of these, our jobs are bundles of different tasks,
and some of those tasks, or maybe many of those tasks,
will be automated.
But they'll basically redefine
the kinds of things that we do.
So a lot of the jobs
are going to be reinvented
rather than displaced,
particularly in the kinds of things
we're talking about
of the professional classes.
I'm not saying that the AI
can't be creative.
It can be.
In fact, we're going to be shocked.
In some senses, we're going to realize that creativity wasn't so creative.
Creativity is actually fairly mechanical.
We will actually be able to figure out how to have AIs be creative.
But the thing is, they're going to be creative in a different way than we are.
I think probably the last job that the AIs or robots would do
will be a comedian.
I mean, they'll have a different sense of humor.
They won't get us in the same way that we get us.
Even though they'll be incredibly creative,
they can be brilliant, they'll be smart,
they'll surprise us in many ways,
they're still not going to do it exactly like
we do it. And I think we will continue to value that.
But that assumes, which is, if you watch a certain kind of futurist movie or read a certain
kind of futurist book, that assumes that the artificial intelligence doesn't essentially
obliterate or marginalize us. Yes?
Right. The question is whether an artificial intelligence that we create can only gain at our expense.
And I think that while that's a possibility that we should not rule out, it's an unlikely possibility.
You write, Kevin, that this is not a race against the machines. If we race against them, we lose. This is a race with the machines. Talk about how that begins to happen,
whether it's a shift in mindset, a shift in engineering. How does it happen that we come to view AI or robotization
or automation or computerization
as more of a continuing ally than a threat?
So one of the first AIs we made,
a kind of dedicated standalone supercomputer,
was IBM's Deep Blue,
which beat the reigning world chess champion at the time,
Garry Kasparov.
And this was kind of the first big challenge
to human exceptionalism, basically.
And when Kasparov lost, there were several things
that went through people's minds.
One is, well, that's the end of chess.
It's like, well, who's going to play competitively
because the computers are always going to win?
And that didn't happen.
In a funny kind of way, playing against computers actually increased the extent to which chess became popular.
And on average, the best players became better playing against the artificial minds.
And then finally, Kasparov, who lost, realized at the time that he said, you know, it's kind of unfair because if I had access to the same
database that Deep Blue had of every single chess move ever, I could have won. And so he invented a
new chess league that's kind of like a freestyle league, kind of like freestyle martial arts. You can
play any way you want. So you could play as an AI, or you could play as a human, or you could play as a team of AIs and humans.
And what in general has happened in the past couple years is that the best chess player on this planet
is not an AI, and it's not a human. It's the team that he calls centaurs, the team of humans
and AIs, because they're complementary, because AIs think differently than humans.
And the same is true of the world's best medical diagnostician: it's not Watson, it's not a human
doctor, it's the team of Watson plus doctor. And that idea of teaming up is going to work because
inherently AIs think differently even though they're going to be creative,
even though they'll make decisions, even though they'll have a type of eventually consciousness,
it will be different than ours because we're running on a different substrate.
It's not a zero-sum game.
And how much AI was applied to the writing of the book? I mean,
obviously you spell check and things like that, but I'm curious if there's anything else.
Not as much as I would like, because AI has been around for 50 years with very slow progress,
and that is because it was very, very expensive to do.
AI was expensive because good artificial intelligence requires a lot of data and a
lot of what's called parallel processing power. But the cost has come down,
an unexpected gift from the video game industry. The reason why we have this sudden surge into AI
right now in the last couple of years is because it turned out that to do video gaming, you really
needed to have parallel processing chips. And they had these video chips, graphical processor units
that were being produced to make
video gaming fast. And they were being produced in such quantities that actually the price went
down and became a commodity. And the AI researchers discovered a few years ago that they could
actually do AI not on these big expensive multi-million dollar supercomputers, but on a
big array of these really cheap GPUs.
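Kelly's point about GPUs comes down to parallelism: neural-network workloads decompose into millions of independent multiply-adds, which is exactly what thousands of small GPU cores can run at once. As a rough, CPU-only sketch (not from the episode; the array sizes here are arbitrary), a single matrix multiply already contains that kind of embarrassingly parallel work:

```python
import numpy as np

# Neural-network workloads are dominated by large matrix multiplies,
# which break into many independent multiply-adds -- the kind of work
# a GPU's thousands of small cores can execute in parallel.
# This sketch (CPU-only, via NumPy) just shows the shape of that work.

rng = np.random.default_rng(0)
activations = rng.standard_normal((512, 1024))   # a batch of inputs
weights = rng.standard_normal((1024, 256))       # one layer's weights

# One layer's forward pass: roughly 512 * 1024 * 256 multiply-adds,
# all independent of one another.
out = activations @ weights
print(out.shape)   # (512, 256)
```

On a GPU, the same operation is spread across its many cores, which is why commodity video-game chips turned out to be such a cheap substrate for AI.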
Add to that the fact that more and more objects
are being equipped with tiny sensors and microchips,
creating the so-called internet of things.
As Kelly writes, in 2015 alone,
five quintillion transistors,
that's five times 10 to the power of 18,
were embedded into objects other than computers,
which means we will be adding artificial intelligence as quickly and easily as people in the industrial
era added electricity and motors to manually operated tools.
I believe people will look back at this time, will look back at the year 2016 and
say, oh my gosh, if only I could have been alive then. There was so much I could have done so easily.
And here we are.
But one area of technology isn't keeping up.
And that's batteries.
I think a lot of this Internet of Things,
the idea that we take all your shoes and your clothes and the chair and the books
and the light bulbs in your house and all of them are connected,
I think part of what's holding that back is not the sensors or the chips' intelligence, but the power. What I don't want to do is spend
all my Saturdays replacing all the batteries and all the things in my house.
Predicting the future anytime in any realm is fairly perilous. You admit in your book that you missed a lot about how the Internet would develop, for instance.
So, without meaning to sound like a total jerk, let me just ask you,
why should we believe anything you're telling us today about the future?
Yeah, I think every futurist, including myself, is basically trying to predict the present.
And so, you should believe me
to the extent that it's useful
to helping you understand
what's going on now.
As much as possible,
I'm not really trying to make predictions
as much as I am trying to illuminate
the current trends
that are working in the world.
These are ongoing processes.
These are directions
rather than destinies. These are general movements that have been happening for 20 or 30 years. And
so I'm saying these things are already happening. It looks like they're going in the same direction.
And so I might be wrong, and I probably will be wrong on much of it. But I think if you see what I'm seeing,
I think you will agree that it's happening right now
and that that can be useful to anybody
who's trying to make something happen
or make their lives better.
Let me just give you a quick little parallel
about what I mean by inevitable,
which is the title of the book.
I'm talking about long-term processes rather than
particulars. And so imagine, you know, rain falling down into a valley. The path of a particular drop
of rain as it hits the ground and goes down the valley is inherently unpredictable. It's a
stochastic. It's not at all something you can predict. But the direction is inevitable, which is downward.
And so I'm talking about those kinds of large-scale downward forces
that kind of pull things in a certain direction and not the particulars.
And so I would say that in a certain sense, the arrival of telephones was inevitable. Basically, no matter what political regime or economic system that you'd have,
you would get telephones once you had electricity and wires.
And while telephones were inevitable, the iPhone was not.
The species, the particular product or company wasn't.
So I'm not talking about the particulars of certain inventions.
I'm talking about the general forms of things.
Coming up on Freakonomics Radio, how good of a driver are you, really, compared to an autonomous
car? Even though there's a few people who die from robot cars a year, humans kill one million
people worldwide a year,
and we're not banning humans from driving.
And if you want to hear more conversations like this one,
check out the Freakonomics Radio archive
at Freakonomics.com, on iTunes,
or wherever you get your podcasts.
Not long ago, I got a text from a friend.
It just said, have you read The Inevitable?
I thought, The Inevitable? What's that?
I had no idea, but it sure sounded scary.
Did North Korea finally bomb someone?
Did Donald Trump finally fire one of his own kids?
But no, that's not what my friend meant.
The Inevitable was the title of a new book by Kevin Kelly about the future.
I do have to say the title of the book sounds to me at least a little scary.
And I'm wondering, were you trying to scare us a little bit or no?
No, I wasn't trying to scare people.
I think people are scared enough as it is.
I was trying to suggest basically what it meant literally,
which is that we need to accept these things in the large form.
And part of the message of the book, which is a little bit subtle,
is that large forms are inevitable, but the specifics and particulars are not.
And we have a lot of control over those.
And we should accept the large forms in order to steer the particulars.
One of the inevitable trends that Kelly points out is dematerialization,
the fact that it takes so much less stuff to make the products we use.
Yeah, that's a long, ongoing trend.
The most common way is to use design to be able to make something
that does at least the same with a smaller amount of matter.
An example I would give was the beer can, which started off as being made of steel,
and it's basically the same shape and size
but has shed almost a third of its weight
by using better design.
But you can see how this trend can snowball.
Instead of 100 books on a shelf,
I have one e-reader.
Instead of 1,000 CDs,
a cache of MP3 files
which I may own or more likely borrow
from the cloud whenever I want them.
The current example would be the way that people are reimagining a car, which is a very physical
thing, as a ride service that you don't need to necessarily buy the car and keep it parked
in your garage and then parked at work, not being used, when you could actually have access to the transportation service
that a thing like Uber or taxis or buses or public transportation give.
So you get the same benefits with less matter.
But one of the things that's sitting in my recording booth
is a hardcover copy of your book, The Inevitable,
and it just strikes me as so weird
that for a set of ideas that we're talking about today, that it is still published in classic dead tree format.
And I'm curious whether you felt that there was a little bit of a paradox in that, or are you happy to exploit technologies from previous generations for as long as they're still useful, even if in small measure.
So there's a couple things to say about it. Let me say the larger thing first,
and then get to the specifics about the book. Most of the things in our homes are old technology.
Most of the stuff that surrounds us is concrete, steel, electrical lights. These are ancient
technologies in many ways, and they form the bulk of it, and they will continue to form the bulk of it. So in 50 years from now, most of the technology in people's lives will be old stuff.
We tend to think of technologies as anything that was invented after we were born.
But in fact, it's all the old stuff, really. And so I take an additive view, and I had this
surprise in my previous book, and many people have challenged it, but never successfully.
And that is that there has not been a globally extinct technology.
Technologies don't go away, basically.
They just become invisible into the infrastructure.
And so, yes, there will be paper books forever.
They will become much more expensive, and they may be kind of premium and luxury items,
but they'll be around simply because we don't do away with the old things.
I mean, there are kind of like more blacksmiths alive today than ever before.
There are more people making telescopes by hand than ever before.
So lots of these things, they just don't go away.
But they're no longer culturally dominant.
And so paper books will not be culturally dominant.
And in fact, this book, The Inevitable, it has digital versions.
I'd love you to talk for just a couple minutes about the ongoing need for maintenance,
even when the technological infrastructure we're building and using every day
wouldn't seem to be as inherently physical and in need of maintenance as the old infrastructure.
Yeah, that was a surprise to me. I changed my mind about my early introduction to the digital world.
There was the promise of the enduring nature of bits, that when you made a copy of something,
it was a perfectly exact copy. And there was a sense that there are no moving parts,
you know, kind of like a flash drive.
There are no moving parts, so it will never break.
But it turns out in a kind of weird way that,
in more ways than we suspected,
the intangible is kind of like living things.
It's kind of like biology in the sense that it's so complicated and
interrelated inside that things do break. And there are little tiny failures, whether it be inside a
chip or a particular bit, that can have cascading effects and that can actually make your thing
sick or broken. And that was a surprise to
me, that software would rot, that computer chips would break, and that in general, the amount
of time and energy you'd have to dedicate to digital, intangible things was almost equal to
the physical realm. That was a surprise to me, and I think a lesson for us into the future.
Did it change how you think about running your own technological life? You write that you used to be one of the last guys to update everything because, you know, I got used to things the way
they are. I don't need the update or the upgrade. And how has that changed how you do it now?
Yeah. Well, I learned by being burned, by experience, by waiting until the last minute to upgrade
that it was horrible
and that it was more traumatic
in the sense that
when I did eventually upgrade,
I had to upgrade not just the current system,
but everything else that it touched,
forming this sort of chain reaction
where upgrading one thing
required upgrading the other,
which required upgrading the other.
And that when I did these calculations and changed my mode
and tried to upgrade pretty fast as soon as,
maybe not the very first revision, but the next one after that,
that it was actually kind of, it was like flossing.
It was like hygiene.
You just sort of wanted to keep up to date
because in the end you actually spent less time and energy.
It was less traumatic.
And you gained all the benefits of that upgrade.
And so there is a sort of digital hygiene approach to things that I take now.
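The chain reaction Kelly describes is essentially a transitive closure over a dependency graph: upgrading one component forces upgrades of everything that depends on it, directly or indirectly. A toy sketch in Python (the package names and graph here are hypothetical, purely for illustration):

```python
from collections import deque

# Toy illustration: 'depends_on_me' maps a component to the components
# built on top of it. Upgrading one thing forces upgrades of everything
# downstream of it, transitively -- Kelly's "chain reaction."
depends_on_me = {
    "os": ["python", "driver"],
    "python": ["numpy", "app"],
    "numpy": ["app"],
    "driver": [],
    "app": [],
}

def upgrade_set(start):
    """Everything that must be upgraded once `start` is upgraded (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        for dependent in depends_on_me[queue.popleft()]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

# Upgrading the OS touches nearly everything:
print(sorted(upgrade_set("os")))  # ['app', 'driver', 'numpy', 'os', 'python']
```

Deferring upgrades only grows that set, which is why the "hygiene" strategy of frequent small upgrades tends to cost less than one giant cascade.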
And that's not the only way that I change.
I also realize that the purchase price is just one of the prices that you pay when you bring something into your life.
That there is this other thing that you actually do have an ecosystem, even in your household, even in your workplace, whatever it is.
And that bringing something on, you're now committed to tending it.
When you're talking about bringing something into your home, the one product where I've seen that number, they usually call it cost of ownership, I guess,
is cars. And I don't think it's the car manufacturers themselves who calculate that
for you. Maybe it is. I don't know. But you do see this car, here's what it's going to actually
cost you over its lifetime in terms of how much fuel it uses versus another car,
how much maintenance it will require versus another car, because of the high-end components
that it may have, what the cost of replacement will be for those. And I like that, but I want
that calculation attached to everything. I want that calculation attached to the people that come
into my life, even. You know, actually, I think you're onto something. I think this idea of calculating the cost of ownership for digital devices, or software apps for that matter, would be very, very valuable, and would not actually be that hard to derive because, you know, everything is being kind of logged in some capacity.
By looking deeply into the present,
Kelly sees a future where more and more of our moves are being tracked,
whether because of data we voluntarily make public, as on Facebook, or otherwise.
Inevitably, we will be tracking more and more of our lives,
and we'll be tracked more and more.
And that's inevitable. What we have a choice about is the particulars of how we do that, whether
we do that civilly or not. We have to engage this. I was maybe a little bit frustrated
by the fact that there's often an initial reaction, from many corners, of trying to prohibit things before we
know what they are.
And that's called the precautionary principle,
which says simply that there are things that we should not allow in our lives
until they're proven harmless.
And I think that doesn't work.
Has that ever happened with a major invention period?
That we proved that it was harmless?
Yeah, before it being, you know, let's say widely adopted.
In general, no. I don't think that there has ever been that. And I think it's kind of unfair to
request it, but it does seem to be a current motion, like say in the genetically modified
crop area. So people saying we can't have these because we can't prove that they're harmless.
And so there are attempts to do that with AI driving a robot car,
which is saying, no, no, you can't have robot cars on the road
until we prove that they're completely safe.
And that's not going to happen, and that's unfair
because even though there's a few people who die from robot cars a year,
humans kill one million people worldwide a year,
and we're not banning humans from driving.
In the future that you envision, who are the biggest winners and losers?
I think it's all comparative. I think there will certainly be people who gain more than others,
and to those who only gain a little, it might seem that they lost. But I suspect that everybody will be gaining
something. And perhaps the poorest in the world will continue to gain the most over time. But
there will be people who won't gain as much as many others. I don't want to call them losers,
but those people I think are going to, by and large, be those who will be unable to retrain or unwilling to retrain.
And I think retraining or learning is going to be kind of like a fundamental survival skill.
Because it's not just the poor who have to be retrained.
I think even the professionals, people who have jobs who are in the middle class.
I think this is going to be an ongoing thing for all of us is we are going to probably be changing our careers, changing our business card, changing our title many times in our life.
And I think there will be the resources to retrain them.
Whether there's a political will, I don't know.
I kind of take a Buckminster Fuller position,
which is that if you look at the resources, they're all there.
There's enough food for everybody.
The reason why there's famine is not because there isn't enough food;
it's because there isn't the political will to distribute it.
And it only takes one bad actor to ruin the livelihood
of a couple hundred thousand
or million people.
That's, you know,
that's a leverage that exists,
you know, even in humans.
Forget about machines.
Exactly.
So I think this technology
is going to benefit
or can benefit everybody.
But whether it does or not,
that is a choice
that we have to make, and it will make a huge difference. So in an abstract sense, I think
this technology does not necessarily make losers. But that doesn't mean that there won't be any, because
I think we do have choices about how we make things specifically. The internet was inevitable,
but the kind of internet that we made was not, and that
was a choice that we made, whether we made it transnational or international, whether it was
commercial or non-profit. Those choices are choices that we have. Those choices make a huge difference
to us. And so I think inherently the technology has the power to benefit everybody and not make losers.
But that's a political choice in terms of the particulars of how it's applied.
And therefore, I think we do have to have those choices.
It also seems, just out of fairness to your argument, really, that just as you can't foresee all the benefits of what technology will give birth to,
nor can you see the downsides, right?
I mean, there's just no way for any one of us sitting here now
to see what that's really going to be.
I'm sorry, Dave. I'm afraid I can't do that.
Yeah, right.
We've never invented a technology that could not be weaponized.
And the more powerful a technology is,
the more powerfully it will be abused.
And I think this technology that we're making
is going to be some of the most powerful technology
we've ever made.
Therefore, it will be powerfully abused.
And there's the scary part
of the Kevin Kelly view of the future.
Exactly, right.
But here's the thing.
Most of the problems that we have in our life today
have come from previous technologies.
And most of the problems in the future will come from the technologies that we're inventing today.
But I believe that the solution to the problems that technology created is not less technology, but more and better technology.
And so I think technology will be abused and that the proper response to those abuses is not less of it to prohibit it,
to try and stop it, to turn it off, to turn it down. It's actually to come up with something
even better to try to remedy it, knowing that that itself will cause new problems, knowing
that we then have to make up new technologies to deal with that. And so what do we get out of that
race? We get increasing choices and possibilities. All right, Kevin Kelly, one last question.
You argue that technology is prompting us to ask more and better questions,
advancing our knowledge and revealing more about what we don't know.
You write, it's a safe bet that we have not asked our biggest questions yet.
Do you really think that we haven't asked, I guess, the essential human questions yet?
What are they?
And I ask that, of course, with the recognition that if you knew the answer to that question, we wouldn't be having this conversation.
Well, what I meant was we're moving into this arena where answers are cheaper and cheaper.
And I think as we head into the next 20 or 30 years that if you want an answer, you're going to ask a machine, basically.
And the way science moves forward is not just by getting answers to things,
but by then having those answers
provoke new questions, new explorations,
new investigations.
And a good question will provoke a probe
into the unknown in a certain direction.
And I'm saying that the kinds of questions that,
like, say, Einstein had, like, what would it look like if you sat on the front of a beam of light
as it traveled through the universe? Those kinds of questions were
sort of how he got to his theory of relativity. There are many of those kinds of questions that
we haven't asked ourselves. The kind of question you're suggesting about what is human is also part of that, because I think each time we have
an invention in AI that beats us at what we thought we were good at, each time we have a
genetic engineering achievement that allows us to change our genes, we are having to go back and
redefine ourselves and say, well, wait, wait, wait. What does it mean to be human or what should we be as humans?
And those questions are things that maybe philosophers have asked, but I think these are kind of the kind of questions that almost every person is going to be asking themselves almost every day.
As we have to make some decisions about, is it okay for us to let a robo-soldier decide who to kill?
Should that be something that only humans do? Is that our job? Do we want to do that?
They're really going to come down to like dinner table conversation level of like,
what are humans about? What do we want humans to become? What am I as a human, as a male,
as an American? What does that even mean?
So I think that we will have an ongoing identity crisis personally
and as a species for the next, at least, forever.
So I have to say, for all this talk of technology
and the future of technology,
you have weirdly made me feel a bit more human. And for that, I thank you.
You know, you're not a robot because you ask such great questions.
The true test will be how I do at comedy, though. Correct?
Exactly. And you laughed at my joke, so we know you're alive as a human.
Next time on Freakonomics Radio, Steve Levitt, my Freakonomics friend and co-author, has long dreamed of solving an economic puzzle.
And he's finally done it with the help of an app.
I love Pokemon Go.
No, not that app.
I have had a burning question about economics that I've wanted to answer for almost
15 years. And by using Uber data, I've finally been able to get to the bottom of it. What Uber
can teach an economist and the rest of us about consumer surplus. Trust me, it's more interesting
than it sounds. That's next time on Freakonomics Radio. Freakonomics Radio is produced by WNYC
Studios and Dubner Productions.
This episode was produced by Christopher Wirth.
Our staff also includes Arwa Gunja, Jay Cowit, Merritt Jacob, Greg Rosalsky, Caitlin Pierce,
Alison Hockenberry, Emma Morgenstern, and Harry Huggins.
Remember, you can subscribe to Freakonomics Radio on iTunes or wherever you get your podcasts.
You can also visit Freakonomics.com where you'll find our entire podcast archive,
as well as a complete transcript of every episode ever made, along with music credits and lots more.
Thanks for listening.