a16z Podcast - a16z Podcast: Talking Humans and Machines with NYT’s John Markoff
Episode Date: August 28, 2015. Longtime New York Times technology and science writer John Markoff joins the a16z Podcast to discuss our changing relationship with technology and machines ... as well as the changing nature of Silicon Valley itself (where Markoff grew up). Jumping off from the themes of Markoff’s new book, Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, we explore the future of human and robot work; hear about chatbots that keep kids enthralled during "toilet time;" and the implications of “the wheels finally falling off of Moore’s Law” -- something people have long predicted but that has never happened yet. And finally, why education is the raw material for a future where humans and intelligent machines work hand in (robotic) hand.
Transcript
Hi, everyone. Welcome to the a16z Podcast. I'm Sonal, and today Michael and I are interviewing
John Markoff, the longtime New York Times technology and science reporter who's been covering
the evolution of technology and Silicon Valley, both from inside and outside Silicon Valley,
for the past three decades. He wrote his first book in over a decade that just came out
this past week, Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots.
And that's what we're going to talk about today. John, welcome. Thanks for joining the a16z Podcast
today. It's really nice to be here. So Michael and I were excited to have you on the a16z
podcast today because you have been a longtime journalist and chronicler of technology
and how it's evolved and also have been observing Silicon Valley. And until now, those two
things have been completely conflated. We're curious to hear your vantage point on what's changed
so much for you. Like, what surprised you the most in that time frame? Well, you know, I actually
grew up in Silicon Valley. I mean, sort of my street cred is I played in the
Hewlett household when I was a kid in kindergarten and first grade, and I was the paperboy at
the houses where Steve Jobs and Larry Page lived. I like to say, there goes the neighborhood.
So, you know, I grew up with a very different Silicon Valley than the one we have right now.
And actually, you know, what surprises me, I guess, if you're, it's actually the geographic shift.
I mean, Richard Florida, about a year ago, did
this wonderful bit of research where he basically found the center of Silicon Valley by geocoding
venture investments. And, you know, once upon a time, the center of Silicon Valley was in Santa Clara,
and now it's at the foot of Potrero Hill in San Francisco. And, you know, it used to be a manufacturing
center, and now it's a marketing and design center. And still a very unique place. But I have to
say, it's generational. And I grew up with a particular generation of Silicon Valley. I sort of
call it the second generation, personal computing. There was a valley, electronics valley before
there was Silicon Valley, but the semiconductor guys were the first generation. And now it's
something that I'm only barely in touch with. You know, personally, I'm in the science section of New
York Times. So I've kind of moved on in a number of ways. I walked away from the cyberbeat in 2011
because I figured that if I had to write a story about one more testosterone-poisoned teenager with an
attitude, I was going to have an aneurysm. It was time to do something different. It turned out to be
one of the best beats and, you know, it just gets worse, right?
Right.
And I walked away from it, but I was just not having fun.
Perfect timing.
I know, the biggest events happen in that one period.
Yeah.
But, you know, I do feel like I have a long view, and I feel like the Valley is very generational,
and I'm marginally in contact with what's going on now.
It's remarkable.
I mean, it's still a unique place, and I watch it from the outside.
I feel, a lot of the time, that you really have to be like a fish in the sea to appreciate
what's going on.
You wrote a book about the sort of the dawn of the PC era and kind of the culture that fomented that and why that happened and how it happened.
Robots, you know, we've been dreaming of robots or fearing robots, and automation, for a long time, but why this book now?
What brought you to it?
Yeah, so I can go in a couple different directions with that.
And, you know, why now? In some ways it was a sequel to my last book, called What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry.
When I was writing that book, at that time, it was kind of an anti-autobiography.
I was gone from Silicon Valley from '65 to '75, '77, and I wanted to see what happened while I was gone,
and this amazing world, you know, sort of bubbled up then.
But the one thing I noticed, and just mentioned in Dormouse, was at the very dawn of interactive computing,
there were two laboratories on either side of the Stanford campus.
One was John McCarthy's, the Stanford AI Lab.
At that point, in '62, '63, McCarthy thought it would take him a decade to build a working AI.
On the other side of campus was Doug Engelbart's laboratory,
and he wanted to do something that was sort of philosophically opposite to what McCarthy wanted to do.
McCarthy wanted to replace the human.
Engelbart, who was the inventor of the mouse and invented hypertext and all the things
that would become the modern personal computer world and Internet world,
he wanted to extend the human.
He wanted to give small groups of human beings, intellectual tools to make them more
powerful intellects. In fact, I think he even called it augmented. He called it IA.
Augmented cognition. Oh, IA. Okay. Intelligence augmentation. So it's AI versus IA. And I was
fascinated by that. And then, so dial the clock forward to, when did I start on this, 2010, 2011,
I began looking at this new wave of technology. AI as a field has failed many times.
Yes. Yes. Really quite remarkable how it over promises and underperforms over a long time.
But something different was happening because of a set of technologies, which were,
finally maturing, you know, you pull together big data, pulled together the internet, all these
things, and the algorithms in terms of, you know, basically statistical algorithms ranging from
neural nets to Bayesian statistics, they began to work. And it began to have an impact. And I began
to report on that. And I really sort of set out to sort of try to square the circle between
AI and IA because, I mean, I think there is a solution. I mean, you know, it's both a dichotomy and
it's a paradox. It's a dichotomy in the sense that these two things, you know, you either take
the people out of the equation or you extend them. But even when you extend them, you need fewer
people, right? So it's not a neat dichotomy, and I've been struggling with that. And so that's,
I set out to explore that. And what I really, and the book is a set of linked narratives
about people who crossed over, most from AI to IA, and, you know, their stories. And I, you know,
I am, you know, I'm a failed social scientist, and so there's this notion of social construction
of technology. I'm a total believer in the Churchill and McLuhan view of the world that we shape
our tools and then they shape us, but we shape our tools. And I'm very interested in sort of
describing that, and that's what I tried to do in this book. This AI versus IA divide is still very
much alive and still very much kind of this religious war, isn't it? Yeah, I wouldn't call it a
war. What happened was these two communities emerged there and they both, they went their own way and
they largely don't talk to each other. And, you know, Engelbart's work, as a research
community and as a part of the industry, is now the human computer interface world. Those people
largely do design with the human in the center of the equation. The AI guys, not so much.
You know, they're really fascinated. I mean, one of the cool things that I stumbled across is how
influential science fiction has been on the careers of these people who make these machines.
You know, two examples. Both Jerry Kaplan and Rod Brooks, who were early AI guys: Rod a roboticist,
and Jerry the co-founder of two companies. One was Teknowledge, and the other was Symantec.
Most people don't know that it started as an AI company, but it did.
They both decided to go into AI because of 2001: A Space Odyssey. They both saw HAL and said,
I want to build that, which is not the reaction you might expect.
I'm not surprised by that.
And Rodney Brooks is building, you know, robots that will help us in factories and help us do all sorts of things.
He's crossed back over.
I mean, he really now takes an IA point of view.
I mean, he sort of positions the Rethink Baxter as a machine that collaborates with humans, not replaces them.
Is this versus really a thing, though, because, you know, coming from the world of human computer interaction, it doesn't seem that people really view them as, like, computers versus humans.
Like most people, if you're on the AI side, clearly your interface
has to interact with people at some point.
I mean, you have to design it to interact with people.
There's just no way to have a robot that operates completely independently of people.
Similarly, on the IA side of things, you have to embody that humanity into the interface in
some form so it can do things for people.
So it feels like this dichotomy is almost false.
Well, maybe.
I actually don't think so.
I think that the AI community has a set of values that are about building systems
that basically replicate human capabilities.
That's their value set.
And oftentimes, you know, in an economic system
where you can take labor out of the equation,
that's the priority.
I mean, it's called capitalism, right?
And it's, you know, there's a big debate.
I mean, it's so murky what's going on in terms of jobs.
I mean, it's a reporter's field day
because you can, you know, you can take the entire spectrum
of, you know, position on the question of do these things
destroy jobs, all the way from the International Federation of Robotics, which argues that there's
going to be the biggest job renaissance in history, to a guy like Moshe Vardi, a computer scientist,
who argues, you know, machines will be able to do everything that humans do by 2045. And so look
at that. Somebody's got to be right and somebody's got to be wrong. Well, and what's the evidence
thus far? I mean, the evidence is there's been more jobs created and more job creation in the last,
you know, 40 years than, you know, ever, right? I'm with you. I have to admit, I started, I mean,
I'm one of the guys who started this new wave of debate.
There was a piece I wrote in 2010, I think, about a new wave of AI that was basically
capable of replacing $35-an-hour, what are they called,
paralegals, and $400-an-hour attorneys, and doing a demonstrably better job of reading documents.
Right, right.
And that was the first time that, you know, you began to see it concretely, you know, working its way
into the upper reaches of the workforce.
Doctors, for example, or radiologists, are another classic example.
So, you know, high-value work for the first time.
Something new was going on.
And I became very alarmed.
I was an alarmist.
And gradually, over the last four or five years, I've come way over.
A classic example.
I was, you know, my hair was on fire and I was trying to basically make the argument to Danny Kahneman,
the economist, that as robots were deployed in China,
it was going to perhaps lead to social disruption.
And Kahneman said, you don't get it.
In China, the robots are going to come just in time.
And I said, what?
And he said, yeah.
In China, the population is aging dramatically.
The workforce is going to shrink.
You have to look at these things, not as snapshots, but as dynamic systems.
And when you begin to consider what's going on in the world, the advanced world, look at Japan.
Right.
I mean, it's stunning how, I mean, we just did a piece in the Times about
ghost houses, because there's nobody living in them,
because the population is shrinking.
Look at Korea.
Same thing.
China.
The U.S. is an exception because we have immigration.
I mean, that's really the rich irony.
Europe, they're spending a billion dollars
to try to develop elder robots.
So you really need to frame these things dynamically
rather than as a snapshot.
Well, actually, I heard you mention somewhere,
and I wondered if this is the entire world
or only in those regions, that in five years,
there will be more people over the age of 65 alive
than there are children under the age of five.
For the first time in history.
That's insane.
Yeah.
And so what does that mean in terms of...
It means we need robots to take care of us.
I mean, so what I love about this...
Take care of, or take out.
That's the debate right there, you know.
I mean, I spent three years trying to understand the economics of the situation.
And I've come away feeling...
I mean, take one of the best labor economists in the country, David Autor.
I just read this wonderful piece he wrote in 2014.
He's the guy who's basically been most closely
identified with this notion of polarization, that, you know, the middle drops out and the economy
is growing at the top and the bottom. And there seems to be a lot of evidence that that's happening.
And you can make that argument. I had made that argument in my mind that these, you know,
the arrival of the internet coupled with companies like SAP and PeopleSoft and Oracle and IBM
who are re-engineering the white-collar clerk class inside corporations has caused this, you know,
the evaporation, the hollowing out, of modern corporations. I can see it happening in my
own company.
There's a debate, even in the economics world.
I mean, Autor acknowledges, not all economists agree with the polarization idea.
I mean, it's really fascinating.
The economists don't quite know what's going on.
Here's the thing that baffles me as much as anything.
You know, in the late 1990s, there was an incredible productivity boom in the United States.
You saw productivity growth up around 4%.
Since 2008, you know, if the modern theory is that capital's replacing labor,
you should see that kind of productivity growth.
We've seen no productivity growth.
We've even seen negative productivity growth.
And there are all kinds of explanations.
Right, among the popular ones.
Hal Varian.
Right, exactly.
That we're not measuring the right things at all.
Exactly.
Or there's the curmudgeon from Chicago.
Oh, I love him, Gordon.
Yes.
Who was like, look, we just, we failed or we're failing.
But that brings us to Moore's Law.
Because, you know, come on, I grew up in Silicon Valley.
I worshipped at the Church of Moore's Law forever.
You know, things get faster faster,
and things get cheaper faster.
That's gospel in Silicon Valley.
Well, if that's true, you know, you get back to Robert Solow.
Computers are everywhere except in the productivity statistics.
And so it's a great puzzle to me.
And in fact, I mean, there's so many nuances.
I mean, one of my favorite stories:
there are three of the books that are sort of wringing their hands,
without naming them, about the labor situation,
that mention the Instagram versus Kodak situation.
Thirteen programmers at Instagram take out 140,000 workers at Kodak.
You know, great story, except it's not true.
I mean, first, you know, the guys at Kodak made a series of strategic mistakes,
and they shot themselves in the head repeatedly until they were dead.
The proof of that is Fuji, their chief competitor, made it across the chasm, okay?
But it's even more interesting than that, because Instagram, as a company,
couldn't exist until there was a mature Internet.
How many jobs did the mature Internet create?
Probably two and a half million, at least, good jobs.
So on balance, people benefited from that.
Yeah.
So on balance, it's not a simple equation.
Right, exactly.
There's disruption, but there's a lot of growth.
And just, and it's hard to quantify this, but the volume of just photos,
if that's Kodak's business, I mean, just exponentially grows, right, because of things like
Instagram and the technology behind it.
Well, that's mainstream economics.
By and large, you know, productivity growth is supposed to raise all boats.
It improves the quality of life.
Now, there is this question of distribution,
and that's the fly in the ointment.
I would love to see good reporting on what the impact of new technologies has been
on the distribution of income, and I just haven't seen a definitive study.
Right.
I don't think there is one either.
But then, you know, it's interesting.
The title of your book is Machines of Loving Grace.
You're not talking about machines being helpful and saving the world.
You're talking, there's two very specific words.
What do you tell people who are losing jobs to machines?
Is that even happening directly?
Well, sure.
There are people who are being displaced by new types of labor.
But, I mean, what do I tell them?
It has been happening since the Industrial Revolution.
And so it will continue to happen.
I actually don't think that there's evidence that it's happening at an accelerating rate.
Right now, in the United States, more people are working than ever before in history.
There are 140 million people employed.
Even if there's this polarization stuff, that's a lot of people working.
Now, people come back and say to me, but, you know, workforce participation is down.
Okay, that's true.
There are a lot of reasons of which technology is one of them.
It's not the only one.
Aging society is a reason.
And so you get into this stuff and you read the papers and you come out thinking, you know,
I think that the big impacts, the things that are going to really reshape society, haven't happened yet.
Yeah. I want to actually come back to this theme that you've talked about a little bit about acceleration, both in terms of this notion of accelerating, you know, job growth, accelerating job loss.
Acceleration through Moore's Law. One of the mindsets in Silicon Valley that you've described is that we just have this concept that things never plateau, like that Moore's Law is going to keep going, it's going. And every time we think it's not going to keep going, it just does. Something happens, some innovation comes across, some material science advance, whatever it is.
what do you think is going to happen next?
Do you think that's going to keep going in that direction?
So I am absolutely obsessed now with the concept of scaling for kind of an unusual reason,
but I think you'll be able to relate to this.
I'm the one who stumbled across the paper in the Stanford Archives that Doug Engelbart wrote in 1960,
where he looked at this brand new technology of semiconductors,
and he pointed out that it scales.
And he made a very systematic argument about the way the relationship between the components change and the performance change.
And so a lot of the stuff that was codified five years later in Moore's Law in that electronics article, Doug got at.
And, you know, the irony is he had come from, he was an electronics tech in the 1950s at NASA Ames in the wind tunnel.
And that's where the aeronautical engineers used this term, similitude, which was about scale.
It meant scaling.
And they took a small model and they scaled it up.
And he saw that.
When he looked at photolithographic technologies, he realized it scaled down.
That was a profound insight.
For him, it gave him the confidence to realize that there was going to be more computing power and he could go and do his thing and develop his augmentation tools. Moore then codified it.
But what I've realized is, you know, scaling
works both ways. And, you know, the valley used to scale down, be obsessed with scaling down.
Now the valley is obsessed with scaling up. Right. I mean, everybody, everything that you fund here is
about its ability to scale. Is that, is that the shift from hardware to software kind of manifesting
itself? I think it's something broader than that. Moving from hardware, maybe, where it's not
such a hardware-centric world, that's possible. I'm just starting to think about this.
Right. But I have a coda that I want to add on, because I've been, you know, I've made an entire career out of writing about Moore's Law.
You know, not dead yet, not dead yet, not dead yet.
Except that maybe it's dead, but no, it's not dead.
Okay, I'm finally going to call it. The wheels are falling off.
It is in fact not dead. But if you look carefully at what's going on right now, the wheels really are falling off.
And let me just give you a couple of examples.
Dennard scaling, which was really the most important part for Silicon Valley, stopped in 2006.
Just stopped dead.
Clock speed stopped advancing, and so you went to parallelism and all these other kinds of techniques.
But more recently, the interesting things that have happened are, one, for most of the industry, the cost of transistors has stopped falling.
And so as you scale, and this is what everybody but Intel in the industry says,
if you talk to Broadcom, if you talk to Qualcomm, all of the customers of the foundries say, you know, going to the next generation, we're not getting that cost advantage.
That's a big deal.
But you do get the performance advantage.
People have argued that you get a better performance in another way.
It's true, but not in clock.
I mean, clock's flattened out.
And then this other thing sort of chimed in.
There's this phenomenon called dark silicon.
On top of the clock problem, you know, they're so challenged by heat that they can't turn all the transistors on at the same time.
They go through elaborate sorts of algorithms to keep the chips cool.
And, you know, they do this in the mobile devices, particularly.
You lose, you know, you lose the value of all the transistors.
So, okay, where we are: you know, Intel just said that they've delayed going from 14 nanometers to 10 nanometers.
You know, it was tick-tock, tick-tock, tick-tock forever, and now it's tick-tock-tock.
And so when is the next tock going to be?
Well, they say it's going to be 2017.
We'll see.
I believe, you know, based on talking to all the wisemen I've been able to talk to in Silicon Valley,
that there are three more turns of the crank.
It would go from 14 to 10 to 7 to 5.
The industry can see its way to 5.
And there is one more big deal.
If you get to 5 or if you get to 7, it's a significant savings in power.
And so it's not performance, it's not cost, but that turn of the crank on power,
I think that could create new platforms that will create new industries,
and have all that goodness.
So not dead yet, you're right,
but a lot of the driving force has slowed down.
Well, the low-power thing is actually really incredibly interesting
because then you think of developing countries
and the fact that you don't require that much power to power
entirely amazingly huge, large-scale platforms.
Yes.
Which does beg the question that I've heard you ask as well,
which is, is it possible for the next great platform
to come not from Silicon Valley?
So just a bit of history,
because I was spending a lot of time in Europe in sort of 2004 to 2006, and I was over there,
and it seemed like all the mobile innovation was happening in Europe and not in the U.S.
And I thought, wow, we're behind.
This could be a big deal because I was really bought into the fact that mobile was going to be next.
And all of a sudden, here comes the iPhone.
And it resets and it comes, and the next platform comes from Silicon Valley.
But, you know, it seemed to me that there was this moment where a new platform innovation
might come from outside the valley. It's, you know, if Tom Friedman is right and the world is flat, it has to
happen at some point. We don't have a lock on good ideas, and I'm amazed it's lasted as long as it has.
One thing, and for those who don't design chips, the move from seven to five nanometers
just means you can cram more transistors into any given space. Correct. And use less power. And use less
power. And what that also means, though, the consequence, as you're discussing, about platforms, is that
compute goes everywhere, right? These chips, which are already pretty ubiquitous, all of a
sudden, they're in everything. And so, if there's chips in everything, if there's
intelligence in everything, does the definition of a machine change? And are we already
seeing evidence that, like, robots actually don't have arms and say hello to you when you walk in
the room? They're in your light switches. They're in your ceiling. They're in your car.
I so believe that. I mean, we reach Mark Weiser's world then. You know, this was the second
big idea from Xerox PARC. The big idea was, you know, first you had a personal computer, you had one
computer, and then computers basically disappeared into everyday objects and everyday objects became
magic. And I hate to say it, but Steve Jobs was the first guy to get that. You take a microprocessor
and you combine it with a music player and you've got an iPod. You take a microprocessor,
you combine it with a telephone, you've got a smartphone. He was the first guy to really take advantage
of that. But how many everyday objects are there? I mean, you know, Tony Fadell went off and did it
to the thermostat and to the smoke detector and, you know, we'll work through these things.
And I believe that we increasingly will talk to them because that's the most obvious interface.
And, you know, right now people sort of balk at talking to machines, but I see a lot of
indication that as, you know, they're connected to the cloud, recognition accuracy reaches
a human level, speech synthesis is going to improve very dramatically, very quickly.
It just is going to work, and it will be the norm; you know, you won't be able to put an interface on this device, but you'll be able to have a conversation with it.
Right.
I mean, one consequence of Weiser's vision, though, that is interesting, is that his vision of ubiquitous computing was that the computing itself would disappear, and it clearly has, because it's in the object.
And I think, I don't know who said this.
Maybe Chris Anderson, who used to always say, like, AI is going to really be AI when we stop calling it AI.
Like, a toaster is a toaster.
You know, a curling iron is a curling iron.
And you're referring to the Wired Chris Anderson.
Right, right, not the TED Chris Anderson.
So I think there's definitely something different going on right now, which is that, you know, we are actually kind of glorifying those objects.
Because Mark Weiser used to say, like, computing will be so ubiquitous and so cheap and so invisible that you could leave a conference room and leave your pencil behind and never think twice about leaving your pencil behind.
But we don't do that with our iPhones and our objects.
Like, we're actually, those devices are probably more valuable to us than our own children and family members at some points.
I mean, you feel very bereft without your phone.
I just want to, for the record, I'm glad I'm not related to you.
So I think it is interesting because as the computing itself disappears, our relationship with that entity changes and how we fear it or don't fear it.
Well, let me tell you a story.
I agree, and I spend a lot of time thinking about that, and I don't think I've figured it out, but I think it's an important point to look at, and the human computer interface designers are looking at that.
I just wrote a story a couple weeks ago about a Microsoft experiment in China
called Xiaoice. And Xiaoice is a chatbot that is very different from Siri and Cortana. I mean,
Siri and Cortana are productivity tools. You ask it a question, it gives you an answer.
Xiaoice is a conversational companion. And the numbers are just stunning. They've gotten 20 million
users. Ten million of them are intense users who have multiple conversations within a day,
and a conversation might be, it could consist of a wide range of interactions. They call it
toilet time. The kids go into the bathroom, toilet time, go into the bathroom late at night
when they're by themselves and they have conversations.
Twenty-five percent of the Xiaoice users have said, I love you, to the program.
Really? Wow. So what does this mean, or what does it foretell? So the Microsoft guys who
designed this were creeped out by it. And I was kind of creeped out by it because, you know,
this is Sherry Turkle territory. We're having deep relationships with inanimate objects.
What could possibly be wrong with this picture? Okay. I talked to
I think her name is Michelle Liu, who was an IBM researcher.
She's got a startup now.
And she's Chinese.
And she said, you know, when we come to your country, we feel that it's quiet.
She said that in China, they're so much more densely, socially interwoven than we are that something like Xiaoice is actually a way to get private space.
And it's not like, it's not weird in our sense of the word.
It's their way of having time to themselves.
And I'm willing to buy that as a hypothesis at least.
That's fascinating.
You know, I wouldn't mind walking into a bathroom and having a conversation with somebody I don't know and somebody that's not there.
Let's be clear.
I want you to put your hat on as a long time sort of technology business writer too.
This world that you describe, who wins or who benefits most?
Well, one of the things I'm fascinated by is Google's foray into robotics and what it means.
And I have lots of theories.
I happen to have a little bit more insight than some people because of
the reporting I've done on why they were trying to get into robotics.
They haven't said yet, right?
There's, you know, it's kind of, they haven't said anything publicly about robotics.
But when Andy Rubin was going around the valley, around the world, and buying
up, you know, the world's best roboticists, he was sketching out a vision, which I would
describe as Google competing with Amazon.
And, you know, you can kind of see the architecture of that competition, you know,
what is it, the enemy of my enemy is my friend.
Yeah.
They want to set up a distribution infrastructure, which would
allow them to compete without having to build the large warehouses, although they have warehouses
all over the valley already. So that's where the robot comes in. I mean, his vision was the Google
car drives up to your house and the Google robot hops off the back bumper and runs the package
up to your door. And if you've seen the BigDog videos, the LittleDog videos from Marc
Raibert at Boston Dynamics, which is now a Google subsidiary, going up the stairs, you might
believe that that actually was on the agenda. So that's a fascinating competition to me,
although it's not really laid out, but that's one of the places I'd watch. In terms of,
so, you know, one of the things that Rubin said to me in 2005 that got me going in this
direction was that personal computers were starting to sprout legs and move around in the
environment. And it took me a couple of years to figure out what he was talking about. And you
really see it in the DARPA Robotics Challenge, which just finished. And I clearly see that
that's true, but it's one of those areas where it's going to take a lot longer than people think.
I mean, you know, the DARPA Robotics Challenge, which just was finished in Los Angeles, was
real ground truth of where we are in building autonomous machines. Let's forget cars for a second.
Let's just talk about machines that can get out of their cages and walk around.
And here we had 25 teams from all over the world that had millions of dollars of funding,
the best roboticists around, and, you know, they had trouble opening doors.
So, not too worried about them taking over my job anytime soon. Gill Pratt said, you know, if
you're worried about the Terminator, just keep your door closed, because it can't get the door
open. You know, I do think we're focused too much, though, in this discussion, on robots that have
form. I mean, I'm not just talking about humanoid robots that are human-like, but just any
robot that has arms and legs. As you know, to me, the real threat or advantage of
automation and especially AI is in that layer that's invisible to our eyes, you know,
all the things that are powering what Brian Arthur calls the second economy.
Like there's a whole deep, rich, interconnected tree and ecosystem that's completely beneath
the surface.
That's not being counted in productivity stats, but that is literally changing how we operate
and how we work.
What are your thoughts on that aspect of the debate?
Because I think a lot of times when we embody it in human or non-human or machine form,
we kind of lose sight of what we're talking about.
Well, so one aspect of that where you can see it very clearly, I have a hypothesis that sort of fits a Freakonomics way of looking at the world.
So, you know, if you look at where the American economy has grown since World War II, a lot of it is in people who transact business via the telephone.
Tech support, people who are salespeople.
Ticketing agents.
Just vast expansion.
Banks, exactly.
And, because of the falling cost of telecommunication, many of those jobs were outsourced to the Philippines and India.
And now you have this Freakonomics
reverse thing happening where those jobs are coming back to run in U.S. data centers as software,
not as U.S. workers. So that's at the heart of what you're talking about, the unseen economy.
So how far does that go? I'm fascinated by that because I believe that that's one area where
the technologies are moving quickly enough that they will have a job impact. And, you know,
first people will hate talking to machines. And then just like ATMs and bank tellers,
they'll realize that it's a more efficient way to get the information you want. And we will spend much
of our time talking to machines. Exactly. And in fact, I think when we talk about talking to
machines, it doesn't have to always be conversational UI. I mean, there are many ways to talk to
machines, inputs, touch screens. There's also, you know, picking up on sensory cues that are
embedded in our environment. There's so many different things to think about there, definitely.
Do we as people then, you mentioned efficiency, you know, whether it's call centers or warehouses
or whatever, do we have to draw the line ever? Like, okay, that's efficient enough, but we need to
preserve this for humans. Or, like, how does that debate go? Well, so this is why I started. Yeah, it's not,
it's already started. I mean, the reality is, I talked to someone like Terry Winograd, an AI pioneer, when
I was talking about this AI versus IA thing, and he said, you know, we're in a capitalist economy,
and the economic structure dictates these decisions. And, you know, that was very disheartening to me,
because I believe, I mean, you know, I grew up as a Marxist as a kid, and not all of that sort of filtered
out of me. I believe that, you know, well, let's talk about a Stanford economist like
Veblen, who wrote this wonderful book called The Engineers and the Price System, where he basically
argued that sort of all the knowledge about the economy would accrete to the engineers and
they would become the new power elite. He wrote this in like 1913. But I see that potential
again. I mean, so, you know, the designers of these systems have a great deal of power over what
they look like. And that's my one hope. I mean, that's where I'm trying to be optimistic,
that it won't just be about cost, but there'll be human values that'll be put into these systems.
Right. Exactly. And it's, you know, a human responsibility. Maybe I'm, maybe I'm being overly
optimistic. I don't know. Well, you know, let's wrap up and talk about some implications of your
book and your findings over the years. Let's talk about what happens to research. So one of the
things, as we've seen the valley and technology evolve, is that we've moved in cycles from basic
research to applied research back and forth. We see this in NIH funding, NSF funding. What are your
views on where we are in that balance today, sort of the Pasteur's quadrant of investing in the
future? Well, look at the role that DARPA's played in sort of developing the seed corn for Silicon
Valley. I mean, I don't know how many people understand how much of the technologies that have been
commercialized were originally pioneered by DARPA. And you'd think that, looking at the history,
you could just sort of dial it forward and say, well, that kind of spending
probably would be good for us.
But that's not the framing of the discussion right now.
You know, what worries me when I look at a great institution like Stanford
is that Stanford, you know, it's become an entrepreneurial hotbed on a scale that you can't imagine.
You know, there used to be this notion of an ivory tower, and that's so gone.
People pass through on their way to their startup and they'll, you know, maybe come back.
I talked to Hennessy when he was dean of engineering about this.
This was 15 years ago, maybe 20 years ago.
Before he became the president.
Before he was president, and before he retired.
And at that point, he had sort of worked out a model where he'd sort of get enough of
his faculty to hang around that he had a coherent whole.
And it's changed dramatically since then.
I think there's, and actually, you know, I've spent time at Carnegie Mellon,
and it's almost, in an interesting way, refreshing to go to Carnegie Mellon, because not everybody's
just getting ready to start their next company, which is
the feeling you have at Stanford. I mean, there are people who are actually, you know,
interested in computer science and they want to stay there. And that's true at Stanford. There are
people who are, you know, computer scientists. And there are people who have gone out and started
companies. And it really, it's a really viable system. But I think that there is a value to
long-term research and that our society undervalues that. And I don't know how to reset it. Do we
need another Sputnik? Probably. I mean, that's what got DARPA started. There was this event that was
an existential threat to the country and, you know, the country responded and it had this
incredible impact. And so, but, you know, that's a kind of a grim scenario if it's going to
take, it's going to take that. You know, when I began thinking about some of these labor
technology challenges, at the end of the day, I don't see any way out of this conundrum but through
education. If we're moving toward an economic system where you can't expect to do what someone
like I have done, where you stay in a job for your career, but instead you have to have a series of jobs,
then it doesn't seem like socialism to want to fund education as the raw material for the
economy. And so maybe you need to, I mean, we've done it before as a society. The GI Bill had
this incredible impact in terms of making America a competitive nation in the wake of World War II.
So let's take a look at that. And, you know, maybe we need an educational version of Obamacare
where if someone loses a job, they can simply retrain themselves.
And, you know, society will just fund them to do that.
I don't see what's so terrible about that.
And I think that, you know, if you had that to fall back on and you could, you know,
you could go in a different direction, it really might change the nature of the game.
And it would create this raw material for an economy that was changing and dynamic.
And, you know, all boats might rise.
What do you think then about, like, Peter Thiel's view on kids dropping out of college to become entrepreneurs?
Yeah.
So, you know, how to respond to that.
At the Bits and Atoms Center, there is a physicist whose name I'm blanking on right at the moment,
who basically styles himself as the anti-Peter Thiel.
I'll think of his name as soon as we get off the air.
And he has a series of students who've studied sort of serious graduate-level physics
and who are in incredibly important CTO-like roles all over the valley,
probably 20 of them, some of the best and brightest in Silicon Valley.
So, you know, you get back to my notion of wanting to be an entrepreneur before you have an idea.
I think that's the cart before the horse.
Oh, right.
So about that notion.
So this is a notion where you talk about how Steve Jobs and Gates dropped out of college in order to build these companies, but that's because they had the vision.
They didn't decide they were going to be entrepreneurs and then adopt the vision.
And I also have this, a slightly different notion than many people about Silicon Valley,
which is, you know, that it's not just entrepreneurially driven.
I mean, two things that I've noticed over the years.
Look at Jobs and Wozniak.
I mean, to me, that's the canonical example of a Silicon Valley company.
One guy who just wanted to build a computer so he could share it with his friends at the Homebrew Computer Club.
He was not thinking about markets.
He wasn't thinking about anything but the fact that he was in love with the technology.
And another guy who realized that there might be a market for that.
I can't tell you how many times I saw that in Silicon Valley.
And for me, the moment that I got it, you know, my generation of Silicon Valley, was slightly after the introduction of the IBM PC.
Immediately after it was introduced, there was something called the Big Blue Computer Club that met at Dyson Auditorium in Sunnyvale.
We're talking in 1981 or 1982.
And I went to one of these meetings.
And at that point, it was a room full of mostly, almost all, men wearing white shirts with pocket protectors who were working in their day jobs in the valley.
And Evelyn Richards, who was a reporter for the San Jose Mercury News,
sat up on stage and did a survey.
Evelyn had an amazing amount of chutzpah.
And she said, how many of you are planning on starting your own companies?
And two-thirds of the hands went up.
And I said, oh, I get it.
There was a culture that believed if you had a good idea, that you
could be the next Steve Jobs.
That was really already woven into the drinking water in Silicon Valley.
Right.
Well, it's kind of what's been around since day one, because this dichotomy between academia, government, and entrepreneurialism through startups, it's kind of false when it comes to Silicon Valley. I mean, its boundaries have always been really permeable.
I want to sort of come back to the beginning, and, well, almost the beginning, where you were talking about scale. And I just wonder, going forward, and for non-engineers, for people who are out there in the workplace, you know, as Silicon Valley scales across the world, as these products go everywhere, and again, as automation happens and robots come more and more into our lives, what would you advise people to think about, and how they move through the world, when they adopt these machines, or when they're faced with automation? You know, you seem optimistic, but it also sounds to me that people have a big role to play in, like you say, designing this future. But the people who aren't designing the things themselves, what role do they play?
Yeah. So, you know, as a citizen, you know, I see, I mean, I'm really torn, because you see all kinds of things that worry you.
You see, I mean, one of the things that I worry about in the Internet of Things kinds of impact is there's a, you know, the Facebook and Google explosion has been based on this kind of Faustian bargain where, you know, they give me these wonderful services and what do I give them?
Well, I give them behavioral information about my life.
And it's, you know, it's troubling to me because it's not clearly
demarcated. I think, you know, there's been a lot of discussion about it, but no
answer. And so I guess I would say to a citizen who's not, you know, a technologist or
a designer, you know, I guess educate, educate, educate, you know, learn more about the
technology, understand what it does. It's not magic. You know, you just get a little bit deeper
and try to understand what's wrapping around you. You're going to need to because it's going
to completely wrap around you very quickly. John Markoff, if our robot overlords let us talk to you
again, we would look forward to it.
So thanks so much for coming.
Thanks for joining the a16z Podcast.
Thanks.