Behind The Tech with Kevin Scott - Behind the Tech: 2019 Year in Review
Episode Date: December 20, 2019
To wrap up the year, we revisit podcast conversations with a few of the incredible thinkers and innovators who have been on the show. Topics range from the democratization of artificial intelligence to the challenges of social media and the internet. You may also hear some surprising things these folks do in their free time when they’re not changing the course of tech history in some fascinating way.
Transcript
All of us every day should be asking,
how is it that the thing that I'm doing right now is
going to positively accrue to the overall social good?
If you can't answer that question in an affirmative way,
then maybe you're doing the wrong damn thing.
Hi, everyone. Welcome to Behind the Tech.
I'm your host, Kevin Scott, Chief Technology Officer for Microsoft.
In this podcast, we're going to get behind the tech.
We'll talk with some of the people who made our modern tech world possible and understand what motivated them to create what they did.
So join me to maybe learn a little bit about the history of computing
and get a few behind-the-scenes insights into what's happening today. Stick
around.
Hello, and welcome to a special episode of Behind the Tech. I'm Christina Warren, Senior
Cloud Advocate at Microsoft.
And I'm Kevin Scott.
And today, we're going to change things up a little bit with the format and do a Behind the Tech year in review.
Yeah, exactly. And so what we're going to do is we're going to basically highlight a few of our fascinating guests. We're going to replay some of our favorite quotes. We're going to share
some surprising moments, maybe even a few laughs. First, we're going to hear from Stanford University
scholar Andrew Ng about topics like the democratization of AI. We'll revisit our
conversation with Jaron Lanier about the challenges of social media and the quote-unquote downfall of democracy.
And we'll hear from a few more of our guests about the crazy things they do when they're not,
you know, like changing the course of tech history or something.
Awesome.
With the rise of technology, often comes greater concentration of power in smaller numbers of people's hands.
And I think that this creates greater risk of ever-growing wealth inequality as well.
To be really candid, I think that with the rise of the last few waves of technology,
we actually did a great job creating wealth on the East and the West Coast,
but we actually did leave large parts of the country behind.
And I would love for this next one to bring everyone along with us.
So that was Andrew Ng.
Andrew is founder and CEO of Landing AI, founding lead of the Google Brain Project, and a co-founder of Coursera.
He really is a true leader in artificial intelligence and machine learning, and I can't agree with him more.
He's so smart. And that actually led to a really interesting discussion about computer
literacy and bringing AI not just to tech companies, but to all industries.
So here's more from Kevin's conversation with Andrew Ng.
Once upon a time, society used to wonder if everyone needed to be literate.
Maybe all we needed was for a few monks to read the Bible to us,
and we didn't need to learn to read and write ourselves
because we just go and listen to the priests or the monks.
But we found that when a lot of us learned to read and write,
that really improved human-to-human communication.
I think that in the future, every person needs to be computer literate
at the level of being able to write at least simple programs
because computers are becoming so important in our world, and coding is the deepest way for people and
machines to communicate. There's such a scarcity of computer programmers today that most computer
programmers end up writing software for thousands or millions of people. But in the future,
if everyone knows how to code, I would love for the proprietors of a small mom-and-pop
store at a corner to program an LCD display to better advertise their weekly sale.
So I think, just as with literacy, we found that having everyone be able to read and write
improved human-to-human communication. I actually think everyone in the future should learn to code
because that's how we get people and computers to communicate at the deepest levels.
Yeah, I think that's a really great segue into the main topic that I wanted to chat about today, AI,
because I think even you have used this anecdote that AI is going to sort of be like electricity.
I think I came up with that.
Yeah, no, this is your brilliant quote. And it's sort of spot on. You know, the push to literacy in many ways is a byproduct of the second and third industrial revolutions. You know, we had this transformed society where, like, you actually had to be literate in order to, you know, sort of function in this quickly industrializing world.
And so I wonder how many analogs you see between the last industrial revolution
and what's happening with AI right now.
Yeah, the last industrial revolution
changed so much human labor.
I think one of the biggest differences
between the last one and this one
is that this one will happen faster
because the world is so much more connected today.
So wherever you are in the world listening to this,
there's a good chance that there's an AI algorithm
that's not yet even been invented as of today,
but that will probably affect your life five years from now.
I love this image that he evokes
of a mom and pop store programming their own LCD sign.
If we really are going to get to this vision of powerful technology shaping the future, we really have to have as many people as humanly possible involved in the creation of the technology. And, you know, I think one of the ironic things about AI is that
it can be one of those democratizing influences. Like, ironically, even though we sort of think
about AI as this just very arcane art that only a few people can practice, the reality may be over
the next handful of years is that AI itself may facilitate more people being
able to participate in the creation of technology than ever before. It might make programming
actually easy. Yeah, it's really, really interesting to think about how, you know,
instead of taking jobs away, it could actually maybe even create new careers and create new
opportunities. Yeah, I mean, one of the things that we chatted about in our conversation is that
we have a whole bunch of costs of subsistence, like health care, for instance, that are growing faster than the gross domestic product.
And obviously, that's not sustainable over time.
The end state there is like we end up spending our entire national wealth on health care, which is probably not a great thing for society. And one of the really interesting things,
and we're seeing this across a range of diagnostic applications, is that AI is allowing us to
provide cheaper and more ubiquitous access to health care technologies. Like, for instance,
there are AI apps right now that are able to read EKG charts,
which is just a fraction of a cardiologist's job, but able to read charts with clinical levels of
accuracy, which then doesn't take away cardiologist jobs, but potentially frees up
cardiologists to do higher value things. And, you know, the thing that I get really excited about
with things like that
is like whenever you go to your general practitioner and get a checkup every year,
you don't have a cardiologist like giving you like a full diagnostic and examination of your
cardiovascular health. But with these technologies, you could potentially have some sensors in a box
sitting in your general practitioner's office that could basically do a full cardiologic exam,
which would be really awesome and something that you couldn't do without the technology.
Like it's just economically infeasible to have that many cardiologists spending their time that way.
And you actually talked with Andrew about whether or not he's seeing those trends in other
domains. Let's take a listen. I think that there'll be a lot of partnerships between AI teams and
doctors that will be very valuable. You know, one thing that excites me these days with the theme of
things like healthcare, agriculture, manufacturing is helping great companies become great AI
companies. I was fortunate really to have led the Google Brain team,
which became, I would say, probably the leading force
for turning Google from what was already a great company
into today a great AI company.
And then at Baidu, I was responsible for the company's
AI technology and strategy and team,
and I think that helped transform Baidu
from what was already a great company
into a great AI company.
And I think, really, Satya did a great job also transforming Microsoft from a great company
to a great AI company.
But for AI to reach its full potential, we can't just transform tech companies.
We need to pull other industries along for it to create this GDP growth, for it to help
people in healthcare, deliver safer and more accessible food to people.
And so one thing I'm excited about, building on my experience helping with Google and Baidu's transformations, is to look at other industries as well, to see whether, either by providing AI solutions or by engaging deeply in AI transformation programs, my team at Landing AI can help other industries also become great at AI.
Well, talk a little bit more about what Landing AI's mission is.
We want to empower businesses with AI.
And there is so much need for AI to enter other industries than technology.
Everything ranging from manufacturing to agriculture to healthcare to many more.
For example, in manufacturing, there are today in factories, sometimes hundreds of thousands of people using their eyes to inspect parts as they come off the assembly line to check for scratches and things and so on.
And we find that we can, for the most part, automate that with deep learning and often do it at a level of reliability and consistency that's greater than the people are.
People squinting at something 20 centimeters away your whole day,
that's actually not great for your eyesight, it turns out.
And I would love for computers rather than often these young employees to do it.
So Landing AI is working with a few different industries to provide solutions like that.
And we also engage with companies with broader transformation programs. So for both
Google and Baidu, it was not one thing. It's not that, you know, you implement neural networks for ads and suddenly you're a great AI company. For a company to become a great AI company is much more than that.
And then having sort of helped two great companies do that, we are trying to help other companies as
well, especially ones outside tech,
become leading AI entities in their industry vertical. So I find that work very meaningful and very exciting. Several days ago, I tweeted out that on Monday, I actually literally woke up at
5 a.m. so excited about one of the Landing AI projects. I couldn't get back to sleep. I started
getting up and scrolling on my notebook. But I find this work really, really meaningful.
You know, when we talk about AI,
and you've had so many great conversations
with people about AI,
one of the big things that comes up
is how to make sure that, you know,
we're doing this the right way.
Yes.
And we've had a bunch of conversations
about that this year.
So one of them was with Fei-Fei Li.
Whenever humanity creates a technology as powerful and potentially useful as AI, we owe it to ourselves and our future generation to make it right.
So that was Fei-Fei Li, a pioneering researcher in AI. Fei-Fei is an incredible computer science researcher.
So she helped create, in large part, the entire discipline of modern deep neural networks.
So she created this thing called ImageNet in 2011, I think.
And ImageNet was the thing that allowed all of the breakthroughs in image classification
that everybody's so excited about now to actually happen. So she's a computer science professor at
Stanford University. And right now, perhaps she's best known as the co-director of the
Human-Centered Institute for AI at Stanford. I was actually lucky enough to be part of the
advisory council that helped establish the institute. So I asked her to tell us about some of the work that's going on there.
So first of all, I think the institute that both of us are involved in is really laying
out a framework of thinking about this.
And the framework is human-centered, is that from the get-go,
from the design and the basic science development of this technology all the way to the application
and impact of this technology, we want to make it human-benevolent. And with this framework in mind,
we have at Stanford, this institute works on three principles, founding principles to cover
different aspects of human-centered AI. The first principle is actually what we've been talking
about, is to continue to develop AI technology, basic science technology, that is human-inspired,
betting on the combination of cognitive science, psychology, behavioral science, and neuroscience to push AI forward
so that the technology we will be using has better coherency, or better capability, to serve human society.
So that's the first principle.
Second principle is I would love to hear your
thoughts. You know, you and I are trained as a generation of technologists that the technology
is solidly considered an engineering field or computer science field. But I think AI really has turned a chapter.
AI is no longer a computer science field. AI is so interdisciplinary today. In fact,
some of the most interesting fields that AI should really contribute and also welcome to
join force are social sciences and humanities. And at Stanford, we're already seeing the collaboration
between AI researchers with economists, with ethicists, philosophers, education experts,
legal scholars, and all that. To do this, our goal is to understand what this technology is
really about, understand its impact, but also forecast, anticipate the perils, anticipate
the pitfalls, anticipate unintended consequences, really with the eventual goal of guiding
and recommending policies that are good for all of us.
So that's the second principle, really understand, anticipate, and guide AI's human and societal
impact.
The third and the last but not the least principle
is something I know you and I feel passionate about,
is really to emphasize the word enhance instead of replace.
Because AI technology is talked about as a technology to replace humans.
I think we should stay vigilant about job displacement and labor market.
But the real potential is using this technology to enhance and augment human capability,
to improve productivity, to increase safety,
and really, eventually, to improve the well-being of humans.
And that's what this technology is about. And here we're talking about healthcare.
Another vertical that we put a lot of passion and resource in is education, sustainability, manufacturing, and automation.
These are really humanly and societally important areas of development.
Well, just sort of sticking with health care and like your elder care example, like this
is something that I don't think a whole lot of people spend time thinking about unless
they're taking care of an elderly parent or relative.
Like we're not thinking about like how systemically we can make the lives of elderly people better.
And like we're certainly not thinking about the big demographic shifts that are about to come.
It's going to come globally.
Yeah, globally.
I mean, you and I have chatted about this before, but, you know, we sort of see this in almost all of the industrialized economies, but also in Japan, Korea, and China.
Yeah, absolutely.
You have this very large bubble of working age population that's getting older and older.
And we just don't have high enough fertility rates in these younger generations to replace it.
So at some point, like we across the entire world, we're going to have far more old people than we will have working age people.
And you have like a couple of big questions when that
happens, like who takes care of all the old people and like who's going to do all the work. And
it's actually not far enough away that we can afford not to think about it.
2035, I think, is when the last baby boomers, the youngest, become the aging population. We'd have to find the actual number, but we're very close to that.
And also, to do this research in aging population,
I spent a lot of time in senior homes and senior centers.
One thing I learned as a technologist is that we should really develop
the kind of empathy and understanding of what we really are working on and working for.
For example, I cannot tell you how many Silicon Valley startups are there to create robots as
senior companions. And when some of them feel robots can replace family, nurses, friends, I really worry. And I really want to encourage these entrepreneurs to
spend a lot of time with the seniors. One thing I learned about well-being with the aging
population is that dignity and social connection are the biggest parts of aging. And so my dream technology is something that you don't notice, but it's
quietly there to help, to assist, to connect people, to ensure safety, rather than this big
robot, you know, sitting in the middle of the living room and replacing the human connectivity.
Yeah. It's really funny that you're bringing all of this up. I'm writing a book right now on why I think people should be hopeful about the potential of AI, like particularly in rural
and middle America. And for the book, I went back to where I grew up in rural central Virginia in
like this, you know, very small town. And I visited the nursing home where three of my
grandparents spent the last chunk of their life.
And I was just chatting with some of the people there.
And I asked the nurses and the managers in this place, like, you know, what do you think about AI?
And, like, when I say AI, like, the vision that conjures is like, oh, there's going to be some human equivalent android coming in.
And they'd be like, no, the residents would be terrified by this thing.
Whereas, like, they've got a bunch of things, like dispensing medicine, for instance.
Like, you know, when you're elderly, like, you're taking this, like, complicated cocktail of medicines and, like, getting it dispensed in the right amounts at the right time through the day,
making sure that you actually take the medicine.
Like, that's a problem that we could solve with AI-like technologies, like, you know,
combination of robotics and computer vision.
But it wouldn't be like this talking, walking, you know, robot.
It would be like a set of things that sort of disappear into the background and just sort of become part of the operation of the place.
And like that, I think we should have more ambition for that sort of thing rather than this, you know.
That's why Stanford HAI wants to encourage that.
The best technology is you don't notice the technology, but your life is better.
Yes.
That's the best technology.
I could not agree more.
That conversation with Fei-Fei was amazing, and it makes me really happy that we have people as technically brilliant as her who also are like thinking in a deeply conscientious way about how
AI technology and technology in general are developing.
I couldn't agree more. I think it's really imperative that we don't just have, as you said,
the technical minds,
but people who are thoughtful and are thinking about the implications and how to, as we said
kind of at the top, do these things the right way.
Yep.
So next up, we're going to revisit our conversation with Reid Hoffman.
And Reid is an angel investor.
He's the co-founder and executive chairman of LinkedIn.
He's the author of three books, including his most recent one, which is called Blitzscaling.
And he's now a partner at
the venture firm Greylock. And I know he's also a good friend of yours, Kevin.
Yeah, he's one of my best friends. And so in my conversation with Reid, which are always
delightful, we talked about a book by Robert Wright called Non-Zero, The Logic of Human Destiny.
It was sort of funny, like I read this book a while ago when I was in grad school,
and it was one of the things that Reid and I bonded over because he, too, had read the book early on and was a big fan of it.
And so I asked Reid to explain to us this notion of non-zero-sum.
So, fundamentally, it's roughly does the pie grow or not.
Right.
Right.
And in a zero-sum game, it's whatever I
win, you lose. So, if there's 100 units, if I get 52, you get 48. If I get 55, you get 45. Right.
A non-zero-sum game is we figure out a way that, well, actually, I may still get 55,
but maybe you get 50. Right. Right. And so, it's the pie grows. Right. And I actually think one
fundamental part of ethics that should go across all ethical systems, all value structures is we should prefer non-zero-sum games.
Correct.
Right?
Yeah. Like you should pick important zero-sum societal problems and try to figure out how to put a set of incentives and policies in place to convert those zero-sum social good things into non-zero-sum games through the use of AI. And like healthcare potentially is one of them where we have this very interesting system right now where we have a finite amount of our gross domestic product we can spend on healthcare.
You can spend all of it.
I think you're the one who actually said this to me.
You can spend 100% of GDP and still not solve the fundamental problem, which is we're mortal.
And so it's this terrible zero-sum game in a sense, because you're
always having to decide, you know, what the trade-offs are and like anything that we can do
technologically to create abundance in this system, to like get better diagnostics cheaper for
everyone to like be able to, you know, sort of influence things in a way where you're getting better healthcare
outcomes for everyone, we should be doing. And it's hard to do without some flavor of
technology, whether that's AI or something else. Yeah. That's what progress looks like.
Right. That's the universe we all want to be in because we'll all be better off.
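Reid's pie arithmetic can be sketched in a few lines. The split figures (52/48, 55/45, 55/50) come straight from the conversation; the `pie_size` helper is just for illustration:

```python
# A tiny sketch of Reid's zero-sum vs non-zero-sum pie arithmetic.

def pie_size(split):
    """Total units available in a game, given each player's share."""
    return sum(split)

# Zero-sum: the pie is fixed at 100 units, so whatever I win, you lose.
for my_share, your_share in [(52, 48), (55, 45)]:
    assert pie_size((my_share, your_share)) == 100

# Non-zero-sum: I may still get 55, but you get 50, because the pie grew.
grown = pie_size((55, 50))
print(grown)  # the pie grew from 100 to 105 units
```

The point of the sketch is that in the non-zero-sum case neither share had to shrink for the other to grow, which is why Reid argues we should prefer these games.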
So Reid is also a real thought leader in AI.
I asked him what he's thinking about right now
in terms of how we should be influencing AI
in academia and industry
to produce positive outcomes for society.
Let's see, two things.
So one is to realize that
while our natural reflexes come from evolution
are always to go to fear and worry first,
to realize that there's massive opportunity and that we're playing for that.
And that opportunity could be like what you were mentioning earlier, medicine. It can actually, in fact, transform a whole bunch of productivity; it can do anything from what, like, your book will be describing, which is to allow creativity and innovation in, like, rural towns and all that.
Like there's an enormous amount of great things that could happen.
Right.
We should play for those.
And we should not lose sight of that's what we're playing for.
Right.
And so, that's kind of a, like, how do we maximally benefit?
How do we become better, public intellectual, through this?
And then, think about, like, okay, how do we steer around, not stop, but how do we steer around the pitfalls?
Right.
Like, what do we say?
Well, look, if we're going to have a whole bunch of displaced driving because of autonomous vehicles, what are
our ideas about what to do about that and to have a healthy society where people have paths to better
lives? And what are the ways we help with that? What are the ways we facilitate? And so, part of
the reason why, you know, you were part of the task force, thank you, and part of the advisory
council for the Stanford Human-Centered AI is like, well, how can we catalyze across industry, across government, across academia to say these are the ways that we steer towards good outcomes and avoid bad outcomes?
That was such a great conversation.
If you haven't heard the whole episode, I definitely encourage anyone who's listening now to go back and listen to that because there was so much good stuff there.
Yeah. And conveniently, if you want to hear more from Reid, he has this amazing podcast called
Masters of Scale, where he talks with a whole bunch of the really great entrepreneurs and
technologists in Silicon Valley about how they have solved different aspects of the scaling
problems for their companies as they were going from just an idea to something that was amazingly impactful.
Yeah, it's a great podcast. So check out Masters of Scale as well.
One of the themes that we've been grappling with over the past couple of years and that
played heavily in many of our podcast recordings is about this narrative we're telling about AI.
You know, is it the portrait of Rosie in The Jetsons or Data in Star Trek: The Next Generation? Or is it HAL in 2001:
A Space Odyssey or the Terminator in the Terminator movies?
No, exactly. I mean, and that's, it is a really interesting conversation we've had,
because we've talked about the need for positive stories about AI to balance some of the doom and gloom that,
you know, is arguably maybe more narratively interesting, but maybe not so good for how we
want to be thinking about this kind of game-changing technology that's going to impact
all of us. Yeah, we are, at the end of the day, the stories we tell, and, like, we need to be
telling some positive stories to help push the development of AI in a positive direction.
So I had this great conversation with Sam Altman, who's the CEO of OpenAI.
He ran Y Combinator before OpenAI and started his first company at the age of 19.
I really think of him as an entrepreneurial prodigy.
He says one of the most valuable classes he took during college was a creative writing course, which is really interesting for someone who is the CEO of like one of the deepest technology companies in the world.
So the conversation that you're about to hear is us chatting about the arc of the development of AI from the time when he first started programming to some of the most advanced stuff that's happening today at the very cutting edge of AI research and development.
So when did you when do you start thinking about AI?
Well, as an undergrad, when I was 18, I made this list of things I wanted to work on, and
AI was the top, but I took the AI classes at Stanford, and it was clearly not working.
And why when you were 18?
So you were 18 when?
Well, I think, like, 2003? I was born in '85.
Okay.
So AI in 2003 was not what it is now.
Well, I think everybody, like most, everyone who grew up reading sci-fi like wanted to make AI.
Like this is kind of, it just feels like we're all on this inevitable path and that's where it's going and it's like the most interesting thing to work on.
But it just didn't feel like there was an attack vector.
And then in 2012, it started to work.
And then in 2015, which is when we started talking about creating OpenAI,
which we started in early '16,
it felt like not only was it going to work,
but it might work much better and much faster than we thought because there had been this one trend of just scale things up
that kept working.
And again, this has been like, I mentioned it's been like the central learning of my career.
The asterisk to that, though, is that humans have not apparently evolved well to guess how exponential curves are going to play out.
Yeah.
And so when you scale these things up, if they're getting like, you know, doubling every year
in the case of AI,
maybe 8x every year,
we don't have good intuition
for that.
And so people are never
bullish enough
if the curve is going to continue.
And so I was like,
huh, maybe this is
really going to work.
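A few lines of arithmetic make Sam's intuition gap concrete. The 8x-per-year growth factor is his off-hand estimate in the conversation, not a measured number, and the starting capability of 1 is arbitrary:

```python
# A quick sketch of why exponential curves defeat linear intuition.

def capability(years, factor=8):
    """Capability after compounding factor-x growth for `years` years."""
    return factor ** years

for year in range(1, 6):
    print(f"year {year}: {capability(year)}x")

# After 5 years of 8x-per-year growth the system is 32,768x more
# capable, while a doubling-every-year intuition predicts only 32x.
assert capability(5) == 32768
assert 2 ** 5 == 32
```

That thousand-fold gap between the two guesses is the sense in which, as Sam puts it, people are never bullish enough if the curve continues.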
But AI is like a tricky,
a tricky thing,
you know, in the sense that
the term artificial intelligence, like, wasn't really
coined until the Dartmouth workshop in, what, '55, '56? Something like that. And they thought they
were going to get it done that summer. Oh, yeah. They were completely convinced. Like, if you read
those documents, like, they had this list of things, and they were just sort of convinced
that the progress was going to be much faster than it actually was. And, like, we have had a couple of booms and busts now, you know, where you can actually
go to Wikipedia and look up AI winter and, like, the bust has a name.
So, you know, one of the things, and I'm just, for what it's worth, like, I'm in the
optimist's column here.
Boom and busts are the way of the world.
Like, you know, we talked earlier about
startups, like we had a lot of booms and busts there,
but the curve, though it squiggles,
if you zoom out enough,
goes up and to the right. Yep. And
the curve of
computers getting smarter does too.
Now, how much further we have to go when we're going to get there,
very hard to say. What I
can say with confidence is, maybe
the current trends don't get us all the
way to general intelligence, but they're going to get us surprisingly far. They're going to change
the world in very meaningful ways. And maybe they go all the way.
Yep. And so, like, I'm interested to go back to this whole creative writing thing, because like,
I think the storytelling around AI is like one of the really, really interesting things right now.
Like, getting – because you guys – so, OpenAI is a nonprofit organization that is committed to, like, realizing artificial general intelligence.
And for having the value that AGI creates sort of accrue to the public good.
To be clear, we have not figured out the storytelling yet.
I agree it's really important.
I think about this stuff all day.
I can barely in my own head think clearly about what the world really does look like if AGI happens.
You know, all of the stories I can tell are either like too mundane or too grandiose.
It's like either like, oh, medicine gets better,
or it's like sentient beings colonize the universe until the heat
death, and sort of neither of those quite feels right.
And people get really, really, you know, I know one of the things that
you've said is, you know, something about, you know, the light cone
of all. People don't like that. Yeah. And like people get really
upset about, you know, those things, which sort of makes them miss all of the really concretely useful things that this stuff is going to do with 100% predictability over the next few years.
If you're doing anything interesting, you're going to have a lot of haters.
And you may as well say the thing you actually believe.
So I could try to figure out exactly how to calibrate this somewhat dishonest version of what I believe the future is going to look like.
Or I could just say, here's what I actually think.
It might be wrong, but here's what I genuinely think, and not try to under or oversell it.
And that's what I actually think.
So why do you think that?
It is possible that there is a very...
Actually, I don't even think it's that unlikely. I think there is a reasonable chance that there is something very deep about
consciousness that we don't understand, or we're all Boltzmann brains and none of this is real, or whatever.
But if not, if physics as we understand it works and everything is just sort of an emergent
property in our brains of this very large computer we have, then I think that will be
replicatable in silicon.
And, like, I still think that's the more likely outcome.
All right.
So we've been talking a lot about AI in society,
which is obviously really important.
And there's been a big trend, I think, not just on our podcast,
but on a macro level in all of tech this past year.
But let's extend that conversation into social media and
the internet. And so here are a couple of sound clips from our guests, beginning with Danny Hillis.
Social media was never by itself the thing that was going to advance humanity, and nor is it the
thing that's going to destroy humanity. It's actually a tool that we're going to learn to use with time, like we learned to use fire.
If you're talking about failure that is not just inconvenient but a failure of democracy, we're not allowed to just say, oops.
We have to be thinking about the consequences of the technologies that we are bringing to market, because they are now in the center of everything in our lives.
I'm worried about where we are right now.
I just feel like our present internet is destroying societies and democracies and economies.
You know, I think it's bringing down civilization.
It's bad.
We really screwed this thing up.
So that was Danny Hillis, the pioneer of supercomputing, who created the company Thinking Machines.
And then we heard from Judy Estrin, who is one of the original co-creators of the Internet, which is just amazing.
Amazing.
Like, I mean, that's so cool.
That's, yeah, that's unreal. And finally, we heard from Jaron Lanier, who is basically considered by many people to be the father of virtual reality.
Like, I think he actually coined the term virtual reality.
He did. And so looking at this list of guests, it's sort of shocking the people that we got to talk to, like the person who invented a big chunk of modern computing and these amazing supercomputers. Like one of the people in the labs creating the internet protocols
and the person who played perhaps the biggest role
in the creation of virtual reality.
Sort of crazy.
It is.
You have really good friends, and we're all really lucky
that we get to basically leech off of your friendships
for these great conversations.
So the interesting thing about Jaron in particular is that, in addition to being an amazing computer scientist and inventor and one of the people who helped create virtual reality, he is also a musician and an author.
And today spends a bunch of his time thinking about humanism and sustainable economics in the context of all of the technology that we're using in our day-to-day lives.
I'm not sure if I coined data dignity, by the way. I think either Glen Weyl or maybe even
Satya Nadella did. I'm not sure. Digital dignity was a term that was going to be the title of Who
Owns the Future, but the editor didn't like it, so it turned into Who Owns the Future. At any rate, so this is a whole long tale as well.
In the 80s and 90s, there were a couple of really vociferous, intense movements within hacker culture, within technical culture, about what networking should be like whenever it really comes about.
One of them was this idea that everything should be open and free,
and that was started from a number of sources.
One of them was a guy who was a friend of mine, Richard Stallman, back in Boston,
and there were a few other source points for that as well.
And then another was this kind of intense libertarian philosophy
that government shouldn't be involved; we should leave everything to entrepreneurs.
And in the late 80s and early 90s, I ended up spending time with somebody named Al Gore,
who's at that time a senator from Tennessee. He eventually became a vice president.
And he was really interested in these things.
And he came up with this idea of throwing some government money at people with nascent packet switch networks to bribe them to become interoperable. And that was the internet.
So that was funded by the Gore bill. And so we used to debate like what this thing should be.
And because of those two extremely intense dogmas, there was this feeling, well,
it'll be minimalist. It won't have accounts, for instance. It won't represent people.
That'll be left to private industry. There won't be any persistent data on it. That'll be left to
private industry. There won't be any transactions on it. That'll be left to private industry,
and on and on and on. There won't be any memory on it. There won't be any contextualization on it.
That'll be left to private industry. And I remember saying to him, you know,
we're creating this gift of many hundreds of billions of dollars to persons unknown because there will be natural network monopolies that form to fill these obviously needed functions.
But whatever.
That was – there was just this feeling that that was the better way to do things.
And since the experiment wasn't run the other way, we don't know. But then the other one, that everything should be free, I think set us down a terrible
path, because it feels all socialist at first, you know, like this friendly
socialist lefty thing. But since it's being mixed with this libertarian philosophy, you end up with
only one possible business plan, which is advertising. So everything feels free, but
actually the money is made by third parties who want to influence the users using user data. And it ends
up, it starts cute and ends up evolving into this sort of monstrous universal behavior modification
scheme. Anyway, this is the stuff I talk about all the time where I think we've gone wrong
and we've created a network that's more about deception than it is about reality.
So what do you think we can do about that?
Well, we're kind of in a pickle now, to use an expression from when I was a kid. It's tricky. I
mean, there are a lot of schools of thought about it. I think we can't combine socialism
and libertarianism in the awkward way we did and
expect to get anything useful. And I think we should just choose one of them. And I personally
think we're better off choosing markets. You've been working on a bunch of concrete things to try
to figure out how to introduce these new incentive structures.
Can you elaborate on that a little bit more?
Yeah.
Well, the problem is how to get from here to there.
I kind of have in my head an image of what a society would be like with paid data.
There's a few things to say about it. One is, there are a lot of people out
there who pretend to be making a living online, but aren't because they're fakers. It's all a
big illusion. It's what we used to call a Horatio Alger illusion, where you create this illusion
that there's this way
of making a living when in fact there isn't.
It's only for a token small number of people.
However, there's another population of people out there who are making a living, but not
within the rules dictated by a central hub, but as actors.
For instance, there are tens of millions, maybe, well, we don't know the total number,
but at least 50 million people in the world who are actually making a living delivering online video lessons and counseling and guidance.
And, you know, this is anything from legal consulting to religious training to yoga teachers to musical instrument teachers.
All those people have sort of cobbled together something that has to fight against the grain
of everything because there's no infrastructure to support them.
There's no infrastructure.
So each one of them has had to invent their own infrastructure by cobbling together little
pieces from the different digital companies.
And that population interests me.
In a way, I see them as the future.
Those are the people who don't have to worry about their jobs being taken by robots unless, I mean, they could be. All we have to do is create some machine learning thing that
steals all their data and makes a fake clarinet teacher without paying them for their data,
just stealing their value. And that's what we've done in so many other areas.
So the future I would see is to, first of all, try to support, to identify those groups and support them.
And also identify those communities that are trying to create new structures to help people cooperate in decentralized ways.
And here, the blockchain community, not the Get Rich Quick blockchain, but the other blockchain, the blockchain of people who are interested in new ways of cooperation that can be mediated by networks.
Those people could be really important and helpful.
I think we need to invent new structures.
The reason that we treat data as being worthless, even though the companies that collect the data become the most valuable ones in the world, is that there's no collective bargaining for people whose data is taken. So as in any other economic example, in order to have a productive economy, you have to
invent some kind of structure so that people can cooperate, and
not have it be this Hobbesian race to the bottom where each person is against each
other person.
And if you believe more in capital than labor, you call that a corporation or a legal partnership or something. So these people are
incentivized to cooperate instead of try to kill each other. If you believe in labor over capital,
you call it a union and you call it collective bargaining. But on the internet, the difference
is academic. And I was playing around with terms like UNORP and Corpian, and they're terrible.
So we just came up with a... my research partner, Glen Weyl, and I came up with the term MID.
Actually, my wife came up with that.
Mediator of Individual Data.
So you'd have something that's a way for people to band into a group so as to not have the value of their data descend to zero through interpersonal competition, but instead have a degree of local cooperation.
So we need to create those things.
And MIDs can serve another function.
Here I'm talking fast because I know we're almost out of time.
But one of the things that's really terrible about what's happened in the world is we've been petitioning tech companies to become the arbiters of culture and politics. But the thing is, do we really want tech companies to
become the new de facto government? Is that what we want? I don't think so. So the MIDs could also
become brands in themselves, where the people who banded together to create a MID not only are
collectively bargaining for the value of their data, but the MID itself becomes a channel,
like, if you like, a guild or a union
or a corporation or a brand that represents a certain thing.
It might say, whatever data comes through here
is scientifically rigorous and has been checked,
or whatever data comes through here is fashionista approved
and is very beautiful,
or whatever data comes through here is guaranteed
to be really amusing and suitable for your whole family or whatever. You know, like what it creates is these in-between size structures
that can take on this function of quality maintenance, you know, because you don't
want a centralized source being the maintainer of quality. That's a recipe for some kind of
dysfunction, too much centralized power. So the MIDs both solve the
economic problem and the quality problem, and we need to start creating them. So there are
fledgling attempts to create them. Right now, they have no infrastructure tools to help them along.
I'd like to change that. And that's just one little corner of the problem. I'm also just trying to,
honestly, I'm just trying to get the tech companies to see the light. And here,
you know, some of them are better than others.
So let's switch gears. I always ask guests what they're excited about in new trends,
innovations, and future of tech.
Yeah, and I love that because it's always so interesting to hear people from these various
backgrounds who are so connected with what's kind of on the cutting edge to hear about what
they're excited about. And so next, we're going to revisit our show with danah boyd. And danah is
a tech scholar and researcher, and she looks at the intersection of people, social practices,
and technology. She's a partner researcher at Microsoft, and she's the founder and president of Data
& Society, which is a nonprofit New York City-based think tank.
So I asked danah about what she's seeing right now at the intersection of tech and society
that's interesting and promising. What I'm hopeful for is,
and there's like small glimmers of it,
is the various folks
who are really starting to grapple
with climate and tech
and those intersections,
both in the ability to understand
how to reduce the cost to climate
for our technology,
but also the possibilities
that we can model, understand, and innovate
because we have a big, heady challenge in front of us on that front.
But those are the glimmer stages as opposed to here's where we have tools.
There's so much opportunity there.
I mean, it's unbelievable. If you could co-optimize production and consumption of power, there is probably on the order of,
like, one or two orders of magnitude of efficiency that we could drive, which would be unbelievable.
And then, you know, that's without sort of having the even bigger thoughts about like,
what could you do with some of these big machine learning models to design better systems that are fundamentally more efficient in and of themselves?
Well, so here's an example of something that is a double-edged sword, that I have mixed feelings on.
We also have the ability to model what land will be arable.
And we can really think about the future of agriculture, the future of water supply,
who controls that information?
Who controls the decision-making that happens from that information?
So that's that moment where I'm like, okay, we're getting there.
We actually have a decent understanding.
But if we're at a point where that material gets co-opted, it gets controlled,
then I'm deeply concerned.
So these are the contradictions I think we're sitting in the middle of.
Because if we can really understand, I mean, where did data analytics begin?
Farming.
If we can really understand what this will do to ag, we're going to be able to better
build resilience.
Yep. And so it's the moments where I'm like, okay, you know, this is not just about NOAA, the National Oceanic and Atmospheric Administration.
It's not just about NOAA being able to model, but it's also about being able to give that information publicly.
Yep.
In a way where it doesn't get perverted for political purposes.
Right.
And that's a tricky thing right now.
Yeah, and, you know, on the hopeful side of things,
you know, what we've even seen at Microsoft
with some of the stuff that's happening
with this FarmBeats program that's happening at Microsoft Research
is that you can take some of this data,
so, like, the weather data, weather forecasts, like
all of the sort of historical information, like stuff that like used to get embedded
into a farmer's almanac, which was almost, you know, like a little bit like astrology.
But like there was real, you know, data and trending that people built into these almanacs
that help people decide like very prosaic things, like when to put the seeds in the ground.
And, like, we know that if you apply technology to that process, to very simple things like when to plant in a particular location given historical and predicted weather trends, that we can make huge improvements in crop productivity.
Like, we see it in India where, you know, some of these very poor parts of India, like,
when you put a little bit of technology in, like, you can get double-digit percentage
improvements.
And, like, that is the difference between people starving and people getting fed.
Oh, absolutely.
And it's just great to see happening.
And the important thing about something like agriculture is it has to happen around the globe.
It has to happen.
It just has to.
And same with water resources.
Yep.
We need to understand and model out water resources because, I mean, just take the continent of Africa, right? There's so many places across that continent where things are possibly fragile if we don't work out where that is or how to deal with it.
And so it's both the technology of desalination, which is wonderful, like making better use of your water resources, like, there are hundreds and hundreds of ways.
Like, so, for instance, like, one of the ways that you can make more efficient use of water in agriculture is, like, all of the agricultural chemicals that we use, so pesticides and fertilizers and whatnot, are massively diluted with water. So, like, the chemical concentration, like the active compound, is like a tiny part of,
like, the thing that gets sprayed over the crop, which means that you're wasting all
of this water, that the, you know, chemicals are going in the places where they're not
needed.
It's just this hugely wasteful thing.
And there's all sorts of, like, interesting new technology where you can very precisely
deliver the chemicals to the crop
without diluting them in water at all. So, you're not wasting any water. You don't have any of this
chemical runoff into the water supply. It's just fantastic. And simple things like using
some of the cool new stuff that we're seeing with computer vision where you can fuse classical sensor data like
moisture meters with vision models where you can sort of infer soil moisture from pictures that
you're taking from above the crops with drones or in places where drones are too expensive.
Like the FarmBeats folks are literally tying
like little cheap cameras to balloons.
And you have a human, like, walk a balloon over the crop, you know, tethered to a rope,
because, you know, in some parts of the world, you can't afford a drone to fly over them.
And from that, like you can, if you know what your soil moisture is, like you know exactly
how much to water so you don't have to worry about under- or over-watering a crop, which leads to way more efficiency.
So it's just so damn cool what's possible.
And that I will say is that's also the technology mind, which is, you know, I live in New York City.
And one of the funny things about living in such a crazy urban environment is to wander around and be like, I can see how this
could become more efficient. Ooh, and if we did this and this and this. And that is that moment
where you see the real hope and the real excitement, which is that we can actually do things
that would solve problems, especially like nothing to me is sort of more interesting than seeing all
those infrastructure layers. And I think the question for me is how do we get not just the technology
but all of the things that are surrounding the technology to make that happen?
Yeah.
And that's where we have to realize that those technologies are only as powerful
as the political and social processes surrounding them.
Yep.
You know, I can talk about how to make, you know, the building that I rent in more efficient.
But if I can't convince developers, if I can't convince the city who is setting out the regulations
to set these things in motion, no amount of good technology can solve really valuable
problems.
And that's where I think that that coordination becomes so critical.
Because the technologies in many ways, we're at a point where they're moving faster
than the political and social structures
to make them function well.
And that is why I think we need,
even as we invest in having people
build up technical skill,
we need to invest in people building up
the ability to think about the bridge
because without that,
you can't actually deploy at the levels to make a difference.
And that's one of the reasons I'm firmly a believer that we need societal kinds of regulation,
and I'll use that abstractly rather than government, so that we can actually advance
the development of these things. I think we all have very concrete roles that we can play in it. But, like, the thing that I think we technology folks, like, have a special duty and obligation to, and you inherently get this.
Like, you've been doing this since the very beginning.
It's like, all of us every day should be asking, like, how is it that the thing that I'm doing right now is going to positively accrue to the overall social good?
Like, if you can't answer that question in an affirmative way, then maybe you're doing
the wrong damn thing.
Right.
No, I agree.
And I think this is also where I'm a big believer in putting yourself in networks where this
is in conversation.
It's like one of the things that, you know, really struck me, especially back in, you know, my old dev days.
You can imagine the positiveness, but you actually need people around you who are thinking about how to implement, which is, like, everything from business to policy, et cetera.
You need people around you saying, and what if this goes wrong?
You need to be doing this in networks, in communities, and you need to be thinking with
all of the different affected communities or the people that you're trying to really engage and
create possibilities because they need part of that conversation. And I think, you know, one of
the weirdest things right now as I'm, you know, trying to do this exercise in coordination around
differential privacy, it's like the technology will get there,
hopefully as fast as we need it to, but it will get there.
But we need that buy-in process.
We need people understanding it.
We need people really embracing and figuring out how to make it work.
Or we're going to end up in a weird moment
where we have this beautiful object sitting on a shelf
that we're going to look back on, you know, and say,
15 years ago, we had it. Why didn't we put it out there? And so that's where it's like, as you're
thinking about the goodness, think not just about like the goodness, you know, of that, but like
how to actually build your teams and your communities in ways that actually can make this
really be part of it. And I'll say one of the most powerful things that I learned from an old mentor is that there is nothing more successful
than getting a community to think it's their own idea.
And so this is one of those moments where, as an industry,
we've gotten into a bad habit of telling people what the
future should be rather than inviting them to
co-construct it with us.
And that co-construction, I think, is what we need to
make all of those beautiful things that we can imagine in our minds become truly real.
One of the things that I really love about danah is how inclusive she is. She's always thinking
about how we can bring more people into what we call tech and how we can use those perspectives to make tech better and more
interesting. Yeah, I get asked a lot how I have the time to do this podcast and conversations
like the ones I have with danah are so energizing that doing this podcast in a way is almost like
therapy for me. Like I walk away from those conversations feeling so much better about the world
and so happy that we have people like danah
who are pushing on things
in a humane, ethical, principled way
that it helps me just go do my job.
I love that. I love that.
So one of the questions that you always ask of your guests,
and I love this question
because we get so many different answers, is how they got started in tech. Yeah, like Danielle Feinberg,
who is a Pixar engineer and director of photography who worked on Coco and a bunch of their other
movies, although Coco is the one that most recently made me cry like a little baby. Coco
totally made me cry too, for the record. I have to just say that. When you asked this question of Danielle Feinberg, she shared that it was her love of combining computers and
art, and that actually started when she was just eight years old. And that eventually led to her
getting a BA in computer science from Harvard. So when did you decide that computer graphics was the thing?
Well, you know, it's that same thing of like, I'm looking at these engineering classes.
I don't know, looking at the computer class and I go, look at that computer graphics class.
That sounds awesome.
I want to take that.
How soon can I take that?
Oh, it's got this prerequisite here.
Okay.
So then, and then I, oh, I can't take it till junior year.
And so I'm such a nerd.
Sophomore year, I emailed the professor and I was like, hey, I'm so excited to take your class.
Is there anything I could do to get ahead?
Is there anything I could just play with now?
And I got the most confused email back from him.
You know, I don't think most Harvard students are like emailing professors for future classes and asking how they can do work for it.
He was like, I guess you could go buy the textbook.
But it must have made an
impression. We're still friends to this day. And he clearly knew my enthusiasm going into the class.
And so I go in and it was really a class about programming all the underpinnings of the
programming to get to the 3D world. But there was a day where he turned off the lights and he
started playing these films. And it was the Pixar short films from the late 80s and early 90s.
And this is, I think, 94.
And I still completely, clearly etched in my mind,
just watched those with my mouth hanging open,
was like, that is what I have to do with my life.
Because it was all this math, science, and code I'd been learning.
But it created worlds and stories and characters in this way
that, to me, was just the most perfect combination of everything that I loved.
Before we close, we'd love to revisit some moments when we ask our guests what they do
when they're not innovating, inventing, and, you know, otherwise being amazing and brilliant.
Here were some of our favorite responses, first from Danny Hillis when I asked him what
he did outside of work.
Well, let's see.
I mean, really, I have to say that there is kind of a blend of fun and work for me.
But I do some things that I have no excuse for doing at all at work, like I make perfume.
Oh, I didn't know that.
You can't be logical about perfume.
You can't even really give names to the – so it's sort of a meditative thing for me because it turns off what a neurophysiologist would call my default mode network.
Right.
Right. Right.
And so my default mode is very analytical, but in that you really just have to be experiential.
So I look for excuses to do that, hanging out in nature, those sorts of things, to complement it.
Yeah, that's super cool.
And again, we posed the same question to Sam Altman.
I'm very thankful that there's so many things I could say here.
One thing that has been surprisingly great over the last year is a lot of long meditations, and finding a group of
people who have been nice enough to spend time with me and teach me. And
that's sort of significantly changed my perspective on the world.
In what way? I think I'm just, like, a very different person now. I think I'm so
much more content and grateful and happier and calm. And it's something that I
just really wouldn't have expected me to get into.
So I don't meditate, but, like, a bunch of these sort of Buddhist practices around, you know,
sort of compassion and mindfulness are, like, really helpful.
Like the thing that I've latched onto that's been really useful is just gratitude.
Totally.
Like trying to find in as many moments, in as many days as possible, something to be
truly grateful for.
And like I surprised myself because I'm a, yeah, I think engineers are sort of pessimistic
and, you know, like a little bit cynical by nature.
You're sort of wired a little bit to see all of the problems in the world because that's part of what motivates you to go out and change them and make them better.
But it is sort of a jaundiced way of looking at the world sometimes.
But I've just been shocked at how many things I've, I can find to be grateful for every day
and like how much like calmer that makes me.
Totally.
And here's Neha Narkhede,
who is one of the initial authors of Apache Kafka
and co-founder and CTO of Confluent.
My favorite activity to strike a balance with
is to travel to new countries
and experience new cultures.
And that's what me and my husband do.
And our crazy hobby together is to go scuba diving.
Oh, wow.
Some of the, you know, crazy locations.
Oftentimes to see different varieties of sharks.
So I can't, I can't.
I've been in one of those cages where great whites are on the outside and you're on the inside.
I can't confirm if I was
scared or not, but I would say I survived it and it turned out to be fun in a very weird kind of way.
Yeah, you might be a little bit braver than I am.
And of course, we couldn't end without hearing from Jaron Lanier, who, as a musician, has
collaborated with the likes of Yoko Ono, T-Bone Burnett, and Philip Glass.
And Kevin is asking Jaron about his musical instrument collection, which is, it's kind of insane.
Like, expansive doesn't really do justice.
It's kind of insane in the best possible way.
In the best possible way.
I started just learning new instruments, and I have this voracious, perhaps not always healthy need
to always be learning
a new instrument.
And so whether it's
the largest instrument collection,
I'm a little doubtful
because there's some pretty
big instrument museums.
But in terms of playable collection,
I'm pretty sure it is.
And I don't know
how many there are,
but there are a lot of instruments.
And I do run around,
I can play them.
And I have a –
And we're talking like hundreds, if not thousands.
Certainly in the thousands, yeah.
Yeah.
Yeah.
Which is, you know, sort of a mind-boggling, interesting thought in and of itself that there are like a thousand, you know, thousands of distinct instruments that one could collect.
Well, they're the best user interfaces
that have ever been created.
They're the ones that support peak human performance
like no other invention ever.
And they're profoundly beautiful.
And each one has a story.
And each one is kind of a form of time travel
because you learn to move and breathe
like the people who originally played it,
wherever it's from.
So it's a kind of a cultural record that's unlike any other one.
It's a haptic record, if you like.
Well, I think that's a great note to end on.
Yes.
And there were so many more interviews that we just didn't have time to revisit. For instance, our conversation with Anders Hejlsberg, who's the creator of Turbo Pascal and TypeScript, as well as our conversations
with Wences Casares and Dio Gonzalez. And one of my personal heroes, Bill Coughran.
The good news is that you can listen to any of these interviews anytime at your leisure,
because that is the nature of podcasting. And before we bring 2019 to a close,
I wanted to take a moment, Kevin, to congratulate you on your book that is due out on April 7th,
and it is called Reprogramming the American Dream.
Yeah, it's pretty exciting. It's very exciting to have all of the writing over with, and the book off into the publication process.
So I'm excited to see how people react to the book. It is basically telling the story of AI
through my own personal journey, like my life and career, and trying to get us all to think more
about how the decisions that we make right now about how we build the technology, how we deploy it, how we regulate it, the incentive structures that we set up for guiding its evolution over the next several years.
Trying to get people to have a more rigorous and robust debate about all of these things so that we get a form of AI that is actually beneficial for everyone. I am so glad that you wrote the book. I'm also glad for you
that you were done with the writing process. I'm excited to read the book. Let me ask you,
now that you have all this podcasting experience, are you going to record the audiobook or are you
like letting someone else do that? I suspect I'm going to let someone else do that.
Yeah. I don't blame you. I think that would probably take up too much of your very valuable time.
Well, and I think it's also, you know, what I've learned podcasting is this is a skill set.
You have to practice it a lot to like even get OK at it.
I would not even remotely presume that I'm nearly as good as some of the professionals who do audio recordings for books.
Well, I can't wait to read your book next year.
And as always, we would love to hear from you at BehindTheTech at Microsoft.com.
Tell us what's on your mind.
Tell us what your favorite episodes of the last year were, what some of your favorite conversations were.
Tell us about your tech heroes, and maybe we'll invite them to be on the show.
And of course, be sure to
tell your friends, your colleagues, your parents, anybody that you're going to be seeing over the
holiday season, you know, the strangers on the street, the Uber, Lyft drivers, everybody,
tell them about the show. And thank you very much for listening. See you next time.