Behind The Tech with Kevin Scott - Andrew Ng: Influential leader in artificial intelligence

Episode Date: August 30, 2018

Is AI the new “electricity”? Find out what one of the most influential leaders in Artificial Intelligence and deep learning has to say about our future. Behind the Tech’s Kevin Scott talks with Andrew Ng about the Google Brain project, Coursera, Baidu and Ng’s most recent project, deeplearning.ai. You won’t want to miss this episode!

Transcript
Starting point is 00:00:00 With the rise of technology, often comes greater concentration of power in smaller numbers of people's hands. And I think that this creates greater risk of ever-growing wealth inequality as well. To be really candid, I think that with the rise of the last few waves of technology, we actually did a great job creating wealth on the East and the West Coast, but we actually did leave large parts of the country behind. I would love for this next one to bring everyone along with us. Hi, everyone. Welcome to Behind the Tech.
Starting point is 00:00:37 I'm your host, Kevin Scott, Chief Technology Officer for Microsoft. In this podcast, we're going to get behind the tech. We'll talk with some of the people who made our modern tech world possible and understand what motivated them to create what they did. So join me to maybe learn a little bit about the history of computing and get a few behind-the-scenes insights into what's happening today. Stick around. Today, I'm joined by my colleague, Christina Warren. Christina is a senior cloud developer advocate at Microsoft.
Starting point is 00:01:07 Welcome back, Christina. Happy to be here, Kevin, and super excited about who you're going to be talking to today. Yeah, today's guest is Andrew Ng. And Andrew is, I mean, I don't think this is too much to say. He's one of the preeminent minds in artificial intelligence and machine learning. I've been following his work since the Google Brain Project. And, you know, he co-founded Coursera and he's done so many important things and so much important research on AI. And that's a topic that I'm really obsessed with right now. So I can't wait to hear what you guys talk about. Yeah. In addition to his track record as an
Starting point is 00:01:38 entrepreneur, so Landing AI, Coursera, being one of the co-leads of the Google Brain Project in its very earliest days. He also has this incredible track record as an academic researcher. He has 100-plus really fantastically good papers on a whole variety of topics in artificial intelligence, which I'm guessing are on many a PhD student's reading list for the folks who are trying to get degrees in this area now. I can't wait. I'm really looking forward to the conversation. Great. Christina, we'll check back with you after the interview. Coming up next, Andrew Ng. Andrew is founder and CEO of Landing AI, founding lead of the Google Brain Project and co-founder of
Starting point is 00:02:25 Coursera. Andrew is one of the most influential leaders in AI and deep learning. He's also a Stanford University computer science adjunct professor. Andrew, thanks for being here. Thanks a lot for having me, Kevin. So let's go all the way back to the beginning. So you grew up in Asia, and I'm just sort of curious, when was it that you realized you were really interested in math and computer science? Yeah, I was born in London, but I grew up mostly in Hong Kong and Singapore. And I think I started coding when I was six years old. And my father had a few very old computers.
Starting point is 00:02:58 The one I used the most was some old Atari, where I remember there were these books where you would read the code in a book and just type it into the computer. And then you would have these computer games you could play that you had just implemented yourself. So I thought that was wonderful. Yeah. And so that was like probably the Atari 400 or 800. Yeah, Atari 800 sounds right. It was definitely some Atari. Yeah, that's awesome. And what sorts of games were you most interested in? You know, the one that fascinated me the most was a number guessing game, where you, the human, would think of a number from 1 to 100, and the computer would basically do
Starting point is 00:03:29 binary search, but choose, is it higher or lower than 50, is it higher or lower than 75, and so on, until it guesses the right number. Well, in sort of a weird way, that's sort of like early statistical machine learning, right? Yeah, yeah. And yeah, at six years old, it's just fascinating that the computer could guess. Yeah. And so from six years old on, did you go to a science and technology high school?
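The number-guessing game Ng describes is a textbook binary search. A minimal sketch of the idea (in Python purely for illustration; the version typed in from a book on an Atari would have been BASIC):

```python
# The human thinks of a number from 1 to 100; the computer halves the
# remaining range with each "is it higher than X?" question.

def guess_number(secret, low=1, high=100):
    """Return (final_guess, number_of_questions) via binary search."""
    questions = 0
    while low < high:
        mid = (low + high) // 2
        questions += 1
        if secret > mid:        # "Is it higher than mid?" -> yes
            low = mid + 1
        else:                   # no: it's mid or lower
            high = mid
    return low, questions

print(guess_number(42))  # → (42, 7)
```

Halving the range on every question means the computer needs at most 7 questions for any number from 1 to 100, since 2**7 = 128 >= 100.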
Starting point is 00:03:52 Did you take computer science classes when you were a kid or? I went to good schools, St. Paul's in Hong Kong and then ACPS and Raffles in Singapore. I was lucky to go to good schools. I was fortunate to have grown up in countries with great educational systems. Great teachers. They made us work really hard, but also gave us lots of opportunities to explore. And I feel like computer science is not magic. You and I do this. We know this.
Starting point is 00:04:18 While I'm very excited about the work I get to do in computer science and AI, I actually feel like anyone could do what I do if they put in a bit of time to learn to do these things as well. Having good teachers helps a lot. We chatted in our last episode with Alice Steinglass, who's the president of Code.org, and they are spending a good amount of their energy trying to get K-12 students interested in computer science
Starting point is 00:04:42 and pursuing careers in STEM. You're also an educator. You were a tenured professor at Stanford and spent a good chunk of your life in academia. What sorts of things would you encourage students to think about if they are considering a career in computing? I'm a huge admirer of Code.org. I think what they're doing is great. You know, I think once upon a time, society used to wonder if everyone needed to be literate. Maybe all we needed was for a few monks to read the Bible to us,
Starting point is 00:05:12 and we didn't need to learn to read and write ourselves because we just go and listen to the priests or the monks. But we found that when a lot of us learned to read and write, that really improved human-to-human communication. I think that in the future, every person needs to be computer literate at the level of being able to write at least simple programs
Starting point is 00:05:30 because computers are becoming so important in our world and coding is the deepest way for people and machines to communicate. There's such a scarcity of computer programmers today that most computer programmers end up writing software for thousands or millions of people. But in the future, if everyone knows how to code, I would love for the proprietors of a small mom
Starting point is 00:05:50 and pop store at a corner to program an LCD display to better advertise their weekly sale. So I think, just as with literacy, we found that having everyone be able to read and write improved human-to-human communication. I actually think everyone in the future should learn to code because that's how we get people and computers to communicate at the deepest levels. Yeah, I think that's a really great segue into the main topic that I wanted to chat about today, AI, because I think even you have used this anecdote that AI is going to sort of be like electricity. I think I came up with that.
Starting point is 00:06:25 Yeah, no, this is your brilliant quote. And it's sort of spot on. You know, the push to literacy in many ways is a byproduct of the second and third industrial revolution. You know, like we had this transformed society where like you actually had to be literate in order to, you know, sort of function in this, like, quickly industrializing world. And so, you know, like, I wonder how many, you know, analogs you see between the last industrial revolution and what's happening with AI right now. Yeah, you know, the last industrial revolution changed so much human labor.
Starting point is 00:07:04 I think one of the biggest differences between the last one and this one is that this one will happen faster because the world is so much more connected today. So wherever you are in the world listening to this, there's a good chance that there's an AI algorithm that hasn't even been invented as of today, but that, you know, will probably affect your life five years from now. A research university in Singapore could come up with something next week and then it would make its way to the United States in a month and another year after that it would be in products that affect our lives. So the world is connected in a way that just wasn't true at the time of the last industrial revolution. And I think the pace and speed will bring challenges really to individuals and companies and corporations. But the ability of AI, of the new ideas, to be a tremendous driver for global GDP growth, I think is also maybe even faster and greater than before.
Starting point is 00:07:56 Yeah. So let's dig into that a little bit more. So you've been doing AI and machine learning for a really long time now. When did you decide that that's the thing you were going to specialize in as a computer scientist? So when I was in high school in Singapore, my father, who is a doctor, was trying to implement AI systems. Back then, he was actually using expert systems, which turned out not to be that good a technology. But he was implementing the AI systems of his day to try to diagnose, I think, lymphoma.
Starting point is 00:08:26 And this is in the late 80s? I think I was like 15 years old at that time. So yeah, late 80s. And so I was very fortunate to learn from my father about expert systems and also about neural networks, because they had a day in the sun back then. And that later led to an internship at the National University of Singapore, where I wrote my first research paper, actually. And I found a copy of it recently.
Starting point is 00:08:51 Reading it back now, I think it was a really embarrassing research paper; we didn't know any better back then. And I've actually been doing AI, you know, computer science and AI pretty much since then. When I look at your CV and the papers that you've written over the course of your career, it's like you really had your hands in a little bit of everything. You know, there was this inverse reinforcement learning work that you did and published the first paper in 2000. And then you were doing some work on what looks like information retrieval, document representations and whatnot. By 2007, you were doing this interesting stuff on self-taught learning. So, transfer learning from unlabeled data. And then you wrote the paper in 2009 on this large-scale unsupervised learning using graphics processors. So just in this 10-year period in your own research, you covered so many things.
Starting point is 00:09:42 And in 2009, we hadn't even really hit the curve yet on deep learning. The ImageNet result from Hinton hadn't happened yet. As one of the principals who sort of helped create the field, what does the rate of progress feel like to you? Because I think this is one of the things that people get perhaps a little bit overexcited about sometimes. One of the things I've learned in my career is that you have to do things before they're obvious to everyone if you want to make a difference and get the best results. So I think I was fortunate, you know, back in maybe 2007 or so to see the early signs that deep learning was going to take off. And so with that conviction, I decided to go in and do it.
Starting point is 00:10:23 And that turned out to work well. Even when I went to Google to start the Google Brain Project, at that time, neural networks were still a bad word to many people. And there was a lot of initial skepticism. But fortunately, Larry Page was supportive and that started Google Brain. And I think when we started Coursera, online education was not an obvious thing to do. There were other previous efforts, massive efforts that failed. But we saw signs that we could make it work, and had the conviction to go in. When I took on the role at Baidu, at that time, a lot of people in the U.S. were asking me, hey, Andrew, why on earth would you want to do AI in China? What AI is there in China?
Starting point is 00:10:59 And I think, again, I was fortunate that I was part of something big. And even today, with Landing AI, where I'm spending a lot of my time, people initially ask me, AI for manufacturing or AI for agriculture, trying to transform companies using AI, that's a weird thing to do. I do find people actually catch on faster.
Starting point is 00:11:16 So I find that as I get older, the speed at which people go from being really skeptical about what I do to saying, oh, maybe that's a good idea. That window is becoming much shorter. Is that because the community is maturing or because you've got such an incredible track record? I don't know.
Starting point is 00:11:32 I think everyone's getting smarter all around the world. As you look at how machine learning has changed over just the past 20 years, what's the most remarkable thing from your perspective? I think a lot of recent progress was driven by computational scale, scale of data, and then also algorithmic innovation. But I think it's really interesting
Starting point is 00:11:55 when something grows exponentially, for people that are insiders, every year you say, oh yeah, it works 50% better than the year before. And every year it's like, hey, another 50% year-on-year progress. So to a lot of machine learning insiders, it doesn't feel that magical. It's like, yeah, you just get up and you work on it and it works better.
Starting point is 00:12:12 To people that didn't grow up in machine learning, exponential growth often looks like it came out of nowhere. So I've seen this in multiple industries, with the rise of machine learning and deep learning. I feel like a lot of the insiders feel like, yep, works 50% or some percent better than last year. But it's really the people that weren't insiders that feel like, wow, this came out of nowhere.
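The "another 50% year-on-year" progress described above compounds quickly, which is arguably why outsiders experience it as an explosion. A back-of-the-envelope sketch (the 50% rate is just the figure quoted in the conversation, not a measured number):

```python
# Steady 50% year-on-year improvement: capability after n years is
# 1.5**n times the starting point. Each single year feels incremental
# to insiders, but the cumulative curve looks like it came from nowhere.
capability = 1.0
for year in range(1, 11):
    capability *= 1.5
    print(f"year {year:2d}: {capability:6.1f}x")
# roughly 7.6x after 5 years and 57.7x after 10 years
```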
Starting point is 00:12:33 Where did this come from? So that's been interesting to observe. But one thing you and I have chatted about before, there is a lot of hype about AI. And I think that what happened with the earlier AI winters is that there was a lot of hype about AI that turned out not to be that useful or valuable. But one thing that's really different today is that large companies like Microsoft, Baidu, Google, Facebook, and so on, are driving tremendous amounts of revenue as well as user value through modern machine learning tools
Starting point is 00:13:05 and that's very strong economic support. I think machine learning is making a difference to GDP. That strong economic support means we're not in for another AI winter. Having said that, there is a lot of hype about AGI, artificial general intelligence, this really overhyped fear of evil killer robots, of AI that can do everything a human can do. And I would actually welcome a reset of expectations around that. But hopefully we can reset expectations around AGI to be more realistic without, you know, throwing out the baby with the bathwater.
Starting point is 00:13:34 If you look at today's world, there are a lot more people working on valuable deep learning projects today than six months ago. And six months ago, there were a lot more people doing this than six months before that. So you look at it in terms of number of people, number of projects, amount of value being created, it's all going up. It's just the hype and unrealistic expectations about, hey, maybe we'll have evil robots in two years or 10 years and we should defend against them. I think that expectation should be reset. Yeah. I think you're spot on about this sort of inside versus outside perspective. The first machine learning stuff that I did was 15 years-ish ago
Starting point is 00:14:08 when I was building classifiers for content for Google's ad systems. And eventually my teams worked on some of the CTR prediction stuff for the ads auction. And it was always amazing to me how simple an algorithm you could get by with if you had lots of compute and lots of data. And you had these trends that were sort of driving things. So like Moore's Law and things that we were doing in cloud computing were making, you know, sort of exponentially more compute available for solving machine learning problems, like the stuff that you did, you know, leveraging the sort of embarrassing
Starting point is 00:14:45 parallelism in some of these problems and solving them on GPUs, which are really great at, you know, doing this sort of idiosyncratic type of compute. So like that compute is one exponential trend and then the amount of available data for training is this other thing where it's just coming in at this crushing rate.
Starting point is 00:15:03 You were at the Microsoft CEO Summit this year, and you gave this beautiful explanation where you sort of said supervised machine learning is basically learning from data, a black box that takes one set of inputs and produces another set of outputs. And the inputs might be an image, and the outputs might be text labels for the objects in the image. It might be a waveform coming in that has human speech in it, and the output might be the speech text. But really, that's sort of at the core of this gigantic explosion of work and energy that we've got right now. And AGI is a little bit different from that. Yeah.
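The black-box framing above, inputs in, outputs out, learned from labeled pairs, can be made concrete with a toy sketch. This is purely an illustration of the idea (a perceptron on made-up 2-D points, not anything discussed in the episode):

```python
# A toy "black box" learned from data: inputs are 2-D points, outputs
# are 0/1 labels, and a perceptron nudges its weights whenever it
# mislabels a training example. (Hypothetical data, for illustration.)

def train_perceptron(examples, epochs=20, lr=0.1):
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), y in examples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = y - pred              # 0 when the prediction is right
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

def predict(model, x0, x1):
    w0, w1, b = model
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0

# Labeled (input, output) pairs: the data the box learns from.
labeled = [((0.0, 0.0), 0), ((0.3, 0.2), 0), ((0.2, 0.1), 0),
           ((1.0, 0.9), 1), ((0.9, 1.0), 1), ((1.2, 0.8), 1)]
model = train_perceptron(labeled)
print(predict(model, 1.1, 0.9))  # an input the model has never seen → 1
```

Scaled up to millions of parameters and labeled examples, with images or waveforms as the inputs, that same fit-a-function-from-pairs loop is the supervised learning at the core of the boom being described.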
Starting point is 00:15:45 And in fact, really, to give credit where it's due, you know, actually many, many years ago, I did an internship at Microsoft Research back when I was still in school. And even back then, I think it was Eric Brill and Michele Banko at Microsoft way back had already published a paper showing, with simple algorithms, that basically it wasn't who has the best algorithm that wins. It was who has the most data for the application they were looking at in NLP. And so I think that the continuation of that trend that people like Eric and Michele had spotted a long time ago, that's driving a lot of the progress in modern machine learning still.
Starting point is 00:16:18 Yeah. Sometimes with AI research, you get these really unexpected results. One of those, I remember, was the famous Google Cat result from the Google Brain team. Yeah, so actually, that was an interesting project. I was still full-time at Stanford. My students at the time, Adam Coates and others, started to spot trends that basically the bigger you build the neural networks,
Starting point is 00:16:42 the better they work. So that was a rough conclusion. And so I started to look around Silicon Valley to see where I could get a lot of computers to train really, really big neural networks. And I think in hindsight, back then, a lot of us leaders of deep learning had a much stronger emphasis on unsupervised learning, so learning without labeled data, such as getting a computer to look at a lot of pictures
Starting point is 00:17:02 or watch a lot of YouTube videos without telling it what every frame or what every object is. So I had friends at Google, so I wound up pitching to Google to start a project, which we later called the Google Brain Project, to really scale up neural networks. We started off using Google's cloud, the CPUs, and in hindsight, I wish we had tried to build up GPU capabilities at Google sooner,
Starting point is 00:17:25 but for complicated reasons, that took a long time to do, which is why I wound up doing that at Stanford rather than at Google first. And I was really fortunate to really recruit a great team to work with me on the Google Brain project. I think one of the best things I did was convince Jeff Dean to come and work on it. And in fact, I remember in the early days, we were actually nervous about whether Jeff Dean would remain interested in the project. So a bunch of us actually had conversations to strategize, boy, can we make sure to keep Jeff Dean engaged, that he doesn't lose interest and go do something else. So thankfully, he stayed. The Google Cat thing
Starting point is 00:17:57 was led by my, at the time, PhD student Quoc Le, who together with Jiquan Ngiam were the first two sort of machine learning interns that I brought into the Google Brain team. And I still remember when Quoc had trained this unsupervised learning algorithm. It was almost a joke. You know, it was like, hey, there are a lot of cats on YouTube. Let's see if it learns a cat detector.
Starting point is 00:18:17 And I still remember when Quoc asked me to walk over, saying, hey, Andrew, look at this. And I said, oh, wow, you had an unsupervised learning algorithm watch YouTube videos and learn the concept of a cat. That's amazing. And so that wound up being an influential piece of work. Because it was unsupervised learning,
Starting point is 00:18:32 learning from tons of data for the algorithm to discover concepts by itself. I think a lot of us actually overestimated the early impact of unsupervised learning. But again, when I was leading the Google Brain team, one of our first partners was the speech team, with Vincent, a great guy. And it was really working with Vincent and his team and seeing some of the other things happening at Google and outside that caused a lot
Starting point is 00:18:54 of us to realize that there was much greater short-term impact to be had with supervised learning. And then for better or worse, when a lot of the deep learning communities saw this, so many of us shifted so much of our efforts to supervised learning that maybe we're under-resourcing the basic research we still need on unsupervised learning these days. I think unsupervised learning is super important, but there's so much value to be made with supervised learning that so much of the attention is there right now. And I think really what happened with the Google Brain project was, with the first couple successes, one being the speech project that we worked with the speech team on, what happened was other teams saw the great results that the speech team was getting with deep learning with our help. And so more and more of the speech team's peers, ranging from Google Maps to other teams,
Starting point is 00:19:42 started to become friends and allies of the Google Brain team. And we started doing more and more projects. And then the other story is after the team had tons of momentum, thank God we managed to convince Jeff Dean to stick with the project because one of the things that gave me a lot of comfort when I wanted to step away from a day-to-day role to spend more time at Coursera was I was able to hand over the leadership of the team to Jeff Dean. And that gave me a lot of comfort that I was leaving the team in great hands. I sort of wonder if there's sort of a message or a takeaway for AI researchers in both
Starting point is 00:20:16 academia and industry about the Jeff Dean example. So for those who don't know, Jeff Dean might be the best engineer in the world. Yeah, might be true. Yeah. Like, I've certainly never worked with anyone quite as good as him. I mean, I remember there was this... He's in a league of his own. Yeah, Jeff Dean is definitely... I remember, like, back in the... Long, long ago at Google, this must have been 2004, 2005, right after we'd gone public,
Starting point is 00:20:43 Alan Eustace, who was running all of the engineering team at the time, would once a year send a note out to everyone in engineering at performance review time to get your Google resume polished up so that you could nominate yourself for promotion. The first thing that you were supposed to do was get your Google resume, which is sort of this internal version of a resume that showed all of your Google-specific work. And the example resume that he would send out was Jeff's. And even in 2004, like, he'd been there long enough where he'd just done everything. And, you know, I was an engineer at the time.
Starting point is 00:21:19 I would look at this, and I'm like, oh, my God, my resume looks nothing like this. And so I remember sending a note to Alan Eustace saying, you have got to find someone else's resume to send out. You're depressing a thousand engineers, like, every time you send this out because Jeff is so great. And people are just huge fans, really, of Jeff. There are so many, you know, fans of Jeff out there. And he's not just a great scientist, but also just an incredibly nice guy. Yeah. But this whole notion of coupling world-class engineering and world-class systems engineering with AI problem solving, I think that is something that we don't really fully understand enough. Like you can be the smartest AI guy in the world and, you know, just have this sort of incredible theoretical breakthrough. But if you can't get that idea implemented, not that it has no impact, it just
Starting point is 00:22:11 sort of diminishes the potential impact that the idea can have. That partnership I think you have with Jeff is something really special. I think I was really fortunate that even when I started the Google Brain team, I feel like, you know, I brought a lot of the machine learning expertise, and Jeff and other Google engineers, early team members like Rajat Monga and Greg Corrado, brought the systems expertise. This started as a 20% project for him, I think. But it was other Google engineers, really, first and foremost Jeff, who said, let's get a thousand computers to run this, right? And having Larry Page's backing and Jeff's ability to marshal those types of computational resources turned out to be really helpful. Well, so let's switch gears just a little bit. I think it was really apt that you pointed out that AI and machine learning in particular are starting to have GDP-scale impact on the world.
Starting point is 00:23:06 And certainly if you look at the products that we're all using every day, like there's many levels of machine learning involved in everything from search to social networks to, I mean, basically everything you use has got just a little kiss of machine learning in it. So with that impact, and given how pervasive these technologies are, there's a huge amount of responsibility that comes along with it. And I know that you've been thinking a lot about ethical development of AI and what our responsibilities are as scientists and engineers as we build these technologies. I'd love to chat about that for a few minutes. Yeah, I think there's potential to propagate things like discrimination and bias. I think that with the rise of technology, often comes greater concentration of power in smaller numbers of people's hands.
Starting point is 00:24:00 And I think that this creates greater risk of ever-growing wealth inequality as well. And so, you know, we're recording this here in California. And to be really candid, I think that with the rise of the last few waves of technology, we actually did a great job creating wealth on the East and the West Coast, but we actually did leave large parts of the country behind. And I would love for this next one to bring everyone along with us. Yep. One of the things that I've spent a bunch of time thinking about is, from Microsoft's perspective, when we think about how we build our AI technology, we're thinking about platforms that we can put in the hands of developers.
Starting point is 00:24:39 It's just sort of our wiring as a company. So the example you gave earlier in the talk where you want someone in a mom and pop shop to be able to program their own LCD sign to do whatever, everybody becomes a programmer. We actually think that AI can play a big role in delivering this future. And we want everybody to be an AI developer. I've been spending a bunch of my time lately talking with folks in agriculture and in healthcare, which, again, is where you're thinking about the problems that society has to solve. In the United States, the cost of health care is growing faster than GDP, which is not sustainable over long periods of time. And basically, the only way that I see that you break that curve is with technology. Now, it might not be AI, like I think it is, but something is going to have to. Think, for example, of a system that can make diagnoses at a cardiologist's level of accuracy, which isn't about taking all of the cardiology jobs away.
Starting point is 00:25:55 It's about making this diagnostic capability available to everyone, because the cost is essentially zero, and then letting the cardiologist do what's difficult and unique, what humans should be doing. I don't know if you see that pattern in other domains as well. I think that there'll be a lot of partnerships between AI teams and doctors that will be very valuable. You know, one thing that excites me these days with the theme of things like healthcare, agriculture, manufacturing is helping great companies become great AI companies. I was fortunate, really, to have led the Google Brain team, which became, I would say, probably the leading force for turning Google from what was already a great company
Starting point is 00:26:35 into today a great AI company. And then at Baidu, I was responsible for the company's AI technology and strategy and team, and I think that helped transform Baidu from what was already a great company into a great AI company. And I think really Satya did a great job also transforming Microsoft
Starting point is 00:26:50 from a great company to a great AI company. But for AI to reach its full potential, we can't just transform tech companies. We need to pull other industries along, for it to create this GDP growth, for it to help people in healthcare, to deliver safer and more accessible food to people.
Starting point is 00:27:07 And so one thing I'm excited about, building on my experience helping with Google's and Baidu's transformations, is to look at other industries as well, to see whether, either by providing AI solutions or by engaging deeply in AI transformation programs, my team at Landing AI can help other industries also become great at AI. Well, talk a little bit more about what Landing AI's mission is. We want to empower businesses with AI.
Starting point is 00:27:36 And there is so much need for AI to enter industries other than technology. Everything ranging from manufacturing to agriculture to healthcare to many more. For example, in manufacturing, there are in factories today sometimes hundreds of thousands of people using their eyes to inspect parts
Starting point is 00:27:53 as they come off the assembly line to check for scratches and things and so on. And we find that we can, for the most part, automate that with deep learning and often do it at a level of reliability and consistency that's greater than what the people achieve. People, you know, squinting at something 20 centimeters away your whole day, that's actually not great for your eyesight, it turns out.
Starting point is 00:28:12 And I would love for computers to do it, rather than, often, these young employees. So Landing AI is working with a few different industries to provide solutions like that. And we also engage with companies on broader transformation programs. So for both Google and Baidu, it was not one thing. It's not that, you know, you implement neural networks for ads
Starting point is 00:28:35 and suddenly you're a great AI company. For a company to become a great AI company is much more than that. And then having sort of helped two great companies do that, we are trying to help other companies as well, especially ones outside tech, become leading AI entities in their industry vertical. So I find that work very meaningful and very exciting. Several days ago, I tweeted out that on Monday, I actually literally woke up at 5 a.m. so excited about one of the Landing AI
Starting point is 00:29:00 projects. I couldn't get back to sleep. I started getting up and scrolling on my notebook. But I find this work really, really meaningful. That's awesome. One thing I want to, you know, sort of press on a little bit is this manufacturing quality control example that you just gave. I think the thing that a lot of folks don't understand is it's not necessarily about the jobs going away. It's about these companies being able to do more. So, like, I worked in a small manufacturing company while I was in college, and, like, we had exactly this same thing. So, we operated an infrared reflow soldering machine there, which, you know, sort of melts surface mount components onto circuit boards. And so, you have to visually inspect the board before it goes on to make sure
Starting point is 00:29:41 the components are seated and the solder's been screened onto all the right parts. And, you know, when it comes out, you have to visually inspect it to make sure that none of the parts have tombstoned. There are a variety of little things that can happen in the process. So we had people doing that. If there was some way for them not to do it, they would go do something else that was more valuable. If we could run more boards, it would actually, you know, in a way create more jobs, because the more work that this company could do economically, the more jobs in general it can create. And I'm sort of seeing AI in several different places. Like in manufacturing, automation is helping to bring back jobs from overseas that were lost because it was just sort of cheaper to do them with low-cost labor in some other part of the world. They're
Starting point is 00:30:33 coming back now because, like, automation has gotten so good that you can start doing them with fewer, more expert people, but here in the United States, locally in these communities, where whatever it is that they're manufacturing is needed. It's like this really interesting phenomenon. That was one part of your career I did not know about. I followed a lot of your work at Google and at Microsoft. And even today, people still speak glowingly of the privacy practices you put in place when you were at Google. I did not know you were in this soldering business way back. Yeah, I had to put myself through college some way or the other. It was interesting, though. I remember one of my first jobs, I had to put brass rivets into 5,000 circuit boards,
Starting point is 00:31:17 like the circuit boards were controllers for commercial washing machines. And there were six little brass tabs that you would put electrical connectors onto and like each one of them had to be riveted. So it was 30,000 rivets that had to be done. And we had a manual rivet press. And like my job at this company in its first three months of existence, right after I graduated high school, was to press that rivet press 30,000 times. And it's awful.
Starting point is 00:31:42 Automation is not a bad thing. And in fact, in a lot of countries we work with, we're seeing, for example in Japan, the culture is actually very different than in the United States, because there's an aging population and there just aren't enough people to do the work. So they welcome automation, because the options are either automate or, well, just shut down the whole plant, because it's just impossible to hire with the aging population. Yeah. And in Japan, it actually is going to become a crucial social issue sometime in the next 100 years or so because their fertility rates are such that they're in major population decline. So you should hope for really good AI there because we're going to need incredibly sophisticated things to take care of the aging population there, especially in healthcare and
Starting point is 00:32:25 elder care and whatnot. Yeah. I think about when we automated elevators. Once, elevators had to have a person operating them, and a lot of elevator operators did lose their jobs when we switched to automatic elevators. And I think one challenge AI poses is that, with the world being as connected as it is today, this change will happen very quickly. The potential for jobs to disappear is faster this time around. And so I think when we work with customers, we actually have a stance on wanting to make sure that everyone is treated well. And to the extent we're able to step in and try to encourage or even assist directly with retraining to help them find better opportunities, we're totally going to do that. It actually hasn't been needed so far for us
Starting point is 00:33:09 because we're actually not displacing any jobs. But if it ever happens, that is our stance. But I think this actually speaks to the important role of government with the rise of AI. So I think the world is not about to run out of jobs anytime soon. But as LinkedIn has seen through LinkedIn's data, and as many organizations, including Coursera in Coursera's data, have seen as well, our population in the United States and globally
Starting point is 00:33:31 is not well matched to the jobs that are being created, and we can't even find enough people for them. We can't find enough nurses. We can't find enough wind turbine technicians. In a lot of cities, the highest paid person might be the auto mechanic, and we can't find enough of those. And so I think a lot of the challenge, and also the responsibility, for nations or for governments or for societies is to, I think, provide a safety net so that everyone has a shot at learning the new skills they need in order to enter these other trades that we just can't find enough people to work in right now. I could not agree more. I think this is one of the most important balances that we're going to have to strike as a society. And it's not just the United States.
Starting point is 00:34:13 It's a worldwide thing. We don't want to underinvest in AI and this technology because we're frightened about the negative consequences it's going to have on jobs that might be disrupted. And on the other hand, like, we don't want to be inhumane or uncompassionate to the folks whose jobs get disrupted. We have to make reskilling and education much cheaper and much more accessible to folks. Because one of the things that we're doing is we're entering this new world where the work of the mind is going to be far, far, far more valuable, even more than it already is, compared to the work of the body. And so that's the muscle that has to get worked out. And we've just got to get people into that habit and make it cheap and accessible. Yeah, it's actually really interesting.
Starting point is 00:35:12 When you look at the careers of athletes, you can't just train until you're in great shape at age 21 and then stop working out. The human body doesn't work like that. The human mind is the same. You can't just train and work on your brain until you're 21 and then stop working on your brain.
Starting point is 00:35:30 And so I think one of the ways I want the world to be different is I want us to build a lifelong learning society. And we need this because the pace of change is faster. There's going to be technology invented next year that will affect your job five years after that. So all of us had better keep on learning new things. And I think this is a cultural sea change that needs to happen across society
Starting point is 00:35:49 because for us to all contribute meaningfully to the world and make other people's lives better, the skills you need five years from now may be very different than the skills you have today. And if you're no longer in college, well, we still need you to go and acquire those skills. So I think we just need to acknowledge also that learning and studying is hard work. I want people, if they have the capacity, to do that. You know, sometimes your life circumstances prevent you from working in certain ways, and everyone deserves a lot of support throughout all phases of life. But if someone has the capacity to spend more time studying rather than spend that same amount of time watching TV,
Starting point is 00:36:25 I would rather they spend that time studying so that they can better contribute to their own lives and to the broader society. Yeah, and speaking again about the role of government, one of the things that I think the government could do to help with this transition is this: AI has enormous potential to lower the cost of subsistence. Through precision agriculture, through artificial intelligence in healthcare, and there are probably things that we can do to affect housing costs with AI and automation. So looking at Maslow's hierarchy of needs, those bottom two levels where you've got food, clothing, shelter, and your personal safety and security, I think the more that we can be investing in those sorts of things,
Starting point is 00:37:12 like technologies that address those needs and address them across the board for everyone, it does nothing but lift all boats, basically. I wish I had a magic wand that I could wave over more young entrepreneurs and encourage them to create startups that are taking this really interesting, increasingly valuable AI toolbox that they have and apply it to these problems. They really could change the world in this incredible way.
Starting point is 00:37:41 You make such a good point. So the last tech thing that I wanted to ask you is, there is sort of just an incredible rate of innovation right now on AI in general. And some of the stuff is what I call stunt AI, not in the sense that it's not valuable, but... No, go ahead. Name some names. I want to hear. Well, you know, so I'll name our own names. So we at Microsoft did this really interesting AI stunt where we had this hierarchical reinforcement learning system that beat Ms. Pac-Man.
Starting point is 00:38:14 So, like, that's sort of the, you know, the flavor of what I would call stunt AI. And I think they're sort of useful in a way because a lot of what we do is very difficult for lay folks to understand. So, like, the value of the stunt is like, holy crap, like, you can actually have a piece of AI do this. You know, I'm a big classical piano fan. And one of the things I've always lamented about being a computer scientist is there's no performance of computer science in general where a normal person can sort of listen to it. Or, you know, if you're talking about an athlete like Steph Curry, you know, who has done an incredible amount of technical preparation, becoming as good as he is at basketball, there's a performance at the end, you know, where you can sort of appreciate his skill and ability.
Starting point is 00:39:00 And these stunt AI things, like, in a way, are a way for folks to appreciate what's happening. Those are the exciting AI things for the lay folks. What are the exciting things as a specialist that you see on the horizon, like new things in reinforcement learning coming? People are doing some interesting stuff with transfer learning now, where I'm starting to see some promise that not every machine learning problem is something where you're solving it in isolation. What's interesting to you? So in the short term, one thing I'm excited about
Starting point is 00:39:33 is turning machine learning from a bit of a black art into more of a systematic engineering discipline. I think today too much of machine learning is a few wise people that happen to say, oh, you know, change the activation function in layer five, and then for some reason it works. But it can turn into a systematic engineering process that would demystify a lot of it and help a lot more people access these tools. Do you think that that's going to come from there becoming a real engineering practice of deep neural network architecture? Or is that going to get solved with this learning-to-learn stuff,
Starting point is 00:40:06 or AutoML stuff that folks are working on, or maybe both? I think AutoML is a very nice piece of work and is a small piece of the puzzle, mainly surrounding optimizing hyperparameters and things like that. But I think there are even bigger questions like, when should you collect more data, or is the data set good enough, or should you synthesize more data, or should you switch algorithms
Starting point is 00:40:26 from this type of algorithm to that algorithm? And do you have two neural networks or one neural network operating in a pipeline? I think those bigger architectural questions go beyond what the current AutoML algorithms are able to do. I've been working on this book, Machine Learning Yearning (mlyearning.org), that I've been emailing out to people on the mailing list for free, that's trying to conceptualize my own ideas, I guess, to turn machine learning into more of an engineering discipline, to make it more systematic. But I think there's a lot more that the community needs to do beyond what I as one individual could do as well.
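The hyperparameter-tuning slice that Andrew describes as the piece AutoML currently covers can be pictured as a simple search loop. A minimal grid-search sketch, where `train_and_score` is a made-up stand-in for an actual training run, not any real AutoML API:

```python
import itertools

def train_and_score(lr, layers):
    """Hypothetical stand-in for training a model and returning
    validation accuracy; an invented formula peaking at lr=0.01, layers=3."""
    return 1.0 - abs(lr - 0.01) * 10 - abs(layers - 3) * 0.05

# Small search space over two hyperparameters.
search_space = {"lr": [0.1, 0.01, 0.001], "layers": [2, 3, 4]}

# Try every combination and keep the best-scoring configuration.
best = max(
    (dict(zip(search_space, combo))
     for combo in itertools.product(*search_space.values())),
    key=lambda cfg: train_and_score(**cfg),
)
print(best)  # prints: {'lr': 0.01, 'layers': 3}
```

The bigger questions listed in the conversation, such as whether to collect more data or restructure the whole pipeline, sit outside a loop like this, which is the point being made.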
Starting point is 00:40:59 But it'll be really exciting when we can take the powerful tools of supervised learning and help a lot more people able to use them systematically. With the rise of software engineering came the rise of ideas like, oh, maybe we should have a PM. I think that was a Microsoft invention, right? The PM, product manager, and then program manager, project manager types of roles way back. And then eventually came ideas like the waterfall planning models or the scrum agile models. I think we need new software engineering practices. How do we get people to work together in the machine learning world?
Starting point is 00:41:28 So we're all sorting that out too. Landing AI asks our product managers to do things differently than I think I see any other company tell their product managers to do. So we're still figuring out these workflows and practices. Beyond that, I think on the more pure technology side,
Starting point is 00:41:42 there's a lot of excitement about GANs. I think they could transform entertainment and art. It'll be interesting to see how they go beyond that. I think the value of reinforcement learning in games is very overhyped, but I'm seeing some real traction in using reinforcement learning to control robots. So early signs, you know,
Starting point is 00:41:57 from my friends working on projects that are not yet public for the most part, but there are signs of meaningful progress in deep reinforcement learning applied to robotics. And then I think transfer learning is vastly underrated.
Starting point is 00:42:10 The ability to learn from... So there was a paper out of Facebook where they trained on an unprecedented 3.5 billion images,
Starting point is 00:42:18 which is very large, even by today's standards. And they found that training on 3.5 billion, in their case Instagram, images is actually better than training on only 1 billion images. So this is a good sign for the microprocessor companies, I think, because it means that,
Starting point is 00:42:33 you know, hey, keep building these faster processors. We'll find a way to suck up the processing power. Yes. But the ability to train on really, really massive data sets to do transfer learning or pre-training, or some set of ideas around there, I think that is very underrated today still. And then super long-term, you know, we use the term unsupervised learning to describe a really, really complicated set of ideas that we don't even fully understand. But I think that also will be very important in the longer term.
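Transfer learning of the kind discussed here boils down to reusing features learned on a big source dataset and training only a small piece on the target task. A minimal numpy sketch, where the "pretrained" weights and the tiny dataset are invented for illustration and stand in for no real model:

```python
import numpy as np

# Pretend these weights were learned elsewhere on a huge source dataset
# (the way the Facebook work pretrained on billions of images).
W_pretrained = np.array([[1.0, 0.0],
                         [0.0, 1.0],
                         [1.0, 1.0]])  # maps 3-dim inputs to 2 features

def features(x):
    """Frozen pretrained layer: W_pretrained is reused, never updated."""
    return np.maximum(x @ W_pretrained, 0.0)

# Small labeled dataset for the *target* task.
X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])  # label: first feature exceeds second

# Only the small head on top is trained (a cheap least-squares fit here,
# versus retraining the whole network from scratch).
F = features(X)
head, *_ = np.linalg.lstsq(F, y, rcond=None)

preds = (F @ head > 0).astype(float)
print(preds)  # prints: [1. 0. 1. 0.]
```

Because the feature extractor is frozen, the target task only has to fit a handful of head parameters, which is why pre-training on massive data can pay off even when the downstream labeled set is small.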
Starting point is 00:43:02 So tell us something that people wouldn't know about you. Sometimes I still go to a bookstore and deliberately buy a magazine in some totally strange area that I would otherwise never have bought a magazine in. And so whatever, $5, you end up with a magazine in some area that you just previously knew absolutely nothing about. I think that's awesome.
Starting point is 00:43:22 One thing that not many people know about me is I actually really love stationery. So my wife knows when we travel to foreign countries, sometimes you can spend way too long looking at pens and pencils and paper. I think part of me feels like, boy, if only I had the perfect pen and the perfect paper, I could come up with better ideas. It hasn't worked out so far, but that dream lives on. That's awesome.
Starting point is 00:43:42 All right. Well, thank you so much, Andrew, for coming in today. Thanks a lot for having me, Kevin. That was a really terrific conversation. Yeah, it was a ton of fun. Like all of my best conversations, it felt like it wasn't long at all. I was glancing down at my phone, and I'm like, oh, my God, like we just spent 48 minutes.
Starting point is 00:44:03 One of the questions that you asked Andrew was what technology he's been most impressed by and excited by that's kind of coming down the pike with AI. And I kind of wanted to turn that back on you, because you've been working with AI for a really long time, you know, at Google and at LinkedIn and now at Microsoft. So what have you seen that really excites you? Several things. I'm excited about this trend that started a whole bunch of years ago: more data plus more compute equals more practical AI and machine learning solutions. It's been surprising to me that that trend continues to have legs. And so when I look forward into the future and I see more data coming online, particularly with IoT and the intelligent edge, as we get more things connected to the cloud that are sensing, either through cameras or far-field microphone arrays or temperature sensors or whatever it is that they are, we will increasingly be digitizing
Starting point is 00:45:06 the world. And honestly, my prediction is that the volumes of data that we're gathering now will seem trivial by comparison to the volumes that will be produced sometime in the next five to 10 years. And I think you take that with all of the super exciting stuff that's happening with AI silicon right now, just the number of startups that are working on brand new architectures for training machine learning models. It really is an exciting time. And I think that combo of more compute, more data is going to continue to surprise and delight us with interesting new results and also deliver this real-world, GDP-impacting value that folks
Starting point is 00:45:47 are seeing. So, like, that's super cool. But I tell you, the things that really move me, that I have been seeing lately, are the applications to which people are putting this technology in precision agriculture and in healthcare. And just recently, we went out to one of our farm partners that Microsoft Research has been working with. You know, the things that they're doing with AI, machine learning, and edge computing on this small organic farm
Starting point is 00:46:22 in rural Washington state is absolutely incredible. They're doing all of this stuff with a mind towards how do you take a small independent farmer and help them optimize yields, reduce the amount of chemicals that they have to use on their crop, how much water they have to use. So like you're minimizing environmental impacts and raising more food and doing it in this local way. In the developing world, that means that more people are going to get fed. And in the developed world, it means that we all get to be a little more healthy because the quality of the food that we're eating is going to increase. And so, like, there's just this trend, I think, right now where people are just starting to apply this technology to these things that are parts of human subsistence.
Starting point is 00:47:16 You know, so food, clothing, shelter. You know, the things that all of us need in order to live a good quality life. And I think as I see these things and I see the potential that AI has to help everyone have access to high quality of life, the more excited I get. And like, I think in some cases, it may be the only way that you're able to deliver these things at scale to all of society, because some of them are just really expensive right now. And no matter how you redistribute the world's wealth, you're not going to be able to tend to the needs of a growing population without some sort of technological intervention. See, I thought you were going to say something like, oh, we're going to be able to live
Starting point is 00:48:03 in the world of Tron Legacy or The Matrix or whatever. Instead, you get all serious on me and talk about all the great things and world-changing awesome things that are going to happen. I'm going to live in my fantasy, but I like that there are very cool things happening. I did over my vacation read Ready Player One, and despite its mild dystopian overtones, it's a good book. I like the book. It's a damn good book. I really love the book. I want some of this.
Starting point is 00:48:28 I'm with you. I'm with you. I was a little disappointed in the movie, but I loved the book. And yeah. Okay, we can talk about this offline, but we'll end this now. Yeah. Well, awesome, Christina. I look forward to chatting with you again on the next episode.
Starting point is 00:48:41 Me too. I can't wait. Next time on Behind the Tech, we're going to talk with Judy Estrin, who is a former CTO of Cisco, a serial entrepreneur, and was, as a PhD student, a member of the lab that created the internet protocols. Hope you will join us. Be sure to tell your friends about our new podcast, Behind the Tech, and to subscribe. See you next time.
