Endgame with Gita Wirjawan - Jeff Dean: A Techno-optimist Look at AI

Episode Date: December 14, 2023

Join Endgame YouTube Channel Membership! Support us and get early access to our videos + more perks in return: https://sgpp.me/becomemember ----------------------- Unraveling the genius behind the world's leading internet search engine, Jeff Dean, Google's Chief Scientist and the mastermind behind the technology giant's success. Delve into the behind-the-scenes narrative of Google's triumph, explore the principles guiding their AI development, and gain Jeff's optimistic perspective on the future of Artificial Intelligence. #Endgame #GitaWirjawan #Sustainability ----------------------- About the host: Gita Wirjawan is an Indonesian entrepreneur, educator, and currently a visiting scholar at The Shorenstein Asia-Pacific Research Center (APARC), Stanford University. Gita was also just appointed as an Honorary Professor of Politics and International Relations in the School of Politics and International Relations, University of Nottingham, UK. ---------------------- Supplementary Readings: "The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma" (2023) "In the Plex: How Google Thinks, Works, and Shapes Our Lives" (2011) ---------------------- Understand this Episode Better: https://sgpp.me/eps166notes ----------------------- SGPP Indonesia Master of Public Policy: admissions@sgpp.ac.id https://admissions.sgpp.ac.id https://wa.me/628111522504 Other "Endgame" episode playlists: Daring Entrepreneurs Wandering Scientists The Take Visit and subscribe: @sgppindonesia @visinemapictures

Transcript
Starting point is 00:00:00 What are these AI systems doing today? Because I think a lot of times people are using various AI models without necessarily realizing it. You know, often these features feel pretty magical, but they're actually, you know, often powered by these machine learning models. Lots of data about some of these, you know, problems or domains. Right. But we don't necessarily have the deep understanding to make sense of that data. And so this is where sometimes neural networks, which can learn, you know, fairly complex patterns from data can actually be useful and can create new insights or create new capabilities that didn't exist before. Hi, friends and fellows. Welcome to this special series of conversations involving personalities coming from a number of campuses, including Stanford University.
Starting point is 00:01:15 The purpose of the series is really to unleash thought-provoking ideas that I think would be of tremendous value to you. I want to thank you for your support so far, and welcome to the special series. Hi, we're honored to have Jeff Dean, who's the chief scientist at Google. Jeff, thank you so much for coming on to our show Endgame. Thank you for having me. You've been at Google for a very long time; you're employee number 29 or something. I want to ask you about how you grew up. You were born in Hawaii. Tell us about how you grew up and how you got interested in computers
Starting point is 00:01:53 and how you transformed Google into what it is today. Sure. That's a loaded question. A bunch of questions. I think I had a somewhat interesting childhood in that my dad did tropical disease research for the first part of my childhood and then switched to being a public health epidemiologist. and my mom did medical anthropology. And so they somehow like to move a lot.
Starting point is 00:02:18 I'm an only child. So every so often we would move. I went to 11 schools in 12 years. And so I was born in Hawaii. Then we moved to Boston and then Uganda and Boston and Arkansas and Hawaii and Minnesota and Somalia and Minnesota and Atlanta for the last two years of high school. And then I went back to Minnesota for college. and then I ended up working for the World Health Organization in Geneva for a year and a half
Starting point is 00:02:46 before going to grad school in Seattle. And then I came to the Bay Area and I haven't moved since. My kids did not get the same tour of the world that I did. But, you know, it was interesting because, you know, seeing a lot of different environments and a lot of different schools and a lot of different ways of teaching and so on, it was, you know, different than many people's upbringing. Was there a time when or a few times, when you were pushed or attempted as to pursue medicine,
Starting point is 00:03:16 you know, the area of expertise of both your parents? Yeah, I mean, I think I have had an interest in, not as a doctor, but as like, you know, obviously conversations around the table growing up were all always about, you know, public health often. My dad actually, the way I got into computing, which is something you asked, my dad was always interested in how could you use, you know, better information to help make better
Starting point is 00:03:46 public health decisions. And so he, you know, was kind of frustrated at the time. This was kind of the mid to late 70s. Right. Where mostly people didn't get to use computers themselves. There was a big mainframe in the basement of some thing. And you would go tell some programmer. No, this was actually in Hawaii. Okay. And, you know, you would go to tell someone what you wanted the computer to do and they would do it and then you would not have this nice kind of more natural interactive experience with computing that that has come to pass with personal computing. But my dad saw an ad in the back of a magazine for a solder it to solder it together yourself kind of kit computer called an Imzi 8080 that he, when I was,
Starting point is 00:04:33 I guess I was nine. So he bought one of these and I was probably not much help, but I like held the soldering iron a little bit and helped him solder it together. It didn't do much at first because it was really like it didn't have a keyboard or a screen. You could toggle together, you know, toggle in individual bits and then like enter them into memory and you could get little programs going that way. Then we got a keyboard. That was a huge improvement. So you could actually type, you know, actual characters and so on. But I kind of got interested in that as we got a couple more peripherals, you could start to type in computer games and basic. So you type in the source code of a game. There was a book I got with a source code for a whole bunch of different
Starting point is 00:05:17 games, and you could type it in, and then you could play the game. And then I started to get interested in, you know, modifying the game a little. Like, it was a good way for, I think, young kids to get interested in programming is have something they want the computer to do. And it's kind of very motivating because you can kind of then figure it out on your own or, like, How would I make the, you know, the torpedoes go twice as fast in this game or whatever? So that was my introduction to computing. And then we were actually quite fortunate to move to Minnesota, which at the time had a interactive time sharing system for all the middle schools and high schools in the state. And so every state, every school got a, you know, a computer account and some computing hardware to dial into the centralized system.
Starting point is 00:06:05 And actually they had kind of these interactive chat rooms at the time. So it was sort of like what the Internet has become. Right. But, you know, 20 years before that. And so kids in Minnesota were living the Internet dream earlier than most. Then you moved on to Seattle for your graduate studies right after Minnesota. Yeah. Why Seattle?
Starting point is 00:06:34 Well, my wife and I. were applying to graduate schools together and, you know, the complexity of matching programs that were good in her field and my field, University of Washington was an awesome pick for us. We love Seattle, a little gray sometimes. A little rainy. My Hawaiian upbringing is, you know, kind of spoils the weather for most other places, but Seattle was great. You know, I really liked my time in graduate school there and, you know, I learned a ton and then I came to the area. Now, what made you join Google in 1999? Yeah. So I'd actually come down to the Bay Area to work for Digital Equipment Corporation, a small research lab in downtown Palo Alto. And actually,
Starting point is 00:07:24 that was the lab that created Alta Vista, which was an early search engine. So some of my colleagues there did the early sort of, you know, key work on that system. And Altavista, you know, at the time had a much larger index than most people. It was a very fast responding system. And one of my colleagues had put together from the crawled pages in the Altavista Index, a system where you could actually in sort of programmatic form see what pages point to which other pages, which is not so hard because you could just look at the contents of the page, but also what pages point to each page, so going backwards in some sense.
Starting point is 00:08:04 And so you can navigate this sort of computational graph forwards and backwards with this kind of set of API calls. And that proved to be quite interesting. So a colleague, Monica Hensinger, and I were working on how could you find related pages to any given web page in the web? And we thought we'd have to try
Starting point is 00:08:29 fairly complicated things, but we said, oh, let's just try something really simple first, which is, let's look at a page, and let's look at what pages point to that page, and then what other pages do they point to? And then you just do some counting and frequent things, and then you divide to normalize the probabilities, and all of a sudden, you know, from the Washington Post, you'd get a list of like CNN and the Wall Street Journal on New York Times, or some page about hiking in the Bay Area, you'd get a bunch of other pages about hiking in the Bay Area. And that caused me to think,
Starting point is 00:09:05 there's actually a lot of information in the link structure of the web. And ultimately, I decided I wanted to be at a smaller company than it was actually a little challenging sometimes to get research you'd done out into the world through a very big company I found. it was just a little indirect. And so I decided I would come to Google. I knew a small company.
Starting point is 00:09:31 Yeah. We were all wedged. Actually, we were all of us were wedged in this tiny little area above what's now a T-Mobile store in downtown Palo Alto when I started. But I knew Oros Hotsla, who's one of our earlier employees. I knew he'd come here. He and I were sort of, he's my academic uncle, I guess. Wow. And so I had, you know,
Starting point is 00:09:54 chatted with him many times at different compiler conferences because we both had a background in compilers and program optimization. When did you get the sense that Google was going to be as big as it is today? Did you think like that already in 1999? Yeah, I mean, we clearly at that time had a really successful and growing service. And you can see it, like the whole company could see that like our traffic was growing. You know, six, seven, ten percent a week at times. And, you know, if you do 1.1 to the 52nd, that's a huge amount of growth in a year.
Starting point is 00:10:30 And, you know, a lot of the first couple of years were really, how can we avoid melting every Tuesday at peak traffic time? So, you know, there was a whole bunch of work of deploying new hardware, but that wasn't enough. We had to do, you know, software performance optimization. We had to sort of redesign the system because, you know, often when you have software that works at scale X, it suddenly doesn't work at scale 10 times 10x or 50X. And so you're constantly refiguring out how are we going to, you know, redesign this part of the system because now it's a big problem, whereas it didn't used to be. So when was it that you realized that this was going to be this big?
Starting point is 00:11:13 One of our early employees, we actually had a giant wall, really long wall, and he put up a, you butcher paper kind of chart on the wall. And it was called the crayon chart. Because every day he would plot like how many queries did we get on Wednesdays. In different colors. We had different partners. And then, you know, you would go along a bit. And then he would run out of room at the top of the paper.
Starting point is 00:11:40 Oh, my. So he'd have to scale it down by a factor of five and then start over again. And it would grow again to the top of the paper. And then he would scale it down by another factor of five. And so he did many, many scalings. And we went from, you know, very few queries per day when I started to, you know, a lot more queries. And, you know, then obviously expanded into a bunch of other product areas and things like that. And anything that you think could have been done differently since the start of Google?
Starting point is 00:12:08 Oh, so many things. I mean, I think it's always good to reflect on, you know, what are we doing well, but also what could we do better. I mean, I think one thing that, one lesson that sticks out is, you know, You know, whenever an organization is growing quickly, you know, we were also hiring people pretty quickly. And I feel like every kind of doubling in company sized caused something that used to work well to no longer work. You know, and there's kind of like these leaps of change that that caused that. You know, sometimes it's, you were all on one floor and now you're on multiple floors in the same building. And then sometimes then you go to multiple buildings.
Starting point is 00:12:50 And then all of a sudden, instead of all of your engineering being done in Mountain View, we opened a New York office and a office in Zurich. Now we have to figure out, okay, how do you have people in many different locations working together? And this was before, you know, obviously all the technology we have today of video, video chatting and that kind of thing. It was, you know, more challenging to figure out who should do what, who is doing what. But we worked through that. We had about five engineering locations for a while. One period that I think we could have done a little better is we decided we would greatly expand the number of engineering locations we had. So we went from about five to 30 in a couple of years.
Starting point is 00:13:35 And really that was about hiring great people in different places who didn't necessarily want to move to one of our locations. but we're like, oh, yeah, that person and this team of five people, we should start an office around them. And it's really, you know, it took a while to digest how should we work in 30 engineering locations instead of five. Because each one of these small locations would kind of look at the main engineering centers in New York and Mountain View and say, we should do stuff just like they do, which means work on everything. And I think that doesn't really work if you're working on everything, but you're 15 people. So we tried to create a little bit of specialty and focus in some of these centers where they get to work on really prominent, important things, but on like just a handful of the different products we have. You know, you started studying neural network in 1990.
Starting point is 00:14:56 We've been talking about that, right? And at that time, it was hugely inhibitorial. by the lack of computational power. Yeah. Do you see the computational power having grown as exponentially as you would have thought then today? Yeah. I mean,
Starting point is 00:15:16 I got introduced to neural networks in my senior year as an undergraduate. And it was kind of a one-week module in some class I took, but I was very intrigued by them. It seemed like kind of the right abstraction. Yeah. And so I decided to then work with that faculty. member to do an undergraduate research project, undergraduate honors thesis on, I felt like we just needed more computation.
Starting point is 00:15:41 So maybe we could do parallel training of neural networks. So we could get the sort of 32 processor machine in the department training a single neural network rather than just using one processor. I was convinced if we could use 32 times as much computational power, it would be amazing. it'd be so great. Turns out I was wrong. We kept saying a million times. Yeah, a million times as much computational power,
Starting point is 00:16:06 which is sort of what the progress in general computation produced over 20 years or so. And then through just general improvements, computer architecture, improvements in semiconductor manufacturing processes and fabrication shrinks and so on, all of that compounded to the fact that our phones now are you know, a hundred times or a thousand times as powerful as the giant desktop machines we used to use. So I feel like once we started to have about a million times as much computational power in maybe like 2008, 2009, 2010, then it started to be the case that neural networks
Starting point is 00:16:48 could solve real problems, not just kind of interesting small-scale toy problems almost, but they could actually start being applied to real problems in computer vision and speech recognition was like some of the earlier areas we started to look at. And then, you know, various kinds of language tasks, you know, could they understand words in a way that was different than the way the surface form of the word, but really in what context does this kind of word make sense? And are there other words that are similar to that? And, you know, what is the past tense of this word? and you really understand language more deeply than it's just a sequence of characters. How do you see the evolution of the TPUs going forward?
Starting point is 00:17:34 Is it going to get much more exponential than what we might have seen in the last decade or two? Yeah, I mean, I think. What, TPU, what, V4 now? We've just announced our V5E through our cloud TPU program. Okay. Yeah, so we've been building specialized hardware for machine learning and in particular neural networks now for, you know, quite a while. I think our first TPUV-1 was discussed in 2015. But really, it's about one of the nice properties that neural networks have is that they're sort of all described by different sequences of kind of linear algebra style operations, you know, different kinds of matrix.
Starting point is 00:18:21 multiplies or vector operations. And that's a very restricted set of things you need the computer to do. It's not like you need it to do all kinds of different things like general purpose computing code. And so general purpose CPUs are great for running your word processor, but they're not exactly what you want for running machine learning computations because they're too general. That generality costs you performance. And instead, if you build hardware that is very specialized to, you know, exactly what
Starting point is 00:18:51 the kinds of computations neural networks embody, you will be able to get huge performance improvements, better performance per watt, better performance per dollar, performance per overall chip. And the other
Starting point is 00:19:11 property that neural networks have is unlike a lot of kind of traditional scientific computing where you need actually a fair amount of precision, they're actually very tolerant of reduced precision arithmetic. So you could do computations in, you know, 8-bit integer format or 16-bit floating point format, unlike 32 or 64-bit floating point format, which is typically used for, you know, weather simulation code or whatever. And that means you can squeeze more multipliers
Starting point is 00:19:41 into the same chip area and get higher performance. Okay. You've talked quite frequently about some of the constraints we'd respect to the neural network, the modalities, multitasking versus single tasking and sparsity versus density. Yeah. Talk about those. Sure. I mean, so, I mean, neural networks are sort of loosely inspired by how real biological neural network neurons work.
Starting point is 00:20:09 Right. In that the individual unit in a artificial neural network is something that takes in some inputs. Right. And then has weights on those inputs. How important does it think this input is versus that one? Yeah. And importantly, those weights are learned by through a learning process.
Starting point is 00:20:27 And then the neuron takes all that input and then decides the artificial neuron. The artificial neuron. And loosely inspired by what real neurons do is it decides what output it should produce. Should it fire in some sense or should it produce nothing? Right. and how strongly should it fire. And so that's really what a neural network is, is it's composed of a whole bunch of these individual artificial neurons,
Starting point is 00:20:55 all typically arranged in layers. So you have, you know, the lowest layers that take in very raw forms of data, be it, you know, pixels of small patch of pixels of an image or a little bit of audio data in, you know, audio form, or a few characters of textual input. And then they sort of build interesting features. Let's discuss images because I think that's like a very, you know, easy way to think about the kinds of features
Starting point is 00:21:28 that get built up through this learning process. So the lowest level features tend to learn very simple things. Like, is there a line at this orientation and this part of image or this orientation or is it like mostly gray or is it mostly red or is it like, you know, a different color. And so different neurons will get excited by when they see different patterns. Like this one gets really excited. It's like bright red.
Starting point is 00:21:57 Wow, exciting. And this one has a line like this. And so as you move up the layers, what's happening is these neurons are taking input from the lower level ones and they're learning kind of more interesting. interesting and intricate patterns that are based on combinations of the features that cause these lower layer neurons to get excited. So, like, now it's like, oh, it's red and it's got a line through it like this. That's really exciting. Or it's got an edge with red that mostly on one side and not on the other.
Starting point is 00:22:28 And as you move up farther and farther, the features become more and more complex. So you might have something that's got like something that looks like a wheel or something that looks like a nose or a eyebrow or things like that. and even higher, you get sort of fully featured things like, oh, yeah, this one fires when there's a car with a front-on view of the car or something like that. And I think that kind of process happens because typically you are training the neural network. You know, there's a lot of different ways of training it. But one of the simplest is what's called supervised learning, where you have some, say, image data and then you have labels associated with those images. okay, that one's a car, that one's a cheetah, that one's a, you know, a tree. And so the output of the model at the top level is trying to predict which of these many different categories of images is it?
Starting point is 00:23:25 And the way the training process works is you make a pass upwards through the model, forward pass it's called, and you see what the model predicts. Yeah. And it says, okay, well, that looks like a, you know, a, you know, tower, but maybe it's really a tree. And so what you can do is then make little teeny adjustments to all the weights in the model so that it is more likely when it sees this image or a similar image to say, give the right answer. It's a tree, actually, not a tower.
Starting point is 00:24:01 And the training process is just repetition of that observing of real data and what it should be and then producing, you know, adjustments to the plates of the model. How do you make sure that you can actually weed out this seeming bias by way of the weaker ones get weeded out and then the stronger ones or influences get promoted? That just sounds like an inherent bias in the way to system. In the individual neurons or? Correct.
Starting point is 00:24:37 Yeah. I mean, I think what actually tends to happen. happen is different neurons will latch on to different kinds of patterns. And some of those patterns are irrelevant for any particular image. Like if it's an image of the outdoorsy scenes, all the things that detect vehicle parts are kind of mute. They don't actually produce large outputs. But all the things that are about, you know, foliage and green and trunks of trees and so on are like very excited. And so I think, you know,
Starting point is 00:25:13 part of training a neural network is you want this diversity of different kinds of patterns that the model can learn. And also you need the model to have enough capacity, enough, you know, neurons and parameters. Sure. That it can absorb and learn from the data that you're exposing it to. Like if you have only five neurons and you give it a million images, it's not going to do very well to generalize to new examples.
Starting point is 00:25:41 I mean, that's really one of the things in machine learning you're trying to do, is learn from representative data, but not just completely remember exactly what that data was because you want to learn when you're confronted with a new image or a new piece of text to generalize to those examples. How optimistic are you with respect to being able to basically address, these three concerns or constraints with respect to modality, density, and single-tasking? Yeah, I mean, I think I'm...
Starting point is 00:26:17 Or how soon do you think those are going to get optimally remedied by way of the exponential growth in TPU capabilities? Yeah, I mean, I think we're making a lot of progress on them. So we're actually pretty far along at generalizing some models that are previously mostly text only or sort of software code into models that can actually understand text, code, audio input,
Starting point is 00:26:49 image inputs. And so that I think is starting to be well understood through research. My colleagues and others in the community have been doing over the last three or four or five years. You know, I think in terms of multitask capability.
Starting point is 00:27:08 One of the things we're seeing with these models that are trained on, you know, large corpora of general text, or images and text or whatever, is that that gives them the ability to actually generalize quite well to new things you ask them to do. Like you say, okay,
Starting point is 00:27:27 can you draft me a letter to my veterinarian about my dog? You know, the dog is not feeling well. and, you know, the model has ever seen exactly that requirement or request, but it is able to sort of understand what it is you want and to produce, you know, plausible sounding text that actually fulfills that person's need. And you're starting to see not just generalizing from one data example to another in the same kind of overall category.
Starting point is 00:27:59 Like 10 years ago, the generalization you wanted was, take an image and be able to predict which category it's in from having trained on a bunch of images and those categories. Now you're seeing the ability to generalize across tasks in some sense, like asking the model to do something it's never been asked to do, but is kind of close enough to things it knows how to do
Starting point is 00:28:25 and it is able to generalize. And then the third one is sparsity. So most machine learning models today these days are dense, which means you have all these neurons, artificial neurons, and the entire model is kind of activated for every example or every input. And there's a form of model that we've done a fair amount of work in called sparse models, where you actually have different pieces of the model, and the model can turn on and off different pieces and can actually learn which pieces are most relevant for which kinds of inputs.
Starting point is 00:29:05 So you might have some inputs that are about Shakespeare. And so maybe there is a part that's really good at kind of Shakespeare-y stuff, but the part that knows about C++ code or Java programming is probably not active there. And there's another part that is really good at identifying garbage trucks in visual images. That's probably not active either. But you want this model to have a lot of capacity. So it's got a lot of pieces of it that it can call on, but it doesn't need to call on all of them for everything. And that creates a much more efficient model because now instead of activating the whole model, you're maybe activating 5% of it.
Starting point is 00:29:43 And that makes it much more energy efficient. But you still have this capacity to remember a lot of stuff. It's probably not going to be too far in future, right, when you think you're going to be able to address these constraints. Oh, yeah. I think the multimodality stuff, we've already seen a bunch of work from Google research and Google DeepMind on multimodal models of various kinds that can take in visual inputs and language and answer questions in text form or that can generate. You know, we've seen a lot of work on generating images or generating audio from various kinds of other inputs. Like can you take a text prompt and generate an image? and those models for that have been improving steadily.
Starting point is 00:30:29 You now can take text plus an image and say, okay, generate me a picture of, you know, a giant castle with this dog in front of it. Like it's cool that it can generate a picture of a castle with a dog in front of it, but often what you really want is your dog in front of the castle. You've recently or some time ago gave a lecture or, a talk in front of quite a bunch of computer science students or experts. You talked about the five trends with respect to machine learning, general purpose, efficiency, benefit to society, community, and people, benefit to engineering, science, and health,
Starting point is 00:31:11 and broader and deeper. Yeah. Talk about those. Sure. Yeah. I mean, I think, you know, the first part was about these trends of, improving the multi-mobalities, capabilities of these models and sparsity and so on, and the underlying hardware and systems we use to train them getting more capable.
Starting point is 00:31:36 The second part, it was about, or another part of the talk, was about, you know, what are these AI systems doing today? Because I think a lot of times people are using various AI models without necessarily realizing it. So on, for example, the Android phone, there's a lot of capabilities in that phone that are powered by various kinds of models. So it can, like, screen your calls for you. And it can, you know, you can say, yeah, I don't want to pick up my phone yet. I just want to, like, understand what it is this person wants.
Starting point is 00:32:09 And then it can relay a transcript of what they said. You can say, oh, this is hi. I'm read. I've got a delivery for you at the front door or whatever. or that can, you know, on the phone, you know, do various kinds of computational photography techniques to enhance the images, to be able to remove, you know, that annoying, unsightly telephone pole in the background when you took your photo. Right. Or a variety of other things. And I think, you know, often these features are, feel pretty magical, but they're actually, you know, often powered by these machines.
Starting point is 00:32:46 learning models. And then another part was about how AI and machine learning is really accelerating a lot of aspects of scientific discovery. I mean, I think one of the things that, particularly in fields where there's a fair amount of data and you're trying to pick up on complicated patterns that are not well understood.
Starting point is 00:33:11 genetics or health care or, you know, various kinds of weather prediction. You know, a lot of these things have the property that there is lots of data about some of these, you know, problems or domains. Right. But we don't necessarily have the deep understanding to make sense of that data. And so this is where sometimes neural networks, which can learn, you know, fairly complex patterns from data, can actually be useful and can create new insights or create new capabilities that didn't exist before.
Starting point is 00:33:51 Weather is kind of, maybe I can use weather prediction as a good example. So traditional numerical weather forecasting has, like, a set of physics-based equations about how, you know, the weather and the wind and the atmosphere interact, in order to make predictions of what the on-the-ground weather is going to be like 12 hours from now, three days from now. And that's great, but those simplified equations probably leave out a lot of things that we don't fully understand. And so when you actually try to apply neural networks to weather forecasting, you approach
Starting point is 00:34:30 the problem very differently. You actually have a fair amount of historical data: the weather conditions four days ago were this, and now, you know, three and a half days ago, they were this. Or, you know, even three years and one day ago, they were this. And now three years ago, they were this. And this sort of gives you the ground truth of what your model should predict. So given the weather a thousand days ago, can you predict the weather 999 days ago? And that actually turns out to be a fairly successful approach for weather prediction, and you have kind of ample amounts of data to train on. And then you want to generalize to, you know, new weather situations you've never seen before
Starting point is 00:35:13 and also to the future. How soon do you think we're going to be able to, I mean, machine learning has done so much, so well, so fast with respect to reading text, understanding to some extent, then audio, then visuals, all kinds of visuals, right? Yeah. But what about smell? Ah.
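A rough sketch of the weather-forecasting setup described a moment ago, where historical records serve as their own ground truth so that every past day yields a (state, later state) training pair. The toy temperatures and the nearest-neighbor "model" below are illustrative stand-ins, not Google's actual system, which uses rich atmospheric state and neural networks:

```python
# Toy sketch of learning weather prediction from historical records.
# The "model" here is a nearest-neighbor lookup over past states; the
# point is the training-data construction: every historical day gives
# a (past state, future state) supervised pair.

def make_pairs(series, horizon):
    """Turn a historical series into supervised (input, target) pairs:
    given the state at day t, predict the state at day t + horizon."""
    return [(series[t], series[t + horizon])
            for t in range(len(series) - horizon)]

def nearest_neighbor_forecast(pairs, today):
    """Predict by finding the most similar past state and returning
    what actually followed it, the simplest possible 'learned' model."""
    best = min(pairs, key=lambda p: abs(p[0] - today))
    return best[1]

# Hypothetical daily temperatures (degrees C) standing in for the full
# atmospheric state a real system would use.
history = [20, 22, 25, 24, 21, 19, 18, 21, 23, 26, 25, 22]
pairs = make_pairs(history, horizon=1)

print(nearest_neighbor_forecast(pairs, today=24.5))
```

The generalization Dean mentions, to situations never seen before and to the future, is exactly the part where a neural network earns its keep over this lookup.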
Starting point is 00:35:39 So we actually have done some research. I mean, temperature, you can do that already. So within Google research, we've done some work in this space, starting about four or five years ago. Right. And it turns out that there are various kinds of instruments that can sense the olfactory characteristics of the air and can give you very raw data about, you know, what things are hanging in the air. But it's hard to really then put high-level labels on that.
Starting point is 00:36:19 So in the same way you have the pixels of an image and you can train a neural network to say, okay, well, when I see that kind of thing, that's a leopard, you can do the same thing with these olfactory signals to say, okay, that's a lot like lemon with a hint of pine needles or something. And this actually works. The actual device that gathers the data is still a little big, so it's not like a portable thing you can put in a cell phone yet, but it turns out this is an important problem for a variety of reasons. One is there's actually, you know, some industries that want to create particular scents, and they want to be able to,
Starting point is 00:37:03 you know, understand a scent, but also do the reverse, you know, create a scent. Like in the same way you've seen these image models where you can say,
Starting point is 00:37:13 please give me a dog in front of a castle. You'd like to say, please give me what I would need to mix together to make a scent of, you know, warm goulash and cinnamon or something.
Starting point is 00:37:27 And so that's one application: for the perfume industry or consumer packaged goods. But another one is potentially in healthcare-related things. There's some evidence that dogs, which have particularly sensitive noses, can actually, you know, pick up on subtle signs of cancer in some cases. And so that indicates that maybe there's signal in these olfactory raw data that could actually be used for health purposes. Anything else that we should anticipate in terms of what could be cool about what ML could be doing for humanity?
Starting point is 00:38:02 Oh, yeah. I mean, I think I'm pretty optimistic about a bunch of different application domains. I mean, I think one is in the area of education. So, you know, we're not quite there yet, but it's close to being able to say, can you please tutor me on this? Wow. You know, you take in a chapter of a textbook. And you can imagine a system that, you know, absorbs that chapter or, you know, maybe multiple chapters from different books.
Starting point is 00:38:32 And then, you know, asks you questions, assesses the correctness of what you answered, you know, identifies areas where you could use more depth, and asks you more questions about that kind of thing, fewer questions about stuff that you seem to already know pretty well. So imagine being able to do that for anything you want to learn, as a kid in school or otherwise. Or English. English is great. There's all kinds of interesting language learning applications. You know, I think you might be able to create really interesting dialogues that help people learn language,
Starting point is 00:39:09 and they're going to be more interesting than the kind of pre-crafted, fairly rudimentary things. Like you could say, I want to learn English and I want to talk about, you know, hiking in the forest or something today, or whatever, and it could probably, you know, help you achieve those two objectives, you know,
Starting point is 00:39:27 a pleasant conversation about hiking and you're learning English at the same time. You know, I come from a region. It's called Southeast Asia. And it kind of gets, it's a bit under-narrated. You know, people here in Silicon Valley tend to talk about other places around the world
Starting point is 00:39:47 as opposed to Southeast Asia. And I think part of the structural problem with that is we just don't speak English, not enough of us. I mean, Singaporeans, all of them speak English. A good chunk of the Filipinos speak it. But not the rest of Southeast Asia. So if we had had this conversation five years ago, I would have been a lot more pessimistic about the future
Starting point is 00:40:12 where we could actually communicate with the international community. Now, with the advent of ML or AI and all that stuff, I'm a lot more optimistic about getting 100 million people in Indonesia to be able to speak English, maybe 400 million people in Southeast Asia, out of the total population of 700 million, to be able to speak English. It's a breakthrough. It's life-changing.
Starting point is 00:40:40 Yeah. Enabling people to communicate with each other, I think, is a hugely impactful thing. Whether that's through teaching people to learn a second or third language, or whether it's enabling people who don't speak the same language to communicate effectively. Some of our products, like Google Translate, can actually do this:
Starting point is 00:41:01 you put the phone on the table and you're speaking one language and I'm speaking another. It will actually produce transcribed versions of what we're saying, and we also have versions that can, like, translate it into actual audio in people's ears.
Starting point is 00:41:15 And I think that's a really important capability, because the more we're able to communicate as all people, the better it is. And it's also something where machine learning can actually really help, because we've seen just dramatic improvements in the quality of translations through these larger neural network-based models, as well as speech recognition and speech production, for not just five or 10 languages, but actually for 100 languages. Google Translate supports more than 100 languages today, which I think is really important.
Starting point is 00:41:56 We actually have an ambitious goal to support 1,000 languages in our products, and this is sort of a... Well, we have 700 of those in just one country in Indonesia. Yeah. We call them dialects. Yeah, yeah. So, I mean, a thousand languages is not... I think there's something like 7,000 spoken languages in the world.
Starting point is 00:42:14 And a thousand, covering the top thousand, would be amazing. Yeah. Even, you know, the top 100 covers a lot, but there's still, in the next 900, a lot of speakers who are sort of left out if we don't support those languages. And so we definitely want to do that. I want to talk to you about sustainability. And I'll draw the picture in terms of how things are a little bit different in developing countries. I mean, I've been saying the narrative of sustainability is elitist because, you know, it really resonates with about 15% of the population of the world.
Starting point is 00:42:52 Whereas the 85%, they're a lot more worried about putting food on the table. Right. Right. Yeah. And then, you know, they don't mind stopping, you know, using coal today, as long as the alternatives are affordable. Technologically, alternatives are available. But economically, they're just not affordable for most people on the planet.
Starting point is 00:43:13 Right. What do you think Google, or you as a scientist, could be thinking about what can be done to bridge the gap between the narrative of sustainability and the narrative of development? Because I think it's important for the planet to be collective about this. Right? In terms of attaining carbon neutrality by 2050 or 60, there just seems to be no realism. When we hear the rhetoric of attaining carbon neutrality by 2050,
Starting point is 00:43:50 at the rate that we're seeing, a bunch of these people just can't afford it. You know, the technologies. And I'm sure you've got a lot of smart people here in this building that can figure out how to make things a lot cheaper economically by way of technological innovation. That's been very exponential. Yeah, I mean, I think this is clearly a planet-wide issue. Sure. And we all need to be working together on this, and it's definitely the case that the more economically developed countries have produced way more emissions historically.
Starting point is 00:44:27 You don't have to get there. Yeah. But, you know, I think there are a few sort of positive signs. So one is the cost of renewable energy, like solar panels, has been kind of on a, you know, dramatically improving curve, a bit like computation was, you know, 15 years ago, though it's still high now. Still high. Battery technology is improving a lot. And so the combination of solar and battery plus wind is becoming much more affordable.
Starting point is 00:44:56 In fact, in many parts of the world, I think if you look to install new power capacity, that actually becomes the economically rational choice. So that's a good thing. I think one of the issues we've had in the world is just there's things that are not factored into people's decisions, like, you know, if I install another coal plant, it's cheaper for me, even if that causes indirect emissions that impact everyone. So we've been looking at what are things that we can do to improve sustainability and reduce emissions with technological solutions.
Starting point is 00:45:39 Right. So one of them is a project called Green Light, where basically, by using traffic patterns that we can observe through Google Maps, we can actually identify ways in which cities around the world can make improvements to their traffic infrastructure, through signals and so on, to actually reduce idle time at, you know, intersections. A major source of emissions is actually just cars not going anywhere. You know, it's also a situation where people don't really like not going anywhere. You're in your car to go somewhere,
Starting point is 00:46:14 and the emissions from the idling engines actually are pretty harmful. And so with Green Light, we can actually make suggestions to different cities all around the world and have them adjust the stoplight timing. You know, most stoplights in the world have a fairly simplistic method. It's sort of like, is it rush hour or not?
Starting point is 00:46:38 That's kind of the level of sophistication in many of them. But now we can actually say, okay, Tuesdays between 10:37 and 11:30, you know, you should set the signal timing to 42 seconds instead of, you know, 35. And you'll get way more throughput on your roads. Right. You know, a 90% reduction in the number of people who need to wait through a second light cycle, for example. And so we've actually got pilots going with 12 cities all around the world. I think it's on like four or five continents. And Jakarta is one of them. And so we're actually seeing quite positive results from that.
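The kind of recommendation described here, a green-time per time-of-day bucket derived from observed traffic, can be sketched roughly as below. The arrival counts, the bucket names, and the cars-cleared-per-green-second rate are all invented for illustration; this is not how Green Light actually works internally:

```python
# Toy sketch of data-driven signal timing: given observed queue sizes
# per cycle at different times of day, pick a green duration so that
# most queued cars clear the intersection in a single cycle.

CARS_PER_GREEN_SECOND = 0.5  # assumed discharge rate: one car every 2 s

def green_needed(queue_size):
    """Seconds of green needed to clear a queue of this size."""
    return queue_size / CARS_PER_GREEN_SECOND

def recommend(observed):
    """Map each time-of-day bucket to a recommended green time, sized
    to the 90th-percentile queue so roughly 9 in 10 cycles clear."""
    out = {}
    for bucket, counts in observed.items():
        ranked = sorted(counts)
        p90 = ranked[int(0.9 * (len(ranked) - 1))]
        out[bucket] = round(green_needed(p90))
    return out

# Hypothetical queue sizes (cars waiting) observed over ten cycles.
observed = {
    "Tue 10:37-11:30": [14, 16, 15, 21, 17, 18, 16, 15, 19, 20],
    "Tue 02:00-03:00": [3, 2, 4, 3, 2, 5, 3, 2, 4, 3],
}
print(recommend(observed))
```

The design choice mirrors the quoted goal: sizing green time to a high percentile of the queue is what drives down the number of drivers who must wait through a second light cycle.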
Starting point is 00:47:15 Okay. And we're sort of learning from that early experience, partnering with those cities and trying to sort of expand that program. But that, you know, rolled out much more broadly, would have a huge potential impact on reducing emissions. And it also would help people by, you know, getting them where they want to go faster, which is also a nice side benefit.
Starting point is 00:47:37 Another area I'll talk about is contrails. So, you know, the sort of long, linear clouds you see behind airplanes sometimes. It turns out those are actually quite harmful for warming, because they trap heat at certain times of the day. Okay. And actually the contrails produced by airliners are roughly one third of the total contribution to warming of the aviation industry.
Starting point is 00:48:05 No kidding. The entire aviation industry? That's kind of surprising. One third, versus the actual burning of carbon. One third of the overall footprint of the aviation industry is related to contrails. But contrails are actually avoidable. So if you're a plane and you're flying, and the conditions at this altitude would actually produce contrails at a time when that's a bad idea, because conditions are such that the
Starting point is 00:48:30 contrail would be harmful, you can actually change your altitude. By going up or down a little bit, you can actually get to a situation where you won't create a contrail, because it's really just the exact temperature and ice crystal formation: it's really just ice crystals forming around soot from the exhaust of the plane that causes contrails. And so we've actually partnered with American Airlines to do a controlled study where we took, I forget exactly how many, about 100 flights, and we took 50 of them and we gave them guidance about, you know, where we thought contrails would be produced and whether they should go up or down
Starting point is 00:49:12 on their flight path. And what we saw was a 50% reduction in contrails for the flights where we were controlling that versus the ones where we did not. And how are you actually controlling? Oh, so we control it by saying, okay, you know, American Airlines flight operations would tell them to go up to 31,000 feet instead of 30,000 feet or something. Okay. And then actually it's kind of cool how we closed the loop and figured this out. Now you have all these flights. And so we use real-time satellite imagery of when the flights occurred and the paths they took. And then you can use computer vision to detect whether there was a contrail produced by this flight versus that one.
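The bookkeeping behind the study's headline number can be sketched with invented flight records. In the real study the per-flight contrail flag came from computer vision on satellite imagery; here it is reduced to a given boolean, and the counts are chosen only so the toy reduction matches the 50% figure mentioned:

```python
# Toy sketch of the controlled study's analysis: compare contrail rates
# between altitude-adjusted ("advised") flights and flights that flew
# as planned ("control"). All flight records are invented.

def contrail_rate(flights):
    """Fraction of flights that produced a detected contrail."""
    return sum(1 for f in flights if f["contrail"]) / len(flights)

advised = [{"contrail": i % 4 == 0} for i in range(48)]  # altitude-adjusted
control = [{"contrail": i % 2 == 0} for i in range(48)]  # flew as planned

reduction = 1 - contrail_rate(advised) / contrail_rate(control)
print(f"{reduction:.0%} fewer contrails on advised flights")

# Back-of-envelope for the overall impact discussed in the conversation:
# contrails are ~1/3 of aviation's warming, and ~1/2 of that looks
# avoidable, so roughly 1/3 * 1/2 = 1/6 of aviation's warming impact.
print(f"~{reduction / 3:.0%} of aviation warming potentially avoided")
```

The second print is just the one-third times one-half arithmetic that comes up a moment later in the conversation.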
Starting point is 00:49:49 You know, if you take a look at some of the publications by experts in energy, the demand for fossil fuels from automotive is going to continue declining, right? But demand for fossil fuels from aviation is going to go up. Yeah. You just can't electrify airplanes that fly long haul. Yeah.
Starting point is 00:50:15 This approach seems like it might reduce about half of the warming related to contrails, which is a third of the overall impact of the industry. So that might be like a sixth of the aviation industry's impact. What about food security? I mean, there's a lot that can be done technologically or scientifically, right, to improve upon a preexisting convention. Yeah, absolutely. I mean, I think there's a very broad set
Starting point is 00:50:42 of ways that you can approach improving that situation. So one is just helping farmers understand their crops. And, you know, I'm getting these weird patterns on the leaves of this particular crop. You know, is that a disease I should worry about, or is it fine? And computer vision models can actually be helpful with this. You know, we've done some deployments with nonprofits working in, I think it was Kenya and Uganda, helping cassava farmers understand, you know, cassava leaf images: is this a disease they should worry about?
Starting point is 00:51:26 And if so, how should they treat it? Another is just predicting where food insecurity is likely to occur, because we know, you know, if you wait until a population is in crisis, it's actually kind of late. At that point, you'd rather direct assistance to people that are not yet in crisis, so that they can, you know, plant more crops or do things that help them avert the most dire situations. And using machine learning to help make predictions there is something that Google Research is partnering with the FAO on, to sort of help with that prediction. Let's talk about AI. Okay. Convince us that it's going to lead up to a good future.
Starting point is 00:52:35 Yeah. I mean, I think obviously there's a lot of discussion around this. Let me just, you know, put some context to this. There is a sense, at least from a layman like me, that it's not being pushed forward in an adequately multidisciplinary manner. it just seems highly technological, right, without roping in those people that are expert in culture, economics, environment, spirituality, philosophy, and all that good stuff. I just think that those are important, right?
Starting point is 00:53:11 Yeah. To make sure that this goes to the end of the pipe in a benign or judicious or wise manner. Yeah. Yeah. Yeah, I definitely agree with that. I mean, I think one of the things you want to do whenever you're thinking about applying technology to some problem is you want to bring in people who have a lot of knowledge about that area and work with them. You know, some of the most interesting projects I've worked on are ones where, you know, I might have some technical expertise, but where I learn a lot from colleagues who have other domain expertise. They're clinicians and they understand this kind of health care problem extremely well.
Starting point is 00:53:54 And they can say, yeah, if we could do this, that would be really helpful. This isn't that helpful. This is a big problem. This is not a big problem. We should look out for that. So whenever we're approaching, you know, the use of AI and machine learning in a different domain, we want to bring in, you know, that comprehensive set of people who are thinking about, you know, that domain. We're thinking about, you know, issues of representation and fairness. Right.
Starting point is 00:54:21 If, you know, technology is going to affect, you know, people in one community, you want people who are from that community, or who speak that language, or, you know, understand the situation in, you know, that city or that country more deeply, so that they can provide feedback and advice and work together. This is one of the things that Google has done. As we sort of saw more use of AI in our products and, you know, thought about where it was going to be applied in other areas, we actually put together a set of principles by which we think about, you know, how do we make sure that we're responsible in thinking about how AI is applied to different
Starting point is 00:55:07 problems. You know, we want to avoid creating unfair bias. We want to avoid creating harm. We want to sort of focus on, you know, positive use cases. And our AI principles, which we published in 2018, are a set of seven principles by which we think about these things, and we evaluate downstream uses of machine learning and AI in terms of those principles. And I think it's actually been a helpful thing for us to put those out externally.
Starting point is 00:55:36 Because, you know, we've been thinking about AI for a while, but other organizations were starting to think about using machine learning or AI in, you know, whatever environment, whatever problem they're engaged in, in their discipline or their domain. And I think it was helpful for us to put out those principles so that other people, or other organizations, could reflect on them, you know, and say, yeah, that makes a lot of sense. Or, you know, in our industry, this one doesn't necessarily make as much sense, but these other ones resonate. How do you make sure that you're going to be able to find the right balance between humanity and profitability? Yeah. It's tricky. I mean, I think
Starting point is 00:56:20 we actually do a fair amount of work where we don't worry too much about whether it's going to be profitable, because it's just the right thing to do. You know, like our contrails or our Green Light work, I think we don't really worry about that. It's just the right thing to do for the planet. Or a lot of the healthcare-related work we've been doing in developing countries, in low- and middle-income countries. We've deployed some, you know, retinal image-based machine learning systems to help with diagnosing diseases like diabetic retinopathy, in partnership with, you know, eye hospitals in India or in other locations.
Starting point is 00:57:05 And I think that's a pretty good thing to be doing, regardless of whether that's profitable or not. In other areas, we think there's really important uses of AI and machine learning, and they provide economic benefit, and we create, you know, business models around that. So like some of our cloud-based AI products, you know, people pay money to use them because they're useful, and that's fine. So I think getting that balance right is an important thing, but, you know, it doesn't have to be an either-or. You know, there was an announcement today with respect to the leadership of one of the AI players. Oh, yeah.
Starting point is 00:57:46 Without mentioning names, I want to put that in the context of what I want to hear from you in terms of whether this should be open source or closed source, for the benefit of humanity, for the time being. What should be open source or closed source?
Starting point is 00:58:07 AI, just in general. Okay. Yeah, I mean, I think it's a complicated question. I think, you know, we've actually had a long history of open source releases of sort of basic building blocks of AI toolkits. So things like TensorFlow or JAX, we've been working on for many years and releasing. And actually, a huge number of developers around the world have created all kinds of amazing things. TensorFlow, I think, you know, there's 40 million downloads of that system,
Starting point is 00:58:42 probably more now, that have enabled things like the cassava detection example I mentioned. At the same time, with the most capable models, you know, I think it's really good to make sure that they are deployed in a safe manner. And when you completely release the model to the world, you know, it can have all kinds of amazing uses, but it can also be used in ways that maybe are, you know, less desirable. And you don't really have control of that. That doesn't mean we shouldn't open source models. I think it's a balance, right? Like we want, you know, amazing models that are open source that people can do all kinds of good things with. But with the most capable models, you know, I would be a little more circumspect. You can offer API
Starting point is 00:59:37 access to people and they can build things on top of that, but that doesn't necessarily mean that we want them, you know, to be completely open and available. China is a large population, right? You know, if you hear some of the experts on AI from China, they seem to think that they're going to be ahead of the United States in AI because they've got more data points. Is that the right logical way to think about how AI is going to move forward or there are other variables that need to be taken into account? Yeah, I mean, I think obviously AI is being worked on all across the world, including in the U.S., including in China, many people in Europe and, you know, all over Asia and Southeast Asia. Yeah. I think it's a, you know, it's a technology that is very relevant to many, many things. And so it's natural that there's lots of work on it everywhere. I mean, I think it's not so much about getting ahead.
Starting point is 01:00:44 It's about everyone working on, you know, improving the capabilities of what these systems can do, making sure that they are deployed in ways that benefit people, citizens, or, you know, users of companies' products, or, you know, improving the lives of patients or clinicians. There's a lot of things that can be done with AI. And, you know, I think having a responsible approach, where you're looking at the ways in which this technology is being used and deployed and contemplating how it should be used in the future, is a really helpful thing. You know, if you take a look at some of the reports on the value proposition coming from AI, right?
Starting point is 01:01:31 I mean, some pontification might say it's going to be between $50 and $100 trillion worth of economic value in the next 10 to 15 years, right? Right. You know, I come from a developing country, and you've grown up in some developing countries, right, in Africa and all that. It just seems from an intuitive standpoint that most of the value is going to accrue to just the United States and China. Right? I mean, this is coming from me, right? My perspective, right? Yeah, sure. I want to hear, you know, what your views are with respect to how people in Southeast Asia could actually feel confident about being able to capture a little bit of that value proposition, which could amount to $50 to $100 trillion, globally speaking, in the next 10 to 15 years. Yeah. I mean, I think,
Starting point is 01:02:26 what do we need to do to make sure that we're relevant, that we're participatory in this narrative? It's actually a really good question. I mean, I think the sort of increasing interest in AI and machine learning is something where, I think, you want to encourage people in your country, other countries, to learn about these technologies, to identify ways in which they can be applied by local companies, by local universities and developers in your country.
Starting point is 01:03:03 I think that is a way to make sure that everyone participates in the potential benefits of AI, both kind of societally but also economically. You were asked this at the TED Talk. I'm going to ask you again. How do you make sure that you're not going to carry forward the pre-existing negativity into the future?
Starting point is 01:03:31 That we don't carry forward? Yeah. In terms of like... Well, I mean, you know, inherently there's something that's negatively biased, right? Yeah. With a pre-existing technology, right? What do you do to make sure that that's not carried forward? Yeah, I mean, I think...
Starting point is 01:03:49 With respect to what's good for humanity, with respect to what's good for the community and the person and all that stuff. Yeah. I mean, I think this is definitely one of the... risks of AI and machine learning is the systems learn from observations about the
Starting point is 01:04:04 risks of AI and machine learning: the systems learn from observations about the
Starting point is 01:04:13 ways, these systems will learn to replicate that behavior and maybe even accelerate it because now you can have an automated
Starting point is 01:04:22 decision about, you know, as one example, who should get a home loan or not. We know those are not always based entirely on fair decisions. They're sometimes biased in various ways by human fallibility.
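The perpetuation risk under discussion can be made concrete with a toy sketch: a "model" that simply learns historical approval rates per group will reproduce whatever disparity the historical decisions contained. The groups, the numbers, and the learning rule are all invented for illustration; real bias-mitigation work is far more involved:

```python
# Toy sketch of bias perpetuation: learning P(approve | group) from
# past loan decisions reproduces the bias baked into those decisions.

def learn_approval_rates(history):
    """'Learn' an approval rate per group from historical decisions."""
    rates = {}
    for group in {h["group"] for h in history}:
        decisions = [h["approved"] for h in history if h["group"] == group]
        rates[group] = sum(decisions) / len(decisions)
    return rates

# Invented history in which group B was approved less often despite
# (implicitly) similar creditworthiness: the data itself is biased.
history = (
      [{"group": "A", "approved": True}] * 80
    + [{"group": "A", "approved": False}] * 20
    + [{"group": "B", "approved": True}] * 50
    + [{"group": "B", "approved": False}] * 50
)

rates = learn_approval_rates(history)
print(rates)  # the learned model inherits the 80% vs 50% disparity
```

Nothing in the training step distinguishes an unfair disparity from a legitimate statistical association, which is exactly the distinction drawn next in the conversation.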
Starting point is 01:04:40 And that can be perpetuated if you train a machine learning model on biased home loan decisions. You will now have an automated system that makes biased home loan decisions. So there's a lot of work on how do you take data that itself is biased and make sure that you can correct a model so that it doesn't have that form of bias, but it does have, you know, other kinds of properties. You know, a lot of these things are learning statistical associations between things, and some of that bias is completely appropriate. Like, you want the model to know that the word purr is associated with cat, right?
Starting point is 01:05:25 That's a bias, but it's purr, P-U-R-R. But, you know, that's not a bias we're particularly concerned with. There are biases in the model about, you know, who gets a home loan that we are concerned with. And I think this is partly harkening back to how we think about things in terms of our AI principles. You know, you want to be looking for and avoiding harmful or unfair bias. And a lot of those problems, things that we list in our AI principles, some of those are active areas of research, where, you know, we have some techniques that can eliminate bias or reduce bias, but not completely solve the problem. And so we want to continue to make sure that we do cutting edge research in those areas, but also apply the best known techniques that we have now to, you know, the problems at hand for today. Last question on AI. How do you see it from a
Starting point is 01:06:30 policy-making standpoint? I mean, I get the sense, again, as a layman, that the technologists are way ahead of the regulators in many countries around the world, right? Yeah. And I think it's in your long-term interest to make sure that there is an alignment, right, of wisdom, knowledge, and being informed, right? Yeah. How do we align these two? You know, it gets even more interesting in a developing country,
Starting point is 01:06:57 much less an underdeveloped country. How do you see that going forward? Yeah, I mean, I think in general, the field of AI has been moving very fast, and that's generally kind of at odds with the careful, deliberative processes that are often inherent in politics and regulatory decision-making.
Starting point is 01:07:21 I think it's really useful for technologists to help inform policymakers about what is currently possible, what is likely to be possible five years from now, and, you know, what are potential
Starting point is 01:07:39 paths five and ten years down the road so that they can be informed and make good decisions about, you know, okay, we should regulate this kind of thing in this way, you know, maybe we shouldn't regulate this other, you know, potential application. I think, you know, I often think of these in terms
Starting point is 01:07:57 of the actual application of AI, because it's a very broad technology. And so what is appropriate in one setting may not make the most sense for another. And often there are kind of regulatory frameworks in particular domains that already exist. And with some modest changing, or informing about how AI could be used in that domain, that regulatory framework can be modified but not completely overhauled. I think healthcare is a really good one. There's a lot of healthcare regulation in lots of places in the world. And now AI is being applied to some kinds of diagnostic approaches.
Starting point is 01:08:42 But regulators already exist in those domains and are thinking about these issues. Then there are completely new domains that AI enables, where there isn't an existing regulatory framework. There, I think it makes sense for policymakers to take a look: okay, what does this technology do? What can it do? What sorts of harms could happen that might hurt people? And what do we want to do about it? But I also think it's important to look at how we avoid hampering the
Starting point is 01:09:25 potential of some of these approaches, but still protect, you know, the public interest from harm of various kinds. Jeff, are you getting the sense that you're not recruiting good engineers as fast as you want? Or are you getting the supply of good engineers at the right pace at Google? Yeah. I mean, I think it's a good question. I think one of the things that this shift to much more,
Starting point is 01:09:55 focus on AI and machine learning in computing in general over the last decade or so has what has happened is 10 years ago, you know, machine learning was one thing that some people emerged from university with a little bit of understanding of, but not everyone. What I think has happened over, well, particularly from like a computer science program, is what I'm talking about. I think what's happened over the last decade is there's been such interest created in what can these technologies do that universities are really reacting to this. And so now it's pretty hard, I think, to emerge from a undergraduate program with, you know, without having at least taken a machine learning course or being exposed to it in some way. And I think that that's helpful because now we have more people who understand this technology at least somewhat and, you know, can understand, you know, some of the potential harms that can result from applying it, the basic techniques, the kinds of things that are possible now, the kinds of things that are, you know, not quite possible, but are likely to be possible. And the field is moving fast. So you want people to have that understanding and then also kind of,
Starting point is 01:11:20 keep up with the changes in what is possible. You know, we've talked for a little more than an hour, and I feel like I've just taken a semester's worth of computer science classes. It's hardcore, man. I mean, you strike me as a hardcore scientist, right? But just personally, what do you do to chill?
Starting point is 01:11:42 Oh, yeah. I mean, I love hanging out with my family. I have two wonderful daughters who are now adults. You sound like you have cats, because you talk about cats all the time. No, I used to have cats; I no longer have cats. Part of the reason we talk about cats is that we were doing some unsupervised learning ages ago. Basically, we took 10 million random frames from
Starting point is 01:12:05 random YouTube videos and trained a neural network on them without any labels, to see what kinds of patterns the system would learn to pick up on. Then we said, okay, what does it learn to identify? And since it's YouTube, one of the things was cats. We had a particular neuron that would fire when there was a cat face in the image, even though it had never been told what a cat is, which is kind of cool. It's sort of like how humans learn, right? It's a very unsupervised thing: you're mostly just taking in the world as a young child.
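The "cat neuron" experiment Jeff describes can be sketched in miniature: train a network only to reconstruct unlabeled inputs, then observe that hidden units become selective for patterns they were never told about. This is a hypothetical toy analogue (a tiny autoencoder on synthetic 8-pixel "images"), not the actual Google system; all data and sizes are invented.

```python
# Toy sketch of unsupervised feature learning: a tiny 8 -> 2 -> 8
# autoencoder trained without labels ends up with hidden units that
# respond selectively to hidden patterns in the data.
import numpy as np

rng = np.random.default_rng(0)

# Two underlying patterns; the network never gets labels for them.
pattern_a = np.array([1., 1., 1., 1., 0., 0., 0., 0.])
pattern_b = np.array([0., 0., 0., 0., 1., 1., 1., 1.])
data = np.stack([(pattern_a if rng.random() < 0.5 else pattern_b)
                 + rng.normal(0.0, 0.1, 8) for _ in range(200)])

# Autoencoder weights: encode to 2 hidden units, decode back to 8 pixels.
w_in = rng.normal(0.0, 0.1, (8, 2))
w_out = rng.normal(0.0, 0.1, (2, 8))
lr = 0.05

initial_mse = np.mean((np.tanh(data @ w_in) @ w_out - data) ** 2)
for _ in range(2000):
    h = np.tanh(data @ w_in)              # hidden activations ("neurons")
    err = h @ w_out - data                # reconstruction error; no labels used
    grad_out = (h.T @ err) / len(data)
    grad_h = (err @ w_out.T) * (1.0 - h ** 2)
    grad_in = (data.T @ grad_h) / len(data)
    w_out -= lr * grad_out                # plain gradient descent
    w_in -= lr * grad_in
final_mse = np.mean((np.tanh(data @ w_in) @ w_out - data) ** 2)

# The hidden units now respond differently to the two patterns, even
# though no label ever said which was which -- the toy "cat neuron".
h_a = np.tanh(pattern_a @ w_in)
h_b = np.tanh(pattern_b @ w_in)
print(f"reconstruction mse: {initial_mse:.3f} -> {final_mse:.3f}")
print("hidden response to pattern A:", np.round(h_a, 2))
print("hidden response to pattern B:", np.round(h_b, 2))
```

The real experiment used far larger networks and images, but the principle is the same: structure that recurs in the data gets discovered without supervision.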
Starting point is 01:12:37 And then you get an occasional bit of supervision: you're told, okay, that's a cat, and the young child associates the vague patterns they've built up about what constitutes a cat with the word "cat." Anyway, sorry, that was a bit of a digression. Having been born in Hawaii and having grown up in Africa and other parts of the world, any outdoor stuff you do? Oh, yeah.
Starting point is 01:13:05 I like to do various kinds of exercise. I play in two different soccer leagues; I'm clinging on in the 25-and-over division. The 25-year-olds seem faster every year, which is annoying. I've got to ask you, Ronaldo or Messi? Oh, Messi. Sorry, Ronaldo. Oh, yeah.
Starting point is 01:13:28 Yeah, I'm with you. Particularly peak Barcelona Messi. And the World Cup, too. Yeah, and the World Cup. I'm glad he won the World Cup. That was awesome. It's great to see, because he's clearly the best player in the world,
Starting point is 01:13:40 and it was nice to see him win a World Cup. Yeah, thank you so much for your time. Thank you. Appreciate it. That was Jeff Dean, the chief scientist at Google.
