Big Technology Podcast - Alphabet X's CEO Has A Vision For AI Moonshots — With Astro Teller

Episode Date: June 7, 2023

Astro Teller is the CEO of X, Alphabet's moonshot division. He joins Big Technology Podcast for a taping in front of a live audience at Summit at Sea to discuss the calculations society must make around advancing AI research and how the technology can help the world. Listen for a dynamic discussion that digs into Teller's family history, the current 'moonshots' that X is taking, and whether AI moonshots should've stayed separate from Google for longer. Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Transcript
Starting point is 00:00:00 LinkedIn Presents Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond. And we are here with Astro Teller, who's the captain of Moonshots and the CEO of X at Alphabet. Thanks for having me. All right. Now, as you can hear, we're doing this in front of a live audience here at Summit at Sea. And folks, I think you just clapped a little bit, but I really want you to be on the recording.
Starting point is 00:00:48 So let's hear it. You got to be loud. Doesn't it feel so good to be back in person again? Man. For sure. Is this your first time doing something in front of a live audience? I've done it a few times, but nothing as boisterous as this. This is amazing. So thanks everybody for coming. You know, Astro, as I was doing my research for our conversation, which is billed as maybe an optimistic look at AI coming from you, I was looking into your background and saw that your grandfather, Edward Teller, is the father of the thermonuclear bomb.
Starting point is 00:01:30 Yes. This is going to be a slow conversation. No, I'm kidding. So, I guess a bit of a curveball at the beginning, but here we are. So is that weird? And also, what do you think about? When you think about AI, what do you think about in terms of our ability to develop technology that could be quite destructive?
Starting point is 00:01:55 Let me give you two thoughts. I guess you can dig in further. But the first one is: I have always been inspired by the idea that getting really bright people into an intense environment to work on something really hard that really mattered could have sort of profound positive impacts for the world. And I think the NASA Space Program,
Starting point is 00:02:21 the Manhattan Project, Bletchley Park in England, there are lots of these examples. And that was one of the things that inspired me when I was young. And we'll probably get back to getting a group of people together to work on something hard in particularly hard ways. But the other thing is the nuclear bomb makes sort of like a good headline, sort of the mushroom cloud. We all have emotions attached to that. But the emotions that we've attached to our fears, our frustrations, understandably, about nuclear bombs, translated in the 60s and 70s into such a negative narrative about nuclear energy
Starting point is 00:03:08 that we, as a society, completely missed the boat, pun intended. The disaster, which is climate change right now, would not be happening if we as a society had not let our fears about the first thing translate into an inability to use the upside of nuclear power to save us from what is now arguably the biggest problem in the world. So as we sort of move forward and talk about the technologies of the day, I would encourage us to think about that duality. Not that nuclear power has no downsides, but if we aren't careful, and in that case we weren't, we miss a lot of the opportunity for the upsides.
Starting point is 00:04:02 Thank you. You can give it up. It's almost a perfect precursor to a discussion about artificial intelligence, because AI can help our society in countless ways. And in fact, it's already in place in certain ways helping us. But it also has this capacity for destruction. I mean, you have, I think it's an often cited statistic, but one in ten AI researchers saying there's a chance
Starting point is 00:04:28 that it could effectively turn civilization into paperclips. So my question to you is, when we create such powerful technology with the capacity for good, but also the capacity for bad, what calculation do you think needs to go through our head before we decide to move forward? Again, this is a very big conversation, so we may have to come at it from a couple different directions. But artificial intelligence, number one,
Starting point is 00:04:57 is algebra on steroids, just so we're clear. It is a very big field of study, and when you get out a microscope and you look down inside the computer, you cannot find the AI in there anywhere. It's just math. And sort of depending on how you measure it, humans have been working on artificial intelligence by that name for 70 years, and it has been making progress over the last about 70 years. So it's not like we were at zero and then there was like a huge step function. The plane that flew you here flew itself almost the entire way using artificial intelligence. And we all rely on artificial intelligence every day, whether we realize it or not. So this is not to say no to your question, but just maybe for us to set the table that
Starting point is 00:05:49 this is not like the lights just got switched on or we're sitting there by the light switch, wondering whether to turn them on. Do I think that things are picking up speed? Yes. But this is part of a much longer narrative. And I think that we need to be really thoughtful about how, as we develop any technology, can we get the most benefit out of that technology? And at the same time, as wisely as possible, see potential downsides from that technology
Starting point is 00:06:19 and then find ways to mitigate against them or corral them into, places where they won't be a big downside for society. So you're a professional inventor, effectively, who manages professional inventors. And I wonder, what is it about us, about humans, that we'll go forward and create things that we know have the potential for great destruction and kind of hope, kind of be optimistic that we're going to be using them for good? I mean nuclear energy, nuclear weapons, like nuclear energy could help save the planet. But the fact that we have nuclear weapons increases the chance that we'll wipe ourselves out.
Starting point is 00:07:03 Same with AI. AI is something that can do a lot of good, has done a lot of good. Glad that the plane made it here okay, but also you have these potential downsides. So put us all in the head of an inventor now and explain sort of why the human spirit pushes forward with things and hopes that we'll do a good job taking care of them even though there's risk. So inventing sounds like a monolithic effort, but let me separate it a little bit into two different things.
Starting point is 00:07:37 One of them is learning, the discovery of new knowledge. If humanity can't survive the discovery of new knowledge, I mean, I don't believe that. Maybe you do, but I believe in humanity. I think it could be bumpy at times, but I believe we can survive discovering new knowledge. I don't want us to need to infantilize ourselves as a species by preventing new knowledge. It wasn't my intention. I know. I know. I know. That's the first half. Now, the second half of invention is what you do with the knowledge. And in instantiating the opportunity in a new thing, like, let's say, electricity, should we put electricity into the things around us? You can say, sure, generically, but when you start to do it in specific cases, you can ask things like, who will benefit from this, who will be harmed by this? You can say almost for certain, with something like either electricity or artificial intelligence, that we can't say for sure all of the ways in which this will play out. Great.
Starting point is 00:08:43 So how can we sandbox this discovery, this invention process, so that as we try to instantiate it in products and services, we can put it out in the world before it's done, not to be irresponsible, but to be responsible? And to say to people, what do you think? How should we change this? What can we learn from this? And if you'll allow the metaphor, you know, for a long, long time,
Starting point is 00:09:10 Waymo, the self-driving cars, which actually came from X, we had a person sitting in front, right by the steering wheel, with their hands really near the steering wheel, eight hours a day, just making sure that nothing bad happened. So the car was practicing driving itself, but there was a backup. I think there are lots of ways in which we can learn in the real world and do it responsibly by engaging the rest of the world in what's happening and getting that feedback so that we can get these unforeseen consequences out into the light so that we can design around them. So I'm going to get to the optimistic stuff, I promise. But I want to keep going here just a little bit more. It's very
Starting point is 00:09:51 interesting how you talked about how with AI, it's just a series of numbers, effectively. It's math, and we're not going to see God inside the algorithms. You could also say that with nuclear, right? It's just a series of equations that we figured out how to do some crazy stuff with, sometimes crazy awesome, sometimes crazy bad. So, like you mentioned, we shouldn't worry about surviving the discovery of new knowledge. But is there ever a point where we say stop? Like, I think about the letter that Elon Musk and a bunch of others signed saying we need to stop researching AI, which seemed to be a bit of a pipe dream to me.
Starting point is 00:10:31 But is there ever a point where we say this type of stuff we shouldn't keep going with? Or do we sort of, is it inevitable that we push ahead? I can't speak for the whole world. I think the reality is that lots of people who signed that letter and lots of other people in the world are going to keep working on it, no matter what letter is signed. That's great for headlines for them. Yeah, for sure.
Starting point is 00:10:59 The only thing I can control is what we work on, and I want to work on what we're working on responsibly, so that we can get as much benefit to humanity as possible. I think that the world is overrun with serious problems. I would put at the top of that list climate change. Nowhere close, in second place, I would put nuclear weapons, just since we used that as an example, and I would put AI doing something particularly horrible for humanity,
Starting point is 00:11:28 a very, very distant third. So instead of focusing on all of the bad things, and I'm willing to talk to you more about them, I'm interested in why we're not talking about the downsides, because this is about us netting humanity to the positive. The downsides or the upsides? Why we're not talking about the upsides, right? Yes.
Starting point is 00:11:50 Okay. So I agree that... We're going to do that, by the way. Excellent. I'm looking forward to it. And is it possible that there will be complexity for humanity as we go through this? 100%. Do I believe that anything particularly extreme in our lifetime is going to happen?
Starting point is 00:12:12 I don't. I'm sorry, but I've been working on robots trying to open doorknobs for like the last 30 years. And it's been a slog. So as someone who's been in the field for 30 years, I'm just a little bit maybe more sanguine than people who started learning about this recently. I think I need to just clarify the question here because it's not do we stop. It's, is there a point where we think about pausing? That's what I'm asking, really.
Starting point is 00:12:39 It's not like, I'm not sitting here saying, Astro, we've got it, this stuff is about to turn us into paper clips. That's not the point at all. The point is, philosophically, do we have a point where we say, maybe we don't want to develop those things? And if we don't, that's fine. And that's, you know, one answer. But I'm curious if you actually think that there is a place where we do say no. Yes, I'm sure such a line exists. I'd argue that by the time we've gotten to that line,
Starting point is 00:13:04 it will already be too late. So this is actually me agreeing with you. I think way before that line, we should be saying, what are we doing, how are we doing it, and can we put intelligence into the things in the world around us
Starting point is 00:13:19 in ways that benefit humanity, and how, as we're doing that, even if our vision is really well honed to be net positive for humanity, can we be on the lookout for downsides and get ahead of them? Because I don't want to ever get to that line. And I really think if we get to that line and then some half of humanity says, okay, we're out, the other half of humanity is just going to keep going.
Starting point is 00:13:41 So I think we need to be worried and thoughtful and responsible about these things, starting now, not starting when we get to that line. Yeah, I mean, it is very interesting. And that's kind of the opinion that I share with you, that we don't really have, as a species, the capacity to stop. And that's very interesting. Same with the Manhattan Project, same with AI. So, okay, we're going to go ahead and build AI. Do you think there is a similarly positive impact that artificial intelligence can have, the same way nuclear energy can, in terms of preventing, you know, we advertised this session around
Starting point is 00:14:21 energy, climate change. So let's start there. Like, is there a place, is there a way that AI can have that impact? And is it the AI that we've been developing all along, meaning the optimization technology, computer vision, stuff like that, or does this generative AI wave have a role to play in this world as well? And let me add a third question, because this is now a two-parter, so might as well make it a three. What at Google X, or Alphabet X, right, is now happening to tackle these issues? I'm happy to go down all three of those.
Starting point is 00:14:56 I think we're going to get lost in the rabbit holes. We're going to have fun getting lost in them, but you may have to bring me back to some of those topics. I'm up to it. Awesome. So, I mean, let's start with, you know, artificial intelligence is a really big basket of things. Many people may have heard a lot about large language models recently. That is a particular piece. So artificial intelligence has lots of baskets. Machine learning is one of those baskets. Deep neural networks are a subset of machine learning, and even within deep neural networks, there are ways of setting them up and training them that are what you've heard in public referred to as these large language models. Those will continue to have a lot of impact on the world, for sure. But I'd rather
Starting point is 00:15:42 focus, I think it's actually more productive to go one or two steps back up to like machine learning in general, and to say, how are we solving problems? We should be falling in love with the problems, not falling in love with the technology. And so, for example, one of the projects that's near and dear to my heart that's at X, I think it's a really big issue in the world. The world's electric grid is the world's most complex machine. That is by far the most complex machine humanity has built.
Starting point is 00:16:11 And the people who run the grid all over the world, and there are many pockets, obviously, these are good people. They're trying hard, but it is very complicated. There are regulators. There are private companies. There are these semi-public companies, the system operators, the people who own the wires, the people who own the generation, the people who own the distribution, the sort of last mile to people's homes, all off in different companies. It is a very complex problem.
Starting point is 00:16:38 And so when you go to the people who are supposed to make sure that the load, the need for electrons, and the source of electrons are balanced on a millisecond-by-millisecond basis, they're trying really hard to do that for all of us, and they're barely hanging in there. And that's leaving the grid the way it is. There are now all of these solar fields all over the world that want to jump onto the grid, all these batteries that want to jump onto the grid, all these electric vehicles that want to jump onto the grid. And they have no way of figuring out how to maintain their system
Starting point is 00:17:12 because, here's the crazy bit, they don't know what their system is. If you go to a system operator and say, show me the digital map of where every inverter, every transformer, every wire is in your grid, they will just say, we don't have that. And that's why it takes them 10 years, when you wonder why there are 10-year waits in most states in the United States to get a solar field onto the grid. It's not because people don't want to put solar fields on the grid. They don't know what will happen to their machine if they plug that solar field into the grid. So what if machine learning could help them to learn about their grid, virtualize their grid,
Starting point is 00:17:54 and then answer in a minute, instead of in a year or 10 years, what will happen if you plug this solar panel onto the grid? It's going to be okay. Give that one a yes. Think about the tsunami of renewables that are already waiting. They've literally already been built. They're just sitting there in the dirt, wind farms and solar panels, waiting because we don't have a virtualization of the grid. That is an example of what X is doing right now to try to use machine learning throughout the energy infrastructure to make the world better.
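The kind of what-if question Teller says a virtualized grid could answer in a minute can be sketched as a toy capacity check. To be clear, this is an invented illustration, not Tapestry's actual model; every function name, number, and threshold below is made up:

```python
# Toy "virtual grid" query: can a new solar field connect without overloading a line?
# All names and numbers here are invented for illustration.

def can_connect(line_capacity_mw, existing_load_mw, new_generation_mw, margin=0.1):
    """Approve the interconnection if peak flow stays under line capacity with a safety margin."""
    peak_flow_mw = existing_load_mw + new_generation_mw
    return peak_flow_mw <= line_capacity_mw * (1 - margin)

# With a digital map of the grid, this kind of answer takes milliseconds
# instead of a years-long interconnection study.
print(can_connect(line_capacity_mw=100, existing_load_mw=60, new_generation_mw=25))  # True
print(can_connect(line_capacity_mw=100, existing_load_mw=60, new_generation_mw=40))  # False
```

A real interconnection study would model power flow across thousands of nodes; the point is only that a digital model of the grid turns a ten-year queue into a fast query.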
Starting point is 00:18:34 And let's talk about the wave of generative AI, large language models. Where do you see the potential there? And yeah, I'm curious if X has any projects underway. Because look, if I had to put myself in your shoes, I think what you're probably going to say is that this is a little bit overhyped and we can actually do more with the technology we have. I guess that's what you just said in a sense. But like, yeah, where do you really think the opportunity is for generative AI and is X working on anything there? So generative AI, as you've probably seen play out in the media recently, leans more into things like asking Bard to write you a poem, going to one of these image producers and say, hey, make me a picture of a pony that's actually
Starting point is 00:19:23 a unicorn. It's at a birthday party, and it's wearing a purple saddle, and then it makes you that picture. That's what, you know, in the media right now, generative AI is typically being put forward as. Let me try to give a different view. That's real, that's going to continue to be a thing, but that's the tip of the iceberg. So think about it this way. We're in the middle of a process, actually this has been decades coming, and it will take decades more to move through society, of lifting up people, moving them away from the craftsman mechanical detail work
Starting point is 00:20:04 of designing and making things, lifting them up into guiding computers that help them make things. And this has been going on for a long time. If you're a photographer, you are familiar with, and you have used, Photoshop, for example. And that doesn't ruin your ability to be a photographer; it lifts up your ability to be a photographer. So if you work at a car company and you have a car strut, let's say, so it's sitting there, it's one of the main pieces that holds the car bits in place, around the wheel, to the frame,
Starting point is 00:20:37 you want it to be really strong when you pull it, but also when you smush it together it's got to be really strong. It has to be really strong in torsion as well, because otherwise it'll snap like this. But you want it to be cheap to make, and you want it to be as light as possible. So instead of designing what you think would be the best one, what if you went to a system that could try millions of different possible car struts, so many that it started to hill climb in car strut design space? And you were watching it and telling it things like how much you value fast to make, cheap to make, low carbon footprint to make, and low carbon footprint to use, because it weighs less for driving it around afterwards. So you're guiding it, you're making the
Starting point is 00:21:26 decision, but it's trying millions of things, and it comes out with a car strut at the end, which is better than any car strut a human could have designed. We're going to see this sort of thing play out in every discipline in the world over the next 30 or 40 years, and it will take a long time. X is really interested in some of these spaces where we can help the people of various industries to be inventing and designing much faster, so that we get to much better designs and much better solutions to the world's biggest problems. So you're working completely on moonshots. What you just described sparks a question for me, which is that if this becomes democratized, if this gets in the hands of so many people, then
Starting point is 00:22:12 does the path from wild idea, moonshot idea, to production become much quicker and then also much more democratized? We don't have many invention houses like X in the world. So are you worried maybe that you're going to have a little bit more competition? Exactly the opposite. The world is not going to run out of problems. And the fundamental goal of X is to get a bunch of people together to work on those problems as efficiently as possible. The more people who can work towards solving humanity's problems, the better off we'll be. So I hope it does democratize things. I'm watching it currently start to democratize things, and I'm super excited about that. I would also say, so there's democratize in the sense of more people can be at the starting
Starting point is 00:23:01 gate and doing that work. It is also lifting us and people like us up in our ability to reach out even better to partners. And so we're now helping aquaculture experts in Norway in a way that 10 years ago, I'm not sure we could even have imagined. We're helping in the electric grid in Chile in ways I just described to you. And so they're getting this same benefit.
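The car strut search Teller described, millions of candidates tried against goals the designer has weighted, can be sketched as a simple weighted random search. This is a hypothetical sketch: the formulas, weights, and the single "thickness" parameter are invented stand-ins for a real design space:

```python
import random

# Toy generative design: score many candidate struts against objectives the
# designer has weighted, and keep the best. All formulas here are invented.

random.seed(0)

def score(thickness_mm, values):
    mass = 2.0 * thickness_mm              # heavier as it thickens
    cost = 1.5 * thickness_mm              # more material, more cost
    strength = 10.0 * thickness_mm ** 0.5  # stronger, with diminishing returns
    # The human sets the weights; the machine does the searching.
    return (values["strength"] * strength
            - values["mass"] * mass
            - values["cost"] * cost)

designer_values = {"strength": 1.0, "mass": 0.8, "cost": 0.5}

# Try a hundred thousand candidate designs and keep the best-scoring one.
candidates = [random.uniform(1.0, 20.0) for _ in range(100_000)]
best = max(candidates, key=lambda t: score(t, designer_values))
print(f"best thickness: {best:.2f} mm")
```

Change the weights, say, value low mass more, and the search lands on a different strut; that is the sense in which the human guides while the machine tries millions of options.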
Starting point is 00:23:26 They're being lifted up by this technology as we are able to share it with them. What are you doing with the aquaculture experts? Oh, aquaculture. What is aquaculture? So let me take a step back. The first project I described to you was called Tapestry. This project that I'm about to describe, on ocean health, we call Tidal.
Starting point is 00:23:49 And let me give you the context here. So humanity gets about $2.5 trillion, almost $3 trillion, a year from the oceans. And we are destroying the oceans, as many of you probably know, faster than we're destroying the land or the air. It is the sort of sink for humanity; it is pulling all of the worst bits that we're putting into the world into it, and that's why the ocean is dying. We, humanity, are not going to stop using the oceans. We need to get more value per year from the oceans, and we need to do it in a way that is not only not destroying the oceans, but regenerating the oceans. There is no possible way to do that unless we find a way to take automation to all of the things that we currently do in the oceans and can do in the oceans. So if that's the big picture, where are we going to start? Well,
Starting point is 00:24:43 one of the cool things about fishing, I mean, open sea fishing is actually really problematic, as most of you know; we're in the middle of overfishing all of the fish in the world. But aquaculture actually helps us not to overfish the oceans. And because the carbon footprint of a pound of fish is one-eighth the carbon footprint of a pound of beef, we are wildly better off as humanity moving to producing food through aquaculture. But when you go to a huge pen, even our partner Mowi, which is a sustainable aquaculture farmer in Norway, it's the largest salmon farmer in the world, and they are very good at what they do. And the state of the art, when they wanted to find out a year ago, even two years ago maybe, how well their fish were doing, how much their fish weighed, they would, in a pen with 250,000 salmon, pull 20 salmon out of the water, put them on a scale, weigh them, average that and be like, well, that's probably what they weigh, and put them back in the water.
Starting point is 00:25:55 If they wanted to find out if the fish were sick, they would pull 20 fish out and, like, do I see any lice on the fish? Put them back in the water. So what we're doing is we're enabling them, through computer vision and machine learning and automation and specialized sensors, to look at the health of the fish, the weight of the fish. We're helping them to automate the feeding of the fish, which makes it much more sustainable, because the runoff from overfeeding on these fish farms is actually one of the big problems in aquaculture. So we're making the farmer better while we're making the world better, using machine learning.
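The difference between the old spot check and per-fish sensing is, statistically, the difference between a 20-fish sample and a census of the pen. A toy comparison on synthetic data (the pen size matches the 250,000 figure above, but the weight distribution is invented for illustration):

```python
import random
import statistics

# Toy version of the weigh-20-fish spot check vs. measuring every fish.
# The weight distribution here is synthetic, invented for illustration.

random.seed(42)
pen = [random.gauss(mu=4.5, sigma=1.2) for _ in range(250_000)]  # salmon weights, kg

census_mean = statistics.fmean(pen)                    # what per-fish sensing can estimate
spot_check = statistics.fmean(random.sample(pen, 20))  # the old 20-fish method

print(f"census mean:      {census_mean:.2f} kg")
print(f"20-fish estimate: {spot_check:.2f} kg")
```

The 20-fish average is usually in the right neighborhood, but its sampling error (here roughly 1.2/sqrt(20), about 0.27 kg) hides exactly the per-fish detail, sick fish, underweight fish, that cameras and machine learning can surface.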
Starting point is 00:26:27 So Google X, is it called Google X anymore? X. It's called X. X is there to effectively try to insulate Alphabet from the innovator's dilemma, which is effectively you're an incumbent,
Starting point is 00:26:58 you have a business, and you do everything you can to protect that flagship business, and that might potentially head off your ability to do things that would disrupt your core business. We've talked about some very cool projects, self-driving cars, we've talked about things with climate and food. But there's been a big story recently, which is that these chatbots coming from places like OpenAI and Microsoft have threatened Google. So now Google has its own, it has Bard, but I almost wonder, when we think about insulating Google from the innovator's dilemma, shouldn't X have been front and center building the first ChatGPT and not letting an OpenAI, for instance, run away with it?
Starting point is 00:27:49 I swear he's not a plant. But good setup. I think you'd say that we shouldn't be afraid of the advancement of knowledge. Yeah, absolutely, absolutely. So, no, no, I love the question. Let me take a step back and remind you sort of how X functions, what our goal is. Our goal is to invent and launch moonshot technologies that, if we do it right, help tackle some of the biggest problems in the world and lay the foundations for enduring sustainable businesses.
Starting point is 00:28:23 One of the early things that we did was a thing which, when we graduated it back to Google, was called Google Brain. Google Brain is the origin of much of what you now think of as Bard. And so that was an example, in the earlier days of X, of us making something and realizing that it would be productive for that to be back in the mothership. We moved that back to Google, and it has gone on to do amazing things, and so the transformer paper and other things that you've heard about. Somewhat more recently, this is about five years ago, we said, we can feel it on the horizon: we're going to get to the place where the ability is going to be there for us to be working in much tighter loops with software developers, like a partner to them, where we can complete code when they start to write it. If they write out an English language description of what they want, it will write out the code for them.
Starting point is 00:29:30 It can even be like a pair programmer and have a conversation with them and help them to be better programmers. And so that work happened for about five years at X. About seven, eight months ago, we moved it back to Google. And it was just announced recently as Codey. And that is now sort of the code-making part of what you think of at Google. Ahoy. The timing of that was so freaking perfect.
Starting point is 00:30:00 Just finished the answer, and then there we go. Familiarize yourself with the location and receive important safety information from our crew. So the real question is, is he going to delete this from the podcast, or is he going to leave it in? What do you think? Should he leave it in? Thanks, everybody. That's right. Can someone shut this off?
Starting point is 00:30:36 As required by law. One of our jokes at X is, after we, you know, make time machines work and cure cancer, we're going to figure out how to get AV sorted out. That'll be like the hardest thing, after we do the easy stuff like cancer and time travel. So I want to ask a follow-up question, which is, you mentioned that Brain started in X and then graduated to Google.
Starting point is 00:31:07 I want to question the graduation process, because when you do graduate to Google, doesn't it become a little bit harder to say, we've made all this amazing technology, now I'm going to make something that's competitive with the bread and butter? For instance, a chatbot that can sort of serve as something that people might want to use instead of search. And once it goes into Google, doesn't it have the constraints that it might not have at X,
Starting point is 00:31:34 which is that now all of a sudden it has to worry about the quality of search and the quality of response where it might not in a more experimental unit? Ramping anything up to serve more than a billion people a day is super complex. We're talking about, you know, hundreds of languages. There is a lot, technically, legally, ethically, just to do this practically at all, but also to do it ethically, that X is totally not set up to do. For the same reasons that X may be a particularly good place to do rapid prototyping and learning of new things, we are not the right place to move something from, yeah, we have something special here, to, it's ready in a really thoughtful, responsible
Starting point is 00:32:22 way to serve a billion people a day. So I hear you, but I would actually say that it would have been irresponsible for us to try to keep that to ourselves. The goal of X is to create really good seed crystals. We don't want to do it all ourselves. We want to get the ball rolling to where the rest of the world, whether in the form of an other bet at Alphabet, like Waymo is, or back at Google, says, ah, got it now, I see why that could be really big, before it's done, but when it can survive on its own, because it's left behind that stage of, what are you talking about? Why would we do that?
Starting point is 00:33:03 That's definitely not possible. When those things are still on the table, that's what X should be for: to actually work through, and often be wrong about, whether there's something awesome there. And so we try 100 things and 99 of them don't work out, and our job is to be wrong about those 99 as efficiently as possible, to move past the ones where we're wrong about them with evidence as quickly as we can, so that that one, which we can double down on over time, can sort of go on to have a really productive future. So here's my uneducated argument to switch things up. Yes, it makes sense to have a prototyping
Starting point is 00:33:43 lab, but also couldn't you, for instance, you know, a billion users, that would be building something like Bard into Google chat. But OpenAI's ChatGPT got to 100 million users. That's the fastest growing consumer product in history. It seems like you might even be in a great position to, you know, say, hey, we have this chat thing, keep developing it within X, and then call Sundar up and be like, hey, man, can we have some of that cloud infrastructure, the same way that OpenAI called Satya up and said, can we have some? And that's sort of what's enabled them to power ChatGPT. And then you don't really have to worry about the competitive elements of coming up against the core business. Let me use a related example, see if this
Starting point is 00:34:27 at least partially satisfies you. So one of the things that we've worked on at X for a very long time is factory automation. The world spends, sort of pushing $10 trillion a year, making stuff. And if you can automate the making of stuff, you can do it faster, you can do it cheaper, you actually have less waste, and it's more reliable because it's been automated. But it is fiendishly complex and expensive to automate the making of almost anything, because, until very, very recently, there was a lot of bespokeness, artisanal-ness, to programming robots. And so X worked on that for many years and created a project called Intrinsic. It graduated, and it's now an Other Bet at Alphabet.
Starting point is 00:35:20 They just announced a few days ago that they have a platform called Flowstate, which is a developer's arena in which you can develop new capabilities and new skills for robots, with a seamless ramp from early investigations in simulation to actually landing them in robots in factories doing real work. And that creates more jobs for developers, makes it easier for people all over the world to help make robots more capable, and it helps the makers of the world, and it democratizes the making of things. Because right now it costs a billion dollars to seriously automate a factory. So that's an example that started at X.
Starting point is 00:36:09 It's not exactly a chatbot, but it is making robots much more capable, much more flexible, much more nuanced and dynamic in their ability to solve problems on the fly. And then we've laid out the early infrastructure but are allowing the rest of the world to build on top of that, which I think is going to be really good,
Starting point is 00:36:30 yes, for Alphabet, but for the whole world as well. So that's something that didn't go to Google; it lives in this region post-X but still inside Alphabet, called an Other Bet. So do you want Brain back? What was that? Do you want Google Brain back?
Starting point is 00:36:47 No, no, no, no. Our job is to catch new waves for Alphabet, to catch new waves for the world. What excites us isn't the sort of empire building, or getting to sort of, like, have been the one who rang the bell. That is not our goal. We want, in a really efficient way, to keep this balance of audacity and humility just right: audacious enough that we'll try almost anything, but then humble enough that we're honest, right after we start out on each of
Starting point is 00:37:20 these investigations, that we're probably wrong and that we need to spend our money figuring out that we're wrong rather than trying to prove that we're right, since mostly we are wrong. And by doing that in lots of different domains, what rises to the surface are these new things, like the inverse design or generative design that we were talking about. It's not that I said there will be generative design seven or eight years ago. It's that we've tried a thousand things over the last decade, and the generative design stuff is some of the stuff that's bubbling to the surface in a way that's looking really promising, over and over again. That's what excites us: catching those new waves, doing it efficiently, and doing it responsibly. Astro Teller is here with us. He's the captain of Moonshots and the CEO of X. We'll be back right after this to talk a little bit more about how Alphabet and X actually create the products they do.
Starting point is 00:38:19 Hey, everyone, let me tell you about The Hustle Daily Show, a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending. More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news. Now, they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them. So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
Starting point is 00:38:54 And we're back here on Big Technology Podcast with Astro Teller. He's the captain of Moonshots at Alphabet, CEO of X. So, Astro, I'm actually curious how you decide what to fund inside of X, because, you know, I might think of this as, like, you need to predict the future, but that's not the way that you think about it. Exactly. One of the core axioms at X is that I do not believe that anyone, certainly including myself, is any better than random at predicting the future. I know that that's, like, not the cool thing to say in Silicon Valley, but I just don't believe that any of us are better than random at predicting the future. I think we can discover the future a good bit more
Starting point is 00:39:37 efficiently than maybe other people discover it, but that's very different. So the core kind of map to beginning a journey at X is we make these three circles and we talk about efforts living at the intersection of those three circles. The first one is there has to be a huge problem with the world that you want to solve. If you're proposing something for X, you have to tell us what that huge problem with the world is you want to solve. If you can't say that, we're not starting on the journey. Number two, there has to be some radical proposed solution for how to fix that problem, some science fiction sounding product or service, however unlikely it is that we could make it, that if we made it, would make that huge problem go away.
Starting point is 00:40:24 And then there has to be some kind of core technology opportunity, some breakthrough technology, that gives us the opportunity to start on that quest, and those three things together are a moonshot story hypothesis. That doesn't mean you're right. In fact, you're almost certainly wrong. But if you can propose those three things, it has the form of a moonshot.
Starting point is 00:40:50 And then the answer is: great, gorgeous moonshot story hypothesis. What is the smallest amount of money and the shortest amount of time you think it'll take to kill your idea? Because your idea has a 99% chance of being wrong, and there's no way to avoid that, because if that wasn't true, it wouldn't be radical. We're only interested in the over-the-horizon stuff. And so as soon as we sign up for that, we are explicitly signing up for being wrong most of the time, and that's why the humility has to kick in. So it doesn't work like: we will work in these areas, we won't work in those areas. It mostly is, what are huge problems with the world? So maybe not coincidentally, more than half of what we're doing right now is in the climate change space, but not because I mandated
Starting point is 00:41:39 that, but because that's what people are excited about. And those are legitimately some of the biggest problems the world is facing right now. And then it's actually evidence that kills off most of these things over time, and the ones that survive are the ones we double down on. So it's that winnowing process, which looks at the end like we planned each of these things, but we didn't plan which waves to catch. It was just a filtering process that weeded out all the ideas. Sometimes they were just bad, but more often it was just the wrong time, the technology wasn't right, or maybe even the technology was right, the timing is right,
Starting point is 00:42:17 and we just couldn't figure out how to make it great enough, which happens sometimes. It's a very interesting process. I'm curious: most people who are listening to this are not in a position to take moonshots. You must hear this all the time. Is there anything from your process they could put into place inside their companies that might help them achieve those 10x moments that you're going for? For sure.
Starting point is 00:42:40 I would actually argue that what I just described is the most efficient you can be in trying to find something new. And even if you've pre-decided, you know, that you're working on flying cars or whatever it is, inside of your effort there are lots of things that you might be thinking about: should it be gas-powered, should it be batteries, how many propellers are we going to have? There are going to be lots of things for you to figure out over time. What should the airframe be shaped like? And so there are lots of risks to take, lots of discoveries to have. And for each of them, you can do exactly what I just said. So one of the things that we say at X is, if you're starting out on a journey, I know it's no fun to kill your ideas. I get that. But let's say, for argument's sake, that the idea you have isn't going to make it. Would you like to find out now for $1, or find out three years from now for, like, $10 million? And everyone, of course, says, I guess I'd like to find out now for a dollar.
Starting point is 00:43:43 Great. Well, welcome to X. Thank you for working on the propellers or the time machine or whatever it is you're working on. How are we going to do this? Are we going to be intellectually honest or intellectually dishonest about our discovery process? You're like, oh,
Starting point is 00:43:58 I guess intellectually honest, Astro. We all know in our hearts what to do. What I just described is not rocket science. It's not like X invented this. That's not what's going on. It's just so hard to do it.
Starting point is 00:44:15 All of human nature drives us in the opposite direction from what I'm describing. So what X spends all its time doing is trying to create the infrastructure, the social norms, reward systems, et cetera, so that people actually do these things. But what
Starting point is 00:44:30 I've described, for sure, if you work in a startup company, if you work in a huge conglomerate anywhere in the world, this is still the right way to explore new ideas. Let's check in on a couple of famous ex-products.
Starting point is 00:44:45 Waymo, right? I think I'm seeing more progress in self-driving cars now, in maybe the past year, than ever, because when I was in San Francisco in 2015, 2016, people were like, this thing is going to happen next year. We're in 2023.
Starting point is 00:45:00 Not quite there yet, but it seems like there's a lot of progress being made. What's your perspective on where we are right now in self-driving cars, and how far are we from realizing that original vision? For sure, it turned out to be harder than we expected. One of the things that I think caught Waymo by surprise, caught us by surprise, is that
Starting point is 00:45:23 there are a lot of things that human drivers do that, if we want to follow the laws, we can't do. Humans are just kind of, like, in the gray area a lot about how they choose to drive their cars, where they pick people up, where they drop them off. If it's transportation as a service, obviously Waymo can't do that. That is incremental hard stuff for us to figure out. So there are a lot of edge cases to make sure that this is super safe and it's a fun experience for people.
Starting point is 00:45:55 I also think that the way it tends to work is like any exponential ramp-up: it's the same ramp-up every year, a seemingly gradual ramp-up. But if we're not paying attention to it, it feels like it went like this. And I think most of you will experience self-driving cars like this. There are three cities in the United States where you can get a ride, you know, with nobody in the front seat: in Los Angeles, in Phoenix, and in San Francisco, from Waymo. And I don't know exactly when, but I pretty much guarantee there will be more cities over time. And so if you live in one of those cities, it probably feels more real.
Starting point is 00:46:38 If you don't, it feels like, nah, it's not going to be a thing, and then all of a sudden it will be a thing. And that's natural, because, you know, unless you're really in that industry, you aren't watching really closely as the number of rides per day per city goes up from a group like Waymo. I'm legitimately pumped to ride in a Waymo for the first time, hopefully this year. Another project that I want to talk to you about is Project Loon, which is these balloons that were supposed to beam internet down to everyone no matter where you are. Now, I'm curious, like, people now, if they think about accessible internet, they immediately go to Starlink.
Starting point is 00:47:15 How did Loon, which was, you know, a great idea early on, sort of, I don't know, is it right to say, lose ground to Elon Musk and Starlink? First of all, I mean, so Loon was a goal: could we make a worldwide stratospheric layer of balloons? They're like cell towers, but floating at 65,000 feet, talking to each other in an ad hoc mesh network and beaming LTE or 5G to the ground, so that people in rural communities around the world, particularly the people who don't have a good internet connection, all of a sudden would. Very inspiring, and it took a long time. We built it. It was working. We were beaming LTE and 5G to hundreds of thousands of people in multiple countries. We couldn't figure out how to get the business to close with the
Starting point is 00:48:08 the owners of the spectrum that we had to work through in those different countries. So we turned it off. It made us very sad. Starlink is a cool company, but they're solving a different problem. There's a fixed amount of bandwidth you can land when you beam something from a satellite in any one region. So it works very well in rural places, but you couldn't have, like, a huge number of them parked very close together. So that wasn't really the goal that Loon was trying to solve. But let me tell you a sort of fun side story.
Starting point is 00:48:40 It was crushing for all of X for us to kill Loon. It had gone on to be an Other Bet. It was really one of the inspiring things that had come from us. And so it made everyone really sad for us to stop doing it. And we have this saying at X: we're very focused on moonshot compost. Whenever we stop something, the people, the code, the patents, the know-how, we try to keep them all at X.
Starting point is 00:49:07 They recirculate, working on new projects. And so in this particular case, one of the technologies that was allowing these balloons at 65,000 feet to communicate between the balloons at very high bandwidth was lasers. And so
Starting point is 00:49:23 when Loon ended, someone said, well, how come we couldn't put those lasers on the ground, you know, like on a pole? And that sounded almost embarrassingly too simple after all of the work we had done to get them up to 65,000 feet. But fast forward five years, we have these things, about this big, it's smaller than a traffic light. It shoots a laser up to 20 kilometers.
Starting point is 00:49:52 It's eye safe. You could just put your eye right up against it. Nothing bad would happen. It's unregulated. It's near visible light. It's about one order of magnitude outside of visible light, but in the EM spectrum, that's basically visible light. And it's received by another box that's, you know, two feet tall. You strap them on two poles. If you plug the
Starting point is 00:50:14 internet, like a fiber optic cable, into one, you have 20 gigabits per second, up to 20 kilometers away, for less than one one-thousandth the cost of trenching fiber. And we're rolling these out right now, mainly in Africa and India, and that project is now moving more data to real customers per day than Loon moved in its entire history. So go moonshot compost. That project is called Taara, if you want to look it up. Pretty cool. Okay, let's see. We have maybe two or three questions to get through in nine minutes. Let's see if we can do it. In the break, I asked some people in the audience to shout out different big problems that maybe X could get working on. And we've talked a lot about hardware, for instance, but what about some medical physical issues?
Starting point is 00:51:10 Is that ever something that you'd want to take on? Or, I don't know, societal issues? Is that ever something that you'd want to take on? So I'll just give one example. So one person in the front here said community. I mean, we know that building community is one of the things that people struggle with the most right now. I mean, we're here with one. So it's nice to see that it persists, but most people, or many more people than usual, would say that they're lonely, that they don't have friends and stuff like that. Is that something that X would ever take on, or is that even too moonshotty for the moonshot factory? No, no, not at all. As long as we could be proud about the output and there is a technology solution to the problem, we would be excited to work on it. You know, we have had various explorations. For example,
Starting point is 00:51:56 in the social justice space. Because the temptation is to think that something like belonging or systematic bias in society is just not amenable to technology helping. And maybe that's true, but that's not written in stone somewhere. Like shame on us for not at least trying. So we don't have some huge news to give you on that front, but we certainly haven't given up.
Starting point is 00:52:22 I'll give you another one that's sort of like that. Education. I will consider it a failure if X doesn't eventually have a great moonshot in the education space. We've tried like 30 things. It's so painful. But I also don't want to be kidding ourselves that something is a solution if it's not really going to be a significant phase transition for society. And some things may either just not be at all amenable to technology, or have so many sort of other complex human issues around
Starting point is 00:52:51 them, like pedagogy in education. If you take the teacher out of the process, again, nowhere is it written in stone that you can't help kids. But it might be fundamentally different and harder. So there are lots of those kinds of things where we continue to go back to the well, and technology is what we're good at. So we're not going to sort of do things that don't have a sort of core technology piece to them. And some of these, hopefully, we will have great news for you on in the future, but, you know, haven't solved yet. Have you thought about building an app? You could make it blue, where people would post their personal profile and connect with the people they know in life, and maybe share small updates, like 140 characters, just very succinct. What about darker blue? Okay, that could be something. No, I feel like the world's already got one of those. And, um, you know, what we're best at are the things where
Starting point is 00:53:35 in the early days, it doesn't look at all reasonable or possible. And usually, because we're not scientists per se, it's less that we're doing kind of basic research. It's more like taking something from column A and something from industry B and some observation from sort of field C and putting them together in a really unexpected way. And again, often we're wrong. But if those things come together, then at our best, I think we're system integrators, and we're really
Starting point is 00:54:29 good at getting mud on our boots, getting out in the world, working with partners, prototyping quickly, trying and learning, and using humility as a superpower: to be wrong, admit we're wrong, learn from being wrong, and get better faster. Like, I think that's us at our best. Okay, and then we also had someone yell out carbon sequestration. I think that was what it was. Yes. Okay, he's not correcting me. So, thoughts on that? Yes, we have done a bunch of interesting work in carbon sequestration, in green hydrogen. We're doing explorations in lots of other parts of the space. We're excited about the possibility of making much lower carbon footprint cement. So I can't tell you that we've absolutely cleared any of these problems off yet, but these are spaces that we're excited about.
Starting point is 00:55:33 And by the way, these are spaces where inverse design can be really helpful, because these tend to be large machines, chemistry problems, electro-mechanical problems. The techno-economics of these things are really unforgiving. Ultimately, the number of dollars it costs for you to sequester a ton of carbon is, like, the only thing that really matters at the end of the day. Green hydrogen, of course, everyone will take it, but the number of dollars per kilogram is just going to determine, because it's a commodity, whether people buy it or not. And so we're trying to come up with something very different but also have an eye on the sort of
Starting point is 00:56:05 long term, like the Rust Belt engineering side of this work, the project finance side of this work: what would this really be at scale? Because it's actually kind of easy to make any of these things look great in the lab. It's having that sanity check on top. What is this really going to be at scale? Would this actually change the world? You know, we had this project. It was one of my favorites. We had a device. It was about as big as, like, the area where we're sitting, a little bit bigger. And it was taking seawater and producing methanol you could burn in a gas tank. There are four billion internal combustion
Starting point is 00:56:45 engines in the world. Tears were running down the faces of the people capturing the first couple of drops here. This project was called Foghorn. That felt like a real save-the-world moment. It was actually working. And we could not convince ourselves we were going to get it cheaper than about $15 a gallon of gas equivalent. And that's just not going to save the world. So we turned it off. So, tighter economic conditions, interest rates around 5%. Do you worry that now that we've exited zero-interest-rate environments, you might not be able to get as much funding from the mothership as you had in the past? I think Alphabet needs to be very thoughtful about how it spends every dollar, and we have
Starting point is 00:57:29 spent the last 13 years working hard on our efficiency and on our rigor. So, you know, if Alphabet stopped being interested in, sort of, the 10-year-plus, that would be one thing. But Alphabet is very serious about the long term, and as long as Alphabet is serious about the long term, I am sure that, you know, making sure that we spend every dollar wisely will be very important. And I think that Alphabet is still excited about having a part of itself that goes and explores these other spaces. Let me ask you one last question. We started out talking about where invention could go wrong. I actually am kind of curious, from your perspective, I know we only have, like, a minute left, but why do we continue to try
Starting point is 00:58:16 to invent and build? Because it does seem like, in some ways, if we all put our heads together, we could have enough on this planet for everyone, but we don't do that. So I'm kind of curious what you think: where are we trying to get to? I'll give you two answers. The first answer is that humans are fundamentally explorers. I think we all have some pioneer spirit inside of us, wanting to learn, wanting to grow, wanting to find new things and do new things. It's a very fundamental part of who we all are. So I don't think humanity is going to stop doing that anytime soon, because I think it's part of what makes being a human great. And don't worry, I'm going to end on a positive note here. There is enough pain and complexity
Starting point is 00:59:15 in being alive, enough reasons to think short term, that people are going to, by and large, do what is in their somewhat narrow self-interest and what works best for their pocketbooks. So if we want to save the world, if we want to make the world a radically better place, we have to find ways for doing the right thing to be cheaper than doing the wrong thing,
Starting point is 00:59:42 especially when it comes to climate change. The only way that we're going to get to the place where doing the right thing is cheaper than doing the wrong thing when we can dig the problem out of the ground and burn it is going to be radical innovation. So that's why I believe we're working on it. And I think that's why the whole world is working hard on it right now. Astro, thanks so much for coming on. My pleasure. Thank you for having me. Thank you, everybody, for listening.
Starting point is 01:00:17 Thanks, everyone. Thank you so much to our live audience. I want to thank Summit for having us here. Thank you, Nate Gawattany, for handling the audio, and LinkedIn for having me as part of your podcast network. Everybody in the audience, if you follow Big Technology Podcast in your app of choice, we have the product manager on Bard, Google's chatbot, coming to the feed within the next week as well.
Starting point is 01:00:38 Thanks again for being here. Thank you, Astro, and enjoy your time on the ship. Thank you.
