a16z Podcast - From Models to Mobility: Building Waymo with Dmitri Dolgov

Episode Date: April 17, 2026

Waymo is now delivering hundreds of thousands of fully autonomous rides each week — but getting there required more than better models. It meant building a complete system for training, evaluating, and deploying a driver in the real world. In this episode — originally aired on the Cheeky Pint podcast — Waymo Co-CEO Dmitri Dolgov joins John Collison to break down how self-driving actually works today: from sensor fusion across LiDAR, radar, and cameras, to simulation, “critic” models, and the role of AI in decision-making. They also explore why full autonomy is fundamentally different from driver-assist, what it takes to scale globally, and how recent advances in AI are reshaping the path forward. Resources: Follow Dmitri Dolgov on X - https://x.com/dmitri_dolgov Follow John Collison on X - https://x.com/collision

Transcript
Starting point is 00:00:00 When you're driving around or being driven around, say, we think about what we're building as a driver. I kind of imagine building a big model that understands how the physical world works and understands the important properties of what it means to drive, the social aspects of driving, and what it means to be a good driver as opposed to a bad one. I would say that we've clearly moved past the stage of scientific research and deep core technology development to this new phase of accelerated global scaling and deployment.
Starting point is 00:00:40 Waymo is now doing nearly half a million fully autonomous rides a week across multiple cities, a shift from long-term research to real-world scale. In this episode, originally aired on the Cheeky Pint podcast, Waymo co-CEO Dmitri Dolgov joins John Collison to break down how they built the system behind it, from the sensor stack and why LiDAR still matters, to the role of simulation and critic models in training the AI. They also get into why driver assist won't naturally evolve into full autonomy,
Starting point is 00:01:12 what it takes to scale globally and how the product itself is changing, from custom-built vehicles to entirely new economies of ride-hailing. Dmitri Dolgov is co-CEO of Waymo. He joined Google's self-driving car project in 2009 as one of its first engineers and was repeatedly promoted until he took it over in 2021. Waymo is Google's most successful moonshot and now provides over 500,000 fully autonomous rides each week. Cheers, by the way.
Starting point is 00:01:40 Yeah, cheers. You grew up in Russia. I grew up in Russia. Well, then it was actually the Soviet Union. Right, right. Exactly. My dad is a physicist. So the Soviet Union started falling apart.
Starting point is 00:01:53 And then he had a position, a visiting position at Kyoto University for a year. We moved there as a family, and then he went to Berkeley and I kind of tagged along. And then I graduated from high school and I was thinking about the next thing I wanted to do,
Starting point is 00:02:09 and I really liked that technical school in Russia. The Russians are seriously better for this. They are. So I went back to Russia and I got my bachelor's and master's. What year was this that you went back to Russia? In 1994.
Starting point is 00:02:22 Okay. So that was kind of almost peak Russian optimism, in a sense, where it was opening up. It was, yeah. Yeah, no, I actually remember talking to my mom about it. And, of course, my parents grew up in the Soviet Union. They've seen it.
Starting point is 00:02:39 They were born right before the war. And then they saw, you know, they lived through some really tough times. And I remember talking to my mom and saying, you know, in fact, I got my green card here in the U.S. before I went back, and she insisted that I do it. And at the time, I actually wasn't thinking of coming back. But then I was pretty excited about where Russia was and the trajectory it was on.
Starting point is 00:03:02 And, you know, being young and naive, there's no turning back. And so why did you decide to come back? Yeah, yeah, no. For school, it was pretty clear to me. Like, I wanted to continue studying, you know, math and computer science.
Starting point is 00:03:20 And while the undergrad and master's that I got were in physics and applied math, that I think was still an incredibly strong kind of foundation, you know, the school of Russian math and science. For graduate school, it was very clear to me that the best way to do it was in the U.S. So I came back. I'm struck that the founders of the two most valuable UK companies are Russian math nerds,
Starting point is 00:03:48 who both went to the same school, Nikolay at Revolut and Alex Gerko at XTX. But yeah, it's a strong diaspora. There's a company not far from here where one of the founders also has a similar pedigree. Exactly, exactly. There's the classic engineering interview question
Starting point is 00:04:14 of, you know, what happens when I type google.com and hit enter. As you know, talk me through, you know, whatever you like, you know, HTTP and DNS and, you know, BGP. You can go down to whatever level of the stack you want. Do you want to maybe just describe, when I take a ride in a Waymo today, what's happening at a technical level?
Starting point is 00:04:32 Like what is the architecture? Let me answer your question. What's happening in real time? But this is going to be only a part of the story because we're going to be talking about the inference, the real time inference part of it. And if we want to have a deeper richer technical conversation, I think it would be interesting also to zoom out
Starting point is 00:04:52 and talk about the entire ecosystem of what goes into building, evaluating, and deploying the Waymo driver. But when you're driving around, or being driven around, say, you know, we think about what we're building as a driver. Obviously, it's not a car. So it has a number of sensors that are positioned around the vehicle. We use three different sensing modalities.
Starting point is 00:05:15 There's cameras. There's lidars, or lasers. And there's radars. You know, those are the primary ones. There are, you know, also microphones, directional, you know, microphone arrays. But those are the primary three for sensing the world. They have very nicely complementary physical properties.
Starting point is 00:05:35 They all have 360-degree coverage around the vehicle, so the Waymo driver sees 360, you know, all the time. So all of the data goes into a computer, as you would expect, and there's the software that processes it. Now it's all AI, kind of specialized AI in the physical world,
Starting point is 00:05:53 so it processes the sensor data. Nowadays, you know, we talk about it using AI terminology as, you know, encoders that take this data in. And then there's the kind of the decoder, the action, you know, the generative part, if you will, in the car. And the generative task there is to, you know, figure out how to drive. Right. And that is, of course, connected through kind of a specialized interface to the car where we can
Starting point is 00:06:19 actuate the vehicle. And, you know, that's why you see the steering wheel, you know, turn, and it drives on its own. Okay. So I get into my car. There are three main families of sensors: LiDAR, radar, and cameras. And then it is using that to first build a model of what's going on in the world, where are all the other cars and things like that.
Starting point is 00:06:39 And then, as you say, make decisions and then actuate that with the car. That is the system that you're riding in. And is all that inference done locally, or, presumably yes, nothing's in the cloud? Nothing real time. Nothing real time in the cloud. And there are some things that can happen in the cloud, but they're not required.
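The real-time loop described here (sensors in, encoder to scene representation, decoder to action, actuation) can be sketched in miniature. The Python below is purely illustrative; every name, data shape, and stub is an assumption for the sketch, not Waymo's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class SensorFrame:
    # Three complementary modalities, each with 360-degree coverage.
    camera: list = field(default_factory=list)
    lidar: list = field(default_factory=list)
    radar: list = field(default_factory=list)

def encode(frame: SensorFrame) -> dict:
    # Encoder: compress raw sensor data into a scene representation.
    # Stubbed: just counts lidar returns as "objects".
    return {"objects": len(frame.lidar)}

def decode(scene: dict) -> dict:
    # Decoder: the generative part; propose the next driving action.
    # Stubbed: creep forward only when the scene is clear.
    return {"steer": 0.0, "throttle": 1.0 if scene["objects"] == 0 else 0.0}

def drive_step(frame: SensorFrame) -> dict:
    # The whole loop runs on the car; no real-time cloud dependency.
    return decode(encode(frame))

action = drive_step(SensorFrame())
```

The point of the sketch is the shape of the loop, not the stubs: everything between `SensorFrame` and the actuation command runs locally.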
Starting point is 00:06:56 Got it. What's an example of a nice-to-have that happens in the cloud? You can imagine a situation where we do, you know, some of it is not directly related to the task of driving, but I can say, after you leave the car, we want to check that, you know, the car is not dirty, you didn't leave anything there. If you left a mess, then, you know, we want to send the car to one of our depots, get it cleaned up. If you left an item there, you know, your phone,
Starting point is 00:07:39 all right, we want to detect that, and then send it to our lost and found and let you know. So that, you know, we do by asking a model that actually lives off-board, as opposed to having to put it on the car, right? Because it's not a real-time task related to the driving. So that's one example of something like that. There are all these debates that go on on Twitter around self-driving. So I can think of, you know, end-to-end versus the more kind of modular approach. There's cameras-only versus an array of sensors. And I can't tell, are these debates actually interesting to an expert in the field?
Starting point is 00:08:08 Or do you think these are just settled matters and they're just grist for the algorithm? I understand where the questions are coming from. I do find that often the way they're posed and the way the debate happens loses a lot of the nuance and a lot of detail that really matters, where to me the most interesting technical questions are at that level. Because the way we think about building the Waymo driver, it starts with a large off-board foundation model. I kind of imagine building a big model that understands how the physical world works and understands the important properties of what it means to drive,
Starting point is 00:09:00 the social aspects of driving, and what it means to be a good driver as opposed to a bad one. So that's the foundation. Then we specialize it into, let me call it, three main off-board teachers. They are still large, high-capacity off-board models. There's the Waymo driver, there is the simulator, and then there's the critic.
Starting point is 00:09:22 And those then get distilled into smaller models that you can run, you know, inference on faster. So the Waymo driver becomes, you know, the backbone, the ML backbone of what's in the car. The simulator, of course, is what powers our synthetic generative environment that can run in the cloud for training and for evaluation in closed loop of the system. And the critic, the evaluator. Sorry, does the simulator ever run locally? No.
Starting point is 00:09:47 No, it doesn't. Yeah. However, what I think is interesting is, in a way, the way the decoder works, the way the model works. If you think about the generative task in the simulator, of creating those realistic worlds and how other people behave, how cars, pedestrians, cyclists and so on behave, and the task that you have to solve on the car in real time, there is this fundamental shared capability of understanding how these objects relate to each other and predicting what they might do in the future, if you are running on the car, and then generating those, you know,
Starting point is 00:10:26 sampling those probabilistic behaviors in the simulator. So it's different models, but, you know, this is why the shared foundation model is able to power both. And similarly, if you think about the critic, like, the job of the critic is to evaluate things and then, you know, be opinionated. Yes. About what's good behavior. Yes. And what's, yes. Bad behavior. Similar fundamental understanding, right? If you're running, you know, inference on the car, you still have to, like, figure out which of the multiple hypotheses of these future worlds you want to, you know, take action to steer it towards. Yeah. Okay.
Starting point is 00:11:00 And these are all downstream of the same foundation model. That's right. So start with the foundation model. Yep. You know, then you, you know, specialize and fine-tune, still off-board models. Those are the teachers. And then you distill: each one of the teachers kind of trains its own student. Yes.
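The pipeline Dolgov lays out here, one off-board foundation model specialized into three large teachers (driver, simulator, critic), each distilled into a smaller, faster student, can be sketched structurally. All names below are hypothetical stand-ins, not Waymo internals, and the functions are string-level stubs that only illustrate the topology.

```python
# One foundation model -> three specialized off-board teachers ->
# three distilled students, one per deployment target.
FOUNDATION = "foundation-model"   # large off-board model (illustrative name)

def specialize(base: str, role: str) -> str:
    # Fine-tune the foundation model into a high-capacity off-board teacher.
    return f"{base}/{role}-teacher"

def distill(teacher: str) -> str:
    # Train a smaller, faster student from the teacher's outputs.
    return teacher.replace("teacher", "student")

teachers = {role: specialize(FOUNDATION, role)
            for role in ("driver", "simulator", "critic")}
students = {role: distill(t) for role, t in teachers.items()}
# students["driver"] would run on the car; the simulator and critic
# students run off-board for closed-loop training and evaluation.
```

The design point the transcript makes is that all three students share one ancestor, which is why they share a common understanding of how agents in a scene behave.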
Starting point is 00:11:17 Right. The driver, the simulator, the critic. Yes. You started working on self-driving 20
Starting point is 00:11:37 years ago. As you think about the tech evolution, is this just a scaling laws story where we had to be able to throw enough compute at it? Were there architectural approaches we needed to wait to have be invented? Was it just a story of we needed 20 years of going down the wrong cul-de-sacs before we eventually arrived at the right approach? You know, knowing what you know now, could you have a successful Waymo in market in 2015? Was there some enabling technology? No. Technology breakthroughs that happened over the years were critically important. Primarily in AI, but also in other areas, you know, compute. We have way more compute now. Now, I wouldn't characterize it as, like, going down, you know, a thousand different dead ends,
Starting point is 00:12:13 and then having to retract and then finding, like, the one right path. I would characterize it as, you know, iterative learning and evolution. And then, you know, transformers came around. But transformers, for example, are a very general architecture, right? It powers LLMs, it powers, you know, our models. But how you apply them to that space, I think this is where...
Starting point is 00:12:33 It didn't just fall out of transformers. Exactly, right? And, of course, you know, people like to talk about architectures, and architecture is important, but really a lot of it comes down primarily to your metrics, to your evaluation mechanisms, to, you know, all of the training recipes and, of course, you know, data.
Starting point is 00:12:50 Yes. LLMs are good at text, or tokens specifically, and obviously perform best at domains that have some kind of single corpus of text they can work on, like coding, where it's very helpful that everything was just kind of textual already. And part of the success has been creating textual representations for domains, such that we can then, you know, put LLMs against them. Can you describe how you encode the world that you're seeing?
Starting point is 00:13:22 I mean, are you just building a 3D map, like a 3D map essentially? So this is where I think we get a bit into the question of what is the interface between the encoder and the decoder parts. And I think that touches also on the thing you flagged earlier, where people like to debate end-to-end or not end-to-end. So let me, you know, talk a little bit about end-to-end and then get back to, like, what is the interface between those two. Right, so when we say end-to-end, what do we
Starting point is 00:14:00 mean? I mean that it is some large ML model. Typically you don't build them monolithically; you have, you know, different parts and different subgraphs. But what's important is that you can propagate, backprop, you know, the gradient and the loss function through all the different layers, so that in every layer you can learn the weights and the representations that matter for the final task. You don't force it through some narrow funnel between, let's say, the encoder and the decoder. Yeah, I think of a simple view of end-to-end, you know, pixels go in and car actions come out, which is maybe a bit of an oversimplification.
Starting point is 00:14:37 Yeah, that's exactly right. And, like, this is kind of the basic vanilla version of it, right? If you think about, you know, what it will take to build the driver that's capable of fully autonomous operations, you think about this entire ecosystem of the driver, the simulator, the critic. If that's all you do, like pixels in, trajectories out, it becomes very difficult to do all of those three and achieve the high level of safety and performance that we require, and it becomes very difficult to kind of do it at scale. However, you know, it's kind of a very easy way to get started, right? You collect some data, kind of, you know, analogous to the LLM world, right?
Starting point is 00:15:25 The easiest thing you can do is, you know, pick a model. The easiest way to get started nowadays would be, you know, take a VLM. It already has a kind of language-aligned camera encoder. And then it has a decoder that, you know, can generate text. And you can fine-tune it and say, hey, instead of text, generate trajectories. You know, very, very doable. In fact, a while ago we published a paper called EMMA that did exactly that.
Starting point is 00:15:56 And it will actually, in the nominal case, drive pretty darn well, which is mind-blowingly impressive. That is very funny, yeah. And, I mean, there's something to... You're saying you can take an off-the-shelf model, which has nothing to do with driving to start with, and you'll get these good results. That's right.
Starting point is 00:16:15 You get it in the normal case. I just want to be clear. It's orders of magnitude away from what you need. Yeah, you should not try it on the streets, but it works. But for example, if you... It's like a talking horse. It's impressive that it's talking, you know? Exactly, exactly.
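The idea being described, taking a pretrained VLM and fine-tuning its generative head to emit trajectories instead of text, as in Waymo's EMMA paper, can be caricatured in a few lines. `ToyVLM` and its methods are invented stand-ins for illustration only, not any real VLM API, and the "fine-tuning" here is reduced to swapping the decoding head.

```python
# Toy illustration: keep the pretrained, language-aligned camera encoder,
# retarget the decoder from generating text to generating waypoints.
class ToyVLM:
    def __init__(self):
        self.head = self.decode_text  # pretrained default: text generation

    def encode(self, images):
        # Pretrained camera encoder (stubbed).
        return {"tokens": len(images)}

    def decode_text(self, embedding):
        # Original pretrained head: describes the scene in words.
        return "a car on a road"

    def decode_trajectory(self, embedding):
        # "Fine-tuned" head: emits (x, y) waypoints instead of words.
        return [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2)]

    def generate(self, images):
        return self.head(self.encode(images))

vlm = ToyVLM()
vlm.head = vlm.decode_trajectory       # swap the generative task
waypoints = vlm.generate(images=["front_cam.png"])
```

As the transcript stresses, this gets you nominal-case driving only; it says nothing about the long tail, which is where the simulator and critic machinery come in.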
Starting point is 00:16:27 And actually, if the product that you wanted to build was maybe a driver-assist system, not a fully autonomous system, then maybe that's all you need to do. And then for that, you don't need all this other machinery of the simulator and the critic, because the number of nines required is drastically different. But this is interesting, because there is, you know, some intuition behind, you know, why that works. If you think about the hard parts of driving, it's, you know, not unlike, you know, having a conversation. Except, you know, in the LLM world, right,
Starting point is 00:17:02 you're modeling language, or maybe modeling a dialogue, in the space of sentences and words. What makes driving hard is also this kind of multi-agent, social, interactive part of it. If I do something, that's going to affect you, it's going to affect somebody else.
Starting point is 00:17:21 And the history matters. It's not local and just geometric. Context matters, semantics matter. But it's in a different, you know, it's not in the language of words, it's in the language of, kind of, body language, you know, right? So,
Starting point is 00:17:34 and we see that empirically validated if you, you know, do this approach. Okay. So then let's say we build this thing. Just cameras, camera encoder, pixels go in, the quality is sufficient to, you know, drive in the normal case. It's not sufficient to deal with the long tail of, you know, the edge cases and hit the high bar of superhuman safety that we require.
Starting point is 00:17:58 So then you start asking the question, what else do you need? Yes. And if all you did was kind of observe how other people drive when you trained this system, maybe observing, you know, just passively how people drive and how they interact, maybe also, you know, driving the car yourself and then using imitation learning to train it, we find that that's not enough. You have to do something in closed loop. You have to, you know, you have to do things like RLFT, which is also, you know, parallel to what we see... RLFT? RLFT, reinforcement learning based fine-tuning.
Starting point is 00:18:37 Okay, yeah, yes. This is similar to the reinforcement learning with human feedback in the LLM world, right? You want to do maybe proper closed-loop driving, where you explore all kinds of different situations, and then you give it a reward signal to keep it in distribution. For that, then, we need a realistic simulator. Right? You also, if you want to have a good RL system,
Starting point is 00:19:04 you need to have an opinion for the reward function. This is where the critic comes in, right? If you have a purely end-to-end system, let's look at the simulator. Now, what do you do? You're then constrained to just go from pixels to trajectory, right? That's all you can run the system on, right? And it's a very high-dimensional space, so it's a hard problem to generate everything. But even if you solve that, it just becomes incredibly inefficient to run it in the full way of pixels to trajectory
Starting point is 00:19:39 in simulation for training or for evaluation. So this is when intermediate representations come in. There are some intermediate representations in this task, in the physical world, that we know are correct. They're not sufficient, but they're not generality-limiting. Right? You know, there's an object here. There's, you know, the concept of a road, there's signs, there's speed limits.
Starting point is 00:19:57 So this is where augmenting that learned representation, those learned embeddings from the encoder-decoder, with that, you know, more structured representation. Yes. Is what we do. And we find that this kind of gives us additional knobs to simulate in that space, rather than just, you know, pixels to trajectories.
Starting point is 00:20:20 It allows us to have additional safety validation layers in real time. And it also, you know, gives us additional mechanisms to specify the reward function, you know, for evaluation by the critic or, you know, for training. So this is, again, like we've gone kind of full circle on: is it end-to-end? Yes, it is. Yes. And if you want to do it at scale for full autonomy, it's augmented with all of this other stuff.
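The closed-loop recipe described across this exchange, policy rollouts inside a generative simulator, scored by a critic that acts as the reward function, can be reduced to a skeleton. Every component below is a toy stub of my own invention (1-D "lane offset" dynamics, a critic that prefers the lane center, and "fine-tuning" collapsed to selecting among candidate policies); the real training recipe is far richer and not public.

```python
import random

def simulator_step(state, action):
    # Generative simulator: sample a plausible next world state
    # (stubbed as 1-D lateral-offset dynamics plus noise).
    return state + action + random.uniform(-0.01, 0.01)

def critic(trajectory):
    # Reward model: opinionated about good vs. bad behavior.
    # Stubbed: prefer staying close to the lane center at offset 0.0.
    return -sum(abs(s) for s in trajectory)

def rollout(policy_gain, steps=20):
    # Closed loop: the policy reacts to states the simulator produces.
    state, trajectory = 1.0, []
    for _ in range(steps):
        action = -policy_gain * state  # toy policy: steer back to center
        state = simulator_step(state, action)
        trajectory.append(state)
    return trajectory

random.seed(0)
# "Fine-tuning" reduced to selection: keep the policy the critic scores best.
best = max((0.1, 0.5, 0.9), key=lambda g: critic(rollout(g)))
```

The structure is the point: the simulator closes the loop, and the critic's opinion, not imitation alone, provides the training signal.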
Starting point is 00:20:45 That's very interesting on the simulating point. It's just very hard to simulate for an end-to-end model, because it's easier to deal in intermediate representations rather than coming up with the pixel-perfect view of the world. You need both. Yeah. So having an end-to-end architecture that's augmented with that structure allows you to kind of play in both of those worlds.
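The hybrid just described, a learned end-to-end representation augmented with structured elements (objects, roads, speed limits) that are known-correct but not generality-limiting, can be illustrated schematically. The types below are invented for this sketch; real scene representations are vastly richer.

```python
# Illustrative hybrid scene: an opaque learned embedding alongside
# structured elements the system can reason about directly.
from dataclasses import dataclass

@dataclass
class StructuredScene:
    objects: list            # e.g. ["car", "pedestrian"]
    speed_limit_mph: int

@dataclass
class HybridScene:
    embedding: list          # learned representation from the encoder
    structure: StructuredScene

def validate(scene: HybridScene, planned_speed_mph: float) -> bool:
    # The structured side enables an extra real-time validation layer:
    # reject plans that break hard constraints, whatever the embedding says.
    return planned_speed_mph <= scene.structure.speed_limit_mph

scene = HybridScene(embedding=[0.1, -0.4],
                    structure=StructuredScene(["car"], speed_limit_mph=35))
```

This mirrors the two uses named in the transcript: structured knobs for simulation and reward specification, plus independent safety checks layered on the learned stack.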
Starting point is 00:21:05 Yeah, yeah. What are you looking to do as a self-driving car? I mean, it sounds funny, but I think people maybe don't realize that there are many different things that you're looking to solve for, where you're looking to get the person to their destination. You're looking to get them there reasonably promptly, but also drive quite smoothly
Starting point is 00:21:24 and also have many lines of safety and also not annoy other drivers and get honked at, and, you know, so on. So what are some of the reward functions or kind of things you're optimizing for that maybe are not obvious to people? So safety is the primary focus, right? But of course, we also want to be a smooth driver,
Starting point is 00:21:46 so that it's smooth for both people in the car and other actors. And we also want to be a predictable, well-behaved one, so that it can nicely fit into the whole social ecosystem of our roadways. It seems like one of the issues that has quickly emerged with self-driving is the fact that people can't have nice things, or not everyone is nice to the robots.
Starting point is 00:22:16 And so, you know, whether you're driving through a dodgy area or getting blocked or, you know, maybe I'm not going to drop you off here, maybe I'm going to go around the block and, you know, drop you somewhere better. But all of these, as you say, kind of other human issues, how do you go about solving this? A lot of the ones that you mentioned are just things that we need to work on. And, honestly, you know, I'd
Starting point is 00:22:40 say that if we're not dropping you off exactly where you wanted to be dropped off, or, you know, we don't have a good interface for you to tell us, that's on us. Right? Yeah, we need to make it better. Yeah. It feels like the drop-off is actually a pretty nuanced part
Starting point is 00:22:55 of the self-driving journey. Like the highway stuff and the, you know, the 35-mile-an-hour roads, like that is all nailed, but there's just, like, a lot of nuance in the drop-off experience. I'll say they're all hard. You picked freeways and you picked drop-offs for different reasons, right? For drop-offs, you're absolutely right. There are a few things that are maybe not obvious
Starting point is 00:23:18 You just think about this problem. But it's understanding where you want to go, and making it as convenient as possible for you. And pick-ups from drop-off. It's not exactly symmetrical, right? But then I was also understanding the context of the situation where you, you know, where do you stop? You don't want to block a driveway, you don't want to be a double park, although in some cases where if it's a quick one, maybe it's okay.
Starting point is 00:23:40 So there's a lot of nuance that goes into doing that well, so that it's a kind of seamless, frictionless experience for the rider, as well as other folks. Freeways, most of the time, are where, you know, not much happens. They're very well structured, because we designed them that way. But there is still that long tail of really complicated stuff that happens, where the consequences of, you know, a bad event are much more severe, right? The speed is much higher. Everything is, you know, quadratic in speed. So you may see a lot of stuff. Imagine grills falling off on freeways. Imagine, you know, people getting into accidents and kind of spinning out of control.
Starting point is 00:24:29 You see one of those flatbed trucks with just, like, a bunch of stuff piled in it, and you're driving behind it? I don't know. I always find it very nerve-wracking. I know. Yeah. And we've seen them, you know, leave a trail. Yes.
Starting point is 00:24:44 Yes. Yeah. Okay. So it's a different set of problems. But I feel like the general sentiment with Waymo is that the driving has mostly now been solved by you guys. And it's kind of a question of scaling up and maybe some super long tail
Starting point is 00:25:02 stuff, really snowy conditions. Like, is that your sense internally, or is there actually much more nuance than that? I would say, yeah, it's not like, you know, we're done with engineering. Yeah, yeah. I would say that we've clearly moved past the stage of scientific research and deep core technology development to this new phase of accelerated global scaling and deployment. Yes.
Starting point is 00:25:26 So, you know, we still have work to do, right? But I don't see today any limitations or any gaps in the core technology. The driving is good enough now. Well, the core technology, I think, is good enough that I can't think of any aspect of driving that is not supported by the fundamental technology. Now, that said, there is a lot of work to do in specialization and in validation before we can deploy responsibly. We're not driving everywhere in the world. You know, we are planning to start operating in London and in Tokyo this year.
Starting point is 00:26:06 And, you know, do we just have a driver that, you know, you're using today in San Francisco that we can just plop down in London and go? No, right? But what we're seeing is incredibly encouraging from the perspective of, like, is the core technology there? Right. So now it's a matter of, you know, collecting the data, doing some specialization and validation. And, you know, signs are different, you know, in both of those places, they drive on the other side of the road, but, you know, that's actually not that hard for computers.
Starting point is 00:26:32 Right. The core technology generalizes really well, but there's still work that you have to do. What generalizes least well? Increasingly we're finding, especially, you know, now that we're able to kind of hook the Waymo AI to the AI in the digital world, the VLMs, and kind of inherit the general world knowledge from VLMs, we're seeing really strong results from, like, zero-shot or few-shot learning, because of that general knowledge that we bring in. But there are a few things like, say, cold weather, cold winter weather, where it affects the entire stack. So it's not just the AI, but you actually have to...
Starting point is 00:27:12 The hardware, you need to have the proper cleaning solution, you know, heating elements in it. And then you think about things that are completely solvable by computers, like motion control on slippery surfaces, right? So that takes a bunch of work. You don't get that for free from just pulling in, yeah, some, you know, VLM decoder.
Starting point is 00:27:32 Was it the case, I mean, my impression, not knowing anything, is that in the early days, there was maybe a lot of San Francisco-specific work or Phoenix-specific work in the early markets, whether it be mapping or something else. And you guys seem to either have solved that by generalizing it, or just scaled up your ability to do the city-specific work, which enabled the kind of rapid city expansion. We usually think about the capability of the Waymo driver, as well as deployment,
Starting point is 00:28:08 not primarily and directly in that space of, you know, cities or zip codes. I think about the operating domain. And that's, you know, freeways, cold weather, snow, rain, fog, density, etc., etc. That's what we're building for, that's what we're evaluating, and then that maps to a city: a particular city might be within the operating domain or outside of it. So, if we rewind history a little bit, our initial deployment,
Starting point is 00:28:41 where we started offering a fully autonomous commercial service for the first time, was in 2020 in Chandler, Arizona. And that was on what we called the fourth generation of the Waymo driver. This was, if you remember, the Pacifica minivans, with different hardware, different software. There, you know, we were super focused on doing the whole thing end-to-end: learn how to build the driver, evaluate it, deploy it regularly, operate it end-to-end 24/7 with customers, learn from the customers. And we were very focused on a narrow operating domain of, you know, mostly Chandler, just a medium-to-low-complexity one.
Starting point is 00:29:22 Then when we made the jump to the fifth generation of our system, which is what's on the I-PACEs today, we really wanted to take a huge bite out of that operating domain. And we collected data all over the United States, all different states, different cities. We chose to deploy in the hardest parts of San Francisco, the hardest parts of Phoenix. We made a big jump on the hardware side and, most importantly, on the software, the AI side,
Starting point is 00:29:51 and I would say that was the big discontinuous jump. And now, after we've scaled up and iterated on all of the aspects of building and deploying the driver, this is why you're seeing us go in parallel and scale in the U.S. And so driver version 5 was just a much more generalizable stack than version 4? And what was it about? Just that it had been trained on a much wider dataset? It was when we made this big bet on AI.
Starting point is 00:30:27 I think there were a lot more, you know, little AI models and ML models in the fourth generation. We made a much bigger bet and jumped to kind of AI as the backbone for the fifth generation. AI as the backbone, as the core engine, as in you're saying that Gen 4 had lots of small little AI subsystems? Okay. Yeah. And so we kind of made that jump,
Starting point is 00:30:52 and we've been, you know, iterating and improving the models since then. Can we talk about hardware for a second? So, lots of hardware questions. But one is: maybe everyone in this space has a very charismatic demo of a vehicle that is custom-made for self-driving. It's often the van with, you know, no steering wheel, seats facing in both directions. You guys have one. Tesla has the steering-wheel-less Cybercab. Cruise had the Cruise Origin.
Starting point is 00:31:29 And yet, we're still driving in Jaguars that have a steering wheel in the front and are pretty similar to consumer cars. And it's interesting to me because if we were talking about this 10 years ago, we might say, well, developing a custom car, that's relatively straightforward. We know how to put a bunch of sensors on a new car, but the software will take a long time. And what's interesting is we've made huge progress in the software, but the cars are still derivatives of, you know, cars that people are driving.
Starting point is 00:32:02 And so I'm curious why you think the custom hardware has not happened as of 2026. Obviously, it's a small improvement compared to Waymo existing at all, which is the big improvement, but it's just interesting that it still hasn't happened. Well, I'd say our sixth generation of the vehicle and the driver is our version of that. Oh, no, I know it is. It is a whole platform, right? So it still has the, you know,
Starting point is 00:32:26 we can talk about whether you want to have the seats pointing backwards or not. I actually think it looks nice in a demo, but practically speaking, yes, it's not the way to go. But it is a custom-designed vehicle.
Starting point is 00:32:38 And we put a lot of thought into moving away from a car that's designed around the driver. Yes. To a car that's designed around the passenger, and it's much more spacious. And it's happening. It's not open to the public yet, but I took a ride in it the other day, fully autonomously, and that's coming this year. Yes. How much
Starting point is 00:33:03 better is it as a passenger experience? You'll tell me once you give it a try. I love it. Okay, so it's all about the space. Yeah, and the convenience of ingress and egress, and the screens and the interface for the passenger. So we put a lot of thought into every aspect of it. It has sliding doors. It's very easy to get in. It has a flat floor. If you sit in the back, you can fully stretch out.
Starting point is 00:33:30 And there's so much space there. And from the outside, it looks fairly big. Yes. Right. But the actual footprint of it is barely, barely larger than the I-Pace. So it's kind of amazing that you walk in and it feels like you're in a little living room.
Starting point is 00:33:47 Yes. I guess my question is just, you know, Waymo does 25 million rides a year, run-rate-ish, with the Jaguar I-Pace. And it's interesting that so much scaling has happened with self-driving so far on the old, you know, retrofit. Maybe that's to be expected.
Starting point is 00:34:03 Well, I don't think of it as a given. You're right. But think about the value proposition. Of course, there is the safety of it.
Starting point is 00:34:23 That you don't have to worry about. There's also the privacy. Being in the car by yourself, maybe with other folks, but not having to share the space with another human, right? Those are great properties of the product. Yeah. But I guess this is why we're seeing such,
Starting point is 00:34:37 you know, consistency: the car, you know, drives well. Yes. Very predictable. And you can go beyond that, right? You specialize even more to make the experience even more magical around the rider. But I guess it would have been disappointing
Starting point is 00:34:53 if, without the specialized car, we had leveled off at some other, much lower level of customer demand. I would have been surprised. Because, yeah, the car seems like more of an optimization improvement, but the core of the value proposition comes from those other factors.
Starting point is 00:35:08 Yes, yes. I guess you take risk on one thing at a time. We'll start by doing the software layer, and then we'll build a specialized car, something like that. That's right. That's right. Yeah, yeah.
Starting point is 00:35:18 Well, I mean, as you said, it's a big investment. Yes. So you have to de-risk the fundamentals. Yes. And throughout our history, we were very focused on setting the biggest goal for the company to de-risk the most important questions. Right. We talked about the third generation, where we wanted to deploy something
Starting point is 00:35:39 and go end-to-end. We talked about what the goal was with the fourth generation, and then, oh, sorry, the fifth generation. And then there's the sixth generation, right? So it was the sixth generation where it made sense to go out and put all this effort into the custom vehicle. And the sixth generation is both the custom vehicle? Is it also a new generation of the driving stack?
Starting point is 00:35:58 Yeah, it is new hardware. The sensors, the sensing hardware we're putting on the new vehicle, are the sixth generation. It is very different from the fifth generation. It is simpler. It is more capable. It is much lower cost. It's a fraction of the cost,
Starting point is 00:36:18 you know, comparable to what you would get in a fancy ADAS system nowadays, a driver-assist system. The software is pretty much the same. So when we talk about generalizability of the Waymo driver, we talk about weather conditions, we talk about cities, but it also generalizes well to different vehicle platforms and different sensor configurations. Okay, so Gen 6 is a new vehicle and a new sensor stack,
Starting point is 00:36:44 but it's almost a tick-tock cycle happening here. It's similar software. That's right. That's right. And then we're going to put the sixth-generation Waymo driver on other vehicle platforms, like the Hyundai Ioniq that's coming later in the year. What is different about the sixth-generation hardware stack, and how did you make it cheaper? It still has the same three sensing modalities, but we've made significant optimizations in all three. So unification, simplification, and just riding the commodity curve. Is it a classic case of, you know, manufacturing scale?
Starting point is 00:37:24 Well, scale hasn't fully come into place, but all of those, if you think about the supply chain, the industries: cameras are pretty mature. Radars, many years ago, used to be bulky, complex, very expensive, back when we were putting them on planes. But then we started putting them on cars. Now you can get a decent automotive radar for, you know, tens of dollars. There is a variant of the automotive radar called imaging radar, and it gives you a richer signal.
Starting point is 00:37:59 So that has also come down in cost drastically, but it's a little bit behind your standard automotive radars. Lidars are following the same, very predictable, very well-known trend. So we're riding that. And we're also learning from the previous generation to just make improvements and simplifications and optimizations. A very silly question: which is better in a self-driving context, lidar or radar? Lidar.
Starting point is 00:38:23 Are they complementary? They're very complementary. You know, both are effectively blasting photons out there. And then they bounce off of something, they come back, and you measure what comes back. The frequencies are very different.
Starting point is 00:38:46 So lidar gives you very, very high resolution. You can think of it as a laser beam that goes out, spins around, and shoots out millions of these laser pulses per second. And then each one comes back, and you're kind of sampling the 3D structure of the world with very high resolution. So it allows for a very fine-grained map. That's right. Radar has much lower resolution, but because of the physics
Starting point is 00:39:12 it degrades much better in adverse weather conditions: fog, snow, heavy rain. It's less occluded by particulates between it and the target. So imagine driving in super dense fog. Yes. We're close to San Francisco, so you probably don't have to think that hard. It can be really hard to see.
Starting point is 00:39:39 So cameras degrade. Yes. Lidar, depending on the size of the particulates, can degrade better or worse than camera. Radar is not really affected. So you can imagine driving on a freeway where radar will give you really good returns for cars that are absolutely invisible in the camera space.
Starting point is 00:39:57 That's interesting. So does that mean there are some environments where you'll be relying significantly more on radar, but the performance is good enough? Well, it's a combination of the sensors, right? Each one is noisy, right?
Starting point is 00:40:23 How the noise characteristics show up in different environments is different. But it's not like we switch from one to another. It's not like we estimate what's happening in the world through cameras and through radars and through lidar and then compare. No, there's an encoder for camera, there's an encoder for lidar, and they all go into a system that gives you, jointly, the best view of what's happening in the world. So if it's a nice,
Starting point is 00:40:48 bright, sunny day, cameras are very valuable. If it's pitch dark, or you have sun in your face, or you're blinded by the headlights of an oncoming car, then camera will degrade. There's still some noisy signal, but it will degrade. Yes, yes. And lidar is completely unaffected. Are there technical problems that are your white whale, that you're still chasing, or that you're particularly interested in solving,
Starting point is 00:41:06 even if they're kind of niche? Like, we really want to have driving when it's actually snowing nailed, or steep hills in San Francisco. Are there problems you've been very interested in historically or still are? I'm super excited right now about the accelerating global expansion. More cities in the United States, and going internationally. I understand I'm not answering your question about the niche problems; I'll come back to that.
Starting point is 00:41:38 But really, that's the thing that I'm most excited about today. Just getting to a place where, in any major metropolitan area, you can fly into the airport, take a Waymo, and go anywhere you want to go. That is insanely exciting to me right now. And then, technically, what I'm most excited about is all of the rapid progress in AI and the world models, the foundation model work. It is just such a massive boost to how much we can simplify the system,
Starting point is 00:42:21 how much we can bring down the cost, and how we can scale globally. And there's some magic that happens that I don't think I would have anticipated a few years ago. So that I find, from the technical perspective, just insanely thrilling. Yes. When you talk about the progress in AI, what are the most fun parts of it for you these days? I think it's seeing the capability and the scaling laws from this approach of starting with that cornerstone of the foundation model
Starting point is 00:42:56 and then specializing into teacher models and then, you know, distilling. You get such big wins in performance across the board. You invest something into the architecture, or get better data, or a better training recipe, and you invest at that early stage, and then it just has massive amplification and ripple effects. So that, in some ways, is kind of magical. And then you see it in the car. I've had some moments where the car does something, and you look at a log, and I've been surprised.
Starting point is 00:43:38 It does things that I didn't think it was capable of doing. When you see emergent behavior, that's kind of a proud moment. One example? Yeah. You know, when you build a system, you think you understand how it works, and you understand fully the limits of its capability and performance. And then it does something, you know, kind of almost magical. Yes. It's exhilarating. Yes.
Starting point is 00:44:06 So one example I can give you, and I think I've shared some videos of it publicly in some talks, was a situation that happened in San Francisco. A fairly benign situation: at an intersection, our light is red, there's no cross traffic, a bus goes by, and it stops, partially blocking us. Our light turns green,
Starting point is 00:44:43 so we start to go, we're nudging around the bus, and then you see a pedestrian being detected on the other side of the bus. And the car responds appropriately: it slows down, goes a little bit wider, and then a pedestrian actually emerges from behind the bus, and we go on our way. So the first time I looked at that log: what's going on here? I know we have pretty darn good sensors, and the software is very capable.
Starting point is 00:45:02 But we don't see through stuff, right? That's not how cameras or lidars or radars work, right? It saw the pedestrian through the bus. It saw the pedestrian on the other side of the bus. Yeah, yeah. And it's not like you can look through the windows. You're like, okay, radar shouldn't go through this massive metal box.
Starting point is 00:45:18 Yeah. You look at the sensor data. Yes. And radar just shouldn't be able to go through it, right? And the camera, you can't see through in the camera, because there are reflections and there are people on the bus,
Starting point is 00:45:33 so it's not like you can see through the windows. Right. So, what is going on? Maybe it's noise or some coincidence. The first time I saw it, I couldn't actually believe it. It's like, no, no, there's something here. It doesn't smell right.
Starting point is 00:45:43 What actually turned out to be happening is that our peripheral lidars bounced under the bus. And there was just a little bit of very, very noisy reflection off the movement of the person's feet, and that was enough for the AI models to say: hey, likely there's a pedestrian there.
Starting point is 00:46:01 And moreover, there's enough data there to predict what they're going to do. Yes. It's kind of blowing my mind. Is this the perfect example to explain what we were talking about earlier: the value of, one, fusion across a sensor suite, but then, secondly, and relatedly, building an intermediate representation of what's going on,
Starting point is 00:46:28 where if you're just dealing with pixels, the person behind the bus does not exist in pixel space. And so you need to have some representation of the world that exists to be able to reason about the person behind the bus. I think it's an example where using that intermediate representation to boost the level of performance of all parts of the model
Starting point is 00:46:56 is what's happening here. Just imagine solving this problem with a black-box, purely open-loop, imitative system. Is it impossible? No. But in practice, what would it take to achieve that level of performance? Yes. Very, very difficult.
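The fusion idea Dolgov describes, separate per-sensor encoders feeding one joint estimate rather than three independent world models that get compared, can be illustrated with a toy sketch. Everything below (the noise figures, the simple inverse-variance weighting, the function names) is a hypothetical stand-in for illustration, not Waymo's actual architecture:

```python
import random

# Toy sketch of joint sensor fusion: each "encoder" turns a raw reading
# into (estimate, variance), and fusion weights them all jointly by
# confidence instead of switching between sensors. Numbers are made up.

def encode(true_dist_m, noise_std, rng):
    """One sensor's noisy distance estimate plus its noise variance."""
    return rng.gauss(true_dist_m, noise_std), noise_std ** 2

def fuse(readings):
    """Inverse-variance weighted estimate from (estimate, variance) pairs."""
    total_weight = sum(1.0 / var for _, var in readings)
    return sum(est / var for est, var in readings) / total_weight

rng = random.Random(0)
true_dist = 50.0  # metres to the vehicle ahead

# Clear, sunny day: camera and lidar are sharp, radar is coarse.
clear_day = [encode(true_dist, 0.5, rng),   # camera
             encode(true_dist, 0.3, rng),   # lidar
             encode(true_dist, 2.0, rng)]   # radar

# Dense fog: camera and lidar degrade badly, radar barely changes.
dense_fog = [encode(true_dist, 10.0, rng),  # camera
             encode(true_dist, 5.0, rng),   # lidar
             encode(true_dist, 2.2, rng)]   # radar

print(f"clear day fused: {fuse(clear_day):.1f} m")
print(f"dense fog fused: {fuse(dense_fog):.1f} m")
```

The fused estimate stays close to the truth in fog because the weighting automatically leans on radar as the other modalities get noisier; there is no explicit "switch to radar" rule, which mirrors the "jointly the best view" point above.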
Starting point is 00:47:14 What metrics can you share on just where the business is at today, in terms of rides, revenues, cars on the roads? We have about 3,000 cars on the roads. We're doing about half a million rides per week. That translates to over 4 million fully autonomous miles per week. We are operating in a fully autonomous mode in 11 cities in the U.S., and in 10 of those we have riders.
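Taken at face value, those round figures imply some useful per-car numbers. A quick back-of-envelope, using only the approximate figures quoted here:

```python
# Back-of-envelope on the fleet figures quoted above (all approximate).
cars = 3_000
rides_per_week = 500_000
miles_per_week = 4_000_000

miles_per_ride = miles_per_week / rides_per_week      # average trip length
rides_per_car_per_day = rides_per_week / cars / 7     # fleet utilization
miles_per_car_per_week = miles_per_week / cars        # per-car mileage

print(f"~{miles_per_ride:.0f} miles per ride")
print(f"~{rides_per_car_per_day:.0f} rides per car per day")
print(f"~{miles_per_car_per_week:.0f} autonomous miles per car per week")
```

So each car is averaging roughly eight-mile trips and on the order of two dozen rides a day, if the quoted figures hold.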
Starting point is 00:47:54 What's the newest city? The newest one, we just started there. So we just opened it up to riders in four new cities in one day. That was one of those little but super exciting moments, where I thought back to the history. How long did it take us from the first time we started fully autonomous, rider-only operation to the first time we had external riders in four cities? It was about eight years,
Starting point is 00:48:24 and then the other week, we just launched four in one day. Yes, yes. It seems now clear that in 15 years, most miles that are driven will be autonomous. There'll be some transition period and lots of old cars on the road, so I think it'll actually take a little while.
Starting point is 00:48:42 And some of that will be level four, level five systems expanding into new cities, and that expansion continuing. Some of it will be, as you referenced, existing driver-assist systems getting up to level two and level three, and existing systems across current car brands getting more and more capable. What do you think of working your way up from the lower levels versus expanding from existing products like Waymo? What would that convergence look like? Yeah.
Starting point is 00:49:17 We're going to eat it from both sides? I don't believe we will, actually. That's a great answer. Cars will get smarter. There are going to be advances in driver-assist systems. And if, at the same time, from level four autonomy, there is simplification,
Starting point is 00:49:41 and the sensors of today are not going to be the sensors of tomorrow, they'll be much more integrated, they'll be simpler, they'll be much lower cost. So from that perspective, there is a path of convergence. And there's also a path of convergence on the product side. Ride-hailing: you can take a ride through the Waymo app today.
Starting point is 00:50:02 Eventually, that will be on your personal car. So that I see. But talking about the technology, I see it as fundamentally two different problems. There are driver-assist systems, and then there is full autonomy. And I think it's deceptive to think of them as kind of incremental, on one spectrum of complexity. Okay.
Starting point is 00:50:24 Maybe you think one cannot work one's way up from driver-assist systems to full self-driving. You think you have to start by building a full self-driving system. If I think about the hardest parts of building a fully autonomous, rider-only system, they are very different from what you do for a driver-assist system. Yep. And, of course, some work in this space helps you, right? So I don't want to say you can't make the jump, but it is a qualitative jump. Yes. When can I buy a Waymo, so that I don't need to wait for one when I want to go? When I'm ready, I can just walk out the door and it's there. I'm not going to give away a date today. But you're not the first person to bring this up as a product request. Yeah. Duly noted. Okay. I'll add it to the list.
Starting point is 00:51:17 Just, you know, not waiting for the car. It's just in the garage there, and you keep your stuff in it and everything. It's not the first time you've heard that request. So, it seems to me operationally very intensive and very hard. A self-driving car is actually not self-driving; it takes a village. You have all of the human operators ready to step in. And there was that thundering herd incident that you guys talked about in San Francisco that kind of highlighted that for people. And then there's just keeping the cars clean, and keeping
Starting point is 00:51:54 everything running in that regard. So, can you describe just what the operational infrastructure that sits behind Waymo looks like? Sure. I will say that we are overall, in all of those areas, on a path of increasing efficiency and automation. Yep. All right. So, the number of manual steps that one had to do five years ago to launch
Starting point is 00:52:25 Waymo, versus where we are today, is drastically different, right? But nowadays, if you look at one of our depots,
Starting point is 00:52:36 it's like a fully automatically orchestrated dance of autonomous vehicles. The way it looks
Starting point is 00:52:46 like today is this: cars will automatically go out to pick up their riders and serve their trips. If for some reason they need to come back, maybe they're low on energy,
Starting point is 00:53:05 maybe somebody left a mess in the car, they will automatically come to the depot, right? Now, cleaning today is a manual process. It will get flagged on the car: we have fleet management systems that say, hey, car number 378 needs cleaning. And on the sensor dome, we're able to display icons, so we'll show a little emoji.
Starting point is 00:53:30 Yeah, and the people whose job it is to clean the car still come and clean it. If cleaning is not required and it's just charging, the car will automatically pull into a charging stall and say, hey, I need charging. We don't yet have automated charging. In the future, you can imagine that being fully automated, right? But for now, a person will come and
Starting point is 00:53:49 plug in a cable, and the car will charge and then say, hey, now I'm ready to go. It will get unplugged, and the car will pull out of its parking stall and go on its merry way. One of the new versions, I think, has inductive charging, just like your iPhone, where you just drive over the charging mat. I was amazed that that works at car scale, but presumably in the future they'll just be able to drive onto the charging mat. Or do you think robotic plug-in will be easier? We'll see. I don't know.
Starting point is 00:54:12 I think there are some questions about efficiency and how that plays into the overall cost, and which one will be most cost-beneficial remains to be seen, I think. How well-behaved is the Waymo riding population, in terms of not leaving a mess in the car? We have wonderful riders.
Starting point is 00:54:36 We have the most amazing customers in the world. Generally, I'll say they are very good. I think there is something about, as I talked about, not having another person in the car: it's not somebody else's car. In some ways, I think generally people want to preserve the nice aspects of it. It's a broken-windows thing, where it's so clean to begin with. Yeah, I think that's the general trend that we see, right? Because it's not somebody else's space; you're in it, and it feels like it's your own.
Starting point is 00:55:11 Yeah. So you don't want to mess up your own space. I don't want to speculate too much on the psychology of the thing. However, I will say that it varies. You can imagine a college town on a Saturday night; that's a different distribution. Yes, yes. Will I be able to get a Waymo to any address that has USPS service in the U.S.?
Starting point is 00:55:36 Or will there be some head-tail dynamic where Ketchikan, Alaska is just never worth it? Eventually, absolutely. Right? There's no doubt in my mind. I think it's just a matter of when. Mm-hmm. And what modality would make the most commercial sense:
Starting point is 00:55:55 whether it's ride-hail versus privately owned. Yeah. For ride-hail, it's not a technical problem; the technology is solved. Yes. But if you're in the middle of nowhere and there's just not enough density of trips, does it make sense for the ride-hailing service that Waymo is running to have cars on standby?
Starting point is 00:56:10 Yes, yes. Probably not, right? They can be deployed somewhere else. And you probably don't want a horribly bad ETA. And this is where a personally owned vehicle that is equipped with the Waymo driver is maybe how you will see it materialize. Relatedly, what will the second-order effects of, say, majority-autonomous traffic be? It feels like a lot of things will work better, where, as you say,
Starting point is 00:56:32 when someone merges into a lane very poorly, everyone all the way back has to slam on the brakes. That's kind of anti-social behavior. And so it feels like higher-quality and more pro-social driving will basically reduce traffic a little bit, even for the same number of cars on the road. But presumably there'll be other second-order effects, like where we put traffic lights. How else will things change? So the first thing you mentioned, that's a huge deal. Just think about
Starting point is 00:57:08 traffic jams. What's that saying from the Navy SEALs: slow is smooth and smooth is fast? That's what traffic jams are like: you accelerate abruptly, then you come to a stop, and sometimes you hit a traffic jam and wonder, what happened? Yes. Well, an old lady crossed the road three hours ago, and we still have the standing wave. Yes.
Starting point is 00:57:27 Right. So if everybody were a smooth, predictable, consistent driver, you would still have traffic jams at times. Yeah. But the time constant to clear them out, I think, would be very different. But longer term, there are things like parking lots.
Starting point is 00:57:48 Right now, if you look at what some of our most interesting pieces of land are allocated to, it's parking lots, it's garages. And why is that? Well, because, again, your car is just sitting there 90% of the time, right? If more cars become fully autonomous, then there's no need for that, right? Then just imagine what you can do with your favorite city in the world if you don't have to spend that huge fraction of it on
Starting point is 00:58:16 just keeping these chunks of metal sitting around. Yeah, I don't think people often realize how big a deal parking minimums are for the layout of the urban landscape. The coffee shop here where I am would like to have outdoor seating but can't, because it would reclaim parking spots. Wouldn't that be wonderful? Yeah. I have a few more questions, but I'm curious to talk about Google's relationship with self-driving, where, again, it feels like right now Waymo is, aside from everything else AI-related, kind of the most exciting thing happening at Google. But it was a very long journey to get here. I feel like you could say that Google almost started working on it too early,
Starting point is 00:59:06 because you were saying there's been a bunch of recent enabling technologies. So did it require Google starting when it did, so early, or could one have spun up this project in 2015, 2020? And then, how did Google keep the faith when it always felt like it was perennially two years away? Yeah, on the latter part, I just have to give credit, huge kudos and gratitude, to Larry and Sergey and Alphabet leadership, and the company.
Starting point is 00:59:40 It is part of the culture and the DNA of the company to have that vision and have the stamina and conviction to go the distance. So, to the other part of the question: was it too early? I don't know. I think, clearly, all of the breakthroughs that we've seen over the years have changed how we're building the system. But the complexity of the problem is such that you need to go through these iterative cycles, right?
Starting point is 01:00:25 And we've seen many waves of technology, right? There were breakthroughs in 2013; ImageNet came around. And there's a narrative: okay, that was the right time to start. And then Transformers came around, and VLMs, and all of those are super powerful.
Starting point is 01:00:45 And you have applications in other spaces, in the digital world, and they certainly have an impact on AI in the physical world. But they're no silver bullets. They drastically reshape that early part of the curve.
Starting point is 01:01:00 It's always been the nature of this problem: it's very easy to get started. It's deceptively easy to get started. But it is super hard to go the full distance and nail the edge cases. It's the number of nines, right? There's the standard
Starting point is 01:01:17 engineering rule of thumb that every next nine takes 10x more. So, yeah, maybe there is a more optimal path, but I don't see some magical moment where the true complexity of the problem goes away and you can just take some off-the-shelf components and you're in business, right? If that were the case, then I think the industry would look very different today. Yeah. Last question I have: you've been promoted a lot at Google; it feels like Google really recognized your talents. What do you think Google does well? Google is famously one of the very best in the world at technical talent.
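That "every next nine takes 10x more" rule of thumb compounds quickly. A small sketch of the implied effort curve (the 10x-per-nine scaling is the heuristic itself, not a measured cost model):

```python
# The "every next nine takes 10x more" heuristic: if one unit of effort
# buys 90% reliability (one nine), each additional nine multiplies the
# required effort by 10. Illustrative only.
def effort_for_nines(nines, base_effort=1.0):
    """Effort to reach 1 - 10**-nines reliability under the heuristic."""
    return base_effort * 10 ** (nines - 1)

for n in range(1, 7):
    reliability = 1 - 10 ** -n  # e.g. 3 nines = 99.9%
    print(f"{reliability:.6f} -> {effort_for_nines(n):>10,.0f}x effort")
```

Under this heuristic, going from a 90% demo to six-nines dependability is a 100,000x effort gap, which is why "deceptively easy to get started" and "super hard to go the full distance" coexist.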
Starting point is 01:01:53 And the current AI wave more broadly is either stuff happening at Google or from Google alumni. What have you observed firsthand about how Google does this so well? Yeah, I'll say that culture of Google, of not accepting the status quo, having a big vision, and investing in technical talent, the people who can go the distance and realize the vision,
Starting point is 01:02:33 that is part of the culture. I think this is what you're seeing. And with the breakthroughs in AI in the digital world, and all of the early investments in Transformers and other fundamental technologies, quantum computing,
Starting point is 01:02:50 I guess we're not unlike those efforts as well, to be sure. Thank you. Yeah. Thanks for listening to this episode of the a16z podcast.
Starting point is 01:03:00 If you like this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family. For more episodes, go to YouTube,
Starting point is 01:03:10 Apple Podcast and Spotify follow us on X at A16Z and subscribe to our substack a16z.substack.com. Thanks again for listening and I'll see you in the next episode. This information is for educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product. This podcast has been produced by a third
Starting point is 01:03:32 party and may include paid promotional advertisements, other company references, and individuals unaffiliated with a16z. Such advertisements, companies, and individuals are not endorsed by AH Capital Management, LLC, a16z, or any of its affiliates. Information is from sources deemed reliable on the date of publication, but a16z does not guarantee its accuracy.
