a16z Podcast - The Road to Autonomous Vehicles: Are We There Yet?

Episode Date: June 26, 2023

Self-driving cars have been on the horizon for quite some time. But they might actually be here. We got to ride around in one all over San Francisco with Waymo's Chief Product Officer, Saswat Panigrahi. We discussed so much, including the five levels of autonomy, the infamous LiDAR vs video debate, regulation, UX, and the role of AI in fine-tuning these models. But most importantly, if autonomy is here… now what?!

Topics Covered:
00:00 - Introduction
03:20 - A first look at Waymo
05:21 - 5 levels of autonomy
09:45 - Technology challenges
14:32 - LiDAR vs video debate
18:19 - How Waymo differentiates
19:01 - Technological unlocks on the horizon
20:39 - The role of AI in autonomous vehicles
25:37 - How Waymo views safety
32:05 - Collaborating with regulators
37:26 - Learnings from the first 2m miles
39:45 - Driving user retention
43:47 - Waymo's expansion strategy
47:00 - Changing regulation
47:41 - Societal unlocks enabled by autonomy
52:21 - Will self-driving cars replace humans?
52:58 - Closing thoughts

Resources:
Check out Waymo: https://waymo.com/
Check out Waymo's YouTube channel: https://www.youtube.com/@Waymo
Find Saswat on Twitter: https://twitter.com/saswat101
Check out a16z's 8-part series on the autonomous vehicle ecosystem: https://a16z.com/2018/02/03/autonomy-ecosystem-frank-chen-summit/

Stay Updated:
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Transcript
Starting point is 00:00:00 We are in San Francisco. We're not taking you to some, you know, little test facility in a desert to show you that this can work. We are on the road. We are in one of the most vibrant cities on the planet. That's where we are in the journey. We're fully autonomous where we need to be. Self-driving is here. I think if I was a human driving there, like, I don't know what would have happened.
Starting point is 00:00:22 You were asking, does it get boring for you after a while? I love it. I love seeing your excitement. And we are not the only ones who have noticed. That kid just noticing that we don't have a driver now. Look at that. I love seeing so many people when they look inside. Look at it.
Starting point is 00:00:35 Just like, what's going on there? Today, I got to ride fully autonomously with Waymo's chief product officer, Saswat Panigrahi, who's actually been on this journey since 2016. We discuss so much, including the five levels of autonomy. Fully autonomous. As you can see, nobody in the front seat,
Starting point is 00:00:54 no expectation of a human to take over. The infamous LIDAR versus video debate. Saying you love LIDARs and hate cameras, or vice versa, is like saying you love one wavelength versus the other wavelength, right? It's not a fundamental problem. Regulation, user experience,
Starting point is 00:01:11 the role of AI in all this, but especially the question: if autonomy is truly here, now what? As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security, and is not directed at any
Starting point is 00:01:34 investors or potential investors in any A16Z fund. For more details, please see A16Z.com slash disclosures. All right, so before we jump in, if you are hearing this, you are listening to our RSS feed, which is great. We love that you're here. But for this episode, this episode is such a special one. I would really recommend you go watch on our YouTube channel. I honestly feel so lucky that I got to ride around in this self-driving car for an hour. And we have so much good footage. We literally saw the full gamut of possibilities. We saw someone running a red light. We saw a group of bicyclists, literally like 20 or 30 bicyclists. We saw a construction zone. We even saw a two-year-old kid who saw that there was no driver in the front seat. And you
Starting point is 00:02:23 could just see the confusion on this kid's face. So really, it was an incredible ride. And if you watch on YouTube, you can see all the footage from inside the car, outside the car, and from the car itself. You can literally see how the software is interpreting the world around it and transforming that into this representation, a simulation, on screen. It's honestly so cool. I actually just hit my one year hosting the a16z podcast, and this by far was my highlight. This is so cool. I hope you all get to experience this one day. And again, if you want to go see what I'm talking about and not just hear about it, go to our YouTube channel at youtube.com slash at sign a16z. That's the at sign, then a16z, or just search a16z on YouTube.
Starting point is 00:03:08 Really, this is the one episode. If you're going to go on over to our YouTube channel, this is it. All right, we will see you on the road. Wow. Very clean. So what do I do? Do I just... Oh, yeah, the KB. Up there. Nice. Oh. I, like, don't even need to interact with it. All right. We just jump in.
Starting point is 00:03:38 Hey there, Catherine. Thank you. This is cool. Can we just click start? Yeah, do you want to? Go ahead. Oh, wow. Oh, wow. This is wild.
Starting point is 00:03:58 Happy Friday. This is cool, so you can play music. Yeah. And you can just ask it to pull over if you want at any time. And you can call support, and there was an audio cue to remind you, in case folks get in and, you know, a minute into it, think like, is this a... Yeah, like, is this going to somebody? But it turned out actually that wasn't needed, you know, folks just got eased into it pretty quick. You know, somehow the smoothness
Starting point is 00:04:22 with which it drives, there's some instant trust. Totally. So I mean, so I get car sick pretty easily, but to your point, like, it doesn't feel jerky at all. Right. It feels really smooth. But I mean, I'm so curious, because you've been working in this space for so long,
Starting point is 00:04:38 does knowing kind of like how the sausage is made, does that make it any less magical? Like when you got into this car for the first time and there was no driver sitting there, were you also like, oh my gosh, like what's going on here? Oh, absolutely, I'm totally like a kid on this one. So 2017 was probably the first time there was a person in the front there,
Starting point is 00:04:56 but they didn't have actual control. So that was my first. And 2019 was the first time I was in a car with truly nobody on a public street, just driving around. And every single time, even if, of course, the year before, the month before, the day before, and the hour before, I was in anticipation
Starting point is 00:05:13 and preparing for it, still being inside it, it's truly special. I mean, it's so cool to just see it navigating, the slight turns on the wheel. Something I have to ask about is, it's felt like this is coming for a while. Like I almost feel like the future is here. Like we're seeing this, we're sitting in a car with no driver,
Starting point is 00:05:28 but a lot of people would kind of say, this has been promised every two years kind of thing. And it's like we're finally at the two years in a way. And so maybe you could kind of map out where we are in that kind of arc, the five levels of autonomy, and also where we still have left to go. Yeah, certainly. We are in level four right now.
Starting point is 00:05:48 So fully autonomous. As you can see, nobody in the front seat, no expectation of a human to take over. And in level two and three, it's really, really crucial to communicate the expectations to the driver. Yeah. Because it's very easy that during a normal situation, the driver feels, hmm, the car is kind of driving well, so I can pick up a book and start reading. No, that's serious, and there is
Starting point is 00:06:09 an expectation to take over. So we are in that level four with a certain scope. So right now, if it were to begin heavily snowing in SF, which it has, believe it or not, around last season there was a little bit of snow, we wouldn't operate. We couldn't. But level five is truly defined as anywhere, anytime. Right.
Starting point is 00:06:26 I mean, and even the idea of autonomy, people kind of view in this binary way, right? Like the whole idea of levels puts that into perspective. Like level two is lane sensing, automatic braking. Right. Is level three where basically what we're doing now, but there would be a human in the front? Or how would you define it? No, no. I would say vast difference even between level three and level four, huge.
Starting point is 00:06:44 It's almost the difference between driving and flying, I would say. Okay, yeah. A vast difference, because in level three there's still this concept that within seconds you need to take over, versus now we can have this conversation, you can look at me, right? That would be challenging otherwise. But the assurance level you need to get to for level four is just a universe different, I would say. And if we wanted to, let's just imagine we're rolling down a hill for some reason and it feels like the car is not stopping, could I take this over if I wanted to? So if you did, the car would say, hmm, I'm being interfered with. I'm a fully autonomous car. I'm not supposed to be interfered with in this manner.
Starting point is 00:07:20 So it'll pull over, basically. Okay, got it. So we've talked about the kind of arc of innovation. You've also worked in this industry for a long time since 2016, is that correct? Yes, that's right. Okay, so tell me a little bit more about the barriers along the way to level four. What would you say? Was it the technology, the regulations, some combination? Are there major factors that have really delayed us from getting to this point where we're
Starting point is 00:07:42 now at level four? Yeah, yeah. One thing, you mentioned the innovation arc. I always reflect that on any journey, when you're trying to do something that has never been done before, there will always be ups and downs, some challenges you foresaw, some you didn't. The key is, are you clear on how massive the prize, or the benefit to society, is on the other side of it? Because then that makes it all worth it. So we were super clear from the get-go: a fully autonomous car does not get drowsy.
Starting point is 00:08:11 We even have physical constraints, right? So even when you are alert, let's say you were looking for parking on this side of the street. Your face would be turning towards that, constantly looking for that parking spot. So you wouldn't be able to see. You don't have your full senses. Yeah, yeah, exactly. You can't keep turning back and forth. And so we're fundamentally convinced that a fully autonomous driver is going to be safer. So once we started that, yes, the largest challenge, I would say, I would call it technology, but I mean two different things. One is building the driver itself that can drive under these conditions and have that high grade of performance.
Starting point is 00:08:46 Also, measuring that is pretty hard. This smooth sort of early-stage drive under very tight constraints is now, with today's technology, relatively not terribly hard to build. But to be able to do that at the scale of 24/7: busy intersections at slow speed, but also the high-speed intersections of Phoenix. So in Phoenix, for example, the streets are wider, so you don't deal with these narrow situations. But the driving speed is 45, and people are sometimes going 60 on that. That means you've got to see a lot further.
Starting point is 00:09:19 So very different sets of challenges, very diverse ones. And the technology, it really required the full stack, right? We build the hardware, we build the software. Because if you built just the software and waited for somebody else to deliver the hardware, the speed of learning, the speed of iteration that was necessary to build something like this was so steep that it was not feasible. So we had to build the lasers, the cameras, the radars, and the software on top, and the massive simulation infrastructure as well.
Starting point is 00:09:45 I was going to say there's so many moving parts. And actually, maybe let's talk about that technology. So this is my first time in a fully autonomous vehicle, but I've seen them around my area. They're driving around. I see no driver. But I also see the swirling thing on top. I see a bunch of different kind of appendages to the car. So maybe you could just break that down.
Starting point is 00:10:04 What is happening? How is all this technology coming together? And what are the bits and pieces that you've added onto the car that allow it to be autonomous? So fundamentally, you can think of it like: is the car aware of what's happening around it? Then, can it anticipate what the things around it are going to do? Yeah. And then, reasoning on what it should do. These are sort of the three components. In perceiving what's around us, think of the example we were just discussing: you're trying to look for parking, so you're focused on that task. This car, with those appendages, as you mentioned, can see three
Starting point is 00:10:38 football fields away, 360 degree, and it's getting a snapshot multiple times a second. Okay. And it's relying on a combination of the state-of-the-art laser, camera, radars, all strategically positioned. Okay. So to give you an idea, lasers give you a very precise understanding of everything around you. It's the smallest detail.
Starting point is 00:10:59 So if there was a child an inch out from this pole, it will be able to mark, oh, this is a child, demarcated away from that pole. It will get to see that. But, you know, the cameras are needed to distinguish between the red light and the green light. And the radars can almost see around corners, even when the laser and the camera or our human eyes can't, because they can sense objects coming in. And so we took an approach that we want to combine the best strengths of each of these modalities to create the best picture of the world around you, one that's just incredibly better than what a human possibly could.
Starting point is 00:11:36 Both due to the attention span, the range, the fidelity, and the combination of these sensors coming together. So that's what we see. But then there's a harder challenge of anticipating what the person will do. Take a look at that pedestrian. They're standing pretty close to the crosswalk, right? But you don't know if they're going to move. Exactly.
Starting point is 00:11:53 Are they going to jump in or are they going to stay? Are they going to jaywalk or are they going to obey the light? Yeah. This requires a deep, deep understanding and tremendous amount of machine learning. Each stage here, by the way, requires an insane amount of machine learning. And it's not the type of thing that you can just like put on the road and say like, oh, let's see if it makes mistakes. Yeah, we're not trying to tell you is this a cat image or a dog image.
Starting point is 00:12:13 So for example, for that pedestrian, in addition to seeing that they're there, acknowledging them, which was problem one, you've got to look at even their gait, their hand movement, their leg movement, to anticipate: are they about to take motion? And on being always conservative, say, four years ago, it wasn't that we couldn't make the predictions, we could totally detect them.
Starting point is 00:12:32 It was this nuance of, are we being over-conservative, assuming they may jump in, hence let's not move, versus confidently moving forward. So are we at our first stop of the multi-stop? Amazing. The second opens the door. So yeah, that anticipation of what a car is going to do, what a pedestrian is going to do, what a child is going to do, because they can erratically jump out.
Starting point is 00:12:54 What a motorcyclist is going to do? Are they going to lane split and speed and get around you? So all these motion models and understanding of how people behave, how this gentleman is walking and what their gait and motion tells you about where they're going to go, that is essential. And third and final, what should the car do? This feeling that you had that it gently accelerated, but not harshly. Yes. That takes into account a lot, not just... look at this gentleman.
Starting point is 00:13:16 He almost has his feet in the crosswalk, but he's not intending to cross. Yeah. Because he has a stop sign. A very conservative system would just come to a stop. But here we sort of asserted ourselves a little bit. Yeah, we went for it. That's such a great point because... So I learned to drive in the last year. I was actually waiting for self-driving and it took a little too long,
Starting point is 00:13:34 but as part of that, oh, we've got to resume, yeah, absolutely. Let's go. You can see here, for example, what we're showing. Oh, yeah, they're jaywalking. Yeah. And you see, within the last minute, you saw an example in which we looked and we noticed that this pedestrian is not going to cross, so we didn't come to an abrupt halt. We went through smoothly. And later, we just noticed that somebody's jaywalking and we yielded to them. Yeah. That delicate nuance is pretty hard. And like this lady, you actually don't know. She kind of looks like, oh, she went back. And you can see here, we were tracking them. So, you know, if there were anxious passengers wondering, is
Starting point is 00:14:06 this car seeing it? We give them the feedback that we're seeing this pedestrian crossing, right here on the screen, and we tell them why we're slowing. Because what happens is also people zone out, they take their space, they speak to their kid if they're picking them up after soccer practice, or take a phone call, and then they notice when the car is stopped, and they're like, hmm, why are we stopped? And here we try to give them the feedback on why we stopped. It's a stop sign now. Or we're yielding for this truck that passed by.
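To make the three components Saswat describes (perceive, anticipate, plan) concrete, here is a minimal, purely illustrative sketch in Python. Every class name, threshold, and probability below is hypothetical; Waymo's production stack is heavily learned and far more sophisticated, so treat this as a reader's mental model, not the real system.

```python
# Illustrative only: a toy perceive -> anticipate -> plan loop.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str            # "pedestrian", "cyclist", "vehicle", ...
    distance_m: float    # fused estimate from lidar/camera/radar
    crossing_prob: float # learned estimate that the agent will enter our path

def perceive(lidar_points, camera_frames, radar_returns):
    """Fuse modalities into a single list of tracked objects (stubbed out here)."""
    # Lidar gives precise geometry, cameras give semantics (e.g. light color),
    # radar senses motion even with poor visibility; a real system fuses all three.
    return [TrackedObject("pedestrian", distance_m=12.0, crossing_prob=0.15)]

def anticipate(objects):
    """Attach a simple behavior prediction to each tracked object."""
    return [(obj, obj.crossing_prob > 0.5) for obj in objects]

def plan(predictions, current_speed_mps):
    """Pick a target speed: yield only when a crossing is actually expected."""
    for obj, will_cross in predictions:
        if will_cross and obj.distance_m < 20.0:
            return 0.0                          # yield / stop
    return min(current_speed_mps + 0.5, 11.0)   # otherwise proceed smoothly

objects = perceive(lidar_points=None, camera_frames=None, radar_returns=None)
target = plan(anticipate(objects), current_speed_mps=9.0)
print(f"target speed: {target:.1f} m/s")
```

The interesting behavior in the transcript, not stopping for the pedestrian who has a stop sign and no intent to cross, lives entirely in how good the `crossing_prob` style predictions are, which is where the heavy machine learning sits.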
Starting point is 00:14:37 So I want to get to the comfort, like how do you package the autonomy of all of this into a product, because it really is a product. Yeah. But first, I feel like the rivalry in self-driving is LIDAR versus Tesla Vision, or the video processing. How has Waymo thought about that decision? Some people pose it as expensive hardware with simpler software, because you have so much fidelity from LIDAR, versus a bunch of cameras and just a lot heavier processing. So what went into that decision, and also, how are you thinking about that moving forward? Like, is there a future where maybe actually you don't need all of the same sensing system? Yeah, great question. So personally, I have found the LIDAR and video debate almost takes on an ideological tone.
Starting point is 00:15:14 You know, for hard problems, the innovation arc that we're talking about, the best approach is taking a first-principles approach, right? With neither love nor hate for a specific technology. It is a technology; you shouldn't get to that level. It's a tool. And LIDAR clearly has strengths that a camera doesn't. For example, at nighttime, even the best cameras will have some challenges. And the camera clearly has strengths that the LIDAR doesn't, the red-green example that we mentioned. And similarly, radar has strengths that LIDARs and cameras don't. So saying you love LIDARs and hate cameras, or vice versa, is like saying you love one wavelength versus the other wavelength, right? It's not a fundamental thing.
Starting point is 00:15:54 Okay. But there is a practical question. So on the first principles, does the combination of these sensors position you better than individual? The answer is yes. We can show you that there are situations in which camera will be insufficient. Now the question is a practical economic one. Is your ability to bring this public good to a large number of individuals
Starting point is 00:16:15 hindered by the fact that these things are expensive? Yes. And LIDARs, 20 years ago, if somebody told you that all these cars are going to have radar, you'd be like, no, radars are expensive. Guess what? Most cars have radars today. Cameras on cell phones were a novelty. Now, cell phones have better cameras than dedicated cameras of six years ago.
Starting point is 00:16:33 LIDAR is going through the same transformation. Right now, the iPhones have LIDARs, right? The amount we have been able to cost down these LIDARs in the last two years is incredible. Okay, so you're not concerned about that. And four or five years ago, we had that belief. Now we have that proof. Because hardware generations,
Starting point is 00:16:50 there are multiple examples outside of Waymo of seeing how something like this began: chips are a great example. Cameras are a great example. So it would be surprising if you had hardware that you were able to package, and then, with focused effort, you weren't able to cost it down. So we had that belief, and now we have the proof. Right. And Waymo actually manufactures LIDAR, correct?
Starting point is 00:17:11 And I feel like Waymo in a way has chosen to your point, like manufacture the hardware, work on its own software, simulation technology. Give me the thought process there in terms of there's always this question in business. Do I build? Do I borrow? Do I buy? Yeah. And given that this is a capital-intensive business, how do you think?
Starting point is 00:17:31 about that. Where should you outsource and where do you really need that fundamental technology yourself? Great question. The first thought we had was not, let's build this all ourselves. We said, okay, let's see what's the best out there, the absolute best, even easing a little bit the cost requirements. Say we're willing to pay. Yeah, we'll pay for anything. What's the best out there? And what we found is that the absolute best LIDAR out there, the absolute best radar out there, was not optimized for the task of autonomous driving.
Starting point is 00:18:04 And hence, we had to build it. Each hardware generation, we do evaluate that. We try to see, okay, have radars evolved to the point that we could use something off the shelf? So that build versus buy is a pretty practical choice each single time. And as we look forward, the question becomes, where do you build a moat?
Starting point is 00:18:22 because now you're not the only company that has achieved some level of autonomy. And you could imagine, like, let's just say a future where we've just achieved level 5 in many places with many companies. How do you think about what differentiates? Is it really a data moat based on the amount of training that a company has been able to do? Is it owning the proprietary LIDAR that is just 10x cheaper than the competitor's? I'm trying to think ahead in terms of where does value really accrue in that future. So first, just thinking of the space we're operating in. We're talking about a space with a trillion miles, right?
Starting point is 00:18:56 So it's vast. Imagine when, let's say, the first cars were being built, folks said, hey, is there space for only one provider or two providers, right? When you're talking about a space of a trillion miles of today, and then you think about what is the potential value add when you have a driver that drives itself? Because partly, if you think about the miles today, we talk about a truck driver shortage, for example. Yep.
Starting point is 00:19:21 In fact, even those commercial miles are kind of stunted by the lack of availability of drivers. Sure. Yeah. So really, we're talking about a space that is all cars, trucks, and all transportation when something is that vast. And the number of autonomy players today are much smaller than three years ago, if you look at it. Because it is a pretty challenging problem still. So I think the universe is different.
Starting point is 00:19:43 But still a valid question. We do believe that there is a knowledge curve. So by having driven 20 million plus miles in testing, by having done billions of miles of simulation, we become aware of problem spaces that others may not have discovered yet. And that goes into hardware design and software design and simulation design. Hardware design in particular has long lead times. So that becomes a flywheel effect. Your question about how did you know you had to build your own laser,
Starting point is 00:20:11 because by that time we had already driven 10 million miles and we're like, we're going to need that thing. That was a person... you saw that? That person basically ran a red light. It was red for them. And could we even talk if you and I were driving? Well, I was going to say, I think if I was a human driving there, I don't know what would have happened. And you would definitely not have been able to carry your conversation
Starting point is 00:20:32 when that was happening, right? No, I can barely, I can't even talk to someone next to me most of the time when I'm driving. But, okay, so talking about the technology, I want to talk about safety next. But first, are there any important technological unlocks that you still see on the horizon, that not just Waymo, but the industry of autonomy is still trying to solve? Is it really that cost curve, or is there something else that is still in the way of us really rolling this out more broadly? Look, the rate of innovation, even just within Waymo, which I can speak to most confidently, is definitely massive. I'll give you a concrete example. When we came from Phoenix to SF, we did have some work to do to adapt to the assertiveness, and the driving here is different from the Phoenix
Starting point is 00:21:12 driving. But when we went to Los Angeles, the driver worked shockingly well from the get-go. Really? Yeah, shockingly. And now there's a portion of Scottsdale, which is the northeastern part of our Phoenix territory. It's a much denser area, lots of restaurants, lots of shops. There we were able to go in, like, within two months. We just went there, we decided we're going to open up Scottsdale, within two months. And the reason for that is that the driver is truly generalizing very well. And the concept is pretty intuitive. And it's advancement in AI that's enabling it, but think of a driver that's capable of this kind of tight traffic navigation, lots of pedestrians and cyclists, but low speed of travel. That's where we are right
Starting point is 00:21:52 now. Now imagine in Phoenix: 45 miles per hour, three-lane, four-lane streets, lots of oncoming traffic, and being able to navigate that. Pretty much every good-weather city is like a linear combination of those two things, right? So in Los Angeles, you got the best of both worlds. Exactly. So you go to West Hollywood, it's much more like SF-style driving, lots of pedestrians, cyclists, and so on. You go to LA's faster boulevards, it's a lot like Phoenix. So once you have solved these two, the AI is just much more generalizable. The second area, I would say, you already mentioned cost down, is making the simulator a lot better. I'll give you an example. We have billions of miles of simulation in good weather.
Starting point is 00:22:35 Yeah. If you want to test, how would we do in rain? Imagine being able to simulate rain, so that you can take all the learnings in good weather, all the tough situations you encountered, and now test yourself in rain. What if rain was a complicating factor on top of that? What if I add a cyclist into that tough situation? All these combinatorial questions: being able to realistically simulate that, that's also a huge area.
Starting point is 00:22:59 Yeah, I mean, it's a little foggy today, but I also wonder, you know, you talked about AI, and obviously this is running off of an algorithm that's been trained on all these miles. Is it one algorithm? Or, let's say it is an extremely foggy day, it's a rainy day, you're in a new environment, is it a different, slightly fine-tuned model based on different situations, or is it all one aggregate that's just ingesting all of this information? It's definitely many, many deep models, some very general, extremely deep learning models, and some specialized models to make them really good at some very hard tasks. Like, for example, understanding pedestrians' intent is such an important,
Starting point is 00:23:37 vast space. It's like understanding humans, right? I know, and we're pretty hard to understand. It takes us a lifetime to understand ourselves. So understanding human behavior and motion, there could be specific models. There could be end-to-end models on just driving like a good citizen, polite to other riders. That can be a more end-to-end model. Being comfortable to riders' preferences, that can be a very end-to-end model as well. So it's a mix of this. And there's AI at every layer of the stack, from perceiving the world, to predicting other people's behavior, to the driving, to the testing. So for example, you asked about fog. So what we tried to do is we both observed how other people drive in fog, but also tried to reason about how well can we see in fog. So if this
Starting point is 00:24:19 fog were to get a lot denser, the appropriate thing to do is: I can't see that far, so I shouldn't be driving as fast as I normally would. So that kind of learning is built into multiple layers of the stack as well. But also some general things, the AI surprises you. You asked about fine-tuning; one of the powerful things that deep models are telling us, as well as generative AI is telling us, is that you actually don't need to hand-tune every single thing. It learns. So that kind of learning we are doing. And one thing I will say, you asked about moats a little bit earlier. You know, the high-level concept of AI may be easy to understand, but really the breakthrough engineering that you sometimes need in AI is just having the raw infrastructure
Starting point is 00:25:00 to intake all this data. The amount of data you have to learn to handle to build a really well-learned algorithm is pretty hard. And that's where Google's infrastructure that we have worked with is just immense, and the machine learning investments that Waymo and Google made 12 years ago, 13 years ago, are beginning to pay off in a manner that's pretty hard to just catch up to. So that's a moat as well.
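One piece of the fog reasoning Saswat describes a moment earlier ("I can't see that far, so I shouldn't be driving as fast") can be made concrete with basic stopping-distance arithmetic: if the car can only perceive out to d meters, its speed should keep reaction distance plus braking distance inside d. The sketch below is an assumption-laden illustration; the reaction time, deceleration value, and the idea that a hard speed cap is derived exactly this way are assumptions for the example, not anything Waymo has published.

```python
# Hedged sketch: cap speed so stopping distance fits within the visible range.
import math

def max_safe_speed(visible_range_m: float,
                   reaction_time_s: float = 0.5,
                   decel_mps2: float = 4.0) -> float:
    """Largest v where v*t_reaction + v^2 / (2*decel) <= visible_range."""
    # Solve v^2/(2a) + v*t - d = 0 for the positive root of the quadratic.
    a, b, c = 1.0 / (2.0 * decel_mps2), reaction_time_s, -visible_range_m
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

for fog_range in (150.0, 60.0, 25.0):  # meters of usable sensing range
    v = max_safe_speed(fog_range)
    print(f"visibility {fog_range:>5.0f} m -> speed cap ~{v * 2.237:4.1f} mph")
```

The point of the toy example is only that the speed cap falls as visibility shrinks; in practice this judgment is learned and layered across the stack rather than computed from a single closed-form rule.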
Starting point is 00:25:27 That's actually a great point because, again, we're in a space where you need to react in milliseconds, right? And so you need to not just be able to train this algorithm, but to interpret live and process that information live. Let's talk about safety, right? I mean, that is like the foundational piece of whether we can get these cars on the road. And something I'd love to hear from you on is how different parties interpret safety. Being on the road, you could say, just inherently isn't safe. You can get in a crash. You can die. Unfortunately, that happens every day. So between the regulators, the technologists like yourself who are building this, and then the consumers, the people, the riders,
Starting point is 00:26:05 how do each of them view this concept of safety? How are you designing the product with that in mind? There's the analytical and quantitative version of safety. You can say the nature of collisions you have, the probability of entering into a certain collision under certain circumstances, a lot of complications in there. And the good news is many regulatory bodies do have their teams that study crashes, and there are great databases of crashes and so on. Then there's the element of risky behavior. You
Starting point is 00:26:33 may have gotten lucky that you didn't get into a collision, but you undertook risky behavior. So, for example, in that situation we were in before, we could see that those two individuals were going to jaywalk. Yes. Now, you could have argued that, well, the car has the right of way, we were driving right. No, but that's risky behavior, right? It's about preventive behavior.
Starting point is 00:26:53 So you can't see that solely with the presence or absence of collisions. Were you a good driver? And third, and finally, do you make the person feel safe? So we could be braking hard anytime we sense a risk. Yeah. Wouldn't feel very safe. Exactly. You wouldn't feel very safe.
Starting point is 00:27:11 And then the fourth layer I would add on top of that is, just because you figured out how to drive smoothly, meeting the expectations of a rider, you shouldn't falsely promise the true analytical safety either. So for example, designing an algorithm whereby you drive smooth on a street could be easy, but you shouldn't over-promise that you can detect a pedestrian jumping out of a car. That takes real heavy engineering, but the consumer may begin believing that, oh, just because it can drive smoothly, it can probably protect me from that. And I think being truthful about your capabilities is really, really important, and what Waymo
Starting point is 00:27:49 has tried to do. You see their reaction? Yeah, yeah. And you can see how tight this space is: truck on the right, a car just went by, a pedestrian just crossed, and there's somebody just going in. And noticing that this fire truck didn't have its sirens on, so it didn't try to rush out of the way. Yeah, to get out.
Starting point is 00:28:04 So slight change into that situation, which we can test in simulation, we would say, hmm, right now the politest thing to do is let that pedestrian pass and, you know, wait for this turn. But when the sirens are on, it's a different environment. So anyhow. And that's another data point, right, that you need to take in: sound. It's not just visual.
Starting point is 00:28:21 Oh, yeah, that's the sensor we didn't discuss. There are microphones that can not only hear that there's a siren, but are also pointing at where the siren is coming from. Yeah, that's fascinating. What does the data say, though? Can you kind of ground us in: there is a certain number of crashes that humans are involved in every single year, and then where is the technology at relative to that? Yeah, so both the methodology by which we evaluate our safety, which is a combination of many, many.
Starting point is 00:28:47 This will be interesting. There's a bunch of a... It's a whole ocean of cyclists here. Look at that. It went around a double-parked truck. I look at that and, you see, they're like, they don't know what to... Yeah, look at all of them. We're telling you who we are waiting for. See, they just pointed inside. Look at that, we can detect all of them. We're giving you feedback that we can see all of them, seeing every single one. Yeah, yeah. Oh, and them coming around the corner. Yeah. Oh, and we're turning, because we can see that their handles are turning
Starting point is 00:29:17 left, so we can understand that they're likely going to go that way, so we don't have to come to a stop. So, balance between making progress. So that's what a rider expects. Yeah, exactly. And ultra-conservative behavior that may or may not be warranted. To your earlier question, so what we did is, I'll give you just two examples from many safety methodologies we employ. East Valley of Phoenix, we have been operating there for a while. There, one thing we did is we took every fatal crash that had occurred and resimulated it and showed that we could avoid that.
Starting point is 00:29:48 Okay. That was one very specific data set. What we also did is we were the first company ever to cross one million fully autonomous miles, miles like this. Yes. Nobody in the front seat. So there's no debate about what the car could have done. It was real miles. In that, we published our full crash stats.
Starting point is 00:30:06 Okay. And there was not a single collision with injury. Really? Not in a million miles. Not a single one. Okay. And only two of them would meet the standards of reportability, called CIS. And yeah. All right.
Starting point is 00:30:19 You're here. Oh, we're here. Yeah. And look at that, how it stopped because it saw that they want to cross. I love seeing so many people when they look inside. Just like, what's going on there? Cyclist approaching. All right. Oh, you're telling us, yeah. Okay, so I'm going to click resume ride. And the reason, by the way, we're switching a little bit from safety to user experience design. The reason it told you cyclist approaching is, what we know is, even when the car is
Starting point is 00:30:44 stopped, what happens is people open the door into the cyclist. I mean, I've seen videos of that. So this element of safety, it's not just about when it's driving, but it's about caring about safety every second. I love the analogy you used about even a driver. We only have two eyes. They're right up in the front of our face. If we're looking a certain direction, if we're focused on music or a podcast in the car, our senses are limited, as much as we like to believe as humans that we're all special and we're the best drivers. We are variable in ourselves in terms of what we're focused on. And if your two kids in the back start screaming at each other or something. By the way, I know you're relatively new to SF as well. I hope you're enjoying the view as well. I know. I'm so
Starting point is 00:31:22 by some iconic locations. The city, absolutely. I love to think about these like second, third order effects, like people taking city tours in these cars. I mean, we've basically turned this into a recording studio, which you would never think of in the past. I can take confidential work calls. That was not a thing I could do before, right?
Starting point is 00:31:39 Oh, it's such a good point. I was looking through the reviews on the app, and I saw this one. You get a sense of the user that wants to be in a car like this, and they called it basically Uber and Lyft for introverts. And there's just these little things that you don't think about, Because, again, we're fixated on safety, and that is so foundational. But then you really do, once you have covered safety, once it becomes safe, in the eyes of the regulators, the consumers, it opens up all these doors.
Starting point is 00:32:04 Oh, absolutely. You know, we work with so many founders in nascent spaces, AI, Web 3, space. I would say those three industries also have a fair amount of pushback, which autonomous vehicles do as well, right? And in many cases, rightfully so, especially autonomous vehicles. Like, we're talking about people's lives on the line. And so have there been any learning since you've been working in the space for a while where you're trying to meet this safety need, but you're also trying to push regulation and welcome this technology into different cities? Yeah, yeah. It may sound like an idealistic answer, but I do believe that there are applications that have a fundamentally different use case or a promise,
Starting point is 00:32:44 and then they're trying to make sure they're not harming the public or causing unintended consequences. The reason we exist is to make driving safer. So we have deep fundamental alignment with what sometimes the regulators are trying to achieve. Now, we may have different inputs, we may have different data, we may be approaching it from a different angle. But I believe that in every conversation I have had with anybody who is in state, federal, or local government, or even outside of regulators, just firemen, local law enforcement, we very quickly in that conversation, I begin to appreciate, our team begins to appreciate, and they begin to appreciate, that we're trying to do the same thing here.
Starting point is 00:33:24 Okay. And that is a powerful baseline to begin constructive conversations out of. But if, for example, the goal was something else, and by the way, what about safety, right? That would be a very challenging conversation. We try to say, this is what we're trying to do, folks. This is what we have measured. Finding common ground. Yeah, and just transparently sharing the data.
Starting point is 00:33:46 Safety? Or is it collisions? All right. Well, on collisions, we have published our accident reconstructions. We have done 20 million miles of testing, billions of miles of simulation, and we're telling you every single contact we have had in 1 million autonomous miles. By the way, this week we're about to cross 2 million. First company again.
Starting point is 00:34:03 First company again, 24-7, including daytime, not filtering out any of the challenging situations, dense downtowns, 24-7, 2 million miles, more than 160 years of human driving worth of data. We will just share it with the world. And then they can see for themselves that we're clearly a safer driver. And if ever there was an event about which they asked us, we would transparently share it with them. So I think that gives a fundamentally good basis. And I genuinely believe that even the word pushback, right, it's almost like internal debates at Waymo. When we debate, how should we design this thing?
Starting point is 00:34:40 We come at it from different angles. Somebody may see the user's expectation of smoothness of drive. Somebody may see more, what's that scooter's intent? You see how the scooter swung in from the right, kind of came in between a bus and themselves? What are their expectations? What's the expectation of a pedestrian if they were to jump out on this side? We may approach the problem from different angles,
Starting point is 00:35:00 but our core mission of safety is so deeply drilled into every Waymonaut that we believe that every person who meets us will see that. Yeah, and I mean, one aspect I love is that it's ingesting all this data from, like you said, almost 2 million miles now. And when you think about us as human drivers, like, no one, like, I certainly have not driven 2 million miles. I'm not even really processing exactly what happens when a scooter is coming up on the right here.
Starting point is 00:35:27 And if I've spent my whole life driving in SF, which is not the case for me, but for some people it is, and then you drop them in Phoenix, they are new to those roads, just like you're training in a new city. And by the way, the 2 million thing that you mentioned, that's just the fully autonomous miles. We have billions in simulation. But yeah, here there was a lot of experimentation on how much is the appropriate amount of detail.
Starting point is 00:35:51 So the vehicle and the sensors are seeing a lot more detail than what's being shown here, right? We're seeing many, many points per square inch of detail here. So we tried to experiment with how much detail we put in here. And there are folks who, in the early days when we had a version here that would show a lot more detail, would engage like this, right? In fact, they would look more inside than outside the window and keep cross-checking. Did it see that cone? Did it see that thing?
Starting point is 00:36:18 And what we came to a balance with is we want to put people at ease and invite them to use this space. And it pops up. So when the vehicle stops, you will notice it will try to explain why it's stopping. Like, is it a stop sign or is it yielding to a pedestrian? Because we realized through lots of experimentation that that's when people want to take a look at the screen. Okay. So, for example, if you're by yourself, we want you to get lost in the beauty of San Francisco around you. If you are checking emails, it's all right, go do that.
Starting point is 00:36:49 And you will take your head off your phone or from viewing the painted ladies when you see, hmm, why are we stopped? And we will tell you, well, there are two pedestrians on the right. There's a car crossing on the left. There's a car parking in the front. We'll explain that to you. But then you can see here, who are we slowing down for? You see that little gentle highlight?
Starting point is 00:37:08 Lots of design experimentation goes into that. Because we don't want to be in your face. We want to be gentle, soft. And you'll notice that in the night, this will go into dark mode. Because the ambient lighting is reduced. So we don't want to be too bright. Because if you want to just take a nap, we want you to take a nap.
Starting point is 00:37:24 That's a great point. And I'm curious if there's other learnings from, again, you've rolled this out. The number one thing is safety. But then from there, it's like, how do you create a great product that people want to engage with, that people want to come back to? So it's not just a novelty, right?
Starting point is 00:37:38 where you're like, oh, I sat in an autonomous vehicle, but where they want to use this for their commute every day or for their daily lives. So how do you think about that? Other than maybe the screen, are there other things within the car? Thousands, thousands. And by the way, speaking of moats,
Starting point is 00:37:50 that's another powerful flywheel advantage, right? If you have served the first 10,000 humans who have been in a fully autonomous car, the feedback they give you, and the time you have to incorporate that into your learning, is a great positive flywheel. Approaching 100,000 fully autonomous rides in a month, that feedback does help.
Starting point is 00:38:12 I'll give you just a couple of examples. We could speak for hours just on that part. One example I'll tell you is: imagine a residential street like this one that we're passing. Remember when we were getting into the car, you were asking which side of the street you should be on? Now imagine you're just getting out of home, going to work. Some riders, you would imagine, expect the vehicle to pull up right on their side of the street. Actually, that turns out to be largely incorrect,
Starting point is 00:38:37 because the way you would achieve that is, let's say you're coming from this direction, you would make a U-turn and come back to them. But on narrow residential streets, folks are like, that was not necessary. It takes me a second to walk across the street. I could have entered on the other side. You didn't need to turn around.
Starting point is 00:38:52 You're like, I've got to get to work. But that same reasoning does not hold on a street like this one, because you're like, why are you making me cross two lanes of traffic? I was going to the coffee shop on the other side. Just drop me in front of the coffee shop. So you can see how it's not just a single rule that you code in. You don't say, hey, always park on the side that the pedestrian is on. No. It depends on the context. If you're on a busy street like that and somebody's going to a business, they would like to be dropped, very
Starting point is 00:39:22 likely in front of the business they're going to. Whereas if they are getting out of home, going to work, and it's a narrow residential street, just go to whichever side is closest. That's just one thing. Just training that rule required speaking to many riders about what they expect, learning that, and then being able to articulate that from a machine learning standpoint and an overall rules-based standpoint. That's just one example; there are many more.
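The pull-over example boils down to a context-dependent rule rather than a single hard-coded preference. A toy version, with made-up thresholds and trip categories, might look like the sketch below; as Saswat notes, the real behavior came from rider research plus learned models, not one hand-written function.

```python
# Illustrative only: a context-dependent curb choice, with hypothetical inputs.
def pullover_side(street_lanes: int, is_residential: bool, destination_side: str) -> str:
    """Very rough heuristic for which curb to stop at."""
    if is_residential and street_lanes <= 2:
        # A U-turn on a narrow street costs more time than the rider walking across.
        return "whichever_side_is_closest"
    # On busier multi-lane streets, don't make the rider cross traffic.
    return destination_side

print(pullover_side(street_lanes=2, is_residential=True, destination_side="left"))
print(pullover_side(street_lanes=4, is_residential=False, destination_side="right"))
```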
Starting point is 00:39:59 Are there any other learnings about what makes people feel comfortable or want to come back? Like one thing that's coming up is, I can see the driving wheel, right? And I can imagine, especially once L5 is hit, then we don't really need the steering wheel, right? You don't need the same car design, because in the past, for 100 years, cars have been designed around the driver. And now we don't have a driver. So, you know, it introduces all these questions. But I'm curious, I know we haven't removed the steering wheel per se, but are there other dynamics that, just from us growing up in cars designed a certain way, we expect certain things? And then are there other things where you're actually like, oh, no, we can start to get rid of some of this? Yeah, yeah. Specifically about the steering wheel, it's blocked currently by regulation beyond a certain number of vehicles and so on. And our next-generation vehicle that we designed with Zeekr and Geely
Starting point is 00:40:35 is a pretty powerful platform, thought through with the rider in mind. So we spent months and years with designers on all teams trying to visualize that. I personally do believe that thinking of all the screen and software aspects, how... He was, like, waving the car through. And that was beautiful, wasn't it? Like, it's polite, but it's also responsive. And that, you know, today we spoke
Starting point is 00:40:59 about everything from safety to artificial intelligence, to design, to user understanding. In that three seconds there, all of that got exercised, right? Because we were confident that there's enough gap. There's no imminent contact. We saw a collaborative fellow occupant of the road, not an adversarial one, because sometimes it could be the other kind. Have you seen that, by the way, where people will start yelling at it? No, I mean, yelling is an expression of emotions. That's okay. The physical demonstration of turning in, or like that example that we saw right now of somebody who ran a red light, if you recall, those are the ones that truly could result in danger, which is why the vehicle positions itself carefully. It thinks about many small
Starting point is 00:41:38 details. Where is it positioning itself? How much gap is it leaving? What's its velocity so that it has the greatest optionality if somebody were to behave recklessly? So when we are crossing that sign, we're not assuming that everybody is going to obey the light. We're trying to monitor, hmm, their light is red, which means they should be slowing down. Why aren't they slowing down? That's an anomaly. Let's prepare for this anomaly. Let's protect. Let's be defensive against that anomaly. Something that comes to mind is just as a rider, you wanting to feel confident in the car and seeing it be a little more assertive is actually really reassuring at points, when it makes sense. Because to your point, if it's constantly stopping, if it's constantly pulling over,
Starting point is 00:42:16 then I don't have confidence that this thing is going to know what to do in a kind of trivial situation. And you have a busy life. You want to get where you're going, right? So, by the way, iconic place coming up. Oh, yeah. Are we at the Painted Ladies? I think I see them up there. Yeah, yeah. By the way, that's the other thing you were asking early on, about where are we in the journey. We are in San Francisco.
Starting point is 00:42:38 We're not taking you to some little test facility in a desert to show you that this can work. We are on the road. We are in one of the most vibrant cities on the planet. And if you wanted to come by to Los Angeles, we would take you to the most important part of Los Angeles. And in Phoenix, we would take you to Scottsdale. And when you land at the airport,
Starting point is 00:42:58 we would pick you up at the airport. That's where we are in the journey. We're fully autonomous where we need to be. I love that it tells you finding a spot to pull over. Yeah, look at that. Right? And it sees all those little kids there. Do you see that?
Starting point is 00:43:10 Yep. And it's more cautious because it understands that kids can jump out more erratically than others. They're unpredictable. Yes. Oh, and vehicle approaching. So that's what you were saying earlier, so you know not to open your door.
Starting point is 00:43:23 Exactly, exactly. And now you can continue. All right, so those are the painted ladies. Beautiful. But yeah, I love the point that we're just, we're fully on the road. We're not in the middle of nowhere practicing. And you should see the eyes of that kid just noticing that we don't have a driver. Look at that. Like he's young enough where he can't even articulate, but he's like, this isn't, my pattern recognition is off here.
Starting point is 00:43:44 And by the time he grows up, this will be the more common route. Well, I mean, you mentioned you're in SF, you're in Phoenix. Yeah, Los Angeles. L.A. How have you decided which markets to address first? Is it just a matter of what cities will welcome this technology? What goes into that calculus? In the very, very early days, we tried to make sure
Starting point is 00:44:02 that we're picking a city that challenged the system in very different directions, because we were in the development stage. So we tested in 20 cities just to make sure that from the very early days we're building a generalizable driver, not one that just works in one location, but one that generalizes. Double-parked truck, by the way.
Starting point is 00:44:21 When, for example, somebody's un-parking, we notice their nose starting to jut out. It's just so bizarre to see a wheel moving on its own like that. Like, it truly does look kind of fake in a way. But yeah, so you're trying to find cities that kind of test the system, push it forward.
Starting point is 00:44:45 Yeah, and then what we found is that we have tested, for example, in Miami as well, for the heavy rain. We have tested in Death Valley for extreme temperatures. We have tested in Tahoe for snow. So there was a testing phase in which we went to 20 cities, just to make sure we had enough of a diverse data set to be building off of. Look at that, gentle, because this car was coming, that tram was coming. Anyhow, there's beauty to see every second, right? It really is something. It's like, I'm watching every little thing that's happening. And you can see this pedestrian crossing, the car going, another one approaching, but green light, so turn right after the pedestrian went off. Anyway, sorry, you were asking, does it get boring for you after a while?
Starting point is 00:45:15 No, I love seeing your excitement, because, I mean, this is my first time. So I really am, like, as they say, taking it all in. How many times do you think you've been in one? Like, just the first year, in 2019, you spent a ton of hours in it. Actually, were you scared at all?
Starting point is 00:45:36 So there's a level of like, oh, I know these work. Yeah. But when you were first getting into the vehicle, was there any, like, fear, apprehension? Not fear. The closest feeling I can describe is when you have prepared for an exam for six years. Whatever is the biggest exam you've given,
Starting point is 00:45:56 the little feeling that you got, that you have prepared as best as you possibly could for it. You left nothing on the table, but it is exam day. You're like, oh gosh, I hope I can show up, I hope this works. Well, it's funny, because a lot of people, they'll see the stats and they'll be like, that's the average human. But they think for whatever reason they outperform the average human on the road. But I'm the opposite. I'm like you, where I'm like, I want this.
Starting point is 00:46:18 The fundamental attribution error, by the way, is that more than 50% of people think they're better than average, which is not supposed to be feasible, right? So maybe on the city selection, just to close that out, so yeah, LA, 2 billion plus 10. Huge diversity of use cases, everything from commute trips, all the way to sports events, to sightseeing tours that you were mentioning. San Francisco, similarly, both SF and LA are top two among the top five ride-hailing markets in the U.S. And among the top in the world. Phoenix is the fastest-growing city in the United States.
Starting point is 00:46:50 Phoenix Airport is among the top ten busiest airports in the world. So when we commit to a city for a launch, that's different than going to a city for data collection and testing. Something else that's coming to mind is I wonder we are in the early stages. We're only in a few cities. But as consumers do see these on the roads, like this is something that some people really will want to see in their cities. And so have you seen any shift in terms of regulators maybe being a little against it to actually being like, this is a competitive advantage if my city offers this? We definitely see that in the place where we've been the longest is Phoenix East Valley.
Starting point is 00:47:26 And everybody from passengers to neighbors who haven't even taken a ride, they notice that it's a much more polite driver, to law enforcement, all the way to the city and mayoral level, all the way to the state level, absolutely. I love thinking about just what does this unlock? And there are a few industries that can unlock so much, because people do spend so much time in cars. I think that was an amazing, beautiful thing we saw from remote work: just the second and third order effects, like what do we have
Starting point is 00:47:55 when we get that commute back? But then there still are people spending a lot of time on the roads, and so I'd love to hear from you, like, what are some of those impacts, the wider impacts? Maybe it's on the way the insurance industry works, maybe it's on trucking, maybe it's on city design now that we have data. Like, that's another aspect: we now have data about how people really move around a city and interact, and how things are designed, parking, right? So maybe pick and choose what excites you. Yeah, it's pretty vast, yeah, right? So I do believe it's pretty profound,
Starting point is 00:48:28 and only some of those aspects we can see, and some we will be shocked by what it'll do. Because again, like you said, not only is the car designed around a driver, life is designed around driving, right? Like, look at how much parking space. You know, that house right there costs an insanely high amount in dollars per square foot
Starting point is 00:48:47 as I'm sure, as a new SF resident, you're aware of. So there are so many things in our cities that are designed around the assumption of not only a human driver, but also of a highly underutilized, expensive asset just sitting there. All these vehicles, just add up the cost of these vehicles and think of how much space in the city they may be consuming while not adding to the productivity of the city. Because at some point, I'm sure each of these vehicles added to the mobility and freedom of individuals. That's great. But at this instant, they're not utilized.
Starting point is 00:49:24 They're not adding value, not even to their rider. They're giving a promise that when the person who has gone in for an eight-hour workday comes out, it'll still be there. I heard this quote the other day. It was like, we dedicate more space in cities to sleeping cars than to sleeping humans, which is kind of crazy when you think about it. And first and foremost, there are people who are traveling today, and we believe that over time, the roads will just get safer.
Starting point is 00:49:47 And that in itself, I know I'm saying safe so many times, but that truly is central, right? 1.35 million people are killed on the streets. How is that acceptable, right? So that's first and foremost for all kinds of, for society at large. It's a pandemic almost, right? And this is an antidote to that. And then there are classes of individuals for whom this freedom is not available.
Starting point is 00:50:11 Above 65, many folks can't drive anymore, or drive while taking risk, or lose the freedom of mobility. There are folks with visual impairments. Then vulnerable populations being able to take jobs that they otherwise wouldn't have been able to, that nighttime use case that we mentioned, giving economic opportunity where they simply would not be comfortable otherwise. You shouldn't ask somebody to own a car before they can take their entry-level job. Like, that's a challenging economic catch-22, right? So there's that. Then the third is, yes, how much pollution idling cars in city centers are causing, how much city real estate is being lost to idle assets. So, yeah, layers and layers upon that, and yes, that's just in passenger vehicles.
Starting point is 00:50:57 Then you consider in-city delivery. Then you consider long-haul trucking, where already a tremendous amount of economic opportunity in the United States is being lost to a lack of drivers. And by the way, when we do have drivers, we have mandatory breaks, because fatigue is a real thing. We recently moved. I drove from near San Diego up to San Francisco. And we happened to do it at night. And just the number of truck drivers that we saw on the route,
Starting point is 00:51:22 I just, I knew this already, but seeing it, I was like, something feels wrong here. And I just imagine a future where, some people might not like this, but that's automated. That sounds beautiful to me. Yeah, and there are beautiful transition points as well, in the sense that truck driving could become a local job, which would be powerful in many, many different ways, in the sense that...
Starting point is 00:51:42 Oh, I see, it sensed the pylon. And you see that? Oh my gosh, that's incredible. Because see, I was wondering, actually, if it could tell if that's a human. So they can tell that those are humans, but then these are pylons, and it can sense every single one. And there was, like, a roughly written keep right, and it understood, going there. Yeah, because you could imagine how it might think to go left. It might think there's enough space there.
Starting point is 00:52:04 So I think construction zones, by the way, are an interesting change to an otherwise structured world, right? Because no construction zone is the same, right? Yeah, exactly. Each one is unique, and the degree of intelligence required to figure that out is pretty substantial as well, because suddenly you're breaking the prior structure of the world. Yeah, so I mean, you mentioned how these vehicles can actually make our world safer. I mean, maybe I'm getting a little ahead of myself,
Starting point is 00:52:28 but do you imagine a future where, right now it's like most humans drive, where it actually becomes illegal, or only a minority of people are able to drive, because we get the technology so far along that it's just, again, a no-brainer for us to have the tech drive us around instead of vice versa? The mission we have is being able to provide this option, safe and easy, for people and things to move around. I think it's good to have the choice.
Starting point is 00:52:59 I'd love to hear, you've been working in this space for a while, what gets you excited? Seeing the riders the first time, the fifth time, and what they say about us, that gets me excited. Amazing. Well, thank you so much. This was an excellent first ride. I'm so excited to do this more
Starting point is 00:53:16 I love the music too, that they're welcome. By the way, we had lots of music as well. Yeah, I know. I know. But yeah, a lot of effort has gone into this one as well. Yeah, and I can't wait. I imagine this being personalized as well in the future. The climate control, the music, hooking up to Spotify.
Starting point is 00:53:31 From your app, we did both of these screens as well, because there would be cases in which somebody would be in front, synchronizing all of that. Amazing. For another time. Well, this was great, and I guess we just hop out. Yeah, yeah. We're here.
Starting point is 00:53:45 This is great. I guess I shouldn't. It knows I'm here. I'm imagining myself on the screen as the little dot. So how did you find the ride? I really enjoyed it. I mean, I feel like it's funny because I only really noticed the lack of driver for maybe two minutes. And then, as you saw, I was so, like,
Starting point is 00:54:15 involved in the conversation, you know, you don't even notice there. Right, right. And that's what we want. What's that quote where it's like any sufficiently advanced technology is indistinguishable from magic? And it really does feel like, wow, we're here. Yeah. We're here.
Starting point is 00:54:30 We just did it. Well, I'm glad to have been there when you had your first experience. Yeah. Well, thank you so much, this was so cool. Yeah. All right. If you made it to the end here, I just wanted to say thank you. Thank you. This episode in particular was actually really special to me. I got my first
Starting point is 00:54:49 learner's permit when I was 16 in Canada, and I actually waited until I was 29 to get my driver's license because, quite frankly, it scared me, and I was waiting for self-driving. And finally, at least in some places like San Francisco, it has arrived. And I am so excited, I am so happy that I could share this with you. And if you are just as excited as I am to see how this whole thing unfolds, let us know in the comments what you are most excited about, and how you think autonomous vehicles might most reshape society. Because there really are so many implications, whether it's public infrastructure, energy, finance, shopping. I'm so interested to see how this technology finally comes into play. All right, on that note,
Starting point is 00:55:35 thank you again so much for joining me. I'll see you next time and we'll see you on the road. Thank you.
