a16z Podcast - Building the World's Most Trusted Driver
Episode Date: August 5, 2024

Waymo's autonomous vehicles have driven over 20 million miles on public roads and billions more in simulation. In this episode, a16z General Partner David George sits down with Dmitri Dolgov, CTO at Waymo, to discuss the development of self-driving technology. Dmitri provides technical insights into the evolution of hardware and software, the impact of generative AI, and the safety standards that guide Waymo's innovations.

This footage is from AI Revolution, an event that a16z recently hosted in San Francisco. Watch the full event here: a16z.com/dmitri-dolgov-waymo-ai

Resources:
Find Dmitri on Twitter: https://x.com/dmitri_dolgov
Find David George on Twitter: https://x.com/DavidGeorge83
Learn more about Waymo: https://waymo.com/

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
Hello, everyone. Welcome back to the a16z Podcast. This is Steph. Now, one of my favorite
podcasts we've recorded since I joined the team was just about this time last year. That episode
was on autonomous vehicles, but it was actually also in an autonomous vehicle. That was my first
ride in a self-driving car. And over the last year, I've seen so many others have their first
as Waymo has expanded to the public in Phoenix and San Francisco, while also placing its roots in Austin and
LA. In 2015, Waymo tested its first fully driverless ride on public roads. It then opened to the
public in Phoenix in 2020, but it wasn't until 2022 that autonomous drives were offered in San
Francisco. And by the end of 2023, it clocked in over 7 million driverless miles. Slowly,
then all at once. So with this space moving so quickly, we wanted to give you an update on where
this industry is today. Passing the baton to properly introduce this episode,
here is our very own AI Revolution host and A16Z general partner, Sarah Wang.
As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice,
or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund.
Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
For more details, including a link to our investments,
please see A16Z.com slash disclosures.
Hey guys, I'm Sarah Wang, general partner on the A16Z growth team.
Welcome back to our AI Revolution series.
In this series, we talk to the Gen AI builders who are transforming our world to understand,
one, where we are, two, where we're going, and three, the big, open questions in the field.
Our guest this episode is Dmitri Dolgov, the co-CEO
of Waymo. Dmitri has led Waymo to solve some of the biggest challenges in bringing AI to the
real world. And after tens of millions of miles of testing, Waymo's vehicles have shown themselves
to be safer and more reliable than human drivers, myself included. Dmitri has a unique
perspective, given that his work has spanned multiple AI/ML development cycles across decades.
He was an early pioneer in self-driving cars, working with Toyota and Stanford on DARPA's
grand challenge before joining Google's self-driving car project, which then evolved into Waymo.
In this conversation from a closed-door event with A16Z general partner David George,
Dmitri talks about the potential of embodied AI, the value of simulations and building training
data, and his approach to leading a company focused on solving some of the world's hardest
problems. Without further ado, here's Dmitri in conversation with David.
Maybe to start, take us back to Stanford, if you will, and that was when you first started
working on the DARPA project.
And maybe give us a little bit of your history of how you ended up from there to here.
My introduction to autonomous vehicles was when I was doing a postdoc at Stanford that you
just mentioned, David.
I got pretty lucky with the timing of it. This was when the DARPA Grand Challenges were happening.
DARPA is the Defense Advanced Research Projects Agency that started these competitions
with the goal of boosting this field of autonomous vehicles.
And the one that I got involved in was in 2007 that was called the DARPA Urban Challenge.
So the setup there was, it's kind of like a toy version of what we've been working on since then.
It was kind of supposed to mimic the driving in an urban environment so they kind of created an urban environment.
So they kind of created a fake city on an abandoned airbase,
and they populated it with a bunch of autonomous vehicles,
a bunch of human drivers, and they had them do various tasks.
So that was kind of my introduction
to this whole field.
And I think these DARPA challenges are often
considered by people in the industry
kind of a foundational, pivotal moment
for this whole field.
And it was definitely that for me.
It was like a light bulb, light switch moment
that really got me hooked.
What was like the hardware and software
that you guys had at that point?
This is 2007?
Yeah, yeah, you know, at a very high level,
not unlike what we talk about today.
You know, a car that, you know, has some instrumentation
so you can, you know, tell it what to do
and you get some, you know, feedback back.
Then you have kind of what's called a pose system,
a bunch of, you know, inertial measurement units,
accelerometers, gyroscopes, and GPS that kind of tell you how you're moving
through space. And it has sensors, radars, lidars, and cameras, you know, the same stuff
we still use today, and then there's a computer that, you know, gets the sensor data in, and
then tells the car what to do, and a bunch of software.
And, you know, software had, you know, perception components and decision-making planning
components and some AI.
But of course, everything, you know, that we had, each one of those things over that span,
you know, how long has it been, 18 years, more than that, it's changed drastically, right?
So when we talk about the AI today versus the AI we had, you know, back in 2007, 2009, there's nothing in common.
And similarly, everything else has changed.
You know, the sensors are not the same.
Computers are not the same.
Yeah, of course.
So then, okay, so take us, so at that point, that was the pivotal, that was like the light bulb moment.
And then at that point, you said, okay, I'm at Stanford.
I want to make this my career, right?
Is that, and then it was Toyota, and then where did it go from there?
I don't know if I thought about it in those terms.
I was like, like, this is pretty cool to work on.
This is the future.
I want to make it happen.
I want to be building this thing.
Career.
Okay, you know, that can wait.
But that was the next step.
That was the next big step: a number of us from the DARPA challenge competitions started the Google self-driving project.
It was about a dozen of us.
We came together at Google in 2009, with support and excitement from Larry and Sergey, to see, you know, if we could take it to the next step.
And then, you know, we worked on it for a few
years, and that project then became Waymo in 2016, and we've been on this path since then.
Okay, so we have this new big breakthrough in generative AI. Some would say it's new,
some would say it's 70 years in the making. How do you think about layering advances that have
come from generative AI to what many would describe as more traditional AI or machine learning
techniques that were kind of the building blocks for self-driving technology up to that point?
Yeah, great question.
So maybe you can, you know, generative AI is kind of a broad term.
Sure.
So maybe take a little bit of a step back and talk about kind of the role that AI plays in autonomous vehicles
and kind of how we saw the various breakthroughs in AI map to the space of our task.
Right.
So, like you mentioned, you know, AI has been part of self-driving autonomous vehicles from the earliest days.
Back when we started, it was a very different kind of AI, you know, ML, kind of classical machine learning:
decision trees, classical computer vision with kind of hand-engineered features, you know, kernels and so forth.
And then, you know, one of the first really important breakthroughs that happened in AI and computer vision,
but really was important for our task was the advancement in convolutional neural networks, right around 2012, right?
Many of you are probably familiar with AlexNet and the ImageNet competition.
This is where AlexNet blew all other approaches out of the water.
So that obviously has had very strong implications for our domain,
like how you do computer vision and not just on cameras, right?
You know, you can use ConvNets to interpret what's around you
and do kind of object detection and classification from camera data,
from LiDAR data, from your imaging radars.
So that was kind of a big boost around that, you know, 2012-2013 time frame.
And then we played with those approaches and tried to extend the use of ConvNets to other domains,
you know, a little beyond perception, with, you know, some interesting but limited success.
Then another big, very important breakthrough happened around 2017, when Transformers came around.
It had a really huge impact on language, language understanding, language models, you know, machine translation, so forth.
And for us, it was a really important breakthrough that really allowed us to take ML in AI to new areas well beyond perception.
And so if you think about transformers and the impact that they had on language, the intuition is that they're good at understanding and predicting and generating sequences of words, right?
And in our case, in our domain, think about the tasks of understanding and predicting what, you know, people
will do, the other actors in the scene, or the task of decision-making and planning
your own trajectories, or, in simulation, our
version of generative AI: generating behaviors of how the world will evolve.
These behaviors, these sequences, are not unlike sentences,
right? You're kind of operating on the state of objects, and there's kind of
local continuity, but then the global context of the scene really matters.
So this is where we saw some really exciting breakthroughs in behavior
prediction and decision-making and simulation. And then, you know, since then we've been on
this trend of models getting bigger. You know, people started building
foundation models for, you know, multi-task learning, and most recently, over the
last couple of years, all the breakthroughs in large language models and,
you know, modern-day generative AI, visual language models, where
you kind of align image understanding and language understanding. And
most recently, one thing I'm pretty excited about is kind of the intersection or combination
of the two. So that's what we've been very focused on at Waymo most recently, is taking
kind of the AI backbone, the Waymo AI that over the years we've built up,
that is really proficient at this task of autonomous driving, and combining it with kind of the
general world knowledge and understanding of these VLMs.
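To make the "sequences are not unlike sentences" analogy concrete, here is a minimal illustrative sketch in PyTorch of encoding past agent states with a Transformer and decoding future motion. This is not Waymo's model; the state features, dimensions, and prediction horizon are assumptions made up for illustration.

```python
# Hypothetical sketch: treat agent motion like a "sentence" of states and let a
# Transformer predict future states, analogous to next-token prediction in language.
import torch
import torch.nn as nn

class TrajectoryTransformer(nn.Module):
    def __init__(self, state_dim=4, d_model=128, nhead=8, num_layers=4, horizon=10):
        super().__init__()
        self.embed = nn.Linear(state_dim, d_model)            # project (x, y, heading, speed) into model space
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, horizon * state_dim)   # decode a short future rollout
        self.horizon, self.state_dim = horizon, state_dim

    def forward(self, history):
        # history: (batch, num_observed_agent_states, state_dim) -- past states of all agents in the scene
        x = self.encoder(self.embed(history))                 # self-attention mixes local continuity with global scene context
        future = self.head(x.mean(dim=1))
        return future.view(-1, self.horizon, self.state_dim)

model = TrajectoryTransformer()
past = torch.randn(2, 40, 4)        # 2 scenes, 40 observed agent states, 4 features each
print(model(past).shape)            # torch.Size([2, 10, 4])
```

The self-attention step is what lets the prediction depend on the global context of every other actor in the scene, which is the property Dmitri is pointing to.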
One thing that you just mentioned is the role of simulation, and you guys have had major
breakthroughs in the use of simulation. And this idea in, you know, the recent breakthroughs
in generative AI around synthetic data and its usefulness is somewhat in question. I would say
in your field, this idea of synthetic data and simulation is extremely useful, and you've proven
that. So maybe you could just talk about the simulation technology you guys have built,
how it's allowed you to scale, you know, build that real world understanding,
you know, and maybe how it's changed in the last few years.
Yeah, yeah, definitely. It is super important in our field.
Largely, if you think about this question of, you know, evaluating the driver.
Like, you know, is it good enough? It's, you know, how do you answer that?
There's a lot of metrics and a lot of data sets you have to build up.
And then how do you evaluate the latest version of your system?
You can't just throw it on the physical world and then see what happens.
You have to do it in simulation.
But of course, the new system behaves differently from what might have happened in the world otherwise.
So you have to have a realistic closed-loop simulation to give you confidence in that.
So that is one of the most important needs
for the simulation.
You also mentioned synthetic data,
and that's another area where simulation allows you
to have very high leverage.
You just kind of explore the long tail events, right?
Maybe there's something interesting
that you have seen in the physical world,
but you know, you want to modify that scenario
and you want to kind of turn one event into thousands
or tens of thousands of variations of that scenario.
You know, how do you do that?
You know, this is where the simulation comes in.
And then, you know, lastly, if you sometimes want to,
like, evaluate and train on things that you've never seen,
even in our very vast experience.
So this is where purely synthetic simulations come in
that are not based on anything that you have seen
in the physical world.
So in terms of technologies that come into play,
I mean, it's a lot.
And that is like a huge generative AI problem.
But what's really important is that that simulator is realistic.
It has to be realistic in terms of, you know, sensor or perception realism, right?
It has to be realistic in terms of the behaviors that you see from other dynamic actors, right?
You know, if other actors are not behaving in a realistic way, like if pedestrians are not walking the way they do in the real world, that doesn't help you.
And you need to be able to quantify the scenarios that you create in simulation against
the realism and the rate of occurrence
in the physical world, right? It's very easy
to sample something totally crazy
in a simulator, but then, you know, what do you
do with that? So I think that brings me to the third
point on realism: it has to be
kind of realistic and quantifiable
at the macro level, at the statistical level.
So there's, and you can imagine there's a lot of work
that goes into building a simulator that is
large scale and has, you know, that
level of realism across those categories.
And if you kind of think about it intuitively,
you know, to build a good driver,
you need to have a very good simulator.
But to have a good simulator,
you actually have to build models
of like realistic pedestrians and cyclists and drivers, right?
So, you know, you kind of do that iteratively.
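As a rough illustration of the "one logged event turned into thousands of variations" idea described above, here is a hypothetical sketch. The Scenario fields and the notion of handing each variant to a closed-loop simulator are invented for illustration and are not Waymo's simulation API.

```python
# Hypothetical sketch: fuzz a logged scenario into many perturbed variations
# that a closed-loop simulator could then evaluate the driver against.
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Scenario:
    pedestrian_speed: float   # m/s
    pedestrian_offset: float  # meters from the crosswalk center
    vehicle_gap: float        # seconds until crossing traffic arrives

def variations(seed, n=10_000, rng=random.Random(0)):
    """Yield perturbed copies of a logged scenario."""
    for _ in range(n):
        yield replace(
            seed,
            pedestrian_speed=seed.pedestrian_speed * rng.uniform(0.5, 1.8),
            pedestrian_offset=seed.pedestrian_offset + rng.gauss(0.0, 1.5),
            vehicle_gap=max(0.5, seed.vehicle_gap + rng.gauss(0.0, 1.0)),
        )

logged = Scenario(pedestrian_speed=1.4, pedestrian_offset=0.0, vehicle_gap=3.0)
for variant in variations(logged, n=3):
    print(variant)  # each variant would be fed to the closed-loop simulator
```

The quantification point from the conversation would then apply on top of this: each sampled variant has to be weighted by how often something like it actually occurs in the physical world.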
Yeah, of course.
And then by having this simulation software
that is very good at mimicking the real world
and very usable in the sense that you can create variations in the scenes,
you can actually give the driver multiples of the amount of experience
that they have on the road.
That's exactly right.
In real miles driven, is that right?
That's exactly right.
I mean, we've driven, you know, tens of millions of miles in the physical world.
At this point, we've driven more than 15 million miles in full autonomy,
what we call, you know, rider-only mode.
But we've driven, you know, tens of billions of miles of simulation,
so you get, you know, orders of magnitude of an amplifier.
Speaking of multiples of miles driven,
one of the hotly debated topics in the AI world today is this concept of scaling laws.
So how do you think about scaling laws as
it relates to autonomous driving?
Is it miles driven?
Is it certain experience had?
Is it compute?
What are the ways that you think about that?
So model size matters.
So we are seeing scaling laws apply.
A lot of typical old-school models are severely undertrained.
And so if you have a bigger model and you have the data,
that actually does help you.
You just have more capacity that generalizes better.
So we are seeing the scaling laws apply there.
Data, of course, usually matters, right?
And but it's not just, you know, counting the miles, right?
Or hours.
It has to be, you know, the right kind of data
that, you know, teaches the models or trains the models
to be good at the rare cases that you care about.
And then, you know, there is a bit of a wrinkle
because you can build those very large models,
but in our space, it has to run on board the car, right?
So you are somewhat constrained.
So you have to distill it into your, you know, onboard system.
But we do see a trend, which is, you know, a common trend, and we see it play out in our space
where you're much better off training a huge model and then distilling it into a small model
than just training small models.
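A generic sketch of the "train a huge model, then distill it into a small onboard model" pattern he describes: standard knowledge distillation with a temperature-softened teacher. This is not Waymo's training code; the model sizes, temperature, and loss weights are placeholders.

```python
# Hedged sketch of teacher-student distillation: the small "onboard" student is
# trained on both ground-truth labels and the large teacher's softened outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(64, 1024), nn.ReLU(), nn.Linear(1024, 10)).eval()
student = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))  # small enough to run onboard

def distill_step(x, y, optimizer, T=4.0, alpha=0.5):
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=-1)       # teacher's softened predictions
    logits = student(x)
    loss = alpha * F.cross_entropy(logits, y) + (1 - alpha) * T * T * F.kl_div(
        F.log_softmax(logits / T, dim=-1), soft_targets, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
print(distill_step(x, y, opt))
```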
Yeah.
I'm going to shift gears a little bit and I'm going to do a sort of simplifying statement,
which is probably going to drive you crazy.
But the DARPA School of Thought is, you know, there's sort of a rules-based approach, right?
a more traditional kind of AI-based approach with a massive amount of volume and you document
edge cases, and then the model learns how to react to those. The more recent approaches
from some other large players and startups would say, hey, we just have AI from the start, make
all the decisions end to end. You don't need to have sort of all that pattern recognition and
learning, you know, like the end-to-end driving that is kind of a tagline out there.
What is your interpretation of that approach, and what elements of that approach have you taken and applied inside of Waymo?
Yeah, I think, you know, sometimes the way people talk about it is as kind of this weird dichotomy, is it this or that?
Yeah, of course.
But it's not. It's that and then some, right?
So it is, you know, big models. It is end-to-end models.
It is generative AI and combining these models with VLMs, right?
But the problem is it's not enough.
Right.
So, I mean, like, we all know the limitations of those models, right?
And we've seen, you know, through the years, a lot of these breakthroughs in AI, right?
ConvNets, Transformers, big end-to-end foundation models.
They're huge boosts to us.
And, you know, what we've been doing at Waymo through the history of our project is kind of constantly
applying and pushing forward these state-of-the-art
techniques, ourselves in some cases, but then
applying them to our domain. And what we've been
learning is that they really give you a huge boost,
but they're just not enough.
Right. So the theme
has always been that you can take your
kind of latest and greatest
technology of the day,
and it's fairly easy to get
started. Right. Like the curves
always look like that, and they keep
that kind of shape, but the really hard
problems are in that remaining 0.001%.
Right. And there, it's
not enough. So then you have to do stuff on top of that, right? So yes, you can take, you know,
nowadays, you can take, you know, an end-to-end model, go from sensors to, you know, trajectories
or actuation. Typically, you don't build them in one stage, you build them in stages, but, you know,
you can do, like, backprop through the whole thing. So, you know, the concept is very, very
valid. You can, you know, combine it with a VLM, and then, you know, you
add closed-loop simulation of some sort, and, you know, you're off to the races. You can have
a great demo, like, almost out of the box. You can have, you know, an ADAS
or driver-assist system, but that's not enough to go all the way to full autonomy.
So that's where really a lot of the hard work happens.
So I guess the question is not, is it this or that?
It's this.
And then what else do you need to take it all the way to have the confidence in, you know,
so that you can actually remove the driver and go for full autonomy?
And that's a ton of work.
That's a ton of work through the entire kind of life cycle of these models
and the entire system, right?
So it starts with training.
Like, how do you train, how do you architect these models?
How do you, you know, evaluate them?
Then, you know, if you put in a bigger system,
the models themselves are not enough,
so you have to do things around them.
You know, modern generative AI is great,
but there are some issues with, you know, hallucinations,
with, like, explainability.
Exactly, right.
Exactly.
So, you know, they have some weaknesses
in kind of goal-oriented planning and policymaking
and kind of understanding this, you know,
operating in this 3D spatial world, right?
So you have to add, you know, something on top of that.
We talked a little bit about the simulator.
That's a really hard problem in and of itself.
And then once you have something, once you deploy it and you learn, how do you feed that back?
So I guess this is where all of the really, really hard work happens.
So it's not like end-to-end versus something else.
It is end-to-end.
And, you know, big foundation models.
And then the hard work.
And then all the hard work.
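To illustrate the earlier point that "you build them in stages, but you can do backprop through the whole thing," here is a toy sketch of a staged pipeline, perception, prediction, and planning, trained jointly with a single loss. Every shape, module, and the supervision signal here is made up for illustration; a real system would be far richer.

```python
# Toy sketch of a staged, differentiable driving pipeline trained end to end.
import torch
import torch.nn as nn

class Driver(nn.Module):
    def __init__(self):
        super().__init__()
        self.perception = nn.Sequential(nn.Linear(512, 128), nn.ReLU())  # sensor features -> scene encoding
        self.prediction = nn.Sequential(nn.Linear(128, 64), nn.ReLU())   # scene -> other-agent futures
        self.planning = nn.Linear(64, 20)                                # futures -> ego trajectory (10 x/y points)

    def forward(self, sensor_features):
        scene = self.perception(sensor_features)
        futures = self.prediction(scene)
        return self.planning(futures).view(-1, 10, 2)

model = Driver()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
sensors = torch.randn(8, 512)              # stand-in for fused sensor features
expert_trajectory = torch.randn(8, 10, 2)  # stand-in supervision, e.g. logged driving

opt.zero_grad()
loss = nn.functional.mse_loss(model(sensors), expert_trajectory)
loss.backward()                            # gradients flow through all three stages
opt.step()
```

The "and then the hard work" part of the conversation is everything this toy leaves out: evaluation, closed-loop simulation, handling hallucinations, and the long tail.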
Yeah, it totally makes sense.
That is a great segue into all of the progress that you guys have made, right?
Riding in the Waymo, for those who have done it, is an extraordinary experience.
It's not to say that you have solved all of these complex tasks, but you've solved a lot of them.
What are some of the biggest AI or data problems that you still feel like you're facing today?
The short answer is going to be taking it to the next order of magnitude of scale, multiple orders of magnitude of scale.
And with that come additional improvements that we need to make a great service.
Right? But, you know, just to level set in terms of where we are today, you know, we are, you know, driving in all kinds of conditions.
Yeah.
We're driving, you know, 24/7 in San Francisco and in Phoenix, those are our most mature markets, but also a little bit in LA and in Austin.
And, you know, all of the complexity that you see, you know, go and drive around the city, right?
All kinds of weather conditions, whether it's, you know, fog or, you know, storms or, you know, dust storms
or rainstorms down here, like all of that,
all of those are conditions that we do operate in, right?
So then I think about, you know,
what makes it a great, you know, customer experience, right?
Like what does it take if you grow by, you know,
next, you know, orders of magnitude?
There's a lot of improvements that we want to make
so that it becomes a better service for you
to get from point A to point B, right?
Like, we ask for feedback from our riders.
A lot of feedback we get is, you know,
it has to do with the quality of your pickup
and drop-off locations, right?
So we're learning from users.
Like, we want to make it a magical, seamless, you know, delightful experience
from the time you kind of start the app on your phone to when you get to your destination.
So that's a lot of the work that we're doing right now.
Yeah.
Pick up and drop off for what it's worth is an extraordinarily hard problem, right?
Like, do you kind of block a little bit of a driveway if you're in an urban location
and then have a sensor that says, oh, actually, I just saw somebody opening a garage door,
I need to get out of the way. You know, how far down the street is it acceptable to pull over,
or if you're in a parking lot, where in the parking lot do you go?
Like, this is an extraordinarily hard problem, but to your point, it's huge for user experience.
That's exactly right, right?
And I think that's a good example of, like, just one thing, one of the many things that we have to
build in order for this to be an awesome product, right?
Not just like a technology demonstrator.
And I think you just, like, you hit exactly on a few things that, I mean,
make something that kind of on the face of it
might seem fairly straightforward, right?
Okay, you know, I know there's a place on the map
and I need to pull over, so, like, how hard can it be, right?
But really, if it's a complicated, you know,
dense urban environment, there's a lot of these factors, right?
Is there, like, you know, another vehicle
that you're gonna be blocking?
Is there a garage door that's opening, right?
Like, you know, what is the most convenient place
for the user to be picked up?
So it really gets into
the depth and the subtlety of understanding
the, you know,
semantics and the dynamic nature of this driving task and doing things that are safe, comfortable,
and predictable and lead to a nice, seamless, pleasant, delightful customer experience.
Of course. Okay, so you've mentioned this stat, but 15 million miles, I know the number's
probably a little bit bigger than that, but you just released it Tuesday. Yeah, it's growing by the
day. 15 million autonomous miles driven. That's incredible. Even more impressive, and you didn't share
the stat yet. It results in 3.5 times fewer accidents than human drivers. Is that right?
And I think 3.5x is the reduction in injuries, and it's about a 2x reduction in the
police-reportable, kind of lower-severity incidents. This sort of comes to a question of both kind
of regulatory and, you know, kind of business or ethical judgment. What is the right level
that you want to get to? Obviously, you want to constantly get better, but is there a level at which
you say, okay, we're good enough, and that's acceptable to regulators?
Yeah, so there's no, you know, simple, super simple, short answer.
Right.
I think it starts with that.
It starts with those statistics that you just mentioned.
Yeah.
Like, at the end of the day, what you care about is that roads are safer.
So then you look at those numbers where we operate today,
and we have strong empirical evidence that our cars are, in those areas, safer than human drivers.
So on balance, that means a reduction in, you know, collisions and,
you know, harm.
Then, on top of those numbers, and we've been publishing this, you're quoting
the latest numbers that we shared, and we consistently, you know, share numbers as our service scales
up and grows, you can also bring in, you know, an additional lens of, you know, how much
did you contribute to a collision?
And we actually published, I think it was based on about 4 million miles, 3.8 million miles,
a joint study with Swiss Re, which is, I think, the
largest global reinsurer in the world, and the way they look at it is who contributed
to an event.
And there we saw the same theme, but the numbers were very strong: there was a 76%
reduction in property damage collisions, and a 100% reduction in claims around
bodily injury.
So if you kind of bring in that lens, I think the story becomes even more compelling.
That is extremely compelling.
Right. There are
some collisions where, and that's the bulk of the events that we see,
we'd be stopped at a red light, and then somebody just plows into you, right? Sure.
But then, like, I think, you know, we do know it's a new technology, it's a new
product, so it is held to a higher standard. So when we think about our safety and our readiness
methodology, we don't stop at just the rates, right? That's something we've built over the years,
as one of the huge areas of investment and experience.
Like, you know, what else do you need?
So we've done a number of different things.
We've published some of our methodologies.
We've shared our readiness framework.
You know, we do other things, like we actually, not just statistically,
but on, you know, specific events, we build models of an attentive, very good human driver.
Like, a non-distracted human, and it's a good question whether such a driver exists, right?
But that's kind of what we compare our driver to, right?
And then in particular scenarios, we evaluate ourselves versus that model of a human driver
and we hold ourselves to the bar of doing well compared to that very high standard.
And then you pursue other validation methodologies.
So that's my answer is that it's the aggregate of all of those methodologies that we look at
to decide that, yes, the system is ready enough to be deployed at scale.
I'd love for you to talk about what you think maybe today and in the future,
about market structure, competition,
and what kind of role you envision Waymo playing?
So the way we think about Waymo and our company
is that we are building a generalizable driver.
That's the core.
And that's at the core of the mission
of making transportation safe and accessible.
And we're talking about ride-hailing today.
That's our main, most mature,
our primary application, but, you know, we envision a future where the Waymo driver will
be deployed in other commercial applications, right? There's deliveries, there's trucking,
there's personally owned vehicles, right? So in all of those, you know, our guiding principle
would be to think about the go-to-market strategy in a way that accelerates access to this
technology and gets it, you know, deployed, while of course doing it
gradually and deliberately and safely,
as quickly and broadly as possible.
So with that, as our guiding principle,
we're going to explore different commercial structures,
different partnership structures.
For example, in Phoenix today, we have a partnership
with Uber, both in ride-hailing
and Uber Eats.
So in Phoenix, we have our own app.
You can download the Waymo app and take a ride.
And our vehicle will show up and take you where you want to go.
That's one way to experience.
our product. Another one is through the Uber app. We have a partnership where you can get
matched through the Uber app with our product, the Waymo driver, the Waymo vehicle, and it's
the same experience, right? But this is another way for us to accelerate and give more people
a chance to experience full autonomy, and it gives us a chance to kind of, you know, think about the
different go-to-market strategies, right? One is, you know, us, you know, having, you know, more
of our own app. The other one is more of a, you know, driver as a service or somebody else's
network. So, you know, it's still early days, but we will iterate, all, you know,
in service of that main principle. That's amazing. Yeah, that's going to be, that's going to be
exciting. Maybe back to the vehicle, what about the hardware stack that you use? You and I
have talked a bunch about, you know, you said like, hey, going all the way back to DARPA,
you know, it's kind of the same stuff, right? It's, you know, the sensors. They've advanced
quite considerably, but, you know, you still use, you know, radars and lidar. Do you think that
remains the future path for autonomous driving, lidar specifically?
Yeah, no, I mean, the sensors are physically different, right?
Cameras, lidars, radars, they each have their benefits.
Each one brings its own benefits, right?
Cameras obviously give you color and they give you high, you know, very high resolution.
Lidars kind of give you, you know, a direct 3D measurement
of your environment, and they're an active sensor, right?
They kind of bring their own energy, so even when
it's pitch dark and there's no external light source,
you know, you still see just as well
as you do during the day, you know, better in some cases.
And then, you know, radar is, you know,
very good at like punching through just, you know,
physics, different wavelengths, right?
So if you build an imaging radar, which we do ourselves,
you know, it allows us to, you know,
give you an additional redundancy layer,
and it has benefits.
It's also an active sensor, it can directly measure, you know, through Doppler, the velocity of other objects,
and, you know, it degrades differently and more gracefully in some weather conditions,
like very dense fog.
So, you know, they all have their benefits.
So our approach has been to, you know, use all of them, right?
And, you know, that's how you have redundancy and that's how you get an extra boost in capability of the system.
And, you know, today we are deployed on the fifth and working to deploy the sixth generation of our sensors.
And, you know, over those generations, we've improved, you know, reliability, we've improved, you know, capability and performance.
And we've brought down the cost very significantly, right?
So, yeah, I think the trend, you know, for us, you know, using all three modalities just makes a lot of sense.
Again, you know, you might make different tradeoffs if you are building a driver assist system versus a fully autonomous vehicle.
where that last 0.01% really, really matters.
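As a toy sketch of the multi-modality point above, here is a late-fusion classifier that combines camera, lidar, and radar features so that no single sensor decides alone. The encoders and feature sizes are invented for illustration and are not how Waymo's perception stack is actually built.

```python
# Hedged sketch of late fusion across the three sensing modalities discussed above.
import torch
import torch.nn as nn

class LateFusionDetector(nn.Module):
    def __init__(self, feat=64, num_classes=5):
        super().__init__()
        self.camera_enc = nn.Linear(256, feat)  # high-resolution color/appearance cues
        self.lidar_enc = nn.Linear(128, feat)   # direct 3D geometry, works in the dark
        self.radar_enc = nn.Linear(32, feat)    # Doppler velocity, degrades gracefully in fog
        self.classifier = nn.Linear(3 * feat, num_classes)

    def forward(self, camera, lidar, radar):
        fused = torch.cat([self.camera_enc(camera),
                           self.lidar_enc(lidar),
                           self.radar_enc(radar)], dim=-1)
        return self.classifier(torch.relu(fused))  # redundancy: all three modalities contribute

det = LateFusionDetector()
scores = det(torch.randn(4, 256), torch.randn(4, 128), torch.randn(4, 32))
print(scores.shape)  # torch.Size([4, 5])
```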
Yeah, absolutely.
One of the observations that we have
from the very early days of this wave of LLMs
is that there has been sort of already a massive race
of cost reduction,
and many would argue that it's sort of a process
of commoditization already, even though it's very early days.
I would say the observation from autonomous driving
over many, many years now is kind of the opposite thing.
There's been a thinning of the field.
It's proven to be much, much harder than expected.
Can you just talk about maybe why that's the case?
You know, they always have this property that it's very easy to get started,
but it's insanely difficult to get it, you know, all the way, you know,
to full autonomy so that you can remove the driver.
And, you know, there's maybe a few factors that contribute to that.
One is, you know, compared to the LLMs and, you know, kind of AI in the digital world.
You have to operate in the physical world.
The physical world is messy.
It is noisy.
And, you know, it can be quite humbling, right?
There's all kinds of, you know, uncertainty and noise that can kind of pull you out of distribution, if you will.
Right, sure.
So that's one thing, that makes us very difficult.
And secondly, it's safety, right?
Sure.
These AI systems, in some domains, you know, there's creativity, and it's great.
In our domain, the cost of mistakes or lack of accuracy has very serious consequences, right?
So that sets the bar very, very high.
And then the last thing is that it is, you know, you have to operate in real time.
You're putting these systems on fast-moving vehicles and you have to, you know, milliseconds matter, right?
You have to make the decisions very quickly.
So I think it's, you know, the combination of those factors that, together, leads to, you know, the trend that you've been seeing. It's an and, right?
You have to be excellent on this and this and this, right?
It's all of the above.
The bar is very, very high for, you know, every component of the system and how you put them together.
But, you know, there's big advances, and they, you know, boost you and they propel the system forward, but there are no silver bullets, right?
And there's no shortcuts if you're talking about full autonomy.
And because of that lack of tolerance for errors, you have a very high bar for safety.
You have a very high burden from regulators.
You know, it's very costly to go through all those processes.
And so it makes sense.
And I'm very grateful that you guys have seen it through despite all the humbling experiences that you had along the way.
It's been a long journey, but it's, you know, for me and many people at Waymo, it is super exciting and very, very rewarding to finally see it become reality.
Now, we talk about safety and AI in many contexts, right? That's a big question, right?
But, you know, here we are in this application of AI in the physical world.
We have, you know, at this point, a pretty robust and increasing body of evidence that, you know, we are seeing, like, tangible safety benefits.
So that's very exciting.
Yeah, I always say to people, it was a long journey and very costly and expensive along the way.
But this is probably the most powerful manifestation of AI that we have available to us in the world today.
I mean, you can get in a car without a driver, and it's safer than having a human.
And that's just remarkable.
What were some of those humbling events along the way?
And were those in the early days, in the first couple of years?
Oh, I'm sorry.
I remember one, there's one route that we did that started, I think it started in Mountain View,
then went through Palo Alto, then went, you know, through the mountains to Highway 1,
then took Highway 1 to San Francisco, and I think, you know, went around the city a little bit
and actually finished on Lombard Street. So, like, in 2009, 2010, that is really complicated.
A hundred miles from beginning to end, right?
You know, human drivers would fail at that task, I think. So yeah, yeah, yeah.
So, you know, we're doing it one day, and we're driving and kind of made it through
the Mountain View, Palo Alto part, we're driving through the mountains, and it's foggy, it's early morning,
and then we're, like, seeing objects, and, you know, objects, like random stuff on the road in
front of us. There's, like, a bucket and, like, a shoe, and then at some point we come
across, like, you know, a rusty bicycle. Like, okay, what's going on there? And the car, you know,
handles it okay. You know,
maybe not super smoothly, but, you know, we didn't get stuck, and eventually we catch up to, like, this dump truck
that has all kinds of stuff on it and is just, you know, periodically losing things
that present obstacles to the car.
This is like a cartoon, you know, continuation of anomalies being thrown at you guys.
That's pretty cool.
Okay, last question.
I'm going to tee you up to do some recruiting probably, but if you were in the shoes of the audience
here and just kind of seeking your first job, I'm going to take something that you said,
which is like, I can see your passion and excitement
for doing the startup thing, right?
And like, you know, kind of longing back for those days
is so cool.
What advice would you have for these folks
in where to go,
whether it's type of company, type of role,
industry, or anything else?
Waymo?
That's what I'm saying.
It's the easiest to just tee right up.
Yeah, yeah.
I would say find a problem, I mean we're talking about AI today,
but I'll say find a problem that matters.
You know, problem that matters to the world, problem that matters to you.
Chances are it's going to be a hard one.
Many things we're doing have that property.
So don't get discouraged by, you know, the unknown, by what others might tell you.
And, you know, start building,
and then, you know, keep building and don't look back.
A huge congratulations on all the progress you guys have made.
And as a very happy customer, thank you for building it.
And we really appreciate you being here.
All right, that is all for today.
If you did make it this far, first of all, thank you.
We put a lot of thought into each of these episodes,
whether it's guests, the calendar Tetris,
the cycles with our amazing editor Tommy until the music is just right.
So if you like what we put together,
consider dropping us a line at ratethispodcast.com slash A16Z.
And let us know what your favorite episode is.
It'll make my day, and I'm sure Tommy's too.
We'll catch you on the flip side.