Big Technology Podcast - NVIDIA's Auto Play and the Future of Autonomous Driving — With Danny Shapiro
Episode Date: July 31, 2024
Danny Shapiro is the Vice President of Automotive at NVIDIA. He joins Big Technology to discuss the current state of autonomous driving technology, its future, and NVIDIA's role. Tune in to hear how NVIDIA is pushing the boundaries of AI and simulation to make self-driving cars safer and more reliable. We also cover the challenges of full autonomy, NVIDIA's broader play in the automotive ecosystem, and how generative AI is transforming the industry. Hit play for an insider's look at the cutting-edge technology shaping the future of transportation.
Transcript
NVIDIA's Automotive VP is here to speak with us about the state of autonomous driving
and how the latest AI innovations translate to the car.
That's coming up right after this.
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond.
Well, look, the car is a computer.
And so, of course, we've got to speak with NVIDIA.
I have a perfect person to do it with us here today.
Danny Shapiro is here.
He's the vice president of automotive at NVIDIA, someone I have spoken with before
privately, but we're so excited to bring him here and have him share everything that
NVIDIA is working on with you and then also give a real important picture into the state
of autonomous driving, which I think is one of the most interesting tech innovations that
we're in the midst of currently. So Danny, great to have you here. Welcome to the show.
Hey, Alex, thanks so much for inviting me. Really excited. So let's talk at the beginning,
just looking at the big question. How far are we from fully autonomous driving?
You know, they're operating on our roads today.
There are vehicles, robo-taxis, even some trucks, that are truly driverless.
They've gotten to the point where the safety driver can be removed.
And some are operating as part of a revenue program as well for those companies.
It's not widespread yet, but in the Bay Area, where I am,
I see them on the roads all the time.
I've ridden in them.
It's an amazing experience.
It really is transformative.
But I think we're seeing a lot of the technology come down into consumer vehicles.
So you have driver assistance systems, things that are originally based on the technology
for self-driving, but we're bringing it into vehicles like Mercedes, like Jaguar Land Rover,
vehicles like Volvos and a number of other brands all over the world are integrating
Nvidia technology to enable much safer driving on the streets.
Yeah, and I had experience in San Francisco last summer where I spent a bunch of time
riding in Waymos and then riding in Cruises. And we had the Cruise CEO at the time,
Kyle Vogt, come on the show and talk about how his plan was to 10x Cruise rides every year going
forward. And of course, we know it didn't happen. They ran into some safety concerns. And he's out.
And it looks like their ambitions have scaled back, you know, pretty dramatically, although maybe
they'll ramp back up again. But I'm curious to hear your perspective of like, what is the difference
between a Waymo and a Cruise?
I mean, not specifically, like, going into the technical details,
but you mentioned that we have autonomy already.
So why is it taking us so long,
and I know "so long" is relative,
but so long to have that spread from one company
that does it really well to every car across the economy?
Is it a safety thing?
Is it a cost thing?
Like, what is the roadblock now?
Well, for us, and I believe for a lot of our partners, safety is the primary concern.
I think if you look back to 2016, when a lot of predictions were made, everyone was talking
about how 2020 was the year.
And from a compute standpoint, from a software development standpoint, that really looked
realistic.
I think everyone underestimated the true complexity of being able to make sure you get it right
virtually all the time.
And that's what's really challenging.
So the basics are easy: when you're driving down the freeway,
cars are all going the same direction.
There's no pedestrians, there's good lane markings.
That's really a solved problem.
But as soon as you really introduce the complexity and the anomalies that come about from
human behavior, people either falling asleep on the roads or driving recklessly or being
impatient or road rage or whatever it is, it's really hard to predict that and creates
hazards for the self-driving car.
So I think what we're seeing is a whole new wave
of innovation now, part of it based on the same fundamental technology behind ChatGPT,
these large language models, more of an end-to-end system that's able to look holistically
at the whole environment around the car and be able to anticipate and predict what other
drivers will do and understand how to react.
Just like ChatGPT, you can say anything now and it knows how to respond.
It's quite amazing.
So while ChatGPT is a large language model using a form of generative AI,
that's text in and text out. What we're doing is applying that same type of algorithmic approach
and training of the neural networks. The technology innovation is that we're
training with video in, and imagery from cameras or other sensor data, to be able to understand that
environment and then determine what the best course of action is for that vehicle to safely navigate.
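To make that concrete, here is a minimal, hypothetical sketch in Python (PyTorch) of the shape of such an end-to-end model: camera frames in, a short trajectory of future waypoints out. The module names, sizes, and layers are illustrative assumptions, not NVIDIA's actual architecture.

```python
# Illustrative sketch only -- not NVIDIA's actual stack.
# Camera frames in, a short trajectory of future (x, y) waypoints out.
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self, num_frames=4, horizon=8):
        super().__init__()
        # Simple CNN encoder over stacked camera frames (assumed RGB, 128x128).
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * num_frames, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Decoder predicts `horizon` future (x, y) waypoints for the ego vehicle.
        self.planner = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, horizon * 2),
        )
        self.horizon = horizon

    def forward(self, frames):
        # frames: (batch, 3 * num_frames, H, W) stacked camera images
        features = self.encoder(frames)
        waypoints = self.planner(features)
        return waypoints.view(-1, self.horizon, 2)  # (batch, horizon, 2)

# Training would regress predicted waypoints against logged human driving.
model = EndToEndDriver()
frames = torch.randn(1, 12, 128, 128)  # one sample, 4 stacked RGB frames
print(model(frames).shape)             # torch.Size([1, 8, 2])
```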
Yeah, I was going to ask you what the path forward is given that we know that, okay, this is
something that these models are struggling with. So do you think that this is the path forward,
the way that you just described? You look at the innovation that's taking place from players out
there. Companies like Wayve in the UK have put together really amazing underlying technology
that lets the front-facing camera feed be interpreted in a way that the system can communicate to the
occupants of the vehicle what's going on outside the car, and also use that as a way to
determine what the car is going to do: how is it going to steer, accelerate, or brake?
So it can interpret that video feed and explain that there's somebody jaywalking or somebody
ran a red light, or there's a child waiting to cross, or whatever it may be. So this generative
AI approach is really, I think, going to accelerate adoption. Very recently, at a conference
called CVPR, that's Computer Vision and Pattern Recognition, which takes place annually,
last month in Seattle, Washington, there was a competition
that they held. It's a research conference, and there were over 400 entries into this autonomous driving
challenge, which was basically looking at sensor data and trying to predict the best trajectory
for the vehicle moving into the future. NVIDIA submitted and won the challenge. Our research
team had developed a new large language model, basically end-to-end training of that sensor data
system for then controlling the vehicle. And so out of over 400 entries, NVIDIA came out on
top with this new large language model type of approach.
And I think really that's where we're seeing a lot of innovation now.
And instead of having a lot of individual neural nets that are trained on lanes and on signs
and on pedestrians and all these individual things, a more end-to-end approach looks at the entire
environment and can then understand what to do in the case that maybe there are no lane markings.
So again, as you look at how dynamic the environments are, how there's no standard where streetlights all look the same, or signs are all the same, or lane markings are all the same, this end-to-end approach is really going to help get us to the point where we don't worry like we used to about those edge cases that haven't been explicitly trained.
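For context on how a trajectory-prediction challenge like that can be scored, here is a small illustrative sketch of an average-displacement-style error between a predicted trajectory and the recorded one. The exact metric used in the CVPR challenge may differ; treat this as an assumption for illustration only.

```python
# Illustrative only: a common way trajectory predictions are scored is average
# displacement error (ADE) between predicted and ground-truth future waypoints.
import numpy as np

def average_displacement_error(pred, gt):
    """pred, gt: arrays of shape (horizon, 2) holding future (x, y) waypoints."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)))

predicted = np.array([[1.0, 0.0], [2.0, 0.1], [3.0, 0.3]])
recorded  = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
print(average_displacement_error(predicted, recorded))  # mean distance in meters
```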
I think this is important what you're saying.
So basically, if I have it right, the current set of autonomous driving vehicles, at least the base layer of technology they use, they have a bunch of different basically artificial intelligence systems picking out each different feature in the road.
And they combine that to eventually make their predictions of where to go.
But you're saying that the cutting edge today is not these individual systems, but one system that looks at everything and then is able to predict.
I think what's really key about being safe is the combination of diversity and redundancy.
So you want backup systems, but also you want a variety of different algorithms.
So I think we're seeing a layering of different technologies: you're
looking for lane markings, but if you don't find the lane markings, there's an end-to-end network
also that's there to guide the vehicle and determine what to do.
And we have neural networks now that are looking at signs and can interpret complex signs.
So if you're trying to figure out, can I park here or not, the sign net will actually be able to read that sign and understand, is it a Saturday, is there street sweeping or whatever it is. It has the context. And so the complexity of these networks is quite elaborate as well.
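As a toy illustration of how a sign-reading network's output might be combined with context like the day of the week, here is a hypothetical sketch; the rule structure and function names are invented for this example, not the actual sign network's logic.

```python
# Hypothetical sketch: combine a sign reader's output with context (day, time)
# to decide whether parking is currently allowed. Not NVIDIA's actual logic.
from datetime import datetime

def parse_sign(sign_text):
    """Pretend output of a sign-reading network, turned into a simple rule."""
    # e.g. "NO PARKING SATURDAY 8AM-12PM STREET SWEEPING"
    return {"no_parking_day": "Saturday", "start_hour": 8, "end_hour": 12}

def can_park(sign_text, now=None):
    rule = parse_sign(sign_text)
    now = now or datetime.now()
    same_day = now.strftime("%A") == rule["no_parking_day"]
    in_window = rule["start_hour"] <= now.hour < rule["end_hour"]
    return not (same_day and in_window)

print(can_park("NO PARKING SATURDAY 8AM-12PM STREET SWEEPING",
               datetime(2024, 7, 27, 9)))  # a Saturday morning -> False
```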
Trust me, I could have saved a lot of tickets if I had something like that. So what you're saying is basically, if people have cars with this technology embedded, they'll be able to pull up to the curb and it will tell you, no, you can't park there?
That's right.
So it's interesting to me that NVIDIA is developing a lot of this technology.
Like I went to your automotive section on the website and was like, wow, there's a tremendous
number of models coming out of NVIDIA.
I thought it was largely the carmakers, like the Waymos or the Teslas that are developing
the autonomous technology.
So how involved is NVIDIA in developing these models itself?
And then who's the customers?
We work with hundreds of automakers, truck makers, robo-taxi
companies, software startups, sensor companies, and mapping companies. It really is quite an
ecosystem that we've built. We're not creating the vehicles, but we work with those manufacturers.
And so we offer the compute hardware. That's our drive platforms. That's the brain that goes inside
the car. Our drive OS is the safe operating system that's part of that package. We have a lot of
different middleware and libraries that they can use to develop their applications, algorithms,
the neural networks.
That application layer, though, is generally built by our customers.
So Mercedes-Benz or a Jaguar Land Rover, Volvo, NIO in China.
And so they can pick whatever parts of the software stack they want.
And in many cases, our customers are taking the whole stack
and they're developing some of their own algorithms as well.
So there might be a pedestrian detection algorithm from Mercedes
running alongside a pedestrian detection algorithm from NVIDIA.
And we collaborate on that.
So starting toward the end of this year, with the introduction of the new CLA, which has already been
announced by Mercedes.
That's the new Mercedes model, the CLA-Class.
And so every Mercedes will be built on NVIDIA DRIVE with the software that we've developed
and that's rolled out by NVIDIA.
So it starts with that CLA-Class vehicle, and then it will go through their entire line over time.
Do you sense a time when eventually NVIDIA will be able
to build, using some of the technology that you've discussed already, the software
suite that powers autonomous driving, and then it's just plug-and-play for these auto manufacturers?
So that's essentially what we're doing.
We're making the software available.
So we're developing the whole stack.
And really, it's a three-computer problem, we call it.
So you have the computer we just talked about inside the car.
So that's the drive platform.
It's a very high-performance, energy-efficient, automotive-grade supercomputer, you plug all the sensors into that, and it's purpose-built for a vehicle.
It's going to operate in the heat of the desert sun.
It's going to operate in very cold temperatures in Alaska, unlike your phone, which, if it gets too hot or cold, will shut off.
So we have to make sure that the temperature range will work, that it can handle the shock and vibration, the dust environment.
So all of that goes into making this computer automotive grade.
But then in addition to that, we make the computer that's used to train the artificial intelligence.
That's our DGX.
That's a supercomputer.
And so we have a huge business in terms of the automakers building out their own data centers
or using cloud providers like Azure, AWS, and Oracle to train.
And then we also have our OVX.
That's our Omniverse computer for simulation.
So again, that's another data center solution for first developing and then testing and validating
in simulation before the software even goes into the car. So NVIDIA is the only company that
has these three computers, and it's really this whole life cycle of developing, testing, and
deploying the software. And really, it's a continuous flywheel. Just like your phone gets
software updates, all these cars are designed to get software updates and get smarter and smarter
over their life. Yeah, that's wild. You know, Danny, a lot of
folks think of NVIDIA as just like the chip company for AI training. And I'm always like,
it's a little bit more than that. And it's just wild that like just in this one discipline,
it's deeply involved in pushing the cutting edge forward with autonomy.
Absolutely. I think you're right. People tend to focus on what happens in the car,
not realizing there's so much work before you can get to that point and so much development.
So what's great for the customers that work with us is that it's a single architecture.
It's the same chip technology that's in their data center that they're training on, and that we do testing on.
It's called hardware-in-the-loop testing.
So we actually test the whole software and hardware that goes in the car.
We can test that in the data center first in these virtual environments, called a digital twin.
So we create a model of a city and we simulate the camera, the radar, the lidar signals that are detecting everything happening around the
car: motorcycles cutting off vehicles, or pedestrians jaywalking.
And so all of that can be tested before we actually even put it on the road.
So it's very efficient.
It's very safe to test it that way.
And it really helps create a much better product in the end.
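Here is a very simplified, hypothetical sketch of what a simulation loop like that looks like in spirit: a digital-twin world produces synthetic sensor observations, the driving stack under test consumes them, and the result is checked against a safety condition. The class and function names are placeholders, not Omniverse or DRIVE APIs.

```python
# Simplified, hypothetical hardware-in-the-loop style test loop.
# Names are placeholders, not actual NVIDIA Omniverse or DRIVE APIs.
import random

class DigitalTwinWorld:
    """Stand-in for a simulated city that produces synthetic sensor data."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def step(self):
        # Pretend camera/radar/lidar observation: distance to nearest obstacle (m).
        return {"nearest_obstacle_m": self.rng.uniform(0.0, 60.0)}

def driving_stack(observation):
    """Stand-in for the software under test: brake hard if an obstacle is close."""
    return "brake" if observation["nearest_obstacle_m"] < 15.0 else "cruise"

def run_scenario(steps=1000, seed=0):
    world = DigitalTwinWorld(seed)
    for _ in range(steps):
        obs = world.step()
        action = driving_stack(obs)
        # Safety check: the stack must always brake when an obstacle is close.
        if obs["nearest_obstacle_m"] < 15.0 and action != "brake":
            return False
    return True

print(run_scenario())  # True: this toy stack passes the toy safety check
```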
Yeah, I'm going to talk to you a little bit more about that in a moment.
But first, I want to ask you another technology question because it sort of seems like
there's two different schools of thought when it comes to building autonomous cars.
One is, I'm just going to call it shorthand, the Waymo School of Thought,
which is that you need like a gazillion sensors and cameras,
and your car is going to look like a submarine,
and it's going to cost hundreds of thousands of dollars,
but it's going to work pretty freaking well.
And another school of thought, I'm just going to call it the Tesla School of Thought,
is that you just need a few cameras,
and then eventually you'll be able to train machine learning models
to the point where you can run your Tesla autonomously,
without a LIDAR.
Who do you think is right?
I think they both have merits.
The reality, as I mentioned before,
this diversity and redundancy
is really how you get higher levels of safety.
So cameras are great,
but they don't work in all conditions.
And so when you combine radar,
when you combine LIDAR,
you have the strengths of many different types of sensors
and they complement each other.
So I think what we see
in the case of Waymo is really a higher level of, you know, security and confidence, and that redundancy
that comes from the system. And so they're operating fully autonomously with no drivers,
whereas I do have a Tesla. It's quite remarkable being camera based. But every once in a while,
I still need to jump in and grab the wheel. So it's not there yet. Can it get there? I think
it probably can eventually, but it's not there today. How's the latest full self-driving update?
Like I said, it's pretty good. I rely on it every day. You know, it is still considered
beta. And so I'm watching it. I'm in the industry, so I'm curious about each software rev and what it
can do and what it can't do. But it's quite remarkable. And there was a report in the
Wall Street Journal this week that talked a little bit about some of the deficiencies of the way
the Tesla operates, and it points exactly at this issue, right, that it's decided not to use
LIDAR, for instance. Some of them have radars. But Elon basically wants to
do everything as cheaply as possible, right?
We've seen it in SpaceX, we're seeing it in Tesla, and it's because these systems are just good enough
that people trust them that we've seen some of the tragedies happen.
There was a video that they put out, the journal put out, that showed effectively a car
driving at night, and there was an overturned tractor trailer blocking the road.
And because the car's computer vision models hadn't effectively seen the dark underbelly
of the truck against the dark evening, it couldn't pick it up, because it hadn't been trained on enough
examples of this, and the car went right into the tractor trailer. But
thinking about your answer, you think that this is just a temporary thing, that eventually they'll be
able to figure out these sorts of outlier scenarios over time? I think if the models are
able to detect that there's a physical obstacle, even if it doesn't know what it
is, then it will be able to take the right action. But again, this is where having the diversity
of sensor data becomes a really big differentiator. Right. Now, talking again about what
NVIDIA is doing internally, it's pretty wild. So you're actually simulating collisions. Is this
something that you do in that sort of world that you talked about, where you go through these?
Absolutely. So talk about it. What we try to do is create the scenarios. It's less about
creating accidents and more about creating the scenarios to ensure that the systems will have a safe
outcome. There's no way to ensure that there are zero accidents in the world, right? There's always
going to be crazy stuff that happens on the road, and no human driver could avoid something
falling right in front of the car or somebody getting pushed in front of a vehicle or something
like that. But what we want to do is be able to anticipate all that and be able to avoid it or mitigate
what would happen in one of these hazard scenarios.
One of the things we're able to do, actually,
is record drives that we're taking
and then use that as input to create a huge range
of different scenarios, permutations, on that
and test the software.
And so we can actually capture cars in a scene
and make any one of those cars in the scene
the autonomous car, and see how it would behave.
So we're building a massive database of scenarios
and ways to test and validate that the technology is good.
The other thing that we can do is we can take accident reports.
And now using these large language models,
we can input these accident reports
and be able to create scenarios from a text input
explaining what happened or if there's a map or something like that.
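A minimal sketch of the idea of reusing one recorded drive as many test scenarios: pick any logged vehicle as the ego vehicle and vary the conditions around it. The data structures below are illustrative assumptions, not real tooling.

```python
# Illustrative sketch: turn one recorded drive into many test scenarios by
# choosing different ego vehicles and permuting conditions. Not real tooling.
from itertools import product

recorded_scene = {
    "vehicles": ["car_a", "car_b", "motorcycle_c"],   # actors captured in the log
    "events": ["pedestrian_jaywalks", "car_b_cuts_in"],
}

weather_options = ["clear", "rain", "fog"]
time_options = ["noon", "dusk", "night"]

def generate_scenarios(scene):
    for ego, weather, time_of_day in product(scene["vehicles"],
                                             weather_options, time_options):
        yield {"ego": ego, "weather": weather, "time": time_of_day,
               "events": scene["events"]}

scenarios = list(generate_scenarios(recorded_scene))
print(len(scenarios))   # 3 vehicles x 3 weathers x 3 times = 27 permutations
print(scenarios[0])
```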
So then how important is simulation in training
anything having to do with autonomous driving?
Simulation plays a really, really big role in ensuring the safety of the system.
First of all, driving around collecting data, you rarely are going
to see the dangerous scenarios, the hazards, the things that very rarely occur.
You're not going to capture them in your data collection.
So we need to use simulation and really what we call synthetic data generation to create
those kind of scenarios.
So we can create fake potential hazards: things falling off of trucks,
people running across the street at night, whatever it may be, somebody running a red light.
And so we can create that data to augment the real data for training the AI.
And then we can actually simulate all these dangerous scenarios to ensure that the system will do the right thing.
And the benefit of using simulation is that it's repeatable.
So we can adjust the software and test
something that maybe didn't pass a month ago, and we can run it through the same scenario and
see, oh, yeah, we fixed that. There may be situations, you know, it often happens, that the sensors
are blinded by the sun, right, as it sets? The sun is coming right into the eyes
of the car, into the driver's eyes, into the camera's eyes. And so you only have a few minutes a day
where you can actually capture that data as well as test. And so in simulation, it can be sunset
24 hours a day.
So we really can control that environment.
And we can create rain, snow, fog.
It's really remarkable.
And it gives us the ability to test those things
that, again, may never be seen in the real world.
So if you drive in a real vehicle trying to test it
in autonomous mode, you may never know.
In simulation, we can be sure it's going to work
before we release it.
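To illustrate the repeatability point, here is a sketch of a regression-style check: the same seeded scenario (say, sunset glare) is replayed against each software revision, so a fix made this month can be verified against exactly the scenario that failed last month. Purely illustrative, with made-up names.

```python
# Illustrative regression-style check: replay the exact same seeded scenario
# against each software revision. Function names are made up for the sketch.
def simulate(scenario, software_version, seed=42):
    """Stand-in for a deterministic simulation run; returns True if it passes."""
    # A deterministic simulator with a fixed seed reproduces the scenario exactly,
    # so a failure seen last month can be replayed against this month's software.
    if scenario["condition"] == "sunset_glare" and software_version < 2:
        return False   # older software was blinded by low sun in this toy example
    return True

scenario = {"condition": "sunset_glare", "location": "highway_offramp"}

for version in (1, 2):
    result = "pass" if simulate(scenario, version) else "fail"
    print(f"software v{version}: {result}")   # v1 fails, v2 passes the same scenario
```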
And you also have an assistant that you're working on
that sort of appears within the car that helps you
navigate tough things.
Like, for instance, it will look around.
It's called LLaDA, I believe.
And if you're in New York and you want to make a right on red, it will be like,
hey, listen, right on red is illegal here.
Or if you're in Mexico and like you're taking a hairpin turn, you know,
there's like some traffic patterns that allow you to kind of go outside the lane to take a peek and then come back in.
And it will help you figure out what's going on with that.
I mean, I'm about to go to Ireland and I definitely need something like that to tell me what side of the road to be driving on.
So talk a little bit about the progress there.
So I think you're referring to a video series we put out called Drive Labs.
Right.
And so what we do is each episode, we sort of pick one little piece of technology that's part
of this huge software suite that we're building.
And so some of these things come out of our research team.
So it might not be in our software today, but it's sort of a preview of what's coming
out soon.
And so what we're able to do is train these systems and create these different modules, essentially,
that become part of the software stack that our customers
can use. So how they decide to bring it to a customer sort of is the decision of a Mercedes
or a Volvo or others. But the technology is there so that we can train it on what are the laws
of a particular region. I mean, the signs, the light signals, the lane markings in different
regions of the country or around the world differ quite a bit. And so basically creating these
large language models for each region that understand what the car is
allowed to do and not do. And so, if it's been implemented in the car, it could give you an alert
if you try to turn right on red in
New York, for example. Yeah. Perfectly fine to do in California.
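Conceptually, the per-region rule piece could look like this tiny, hypothetical lookup. The real systems Danny describes are learned models, not a hard-coded table, so this is only an assumption-laden illustration of the behavior.

```python
# Hypothetical illustration of a per-region driving-rule alert.
# The real system is a learned model; this hard-coded table is just for intuition.
REGION_RULES = {
    "new_york_city": {"right_on_red_allowed": False},
    "california":    {"right_on_red_allowed": True},
}

def maybe_alert(region, intended_maneuver):
    rules = REGION_RULES.get(region, {})
    if intended_maneuver == "right_on_red" and not rules.get("right_on_red_allowed", True):
        return f"Alert: right on red is not permitted in {region.replace('_', ' ')}."
    return None

print(maybe_alert("new_york_city", "right_on_red"))  # alert issued
print(maybe_alert("california", "right_on_red"))     # None, no alert
```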
Okay. We're going to go to break. But before we do, I got to ask you a stock question. The stock is down like 7% today and
16% on the month. Do you notice that within the company? You know, things move around a lot.
The market just shifts a lot. If you look at the page and everything's green one day,
everything's red another day, and I think there's really no way to predict.
We'll put out great news and we never know what's going to happen.
I think what we do know is Nvidia is just creating amazing products.
We have really exciting technology we're bringing to markets, not just in automotive, it's
in healthcare, it's in energy, it's in finance.
Gaming is still a big part of the company.
People don't necessarily talk about it as much, but it's a huge, huge part of our company.
It's super exciting.
This week SIGGRAPH is going on, which is the big graphics conference held annually, and that's something
that NVIDIA has always been a part of, and that's kind of our core, accelerated computing
for graphics.
And now that conference is turning into an AI conference and it's all about robotics.
And so, Nvidia is still at the center of all these different industries providing technology,
platforms, ecosystems for other companies to build.
So I think things are feeling good.
Yeah, I have been tuning into SIGGRAPH this week, and it is funny, right, at a gaming conference,
and I think Jensen is just so happy to be back in the gaming environment, because at the start of every session that he's had, he just shouts out, this is my home turf, which is, you know.
Yeah, I mean, it's so true.
SIGGRAPH is just where all the graphics researchers kind of unveil the new technology.
And graphics is so core to NVIDIA, but also so many other things, whether it's filmmaking, whether it's manufacturing, whether it's now visualization for, you know, engineering with fluid dynamics
simulations or weather forecasting.
We have a system we're building called Earth-2,
which is going to do just amazing things for that,
for the meteorologists, and be able to predict weather
with incredible detail beyond what has ever been possible,
and look way in advance.
So that has huge implications just for predicting
natural disasters, for example.
Right.
All right, let's take a break, come back.
You mentioned robotics.
We'll talk about robotics, including how
this technology will help translate the state-of-the-art advances to generative AI as well, right?
We've learned what generative AI can do for autonomous cars.
What can autonomous cars teach generative AI?
All right, that's coming up right after this.
Hey, everyone.
Let me tell you about The Hustle Daily Show, a podcast filled with business, tech news,
and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes
on business and tech news.
they have a daily podcast called The Hustle Daily Show,
where their team of writers break down the biggest business headlines
in 15 minutes or less and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app,
like the one you're using right now.
And we're back here on Big Technology Podcast with Danny Shapiro.
He's the VP of Automotive at NVIDIA.
All right, let's talk about robotics.
So to begin with, it seems like you were talking a little bit
about how when you're driving around with a car,
that has all these sensors, you start to begin building a world model.
Is that helpful for the development of robotics?
And is NVIDIA already sort of taking what it's learning from cars and putting it into robots
and what it's learning from robots and putting it into cars?
Absolutely.
We have an entire robotics group that sits alongside our automotive group.
And there's a lot of related aspects to the problems.
If we think about driving a vehicle, well, the whole thing is we don't want to
hit anything, right? We have to understand the environment, we have to sense it, we want to plan how
we're going to maneuver, and then we act: we actuate the vehicle and drive it. But we don't want to touch
anything. Robotics is almost the flip side of that: robots need to interact.
They're going to grab something, they want to touch something, but you have to do it very delicately.
But the ability to sense, plan, and act is really the same. So much of what we're doing is
related. You know, an autonomous car is really a form of robot. It's got wheels
and it drives around. Some robots that we're building, in terms of factory robots, might be
autonomous machines that are roaming around, like at a warehouse or in a factory. Others are fixed.
There's an arm that's moving around. I think the key thing is, again, this three-computer model
of training the systems, of simulating them in digital twins, and then deploying the software
is the same in both of these cases. We're doing a lot of work in factories, all the
types of factories, but also factories to build cars.
And so companies like Mercedes-Benz, like BMW are working with our teams to develop this factory
as a digital twin-first.
So it's a full simulation of the entire factory, all the robots, the workers, the assembly
lines, the trucks pulling up, the logistics of moving parts around.
All of that is modeled and run digitally in simulation before they even build the factory.
And so the benefit of that is you're not halfway into construction when
you realize, oh, wait a minute, the arm that has to swing here to take the body of the vehicle
and rotate it around isn't going to clear the ceiling.
We need to raise the roof another two feet, right?
You plan all that in advance before you ever build the factory and you can really optimize
your layout.
So digital twins and AI are a huge part of planning how robots are actually going to interact.
And then we train those robots in simulation so that you can then
take that software, just like you do in the autonomous car, and load that software into the robot.
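As a toy example of the kind of check a factory digital twin lets you run before construction, here is a hypothetical clearance test for a robot arm's swing against a planned ceiling height. The numbers and names are invented for illustration, not real factory data.

```python
# Toy digital-twin style check, with invented numbers: will the robot arm's swing
# clear the planned ceiling while rotating a car body overhead?
def max_swing_height(base_height_m, arm_reach_m, payload_clearance_m):
    """Highest point reached when the arm is fully vertical with its payload."""
    return base_height_m + arm_reach_m + payload_clearance_m

planned_ceiling_m = 6.0
swing_m = max_swing_height(base_height_m=1.5, arm_reach_m=3.8, payload_clearance_m=1.2)

if swing_m > planned_ceiling_m:
    shortfall = swing_m - planned_ceiling_m
    # Caught in simulation, before anything is built.
    print(f"Raise the roof by at least {shortfall:.1f} m before building.")
else:
    print("Layout clears the ceiling.")
```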
Yeah, it is so interesting thinking about it. Like, in a car, your one mission is don't touch anything;
in a robot, your mission is to interact. But I imagine both types of technologies are building
models of the world, trying to figure out what's going on. Is there a shared
universe that both the robotics and the automotive elements of NVIDIA work on? Absolutely. There's a lot
of shared technology. So this is the strength of NVIDIA as a company. We're a relatively small
company for the impact that we have on industries around the world. So the engineering team
that's developing a lot of the core hardware and software is leveraged across groups, from
automotive to robotics to healthcare. You know, an example is we're developing a pedestrian detection
algorithm. That same core tech can be used to detect cancer in an x-ray or a CT scan. How? Is it
just computer vision?
Yeah, well, it's AI, it's deep learning.
So those same techniques, it's just different patterns of data.
It's trained on different types of data, different modalities.
But the concepts are essentially the same.
And that's where we've been able to go to market in many different industries
with the same basic architecture, hardware and software with purpose-built applications or devices.
But again, the core technology is leveraged across so many different groups.
If you're looking for where to drill for oil, right,
you have seismic data that you can then apply deep learning to, to figure out
where a pocket of oil is buried miles below the earth.
So it's the same conceptually; the applications and markets are totally different.
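To illustrate the "same core tech, different data" point, here is a minimal sketch of one small image backbone reused for two tasks by swapping the training data and output head. This shows the general transfer pattern, not NVIDIA's specific models.

```python
# Minimal sketch of the "same architecture, different data" idea:
# one backbone, two task heads (pedestrian detection vs. scan classification).
import torch
import torch.nn as nn

def make_backbone():
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, stride=2), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

# Same backbone definition, trained on different modalities:
pedestrian_model = nn.Sequential(make_backbone(), nn.Linear(32, 2))  # pedestrian / none
radiology_model  = nn.Sequential(make_backbone(), nn.Linear(32, 2))  # suspicious / normal

camera_crop = torch.randn(1, 1, 64, 64)   # grayscale crop from a driving camera
ct_slice    = torch.randn(1, 1, 64, 64)   # slice from a CT scan

print(pedestrian_model(camera_crop).shape)  # torch.Size([1, 2])
print(radiology_model(ct_slice).shape)      # torch.Size([1, 2])
```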
Right.
Now let me ask you this.
How does the company incentivize these divisions to cooperate?
Because I imagine you're the automotive group and you have one set of goals you're
working towards, and there's the robotics group, and
they have another set of goals they're working toward. Now, if you work together, you could probably
both help each other to the point where you're both going to exceed those goals or do better
than you would have in a silo. But a lot of companies work in silos, whether it's by design,
thinking about Apple: you know, if you're working on Face ID and you're working on
automotive road detection, don't talk to each other, you're on two different projects, and maybe,
you know, wink wink, that's why the automotive project within Apple failed. And then other companies
incentivize silos just by the incentives in terms of, like, your performance review. If you were,
you know, coming up short, even though you were collaborating, you get this
grade, you don't advance. So how does NVIDIA address this? Yeah, those things you
describe are not the culture of our company. I think one of the principles that we're founded on
is one team. It's all about NVIDIA first. And so the individual group kind of comes second.
And in fact, the notion of the group is kind of dynamic. We really don't have much of an org
chart in the company. Jensen says the mission is the boss. And so we have these virtual teams.
There's a lot of cross-functional work that goes on. People have different roles and
responsibilities and might be working on a variety of different things. And so it's really all
about what's the best thing for the company. And working across groups is really
rewarded, and part of the culture is that we want to help each other and have the
whole company succeed, as opposed to, hey, this is my thing, I own this, I'm just going to focus
on this. So it really is part of the culture of the company and embraced throughout. And
Jensen is constantly looking at, okay, if he finds two different groups that are doing related
things, he's like, you guys get together and figure it out together. We don't need to have two
separate programs going on here, but let's pick the best. So there really is a huge collaboration
that goes on throughout the company.
So I'm wondering whether another collaboration is going to be
or already is yours with the groups
that are working on AI models, generative AI models,
because the biggest limitation with generative AI has been
that it doesn't really understand the real world.
I mean, at least with text.
Now, you might push back on me on this,
but this is what we've heard on the show,
which is basically like there's only a small amount of human knowledge
that's been codified
in text, and the rest is just being out there in the world interacting, using your eyes, figuring out
what gravity is, right? You can't really understand that from text. I mean, you can read about it,
but you don't really experience it until you're out there in the world. So I'm curious how these
real-world interactions, whether it's something like an autonomous car or something like a robot,
might be used to advance the knowledge that we have with large language models today.
I think it's happening. I think part of what we do is we're able to model physics.
Right. So that's a key thing. So we can model gravity. We can model how things interact. We can
model, you know, motions of different types of materials or fluids or things like that. So it comes down to
mathematical modeling of the real world. And that's a big part of what we do. Yeah. Have you thought
about, just, like, the immense progress that China is making? I mean, what's your view
there? I've heard that you can buy an electric vehicle in China for $10,000, whereas in the U.S.,
if you could do that, that would be a sea change here. So what's your view there?
We work with a lot of companies around the world. We have a number of customers in China.
They're doing remarkable work in terms of the development of EVs, but also driver assistance
systems and autonomy with robo taxis. So it's a big market. It's the biggest in the world,
and it's a big market for NVIDIA.
So we work closely.
We have teams over there.
We work in Japan, in South Korea, throughout Germany, UK, and of course, North America as well.
Is there anything the U.S. can learn from the China market to price our cars better?
Is that?
I'm not an expert on battery technology, but I think there's certainly economies of scale and some
things they're doing to bring the cost down.
I think there's also a lot of government support that they're getting in China. A
lot. I guess, like, last question for you: how long do you think it's going to be until anywhere
in, let's say, the U.S., you can open an app and hail a robotaxi? I can do it today.
Anywhere, though? I can't do it in New York. Oh, anywhere. So, I mean, I don't know if it's always
going to be anywhere, right? I mean, there's got to be a market for it, but I think cities will make
a lot of sense, the suburbs in some areas, yes. In the rural areas, it's going to be
a challenge. It's like you can't get an Uber everywhere today, right? So you can't get
a taxi everywhere. But I think in major markets, it's very soon. Okay. Can't wait for it.
Danny Shapiro, thanks so much for joining. Great to see you. Great to see you too. Thanks,
Alex. All right, everybody, thanks so much for listening. We'll be back on Friday with Ranjan Roy
breaking down the week's news. We'll see you next time on Big Technology Podcast.