Limitless Podcast - NVIDIA GTC: Jensen Huang's 5 Biggest Announcements
Episode Date: March 17, 2026

At NVIDIA's GTC conference, CEO Jensen Huang announced a bold target of one trillion dollars in orders by 2027. The AI graphics breakthrough DLSS 5 has been received with memes and controversy. Other announcements include the groundbreaking Vera Rubin platform, which promises 35 times the performance; strategic acquisitions like Groq; and advancements in self-driving technology.

------

🌌 LIMITLESS HQ ⬇️

NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/

------

POLYMARKET | #1 PREDICTION MARKET 🔮
https://bankless.cc/polymarket-podcast

------

TIMESTAMPS

0:00 NVIDIA's Trillion-Dollar Vision
1:23 DLSS 5
3:49 Vera Rubin
5:25 Breakthroughs with AI Chips
9:48 The Next Generation: Feynman
11:33 Full Self-Driving Revolution
14:01 Robotics on Stage
16:45 OpenClaw and Enterprise Solutions
17:22 AI in Space
19:12 The DGX Spark Announcement
21:10 Closing Thoughts on NVIDIA's GTC

------

RESOURCES

Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
NVIDIA just held its GTC conference in San Jose, where Jensen Huang walked on stage in front of 30,000 people and opened with a number that's probably going to echo across Wall Street for weeks: a trillion dollars in expected orders through 2027.
That's double what he predicted just six months ago from that very same stage.
And then he spent the next two hours, in a very long presentation, unveiling why even a trillion dollars is conservative.
But a lot of people throughout this presentation seem to have missed the actual reveal.
I think they're focused on a few specific highlights, when the reality is that the things he presented that are going to yield this trillion dollars are probably much different than the average person expects.
I know we were chatting as we were watching this two-hour movie marathon.
What were your thoughts?
Did you make it through?
Was it too boring?
Was it exciting?
What were the first impressions of this presentation?
The thing that excited me the most was the announcement of DLSS 5, which seems to be the most controversial announcement.
It's this new 3D rendering AI
model that basically refactors old games or gaming graphics into newer, higher-performance graphics.
So if you're looking on the screen right now, you're seeing a version of a video game and then suddenly
it's enhanced. It's kind of like a Snapchat filter, which I think a lot of the gaming community
had backlash about. They thought it was just AI slop. They didn't really vibe with it.
But in my opinion, it's actually quite a good product and would make me more engaged to play the actual
game. So it was surprising to see DLSS 5 get the attention that it did, just because Nvidia announced some
unbelievable stuff and seemingly this was the headline at all the news outlets. And it's basically
an AI upscaler for video games. It takes existing graphics that are, you know, pretty decent and
upscales them. It makes the facial features better. It increases the dynamic range, the highlights,
the shadows. And when I saw it, I loved it. I was like, oh, this is pretty cool. But the internet
reaction to this was far from what mine was. I mean, if you're looking at the video on screen,
it improves facial features. If you're now looking at the meme on screen, that was
the public perception. It was also negative for such a small feature that
Jensen dropped in a two-hour presentation.
So Ejaz, do you have any idea what's going on with this backlash here, particularly around DLSS 5?
I think the point around the gaming community is they're just very sensitive around AI being involved in art.
And I get it, right?
Like, some things can be kind of cringe and don't seem very human.
But the point is, like, this just makes graphics of games way, way better.
I mean, the meme you're showing on the screen isn't the accurate representation of what this thing is going to do.
We had the head of Bethesda Games say, like, "I'll make a partnership with Nvidia just for this tool." It's
going to save him and his team hours and hours of work. And I saw someone make a really good
point online yesterday, which said: if you're a game developer who's spending years designing
AAA games, this not only saves you a bunch of time, but it also helps you realize your artistic
vision. Usually when you're a game developer, you make sacrifices when you are designing a particular
character or an asset because you don't have enough money, compute, or the tools to be able to do
this. This should just be seen as another tool to get to your actual vision. So that's one good
reason, but there's another reason why this is super cool that everyone missed, in my opinion.
This is the exact same technology that you can use to create visual learning for robotics
and for autonomous driving cars. So this is the same technology that Nvidia's using
to build out their partner program with, I think it was BYD and a bunch of other car companies,
which we'll talk about in a second, as well as being used in their robotics division with their
GR00T robotics models. This is the same tech. So I actually think it's cool
that it's so pervasive and it's entering gaming,
but that's not really the big story for me here.
Whether you hate it or you like it,
you're not going to be able to use this thing
for the mass audience until probably next year.
This thing runs on like two RTX 5090s,
which are very expensive.
It's not very accessible to the average day-to-day person.
So by the time it gets released to the mass audience,
I think it's going to be a lot better than what we see today.
So that's the headliner.
If you are not paying attention to the headliner,
there is a lot of other stuff that was announced
that is far more interesting than this.
And we have a lot to unpack.
So buckle up, starting with the Vera Rubin platform, which is the big headliner.
I mean, this is the big boy.
This is what was teased previously six months ago, I think, when Jensen was announcing his like
$500 billion in revenue.
Now he's up to a trillion.
He was unveiling a little bit more information about the chip.
Ejaz, what's new with Vera Rubin?
Yeah, so the headline metric is it's 35 times more performance than the previous generation.
So for anyone who's been tracking, typically a new Nvidia GPU gets you
about a 2 to 5x performance upgrade on a good day.
This is the largest jump overall.
And the secret is there are about five to seven major components of a GPU.
Typically, when you improve for the next generation of GPUs,
you just refactor one of those things.
Why?
Because if you did all of them at once, that's really high risk.
Anything could go wrong.
And it results in delays of improving your GPUs.
Jensen said, forget it, I'm just going to do it anyway.
And he pulled it off.
Seven new chips make up this entire new thing, and it gets implemented into five new racks,
creating what he calls on stage an AI supercomputer.
And that's why you get this massive performance increase.
It's just insane.
These chips are what's running all the AI that we use every single day.
Previously, everyone was training on Hopper.
Hopper was the chips that are running a lot of the AI models that you're actually using today.
The Frontier Labs have just started to spin up the Blackwell models.
That's what we've seen with Opus 4.6 and with GPT 5.4:
that's the Blackwell chips. It takes a long time for these chips to be invented and then actually roll down
into data centers and then train the models. What we're seeing next, and we're not going to
actually feel the effects of this until probably early next year, is Vera Rubin. And Vera Rubin,
I mean, it's a 10 times performance improvement versus Blackwell just in terms of performance per
watt. So for every gigawatt of energy that these data centers have, this new chip is equivalent to
10 gigawatts' worth of compute today. So for every gigawatt, you get a 10x
improvement in intelligence. And that is huge. It is absolutely massive growth because we're planning
to scale the gigawatts of these data factories pretty significantly by the end of the year,
six to seven gigawatts for some of these. That's going to be equivalent to 60 to 70 gigawatts
of intelligence as of today. And I think that's pretty important to note: there is a
long delay between these chips being released and them actually being implemented
on the racks, trained on, and deployed. It's hard to imagine we don't get AGI from this. Yeah.
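The gigawatt math in that run can be written down in a few lines. This is just a minimal back-of-envelope sketch assuming the roughly 10x performance-per-watt claim discussed above; the data-center sizes are the 6 to 7 gigawatt figures mentioned in the episode, not official numbers.

```python
# Back-of-envelope: what a ~10x performance-per-watt jump means in
# "today-equivalent" compute. The 10x figure is the claim from the keynote
# recap above; real gains would vary by workload.

PERF_PER_WATT_GAIN = 10  # Vera Rubin vs. Blackwell, per the discussion

def effective_compute_gw(physical_gw: float, gain: float = PERF_PER_WATT_GAIN) -> float:
    """Gigawatts of Blackwell-era-equivalent compute from a given power budget."""
    return physical_gw * gain

for gw in (1, 6, 7):
    print(f"{gw} GW of Vera Rubin ~= {effective_compute_gw(gw):.0f} GW of today's compute")
```

Run against the 6 to 7 gigawatt build-outs mentioned above, this reproduces the 60 to 70 gigawatt equivalence the hosts describe.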
The other major improvement that they made came, I think, about a month and a half ago.
Nvidia "acquired", and I put that in quotes because apparently it wasn't a formal acquisition,
a company called Groq, spelled G-R-O-Q.
And the reason why they acquired them is they get the rights to a very special type of AI chip called an LPU,
which uses something called SRAM, static random-access memory.
Now, if you've been keeping tabs on the memory wall that's happening right now, memory prices have skyrocketed.
In fact, it's probably going to affect a bunch of major companies releasing their own technology devices, because the cost of memory is so high.
They can't even give it to their customers, because otherwise they'd need to charge extortionate prices.
Jensen made a really smart move by acquiring this company and integrating their technology into Vera Rubin.
And so what you're seeing on the screen now is basically the same architecture as Vera Rubin, but integrated
with this SRAM technology,
and the resulting effect is
you can inference AI models
at a much larger scale.
So that 10x that you just mentioned, Josh,
a good chunk of that is unlocked
by these new LPUs.
So we're now starting to see Jensen
take two things more seriously.
One, a different type of chip architecture,
usually Nvidia is known for generalized GPUs,
and that's where their bread and butter is.
Now we see him branching off
into these hyper-specific inference
chips, because he looks over his shoulder and he sees, not close but kind of far back,
Google's TPUs looming, with AMD's chips and Intel's CPUs coming up behind him as well.
And they're all specializing in inference specific chips. And the argument or the reason behind
that is a lot of the world isn't going to be focused on training AI models. It's going to
be prompting and querying AI models. And that's going to grow exponentially more. So this is
Nvidia and Jensen basically saying we're going to make a mark here. This is our stand. This is why
he acquired Groq, and here's the chip that we're doing. And Vera Rubin is going to be that chip
for anything and everything, general purpose and inference. When I think about these chips and just
project out to the future, it's so exciting because there's such a clear path to where
I think everybody in AI wants to go. Yes. To getting to that AGI level and beyond. And this chart that
we're showing on screen here is a beautiful example of this. Because in addition to Blackwell,
in addition to Rubin, they also teased Feynman already, even though Rubin is months to years away from
actually being deployed at scale. So Nvidia is essentially 18 months, give or take a few, ahead of
what the current reality looks like. And I think it's really important to note that currently,
with the bleeding edge of AI, we're running Blackwell right now. And we just started running Blackwell.
And Blackwell has about 12 months of improvements to be made before we start to feel the effects
of Rubin. By the time we feel the effects of Rubin, which is that 10x performance per watt improvement,
they already have Feynman ready to go and to be deployed into these data
centers. So already we are two incremental steps, two exponential steps, ahead of where we currently
sit. And it's hard to imagine, with the buildout that's happening and with the performance-per-watt
increase that we're seeing from all these chipsets, that we're not just going to have this
completely vertical and exponential growth of AI across the board. And I think that's probably
at the core of Jensen's thesis of a trillion dollars. It's like, the spending isn't going to stop
because he's already created the future. It's just a matter of actually deploying it and
plugging it into the grid so you could power these chips and get the intelligence that everyone
wants and it's unbelievable. So Feynman is coming. They didn't announce a bunch of things about
Feynman, but that's the name of the next chip architecture. Named after your favorite physicist's
favorite physicist, Richard Feynman. Everyone's a big fan of him. Very cool. Very excited. Didn't you
finish his book or something, Josh? I did. Yep. "Surely You're Joking, Mr. Feynman!" He has a few
books that are all awesome. So if you're into physics or math or just really admire great teachers,
Richard Feynman is amazing and is now the naming architecture for the future of NVIDIA's AI chips.
So pretty cool stuff.
Bold name, big ambitions.
Nvidia currently sits at, what, $4.5 trillion, the most valuable company in the world.
What are the odds that it's still the same by the end of the year?
Is this going to be prolonged?
Well, we can ask our friends at Polymarket to answer this for us.
And it looks like there has been a strong trend signaling, yes.
And this was not always the case.
I mean, it looks like Alphabet, Google, at one point during the year, in February, just a month ago, was projected to flip them.
People thought Google was going to be the world leader. It is clear now that is absolutely not the
case. In fact, Apple, who we frequently talk about, looks like they have a better chance of doing it
than Google now, and Nvidia is up to 70%. So it seems highly probable that people saw this
presentation, people have been seeing progress, and they are very much bullish on Nvidia. So the market
is pricing in a pretty steep increase to the stock price before the end of the month. I mean, it's currently
trading at 182. And it looks like there's about a 30% chance that it trades over
200 this month. So it looks like things are looking good for Nvidia, for being the most valuable
company in the world and also continuing to trade up on this news. It was an incredible presentation.
Thank you to Polymarket for sponsoring this segment of the episode. And now we could probably
get into the next most interesting thing for me at least, which was the full self-driving moment.
In fact, Jensen Huang said: this is the ChatGPT moment for self-driving cars. It has arrived.
This is a bold take because the full self-driving industry is pretty, pretty big.
Hasn't it been solved by Tesla already at this point?
Well, it depends who you ask.
It sounds like internally they feel confident in the fact that they've solved it, but they're
currently on this march of nines, where they have efficacy up to 99.x percent, and they need
to get it to 99.999.
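A quick sketch makes that "march of nines" concrete. The reliability levels below are purely illustrative (the episode only cites "99.x percent"); the point is that each added nine cuts expected failures by a factor of ten.

```python
# Expected failures per million events at each reliability level.
# Illustrative numbers only; the episode does not give exact figures.

def failures_per_million(reliability_pct: float) -> float:
    """Expected failures per 1,000,000 events at a given success percentage."""
    return (100.0 - reliability_pct) / 100.0 * 1_000_000

for level in (99.0, 99.9, 99.99, 99.999):
    print(f"{level}% reliable -> {failures_per_million(level):,.0f} failures per million")
```

At 99% you'd expect 10,000 failures per million events; at 99.999%, only 10, which is why the last few nines are the hard part.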
Now, Waymo clearly has the most deployed version of this.
You could actually go and you could get into a Waymo.
You can get into a Cybercab in some places in Austin, but they still have those kind of safety drivers.
They haven't figured out the legislation to let them be fully autonomous.
But Jensen is saying, hey, if you're
not Waymo, if you're not Tesla, we have a solution for you. We are actually going to build the
full self-driving stack and integrate it directly into your cars for you from the sensors all the
way to the software stack. And they just recently partnered with BYD, Nissan, Hyundai, and Geely. And for
those who aren't aware, BYD is actually the largest electric car manufacturer in the world,
more so than Tesla. They're based in China, and it shows that Nvidia is
kind of country-agnostic, right? Like, if you want a self-driving car,
come to us, we got you.
Or Jensen just doesn't care if he aligns with China or not.
He's just out there to expand Nvidia into anyone and everyone's hands.
As you said, BYD is the biggest EV maker.
They sell more cars than Tesla every single year.
And so that distribution, like, think about that.
Like, imagine you put your self-driving model into as many cars as possible.
It's probably going to get smarter, way, way quicker because it's just inside more cars.
So that's real competition against Tesla's competitive moat.
The other thing is, he's also integrating into Uber
as well, right? So it's going to be launching in 28 cities by 2028, a couple of years out,
which seems like a long time, but that's a lot of cities. And Uber has a lot of reach when it comes
to just a driving network in general. So this is a really cool announcement. I don't quite
know if it's apples to apples with Tesla full self-driving. They own the end-to-end stack there.
Nvidia doesn't really have that. This is more of a thing that you can kind of attach onto cars.
And if I had to guess, this is not just me being an Elon fanboy.
There's a lot more friction that Nvidia will run into.
So I don't think this is a direct one-to-one competitor.
This is a key difference.
If I'm a Tesla shareholder, I'm not really nervous about this.
Because like you said, Tesla owns the full manufacturing stack,
and they have millions of cars on the road that are full self-driving capable today.
They're just one software update away from cracking that.
When that final software update comes, when the legislation passes,
that is to be determined, but they're there.
They're ready.
Waymo and I guess Uber now are kind
of on the other side of this, where they've perhaps figured out the software stack.
They're close at least, but they have nowhere near figured out the manufacturing stack for this
at scale. And manufacturing, as we know, designing hard things in the physical world is hard.
And that's going to slow these companies down a lot. So I think for Uber, this is probably
the best case scenario. They finally have a saving grace, someone who wants to actually work with
them to help deploy the full self-driving vision. But they got a long way to go. So it's nice that
they're trying. This is kind of like Apple CarPlay, but for full self-driving: the automakers are
going to make the cars, Nvidia is going to sell them the software to put in the cars, and hopefully one day that makes them full self-driving.
So we'll see how that goes. That was the first of the robotics section of this episode.
Let me introduce you to the unhinged version of this, Josh. So Olaf, made popular by the animated movie Frozen, came to life on stage. What you're looking at is an autonomous, self-directed robot that runs on
Nvidia, I'm not making this up, that runs on
Nvidia's Newton Robotics
Engine. It also runs on their
Jetson chip as well. So what you're looking at
is a homegrown
Nvidia robot and product that is
autonomously interacting with Jensen.
I can't help but think that some
of this must be scripted. There's no way
that the robot is this interactive. And obviously
it's been outfitted with the look
of this frozen character, but
pretty cool all around. I don't
know if this is going to be in everyone's
home. I don't know what the point of this was.
Like maybe they're going to sell rights to Disney or something.
But yeah, like I don't really have a strong take on this.
Yeah, well, it's just, I mean, it's more of the direction that they're heading towards,
which is real world physical AI, right?
It's like we're getting self-driving cars.
Now we're going to get robots.
They're creating these small packaged computers to put into these things.
They're creating the entire stack.
Nvidia is becoming the Tesla for the general-purpose companies.
It's like if you can't build it all yourself, Nvidia has done it.
And they will sell you all of their hardware.
They'll sell you all of their software.
They're moving a lot into open.
source. And I guess that's probably the transition to the next announcement, which is their
NemoClaw announcement, the OpenClaw competitor, which isn't an OpenClaw competitor at all, actually.
It's basically just an enterprise solution for companies that want to use OpenClaw. So the founder of
OpenClaw, he was there. Jensen gave him a nice shout-out. And basically, NemoClaw is a way for
companies to deploy OpenClaw in a more secure way and to run on any coding agent and deploy from
anywhere. And I think a lot of people, I mean, ourselves included, thought this could be competition.
The reality is it's complementary. Nvidia wants open source AI because they want to build the
hardware that you use to run the open source AI. And it seems like this was kind of like a win
for everyone, including the open source community. It's pretty cool. Yeah. I mean, Peter Steinberger,
as you mentioned, the founder of OpenClaw, actually worked with Jensen and the Nvidia team for
months to build this out. Their target market is enterprise customers, specifically because
when OpenClaw went viral, it went viral
because everyone could spin up their own personal agent,
there was one glaring issue:
loads of security holes.
So people could lose money,
expose their credit card details,
or lose all their personal data,
or have people hack their computers.
Not good if you are an enterprise company,
but companies still want to get access to this thing.
So Jensen kind of dreamt up this platform
that sits on top of OpenClaw,
so it works very much in tandem with it.
And now you can kind of use OpenClaw without any worry.
You can spin up an agent that does a particular enterprise workflow,
or you can use it for accounting back office stuff, whatever you can dream of.
It's now safe to use.
And it's open source, which is great.
Yeah.
Okay.
So two more things.
We have two more quick announcements.
One, AI in space.
Space GPUs.
We're doing the damn thing.
It's happening.
So Jensen got on stage.
He said, we are going to build Vera Rubin for space.
You're going to have Vera Rubin orbiting the Earth.
It's going to be in these data centers.
It's going to be fantastic.
And then he says, well, we're not quite sure how we're going to do it, but we're going to do it.
They still have a lot of issues that they need to solve, one of which is cooling,
another of which is radiation.
There are a series of issues that are going to need to be solved,
but there is the intention to do this.
And I suspect, he didn't announce it here,
but I suspect they're working with SpaceX to design these chips hand-in-hand,
because that's really the only company that's going to be getting these things up into space.
And I think it's really exciting.
When we think about AI data centers in space
and the quality of the Vera Rubin chip architecture,
bringing those two things together and getting them in orbit by 2027,
maybe 2028 at the latest,
that's going to be pretty cool. That's going to change the game.
Elon is incredibly bullish on this.
He thinks that SpaceX is now going to flip every company in the world when it comes to AI development.
And he might not be wrong.
Because if he can get these chips at scale from Jensen, send them up into orbit, and lower the cost per watt to a small fraction of what it is today.
It's a huge upgrade.
I feel like this was just a custom announcement for Elon Musk, for one individual.
He's the only guy that's really trying to launch GPUs into space at scale.
Like, in this demo, he's demoing it using one of Nvidia's investment portfolio companies, StarCloud,
which is kind of the initial startup that made GPUs in space a thing.
But then Elon jumped on the wave and completely took it over.
And he's the guy that's actually going to be economically able to launch these at scale.
So it's a good day to be a Tesla or SpaceX share owner or equity owner.
And the final announcement that we're going to talk about is the DGX Spark.
They released the new Spark,
and it's now looking like it's going to be priced around $4,700, which seems high,
but if you are someone who runs local inference at your home and you're considering buying
a Mac Studio or something to run these tokens on your own, or perhaps you have an OpenClaw instance
you want to run local AI, this is a pretty compelling option.
They're basically taking a GB300, which is the Grace Blackwell chip, and they're turning it
into a tiny little thing that fits on your desk. That's 750 gigabytes of coherent memory and
20 petaflops of AI compute, which allows you to run models up to a trillion parameters
right from your desk. So it's an unbelievably dense machine. In fact, if this was released
probably even five years ago, this probably would have been the most powerful supercomputer
in the world. And now it's compressed down to something that fits on your desk. So it's just,
it's a testament to how much efficiency improvements have been made every single year. And how
powerful the Nvidia brand is, man. There's no one else building
stuff like this. No one's even close. Looking at this holistically, this was a home run for
Nvidia, for shareholders, for investors, for the AI industry. Everyone wins because Nvidia is just
running full tilt. It's funny you say that; back in the day, this would be so much
more expensive. You would also need, like, a dedicated server room to fit this entire thing.
And now you can just set it on your desk next to your laptop and have, in Jensen's words,
an AI supercomputer in your house. Super cool. It comes shipped with NemoClaw as well, so you get two
Nvidia GTC 2026 announcements for the price of one. And you said it was $4,700, Josh.
That's super cheap. That's what it's looking like on their website right now. I think that's for the Spark.
That is for the Spark, yeah. And then they had a
separate announcement, I think, on the DGX station, which is like the more powerful supercomputer,
which also consequently sits on your desk as well. So just two different price points, but two very
powerful things. Yeah, just a home run for Nvidia. Yeah. What a great presentation. That is
everything; those are the highlights. I would love for you to share which part you are most excited
about. Is it DLSS 5? How many gamers are here that actually care about this stuff? Do you hate it?
Tell me. Because I don't. I think it's cool. I mean, I could understand why the artists maybe
don't like their art being, you know, digitally enhanced. But I have good news. You could just
turn it off. Like, you could just play the vanilla game too. That's also cool. So I'd love to know what
people are most excited about here. I think for me, space data centers, man, that's my favorite thing in
the world. I want to see AI in space.
For me, it's going to be DLSS 5, but used for robotics.
Like, okay, I'm like nerding out over robotics right now
because I think they're going to have their ChatGPT 2022 moment
at any point this year.
They're getting good enough to move around, run, lift heavy items.
We just need a good model.
And I think having something like DLSS 5, or whatever the code name ends up being,
to kind of expand robotics models is really exciting for me.
Yeah, and one thing's for sure.
The naming of all of these is going to continue.
Please come up with easy names.
My God.
But yeah, that's the wrap.
Thank you so much for watching this recap on Nvidia's GTC.
I hope you enjoyed it.
We powered through two hours of a pretty boring presentation to bring this to you.
So hopefully it was a little more interesting, a little more exciting.
Very technical.
Jensen is a technical guy.
I hope you enjoyed Ejaz's leather jacket that he's rocking today in honor of Jensen and
NVIDIA, who donned a leather jacket on stage.
I know you are.
I hope you appreciate it.
But yeah, as always, please don't forget to share with your
friends, like the video, subscribe, leave a comment, rate us five stars, all the great things. Thank you so
much for watching, and yeah, we'll see you guys in the next episode.
