Acquired - Nvidia Part III: The Dawn of the AI Era (2022-2023)
Episode Date: September 6, 2023

It's a(nother) new era for Nvidia.

We thought we'd closed the Acquired book on Nvidia back in April 2022. The story was all wrapped up: Jensen & crew had set out on an amazing journey ...to accelerate the world's computing workloads. Along the way they'd discovered a wondrous opportunity (machine learning powered social media feed recommendations). They forged incredible Power in the CUDA platform, and used it to triumph over seemingly insurmountable adversity — the stock market penalty-box.

But, it turned out that was only the precursor to an even wilder journey. Over the past 18 months Nvidia has weathered one of the steepest stock crashes in history ($500B+ market cap wiped away peak-to-trough!). And, it has of course also experienced an even more fantastical rise — becoming the platform that's powering the emergence of perhaps a new form of intelligence itself… and in the process becoming a trillion-dollar company.

Today we tell another chapter in the amazing Nvidia saga: the dawn of the AI era. Tune in!

Links:
Asianometry on AI Hardware
Episode sources

Carve Outs:
Alias
Moana

Sponsors:
ServiceNow: https://bit.ly/acqsnaiagents
Huntress: https://bit.ly/acqhuntress
Vanta: https://bit.ly/acquiredvanta

More Acquired!:
Get email updates with hints on next episode and follow-ups from recent episodes
Join the Slack
Subscribe to ACQ2
Merch Store!

Note: Acquired hosts and guests may hold assets discussed in this episode. This podcast is not investment advice, and is intended for informational and entertainment purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.
Transcript
You like my Bucks t-shirt?
I love your Bucks t-shirt.
I went for the first time, what, two weeks ago when I was down for a meeting at Benchmark,
and the nostalgia in there is just unbelievable.
I can't believe you hadn't been before.
I know Jensen is a Denny's guy, but I feel like he would meet us at Bucks if we asked him.
Or at the very least, we should figure out some NVIDIA memorabilia to get on the wall at Bucks.
Totally.
Fit right in.
All right, let's do it.
Let's do it.
Who got the truth?
Is it you?
Is it you?
Is it you?
Who got the truth now?
Is it you?
Is it you?
Is it you?
Sit me down, say it straight.
Another story on the way.
Who got the truth?
Welcome to season 13, episode three of Acquired, the podcast about great technology companies
and the stories and playbooks behind them.
I'm Ben Gilbert.
I'm David Rosenthal.
And we are your hosts.
Today, we tell a story that we thought we had already finished, NVIDIA.
But the last 18 months have been so insane, listeners, that it warranted an entire episode on its own.
So today is a part three for us with NVIDIA, telling the story of the AI revolution, how we
got here, and why it's happening now, starting all the way down at the level of atoms and silicon.
So here's something crazy that I did a transcript search on to see if it was true.
In our April 2022 episodes, we never once said the word
generative. That is how fast things have changed. Unbelievable. Totally crazy. And the timing of all
of this AI stuff in the world is unbelievably coincidental and very favorable. So recall back
to 18 months ago. Throughout 2022, we all watched financial markets from public equities to early-stage startups to real estate just fall off a cliff due to a rapid rise in interest rates. The crypto and Web3 bubble burst, banks failed. It seemed like the whole tech economy, and potentially a lot with it, was heading into a long winter.
Including NVIDIA.
Including NVIDIA, who had that massive inventory write-off for what they thought was overordering.
Yep. Wow, how things have changed.
Yeah.
But by the fall of 2022, right when everything looked the absolute bleakest,
a breakthrough technology finally became useful after years in research labs.
Large language models, or LLMs, built on the innovative transformer machine learning mechanism
burst onto the scene, first with OpenAI's ChatGPT, which became the fastest app in history to 100
million active users, and then quickly followed by Microsoft, Google, and seemingly
every other company. In November of 2022, AI definitely had its Netscape moment. And
time will tell, but it may have even been its iPhone moment.
Well, that is definitely what Jensen believes.
Yep. Well, today we'll explore exactly how this breakthrough came to be, the individuals
behind it, and of course, why the entire thing has happened on top of NVIDIA's hardware and software. If you want to
make sure you know every time there's a new episode, go sign up at acquired.fm slash email.
You'll also get access to two things that we aren't putting anywhere else. One, a clue as to
what the next episode will be, and two, follow-ups from previous episodes from things that we learned after release.
You can come talk about this episode with us
after listening at acquired.fm slash slack.
If you want more of David and I,
check out our interview show, ACQ2.
Our next few episodes are about AI
with CEOs leading the way
in this world we are talking about today
and a great interview with Doug DeMuro
where we wanted to talk about a lot more
than just Porsche with him,
but we only had 11 hours
or whatever we had in Doug's garage.
So a lot of the car industry chat
and learning about Doug
and his journey and his business,
we saved for ACQ2.
So go check it out.
One final announcement.
Many of you have been wondering,
and we've been getting a lot of emails,
when will those hats be back in stock? Well, they're back. For a limited time, you can get an ACQ embroidered
hat at acquired.fm.store. Go put your order in before they go back into the Disney vault forever.
This is great. I can finally get Jenny one of her own so she stops stealing mine.
Yes. Well, without further ado, this show is not investment advice. David and I may
have investments in the companies we discuss, and this show is for informational and entertainment
purposes only. David, history and facts.
Oh, man. Well, on the one hand, we only have 18 months to talk about.
Except that I know you're not going to start 18 months ago.
On the other hand, we have decades and decades of foundational research to cover. So when I was starting my research,
I went to the natural first place, which was our old episodes from April 2022. And I was listening
to them and I got to the end of the second one. And man, I had forgotten about this. I think
Jensen maybe wishes we all had forgotten about this: in one of NVIDIA's earnings slides in 2021, they put up their total addressable market and they said they had a $1
trillion TAM. And the way that they calculated this was that they were going to serve customers
who provided $100 trillion worth of industry, and they were going to capture just 1% of it.
And there was some stuff on the slide that was fairly speculative, you know, like autonomous vehicles and the omniverse,
and I think robotics were a big part of it. And the argument is basically like, well, cars plus
factories plus all these things added together is 100 trillion, and we can just take 1% of that
because surely their compute will amount to 1% of that, which I'm not arguing is wrong, but it is a very blunt way to analyze that market.
Yeah, it's usually not the right way to think about starting a startup. You know,
oh, if we can just get 1% of this big market, blah, blah, blah.
It's the toppiest down way I can think of to size a market.
So you, Ben, rightly so called this out at the end of
NVIDIA Part 2. And you're like, you know, I think to justify where NVIDIA is trading
at the moment, you kind of actually got to believe that all of this is going to happen and happen
soon. Autonomous cars, robotics, everything. Yeah. Importantly, I felt like the way for them
to become worth what they were worth at that time
literally had to be to power all of this hardware in the physical world.
Yep. I kind of can't believe that I said this because it was unintentional and uninformed,
but I was kind of grasping at straws trying to play devil's advocate for you.
And we just spent most of that whole episode talking about how machine learning powered by NVIDIA
ended up having this incredibly valuable use case, which was powering social media feed
recommenders, and that Facebook and Google had grown bigger than anyone ever imagined on the
internet with those feed recommendations, and NVIDIA was powering all of it. And so I just
sort of idly proposed, well, maybe, but what if you don't actually need to believe any of that
to still think that NVIDIA could be worth a trillion dollars? What if, maybe, just maybe,
the internet and software and the digital world are going to keep growing.
And there will be a new foundational layer that NVIDIA can power.
Is that possible?
And I think we were both like, yeah, I don't know.
Let's end the episode.
Yeah, sure.
We shrugged it off and we were like, all right, carve outs.
But the crazy thing is that, of course, at least in this time frame,
most things on Jensen's trillion dollar
TAM slide have not come to pass. But that crazy question just might have come to pass. And from
NVIDIA's revenue and earnings standpoint, definitely has. It's just wild. All right,
so how did we get here? Let's rewind and tell the story. So back in 2012, there was the big bang moment of artificial intelligence,
or as it was more humbly referred to back then, machine learning, and that was AlexNet. We talked
a lot about this on the last episode. It was three researchers from the University of Toronto
who submitted the AlexNet algorithm to the ImageNet computer science competition.
Now, ImageNet was a competition where you would look at a set of 14 million images that
had been hand-labeled with what the pictures were of, like of a strawberry or a cat or
a dog or whatever.
And David, you were telling me it's the largest ever use of Mechanical Turk up to that point
was to label the ImageNet dataset?
Yeah, it's wild.
I mean, until this competition and until AlexNet, there was no machine learning algorithm that could accurately label images.
So thousands of people on Mechanical Turk got paid however much, two bucks an hour, to label these images.
Yeah, and if I'm remembering from our episode, basically what happened is the
AlexNet team did way better than anybody else had ever done. The complete step changed better.
I think the error rate went from mislabeling images 25% of the time to suddenly only mislabeling them
15% of the time. And that was like a huge leap over the tiny incremental progress that had been
made along the way. You're spot on. And the way that they did it, and what completely changed the fortunes of
the internet, of Google, of Facebook, and certainly of NVIDIA, was they actually used
old algorithms, a branch of computer science and artificial intelligence called neural networks,
specifically convolutional neural networks,
which had been around since the 60s, but they were really computationally intensive to train. And so nobody thought it would be practical to actually train and use these things, at least not
anytime soon or in our lifetimes. And what these guys from Toronto did is they went out probably to their local Best Buy
or equivalent in Canada.
They bought two GeForce GTX 580s,
which were the top of the line cards at the time.
And they wrote their algorithm,
their convolutional neural network in CUDA
in NVIDIA's software development platform for GPUs.
And by God, they trained this thing
on like $1,000 worth of consumer-grade hardware.
And basically, the algorithm that other people
had been trying over the years
just wasn't massively parallel
the way that a graphics card sort of enables.
So if you actually can consume the full compute
of a graphics card,
then perhaps you could run some unique novel
algorithm and do it in a fraction of the time and expense that it would take in these supercomputer
laboratories. Yeah, everybody before was trying to run these things on CPUs. CPUs are awesome,
but they only execute one instruction at a time. GPUs, on the other hand, execute hundreds or thousands of instructions
at a time. So GPUs, NVIDIA graphics cards, accelerated computing, what Jensen and the
company likes to call this, you can really think of it like a giant Archimedes lever.
Whatever advances are happening in Moore's law and the number of transistors on a chip,
if you have an algorithm that can run in
parallel, which is not all problem spaces, but many can, then you can basically lever up Moore's
law by hundreds of times or thousands of times or today tens of thousands of times and execute
something a lot faster than you otherwise could. And it's so interesting that there was this first
market called graphics that was obviously parallel, where every pixel on a screen is not sequentially
dependent on the pixel next to it. It literally can be computed independently and output to the
screen. So you have however many tens of thousands or now hundreds of thousands of pixels on a screen
that can all actually be done in parallel. And little did NVIDIA realize, of course, that AI and crypto and all this other
linear algebra, matrix math-based things that turned into accelerated computing, pulling things
off the CPU and putting them on GPU and other parallel processors, was an entire new frontier
of other applications that could use the very same technology they had pioneered for graphics. Yeah, it was pretty useful stuff. And this AlexNet moment,
and these three researchers from Toronto, kicked off, Jensen calls it, and he's absolutely right,
the Big Bang moment for AI. So David, the last time we told this story in full, we talked about
this team from Toronto.
We did not follow what this team of three went on to do afterwards.
Yeah.
So basically what we said was, it turned out that a natural consequence of what these guys were doing was, oh, actually you can use this to surface the next post in a social
media feed on like an Instagram feed or the YouTube feed or something
like that. And that unlocked billions and billions of value. And those guys and everybody else
working in the field, they all got scooped up by Google and Facebook. Well, that's true. And then
as a consequence of that, Google and Facebook started buying a lot of NVIDIA GPUs. But turns
out there's also another chapter to that story that we completely skipped over. And it starts with the question you asked, Ben.
Who are these people?
Yes.
So the three people who made up the AlexNet team were, of course, Alex Krizhevsky, who was a PhD student under his faculty advisor, the legendary computer science professor Geoff Hinton.
I have an amazing piece of trivia about Jeff Hinton.
Do you know who his great-great-grandparents were?
No, I have no idea.
He is the great-great-grandson of George and Mary Boole.
You know, like Boolean algebra and Boolean logic?
This guy was born to be a computer science researcher.
Oh, my God.
Right?
Foundational stuff for computation and computer science.
I also didn't know there were people named Boole,
that that's where that came from.
That's hilarious.
Yeah.
You know, the AND, OR, XOR, NOR operators.
That comes from George and Mary.
Wild.
So he's the faculty advisor.
And then there was a third person on the team,
Alex's fellow PhD student in this lab, one Ilya Sutskever. And if you know where we're going with
this, you are probably jumping up and down right now in your seat. Ilya is the co-founder and
current chief scientist of OpenAI. Yes. So after AlexNet, Alex, Geoff, and Ilya do the very natural thing.
They start a company.
I don't know what they were doing in the company, but it made sense to start one.
And whatever they did, it was going to get acquired real fast.
By Google within six months.
So they get scooped up by Google.
They join a bunch of other academics and researchers
that Google has been monopolizing, really, in the field. Three specifically, Greg Corrado,
Jeff Dean, and Andrew Ng, the famous Stanford professor. The three of them had just formed the
Google Brain team within Google to turbocharge all of this AI work that has been unleashed by AlexNet.
And of course, to turn it into huge amounts of profit for Google.
Turns out, individually serving advertising that's perfectly targeted on the internet
through Facebook or Google or YouTube is an enormously profitable business
and one that consumes a whole lot of NVIDIA GPUs.
Yes. So about a year later, Google also acquires DeepMind, famously. And then right around the same time, Facebook scoops up computer science professor Yann LeCun, who also is a legend in the field. And the two of them basically establish a duopoly on leading AI researchers. Now, at this point, nobody is mistaking what
these companies and these people are doing for true human-level intelligence or anything close
to it. This is AI that is very good at narrow tasks, like we talked about social media feed
recommendations. So the Google Brain team and Jeff and Alex and
Ilya, one of the big projects they work on is redoing the YouTube algorithm. And this is when
YouTube goes from like money losing, you know, crazy thing that Google acquired to the just
absolute juggernaut that it is today. I mean, back then in like 2013, 2014, we did our YouTube episode not that long after.
The majority of views of YouTube videos were embeds on other web pages.
This is when they build it into a social media site.
They start the feed.
They start autoplay.
All this stuff is coming out of AI research.
Some of the other stuff that happens at Google, famously after they acquired DeepMind, DeepMind built a bunch of algorithms to save on cooling costs. And Facebook, of course,
they probably had the last laugh in this generation because they're using all this
work and Yann LeCun is doing his thing and hiring lots of researchers there.
This is just a couple of years after they acquired Instagram. Man, we need to go back
and redo that episode
because Instagram would have been a great acquisition anyway,
but it was AI-powered recommendations in the feed
that made that into a $100, $200, $500 billion asset for Facebook.
And I don't think you're exaggerating.
I think that is literally what Instagram is worth to Meta now.
By the way, I have bought a lot of things on Instagram ads, so the targeting works. Google Brain during this period, I don't think this even includes DeepMind, just the gains from the Google Brain team alone in terms of profits to Google, more than funded everything they were
doing in Google X. Which has there ever been anything profitable out of Google X? Google Brain.
Yeah, I mean, yeah. We'll leave it at that. So this takes us to 2015, when a few people in Silicon Valley start to realize that this Google-Facebook-AI duopoly is actually a really, really big problem. And most people had no idea about this. This is really visionary of these two people. And not just a problem for the other big tech companies,
because you could make the argument it's a problem
because Siri's terrible.
All the other companies that have lots of consumer touchpoints
have pretty bad AI at the time.
But the concern is for a much greater reason.
I think there are three levels of concern here.
One, obviously, is the other tech companies.
Then there's the problem of startups. This is terrible
for startups. How are you going to compete with Google and Facebook when this is the primary
value driver of this generation of technology? I mean, there really is another lens to view
what happened with Snap, what happened with Musical.ly and having to sell themselves to ByteDance and
becoming TikTok and going to the Chinese. Maybe it was business decisions, maybe it was execution
or whatever that prevented those platforms from getting to independent scale. Snap's a public
company now, but like it's no Facebook. Maybe it was that they didn't have access to the same AI
researchers that Facebook and Google had. That feels like an interesting question.
It's probably a couple steps too far in the conclusion,
but still sort of a fun straw man to think about.
A fun straw man.
Nonetheless, this is definitely a problem.
The third layer of the problem is just like,
this sucks for the world
that all these people are locked up in Google and Facebook.
This is probably a good time to mention
this founding of OpenAI was motivated by the desire to find AGI or artificial general intelligence first before the big tech
companies did. And DeepMind was the same thing. It was going to be this winding and circuitous path
at the time since really nobody knew then or knows now the best path to get to AGI. But the big idea
at OpenAI's founding was whoever figures out and finds AGI first
will be so big and so powerful so quickly,
they'll have an immense amount of control.
And that is best in the open.
So these two people,
who are quite concerned about this,
convene a very fateful dinner in 2015.
Of all places.
Is it the Rosewood? The Rosewood Hotel
on Sand Hill Road. Naturally. It would have been way better if it were a Denny's or Bucks
in Woodside or something like that. But it does actually just show like where the seeds of OpenAI come from. It is very different than this sort of organic scrappy way that the NVIDIAs of the
world got started. You know, this is powers on high and existing money saying, no, we need to will something into
existence. Yep. So of course, those two shadowy figures are Elon Musk and Sam Altman, who at the
time was president of Y Combinator. So they get this dinner together and they invite basically
all of the top AI researchers at Google and Facebook.
And they're like, yo, what is it going to take for you to leave and to break this duopoly?
And the answer from almost all of them is nothing.
You can't.
Why would we ever leave?
We're happy as clams here.
We've gotten to hire the people that we want.
We've built these great teams.
There's a money spigot pointed at our face.
Right.
Not only are we getting paid
just ungodly amounts of money,
but we get to work directly
with the best AI researchers in the field.
If we were still at academic institutions,
you know, say you're at
the University of Washington, amazing academic institution for computer science, one of the top in the world, or the University of Toronto, where these guys came from, you're still in a
top in the world, or the University of Toronto, where these guys came from, you're still at a
fragmented market. If you go to Google or you go to Facebook, you're with everybody.
Yep.
So the answer is no from basically everybody, except there's one person who's intrigued by Elon and
Sam's pitch. And to quote an amazing Wired article from the time by Cade Metz that we will link to
in our sources, quote, the trouble was so many of the people most qualified to solve all these AI
problems were already working for Google and Facebook, and no one at the dinner was quite
sure that these thinkers could be lured to a new startup, even if Musk and Altman were behind it.
But one key player was at least open to the idea of jumping ship. And then they have a quote from
that key player. I felt there were risks involved, but I also felt it would be a very interesting
thing to try. And that key player was Ilya Sutskever. Yep. So after the dinner,
Ilya leaves Google and signs up to become, as we said, co-founder and chief scientist of a new
independent AI non-profit research lab backed by Elon and Sam. OpenAI. Okay, listeners, now is a
great time to tell you about longtime friend of the show, ServiceNow.
Yes, as you know, ServiceNow is the AI platform for business transformation.
And they have some new news to share.
ServiceNow is introducing AI agents.
So only the ServiceNow platform puts AI agents to work across every corner of your business. Yep. And as you know from listening to us all year,
ServiceNow is pretty remarkable
about embracing the latest AI developments
and building them into products for their customers.
AI agents are the next phase of this.
So what are AI agents?
AI agents can think, learn, solve problems,
and make decisions autonomously.
They work on behalf of your teams,
elevating their productivity and
potential. And while you get incredible productivity enhancements, you also get to stay in full control.
Yep. With ServiceNow, AI agents proactively solve challenges from IT to HR, customer service,
software development, you name it. These agents collaborate, they learn from each other,
and they continuously improve, handling the busy work across your business so that your teams can actually focus on what truly matters.
Ultimately, ServiceNow and agentic AI is the way to deploy AI across every corner of your enterprise.
They boost productivity for employees, enrich customer experiences and make work better for everyone.
Yep. So learn how you can put AI agents to work for your people by clicking the
link in the show notes or going to servicenow.com slash AI dash agents. Okay, so David, OpenAI is
formed. It's 2015. Here we are eight years later and we have ChatGPT. Super linear path from there
to here, right? Turns out, uh, no. So as we were talking about a little bit,
AI at this point in time, super good for narrow use cases, looks nothing like GPT-4 today.
The capabilities that it had were pretty limited. And one of the big reasons was that the amount of data that you
could practically train these models on was pretty limited. So the AlexNet example, you're talking
about 14 million images. In the grand scheme of the internet, 14 million images is a drop in the bucket. And this was both a
hardware and a software constraint. On the software side, we just didn't actually have the algorithms
to sort of suppose that we could be so bold to train one single foundational model on the whole
internet. Like, it wasn't a thing. Yeah, that was a crazy idea. Right. People were excited about the concept of language models,
but we actually didn't know how we could algorithmically get it done. So in 2015,
Andrej Karpathy, who was then at OpenAI and went on to lead AI for Tesla and is actually now back
at OpenAI, writes this seminal blog post called The Unreasonable Effectiveness
of Recurrent Neural Networks. And David, I don't think we're going to go into it on this episode,
but note that recurrent neural networks are a little bit of a different thing than
convolutional neural networks, which was the 2012 paper. The state of the art had evolved.
Yes. And right around that same time, there is also a video that hits YouTube a little bit later in 2016 that is actually
on NVIDIA's channel. And it has two people in this very short one minute and 45 second video.
One is a young Ilya Sutskever, and two is Andrej Karpathy. And here is a quote from Andrej
from that YouTube video. One algorithm I'm excited about is a language model. The idea that
you can take a large amount of data and you feed it into the network and it figures out the pattern
in how words follow each other in sentences. So for example, you could take a large amount of data
on how people talk to each other on the internet. You can train basically a chat bot, but you can do
it in a way that the computer learns how language works
and how people interact. Eventually, we'll use that to talk to computers just like we talk to
each other. Wow, this is 2015. This is two years before the transformer, while Karpathy is at
OpenAI. He both comes up with the idea or espouses the idea of a chat bot, so that sort of had already
been discussed. But even before we had the transformer, the method to actually pull this off,
he sort of had the idea that, and there's an important part here, it figures out the pattern
in how words follow each other in sentences. So there's this idea that the very structure of language and the way to interpret
knowledge is actually embedded in the training data itself rather than requiring labeling.
This is so cool. So at Spring GTC this year, Jensen did a fireside chat with Ilya. And it's
amazing. You should go watch the whole thing. But in it, this question comes
up. Jensen kind of poses as a straw man, like, hey, some people say that GPT-3, 4, chat GPT,
everything going on, all these LLMs, they're just probabilistically predicting the next word in a
sentence. They don't actually have knowledge. And Ilya has this amazing response to
that. He says, okay, well, consider a detective novel. Yes. At the end of the novel, the detective
gathers everyone together in a room and says, I am now going to tell you all the name of the person who committed the crime, and that person's name is blank. The more accurately
an LLM predicts that next word, i.e. the name of the criminal, ipso facto, the greater its
understanding not only of the novel, but of all general human-level knowledge and intelligence, because you need all of your experience in the
world and as a human to be able to guess who the criminal is. And the LLMs that are out there today,
GPT-3, GPT-4, Llama, Bard, these others, they can guess who the criminal is.
Ooh, yeah. Put a pin in that. Understanding versus predicting.
It's a hot topic du jour.
So, David, is now a good time to fast forward two years to 2017 to the Transformer paper.
Absolutely.
Ben, tell us about the Transformer.
Okay.
So, Google, 2017, Transformer paper.
Paper comes out.
It's called Attention is All You Need.
And it's from the Google brain team, right?
Yes.
That Ilya just left.
Just left, two years before to start OpenAI.
So machine learning on natural language,
just to set the table here,
had long been used for things like autocorrect
or foreign language translation.
But in 2017, Google came out
with this paper and discovered a new model that would change everything for these fields and
unlock another one. So here is the scenario. You're translating a sentence from English to French.
You could imagine that a way to do this would be one word at a time, in order. But for anyone who's
ever traveled abroad and tried to do this, you know
that words are sometimes rearranged in different languages. So that's a terrible way to do it.
You know, United States in Spanish is Estados Unidos, so failure on the very first word in
that example. So enter this concept of attention, which is a key part of this research paper.
So this attention, this fairly magical component of the transformer paper,
it literally is what it sounds like. It is a way for the model to attend to different areas of the
input text at different times. You can look at a large amount of context while considering what
word to pick next in your translation. So for every single word that you're about to output in French,
you can look over the entire set of inputted words to figure out what words you should weight
heavily in your decision for what to do next. This is why AI and machine learning was so
narrowly applicable before. If you anthropomorphize it and you think of it like a human, it was like a human with a very, very short attention span.
Yes.
Now, here's the magical part.
While it does look at the whole input text to consider what the next word should be,
it doesn't mean that it throws away the notion of position entirely.
It uses a technique called positional encoding, so it doesn't forget the position of the
words altogether.
So it's got
this cool thing where it weights the important part relevant to your particular word and it
still understands position. So remember I said the attention mechanism looks over the entire input
every time it's picking what word to output. That sounds very computationally hard. Yes.
In computer science terms, this means that the attention mechanism is O of n squared.
Oh, that's giving me the heebie-jeebies back to my intro CS classes in college.
Oh, just wait till we get through this episode. It gets deeper. So obviously, yes,
traditionally you'd say this is very, very inefficient. And it actually means that the
larger your context window, aka token limit, aka prompt length, gets, the more computationally expensive
it gets on a quadratic basis. So doubling your input means quadrupling the cost to compute an
output, or tripling your input means nine times the cost. It gets real gnarly. Yeah, it gets real
expensive real fast. But GPUs to the rescue. The amazing news for us here is that these transformer
comparisons can be done in parallel. So even though there are lots of them to do, if you have
big GPU chips with tons of cores, you can do them all at exactly the same time. And previous
technologies to accomplish this, like recurrent neural networks or LSTMs, long short-term memory networks, which is a type of
recurrent neural network, etc. Those required knowing the output of each step before beginning
the next one, before you picked the next word. So in other words, they were sequential since they
depended on the previous word. Now with transformers, even if your string of text that you're inputting
is a thousand words long,
it can happen just as quickly, in humanly measurable time, as if it were 10 words long,
supposing that there were enough cores in that big GPU. So the big innovation here is you could now train sequence-based models in a parallel way.
You couldn't train models of this size at all before, let alone cost-effectively.
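To make that O(n squared) point concrete, here is a minimal sketch in C of the score computation at the heart of attention. It is purely illustrative; the function and variable names are made up here, and real implementations add softmax, scaling, and many optimizations. Each of the n output positions computes a dot product against each of the n input positions, so there are n times n scores, and every one of them is independent of all the others.

    /* Illustrative sketch only: compute the n x n attention score matrix.
       queries and keys are n x d matrices stored row-major.
       Doubling n quadruples the number of entries in scores. */
    void attention_scores(const float *queries, const float *keys,
                          float *scores, int n, int d) {
        for (int i = 0; i < n; i++) {           /* each output position...   */
            for (int j = 0; j < n; j++) {       /* ...attends to every input */
                float dot = 0.0f;
                for (int k = 0; k < d; k++) {
                    dot += queries[i * d + k] * keys[j * d + k];
                }
                scores[i * n + j] = dot;        /* one of n * n independent entries */
            }
        }
    }

On a CPU these loops run one multiply at a time; on a GPU, each (i, j) pair can be handed to its own thread and computed simultaneously, which is exactly the parallelism being described here.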
Yeah, this is huge and probably for all listeners out there starting to sound very familiar to the world that we live in today. Yeah, I sort of did a sleight of hand there morphing translation to using words like context window and token length.
You can kind of see where this is going.
Yep.
So this Transformer paper comes out in 2017.
The significance is huge. But for whatever reason, there's a window of time where the rest of the world doesn't quite realize it.
So Google obviously knows how important this is. And there's like a year where Google's AI work,
even though Ilya has left and OpenAI is a thing now,
accelerates again beyond anybody else in the field. So this is when Google comes out with
Smart Compose in Gmail, and they do that thing where they have an AI bot that'll call local
businesses for you. Remember that demo from I/O that they did? Did that ever ship? I don't know.
Maybe it did. I mean, this is Google here.
Like, the capabilities are there.
The product sense,
not as much.
This is when they really
start investing in Waymo.
But again,
where it really manifests
is just back to serving ads
and search
and recommending YouTube videos.
Like, they're just crushing it
in this period of time.
OpenAI and everyone else, though,
they haven't adopted
transformers
yet. They're kind of stuck in the past. And they're still doing these really researchy
computer vision projects. So like this is when they build a bot to play Dota 2, Defense of the
Ancients 2, the video game. And super impressive stuff. Like they beat the best Dota players in
the world at Dota by literally just consuming computer vision, like consuming
screenshots and inferring from there. And that's a really hard problem because Dota 2 is not a game
where you get to see the whole board at once. So it has to do a lot of like really intelligent
construction of the rest of the game based on just a single player's worth of input. So it's
unbelievably cutting edge research. For the past generation. It's a faster horse, basically. Maybe, yeah.
I mean, they were also doing stuff like Universe,
which was the 3D-modeled world to train self-driving cars.
You don't really hear anything about that anymore,
but they built this whole thing.
I think it was using Grand Theft Auto as the environment,
and then it was doing computer vision training for cars
using the GTA world.
I mean, it was crazy stuff, but it was kind of scattershot.
Yeah, it was scattershot. And I guess what I'm saying is, it was still in this narrow
use case world. They weren't doing anything approaching GPT at this point in time. Meanwhile,
Google had kind of moved on.
Yep.
Now, one thing I do want to say in defense of OpenAI
and everybody else in the field at the time,
they didn't just have their heads in the sand.
To do what transformers enabled you to do,
which Ben, you're going to talk about in a sec,
cost a lot in computing power.
GPUs and NVIDIA and the transformer
made it possible.
But to work with the size of models you're talking about, you're talking about spending an amount of money
that, certainly for a non-profit and really anybody except Google, was untenable.
Right. It's funny, David, you made this leap to expensive and large models. All we were doing
before was merely talking
about translating one sentence to another. The application of a transformer does not necessarily
require you to go and consume the whole internet and create a foundational model.
But let's talk about this. Transformers lend themselves quite well, as we now know,
to a different type of task. So for a given input sentence, instead of translating to a target
language, they can also be used as next word predictors to figure out what word should come
next in a sequence. You could even do this idea of pre-training with some corpus of text to help
the model understand how it should go about predicting that next word. So backing up a little
bit, let's go back to the recurrent neural networks,
the state-of-the-art before transformers. Well, they had this problem in addition to the fact
that they were sequential rather than parallel. They also had a very short context window. So you
could do a next word predictor, but it wasn't that useful because it didn't know what you were saying
more than a few words ago. By the time you'd get to the end of the paragraph, it would forget what was happening at the beginning. It couldn't sort
of hold on to all that information at the same time. So this idea of a next word predictor that
was pre-trained with a transformer could really start to do something pretty powerful, which is
consume large amounts of text and then complete the next word based on a huge amount of context.
We're starting to come up with this idea of a large language model. And we're going to flash
forward here just for a moment to do some illustration, and then we'll come back to the
story. In GPT-1, the first OpenAI model, this generative pre-trained transformer model, GPT, it used unsupervised pre-training, which basically
meant that as it was consuming this corpus of language, it was unlabeled data. The model was
inferring the structure and meaning of language merely by reading it, which is a very new concept
in machine learning. The canonical wisdom is that you needed extremely structured data to train your smallish model on,
because how else are you going to learn what the data actually means?
This was a new thing.
You can learn what the data means from the data itself.
It's like how a child consumes the world, where only occasionally does their parent say,
no, no, no, you have that wrong.
That's actually the color red.
But most of the time, they're just self-teaching by observing the world.
As a parent of a two-year-old, can confirm.
And then a second thing happens after this unsupervised pre-training step,
where you then have supervised fine-tuning.
The unsupervised pre-training used a large corpus of text
to learn the sort of general language,
and then it was fine-tuned on labeled data sets for specific tasks that you sort of really want the model to be actually
useful for.
So to give people a sense of why we're saying that the idea of training on very, very, very
large amounts of data here is crazy expensive, GPT-1 had roughly 120 million parameters that it was trained on.
GPT-2 had 1.5 billion. GPT-3 had 175 billion. And GPT-4, OpenAI hasn't announced, but it's rumored
that it has about 1.7 trillion parameters that it was trained on.
This is a long way from AlexNet here.
It's scaling like NVIDIA's market cap.
There is this interesting discovery, basically,
that the more parameters you have,
the more correctly you can predict the next word.
These models were basically bad sub-10 billion parameters. I mean, maybe even
sub-100 billion parameters. They would just hallucinate or they would be nonsensical.
It's funny when you look at some of the 1 billion parameter models, you're like,
there is no chance that turns into anything useful ever. But by merely adding more training data and
more parameters, it just gets way, way better. There's this weirdly emergent property where
transformer-based models scale really well due to the parallelism. So as you throw huge amounts
of data at training them... You can also throw huge amounts of NVIDIA GPUs at processing that.
Exactly. And the output sort of unexpectedly gets magically better. I mean, I know I keep saying
that, but it is like,
wait, so we don't change anything about the structure. We just give it way more data and
let it run these models for a long time and make the parameters of the model way bigger.
And like, no researchers expected them to reason about the world as well as they do,
but it just kind of happened as they were exploring larger and larger models. So in defense of OpenAI, they knew all this, but the amount of money that you would have to spend
to buy GPUs or to rent GPUs in the cloud to train these models is prohibitively expensive.
And, you know, even Google at this point in time, this is when they start building
their own chips, TPUs. Because
they're still buying tons of hardware from NVIDIA, but they're also starting to source their own
here. Yeah. And importantly, at this point they're getting ready to release TensorFlow to the
public. So they have a framework where people can develop for stuff. And they're like, look,
if people are developing using our software, then maybe it should run on our hardware that's
optimized to work with that software.
So they actually do have this very plausible story around why their hardware, why their software framework.
It was kind of a surprising move when they open sourced it because people were like, gasp, you know, why is Google giving away the farm for free here?
But this was three, four years early and a very prescient move to really get a lot of people using Google architecture compute
at scale. Yep. All within Google Cloud. Yep. So with this, it starts to look like maybe this whole
open AI boondoggle didn't actually accomplish anything. And the world's AI resources are more
than ever just locked back into Google. So in 2018, Elon gets super frustrated by all this,
basically throws a hissy fit and quits and peaces out of OpenAI. There's a lot of drama around this
that we're not going to cover now. He may or may not have given an ultimatum to the rest of the
team that he would either take over and run things or leave. Who knows? It's Elon. But whatever happened, this turns out to be a
major catalyst for the rest of the OpenAI team and truly a history turning on a knife point
moment. It was also a probably super bad decision by Elon. But again, story for another day.
So there's this great explanation of what happened in the Semafor piece that we'll link to in our
sources.
The author says,
That fall, it became even more apparent to some people at OpenAI that the costs of becoming a cutting-edge AI company were going to go up.
Google Brain's Transformer had blown open a new frontier where AI could improve endlessly.
But that meant feeding endless data to train it, a costly endeavor.
OpenAI made a big decision to pivot toward these transformer models. On March 11th, 2019, OpenAI announced it was creating a for-profit entity so it could raise
enough money to pay for all the compute power necessary to pursue the most ambitious AI models.
We want to increase our ability to raise capital while still serving our mission,
and no pre-existing legal structure that we know of strikes the right balance,
the company wrote at the time. OpenAI said it was capping profits for investors with any excess going back to the original non-profit. Less than six months later, OpenAI took a $1
billion investment from Microsoft. Yeah, and I believe this is mostly, if not all,
due to Sam Altman's influence and taking over here. So, you know, on the one hand,
you can look at this sort of skeptically and say, okay, Sam, you took your nonprofit and you
converted it into an entity worth $30 billion today. On the other hand, knowing this history
now, this was kind of the only path they had. They had to raise money to get the computing
resources to compete with Google. And Sam goes out and does these landmark deals with Microsoft.
Yeah, truly amazing. And their opinion at the time of why they're doing this is, basically,
this is going to be super expensive. We still have the same mission to ensure that artificial
general intelligence benefits all of humanity, but it's going to be ludicrously expensive to get there.
And so we need to basically be a for-profit enterprise and a going concern and have a business that funds our research eventually to pursue that mission.
Yep.
So 2019, they do the conversion to a for-profit company.
Microsoft invests a billion dollars, as you say, and becomes the exclusive cloud provider for OpenAI,
which is going to become highly relevant here for NVIDIA. More on that in a minute.
June of 2020, GPT-3 comes out. In September of 2020, Microsoft licenses exclusive commercial
use of the underlying model for Microsoft products. 2021, GitHub Copilot comes out. Microsoft invests
another $2 billion in OpenAI. And then, of course, this all leads to November 30th, 2022.
In Jensen's words, the AI heard around the world. OpenAI comes out with ChatGPT. As you said, Ben,
the fastest product in history to reach 100 million users. In January
2023, this year, Microsoft invests another $10 billion in OpenAI, announces they're integrating
GPT into all of their products. And then in May of this year, GPT-4 comes out. And that basically
catches us up to today. We eventually need to go do a whole nother episode
about all the details here of OpenAI and Microsoft. But for today, the salient points are, one,
thanks to all this, generative AI as a user-facing product emerges as this enormous opportunity. Two, to facilitate that happening, you needed enormous amounts of GPU
compute, obviously benefiting NVIDIA. But just as important, three, it becomes obvious now
that the predominant way that companies are going to access and provide that compute
is through the cloud. And the combination of those
three things turns out to be basically the single greatest moment that could ever happen for NVIDIA.
Yes. So you're teeing all of this up. And so far I'm thinking, so this is like the OpenAI
and Microsoft episode. Like, what does this have to do with NVIDIA? And God, there's a great NVIDIA story here to be told.
So let's get to the NVIDIA side of it.
All right, listeners.
Our next sponsor is a new friend of the show, Huntress.
Huntress is one of the fastest growing
and most loved cybersecurity companies today.
It's purpose-built for small to mid-sized businesses
and provides enterprise-grade
security with the technology, services, and expertise needed to protect you. They offer a
revolutionary approach to managed cybersecurity that isn't only about tech, it's about real people
providing real defense around the clock. So how does it work? Well, you probably already know this,
but it has become pretty trivial for an
entry-level hacker to buy access and data about compromised businesses. This means cybercriminal
activity towards small and medium businesses is at an all-time high. So Huntress created a full
managed security platform for their customers to guard from these threats. This includes endpoint
detection and response, identity threat detection
and response, security awareness training, and a revolutionary security information and event
management product that actually just got launched. Essentially, it is the full suite of great
software that you need to secure your business, plus 24-7 monitoring by an elite team of human
threat hunters in a security operations center to stop
attacks that really software-only solutions could sometimes miss. Huntress is democratizing security,
particularly cybersecurity, by taking security techniques that were historically only available
to large enterprises and bringing them to businesses with as few as 10, 100, or 1,000
employees at price points that make sense for them.
In fact, it's pretty wild. There are over 125,000 businesses now using Huntress,
and they rave about it from the hilltops. They were voted by customers in the G2 rankings as
the industry leader in endpoint detection and response for the eighth consecutive season,
and the industry leader in managed detection
and response again this summer.
Yep.
So if you want cutting-edge cybersecurity solutions backed by a 24-7 team of experts
who monitor, investigate, and respond to threats with unmatched precision, head on over to
huntress.com slash acquired or click the link in the show notes.
Our huge thanks to Huntress.
Okay, so NVIDIA.
Okay, so we just said these three things
that we've painted the picture of
on the first part of the episode here,
that A, generative AI is like possible,
a thing, and it's now getting traction.
B, it requires an unbelievably massive amount
of GPU compute to train.
And three, it looks like the predominant
way that companies are going to use that compute is going to be in the cloud. The combination of
these three things is, I think, the most perfect example we've ever covered on this show of the
old saying about luck being what happens when preparation meets opportunity for NVIDIA here. So obviously the opportunity is generative AI, but on the preparation front, NVIDIA has literally
just spent the past five years working insanely hard to build a GPU-accelerated computing platform to, in their minds, replace the old
CPU-led, Intel-dominated x86 architecture in the data center. And for many years, I mean,
they were getting some traction, right? And the data center segment was growing for NVIDIA, but
people were like, okay, you want this to happen, but why is it going to happen?
Right.
There's these little workloads here and there that we'll toss you, Jensen, that we think
can be accelerated by your cool GPUs.
And then crazy things like crypto happened.
And there were AI researchers in academic labs that were using them as supercomputers. But for the longest time,
the data center segment of NVIDIA, it just wasn't clear that organizations had enormous
parts of their software stack that they were going to shift to GPUs. Like, why? What's driving this?
And now we know what could be driving it, and that is AI.
Not only could be, but if you look at their most
recent quarter, absolutely freaking is. Okay, so now it begs the question, why is it driving it?
And David, are you open to me giving a little computer science lecture on computer architecture?
Ooh, please do. All right, I need to do my best professor impression here.
Dude, I loved computer science in college. They were my favorite classes.
I will say doing these episodes, this one and TSMC, it really does bring back the thrill of being in a CS lecture and being like, oh, that's how that works. Like, it's just really fun.
So let's take a step back and consider the classic computer architecture, the von Neumann
architecture. Now the von Neumann architecture is what most computers, most CPUs, are based on today
where they can store a program in the computer's memory and run that program. You can imagine why
this is the dominant architecture, because otherwise we'd need a computer that
is specialized for every single task. The key thing to know is that the memory of the computer
can store two different things. The data that the program uses and the instructions of the program
itself, the literal lines of code. And in this example we're about to paint, all of this is wildly simplified because I
don't want to get into caching and speeds of memory and, you know, where memory is located,
not located. So let's just keep it simple. So the processor in the von Neumann architecture
executes this program written in assembly language, which is the language that compiles
down to the machine code that the processor itself can speak. So it's written in an instruction set architecture,
an ISA from ARM, for example.
Or Intel before that.
Yes. And each line of the program is very simplistic. So we're going to consider this
example where I'm going to use some assembly language pseudocode to add the numbers 2 and 3
to equal 5. Ben, are you about to program live on Acquired? Well, it's pseudo assembly language code.
So the first line is we're going to load the number 2 from memory. We're going to fetch it
out of memory, and we're going to load it into a register on the
processor. So now we've got the number two actually sitting right there on our CPU ready to do
something with. That's line of code number one. Two, we're going to load the number three in exactly
the same fashion into a second register. So we've got two CPU registers with two different numbers.
The third line, we're going to
perform an add operation, which performs the arithmetic to add the two registers together on
the CPU and store the value in some either third register or into one of those registers. So that's
a more complex instruction since it's arithmetic that we actually have to perform. But these are
the things that CPUs are very good at, doing math operations on data fetched from memory. And then
the fourth and final line of code in our example is we are going to take that five that has just
been computed and is currently held temporarily in a register on the CPU, and we're going to write
that back to an address in memory. So the four lines of code are load, load, add, store.
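As a rough illustration, here is that same toy program written in C, with comments mapping each statement onto the four pseudo-assembly steps. The register names and instruction mnemonics are hypothetical, and a real compiler would likely optimize all of this away, but it shows the shape of the von Neumann loop: fetch, compute, store.

    int x = 2;      /* the number 2, sitting at some address in memory     */
    int y = 3;      /* the number 3, sitting at some address in memory     */
    int result;     /* the memory location where the answer will be stored */

    void add_example(void) {
        int r1 = x;        /* 1. LOAD:  fetch 2 from memory into a register        */
        int r2 = y;        /* 2. LOAD:  fetch 3 from memory into a second register */
        int r3 = r1 + r2;  /* 3. ADD:   do the arithmetic on the two registers     */
        result = r3;       /* 4. STORE: write the 5 back to an address in memory   */
    }

Notice that three of the four steps are just moving data between memory and registers, which is the bottleneck about to be described.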
This all sounds familiar to me.
So you can see each of those four steps is capable of performing one and only one operation at a
time. And each of these happens with one cycle of the CPU. So if you've heard of gigahertz,
that's a billion cycles per second. So a one gigahertz computer could handle the simple
program that we just wrote 250 million times in a single second. But you can see something going on here. Three of our four clock cycles are taken up by loading and storing data to memory. And it is one of the central constraints of AI, or at least it has been historically.
Each step must happen in order and only one at a time.
So in this simple example, it actually would not be helpful for us to add a bunch more memory to this computer.
I can't do anything with it.
It's also only incrementally helpful to increase the clock speed.
If I double the clock speed, I can only execute the program twice as
fast. If I need like a million x speedup for some AI work that I'm doing, I'm not going to get it
there with just a faster clock speed. That's not going to do it. And it would of course be helpful
to increase the speed at which I can read and write to memory, but I'm kind of bound by the
laws of physics there. There's only so fast that I can transmit data over a wire.
Now, the great irony of all of this
is that the bottleneck actually gets worse over time,
not better,
because the CPUs get faster
and the memory size increases,
but the architecture is still limited.
So this one pesky single channel known as a bus,
I don't actually get to enjoy the performance gains
nearly as much as I should
because I'm
jamming everything through that one channel and it only gets to sort of be used one time
per every clock cycle. So the magical unlock, of course, is to make a computer that is not
a von Neumann architecture, to make programs executable in parallel and massively increase
the number of processors or cores. And that is
exactly what NVIDIA did on the hardware side, and all these AI researchers figured out how to
leverage on the software side. But interestingly, now that we've done that, David, the constraint
is not the clock speed or the number of cores anymore. For these absolutely enormous language models,
it's actually the amount of on-chip memory that concerns us.
I thought you were going to say,
and this is why the data center and what NVIDIA has been doing is so important.
Yes, there's this amazing video that we'll link to on the Asianometry YouTube channel
that we link to also on the TSMC episode,
but the constraint today is actually in
how much high-performance memory is available on the chip. These models need to be in memory all at the same time, and they take up hundreds of gigabytes. For a rough sense of scale, a GPT-3-class model with 175 billion parameters at two bytes per parameter is 350 gigabytes of weights alone, before any of the working memory that training needs. So while memory has scaled up, I mean, flashing all the way forward, the H100's on-chip RAM is like 80 gigabytes. The memory hasn't scaled up nearly as fast as the models have actually scaled in size. The memory requirements for training AI are just
obscene, so it becomes imperative to network multiple chips and multiple servers of chips
and multiple racks of servers of chips together into one single computer, and I'm putting computer and air quotes
there, in order to actually train these models. It's also worth noting, we can't make the memory
chips any bigger. Due to a quirk of the extreme ultraviolet photolithography that we talked about,
the EUV on the TSMC episode, chips are already the full size of the reticle. It's a physics and
wavelength constraint. You
really can't etch chips larger without some new invention that we don't have commercially viable
yet. So what it ends up meaning is you need huge amounts of memory, very close to the processors,
all running in parallel with the fastest possible data transfer. And again, this is a vast
oversimplification, but you kind of get
the idea of why all of this becomes so important. Okay, so back to the data center. And here's what
NVIDIA is doing that I don't think anybody else out there is doing and why it's so important for
them that all of this new generative AI world, this new computing era, as Jensen dubs it,
runs in the data center. So NVIDIA has done three things over the last five years. One,
and probably most importantly, related to what you're talking about, Ben,
they made one of the best acquisitions of all time back in 2020, and nobody had any idea. They bought a quirky little networking company
out of Israel called Mellanox. Well, it wasn't little. They paid $7 billion for it.
Okay, yeah. And it was already a public company, right?
It was, yep.
Yep. But it was definitely quirky. Now, what was Mellanox? Mellanox's primary product
was something called InfiniBand,
which we talked about a lot with Chase Lockmiller on our ACQ2 episode with him from Crusoe.
And actually, InfiniBand was an open standard managed by a consortium. There
were a bunch of players in it, but the traditional wisdom was, well, InfiniBand is way faster,
way higher bandwidth, a much more efficient way to transfer data around a data center.
At the end of the day, Ethernet is the lowest common denominator.
And so everyone had to implement Ethernet anyway.
And so most companies actually exited the market.
And Mellanox was kind of the only InfiniBand spec provider left.
Yeah.
So you said, wait, what is InfiniBand?
It is a competing standard to Ethernet.
It is a way to move data between racks in a data center.
And back in 2020, everybody was like, Ethernet's fine.
Why do you need more bandwidth than Ethernet between racks in a data center? What could ever require 3,200 gigabits a second of bandwidth running down a wire in a data center? Well, it turns out if you're trying to address hundreds, maybe more than hundreds of GPUs as one single compute cluster to train a massive AI model, yeah, you want really fast data interconnects between them. Right. People thought, oh, sure, for supercomputers, for these academic purposes. But
what the enterprise market needs in my shared cloud computing data center is Ethernet. And
that's fine. And most workloads are going to happen right there on one rack. And maybe,
maybe, maybe things will expand to multiple computers on that rack, but certainly they
won't need to network multiple
racks together. And NVIDIA steps in, and you got Jensen saying, hey dummies, the data center is the
computer. Listen to me when I tell you the whole data center needs to be one computer. And when you
start thinking that way, you start thinking, geez, we're really going to be cramming huge amounts of data through wires that are going between these racks?
How can we sort of think about them as if it's all sort of on-chip memory,
or as close as we can make it to on-chip memory, even though that's in a box located three feet away?
Yep. So that's piece number one
of NVIDIA's grand data center plan
over the last five years.
Piece number two
is in September 2022,
NVIDIA makes a quite surprising
announcement of a new chip.
Not just a new chip,
an entirely new class of chips
that they are making called the Grace
CPU processor. NVIDIA is making a CPU. This is like heretical.
But Jensen, I thought all computing was going to be accelerated. What are we doing here on
these ARM CPUs? Yeah, these Grace CPUs are not for putting in your laptop.
They are for being the CPU component of your entire data center solution, designed specifically from the ground up to orchestrate with these massive GPU clusters.
This is the end game of a ballet that has been in motion for 30 years.
Remember when the graphics card was subservient to the PCIe slot in Intel's motherboard?
And then eventually, you know, we fast forward to the future,
NVIDIA makes these GPUs that are these beautiful standalone boxes in your data center,
or perhaps these little workstations that sit next to you
while you're doing graphics programming, while you're directly programming your GPU.
And then, of course, they need some CPU to put in that,
so they're using AMD or Intel or they're licensing some CPU.
And now they're saying, you know what?
We're actually just going to do the CPU too.
So now we make a box,
and it's a fully integrated NVIDIA solution
with our GPUs, our CPUs, our NVLink between them,
our InfiniBand to network it to other boxes.
And, you know, welcome to the show.
One more piece to talk about the third leg of the stool there,
strategy, before we get to what it all means
that I think you're about to go to.
Spoiler alert, you say solution, I hear gross margin.
The third part of it is the GPUs.
Up until NVIDIA's current GPU generation, the Hopper generation of GPUs for the data center, there was only one GPU architecture at NVIDIA. And that same architecture and those same chips came from the same wafers made at TSMC. Some of them went to consumer gaming graphics cards, and some of those dies went to A100 GPUs in the data center. It was all the same architecture.
Starting in September of 2022, they broke out the two business lines into different architectures.
So there's the Hopper architecture, named after great computer scientist Grace Hopper.
I think rear admiral in the US Navy, Grace Hopper.
Get it?
Grace CPU, Hopper GPU, Grace Hopper.
The H100s.
That was for the data centers.
And then on the consumer side,
they start a whole new architecture called Lovelace,
after Ada Lovelace.
And that is the RTX 40XX.
So you buy a, you know, top of the line RTX 40 what have you gaming card right now.
That is no longer the same architecture as the H100s that are powering ChatGPT.
It's got its own architecture.
This is a really big deal because what they do with the Hopper architecture is they start using what's called chip-on-wafer-on-substrate: C-o-W-o-S, CoWoS. When you start talking to the real semi-nerds, that's when they start busting out the CoWoS conversation. This is when a certain segment
of our listeners are going to get really excited. So essentially what this is, back to this whole concept of memory being so important for GPUs
and for AI workloads, this is a way to stack more memory on the GPU chips themselves,
essentially by going vertical in how you build the chips. This is the absolute leading edge
technology that is coming out of TSMC. And by NVIDIA bifurcating their chip architectures
into a gaming segment that does not have this latest CoWoS technology, this allows them to monopolize like a huge amount of TSMC's capacity to make the CoWoS chips specifically for these
H100s, which allows them to have
way more memory than other GPUs on the market. Yes. So this gets to the point of why can't they
seem to make enough chips right now? Well, it's literally a TSMC capacity problem. So there's
these two components that are extremely related that you're talking about, the CoWoS chip-on-wafer-on-substrate and the high bandwidth memory. So there's this great post from SemiAnalysis where the author points out a 2.5D chip, which is basically how you assemble this CoWoS stuff to get the memory really close to the processor. And of course, 2.5D, it is literally 3D, but 3D means something else. It's even more 3D, so they came up with this 2.5D nomenclature. Anyway, the 2.5D chip
packaging technology from TSMC is where you take multiple active silicon dies, like the logic chips
and the stack of high bandwidth memory, and they stack them on one piece of silicon. And there's
more complexity here, but the important thing is CoWoS is the most popular technology for GPUs and AI accelerators
for packaging these chips.
And it's the primary method to co-package high bandwidth memory.
Again, remember, think back to the thing that's most important right now is get as much high
bandwidth memory as you can closest to the CPU next to the logic to get the most performance
for training and inference. So CoWoS represents right now about 10 to 15 percent of TSMC's capacity, and many of the facilities are custom built for
exactly these types of chips that they're producing. So when NVIDIA needs to reserve
more capacity, there's a pretty good chance that they've already reserved some large part of the
10 to 15 percent of TSMC's total footprint. And TSMC needs to go make more fabs
in order for NVIDIA to have access
to more CoWoS-capable capacity.
Yeah, which, as we know,
it takes years for TSMC to do this.
Yep.
There are more experimental things that are happening.
I would be remiss not to mention
there are actually experiments of doing compute in memory, so you avoid that whole lossy, expensive, energy-intensive thing of moving data over the copper wire to get it to the CPU.
All sorts of trade-offs in there,
but it is very fun to sort of dive
into the academic computer science world right now
where they really are rethinking,
like, what is a computer?
So these three things that NVIDIA has been building,
the dedicated Hopper data center GPU architecture,
the Grace CPU platform, the Mellanox powered
networking stack. They now have a full suite solution for generative AI data centers. And Ben,
when I say solution, I hear margins. But let's be clear, you don't need to offer some sort of solution to get high margins if you're NVIDIA. Price is set where supply meets demand, and they're adding as much
supply as they possibly can right now. Believe me, for all sorts of reasons, NVIDIA wants everyone
who wants H100s to have H100s. But for now, the price is kind of like, I'll write you a blank
check, and NVIDIA, you write whatever you want on the check.
So their margins are crazy right now,
just literally because there's way more demand than supply for these things.
Yes.
Okay, so let's break down what they're actually selling.
So like you were saying, Ben, of course you can, and lots of people do,
just go buy H100s.
You're like, I don't care about the Grace CPU.
I don't care about this Mellanox stuff. I'm running my own data center. I'm really good at it. And the people
who are most likely to do this are the hyperscalers, or as NVIDIA refers to them, the CSPs,
the cloud service providers. This is AWS. This is Azure. This is Google. This is Facebook for
their internal use. Like NVIDIA, don't give me one of these DGX servers that you assemble. Just give me
the chip and I will integrate it the way that I want to integrate it. I am a world-class data
center architect and operator. I don't want your solution. I just want your chips. So they sell
a lot of those. Now, NVIDIA, of course, has also been seeding new cloud providers out there in the
ecosystem, like our friends at Crusoe, also CoreWeave and Lambda Labs, if you've heard of them.
These are all new GPU-dedicated clouds that NVIDIA is working closely with.
So they're selling H100s and A100s before that to all these cloud providers.
But let's say you are an arbitrary company in the Fortune 500 that is not a technology company.
And my God, do you not want to miss the boat on generative AI?
And you've got a data center of your own.
Well, NVIDIA has a DGX for you.
Yes, they do.
Full GPU-based supercomputer solution in a box that you can just plug right into your data center.
And it just works.
There's nothing else on the market like this.
And it all runs CUDA.
It is all speaking the exact language
of the entire ecosystem of developers
that know exactly how to write software for this thing.
Which means that whatever developers you already had
who were working on AI or anything else,
everything they were working on
is just going to come right over
and run within your brand new shiny AI supercomputer because it all runs CUDA.
Amazing.
More on CUDA in a minute. But as we said, you say solution, I hear gross margin.
NVIDIA sells these DGX systems for like $150,000 to $300,000 a box. That's wild. And now with all these three new legs of the stool, Hopper,
Grace, and Mellanox, these systems are just getting way more integrated, way more proprietary,
and way better. So if you want to buy a new top-of-the-line DGX H100 system, the price starts at $500,000 for one box. And if you want to buy the DGX GH200
SuperPod, this is the AI wall that Jensen recently unveiled, the huge room full of AI.
And it's like 20 racks wide. Imagine an entire row at a data center. Yes, this is 256 Grace Hopper superchips all
connected together in one wall. They're billing this as the first turnkey AI data center that you
can just buy and can train a trillion parameter GPT-4 class model. The pricing on that is call us.
Of course it is. But I'm imagining like hundreds
of millions of dollars. Like I doubt it's a billion, but hundreds of millions easily.
Wild. Well, let's talk about the H100. I've got a baseball card right here on this insane thing
that they've built. So they launched it in September 2022. It's the successor to the A100. One GPU, one H100 costs $40,000. So that's
how you get to that price point you're talking about. That's what they're selling to Amazon and
Google and Facebook. Right. And you mentioned that $500,000 price point. The $500,000 is the
eight $40,000 H100s in a box with the Grace CPU and, you know, the nice bow around it.
Yep. Which, do the math on that. So, 8 times 40,000, that's $320,000. So, that's essentially
an extra $180,000 of margin that NVIDIA is getting out of selling the solution. It's an ARM CPU. It
doesn't cost them anything to make that. And these $40,000 H100s have margin of their own.
So every time they bundle more, there's more margin in the fully assembled.
I mean, that's literally bundle economics.
You are entitled to margin when you bundle more things together and provide more value for customers.
But just to illustrate the way that this pricing works,
the reason you want an H100 is they're up to 30 times faster than an A100, which mind you is only like two and a half years older. It is nine times faster for AI
training. The H100 is literally purpose-built for training LLMs, like the full self-driving video
stuff. It's super easy to scale up. It's got 18,500 CUDA cores. Remember when we were talking about
the von Neumann example earlier? That is one computing core that is able to handle
those four assembly language instructions. This one H100, which they're calling a GPU,
has 18,500 cores that are capable of running CUDA software. It's got 640 Tensor cores,
which are highly specialized for matrix multiplication. They have 80 streaming
multiprocessors. So what are we up to here? Close to 20,000 unique cores on this thing.
It's got meaningfully higher energy usage than the A100. I mean, a big takeaway here is that NVIDIA is massively increasing the power requirement
every time they come out with the next generation.
They're both figuring out how to push the edge of physics,
but they're also constrained by physics.
Some of this stuff is only possible with way more energy.
This thing weighs 70 pounds.
This is one H100.
Jensen makes a big deal about this every keynote that he gives.
It's got a quarter trillion transistors across 35,000 parts.
It requires robots to assemble it.
Not only does it require physical robots to assemble it, it requires AI to design it.
They're actually using AI to design the chips themselves now.
I mean, they have completely
reinvented the notion of what a computer is. Totally. And this is all part of Jensen's
pitch here to customers. Yes, our solutions are very expensive. However, he uses the line that
he loves, the more you buy, the more you save. If you could get your hands on some.
Right.
But what he means by that is like, okay, say you're McDonald's and you're trying to build
a generative AI so that, I don't know, customers can order or something.
You're using it in your business.
If you were going to try and build and run that in your existing data center infrastructure,
it would take so much time and cost you so much more over the long run in compute
than if you just went and bought my super pod here.
You can plug and play and have it up and running in a month.
Yep. And by the fact that this is all accelerated computing,
the things you're doing on it, you literally wouldn't be able to do otherwise
or might take you a lot more energy, a lot more time, a lot more cost.
There is a very
valid story to buying and running your workloads here or renting from any of the cloud service
providers and running your workloads here is more performant because the results just happen
much faster, much cheaper, or at all. Yep. You mentioned energy here. This is also
Jensen's argument. He's like, yes, these things take a ton of energy, but the alternative takes even more energy. So we are actually saving energy if you assume this
stuff is going to happen. Now, there's a bit of a caveat here in that it can't happen except on
these types of machines. So he enabled this whole thing, but he has a point.
Oh, I totally buy it, though. I mean, I think there's a very real case
around, look, you only have to train a model once, and then you can do inference on it over and over
and over again. I mean, the analogy I think makes a lot of sense for model training is to think
about it as a form of compression. LLMs are turning the entire internet of text into a much
smaller set of model weights.
This has the benefit of storing
a huge amount of usefulness in a small footprint,
but also enabling a very inexpensive amount of compute,
again, relatively speaking,
in the inference step for every time
that you need to prompt that model for an answer.
Of course, the trade-off you're making there
is once you encode all of the training data into the model,
it is very expensive to redo it, so you better do it right the first time or figure out little
ways to modify it later, which a lot of ML researchers are working on. But I always think
a reasonable comparison here is to compress a zillion-layer Photoshop file. For anybody that's
ever dealt with, oh, I've got a three-gigabyte Photoshop file. Well, that's not a thing you're
going to send to a client. You're going to compress it into a JPEG, and you're
going to send that. And the JPEG is
in many ways more useful as a
compressed facsimile of the
original layers comprising the Photoshop
file, but the trade-off
is you can never get from that compressed little JPEG
back to the original thing. So I think
the analogy here is like, you're saving
everyone from needing to make the full PSD
every time because you can just use the JPEG the vast, vast majority of the time.
So, hopefully we've now painted a relatively coherent picture of both the advances that made the generative AI opportunity possible, that it has truly become a real opportunity and why nvidia even above the obvious
reasons was just so well positioned here particularly because of the data center
centric nature of these workloads and that they had been working so hard for the past five years
to fundamentally re-architect the data center. Yep. So on top of all this,
NVIDIA recently announced yet another
pretty incredible piece of their cloud strategy here.
So today, like we've been saying,
if you want to use H100s and A100s,
say you're an AI startup,
the way you're probably going to do that
is you're going to go to a cloud, either a hyperscaler or a dedicated GPU cloud like Crusoe or CoreWeave, Lambda Labs
and the like, and you're going to rent your GPUs. And Ben, you did some research on this. So like,
what does that cost? Oh, I just looked at the pricing pages on public clouds today. I think
Azure and AWS were where I looked. You can get access to a DGX server that's eight A100s for about 30 bucks an hour,
or you can go over to AWS
and get a p5.48xlarge instance,
which is eight H100s,
which I believe is an HGX server
for about $100 an hour.
So about three times as much.
And again, when I say you can get access,
I don't actually mean you can get access. I mean, that's the price. Right. If you could get access, that's what you would pay
for it. Correct. Okay. That's just getting the GPUs. But if you buy everything we were talking
about a minute ago, say your McDonald's or UPS or whoever, and you're like, you know, I really like
Jensen, I buy what you're selling. I want this whole integrated package. I want an AI supercomputer in a box that I can plug into my wall and have it run. But I'm all in on the cloud. I don't run my own data centers anymore. NVIDIA has now introduced DGX Cloud.
Yeah. And of course, you could rent these instances from Amazon, Microsoft,
Google, Oracle, but like... You're not getting that full integrated solution. Right, and you're getting
some integration the way that the cloud service provider wants to create the integration using
their proprietary services. And to be honest, you might not have the right people on staff to be able to
deal with this stuff in a pseudo bare metal way. Even if it's not in your data center and you're
renting it from the cloud, you might actually need, based on your workforce, to just use a web
browser and just use a real nice, easy web interface to load some models in from a trusted
source that you can easily pair with your data
and just click run
and not have to worry about any of the complexity
of managing a cloud application
that's in Amazon or Microsoft
or something a little bit scarier
and closer to the metal.
Yep.
So NVIDIA has introduced DGX Cloud,
which is a virtualized DGX system
that is provided to you right now via other
clouds, so Azure and Oracle and Google.
Right, the boxes are sitting in the data centers of these other CSPs.
Right, they're sitting in the other cloud service providers.
But as a customer, it looks like you have your own box that you're renting. You log into the DGX Cloud website through NVIDIA and it's all nice WYSIWYG stuff. There's an integration with Hugging Face where you can easily deploy models right off of Hugging Face, you can upload your data, like everything is just really WYSIWYG is probably the way to describe it. This is unbelievable.
NVIDIA launched their own cloud service through other clouds.
And NVIDIA does have, I think, six data centers,
but that I don't believe is what they're actually using to back DGX Cloud.
No.
So starting price for DGX Cloud is $37,000 a month, which will get you an A100-based system,
not an H100-based system. So the margins on this are insane for NVIDIA and their partners.
A listener helped us out and estimated that the cost to actually build an equivalent A100
DGX system would be today something like 120K. Remember,
this is the previous generation. This is not H100s. And you can rent it for 37K a month.
So that's three-month payback on the CapEx for this stuff for NVIDIA and their cloud partners
together. And even more for NVIDIA, more important longer term,
for enterprises that buy this, NVIDIA now has a direct sales relationship with those companies,
not necessarily intermediated by sales through Azure or Google or AWS, even though the compute
is sitting in their clouds. Today the biggest chunk of that data center revenue flows through those CSPs, then the consumer internet companies, and after that is enterprises. So there's a few interesting things in there, one of which is,
oh my god, their revenue for this is concentrated among like five to eight companies with these CSPs.
Two, they don't necessarily own the customer relationship. They own the developer relationship
through CUDA. You know, they've got this unbelievable ecosystem right now of NVIDIA
developers that's stronger than ever. But in terms of the actual customer, half
of their revenue is intermediated by cloud providers. The second interesting thing about
this is even today in this AI explosion, the second biggest segment of data centers is still
the consumer internet companies. It's still all that stuff we were talking about before of the
uses of machine learning to figure out what should show up
in your social media algorithms and match ads to you, that's actually bigger than all of the direct
enterprises who are buying from NVIDIA. So the DGX Cloud Play is a way to sort of shift some of that
CSP revenue into direct relationship revenue. So all of this brings us to 2023. In May of this year, NVIDIA reported their Q1 fiscal 24 earnings. NVIDIA's on this weird January fiscal year end thing, so Q1 24 is essentially Q1 23. But anyway, revenue was up 19% quarter over quarter to $7.2 billion, which is great
because remember, they had a terrible end of 2022 with the write-offs and crypto falling
off a cliff and all that.
Yes, it's amazing that in that Stratechery interview, when was that?
In March of 2023, Jensen said last year was unquestionably a disappointing year.
This is the year ChatGPT
was released. It is wild the roller coaster this company has been on. The time frame is so
compressed here. And part of that, of course, is Ethereum moving to proof of stake, the end of the
crypto thing for NVIDIA, which I'm sure they're actually thrilled about. But part of it was they also put in a ton of pre-orders for capacity with TSMC that then they thought they weren't going to need,
so they had to write down. So from an accounting perspective, it looks like a big loss, like a
really big blemish on their finances last year. But now, oh my God, are they glad that they
reserved all that capacity? Yep, it's actually going to be quite valuable.
So speaking of, you know, this Q1 earnings is like great up 19% quarter over quarter, but then they dropped the bombshell due to unprecedented demand for generative AI compute
in data centers. NVIDIA forecasts Q2 revenue of $11 billion,
which would be up another 53% quarter over quarter over Q1
and 65% year over year.
The stock goes nuts.
25% in after-hours trading.
Yep.
This is a trillion dollar company,
or at least this made them a trillion dollar company,
but like a company that was previously valued at around $800 billion popped 25%
after earnings. Well, and it's even crazier than that. Back when we did our episodes last April,
NVIDIA was the eighth largest company in the world by market cap, had about a $660 billion
market cap. That was down slightly
off the highs, but that was kind of the order of magnitude back then. It crashed down below $300
billion. And then within a matter of months, it's now back up over a trillion, just wild.
And then all of this culminates last week at the time of this recording, when NVIDIA reports Q2 fiscal 24
earnings. And this earnings release, we usually don't talk about like individual earnings releases
on Acquired because like in the long arc of time, who cares? This was a historic event.
I think this was one of, if not the most incredible earnings release by any scaled public company ever.
Seriously, no matter what happens going forward, last week was a historic moment.
The thing that blows my mind the most is that their data center segment alone did $10 billion in the quarter.
That's more than doubling off of the previous quarter. In three
months, they grew from $4-ish billion to $10 billion of revenue in that segment. And revenue
only happens when they deliver products to customers. This isn't pre-orders. This isn't
clicks. This isn't wave your hands around stuff. This is we delivered stuff to customers and they
paid us an additional $6 billion this quarter than they did last quarter. So here are the full
numbers. For the quarter, total company revenue of $13.5 billion, up 88% from the previous quarter
and over 100% from a year ago. And then Ben, like you said, in the data center segment, revenue of $10.3
billion. So $10.3 out of $13.5 for a segment that basically didn't exist five years ago for the
company. That's up 141% from Q1 and 171% from a year ago. This is $10 billion. That kind of growth
at this scale, I've never seen anything like it.
Neither has the market.
That's right.
And so this, this is the first time I noticed it.
Jensen had talked about this in Q1 earnings,
so it wasn't the first time.
But he brings back the trillion dollar TAM.
Not in a slide, I think this time, he just talks about it.
No, but in a new way that I think is a better way to slice it.
This time it's different.
You know, look, we'll spend a while here now talking about what we think about this, but
this is very different.
This time he frames NVIDIA's trillion dollar opportunity as the data center.
And this is what he says.
There is $1 trillion worth of hard assets sitting in data centers around the world right now.
Growing at $250 billion a year.
Annual spend on data centers to update and add to that CapEx is $250 billion a year. And NVIDIA has certainly the most cohesive, fulsome, and coherent platform
to be the future of what those data centers
are going to look like
for a large amount of compute workloads.
This is a very different story than like,
oh, we're going to get 1% of this $100 trillion
of industry out there.
And the thing you have to believe now, because whenever someone paints a picture, you say, okay, what do I have to believe? The thing you have to
believe is there is real user value being created by these AI workloads and the applications that
they are creating. And there's pretty good evidence. I mean, ChatGPT made it so OpenAI
is rumored to be doing over a billion dollar
run rate now, maybe multiple single digit billions, and still growing meaningfully. And so that is
like the shining example. Again, that's the Netscape navigator here of this whole boom. But
the bet, especially with all these Fortune 500s, is that there are going to be GPT-like experiences in everyone's
private applications, in a zillion other public interfaces. I mean, Jensen frames it as in the
future, every application will have a GPT front end. It will be a way that you decide that you
want to interact with computers that is more natural. And I don't think
he means like versus clicking buttons. I think he means everyone can kind of become a programmer,
but the programming language is English. And so when you're sort of like, well, why is everyone
spending all of this money? It is that the world's executives with the purchasing power to go write
a $10 billion check last quarter to NVIDIA for all
this stuff, wholeheartedly believes from the data they've seen so far that this technology
is going to change the world enough for them to make these huge bets.
And the thing that we don't know yet is, is that true?
Are GPT-like experiences going to be an enduring thing for the far future or not? There's pretty
good evidence so far that people like this stuff and that it's quite useful in transforming the way
that, you know, everyone lives their lives and goes about day to day and does their jobs and
goes through school and, you know, on and on and on. But that is the thing you have to believe.
We want to thank our longtime friend of the show, Vanta, the leading trust management
platform. Vanta, of course, automates your security reviews and compliance efforts. So frameworks like
SOC 2, ISO 27001, GDPR, and HIPAA compliance and monitoring, Vanta takes care of these otherwise
incredibly time and resource draining efforts for your organization and makes them fast and simple.
Yeah, Vanta is the perfect example of the quote that we talk about all the time here on Acquired,
Jeff Bezos, his idea that a company should only focus on what actually makes your beer
taste better, i.e. spend your time and resources only on what's actually going to move the needle
for your product and your customers and outsource everything else that doesn't.
Every company needs compliance and trust with their vendors and customers. It plays a major role in enabling revenue because customers and
partners demand it, but yet it adds zero flavor to your actual product. Vanta takes care of all
of it for you. No more spreadsheets, no fragmented tools, no manual reviews to cobble together your
security and compliance requirements. It is one single software pane of glass that connects to
all of your services via APIs and eliminates countless hours of work for your organization.
There are now AI capabilities to make this even more powerful, and they even integrate with over 300 external tools.
Plus, they let customers build private integrations with their internal systems.
And perhaps most importantly, your security reviews are now real-time instead of static, so you can monitor and share with your customers and partners to give them added confidence.
So whether you're a startup or a large enterprise, and your company is ready to
automate compliance and streamline security reviews like Vanta's 7,000 customers around
the globe, and go back to making your beer taste better, head on over to vanta.com
slash acquired and just tell them that Ben and David sent you. And thanks to friend of
the show, Christina, Vanta's CEO, all acquired listeners get $1,000 of free credit, vanta.com
slash acquired. Okay, so David, analysis. We got to talk about CUDA before we start analyzing
anything else here. Talked about a lot of hardware so far on this episode, but there's this huge piece of the NVIDIA puzzle that we haven't talked about since part two.
And CUDA, as folks know, was the initiative started in 2006 by Jensen and Ian Buck and a
bunch of other folks on the NVIDIA team to really make a bet on scientific computing,
that people could use graphics cards for more than just graphics, and they would need great software tools to help them do that.
It also was the glimmer in Jensen's eye of,
ooh, maybe I can build my own relationship with developers,
and there can be this notion not of a Microsoft or an Intel developer
who happens to be able to have a standard interface to my chip,
but I can have my own developer ecosystem,
which has been huge for the company. So CUDA has become the foundation that everything that
we've talked about, all the AI applications are written on top of today. So, you know,
you hear Jensen in these keynotes reference CUDA the platform, CUDA the language. And I spent some
time trying to figure out, like, when I was watching developer sessions and, like, literally learning some CUDA programs, what is the right way to
characterize it? And what is the right way to characterize it today? Because it has evolved a
lot. Yes. So today, CUDA is, starting from the bottom and going up, a compiler, a runtime,
a set of development tools like a debugger and a profiler. It is its own programming
language, CUDA C++. It has industry-specific libraries. It works on every card that they
ship and have shipped since 2006, which is a really important thing to know. And if you're
a CUDA developer, your stuff works on everything, anything NVIDIA, all this unified interface.
It has many layers of abstractions and
existing libraries that are optimized. So these libraries of code that you can call to keep your
development work short and simple instead of reinventing the wheel. So, you know, there are
things that you can decide that you want to write in C++ and just rely on their compiler to make it
run well on NVIDIA hardware for you, or you can write
stuff in their native language and try to implement things yourself in CUDA C++. The answer is,
it's incredibly flexible, it is very well supported, and there's this huge community of people
that are developing with you and building stuff for you to build on top of. If you look at the number of
CUDA developers over time, it was released in 2006. It took four years to get the first 100,000
people. Then by 2019, 13 years in, they got to a million developers. Then just two years later, they got to 2 million. So 13 years to add their first million, then two years to add their second.
2022, they hit 3 million developers.
And then just one year later, in May of 2023,
CUDA has 4 million registered developers.
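And to give a flavor of what those millions of developers are actually writing, here's a toy sketch, not anyone's production code, of the "lean on NVIDIA's optimized libraries" path: instead of hand-writing a kernel, you hand a matrix multiply to cuBLAS and let it pick the fast path on whatever NVIDIA card you happen to be running. The sizes and fill values are made up for illustration, and you'd build it with something like nvcc gemm.cu -lcublas:

```cpp
// gemm.cu -- illustrative sketch of calling an NVIDIA-provided library (cuBLAS).
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int n = 1024;                               // square matrices, sizes made up
    const size_t bytes = (size_t)n * n * sizeof(float);
    float *A, *B, *C;
    cudaMallocManaged(&A, bytes);
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < n * n; i++) { A[i] = 1.0f; B[i] = 2.0f; C[i] = 0.0f; }

    cublasHandle_t handle;
    cublasCreate(&handle);                            // the library picks fast kernels for your GPU
    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C (column-major), entirely on the GPU
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, A, n, B, n, &beta, C, n);
    cudaDeviceSynchronize();

    printf("C[0] = %f\n", C[0]);                      // 2048.0 for these fill values
    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

The same code runs on a consumer RTX card or an H100 in a data center; CUDA and the libraries underneath handle the rest.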
So at this point, there's a huge moat for NVIDIA.
And I think when you talk to folks there,
and frankly, when we did talk to folks there,
they don't describe it this way. They don't think about it like, well, CUDA is our moat versus competitors. It's more like, well, look, we envisioned a world of accelerated computing
in the future. And we thought there are way more workloads that should be parallelized and made
more efficient that we want people to run on our hardware. And we need to make it as easy as possible for them to do that. And we're going to go to great lengths and have 1,000, 2,000 people
that work at our company that are going to be full-time software engineers building this
programming language and compiler and foundation and framework and everything on top of it
to let the maximum number of people build on our stuff. That is how you build a developer ecosystem.
It's different language,
but the bottom line is they have a huge reverence
for the power that it gives them at the company.
This is something we touched on on our last episode,
but has really crystallized for me in doing this one.
NVIDIA thinks of themselves as,
and I believe is, a platform company, especially this week after the blowout
earnings and everything that happened this quarter and the stock and whatnot. Sort of a
popular take out there that you've been seeing a lot is, oh, we've seen this movie before.
This happened with Cisco. You could say over a longer timescale, this happened with Intel. Yeah, these hardware providers, these semiconductor companies,
they're hot when they're hot and people want to spend CapEx.
And then when they're not hot, they're not hot.
But I don't think that's quite the right way to characterize NVIDIA.
They do make semiconductors and they do make data center gear,
but really they are a platform
company. The right analogy for NVIDIA also is Microsoft. They make the operating system,
they make the programming environment, they make many of the applications.
Right. Cisco doesn't really have developers. Intel never had developers. Microsoft had developers, and Intel had Microsoft, but Intel didn't have developers. NVIDIA has developers. I mean, they've built a new architecture that is not
a von Neumann computer. They've bucked 50 years of progress, and instead, every GPU has a stream
processor unit. And as you'd imagine, you need a whole new type of programming language and compiler
and everything to deal with this new computing model.
And that's CUDA, and it freaking works.
And there's all these people that develop their livelihood in it.
You talk to Jensen, and you talk to other people at the company, and they will tell you, we are a foundational computer science company.
We're not just slinging hardware here.
Yeah, I mean, it's interesting.
They're a platform company for sure.
They're also a systems company.
They're effectively selling mainframes.
I mean, it's not that different than IBM way back when.
They're trying to sell you a $100 million wall that goes in your data center,
and it's all fully integrated, and it all just works.
Yeah, and maybe IBM actually is a really good analogy, like old school IBM here.
They make the underlying technology. They make the hardware, they make the silicon,
they make the operating system for the silicon, they make the solutions for customers,
they make everything, and they sell it as a solution.
Yep. Okay, so a couple other things to catch us up here as we're starting analysis. One big point
I want to make is
let's look at a timeline, because I didn't discover this until like two hours before we started
recording. In March of 2019, NVIDIA announced they were acquiring Mellanox for $7 billion in cash,
and I think Intel was considering the purchase, and then NVIDIA came in and kind of blew them out
of the water. And it is fair to say nobody really understood what NVIDIA was going to do there and why it was so important, but the question is why?
Well, NVIDIA knew that these new models coming out would need to run across multiple servers,
multiple racks, and they put a huge level of importance on the bandwidth between the machines.
And of course, how did they know that? Well, in August of 2019, NVIDIA released what was
at the time the largest transformer-based language model called Megatron. 8.3 billion parameters
trained on 512 GPUs for nine days, which at the time at retail would have cost something like half
a million dollars to train, which at the time was a huge amount of money to spend on model training, which is, what, only four years ago? But today that's quaint. NVIDIA did that
because they do a huge amount of research at the company and they work with every other company
doing AI research and they were like, oh yes, this stuff is going to work and this stuff is going to
require the fastest networking available. And I think that has to do with why no one else saw how valuable the Mellanox technology
could be.
Another thing that I want to talk about for NVIDIA's business today is this notion of
the data center is the computer.
And Jensen did a great interview with Ben Thompson last year where he talks about the
idea that they build their systems
full stack. Like their dream is that you own and operate a DGX super pod. And he says, we build our
systems full stack, but we go to market in a disaggregated way, integrating into the compute
fabric of the industry. So I think that's his sort of way of saying, look, customers need
to use us in a bunch of different ways. So we need to be flexible on that. But we want to build each
of our components such that if you do assemble them all together, it's this unbelievable experience
and we'll figure out how to provide the right experience to you if you only want to use them
in piecemeal ways, or you want to use us in the cloud, or the cloud providers want to use us.
Again, it's build the product as a system, build the system full stack, but go to market in a disaggregated way. And I think if I remember right in that interview, Ben picked up on this and was
like, wait, are you building your own cloud? And Jensen was like, well, maybe, we'll see. And of
course, then they launched DGX Cloud in a, well, maybe we'll see sort of way.
Yeah, you could imagine there are more NVIDIA data centers likely on the way that are
fully owned and operated. Speaking of all of this, we got to talk some numbers on margin.
This last quarter, they had a gross margin of 70%. And they forecasted for next quarter to have a gross margin of 72 percent. I mean,
if you go back pre-CUDA, when they were a commoditized graphics card manufacturer,
it was 24 percent. So they've gone 24 to 70 on gross margin. And with the exception of a few
quarters along the way for these strange one-time events, it's basically been a linear climb,
quarter over quarter
as they've deepened their moat and as they've deepened their differentiation in the industry.
We're definitely at a place right now that I think is temporary due to the supply shortage, where the world's enterprises and in some cases even governments, you look at the UK or some of the Middle Eastern countries, are like, blank check, I just need access to NVIDIA hardware.
That's going to go away, but I don't think this very high, you know, 65% plus margin is going to
erode too much. Yes, I mean, I think two things here. One, I really do believe what we were talking
about a minute ago, that NVIDIA is not just a hardware company. They're not just a chips company.
They are a platform company. And there is a lot of differentiation baked into what they do.
If you want to train GPT or a GPT class model, there's one option. You're doing it on NVIDIA.
There's one option. And yes, we should talk about there's lots of less than GPT class stuff out
there that you can do. And especially inference is more of a wide open market versus training that you can do on other platforms.
But they're the best, and they're not just the best because of their hardware.
They're not just the best because of their data center solutions.
They're not just the best because of CUDA.
They're the best because of all of those.
So the other sort of illustrative thing for me that shows how wide their lead is,
we haven't talked about China yet.
The land of A800s.
Yes.
So what's going on?
Last year, China, or more precisely sales to mainland China, was 25% of NVIDIA's revenue.
And a lot of that is they were selling to the hyperscalers, to the cloud providers in China.
Baidu, Alibaba, Tencent, others. And by the way, Baidu has potentially the largest model of anyone.
Their GPT competitor is over a trillion parameters and may actually be larger than GPT-4.
Wow. I didn't know that. Yep.
Ah, that's wild. So then, I believe also in September of 2022, last year, the Biden administration announced
pretty sweeping regulations and bans on sales of advanced computing infrastructure.
David, they're export controls.
Don't say bans.
I mean, yes, that's a fine line.
And this is pretty close to b banned, what the administration introduced. As part of
that, NVIDIA can no longer sell their top-of-the-line H100s or A100s to anybody in China.
So they created a nerfed SKU, essentially, that meets the regulations, the performance regulations,
the A800 and H800s. Which I think they basically just crank down the NVLink
data transfer speeds. So it's like buying a top of the line A100, but not with as fast of data
connections as you need, which basically makes it so you can't train large models. Right. Or you
can't train them as well or as fast as you could with the latest stuff. The incredibly telling thing is that
those chips and those machines are still selling like hotcakes in China. They're still the best
hardware and platform that you can get in China, even a crippled version. And I think that's true
anywhere in the world. And there's been even a more recent spike of them because a lot of Chinese
companies are reading the tea leaves and saying, ooh, export controls might get even more severe, so I should get them
while I still can, these 800s. Yep. So, I mean, I can't think of a better illustration of just how
wide their lead is. Yeah, that's a great point. Talking about the rest of NVIDIA, just for a
moment, I mean, this episode is about the data center segment, but... Oh, you mean they still make gaming cards, too?
It is worth talking about this idea
that Omniverse is
starting to look really interesting. As of
their conference six months ago, they had 700
enterprises who had signed up as customers.
And the reason this is interesting
is it could be where their
two different worlds collide.
3D graphics with ray tracing,
which is new and amazing, and the
demos are mind-blowing, and AI. They have been playing in both of these markets since the workloads
are both massively parallelizable. That is the sort of original reason for them to be in the AI
market. If you recall back to way back our part one episode, the original mission of NVIDIA was
to make graphics a storytelling medium.
And then their mission has expanded as they've realized, my God, our hardware is really good
at other stuff that needs to be parallelized too. But fascinatingly with Omniverse, the future
could actually look like applications where you need both amazing graphical capability and AI capability for the same application. And I mean, for all the
other amazing uniqueness about NVIDIA that we've been talking about and how well positioned they
are, adding this on top, where they're the number one provider for graphics hardware and software
and AI hardware and software. Oh, and by the way, there's this huge application emerging where you actually do
need both. They're just going to knock it out of the park if that comes true. There was a super
cool demo at a recent keynote. It might have been at SIGGRAPH where NVIDIA created a game environment,
you know, fully ray traced game environment. It looks like a AAA game. It looks amazing,
you know, basically indistinguishable from reality,
but like you really got to look hard
to tell that this isn't real
and this isn't a real human you're talking to.
So there's a non-playable character
that you're talking to, an NPC,
who's giving you like a mission.
And they show this demo.
It looks amazing.
Then they're like, the script,
the words that that character was saying to you
were not scripted.
That was all generated with AI dynamically.
So you're like, holy crap.
You know, you think about you play a video game, the characters are scripted.
But in this world that you're talking about, you can have generative AI controlled avatars that are unscripted, that have their own intelligence. Or think about using these graphical simulations to project the weather in the future, so you can sort of know the real-world potential things that your aircraft could encounter, all in
a generated graphical AI simulation. I mean, there's gonna be a lot more of this stuff to come.
Yep, totally.
Another thing to know about NVIDIA that we really didn't talk about on the last episode, they're pretty employee efficient.
They have 26,000 employees.
And that sounds like a big number,
but for comparison,
Microsoft, whose market cap is only twice as big,
has 220,000.
So that is 5x the number of employees
per dollar of market cap going on over at Microsoft.
And this is a little
bit farcical since, you know, NVIDIA only recently has had such a massive market cap.
But the scale of the platform that NVIDIA is building is on the order of magnitude of
Microsoft scale.
Right. They have $46 million of market cap per employee.
Wild.
Crazy.
Which I think translates into the culture there
as we've gotten to know some folks there.
It really is a very unique kind of culture.
Like it is a big tech scale company,
but you never hear about the same kind of
silly big tech stuff that you hear
at other companies at NVIDIA.
As far as I know, I could be wrong on this.
There is no like, you know, oh, work from home or return to the office policy at NVIDIA. As far as I know, I could be wrong on this. There is no like, you know,
oh, work from home or return to the office policy at NVIDIA. It's like, no, it's just like,
you do the job. And you know, nobody's forcing anybody to come into the office here. And like,
they've accelerated their ship cycles. Well, I also get the sense that it's a little bit of a
do your life's work or don't be here situation. Like Jensen is rumored to have 40
direct reports and his office is basically just an empty conference room because he's just bouncing
around so much and he's on his phone and he's talking to this person and that person. And like,
you can't manage 40 people directly if you're worrying about someone's career ambitions.
Yeah. He's talked about this. He's like, I have 40 direct reports. They are the best in the world
at what they do. This is their life's work. I don't talk to them about their career ambitions.
Like I don't need to, like, you know, yeah. For recent college grads, we do mentoring.
But if you're a senior employee, you've been here for 20 years, you're the best in the world of what
you do. And we're hyper-efficient. And I start my day at 5am, seven days a week and you do too.
It's crazy.
Yeah. There's actually this amazing quote from Jensen
that I heard on an interview with him that I was listening to.
Towards the end of the conversation in the interview, the interviewer asks him,
Jensen, you and NVIDIA do these just amazing things.
What do you do to relax?
And Jensen's answer is, I'm reading, this is a quote, direct quote,
"I relax all the time. I enjoy relaxing at work, because work is relaxing for me. Solving problems is relaxing for me. Achieving something is relaxing for me." And he's a hundred percent serious. Like, a thousand percent serious. How old is Jensen? The dude is 60 years old. It kind of feels like all of his peers
have either decided to retire and relax or are, you know, relaxing while running their companies.
I think there's another crop of people that are doing that. And that is just not at all
interesting to him or what he's doing. And I kind of get the sense like he's got another 30 years
in him and he's architected the company in such a way that that's the plan.
I don't think there's anyone else there where they're like getting ready for that person to take over.
I think the company is an extension of Jensen's thoughts and will and drive and belief about the future.
And that's kind of what happens.
I don't know if there is or isn't a Jensen and
Lori Huang Foundation, but if there is, he's not spending his time on it. He's not buying sports
franchises. He's not buying mega yachts. Or if he is, he isn't talking about them and he's working
from them. Yeah, he's not buying social media platforms and newspapers. Yeah, totally. I mean,
it is quite telling that when you watch one of their keynotes, it's Jensen on stage and it's some customer demos.
But it's not like the Apple keynotes where Tim Cook's calling up another Apple employee.
It's the Jensen show.
Nobody would accuse Tim Cook of not working hard, I don't think.
But you go to those keynotes and it's like, Tim does the welcome and then the handoff.
And, you know, a parade of other executives talk about stuff.
Good morning.
Tim Apple. I love Good morning. Tim Apple.
I love it.
Love Tim Apple.
We got to have Tim on the show sometime.
That would be amazing.
Yeah, text him.
Text him.
All right, power?
Let's talk power.
All right.
So for listeners who are new to the show, this is the section where we talk about what
it is about the company that enables them to achieve persistent differential returns,
or in other words, to be more profitable than their closest competitor and do so sustainably.
And NVIDIA is fascinating because they sort of have a direct competitor, but that's not the most
interesting form of competition for them. Disintermediation is. Sure, ostensibly there's NVIDIA versus AMD, but like, AMD doesn't have
all this capacity reserved from TSMC, at least not for the 2.5D packaging process for the high-end
GPUs. AMD doesn't have the developer ecosystem from CUDA. They're the closest direct comp,
but it's Amazon building Trainium and Inferentia. It's if Microsoft decides to go and build their own chip
as they're rumored to with AMD.
It's Google and the TPU.
Facebook developing PyTorch
and then leveraging their foothold with PyTorch
with the developer community
to figure out how to extend underneath of PyTorch.
There's a lot of competitive vectors coming at NVIDIA
but not directly.
Not to mention all the data center hardware
providers that are their direct competitors now too. Intel, et cetera, on down the line.
Yep. Now, all that said, they've got a lot of powers. So as we move through these one by one,
I think let's just say them all and we can decide if there's something to talk about here.
Counterpositioning is the one where I actually don't think there's anything here.
I don't think there's anything that NVIDIA does where there's another company that's
actively choosing not to do that because any company would want to be NVIDIA right now.
I would have agreed with you, but I actually think there is strong counterpositioning in
the data center world right now. NVIDIA and Jensen put a flag in the ground several years ago
where they said, we are going to re-architect the data center.
And all the existing data center hardware and compute providers
had strong incentives not to do that.
But like right now, what do you think other data center hardware providers,
what are they not doing?
Yeah, fair point. They're all trying to put GPUs think other data center hardware providers, what are they not doing? Yeah, fair
point. They're all trying to put GPUs in the data center too. Everyone's just going to chase exactly
what NVIDIA is doing years behind them. That's the market right now. Yep. Okay, fair enough.
And the question is, will NVIDIA be able to stay ahead in ways that matter? That, I think, is the entire analysis on the company right now: in what ways that matter to customers, at large scale and in large markets, will they be able to sustainably stay ahead of people who are just chasing them and trying to copy what they're doing, because the margin profile is so fat and juicy that people don't want to pay it.
Yep.
So the second one, scale economies. This has CUDA written all over it. You can make
massive fixed cost investments when you have the scale to amortize that cost across. And when you
have 4 million developers who want to develop on your platform, you can justify, whatever it is, the 1,600 people at NVIDIA today who actively have the word CUDA in their job title on LinkedIn. I'm sure it's actually even more than that; some of them just say software or something like that. But that's thousands of people of investment in software that they don't make any money on, or make only a de minimis amount on. And that cost is amortized across the entire developer base. I think it's worth saying a bit more here on this
too, which we also talked about in our last episode. To me, the dynamics here are a lot
like Apple and iOS versus Android. Apple has thousands and thousands and thousands of developers
working on iOS. Android also has thousands and thousands of developers working on it across a widespread ecosystem.
But at Apple, it's all tightly controlled
and it's coupled with hardware.
At Android, it's not.
And like, as a user,
maybe you'll get the latest operating system update.
Maybe you won't.
I think this is exactly the right framing here,
that NVIDIA is the Apple
of AI, and PyTorch is sort of Android because it's open source and it's got a bunch of different
companies that care about it. OpenCL is the Android as it pertains to graphics, but it's
pretty bad and pretty far behind. ROCm is the CUDA competitor made by AMD for their hardware,
but again, new, not a lot of adoption. They're working
on it, but they've open sourced that because they realized they can't go directly head-to-head with
NVIDIA. They need some different strategy. But yes, they are absolutely running the Apple playbook
here. Yep. And I think in the current state of things, it's even more favorable to NVIDIA than iOS versus Android,
because NVIDIA has had first dozens and then hundreds and now thousands of engineers working
on CUDA for 16 years. Meanwhile, the Android equivalent out there in the open source ecosystem
has only just been getting going. You know, if you think about the delta of the timeline
between iOS and Android,
it was a year and a half, two years.
There's probably at least a 10-, probably closer to 15-year lead that NVIDIA has.
And so we talked to a few people about this,
and we're like, oh, what's going on
in the open-source ecosystem?
Is there an Android equivalent?
And even the most bullish people we talked to were like,
oh, yeah, you know, now that Facebook has really moved PyTorch into a foundation and outside of Facebook, that means that other companies can now contribute, you know, a couple dozen engineers to work on it. And you're like, cool. So AMD is going to contribute a couple dozen, maybe 100 engineers to work on PyTorch. And so will Google, and so will Facebook, and so will everybody else. NVIDIA has thousands of engineers working on CUDA 10 years ahead.
I sent you this graph, David, of my estimated number of employees working on CUDA per year
since inception in 2006. And then if you look at the area under the curve and just take the
integral, it's approximately 10,000 person
years that have gone into CUDA. Like, good luck. Now, again, open source is a very powerful thing.
The market incentives are absolutely there for this to happen. Right. That is the interesting
point is every moat only works if the castle is sufficiently small. If the prize at the end of the finish line becomes
sufficiently large, you're going to need a bigger moat, and you need to figure out how to defend
the castle harder. I'm mixing so many metaphors here, but you get the idea. Yeah, I love it.
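A minimal sketch of that back-of-the-envelope, area-under-the-curve estimate, using purely hypothetical headcount figures rather than anything NVIDIA has reported, might look something like this:

# Hypothetical back-of-the-envelope estimate of cumulative person-years invested in CUDA.
# The headcount figures below are illustrative assumptions, not NVIDIA-reported numbers.
headcount_by_year = {
    2006: 30, 2008: 75, 2010: 150, 2012: 250, 2014: 400,
    2016: 600, 2018: 850, 2020: 1100, 2022: 1400, 2023: 1600,
}

years = sorted(headcount_by_year)
person_years = 0.0
# Trapezoidal "area under the curve" of headcount over time.
for start, end in zip(years, years[1:]):
    person_years += (headcount_by_year[start] + headcount_by_year[end]) / 2 * (end - start)

print(f"Estimated cumulative investment: ~{person_years:,.0f} person-years")  # lands around 10,000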
This was a perfectly fine moat when the addressable market was $100 billion. Is it at
a trillion-dollar market opportunity? Probably not.
Basically, it means margins come down and competition gets more fierce over time.
And I think NVIDIA totally gets this because part of this, as I was alluding to, is COVID-related.
But we talked way back in part one about how NVIDIA, to save the company, ended up moving to
a six-month shipping cycle for their graphics cards
when their competitors were on a one to two-year shipping cycle. That persisted for several years,
and then they relaxed back to an annual shipping cycle. There were annual GTCs.
Since COVID, Nvidia has re-accelerated to a six-month shipping cycle. They've been doing two GTCs a year most years since COVID,
which is insane for the level of technology complexity that they're doing.
Yep.
Imagine Apple doing two WWDCs a year.
Yeah.
That's what's happening at NVIDIA.
It's crazy.
So on the one hand, that's a culture thing.
On the other hand, that is an acknowledgement of like,
we need to be pedal to the floor right now to outrun competition.
We've built some structural ways to defend the business,
but we need to continue running as fast as we've ever run to stay ahead
because it's such an attractive race that we're in.
Yep.
All right.
So that's scale economies.
Let's move to switching costs now.
So far, everything of consequence, especially
model training, especially on LLMs, has been built on NVIDIA. And that alone is just a big pile of
code and a big amount of organizational momentum. So switching away from that, even from the software perspective, is going to be hard. But there are companies today in 2023, both at the hyperscalers and Fortune 500 companies
that own their own data centers, making data center purchase and rollout decisions that
will last at least the next five years.
Because these data center re-architectures don't happen very often.
And so you better believe that NVIDIA is trying as hard as they can to ship as much product
as they can while they have the lead in order to lock in that data center architecture for
the next 10 years.
Yeah, we talked to many people in preparation for this episode, but one of the most interesting
conversations was with some of our favorite public market investors out there, the NZS Capital guys.
Who I stole many insights from for this episode.
Oh, they're just so great. And obviously I've been following NVIDIA in the space for a long time.
They made the point that data center revenue and data center capex is some of the stickiest revenue that is known to humankind. Just the
organizational switching costs involved in data center procurement and data center architecture standardization decisions (God, that's a mouthful even to say) at Fortune 500 companies and the like mean they're not changing that more than once a decade at most.
So even if we're sort of in this bubbly moment around the excitement of generative AI before
we necessarily know the full set of applications, NVIDIA is leveraging this excitement to go get
some lock-in. I've seen some people on the internet being like, they love how supply-constrained they
are. I don't think so. I think they're looking for capacity
in every way they can get it
to exploit this opportunity while it exists.
I completely agree with that.
Yeah, I think, you know, again,
we didn't talk to Colette, NVIDIA's CFO, about this,
but I strongly suspect if I were them,
I would be happy to trade some of this gross margin
right now for increased throughput on sales.
Yep.
But there's only one TSMC
and there's only so many fabs that they have
that can do the, what do they call it,
the 2.5D architecture.
Should we talk cornered resource?
Yeah.
This is probably the textbook cornered resource.
NVIDIA has access to a huge amount of capacity
at TSMC that none of their competitors
can get their hands on.
I mean, they did luck into this cornered resource a little bit. They reserved all that wafer supply
for a different purpose, partially crypto mining, but AMD doesn't have it. AMD does have a ton of
capacity, it's worth saying, at TSMC for their other products, data center CPUs, which they've
actually been doing very well in. But NVIDIA did end up with this wide open lane all to themselves on CoWoS capacity at TSMC, and they got to make the most
of that for as long as they have it. Yep. And I guess to say a little more, though,
this is not a commodity, as we talked about on our TSMC episode. Although TSMC
is a contract manufacturer, it is the opposite of a commodity,
especially at the highest and leading edge. It's like an invention delivered by aliens that very
few humans know how to actually do. Yes. It is worth acknowledging it's kind of a two-horse race
for LLM training. I know we've been harping on NVIDIA, but Google TPUs are also manufactured at volume.
You can just only get them through Google Cloud. And I think, I don't know if you have to use the
TensorFlow framework, which has been waning in popularity relative to PyTorch, but it's certainly
not an industry standard to use TPUs the way that it is to use NVIDIA's hardware. I suspect a lot of the volume of the
TPUs is being used internally by Google for Bard, for doing stuff in Google Search. I know they've
added a lot of the generative AI capability to search. Yep, totally. Two points on this.
Just sticking to the scope of this business and market discussion, this is a major casualty of
a strategy conflict at Google.
Obviously, the way you want to do this is the way NVIDIA is doing this of like,
your customers want to buy through the cloud, you want to be in every cloud.
But obviously, Google is not going to be in AWS and Azure and Oracle and all the new cloud
providers. They're only going to be in GCP. Maybe, David.
But I was going to say,
through the expanded lens, though, I think this makes sense for Google because
their primary business is their own products. Right. And they run among the most profitable
businesses the world has ever seen. So anything they can do to further advantage and extend that
runway, they probably should do. Nothing has changed through all of this with respect to what the previous generation of AI enabled: machine learning for social media and internet applications is still among the most profitable cash flow geysers known to man. None of that has changed.
That is still true in this current world and still true for Google.
Yep.
The last one that I had highlighted is network economies.
They have a large number of developers out there
and a large number of customers
that they can amortize these technology investments across
and who all benefit from each other.
I mean, remember, there are people building libraries on top of CUDA
and you can use the building blocks that other people built
to build your code. You can write amazing CUDA programs that just don't have that many lines
of code because it's calling other pre-existing stuff. And NVIDIA made a decision in 2006 that
at the time was very costly, like big investment decision, but it looks genius in hindsight to make
sure that every GPU that went out the door was fully CUDA capable. And today, there are 500 million CUDA capable GPUs for developers to target. It's just very attractive.
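To make that concrete, here's a tiny illustrative sketch, not from the episode, of leaning on pre-existing CUDA building blocks instead of writing your own kernels; CuPy's NumPy-like calls dispatch to NVIDIA's own CUDA libraries under the hood:

# Illustrative only: CuPy exposes a NumPy-like API whose operations run on the GPU
# by calling into NVIDIA's CUDA libraries (cuBLAS for matrix multiply, cuRAND for
# random number generation), so a few lines of Python ride on years of prior CUDA work.
import cupy as cp

a = cp.random.rand(4096, 4096, dtype=cp.float32)  # generated on the GPU via cuRAND
b = cp.random.rand(4096, 4096, dtype=cp.float32)

c = a @ b               # matrix multiply executes on the GPU via cuBLAS
print(float(c.sum()))   # bring a single number back to the host to inspect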
I'm putting this in network economies. I think it's probably more a scale economy than a network
economy. But you could imagine a lot of people ho-humming around NVIDIA in 2006 to 2012 saying,
why do I have to make it so that my software fits on this tiny little footprint, and why are we including CUDA taking up a huge amount of space on this thing and making all these trade-offs in our hardware... are people even going to use CUDA? And today it just looks so
genius. Yeah, I mean, we've talked about this many times on the show, including with Hamilton
Helmer and Chenyi Wenger themselves. But for platform companies like NVIDIA clearly is, there is this special brand
of power that is a combination of scale economies and network economies. And this is what you're
getting at. Yep. They do have branding power, for sure. Yeah, I actually think it's worth talking
about this a little bit. This is the nobody gets fired for buying IBM. I mean, NVIDIA is the modern
IBM in the AI era. Yep. Look, I don't feel confident enough to pound the table on this,
but given the nature of how the company started,
how long they've been around,
and the fact that they also have the market-leading product
in a totally different business in graphics,
which is both consumers but also professional graphics,
I think that probably
does lend some brand power to them, especially when the CIO and the C-suite at McDonald's is
making a buying decision here. Everybody knows NVIDIA. You're saying that they carried their
consumer brand into their enterprise posture. This is way, way, way down the stack in power,
but I don't think it's
hurt them. They've always been known as a technology leader, and the whole world has known
for decades at this point that the stuff that they can enable is magical. Yeah. There's a big
strength leads to strength thing here, too, where I bet the revenue results from last quarter
massively dwarf any brand benefit that
they ever got from the consumer side. I think it's just the fact that like, hey, look, everyone else
is buying NVIDIA. I'd be an idiot not to. Nobody is getting fired for buying NVIDIA anytime soon.
Yep. Right. Or taking a big dependency on them or targeting that development platform. It's just
the like, if you're innovating in your business, you don't want to take risk on
the platform you're building on top of. You want to be the only risk in the value chain.
All right, then the last one, right, is process power.
Yeah, and this is probably the weakest one, even though I'm sure you could make some argument that
they have process power. It's just that all the other powers are so much more valuable.
It's always so tricky to tease out. Yep. You know, I think the argument here would just be like NVIDIA's culture and their six-month shipping cycle that clearly they had in the past and they didn't have for a while and now they have again.
I don't know.
I think you can make an argument here.
Is it feasible?
Let's do a thought exercise.
Could any of their competitors really in any domain move to a six-month ship cycle?
That'd be really hard. Yeah. You know, could an Apple-sized company do two WWDCs a year? Like,
no. The question is, does that actually matter? There are so many people that are using A100s
right now. And in fact, most workloads can be run on A100s unless you're doing model training
of GPT-4. I just don't know that it actually matters that much or as much as other factors.
And I'll give you an example. AMD does have 3D packaging on one of their latest GPUs.
It's a more sophisticated way of doing real copper to real copper direct connection without a silicon interposer. I'm
getting into a little bit of the details, but basically it's more sophisticated than the process
that the H100 2.5D is using to make sure that memory is extremely close to compute. And does
that matter? Not really. What matters is everything else that we've been talking about, and nobody's
going to make a purchase decision on this thing because it's, you know, a little bit of a better
mousetrap. Yeah, thinking about this more, I think actually brand is a really important power for
NVIDIA right now. Yeah, and in a strength leads to strength way, so you can see why they're trying
to sort of seize this moment. Yep. Playbook? All right, let's move on to playbook. So one thing
that I want to point out is Jensen keeps referring to this as the iPhone moment for AI.
And when he says it, the common understanding is that he means a new mainstream method for interacting with computers.
But there's another way to interpret it.
Does this sound familiar, David, when I say a hardware company differentiated by software that then expanded into services?
Yes, yes yes it does. It's quite tongue-in-cheek to be referring to the iPhone moment of AI
when referring to oneself, NVIDIA, as the Apple. Because I really think that the parallels are
uncanny, that they have this vertically integrated hardware and software stack provided by NVIDIA,
you use their tools to develop for it. They've shipped the most units, so developers have a big incentive to target that market. It's the best set of individual buyers to target, because they're the least cost sensitive and they appreciate you building the best experiences for them. I mean, it's the iPhone, but in many ways it's better, because the target is a B2B target instead of consumers. Yeah. The only way in which it's different is Apple has always had a market cap that sort of lagged its proven value to users, whereas NVIDIA right now is out over their skis.
Well, let's save that for bull and bear at the end.
Great.
The second one is that they've moved on from becoming a hardware company to truly being
a systems company. While NVIDIA's chips are typically ahead, it really doesn't matter on
a chip-to-chip comparison. That is not the playing field. It is all about how well multiple GPUs and
multiple racks of GPUs work together as one system with all the hardware and networking
and software that enables that. They have just entirely changed the vector of competition,
which I think lots of companies can learn from.
And my third one here is this quote that Jensen had, again, from the same Stratechery interview,
which is, you build a great company by doing things that other people can't do. You don't build a company by fighting other people to do things that everyone can do. And I think it's so
salient. It comes out in all these interesting ways, one of which is,
NVIDIA never dedicated resources to building a CPU
until there was a differentiated way
and a real reason for them
to build their own CPU,
which is now.
And the way that they're doing it,
by the way,
is not terribly differentiated.
It's an off-the-shelf ARM architecture
that they're putting
some of their own secret sauce on.
But it's not like
they're doing Apple-style M3 creation of a chip from scratch.
It's not the hero product.
Right. There are many ways that NVIDIA sort of applies this, where I think we talked about in
the last episode, if they think it's going to be a low-margin opportunity, they don't go after it.
But the nicer way to say that is, we don't want to compete for things that anybody can do. We want to do things that only we can do. Oh, and by the way, we will fully realize
the value of those things when we do them. Yeah. I think there's maybe a related playbook theme
here for NVIDIA of strike when the timing is right. I suspect that a lot of the inner competitive
drive and motivation for Jensen and the company over the past 10, 15 years here
has been to really fight against Intel. Intel tried to kill them, as we talked about many times
in the previous episodes. We talked to somebody who framed it as Intel was the country club
and NVIDIA is the fight club. And back in the days, the Intel
country club didn't want to let NVIDIA in. Intel controlled the motherboard. Intel controlled the
most important chip was the CPU. Intel would integrate and commoditize all other chips into
the motherboard eventually. And if they couldn't do that well, then they'd try and make the chips
themselves. And they tried to run all these playbooks on NVIDIA, and NVIDIA just barely survived. And then in the data center, Intel controlled the data center for
so long. PCI Express, you know, that was the interconnect in the data center for so long,
and NVIDIA had to live in there. And I'm sure they hated every single minute of it.
But they didn't turn around 10 years ago and just be like, guess what, we're making a CPU too.
They waited until the time was right.
It is crazy. They used to have to plug into other people's servers.
And then they started making servers that plugged into other people's racks and rows and architectures.
And then they started making their own entire rows and walls.
And at some point here, they're going to start running their own buildings full of servers too.
And they're going to say, we don't have to plug into anything.
Yeah. But I think for a lot of other leaders,
it would have been hard
to have the patience that they've had.
Totally.
You only get to do the stuff they're doing
if you invested 10 years ahead of the industry,
were wildly inventive and innovative
in creating these true breakthrough innovations,
and were really, really right about huge markets.
Yeah.
None of this stuff applies
unless you're doing those three things.
Yeah, Fortune 500 CIOs aren't making buying decisions if what you just said isn't true.
Right.
So there's this interesting conversation
I wanted to have with you
ahead of winding it up with the bull and bear case.
So think back to our AWS episode.
We talked a lot about how AWS is just locked in.
The databases are a ridiculously durable advantage.
Once your data has been shipped to a particular cloud,
often literally in semi-trucks full of hard drives.
Snowball, yeah.
It's hard to move off of it.
There's this sort of interesting question of,
will winning Cloud 1.0 for all these Google, Microsoft, Amazon,
will that toehold actually enable them to win in the Cloud AI era?
On the one hand, you'd think, yes, absolutely,
because I want to train my AI models
right next to where my data is. It's really expensive to move my data somewhere else to do
that. Case in point, Microsoft is the exclusive cloud infrastructure provider for OpenAI, which
runs, as far as we know, solely on NVIDIA infrastructure, but they buy it all through
Microsoft. Right. On the other hand, the experience that
customers are demanding is the full-stack NVIDIA experience, not this, oh, you found the cheapest
possible cost-of-good-sold way to offer me something that's like the experience that I want.
And sometimes the cloud providers have to offer me an A100 or an H100 because my code is way too
complicated to ever re-architect
for whatever accelerated computing devices they're offering me that's first party and
cheaper for them.
I don't know.
I just think for the first time in the last five years or so, I've sort of cocked my head
a little bit at the moat of these existing cloud providers and said, huh, maybe there
really is a vector to compete with them.
And cloud is not
a settled frontier. Yeah. Well, to be pejorative here, cloud is a euphemism for data centers,
right? There's so much more to the hyperscalers and public clouds than just data centers, right?
But physically, they're data centers. Yeah, there is a mile of distance,
metaphorically, between like an Equinix and AWS.
Yep.
But they're data centers.
And there is a fundamental shift, at least according to Jensen, a fundamental shift that is happening in data centers.
So I think that probably does create some shifting sands that the cloud market is going to have to navigate.
Yep. I bet the way it plays out is that
where you landed in cloud 1.0 strongly dictates where you will land in this AI cloud era. Because
at the end of the day, if customers are demanding NVIDIA stuff, then the cloud providers have every
incentive in the world to make it so that you can run your applications great in their cloud.
But also, like, there's more to this too. Crusoe exists. CoreWeave exists. Lambda Labs exists.
These are well-funded startups with billions of dollars
that a lot of smart people think there's a major cloud-sized opportunity for.
Yep.
That would not have happened a few years ago.
Super true.
All right, let's do the bull case and bear case and bring this one home.
Oh boy.
We've been trying to delay this as long as possible.
This is the crux of the question right now. Yeah. in bear case and bring this one home. Oh boy. We've been trying to delay this as long as possible.
This is the crux of the question right now. Yeah. I mean, part of it is, is their existing moat big enough if GPUs actually become a hundred billion dollar a year market? I mean, right now, GPUs in
the data center are like a 30 billion dollar a year market going to like a 50 billion dollar a year market next year. And like,
if this actually goes the way that everyone seems to think it's going to go, there's just too many
margin dollars out there for these big companies to not invest heavily. Meta threw tens of billions
of dollars making the metaverse. I mean, Apple's rumored to have put $15 billion into their headset.
Amazon's put tens of billions of dollars into devices, which by all accounts was a terrible investment.
How is Echo paying anything back?
Oh, man, total sidebar.
I'm so disappointed.
I have standardized my house on the Echo ecosystem, and it keeps getting dumber.
How in this world of incredibly accelerating AI capabilities are my Echos getting dumber. How in this world of incredibly accelerating AI capabilities
are my Echos getting dumber?
Well, they need to trainium and inferentia a little bit harder.
Ah, Jesus.
Okay, rant over.
Yeah, I mean, never doubt big tech's ability
to throw tens of billions of dollars into something
if the payoff could be big enough.
These are ludicrously profitable monopolies,
except for Amazon's not that profitable. AWS is.
Yeah.
But Google, Facebook, Apple,
at some point here,
there's a game of chicken that ends
and some of these companies go all in
and say, yeah, we have smart engineers too.
Like, we're going to figure this out.
Yeah.
But also never underestimate
the inability of big tech
to execute on stuff that it thinks it can,
especially with major strategy shifts. Yeah. Yeah. All right. So let's actually do this.
Bear case. Let's start with the bear case. So you just illustrated, I think, bear case number one,
which is literally everybody else in the technology ecosystem is now aligned and
incentivized to say, I want to take a piece of NVIDIA's pie.
And these companies have untold resources.
Yep. And to put a finer point on that, let's look at PyTorch for a minute.
Now that all the developers or lots of developers are using PyTorch,
it does enable PyTorch to aggregate customers,
which gives them the opportunity to disintermediate. Maybe. You've
got to write a lot of new stuff underneath and ship a lot of hardware. I mean, the cloud service
providers have taken some steps here. It was originally developed by Meta, and while it's
open source, it's still hard for all these companies to invest in it if it's really sort
of owned and controlled by Meta. So now PyTorch has been moved out into a foundation
that a lot of companies are contributing to.
And again, it is an absolute false equivalence
to be like PyTorch versus NVIDIA.
But in real Ben Thompson aggregation theory parlance,
if you aggregate the customers,
you have the opportunity then to take more margin,
to disintermediate, to direct where that attention
is going, and PyTorch has that opportunity. That feels like the vector that a lot of these CSPs
will try and compete on and say, look, if you're building for PyTorch, it runs really well on our
thing too. Yep, for sure. No doubt that that's going to happen.
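A minimal sketch of what that abstraction looks like in practice; the model code below is purely illustrative, and the point is that it doesn't change whether the backend underneath is NVIDIA's or someone else's:

# Illustrative sketch: the same PyTorch model code targets whichever backend is
# available, which is exactly what makes the framework layer an "aggregator."
import torch

# "cuda" means an NVIDIA GPU here, but nothing below changes if another vendor
# ships its own PyTorch backend and this string points there instead.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(1024, 10).to(device)
x = torch.randn(32, 1024, device=device)
print(model(x).shape)  # torch.Size([32, 10]) regardless of which hardware ran it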
All right, so that's bear case number two, kind of as part of bear case number one. The next one is like, literally, the market isn't as big as the market cap reflects.
I think there's a pretty reasonable chance that there's some falter in the next 12 to
18 months where there's a crisis of confidence among investors, where at some point something
will come out where we all observe, oh, maybe GPTs aren't as useful as we thought.
Maybe people don't want chat interfaces.
And that crisis of confidence,
that mini bubble burst will trickle out
to America's CIOs and CEOs,
make it harder to advocate in the boardroom,
to make this big fundamental purchase
and re-architecture of our whole budget
from this year that we agreed on
that I'm trying to propose us changing.
There's a crypto-like element to an excitement bubble bursting that will,
for some companies, slow their spend. And the question is sort of like, when that happens,
because it's not an if, it's a when. I have a hard time believing that, given all the hype around everything right now, AI will turn out to be even more useful than everyone believes and that it will continue in a linear fashion, without any drawdowns, where everyone's excitement only gets bigger from here.
It may end up being way more useful than anyone thought,
but there at some point will be some valley or trough,
and it's sort of about how does NVIDIA fare during that crisis of confidence? It's funny, you know, again, we talked to a lot
of people for this episode, including a set of some of the foremost AI researchers and
practitioners out there and founders and C-suites of companies that are doing all this. And pretty much to a T,
they all said the same thing when we asked them about this question. They all said,
yeah, this is overhyped right now, of course, obviously. But on a 10-year timescale,
you haven't seen anything yet. The transformative change that we believe is coming,
you can't even imagine. The most interesting thing about the overhype is that it's actually showing up in revenue.
It's everyone who is buying access to all this compute
believes something.
And for NVIDIA,
because it's showing up in the form of revenue,
the belief is real.
And so they just need to make sure
that they smooth the gap to customers
actually realizing as much value
as the CIOs of the world
are currently investing ahead of.
Yep. So I think the sub point to that that's worth a discussion right now is like,
okay, generative AI. Yeah, is it all it's cracked up to be?
Well, David, I haven't asked you about this in like a month or so, but a month ago,
you were pounding the table insisting to me like, I have no need for, I've never used ChatGPT. I
can't find it to be useful. It's hallucinating all the time.
I never think to use it.
It's not a part of my workflow.
Where are you at?
Still basically there, including forcing myself to try to use it a bunch in preparation for this episode.
But also, as we talk to more people, I think I've realized that David Rosenthal's use case doesn't really matter here at all.
Right. like David Rosenthal's use case doesn't really matter here at all. A, because as a business, we are such a hyper-specialized,
unique little unicorn thing where accuracy and the depth of the work
and thought that we ourselves put into episodes is the paramount thing.
Well, and we have no co-workers.
There's so many things about our business that is weird.
Like, we never have to prepare a brief for a meeting. Right. All this stuff. Anything external that we prepare is a labor of love for us. And there is nothing we prepare internal.
I know people who use ChatGPT to set their OKRs, and I'm like, okay, what's an OKR? And they're
like, I wish my life were like that too. That's why I have ChatGPT do it. Right. Honestly, I think through doing this and talking to some folks and reading,
I think there's a very compelling use case for it for writing code right now.
No matter what level of software developer you are, from zero all the way up through
elite software developer, you can get a lot more leverage out of this thing and GitHub Copilot.
So is that valuable?
For sure, that's valuable. Yeah, the LLMs are unbelievably good at writing and helping you
write code. I'm a huge believer in that use case. Yep. And then I think, you know, there's the
slightly more speculative stuff, but you can actually sort of see it now of like that gaming
demo that I mentioned recently from NVIDIA of like, oh, you're talking to a non-playable character
that wasn't scripted. We did an ACQ2 episode recently with Chris Valenzuela from the CEO of
Runway. That was used in everything, everywhere, all at once. And he said, that's just the tip of
the iceberg. Like the stuff that you can do that is happening that's out there today with generative AI in these domains is astounding.
Yeah, I think what you're saying is one could be a bear on your own experience. Every time you try
to use a generative AI application, it doesn't fit into your workflow. You don't find it useful.
You're not sticky. But on the other hand, actually, what AI will be is a sum of a whole bunch of niches.
There's a video game market.
There's a writing market.
There's a creative writing market.
There's a software developer market.
There's a marketing copy market.
You know, there's a million of these things, and you just may happen to not fall into one
of the first few niches of it.
Yeah.
I think for me, at least, again, just speaking personally, too, I had a very strong element
of skepticism initially because the timing was just too perfect. It was like, all you VCs out
there. You just told everybody about how crypto is the future and whatever you're talking about.
And then interest rates went to 5% and your world fell off a cliff. Oh, the number of people who were like out raising a fund and they're like,
the future is AI.
Yeah, right.
This is the best time ever to be investing.
And so there was a large part of me that I was just like, come on, guys.
Yeah, it's too perfect.
You're right.
It's too perfect.
But these most recent couple of months and this quarter for NVIDIA have shown that, put all that aside, Fortune 500s are adopting this stuff, CIOs are adopting this stuff, NVIDIA is selling real dollars, and we're learning also about what it takes to train these models and the step function of knowledge and utility going from a billion parameters to 10 billion parameters to 200 billion to a trillion parameter models. Yeah, like something's going on there for sure.
So this leads me to my next bear case, which is the models will get good enough,
and then they'll all be trained, and then we'll shift to inference. And most of the compute load
will be on inference where NVIDIA is less differentiated. There's a bunch of reasons
I don't believe that. That is a popular narrativeated. There's a bunch of reasons I don't believe that.
That is a popular narrative, though.
One of the big reasons I don't believe that is the transformer is not the end of the road.
In a bunch of the research that we did, David, it's very clear that there are things beyond the transformer that are in the research phase right now, and the experiences are only going
to get more magical and only going to get more efficient. So there's sort of a second
bear case there, which is right now we threw a brute force kitchen sink at trading these things,
and all of that revenue accrued to NVIDIA because they're the ones that make the kitchen sinks.
And over time, like you look at Google's Chinchilla or Llama 2, they actually use less parameters than GPT-4 and
have equivalent quality. Or, you know, many other people can be the judge of that, but we're high
quality models with less parameters. So there is this potential bear case around future models
will be more clever and not require as much compute. It's worth saying that even today,
the vast majority of AI workloads
don't look like LLMs,
at least until very recently.
LLMs are like the current maxima
in human history of jobs to be done
that require a ton of compute.
And I guess the question is,
will that continue?
I mean, many other magical,
recent AI experiences
have happened with far less expensive
model training, like
diffusion models and the entire genre of generative AI on images, which we really haven't talked about
a lot on this episode because they're less compute intensive. But many tasks don't require an entire
internet of training data and a trillion parameters to pull off. Yep, that makes sense to me. And I
think there also is some merit to the idea that workloads are shifting to inference. That is happening. I agree with you. I don't think training is going anywhere. But until recently, you know, thinking back to the Google days, training was what everybody was spending money on. That's what everybody was focused on. As usage scales with this stuff, then inference (inference, of course, being the compute that has to happen to get outputs out of the models after they're already trained) becomes a
bigger part of the pie. And as you say, the infrastructure and ecosystems around doing that
is less differentiated than training. Yep. Okay, those are the bear cases. There's probably also
a bear case around China, which is a legitimate one because that's going to be a problem for lots of people.
A large market that they won't be able to address for the foreseeable future in a meaningful way.
And just what's going to happen generally.
Like, obviously, China is racing to develop their own homegrown ecosystems and competitors.
And that's going to be a closed-off market.
So what's going to come out of there?
What's going to happen?
Yep, that's definitely one too. My last one is a bear case, but it ends up not being a bear case.
For most companies, I would say that if they were trading at this very high multiple and they just
experienced this tremendous real growth in revenue and operating profit, that that sort of spike to the system when it goes away will
irreparably harm the company when things slow down. Stock compensation's an issue, employee
morale is an issue, customer perception's an issue, but this is NVIDIA. Yeah, this is nothing new.
The number of times that they've risen from the ashes after, you know, years-long terrible
sentiment with something mind-blowingly
innovative, they're probably the best positioned company or the company with the best disposition
to handle that when it happens.
Oh, I love that.
That's a great turn of phrase there.
You upped your training model on language there.
You should see the number of parameters.
I love it.
All right.
Just to list the bull cases.
One, Jensen is right about accelerated computing.
The majority of workloads right now are not accelerated.
They're bound to CPUs.
They could be accelerated,
and that shifts from some crazy low number,
like 5% or 10% of workloads being accelerated today
to 50-plus percent in the future.
And there's way more compute happening in parallel,
and that mostly accrues to NVIDIA. Oh, I have one nuance I want to add to that.
On the surface, I think a lot of people look at that and they're like, yeah, come on. But I think
there actually is a lot of merit to that argument in the generative AI world and everything we've
talked about in this episode. I don't think Jensen and NVIDIA are saying that traditional
compute is going away or getting smaller. I think what he's saying is that AI compute
will be added on to everything and the amount of compute required for doing that will dwarf
what's happening in general purpose compute. So like, it's not that
people are going to stop running SharePoint servers or that whatever products you use are going to
stop using their whatever interfaces that they use. It's that generative AI will be added to
all of those things and the use cases will pop up, which will also use traditional general purpose
CPU based computing. But the amount of workloads that
go into making those things magical is just going to be so much bigger.
Yep. Also, just a general statement on software development, writing parallelizable code is
really hard unless you have a framework to do it for you. Even writing code with multiple threads,
like if anybody remembers a CS class in college where they had a race condition
or they needed to write a semaphore, these are the hardest things to debug. And I would argue that a
lot of things that could happen in an accelerated way aren't just because it's harder to develop
for. And so if we live in some future where NVIDIA has reinvented the notion of a computer to shift
away from von Neumann architecture into this stream processor architecture
that they've developed, and they have the full stack to make it just as easy to write applications and move existing applications...
Especially once all the hardware has been bought
and paid for and sitting in data centers,
there's probably a lot of workloads
that actually do make sense to accelerate
if it's easy enough to do so.
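For anyone who doesn't remember that CS class, here's a minimal illustrative sketch of the kind of race condition being described, using only Python's standard threading module; without the lock, increments can get lost:

# Illustrative race condition: four threads increment a shared counter.
# "counter += 1" is a read-modify-write, so without the lock some updates
# can be lost; the lock (effectively a binary semaphore) makes it correct.
import threading

counter = 0
lock = threading.Lock()

def increment(times, use_lock):
    global counter
    for _ in range(times):
        if use_lock:
            with lock:
                counter += 1
        else:
            counter += 1  # unsynchronized: this is the bug

for use_lock in (False, True):
    counter = 0
    threads = [threading.Thread(target=increment, args=(100_000, use_lock)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"use_lock={use_lock}: counter={counter:,} (expected {4 * 100_000:,})")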
Yeah, that's great.
But so your point is that
there's a lot of latent,
accelerated, addressable computing out there
that just hasn't been accelerated yet.
Right.
It's like, eh, this workload's not that expensive
and I'm not going to pay an engineer
to go re-architect the system, so it's fine how it is.
How about that?
I think there's a lot of that.
So, bull case one, Jensen is right about accelerated computing. Bull case two, Jensen is right about generative AI. I mean, combined with accelerated computing,
this will massively shift spend in the data center to NVIDIA's hardware. And as we've mentioned,
OpenAI is rumored to be doing over a billion dollars in recurring revenue on ChatGPT.
So I think there's, let's call it 3 billion, because that's the most sort of credible estimate that I've heard. And maybe that was a forecast for next year. But like, they're not the only one.
I mean, Google with Bard, which I found tremendously useful, actually, preparing for this episode,
is not directly monetizing that, but they're sort of retaining me as a Google search customer by
doing it. There is a lot of real economic value even today. Not nearly the amount that's
sort of baked into the valuation, but I suppose the bear case of this is that everything has to
go right for NVIDIA, but the bull case is that all indications are things are going right for NVIDIA.
Third bull case, NVIDIA just moves so fast. Whatever the developments are, it's hard to
believe that they're not going to find a way to be really well positioned to capture it.
That's just a cultural thing. Four is the point that you brought up earlier,
that there's a trillion dollars installed in data centers, 250 billion more being spent every year
to refresh and expand capacity, and that NVIDIA could take a meaningful share of that. I think
today, what's their annual revenue at? Like 30 billion or something? Well, if you run rate this current quarter, then it's like $50 plus. So right now that puts them at like 20% of the current data center spend.
You could imagine that being much higher. Okay, wait, that includes the gaming revenue. It's
about $40, because the data center revenue is $10 a quarter. So 40 annualized. All right, so 15, 18%.
Yeah.
Woo!
But you could imagine that creeping up.
Again, if the accelerated computing
and generative AI belief comes true,
like they'll expand that 250 number
and they'll take a greater percent of it.
Yep.
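Spelled out, the back-of-the-envelope share math above looks roughly like this; the figures are the rounded numbers from the conversation, not reported ones:

# Rough share-of-spend math from the conversation; all figures are the rounded,
# back-of-the-envelope numbers tossed around above, not reported values.
quarterly_data_center_revenue_b = 10                                     # ~$10B/quarter
annualized_data_center_revenue_b = 4 * quarterly_data_center_revenue_b   # ~$40B run rate
annual_data_center_spend_b = 250                                         # ~$250B/year industry spend

share = annualized_data_center_revenue_b / annual_data_center_spend_b
print(f"~{share:.0%} of annual data center spend")                       # ~16%, the "15, 18%" range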
An interesting way to do a sort of a check on this math
is to look at what other people in the ecosystem are reporting
in their numbers. TSMC, in their last earnings, said that AI hardware currently only represents
6% of their revenue, but all indications over there is that they expect AI revenue to grow
50% per year for the next five years. Wow. So we're trying to come at it from the customer workload side and say,
is it useful there? But if you come at it from this other side of, what are NVIDIA's suppliers
forecasting? And they have to put their money where their mouth is, building these
new wafer fabs to be able to facilitate that. And packaging and all the other things that go
into chips. So it's expensive for TSMC to be wrong. Yep. So that's another bull case. The last one
that I have before leaving you with one final thought. Are you saying you have one more thing?
Yes. Is that NVIDIA isn't Intel. And I think that's the biggest realization that you helped
me have. And it's not Cisco. Yeah. The comparison we were making in the last episode was wrong.
They are Microsoft. They control the whole software stack,
and they simultaneously can have relationships
with the developer and customer ecosystems.
And I mean, it may even be better than Microsoft
because they make all the hardware too.
Yeah, it may be old school IBM.
Right.
Imagine if IBM operated in a computing market of today's magnitude.
Computing was a tiny little market back then.
Right.
I mean, it was like that. I mean, it took the PC wave to disrupt IBM,
which was a personal computer in today's parlance, edge computing, device-based computing.
IBM dominated the B2B mainframe cycle of computing. And again, if you believe everything
Jensen is saying and how he's steered the company for the last five years. We are going back into a centralized data center, modern version of a mainframe dominated computing cycle.
Yep. I suspect a lot of inference will get done on the edge. You think about the insane amount
of compute that's walking around in our pockets that is not fully leveraged right now. There's
going to be a lot of machine learning done on phones that are going to like call up to cloud-based models for the hard stuff. No doubt. I don't think training is happening at
the edge anytime soon, though. No, I certainly agree with that. All right. Well, just like our
TSMC episode, I wanted to end and leave you with a thought, David, of what it would take to compete
with NVIDIA. Because my big takeaway from the TSMC episode was like, wow, that's a lot of things you
have to believe about a government putting billions of dollars in and hiring all this
talent.
And I was like, what's the equivalent for NVIDIA?
So here's what you would need to do to compete.
Let's say you could design GPU chips that are just as good, which arguably AMD, Google,
and Amazon are doing.
You'd, of course, then need to build up the chip-to-chip
networking capabilities like NVLink that very few have. And you'd of course need to build
relationships with hardware assemblers like Foxconn to actually build these chips into servers like
the DGX. And even if you did all that, you'd need to create server-to-server and rack-to-rack
networking capabilities as good as Mellanox, who was the best on the market, with InfiniBand that NVIDIA now fully owns and controls, which basically nobody
has. And even if you did all that, you'd need to go convince all the customers to buy your thing,
which means it would need to be either better or cheaper or both, not just equal to NVIDIA.
And by a wide margin, too, due to the brand; you're not going to get fired for buying NVIDIA anytime soon. Like, this is the canonical, you got to be 10x better than NVIDIA on this stuff
if you're going to convince a CIO. Yep. And even if you got the customer demand, you'd need to
contract with TSMC to get the manufacturing capability of their newest cutting-edge fabs
to do this 2.5D CoWoS lithography and packaging, which there, of course,
isn't any more of. So, you know, good luck getting that. And even if you figured out how to do that,
you'd need to build software that is as good or better than CUDA. And of course, that's going to
take 10,000 person years, which would, of course, cost you not only billions and billions of dollars,
but all that actual time. And even if you made all these investments and lined all of this up, you'd of course need to go and convince the developers to actually start using your thing instead of CUDA. Well, NVIDIA also wouldn't be standing still, so you'd have to do all of this in record time to catch up to them and surpass whatever additional capabilities they
developed since you started this effort. So I think the bottom line here is it's nearly impossible to
compete with them head-on, and if anybody's going to unseat NVIDIA in the future of AI and accelerated
computing, it's either going to be from some unknown flank attack that they don't see, or the
future will turn out to just not be accelerated computing and AI,
which seems very unlikely.
Yeah.
Well, when you put it that way,
I think the conclusion that we can come to
is that Marc Andreessen was right.
In what years was this that we were talking about on?
It was like 2015 or something.
Yeah, like 2015, 2016.
They should have put every dollar of every fund that A16Z raised into NVIDIA at the market price of the stock every single day.
Yeah, because they were seeing all of these startups doing deep learning, machine learning at the time, early AI, and they were all building on NVIDIA.
And they should have just said, no thank you to all of them and put it all in NVIDIA. Marc is right once again. Strength leads to strength. There you go. There it is. Well, listeners, I acknowledge that this episode
generalized a lot of the details, especially for technical listeners out there, but also for
the finance folks who are listening. Our goal was to make this more of a lasting NVIDIA Part 3
big picture episode than sort of a how did they do last quarter and what are the implications of that on the next three quarters. So hopefully this holds up a little bit longer than just some
current NVIDIA commentary. But thank you so much for going on the journey with us.
Yeah. We also, as we've alluded to throughout the show, we owe a bunch of thank yous to lots of
people who are so kind to help us out, including people who have way better things to do with their time. So
we're very, very grateful. I mean, one, Ian Buck from NVIDIA, who leads the data center effort and
is one of the original team members that invented CUDA way back when. Really grateful to him for
speaking with us to prep for this. Absolutely. Also, big shout out to friend and
listener of the show, Jeremy from ABC Data, who prepared four PDFs for us. Completely unprompted,
like an insane write-up for us about a lot of the technical detail behind this. Private blog posts.
Yeah, private blog posts. So our acquired community is just the best. You guys continue to blow us away. So thank you.
Julien, the CTO of Hugging Face. Oren Etzioni from AI2. Luis from OctoML. And of course,
our friends at NZS Capital. Thank you all for helping us research this.
Indeed. All right.
Carve-outs.
Let's shift gears. Carve-outs.
What you got?
My wife and I have been on an alias binge.
Oh, wow.
Yeah.
Jennifer Garner?
Yes.
I never saw it when it came out.
It is like the perfect early 2000s junk food
when you have one more hour at the end of the day
and you're just laying on the couch.
Ben, I never have one more hour at the end of the day.
I have a two-year-old.
But I really appreciate it for 16 years from now, when she goes to college. I'll keep that on my
graphics technology. So my review of Alias is it's a little bit campy. They repeat themselves
pretty often. I mean, it's weird to observe how much TV has changed between now and then
because they make very similar shows today, but they're just much more subtle. They're much darker. They leave much more
sort of to the imagination. And in the early 2000s, everything was just so like explicit and
on the nose and restated three times. I'm just glad the show doesn't have a laugh track, but
it's well worth the watch. Sometimes you have to imagine it has a different soundtrack because every episode has like a Matrix type song to it.
Bum-ba-da-bum-ba-da-bum-ba-dum-ba-dum.
Yes, that's right.
This is like the TV version of The Matrix, right?
Yes, but it's great.
I don't know, we're having a lot of fun watching it.
My carve-out, related for my stage of life, also something I missed and discovered recently: we just watched our first full Disney movie with our daughter. Major milestone, and she freaking loved it. I think we picked a great one, Moana, which neither Jenny nor I had seen before. And in reading just a little bit about it afterwards, you know how super sadly Pixar kind of fell off in recent years?
Like, such a bummer.
I mean, they're still Pixar, but, like, they're not Pixar.
It's not the guaranteed hit every time that it used to be.
Yeah.
So Moana came out in this kind of generation with Tangled and some of the other stuff out of actual Disney animation after the Pixar acquisition that are just like, these are return to form, Eisner era, Disney animated,
just like fires on all cylinders.
And we loved it.
We watched with our brother and sister-in-law who don't have kids and are 30-somethings
living in San Francisco.
They loved it.
Our daughter loved it.
Highly recommend Moana, no matter what life phase you're in.
All right. Great. Adding it to my list.
And it's got The Rock. How can you complain?
There you go. Well, listeners, if you want to be notified every time we drop a new episode,
and you want to make sure you don't miss it, and you want little hints to play a guessing
game at our next episode, or you want follow-ups from our previous
episode in case we learn from listeners, hey, here's a little piece of information that we
wanted to pass along. We will exclusively be dropping those in the email, acquired.fm slash
email. It was so fun. I think you're about to talk about our Slack. It was so fun watching people in
Slack talk about the hints for this episode. We wrote the little teaser and I was like, oh,
everybody's going to know exactly what this is.
No one got it. I was shocked.
Yeah. Eventually somebody did, but it took a couple days.
Yeah. We have a hat. You should buy it.
And this is not a thing that we make a lot of margin on.
We just are excited about more people sporting the ACQ around.
So participate in the movement. Show it to your friends.
It's not our super pod, but you know.
Yeah.
The pod is the super pod.
If you come on Acquired LP,
you can come closer to the kitchen
and help us pick an episode once a season
and we'll do a Zoom call every other month or so.
Acquired.fm slash LP.
Check out ACQ2 for more Acquired content
in any podcast player and come talk about this in the Slack. Acquired content in any podcast player,
and come talk about this in the Slack, acquired.fm slash Slack.
Listeners, we'll see you next time.
We'll see you next time.