Acquired - Nvidia Part II: The Machine Learning Company (2006-2022)
Episode Date: April 20, 2022

By 2012, NVIDIA was on a decade-long road to nowhere. Or so most rational observers of the company thought. CEO Jensen Huang was plowing all the cash from the company's gaming business into... building a highly speculative platform with few clear use cases and no obviously large market opportunity. And then... a miracle happened. A miracle that led not only to Nvidia becoming the 8th largest market cap company in the world, but also nearly every internet and technology innovation that's happened in the decade since. Machines learned how to learn. And they learned it... on Nvidia.

Sponsors:
- ServiceNow: https://bit.ly/acqsnaiagents
- Huntress: https://bit.ly/acqhuntress
- Vanta: https://bit.ly/acquiredvanta

More Acquired!:
- Get email updates with hints on next episode and follow-ups from recent episodes
- Join the Slack
- Subscribe to ACQ2
- Merch Store!

Links:
- Ben Thompson's great Stratechery interview with Jensen
- Linus Tech Tips tests an Nvidia A100
- Episode sources

Carve Outs:
- The Expanse short story collection, Memory's Legion
- Sony RX100 point-and-shoot camera

Note: Acquired hosts and guests may hold assets discussed in this episode. This podcast is not investment advice, and is intended for informational and entertainment purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.
Transcript
Still got Swedish House Mafia Greyhound in my head from the pump up.
Nice.
Nice.
It is funny how all GPU companies, I was watching a bunch of NVIDIA keynotes and AMD keynotes
to get ready for this, and everyone is so techno, neon, lighting.
It's like crypto before crypto. Welcome to Season 10, Episode 6 of Acquired, the podcast about great technology companies
and the stories and playbooks behind them. I'm Ben Gilbert, and I am the co-founder and managing director of Seattle-based Pioneer
Square Labs and our venture fund, PSL Ventures. And I'm David Rosenthal, and I am an angel investor
based in San Francisco. And we are your hosts. When I was a kid, David, I used to stare into
backyard bonfires and wonder if that fire flickering was doing so
in a random way, or if I knew about every input in the world, all the air, exactly the physical
construction of the wood, all the variables in the environment, if it was actually predictable.
And I don't think I knew the term at the time, but modelable. If I could know what the flame could look like if I knew all those inputs. And we now know, of course, it is indeed predictable,
but the data and compute required to actually know that make it extremely difficult in practice. But that
is what NVIDIA is doing today. Ben, I love that intro. That's great. I was thinking like,
where is Ben going
with this? And this was occurring to me as I was watching Jensen sharing the Omniverse vision for
NVIDIA and realizing NVIDIA has really built all the building blocks, the hardware, the software
for developers to use that hardware, all the user-facing software now and services to simulate
everything in our physical world with their unbelievably efficient and powerful GPU architecture. And these building blocks, listeners, aren't just
for gamers anymore. They are making it possible to recreate the real world in a digital twin
to do things like predict airflow over a wing or simulate cell interaction to quickly discover new drugs without
ever once touching a petri dish, or even model and predict how climate change will play out
precisely. And there is so much to unpack here, especially in how NVIDIA went from making
commodity graphics cards to now owning the whole stack in industries from gaming to enterprise
data centers to scientific computing, and now even
basically off-the-shelf self-driving car architecture for manufacturers. And at the scale
that they're operating at, these improvements that they're making are literally unfathomable
to the human mind. And just to illustrate, if you are training one single speech recognition
machine learning model these days, one, just one model, the number of math operations like adds or multiplies to accomplish it is actually greater than the number of grains of sand on the earth.
I know exactly what part of the research you got that from because I read the same thing and I was like, you got to be freaking kidding me.
Isn't that nuts? I mean, there's just nothing better
in all of the research that you and I both did,
I don't think, to better illustrate
just the unbelievable scale of data and compute required
to accomplish the stuff that they're accomplishing
and how unfathomably small all of this is,
the fact that that happens on one graphics card.
Yeah, so great.
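As a rough sanity check on that claim, here is the back-of-the-envelope arithmetic in Python, using commonly cited ballpark figures rather than numbers from the episode:

```python
# Back-of-the-envelope check (illustrative estimates, not from the episode).
# A commonly cited estimate for the number of grains of sand on Earth's
# beaches is roughly 7.5 x 10^18.
grains_of_sand = 7.5e18

# Training a modern speech-recognition model is often estimated at
# 10^19 or more total math operations (adds and multiplies).
training_ops = 1e19

print(training_ops > grains_of_sand)  # prints True at these estimates
```

At these (admittedly rough) estimates, one training run really does involve more math operations than there are grains of sand on Earth.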
Okay, listeners, now is a great time to tell you about longtime friend of the show, ServiceNow.
Yes, as you know, ServiceNow is the AI platform for business transformation.
And they have some new news to share.
ServiceNow is introducing AI agents.
So only the ServiceNow platform puts AI agents to work across every corner of your
business. Yep. And as you know, from listening to us all year, ServiceNow is pretty remarkable
about embracing the latest AI developments and building them into products for their customers.
AI agents are the next phase of this. So what are AI agents? AI agents can think,
learn, solve problems, and make decisions autonomously.
They work on behalf of your teams, elevating their productivity and potential.
And while you get incredible productivity enhancements, you also get to stay in full
control. Yep. With ServiceNow, AI agents proactively solve challenges from IT to HR,
customer service, software development, you name it. These agents collaborate, they learn from each other,
and they continuously improve,
handling the busy work across your business
so that your teams can actually focus on what truly matters.
Ultimately, ServiceNow and Agentech AI
is the way to deploy AI across every corner of your enterprise.
They boost productivity for employees,
enrich customer experiences, and make work better for everyone. Yep. So learn how you can put AI agents to work for your people
by clicking the link in the show notes or going to servicenow.com slash AI dash agents.
And after you finish this episode, come join the Slack, acquired.fm slash Slack,
and talk about it with us. All right, David, without further
ado, take us in. And as always, listeners, this is not investment advice. David and I may hold
positions in securities discussed, and please do your own research. That's good. I was going to
make sure that you said that this time, because we're going to talk a lot about investing and
investors in NVIDIA stock over the years. It has been a wild, wild journey.
So last we left our plucky heroes, Jensen Huang and NVIDIA in the end of our NVIDIA,
the GPU company years, ending kind of roughly 2004, 2005, 2006, they had cheated death, not once, but twice. The first time in the
super overcrowded graphics card market when they were first getting started.
And then once they sort of jumped out of that frying pan into the fire of Intel now gunning
for them, coming to commoditize them like all the other, you know, PCI chips that plugged into the Intel
motherboard back in the day. And they bravely fend them off. They team up with Microsoft.
They make the GPU programmable. This is amazing. They come out with programmable shaders
with the GeForce 3, they power the Xbox, they create the CG programming language with Microsoft.
And so here we are, it's now 2004, 2005,
and it's a pretty impressive company. Public company stock is high flying after the tech
bubble crash. They've conquered the graphics card market. Of course, there's ATI out there as well,
which will come up again, but there's three pretty important things that I think the company built in
the first 10 years. So one, we talked about
this a lot last time, these six month ship cycles for their chips. We talked about that,
but we didn't actually say the rate at which they ship these things. I actually wrote down
like a little list. So in the fall of 1999, they shipped the first GeForce card, the GeForce 256. In the spring of 2000, GeForce 2. In the
fall of 2000, GeForce 2 Ultra. Spring of 2001, GeForce 3. That's the big one with the programmable
shaders. Then six months later, the GeForce 3 Ti500. I mean, the normal cycle, I think we said
was two years, maybe 18 months for most of their competitors who just got largely left in the dust.
Well, I was just thinking, you know, yeah, the competitors are gone at this point,
but I'm thinking about Intel. How often did Intel ship new products, let alone fundamentally new
architecture? You know, there was the 286 and then the 386 and the Pentium, and it got to Pentium,
I don't know, five, whatever. Dude, I feel like the Intel product cycle is approximately the same as a new body style of cars. Yes, exactly. Every five, six years,
there seems to be a meaningful new architecture change. And Intel is the driver of Moore's law,
right? Like these guys ship and bring out new architectures at warp speed. And they've
continued that through to today. Two, one thing that we missed last time that is super important and becomes a big foundation of
everything NVIDIA becomes today that we're going to talk about, they wrote their own drivers for
their graphics cards. And we owe a big thank you for this and many other things to a great listener,
a very kind listener named Jeremy, who reached out to us in Slack and pointed us to a whole bunch of stuff, including the Asianometry YouTube channel.
So good. I've probably watched like 25 Asianometry videos this week.
So, so good. Huge shout out to them. But all the other graphics cards companies at the time,
and most peripheral companies, they let the further downstream partners write the drivers
for what they were doing. NVIDIA was the first one that said, no, no, no, we want to control this.
We want to make sure consumers who use NVIDIA cards have a good experience on whatever systems
they're on. And that meant A, that they could ensure quality, but B, they start to build up
in the company, this like base of really nitty-gritty low-level software developers
in this chip company. And there's not a lot of other chip companies that have capabilities like
this. No. And what they're doing here is taking on a bigger fixed cost base. I mean, it's very expensive
to employ all the people who are writing the drivers for all the different operating systems,
all the different OEMs, all the different boards that it has to be compatible with. But they viewed it as, it's kind of an Apple-esque
view of the world. We want the control or as much control as we can get over making sure that people
using our products have a great user experience. So they were sort of willing to take the short
term pain of that expense for the long-term benefit of that improved user experience with their products.
That their users, high-end gamers that want the best experience, they're going to go out,
they're going to spend the time, $300, $400, $500 on an NVIDIA top-of-the-line graphics card.
They're going to drop it into the PC that they built. They want it to work. I remember
messing around with drivers back in the day and things not working. Like, this is super important.
So all this is focused.
And then, of course, they have the third advantage in the company is programmable shaders, you
know, which ATI copies as well.
But like they innovated, like they've, you know, done all this.
So all of this at this time, it's all in service of the gaming market.
And one seed to plant here, David, when you say the programmable shaders developers, the
notion of a NVIDIA developer did not exist until this moment.
It was people who wrote software that would run on the operating system.
And then from there, maybe that compute load would get offloaded to whatever the graphics card was.
But it wasn't like you were developing for the GPU for the graphics card with a language
and a library that was specific to that card. So for the very first time now, they start to build
a real direct relationship with developers so that they can actually start saying, look, if you
develop for our specific hardware, there are advantages for you. And really our specific
gaming cards, like everything we're talking about, these
developers, they're game developers, all of this stuff, it's all in service of the gaming market.
So, you know, again, they're a public company. They have this great deal with Microsoft. They
bring out CG together. They're powering the Xbox. Wall Street loves them. They go from
sub a billion dollar market cap company after the tech crash, up to $5 to $6 billion by 2004, 2005.
Stock keeps going on a tear. By mid-2007, the stock reaches just under $20 billion market cap.
This is great. And this is all the stories. This is pure play gaming. These guys have built such
a great advantage in a developer ecosystem, in a large and growing market, clearly,
which is video games. Which on its own, that would be a great wave to surf. I mean, I think,
what's the gaming market today? 180 billion or something. And when we talked to Trip Hawkins,
who sort of like helped invent it, or Nolan Bushnell, you know, it was zero then. And so
Nvidia is sort of like on a wave that's at an amazing inflection point. They could
totally just ride this gaming thing and be an important company. It's not running out of steam.
I mean, like, how could you not be not just satisfied, but like more than satisfied with
this as a founder? You're like, yes, I am the leading company in this major market,
this huge wave that I don't see ending anytime soon. 99.9% of founders who are
themselves as a class, very ambitious, are going to be satisfied with that.
But not Jensen.
But not Jensen. So while all this is happening, he starts thinking about,
well, what's the next chapter? I'm dominating this market. I want to keep growing. I don't want NVIDIA to be just a gaming company. So we ended last time with the
little, almost surely apocryphal story of a Stanford researcher who sends the email to Jensen
and is like, ah, thanks to you, my son told me to go buy off the shelf GeForce cards at the local
Fry's Electronics. And I stuffed them into my PC at work.
And, you know, I ran my models on this.
He's a, I think it was a quantum chemistry researcher,
supposedly.
It was 10 times faster than the supercomputer
I was using in the lab.
And so thank you.
I can get my life's work done in my lifetime.
And Jensen loves that quote.
It comes out at every GTC.
So that story, if you're a skeptical listener, might beg two questions. First is a practical
one. You know, we just said everything's about gaming here. And here's like a researcher,
like a scientific researcher doing, you know, chemistry modeling using GeForce cards for that.
What's he writing this in? Well, it turns
out... Programmable shaders, right? Yeah. They were shoehorning CG, which was built for graphics.
They were translating everything that they were doing into graphical terms, even if it was not
a graphical problem they were trying to solve, and writing it in CG. This is not for the faint
of heart,
so to speak. Right. So everything is sort of metaphorical. He's a quantum chemistry researcher,
and he's basically telling the hardware, okay, so imagine this data that I'm giving you
is actually a triangle. And imagine that this way that I want to transform the data
is actually like applying a little bit of lighting to the triangle. And then I want you to output something that you think is the right color pixel. And then I will
translate it back into the result that I need for my quantum chemistry. Like you can see why that's
suboptimal. Yeah. So he thinks this is an interesting market. He wants NVIDIA to serve it.
If you really want to do that, right. It is a massive undertaking. It was 10 plus years to
get to the company to this point. You know, what CG was is like a small sliver of the stack of what
you would need to build for developers to use GPUs in a general purpose way, like we're talking
about. You know, it's kind of like they worked with Microsoft to make CG. It's like the difference between working on CG and like Microsoft building the whole .NET
framework for developing on Windows, you know, or today, even better, Apple, right? Like everything
Apple gives to iOS and Mac developers to develop on Mac. Right. Yeah. The analogy is not perfect,
but it's like, instead of just Apple saying, okay, Objective-C is the way that you write code for our platforms, good luck.
They're like, okay, well, you need UI frameworks. So how about AppKit and Cocoa Touch?
And how about all these other SDKs and frameworks like ARKit and like StoreKit and like HomeKit?
It's basically you need the whole sort of abstraction stack on top of the programming language to actually make it very accessible to write software for domains and disciplines
that you know are going to be really popular using that hardware.
Exactly.
So when Jensen commits himself and the company to pursuing this, he's biting off a lot.
Now we talked about they've been writing their own drivers.
So they have actually a lot of very low level. I don't mean low level like bad. I mean, low level, like
infrastructure, like close, very difficult systems oriented programming talent within the company.
So that kind of enables them to start here, but like still, this is big. So then the second
question, if you're a discerning investor, particularly in NVIDIA,
that you want to ask at this point in time is like, okay, Jensen,
you're committing the company to a big undertaking. What's the business case for that?
Show me the market. Don Valentine at this point would be sitting there listening to Jensen and
being like, show me the market. And not only is it show me the market,
but it's how long will the market take to
get here and it's how long is it going to take us and how many dollars and resources is it going to
take us to actually get to something that's useful for that market when it materializes
because while CUDA development began in 2006, that was not a usable platform for six-plus years at NVIDIA.
Yep.
This is closer to on the order of the Microsoft development environment or the Apple development
environment than what NVIDIA was doing before, which was like, hey, we made some APIs and
worked with Microsoft so that you can program for my thing.
Right.
I'm going to flash way forward just to illustrate the insane undertaking of this.
I searched LinkedIn for people who work at NVIDIA today and have the word CUDA in their title.
There are 1,100 employees dedicated specifically to the CUDA platform.
I'm surprised it's not 11,000.
Yeah.
Okay. So like, where's the market for this?
Yes, Ben, you asked the, you know, the third question, which is, okay, the intersection of what does this take to do this?
And when is the market going to get there in time and cost and all that?
But even just put that aside, is there a market for this is the first order question.
And the answer to that is probably no at this point in time.
And what they're aiming at is scientific
computing, right? It's researchers who are in science-specific domains who right now need
supercomputers or access to a supercomputer to run some calculation that they think is going to take
weeks or months. And wouldn't it be nice if they could do it cheaper or faster? Is that kind of the
market they're looking at? Yeah, they're attacking like the Cray market, like Cray supercomputers,
that kind of stuff. You know, great company, right? But like, that's no NVIDIA today.
Right. And they were dominating the market. You know, yeah, it's scientific research computing,
you know, it's drug discovery. It's probably a lot of this work they're thinking, oh,
maybe we can get into more professional, like Hollywood and architecture and other professional graphics domains. Yeah. Yeah, sure.
But, you know, you sum all that stuff up and, like, maybe you get to a couple-billion-dollar market,
maybe, like, total market. And not enough to justify the time and the cost of what you're
going to have to build out to go after this to any rational person.
So, you know, here we come. Jensen and NVIDIA, like they are doing this. He is committed. He's drunk the Kool-Aid. 2006, 2007, 2008, they're pouring a lot of resources into building what
will become CUDA that we'll get to in a second. It already is CUDA at this point in time.
And I think Jensen's psychology here is sort of twofold. One is he is enamored with this market.
He loves the idea that they can develop hardware to accelerate specific use cases in computing
that he finds sort of fanciful. And he likes the idea of making it more possible to do more things
for humanity with computers.
But the other part of it is certainly a business model realization where he has spent the last,
gosh, at this point, 13, 14 years being commoditized in all these different ways.
And I think he sees a path here to durable differentiation, where he's like, whoa.
To own the platform.
You know, it's kind of the Apple thing again, to own the platform and to build hardware that's
differentiated by not only software, but relationships with developers that use that
custom software. Like then I can build a really sort of like a company that can throw its weight
around in the industry. A hundred percent. Jensen, I don't know if he used it at the time because he
probably would have gotten pilloried, but maybe he did. I don't think he cared. He certainly has used it
since. The way he thought about this was it wasn't just like, if we build it, they will come,
which is what was going on. The phrase he uses is, if you don't build it, they can't come.
So it's not even like, yeah, I'm pretty sure if we build it, they will come. It's one step
removed from that. It's like, well, if we don't build it, they can't even possibly come.
I don't know if they will come, but they can't come if we don't build it.
So Wall Street is mostly willing to ignore this in 2006, 2007, 2008.
The company's still growing really nicely.
This great market cap run leading up to right before the financial crisis.
But then, you know, you mentioned last time, I think it gets announced in 2006, maybe and closes in 2007.
AMD acquires ATI.
And ATI was a very legit competitor.
It was the only standing legit competitor to NVIDIA through its whole life.
But now AMD acquired it and I think they acquired it for what, six, seven billion dollars,
something like that. Something like that. So it was a lot of money. And then they put
a lot of resources like they weren't just acquiring this to, you know, get some talent.
Like they're like, no, no, this is gonna be a big pipeline for us. We're putting a lot of weight
behind this. We haven't done the research into AMD the way we have into NVIDIA, but the AMD Radeon line, which used to be the ATI Radeon line, that is how you think about AMD
as a company, is that they make these GPUs mostly for the gaming use case.
Yep. Before the acquisition, I think the first PC I built in end of high school,
beginning of college, I think I had a Radeon card in it. I think I was
probably in the minority. I think NVIDIA was bigger, but for whatever reason, I liked ATI at that point
in time. So, like, they were legit. Well, so here's NVIDIA now focusing on this whole other thing,
and you're still in the gaming market, which, like we said, is this massive rising tide. Your
competitor now has all these resources in AMD that's fully
dedicated to going after it. Mid-2008, NVIDIA whiffs on earnings. Like, this is natural. They
took their eye off the ball. Of course they did. And the stock gets just hammered.
Because anything that CUDA empowers is not yet a revenue driver, and they've totally taken their eye off of gaming.
Yes.
So, you know, we said the high was around a $20 billion market cap.
It drops 80%, 8-0.
This isn't just the financial crisis.
It's almost quaint, I think, you know, for me thinking back on the financial crisis now and, like, people freaking out the Dow, you know, or the S&P dropping 5% in a day.
I'm like, oh, that's a Thursday these days.
It is literally the Thursday that we are recording.
Yes. For a company stock to drop 80%, a technology company stock, even during the
financial crisis, they're not just in the penalty box. They're getting kicked to the curb.
Right. Are they done? The headlines at this point are, is NVIDIA's run over? If you're most CEOs at this point in time, you're probably calling up Goldman or, you know,
Allen & Company or Frank Quattrone, and you're shopping this thing because how are you going
to recover? But not Jensen. But not Jensen, obviously. So instead, he goes and builds CUDA and continues to build CUDA.
And this is just set context. We get excited about a lot of stuff on Acquired. But I think
CUDA is one of the greatest business stories of the last 10 years, 20 years, more. I don't know.
What do you think, Ben? I mean, I'd say it's one of the boldest bets we've
ever covered, but so were programmable shaders. And so was NVIDIA's original attempt to make a
more efficient quadrilateral focused graphics. Those were big bets. I think this is a bet on
another scale, though. This is a bet that we don't cover that often on Acquire.
Those were big bets relative to the company's size at the time, but this bet is like an iPhone-sized bet. That's exactly what this is. It's an iPhone-sized bet.
It is a bet the company when you are already a several billion dollar company.
Yes. An attempt to create something that if they are successful and this market materializes,
this will be a generational company. So what is CUDA? It is NVIDIA's Compute
Unified Device Architecture. It is, as we've referred to thus far throughout the episode,
a full, and I mean full, development framework for doing any kind of computation that you would
want on GPUs. Yeah, and in particular,
it's interesting because I've heard Jensen reference it as a programming language.
I've heard him reference it as a computing platform. It is all of these things. It's an API.
It is an extension of C or C++, so there's a way that it's sort of a language. But importantly,
it's got all these frameworks and libraries that live on top of it.
And it enables super high level application development, you know, really high abstraction layer development for hundreds of industries at this point to communicate down to CUDA, which communicates down to the GPU and everything else that they have done at this point.
This is what's so brilliant. So right after we released, the same day that we released part one,
the first NVIDIA episode we did a couple weeks ago, Ben Thompson had this amazing interview
with Jensen on Stratechery. And Jensen in this interview, I think, puts what CUDA is and how
important it is, I think, better than I've seen anywhere else. So this is Jensen speaking to Ben. We've been advancing CUDA and the ecosystem for 15 years
and counting. We optimize across the full stack, iterating between GPU acceleration, libraries,
systems, and applications continuously, all while expanding the reach of our platform by adding new
application domains that we accelerate. We start with amazing chips, but for each field of science, industry, and application,
we create a full stack. We have over 150 SDKs that serve industries from gaming and design
to life and earth sciences, quantum computing, AI, cybersecurity, 5G, and robotics. And then
he talks about what it took to make this. This is like the point we were trying to
like hammer home here. He says, you have to internalize that this is a brand new programming
model and everything that's associated with being a program processor company or a computing
platform company had to be created. So we had to create a compiler team. We had to think about SDKs.
We had to think about libraries. We had to reach out to developers and evangelize our architecture
and help people realize the benefits of it. And we even had to help them market this
vision so that there would be demand for their software that they write on our platform and on
and on and on. It's crazy. It's amazing. And when he says that it's a whole new programming,
I think he says maybe paradigm or way of programming, it is literally true because
most programming languages up to this point and most computing platforms primarily contemplated
serial execution of programs. And what CUDA did was it said, you know what, the way that our GPUs
work and the way that they're going to work going forward is tons and tons of cores all executing things at the same time.
Parallel programming, parallel architecture.
Today, there's over 10,000 cores on their most recent consumer graphics card.
So insanely, or dare I say embarrassingly parallel.
And CUDA is designed for parallel execution from the very beginning.
That's the catchphrase in the industry of embarrassingly parallel.
And it's actually kind of a technical term.
I don't know why it's embarrassing.
It's basically the notion that this software is so parallelizable, which means that all of the computations that need to be run are independent.
They don't depend on a previous result in order to start executing. It's sort of like it would be embarrassing for you to execute
these instructions in order instead of finding a way to do it parallel.
Ah, it's not that it's parallel that's embarrassing. It's embarrassing if you
were to do it the old way on CPUs, serially. I think that's the implication.
Got it, got it. This is so obvious that it's embarrassingly parallel.
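That independence property can be sketched in a few lines. This is a Python stand-in, not CUDA, and the function and data are made up for illustration: each output element depends only on its own input, so the work can be done in any order, or all at once across thousands of GPU cores, and the result is identical.

```python
# Illustrative sketch of an "embarrassingly parallel" job (Python stand-in;
# on a GPU this per-element function would be a kernel run by one thread
# per element).

def brighten(pixel):
    # Per-element work: depends only on this element,
    # never on a neighbor's result.
    return min(pixel + 50, 255)

pixels = [10, 200, 130, 250, 0, 90]

# Serial execution: one element at a time, in order.
serial = [brighten(p) for p in pixels]

# Because every call is independent, the elements can be processed in ANY
# order (here, reversed) and the final result is identical -- the property
# that lets a GPU run all of them simultaneously.
reordered = list(reversed([brighten(p) for p in reversed(pixels)]))

print(serial == reordered)  # prints True: execution order doesn't matter
```

A computation where each step needed the previous step's result (say, a running total) would not have this property, and that is exactly the kind of workload CPUs were built for.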
Okay, now it makes sense. Now here's the CUDA grasp. We're going to spend a few minutes
talking about how brilliant this was. Everything we just described, this whole
undertaking, it's like building the pyramids of Egypt or something here. It is entirely free.
NVIDIA to this day, now this may be changing, we'll talk about this at the end of the episode,
has never charged a dollar for CUDA. But anyone can download it, learn it,
use it, you know, blah, blah, blah, all of this work, stand on the shoulders of everything NVIDIA has done. But Ben, what is the but? It is closed source and proprietary, exclusive to NVIDIA's hardware stack. It's like if you were to develop an iOS app and then try and deploy it on Windows,
it wouldn't work. It is integrated with the hardware. So OpenCL is sort of the main competitor
at this point. And they do actually let OpenCL applications run on their chips,
but nothing in CUDA is available to run elsewhere. It's so great. Okay. So now you can see this is just like Apple
and it's the Apple business model. Apple gives away all of this amazing platform ecosystem that
they built to developers. And then they make money by selling their hardware for very,
very healthy gross margins. But this is why Jensen is so brilliant because back when they
started down this journey in 2006, even before
that, when they started and then all through it, there was no iOS, there was no iPhone. Like it
wasn't obvious that this was a great model. In fact, most people thought this was a dumb model
that like Apple lost and the Mac was stupid and niche and like Windows and Intel is what won the open ecosystem.
Well, but Windows and Intel did have proprietary development environments and,
you know, full stack dev tools.
Oh, yeah.
There's a lot of nuance here.
It's not like they were like open source per se, but it could run on any hardware.
Well, except that it couldn't.
It could only run on the Intel, IBM, Microsoft alliance world.
It wasn't running on Power PCs.
It wasn't running on anything Apple made.
That's true.
It's funny.
In some ways, NVIDIA is like Apple.
In other ways, they're like the Microsoft, Intel, IBM alliance,
except fully integrated with each other instead of being three separate companies. Yeah, that's maybe a good way to put it. It is sort of somewhere in between. There is
nuance here. Remember when Clay Christensen was bashing on Apple in the early days of the iPhone
being like, yeah, oh yeah, open's gonna win. Android's gonna win. Apple is doomed. You know,
closed never works. You got to be modular. You can't be integrated. And like, you know,
Clay was amazing and one of the greatest strategic thinkers. But I think that's just representative to me of like, everybody thought that the Apple model sucked. And nobody believed that NVIDIA was going to have the scale required to justify this investment, or that there was a market to let them achieve the scale to justify this.
That's the thing. Even if you were to say, okay, Jensen, I believe you, and I agree with you that
this is a good model if you can pull it off. At the time, you could be Don Valentine or whoever
looking around, and maybe Don was still looking around because they probably still held the stock, being like, where's the market that's
going to enable the scale you need to run this playbook? All right. So are you going to take us
to 2011, 12? Where are we hopping back in here? If only the world worked like fiction and it
were actually like a truly straight line.
It's never a straight line.
We will get there.
And that is what saves NVIDIA and makes this whole thing work.
But they have some misadventures in between.
So a stock's getting hammered.
It's 2008.
And I'm just completely speculating on my own.
But they're in the penalty box.
They're committed to continuing to invest in CUDA and making general purpose computing on GPU a thing. I do wonder if they felt like, well, we got to do something to appease shareholders here.
We got to show that we're trying to be commercial here. So it's 2008. What's going on
in 2008 in the tech world? It's mobile. So in 2008, they launch the Tegra chip and platform
within NVIDIA. This may not be what saved the company. This is not what saved the company. This
is more a clown car style. Maybe that's too rough on
NVIDIA. But what was Tegra? People might recognize that name. It was a full-on system on a chip for
smartphones competing directly with Qualcomm, with Samsung. It was a processor, an ARM-based CPU,
plus all of the other stuff you would need for a system on a chip
to power Android handsets. I mean, this is like a wild departure. It leverages none of NVIDIA's
core skill sets, except maybe graphics being part of smartphones. But come on, if there's ever a
use case for integrated graphics, it's smartphones. Right. Right. Low power, smaller footprint.
Yep. Totally. Do you know, this is one of my favorite parts about the whole research.
Do you know what the first product was that shipped using a Tegra chip?
Uh, no. It was the Microsoft Zune HD media player.
That just tells you pretty much everything you need to know.
It did, though. The Tegra system, it is still around sort of to this day, powered the original Tesla Model S touchscreen.
So like before any of the autopilot autonomous driving stuff, they were the processor powering just the infotainment, the touchscreen infotainment in the Model S. And I think that
actually starts to help NVIDIA get into the automotive market. The Tegra platform still
to this day is the main processor of the Nintendo Switch. Oh, they repurposed it for that? Yeah,
for that. And I think they still have their NVIDIA Shield proprietary gaming device stuff, though I don't know that anybody buys those.
Oh, this makes so much sense because they basically have walked away from
every console since the PlayStation 3.
Yep.
And so it's interesting that they have this thriving gaming division that doesn't power
any of the consoles except the Nintendo Switch. And I always sort of wondered,
why did they take on the Switch business? Because they kind of already had it done.
It's not for the graphics cards. It was somewhere to put the Tegra stuff.
Fascinating. Quick aside, it's funny how these GPU companies have not been good
at transitioning to mobile. There's like a funny naming thing,
but do you know what happened to... So there's the ATI Radeon, which became the AMD Radeon
desktop series. They tried to make mobile GPUs. It didn't go great, and they ended up spinning
that out and selling all that IP to another company. Do you know the company?
Oh, I do not. Was it Apple? It is Qualcomm.
And today it's Qualcomm's mobile GPU division, and Qualcomm's good at mobile, and so it's a natural
home for it. Do you know what that line of mobile GPU processors is called? No. It is the Adreno, A-D-R-E-N-O, processors. And do you know why it's called the Adreno? No, that sounds super familiar, but no. The letters are rearranged
from Radeon. That's great. Yeah, that's great. So you're saying NVIDIA's mobile graphics efforts
didn't quite pan out? No. We didn't talk about this as much in the Sony episode,
but my impression of the whole Android value chain ecosystem
is that there's no profits to be made anywhere,
and Google keeps it that way on purpose.
Ironically, they make a lot of money now on the Play Store.
Ah, yeah, the Play Store and ads.
Right.
I do think the primary way that they monetize it is not having to pay other people to acquire the search traffic. Right. But I mean,
for like partners, like if you are making everything from chips all the way up through
hardware in the Android ecosystem, I don't think you're making money. Like, maybe if you were the scale player, but these things are designed to sell for dirt cheap as products. There's no margin to be had here.
Yep.
Yep.
Also, before we continue, you just did the sidebar on the AMD mobile graphics chip.
I see your sidebar.
I'm going to raise you one more sidebar that we have to include that you know because the
NZS guys told us about this.
So when NVIDIA is going after mobile, they buy a mobile baseband company, a British company called Icera, in 2011.
You know where I'm going with this.
Oh, yes, I do.
This is so good.
It's a good seed plant to come back to later.
You know, because they're investing in mobile, integrated baseband is going to be a thing, blah, blah, blah.
And then a few years later, when they end up pretty much shutting down the whole thing, they shut down what they bought from Icera.
They lay everyone off.
The Icera founders, who made a lot of money when NVIDIA bought them, go off and found a company called Graphcore, which we're going to talk about a little bit at the end of the episode.
It's maybe one of the primary sort of...
NVIDIA bear cases.
NVIDIA bear cases, NVIDIA killers out there.
They've now raised about $700 million in venture capital.
Mobile.
In some ways, it's kind of like Bezos and Jet.com.
Yes.
If Jet had been successful.
I think that's sort of the Graphcore to NVIDIA analogy.
Yes.
Well, the jury's still out if anybody's going to be really successful in
competing with NVIDIA. Although I think the market now is probably ironically big enough that NVIDIA can be the whale and there can be plenty of other big companies too. So anyway, okay.
Back to the story. So NVIDIA is bumping along through all of this in the late 2000s, early 2010s. Some years, growth is 10%.
Maybe it's flat in others. This company has completely gone sideways. In 2011,
they whiff on earnings again. Stock goes through another 50% drawdown. It's cliche. I was going to
say it. I don't even know if you can say it about Jensen. Like, here we are. The company is screwed again. Like, everybody else would have given up,
but obviously not them. So what happens? Basically, a miracle happens. I don't know
that there's any other way that you can describe this except like a miracle. So maybe this is actually not a great strategy case study of Jensen because it required a miracle. Well, Jensen would say it was intentional that they did know the market timing
and that the strategy was right and the investment was paying off and that they were doing this the
whole time. Yeah, sure. In fact, even in the Ben Thompson interview, I think he said,
Ben basically lays out like, how did all these implausible things happen at exactly the right time?
And his response is, oh, yes, we planned it all.
It was so intentional.
Jensen did not plan AlexNet or see it coming because nobody saw AlexNet coming.
So in 2009, a Princeton computer science professor and also undergrad alum of Princeton, just like yours truly, woo, wonderful place, named Fei-Fei Li. Her specialty is artificial intelligence and computer vision. She starts working on an image classifying project that she calls ImageNet.
Now, the inspiration for this was actually a way old project from, I think, the 80s at Princeton called WordNet that was like classifying words.
This is classifying images: ImageNet. And her idea is to create a database of millions of labeled images, like images that
they have a correct label applied to them, like this is a dog or this is a strawberry or something
like that. And that with that database, then artificial intelligence image recognition algorithms could run against
that database and see how they do. So like, oh, look at this image of, you know, you and I were
looking at it, be like, that's a strawberry. But you don't give the answer to the algorithm and
the algorithm figures out if it thinks it's a strawberry or a dog or whatever. So she and her
collaborators start working on this. It's super cool. They build the database. They use Amazon Mechanical Turk to build it. And then one of them, I'm not exactly sure who, if it was Fei-Fei or somebody else, has the idea of like, well, you know, we've
got this database. We want people to use it. Well, let's make a competition. This is like a very
standard thing in computer science academia of like, let's have a competition, an algorithm
competition. So we'll do this annually. And anyone, any team can submit their algorithms against the ImageNet
database and they'll compete. Like who can get the lowest error rate, like the most number of
images, percentage of the images correct. And this is great. So it brings her great renown,
becomes popular in the AI research community.
She gets poached away by Stanford the next year.
I guess that's okay because I went there too.
So that's fine.
And she's still there.
I know.
I couldn't resist.
I couldn't resist.
She's like a kindred spirit to me.
Do you know?
I know you do know, but I bet most listeners do not know what her endowed tenure chair
is at Stanford today.
I do. She is the Sequoia chair.
Yes, the Sequoia Capital Professor of Computer Science at Stanford. So cool. Why does she become
the Sequoia Capital chair? And what does all this have to do with NVIDIA? Well, in the 2012
competition, a team from the University of Toronto submits an algorithm that wins the competition.
And it doesn't just win it by like a little bit.
It wins it by a lot.
So the way they measure this is 100% of the images in the database.
What percentage of them did you get wrong?
So it wins it by over 10%.
I think it had a 15% error rate or something, and the next best, like all the best previous ones, had been like 26-point-something percent. Yes. This is like someone breaking the four-minute mile. Actually, in some
ways it's more impressive than the four-minute-mile thing because they didn't just brute force their way all the way there. They took a completely different approach. And then boom, showed that we could get way more accurate than anyone else ever thought.
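The scoring they're describing is easy to sketch. Here's a toy version of the competition's error-rate metric, with made-up labels rather than the real ImageNet data (the actual benchmark scored top-5 error over more than a million images, but the idea is the same):

```python
# Toy version of an ImageNet-style leaderboard metric: compare each entry's
# predicted labels against the ground truth and report the fraction wrong.
# (Made-up labels; the real competition scored top-5 error over ~1.5M images.)

def error_rate(predictions, ground_truth):
    """Fraction of images the entry labeled incorrectly (lower is better)."""
    wrong = sum(1 for p, t in zip(predictions, ground_truth) if p != t)
    return wrong / len(ground_truth)

truth   = ["dog", "strawberry", "dog", "cat"]
entry_a = ["dog", "strawberry", "cat", "cat"]  # one miss
entry_b = ["dog", "dog",        "cat", "cat"]  # two misses

print(error_rate(entry_a, truth))  # 0.25
print(error_rate(entry_b, truth))  # 0.5
```

Lower is better, which is why a jump from the mid-twenties down to roughly 15% error was such a shock.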
So what was that approach? Well, the team was composed of Alex Krizhevsky, who was the primary lead. He was a PhD student, in collaboration with Ilya Sutskever and Geoff Hinton. Geoff Hinton was the PhD advisor of Alex.
They call it AlexNet.
What is it?
It is a convolutional neural network, which is a branch of artificial intelligence called
deep learning.
Now, deep learning is new for this use case, but Ben, you weren't exactly right. It had been around for a long time, a very long time.
And deep learning, neural networks, this was not a new idea.
The algorithms had existed for many decades, I think, but they were really, really, really computationally intensive.
To train the models for a deep neural network, you need a lot of compute, like on the order of the grains of sand that exist on Earth.
It was completely impossible with a traditional computer architecture that you could make
these work in any practical applications.
And people were forecasting too, like, when with Moore's law, when will we be able to do this? And it still seemed like the far future, because not only did Moore's law need to happen, but you also needed the NVIDIA approach of massively parallelizable architecture, where suddenly you could get all these incredible performance gains, not just because you're putting, you know, more transistors in a given space, but because you're able to run programs in parallel
now. Yes. So AlexNet took these old ideas and implemented them on GPUs. And to be very specific,
it implemented them in CUDA on NVIDIA GPUs. We cannot overstate the importance of this moment,
not just for NVIDIA, but for like computer science,
for technology, for business, for the world, for us staring at the screens of our phones all day,
every day. This was the big bang moment for artificial intelligence and NVIDIA and CUDA
were right there. Yep. It's funny. There's another example within the next couple of years, 2012-2013, where NVIDIA had been thinking about this notion of general-purpose computing for their architecture for a long time. In fact, they even thought about, should we relaunch our GPUs as GPGPUs, general-purpose graphics processing units? And of course, they decided not to do that, but
just built CUDA. Which is code word for like, we've been searching for years for a market for
this thing. We can't find a market. So we'll just say you can use it for anything. Right. And so
deep learning is generating a lot of buzz, you know, a lot from this AlexNet competition.
And so in 2013, Bryan Catanzaro, who's a research scientist at NVIDIA, published a paper with
some other researchers at Stanford, which included Andrew Ng, where they were able to
take this unsupervised learning approach that had been done inside the Google Brain
team, where the Google Brain team had sort of published their work on this, and it had
a thousand nodes.
And this is a big part of the sort of early neural
network hype cycle of people trying cool stuff. And this team was able to do it with just three
nodes. So totally different models, super parallelized, lots of compute for a super
short period of time in a really high performance computing way or HPC as it would sort of become
known. And this ends up being the very core of what becomes cuDNN, which is the library for deep neural networks that's actually baked into CUDA that
makes it easy for data scientists and research scientists everywhere who aren't hardware
engineers or software engineers to just pretty easily write high-performance deep neural networks on NVIDIA hardware. So this AlexNet thing, plus then Bryan and Andrew Ng's paper, it just collapses all these sort of previously-thought-to-be-impossible lines to cross, and just makes it way easier and way more performant and way less energy-intensive for other teams to do it in the future. Yep. And specifically to do deep learning. So I think at this point, like everybody knows that
this is pretty important, but it's not that much of a leap to say, if you can train a computer
to recognize images on its own, that you can then train a computer to see on its own, to drive a car on its
own, to play chess, to play Go, to make your photos look really awesome when you take them
on the latest iPhone, even if you don't have everything right. To eventually let you describe
a scene and then have a transformer model paint that scene for you in a way that is unbelievable that a human didn't make it.
Yep. And then most importantly, for the market that Jensen and NVIDIA are looking for,
you can use the same branch of AI to predict what type of content you might like to see next show up in your feed of content and what type of ad might work really, really, really well on you.
So basically, all of these people we were just talking about, I bet a lot of you recognize their
names. They get scooped up. Fei-Fei Li goes to Google. Bryan went to Baidu, and he's back at NVIDIA now doing applied AI. Geoff Hinton goes to Google. So, you know, all the
other markets, like even throw out, say you don't believe in self-driving cars, you don't think it's
going to happen or any of this other stuff. Like, it doesn't matter. Like the market of advertising,
of digital advertising that this enables is a freaking multi-trillion dollar market.
And it's funny because like that feels like, oh, that's the killer use case,
but that's just the easiest use case.
That's the most obvious,
well-labeled data set that
these models don't
have to be
amazingly good because they're not
generating unique output.
They're just assisting in making
something more efficient. But then, like,
flash forward 10 more years, and now we're in these
crazy transformer models with I don't know if it's hundreds of millions or billions of parameters,
things that we thought only humans could do are now being done by machines. And it's like,
it's happening faster than ever. So I think to your point, David, it's like, oh, there was this
big cash cow enabled by neural networks and deep learning in advertising. Sure. But that
was just the easy stuff. Right. But that was necessary though. This was finally the market
that enabled the building of scale and the building of technology to do this. And, uh,
yes, in the Ben Thompson interview with Jensen, Ben actually says this when he's sort of realizing it, talking to Jensen. He says, I've been talking about the way value accrues on the internet in a world of zero marginal costs, where there's just an explosion in abundance of
content that value accrues to those who help you navigate the content. And he's talking about
aggregation theory, duh. And then he says, what I'm hearing from you, Jensen is that yes,
the value accrues to people that help you navigate that content, but someone has to make the chips
and the software so that they can do that effectively. And it's like how it used to be: Windows was the consumer-facing layer and Intel was the other piece of the
Wintel monopoly. This is Google and Facebook and a whole list of other companies on the consumer
side. And they're all dependent on NVIDIA. And that sounds like a pretty good place to be.
And indeed, it was a pretty good place to be. Amazing place to be. Oh my gosh. The thing is like the market did not realize this for
years. And I mean, I didn't realize this and you probably didn't realize this. We were
the class of people working in tech as venture capitalists that should have.
Ooh, do you know the Marc Andreessen quote? Ooh, no.
Oh, this is awesome. Okay. So it's a a couple years later. So it's like getting more obvious, but it's 2016.
And Marc Andreessen gave an interview.
He said, we've been investing in a lot of companies
applying deep learning to many areas.
And every single one effectively comes in
building on NVIDIA's platforms.
It's like when people were all building on Windows
in the 90s or all building on the iPhone in the late 2000s.
And then he says, for fun, our firm has an internal game of what public companies we'd invest in if we were a hedge fund.
We'd put in all of our money to NVIDIA. This is like, it was Paradigm, right? That called all of their capital in one of their funds and put it into Bitcoin when it was like
$3,000 a coin or something like that. We all should have been doing this. So literally, NVIDIA stock, like this is now known, in 2012, 13, 14, 15, it doesn't trade above like five bucks a share. And NVIDIA today, as we record this, is I think about $220 a share; the high in the past year has been well over $300. Like, if you realized what was going on, and again, in a lot of those years it was not that hard to realize what was going on. Wow. Like, it was huge. It's funny. So there was even, and we'll
get to what happened in 2017 and 2018 with crypto in a little bit. But there was a massive stock
run up to like $65 a share in 2018. And even as late as I think the very beginning of 2019, you could have gotten it.
I tweeted this and we'll put the graph on the screen in the YouTube version here.
You could have gotten it in that crash for 34 bucks a share.
In 2019!
If you zoom out on that graph, which is the next tweet here,
you can see that in retrospect, that little crash just looks like nothing. You don't even
pay attention to it in the crazy run-up that they had to 350 or whatever their all-time high was.
Yeah, it's wild. And a few more wild things about this. It's not until 2016,
again, AlexNet happens in 2012, it's not until 2016 that NVIDIA gets back to the $20 billion
market cap peak that they were at in 2007 when they were just a gaming company.
That's almost 10 years.
I really hadn't thought about it the way that you're describing it.
But the breakthrough happened in 2010, 2011, 2012.
Lots of people had the opportunity, especially because freaking Jensen's talking about it
on stage.
He's talking about it at earnings calls at this point.
He's not keeping this a secret.
No, he's trying to tell us all that this is the future. And people are still skeptical.
Everyone's not rushing to buy the stock. We're watching this freaking magic happen,
using their hardware, using their software on top of it. And even to semiconductor analysts who are students of listening to Jensen talk and following the space very closely, I think he sounds like a crazy person when he's up there espousing that the future is neural networks and we're going to go all in. And they're not pivoting the business, but you can tell from the amount of attention that he's giving in earnings calls to this versus the gaming.
I mean, everyone's just like, uh, are you off your rocker? Well, I think people had just lost trust and interest, you know, after, like, there were so many years of, like, they were so early with CUDA. And again, they didn't even know that this, like, they didn't know AlexNet was gonna happen. Right. Jensen felt like the GPU platform could enable things that the CPU paradigm could not. And he had this faith
that something would happen. But he didn't know this was going to happen. And so for years,
he was just saying that, like, we're building it, they will come.
And to be more specific, it was that, well, look, the GPU has accelerated the graphics workload. So
we've taken the graphics workload off of the CPU.
The CPU is great.
It's your primary workhorse for all sorts of flexible stuff.
But we know graphics needs to happen in its own separate environment and have all these
fancy fans on it and get super cooled.
And it needs these matrix transforms.
The math that needs to be done is matrix multiplication.
And there was starting to be this belief that, like, oh, well, because, you know, the apocryphal professor told me that he was able to program the matrix transforms to work for him. You know,
maybe this matrix math is really useful for other stuff. And sure it was for scientific computing.
And then honestly, like it fell so hard into NVIDIA's lap that the thing that made deep learning work
was massively parallelized matrix math.
And they're like, NVIDIA is just staring down at their GPUs like,
I think we have exactly what you are looking for.
Yes.
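That "exactly what you are looking for" is concrete: graphics work is the same small matrix multiply applied to every vertex or pixel independently. A minimal sketch of the idea, using a 2D rotation on toy data in plain Python rather than actual GPU code:

```python
import math

# Why graphics is "embarrassingly parallel": the same tiny matrix multiply is
# applied to every point independently, so each one could run on its own GPU
# core. Here, a 2x2 rotation matrix applied to a few 2D points.

def rotate(point, theta):
    """Multiply one point by a 2x2 rotation matrix; needs no other point."""
    x, y = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

points = [(1.0, 0.0), (0.0, 1.0), (2.0, 3.0)]
theta = math.pi / 2  # rotate everything 90 degrees

# Each iteration is independent of the others; this loop is exactly the kind
# of work a GPU fans out across thousands of cores at once.
rotated = [rotate(p, theta) for p in points]
print(rotated[0])  # (1, 0) rotated 90 degrees lands at roughly (0, 1)
```

Because no point's result depends on any other point's, a GPU can hand each one to a different core, which is the "embarrassingly parallel" property the hosts keep coming back to.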
There's that same interview with Bryan Catanzaro.
He says about when all this happened, he says,
the deep learning happened to be the most important of all applications that need high
throughput computation. Understatement of the century. And so once NVIDIA saw that,
it was basically instant. The whole company just latched onto it. There's so many things to laud Jensen for. He was painting a vision for the future, but he was paying very close attention, and the company was paying very close attention, to anything that was happening. And then when they saw that this was happening, they were all over it. It's like an accident of history. In some ways, it feels so intentional. Graphics is an embarrassingly parallel problem because every pixel on a screen is unique. I mean, they don't have a core to drive every pixel on the screen. There's only 10,000 cores on the most recent NVIDIA graphics cards, which is crazy, right? But there's way more pixels on a screen than that. So they're not all doing every single pixel at
the same time, every clock iteration. But it worked out so well that neural networks also
can be done entirely in parallel like that, where every single computation that is done
is independent of all the other computations that need to be done. So they also can be done on this super parallel set of cores. It's just, you got to wonder, when you kind of reduce all this stuff to just math,
it is interesting that these are two very large applications of the same type of math
in the search space of the world of what other problems can we solve with parallel matrix
multiplication. There may be more.
There may even be bigger markets out there. Totally. Well, I think there probably will be.
A big part of Jensen's vision that he paints for NVIDIA now, which we'll get to in a sec, is
this is just the beginning. There's robotics, there's autonomous vehicles, there's the Omniverse,
it's all coming.
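To make the "same type of math" point concrete: one layer of a neural network is, at its core, the same independent-dot-product structure. A toy sketch with made-up weights and no framework:

```python
# One dense neural-network layer, written out by hand with made-up weights.
# Every output neuron is an independent dot product of the inputs with that
# neuron's weights, so, just like pixels, they can all be computed in parallel.

def dense_layer(inputs, weights, biases):
    """outputs[i] = relu(dot(inputs, weights[i]) + biases[i]), each independent."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        activation = sum(x * w for x, w in zip(inputs, neuron_weights)) + bias
        outputs.append(max(0.0, activation))  # ReLU nonlinearity
    return outputs

inputs = [1.0, 2.0]
weights = [[0.5, -1.0],  # neuron 1's weights
           [1.0, 1.0]]   # neuron 2's weights
biases = [0.0, -1.0]

print(dense_layer(inputs, weights, biases))  # [0.0, 2.0]
```

Each output neuron's computation touches only its own row of weights, so, like pixels, all of them can run at the same time, which is exactly the matrix math a GPU accelerates.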
It's funny.
We just joked about how nobody saw this before the run-up in 2016, 2017.
There were all these years where Marc Andreessen knew. Whether he made money in his personal account or not, we'll have to ask him.
But then in 2018, another class of problems that are embarrassingly parallelizable is, of course, cryptocurrency mining.
And so a lot of people were going out and buying consumer NVIDIA graphics cards and using them to set up crypto mining rigs in 2016, 2017.
And then when the crypto winter hit in 2018 and the end of the ICO craze and all that, the mining rig demand fell off. And this
had become so big for NVIDIA that their revenue actually declined. Right. Yeah. So a couple
interesting things here. Let's talk about technically why. So the way crypto mining
works is effectively guess and check. You're effectively brute forcing an encryption scheme.
And when you're mining, you know, you're trying to discover the answer to something that is hard to discover. So you're guessing. If that's not the right thing,
you're incrementing, you're guessing again. And that's a vast oversimplification and not
technically exactly right, but that's the right way to think about it. And if you were going to
guess and check at a math problem, and you had to do that on the order of a few million times
in order to discover the right answer, you could, very unlikely, discover the right answer on the first try, but that probabilistically is only going to happen
to you once if ever. And so, well, the cool thing about these chips is that A, they have a crap ton
of cores. So the problem like this is massively parallelizable because instead of guessing and
checking with one thing, you can guess and check with 10,000 at the same time and then 10,000 more and then 10,000 more.
And the other thing is it's simple, repetitive math that these chips can churn through. So yet again, there's this third application beyond
gaming, beyond neural networks. There's now this third application in the same decade for the two
things that these chips are uniquely good at. And so it's interesting that you could build
hardware that's better for crypto mining or better for AI, and both of those things have been built
by NVIDIA and their competitors now. But the sort of general-purpose GPU happened to be pretty darn
good at both of those things. Well, at least way, way, way better than a CPU.
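The guess-and-check loop Ben describes can be sketched as a toy proof-of-work. This is a simplification, real Bitcoin mining runs double SHA-256 over a block header against a numeric target, but the shape of the problem is the same:

```python
import hashlib

# Toy proof-of-work: guess a nonce, check whether the hash of (data + nonce)
# starts with enough zeros, and if not, increment and guess again. Real mining
# uses double SHA-256 over a block header, but the guess-and-check shape is
# the same, and every nonce can be checked independently of every other.

def mine(data, difficulty):
    """Return the first nonce whose SHA-256 hex digest has `difficulty` leading zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("acquired", 2)  # low difficulty so this finishes in a blink
digest = hashlib.sha256(f"acquired{nonce}".encode()).hexdigest()
print(nonce, digest[:10])  # the winning digest starts with "00"
```

Every candidate nonce can be checked independently of every other, which is why thousands of GPU cores, or later purpose-built ASICs, can all grind on it at once.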
Yeah. As some of NVIDIA's startup competitors put it today,
and Cerebras is the one that I'm thinking of,
they sort of say, well, the GPU is a thousand times better,
or much, much better than a CPU for doing this kind of stuff.
But it's like a thousand times worse than it should be. There exist much more optimal solutions for, you know, doing some of this AI
stuff. Interesting. Really begs the question of like, how good is good enough in these use cases?
Right. And now, I mean, to flash way forward, the game that NVIDIA and everyone else,
all these upstarts are playing is really, it's still the accelerated computing
game, but now it's how do you accelerate workloads off the GPU instead of off the CPU?
Interesting. Well, back to Crypto Winter. The NVIDIA stock gets hammered again. It goes through
another 50% drawdown. This is just like every five years, this has got to happen.
Which is fascinating because at the end of the day, it was a thing completely outside their
control. People were buying these chips for a use case that they
didn't build the chips for. They had really no idea what people were buying them for. So it's
not like they could even get really good market channel intelligence on are we selling to crypto
miners or are we selling to, you know, people that are going to use these for gaming. They're
selling to Best Buy and then people go buy them in Best Buy. Right. And some people are buying them wholesale, like if you're actually starting a
data center to mine. But a lot of people are just doing this in their basement with consumer
hardware. So they don't have perfect information on this. And then, of course, the price crashing
makes it either unprofitable or less profitable to be a miner. And so then your demand dries up
for this thing that you, A, didn't ask for
and B, had poor visibility into knowing if people were buying in the first place. So the management
team just looks terrible to the street at this point because they had just no ability to understand
what was going on in their business. And I think a lot of the street was still this hangover of
skepticism about, is this deep learning thing real, like what Jensen says? Okay, and so, you know, it's kind of any excuse to sell off. But anyway, that 50% dip was short-lived, because with the use case, and specifically the enterprise use case, for GPUs for deep learning, like, it just took off. And so this is really interesting.
If you look at NVIDIA's,
they report financials
a couple of different ways,
but one of the ways
they break it out
is a few different segments
is the gaming consumer segment
and then their data center segment.
And it's like data center,
like what are they in the data center?
Well, all the data centers, right?
All of the stuff we're talking about,
it's all done in the data center.
Like, Google isn't going and buying, you know, a bunch of NVIDIA GPUs and hooking them up to the laptops of their software engineers. Like, is Stadia still a thing? Like, I think that's used for cloud gaming, and some, like, there are, but it's all happening in the data center, is my point. Right, right. I guess what I'm saying, my argument is, every time I see data center revenue, in my mind I sort of make it synonymous with, this is their ML segment.
Ah, yes, yes. That's what I'm saying. I agree.
Yeah.
Now the data center, this is really interesting, again, because
they used to sell these cards that would get packaged, put on a shelf, a consumer would buy
them. Yeah, they made some specialty cards for the scientific computing market and stuff like that.
But this data center opportunity, like, man, do you know the prices that you can sell
gear to data centers for? Like, it makes the RTX 3090 look like a pittance.
And the RTX 3090, which is their most expensive high-end graphics card that you can buy as a
consumer, was $3,000. Now it's like $2,000. But if you're buying,
I don't know, what's the latest? It's not the A100. It's the H100.
So the A100, they just announced the H100.
And that's what, like 20 or 30 grand in order to just get one card?
Yeah. And people are buying a lot of these things.
Yeah, it's crazy. It's crazy.
It's funny. I tweeted about this and I was sort of wrong,
but then, like everything, there's nuance. You know, Tesla has announced making their own hardware.
They're certainly doing it on the car for the inference stuff, like the full self-driving computer on Teslas. They now make those chips themselves. Tesla Dojo, which is the training
center that they announced, they announced they were also going to make their own silicon for
that. They actually haven't done it yet.
So they're still using NVIDIA chips for their training.
The current compute cluster that they have, that they're still using,
I want to say I did the math and assumed some pricing.
I think they spent between $50 and $100 million
that they paid NVIDIA for all of the compute in that cluster.
Wow, that's one customer.
That's one customer for one use case at that one customer.
Crazy. I mean, you see this show up in their earnings. So we're at the part of the episode
where we're close enough to today that it's best illustrated by the today numbers. So I'll just
flash forward to what the data center segment looks like now. So two years ago, they had about
$3 billion of revenue, and it was only about half of their gaming revenue segment. So gaming, through all this,
through 2006 to AlexNet, all the way another decade forward to 2020, gaming is still king.
It generates almost $6 billion in revenue. The data center revenue segment was $3 billion,
but had been pretty flat for a couple of years. Then, insanely, over the last two years, it 3x'd. The data center segment 3x'd. It is now doing over $10.5 billion a year in revenue, and it's basically the same size as the gaming segment. It's nuts. It's amazing how it was sort of obvious in the mid-2010s, but then the enterprises really showed up and said, we're buying all this hardware and putting it in our data centers. And whether that's the hyperscalers, the cloud folks, Google, Microsoft, Amazon, putting it in their data centers, or whether it's companies doing it in their own private clouds, or whatever they want to call it these days, on-prem data centers, everyone is now using machine learning hardware in the data center.
Yep. And NVIDIA is selling it for very, very, very healthy gross margins, Apple level gross
margins. Yes, exactly. So speaking of the data center, a couple things. One, in... This is so NVIDIA.
In 2018, they actually do change the terms of the user agreements of their consumer cards,
of GeForce cards, that you cannot put them in data centers anymore.
They're like, we really do need to start segmenting a little bit here.
And we know that the enterprises have much more willingness to pay.
And it is worth it.
I mean, you buy these crazy data center cards and they have like twice as many transistors. And
actually, they don't even have video outputs. Like, you can't use the data center GPUs... like, the A100
does not have video out, so they actually can't be used as graphics cards.
Oh yeah, there's a cool Linus Tech Tips video about this where they get a hold
of an A100 somehow, and then they run some benchmarks on it, but they can't actually
drive a game on it. Oh, fascinating. Yeah, so fun. Data center stuff is super high horsepower,
but of course, useless to run a game on because you can't pipe it to a TV or a monitor. But then
it's interesting that
they're sort of artificially doing it the other way around and saying, for those of you who don't
want to spend $30,000 on this and are trying to like make your own little rig at home, your own
little data center rig at home, no, you cannot rack these things. Don't think about going to
Fry's and buying a bunch of GeForces. Ironic because that's how the whole thing started. But
anyway, in 2020, they acquire an Israeli
data center compute company called Mellanox that, I believe, focuses on networking
within the data center. Yep. For about $7 billion, and integrate that into, you know, their ambitions in
building out the data center. And the way to think about what Mellanox enables them to do is
now they're able to have super high bandwidth, super low latency connectivity in the data center between their hardware. So at this point, they've
got NVLink, which is their, it's like the, what does Apple call it? A proprietary interconnect,
or I think AMD calls it the infinity fabric. It's the like super high bandwidth chip to chip
connection. So think about what Mellanox lets them do: it lets them have these extremely high bandwidth switches in the data center to link all their hardware together, because customers are buying solutions from NVIDIA. They're buying big boxes with lots of stuff in
them. You say solutions. I hear gross margin. That's such a great quote. We should like frame
that and put it on the wall of the acquired museum. It is true that acquiring Mellanox not only like enables this, now we have the super high
connectivity thing, but this is what leads to this introduction of this third leg of the stool
of computing for NVIDIA that they talk about now, which is you had your CPU. It's great. It's your
workhorse, you know, it's your general purpose computer. Then there's the GPU, which is really
a GPGPU that they've really beefed up. And they've
really, for the enterprise, for these data centers, they've put tensor cores in it to do the
machine learning specific 4x4x4 matrix multiplication super fast and do that really
well. And they've put all this other non-gaming data center specific AI modules onto these chips
and this hardware. And now what they're saying is,
you've got your CPU, you've got your GPU, now there's a DPU. And this data processing unit
that's kind of born out of the Mellanox stuff is the way that you really efficiently communicate
and transform data within data centers. So the unit of how you think about it... like, the black box
just went from a box on a rack to now you can kind of think of your data center as the black box.
And you can write at a really high abstraction layer.
And then NVIDIA will help handle how things move around the data center.
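Stepping back to the tensor cores mentioned a moment ago: the 4x4x4 matrix math they accelerate can be sketched in plain NumPy. This is just an illustration of the fused multiply-accumulate (D = A·B + C on 4x4 tiles) that a tensor core executes as one hardware instruction; the function name and shapes here are our own, not NVIDIA's actual API.

```python
import numpy as np

def tensor_core_mma(a, b, c):
    """Software sketch of one tensor-core step: D = A @ B + C on 4x4 tiles.

    Real tensor cores do this in mixed precision (e.g. FP16 inputs with
    FP32 accumulation) in a single instruction; this NumPy version just
    illustrates the math.
    """
    assert a.shape == b.shape == c.shape == (4, 4)
    return a.astype(np.float32) @ b.astype(np.float32) + c

# A larger matrix multiply gets tiled into many of these 4x4 steps.
a = np.eye(4, dtype=np.float16)              # identity tile
b = np.full((4, 4), 2.0, dtype=np.float16)   # tile of 2s
c = np.zeros((4, 4), dtype=np.float32)       # accumulator starts at zero
d = tensor_core_mma(a, b, c)
print(d)  # identity times b plus zero, so d equals b
```

The reason this matters for deep learning is that both training and inference reduce almost entirely to large matrix multiplies, which tile down into exactly this primitive.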
All right, listeners, our next sponsor is a new friend of the show, Huntress.
Huntress is one of the fastest growing and most loved cybersecurity
companies today. It's purpose built for small to midsize businesses and provides enterprise
grade security with the technology, services and expertise needed to protect you. They offer a
revolutionary approach to manage cybersecurity that isn't only about tech. It's about real people
providing real defense around the clock.
So how does it work? Well, you probably already know this, but it has become pretty trivial for
an entry-level hacker to buy access and data about compromised businesses. This means cyber
criminal activity towards small and medium businesses is at an all-time high. So Huntress
created a full managed security platform for their customers to guard from these
threats. This includes endpoint detection and response, identity threat detection response,
security awareness training, and a revolutionary security information and event management product
that actually just got launched. Essentially, it is the full suite of great software that you need to secure your business, plus 24-7 monitoring by an elite team of human threat hunters in a security operations center
to stop attacks that software-only solutions could sometimes miss.
Huntress is democratizing security, particularly cybersecurity, by taking security techniques
that were historically only available to large enterprises and bringing them to businesses with as few as 10, 100, or 1,000 employees at price points that make sense for them.
In fact, it's pretty wild.
There are over 125,000 businesses now using Huntress, and they rave about it from the hilltops.
They were voted by customers in the G2 rankings as the industry leader in
endpoint detection and response for the eighth consecutive season and the industry leader in
managed detection and response again this summer. Yep. So if you want cutting-edge cybersecurity
solutions backed by a 24-7 team of experts who monitor, investigate, and respond to threats with
unmatched precision, head on over to huntress.com slash
acquired or click the link in the show notes. Our huge thanks to Huntress.
Okay, so I said one more thing on the data center. Yes. That one more thing is, it's easy to forget
now. I know because we've just been deep on this. NVIDIA was going to buy ARM. Do you remember this?
Yes, they were. And in fact, this is going to be like a corporate
communications nightmare. Everyone out there, Jensen, their IR person, different tech people
who are being interviewed on various podcasts, were talking about the whole strategy and how
excited they are to own ARM and how NVIDIA is going to be, you know, it's good on its own,
but it could be so much better if we had ARM and here's all the cool stuff we're going to do with
it. And then it doesn't happen. They were talking about it like it was a done deal.
And now you've got dozens of hours of people talking about the strategy. So you're almost
like, it's funny that now after listening to all that, I'm sort of like disappointed
with NVIDIA's ambition on its own without having the strategic assets of Arm.
Yeah. We should revisit ARM at some point. We did do the SoftBank acquiring ARM episode
years and years ago now. But, you know, you think ARM... like, they are a CPU architecture company
whose primary use case is mobile and smartphones, right? So, like, everything that Intel screwed up on
back in the misguided mobile era. Now they're going and buying like the most
important company in that space. You know, and it's interesting, like again, in the Ben Thompson
interview, Jensen talks all about this and maybe this is just justifying in retrospect, but I don't
think so. He's like, look, it was about the data center. Yeah. Like everything arm does is like,
great. And that's fine. But like, we want to own the data center. When we say we want to own the
data center, we want to own everything in the data center. And we think ARM chips, ARM CPUs can be really a really important part of that. ARM is
not focusing right now enough on that. Why would they? Their core market is mobile. We want them
to do that. We think there's a huge opportunity. We wanted to own them and do that. And indeed,
this year, NVIDIA announced they are making a data center CPU, an ARM-based
data center CPU called Grace to go with the new Hopper architecture for their latest GPU.
So there's Grace and Hopper.
Of course, the rear admiral, Grace Hopper, I think.
I think that's right.
Yeah, she was in the Navy.
She was a great computer scientist and pioneer.
So yeah, like, the data center. It's big.
It's interesting. So the objectors to that acquisition, and it's a good objection,
and this is ultimately, I think, why they abandoned it, because of the regulatory pressure on this: ARM's business is simple. They make the IP, so you can license one of two
things from them. You can license the instruction set. So even Apple, who designs their own chips,
is licensing the ARM instruction set. And so in order to use that, I don't know what it actually
is, 20 keywords or so that can get compiled to assembly language to run on whatever the chip is,
you know, if you want to use these instructions, you have to license it from ARM, great.
And if you don't want to be Apple and you don't want to go build your own chips or you don't
want to be NVIDIA or whatever, but you want to use that instruction set, you can also license
these off-the-shelf chip designs from us.
And we will never manufacture any of them.
But you take one of these two things you license from us, you have someone like TSMC make them,
great, now you're a fabless semiconductor company.
And they sell to everyone.
And so, of course, the regulatory body is going to step in and be like, wait, wait, wait.
So NVIDIA, you're a fabless chip company. You're a vertically integrated business model. Are you going to stop
allowing ARM licenses to other people? And NVIDIA goes, oh, no, no, no, no. Of course we would never
do that. Over time, they might do some stuff like that. But the thing that they were sort of like,
which is believable, beating the drum on that the strategy was going to be, is right now our
whole business's strategy is that CUDA and everything built on top of it, our whole software
services ecosystem is just for our hardware. And how cool would it be if you could use that stuff
on ARM designed IP, either just using the ISA or also using the actual designs that people
license from them,
how cool would it be if, because we were one company, we were able to make all of that stuff
available for ARM chips as well? Plausible, interesting, but no surprise at all that they
face too much regulatory pressure to go through with this. No, but clearly that idea rattled
around in Jensen's head a bunch, and in NVIDIA's head. Because, um... well, let's catch
us up to today. So they just did GTC at the end of March, the big GPU developer
conference that they do every year, that they started in 2009 as part of building the whole
CUDA ecosystem. I mean, it's so freaking impressive now. There are now 3 million registered CUDA developers,
450 separate SDKs and models for CUDA. They announced 60 new ones at this GTC. We talked
about the next generation GPU architecture with Hopper and then the Grace CPU to go along with it.
I think Hopper, I could be wrong on this. I think Hopper is going to be
the world's first four nanometer process chip using TSMC's new four nanometer process, which is,
I think that's right. Amazing. They talked a lot about Omniverse. We're going to talk about Omniverse
in a second, but you mentioned this licensing thing. They usually do their investor day,
their analyst day at the same time as GTC. And in the analyst day,
Jensen gets up there. It's just so funny. I've been going through the whole history of this now,
of looking for a market, trying to find some market of any size. And he's like,
we are targeting a trillion dollar market. He's like a startup raising a seed round,
walking in with a pitch deck. We'll put this graphic up on the screen for those watching the video. It's an articulation of what the segments are of this trillion dollar addressable opportunity that
NVIDIA has in front of it. My view of this is if their stock price wasn't what it was,
there's no way that they would try to be making this claim that they're going after a trillion
dollar market. I think it's squishy.
Oh, there's a lot of squish in there.
But the fact that they're valued today, I mean, what's their market cap right now?
Something like half a trillion dollars.
They need to sort of justify that unless they are willing to have it go down.
And so they need to come up with a story about how they're going after this ginormous opportunity, which maybe they are, but it leads to things like an investor day
presentation of let us tell you about our trillion dollar opportunity ahead. And the way that they
actually articulate it is we are going to serve customers that represent a hundred trillion dollar
opportunity, and we will be able to capture about one percent of that.
God, it's just like a freaking seed company pitch deck.
If we just get one percent of the market.
Well, that's the thing. We're going to talk about this in narratives in a minute,
but this is a generational company. This is unbelievable. This is amazing. There's so much
to admire here. This company did what, like 20 something billion in revenue last year and is
worth half a trillion
dollars? They did $27 billion last year in revenue. Google AdWords revenue in the fourth
quarter of 2021 was $43 billion. Google as a whole did $257 billion in revenue. So you got to believe
if you're an NVIDIA shareholder.
Right. They're the eighth largest company in the world by market cap, but these revenue numbers are in a different order of magnitude.
You've got to believe it's on the come.
Yeah, you do. I mean, NVIDIA has literally three times the price-to-sales ratio, or price-to-revenue,
of Apple, and nearly 2x Microsoft's. And that's on revenue. I mean,
fortunately, this NVIDIA story is not speculative in the way that an early stage startup is
speculative. Even if you think it's overvalued, it is still a very cash generative business.
Yes.
They generate $8 billion of free cash flow every year. So I think they're sitting on $21 billion
in cash
because the last few years have been very cash generative,
very suddenly for them.
So the takeaway there is by any metric,
price of sales, price earnings, all that,
they're much more richly valued
than an Apple or Microsoft or these FANG companies.
But it is, you know, extremely profitable business,
even on an operating profits perspective.
Well, you sold enough of that enterprise data center goodness and you can make some money.
It's crazy. They now have a 66% gross margin. So that illustrates to me how seriously differentiated
they are and how much of a moat they have versus competitors in order to price with that kind of
margin. Because think back, we'll put it up on the screen here, but back in 99, they had a gross margin of 30% on their graphics chips.
And then in 2014, they broke the 50% mark.
And then today, and this slide really illustrates it,
it's architecture, systems, data center, CUDA, CUDA-X.
It's like the whole stack of stuff that they sell as a solution
and then sort of all bundled together.
And bundle is the right word. I think they get great economics because they're bundling so much stuff together. It's a 66% gross margin business now. And this ties back to what we were talking about a minute ago with ARM and the licensing. So at the analyst day around GTC this year,
they say that they're going to start licensing a lot of the software that they make separately,
licensing it separate from the hardware, like CUDA. And there's a quote from Jensen here. The important thing about our software is that it's built on top of our platform.
It means that it activates all of NVIDIA's hardware chips and system platforms.
And secondarily, the software that we do are industry-defining software.
So we've now finally produced a product that an enterprise can license.
They've been asking for it, and the reason for that is because they can't just go to open source
and download all the stuff and make it work for their enterprise. No more than they could go to
Linux, download open source software and run a multi-billion dollar company with it. You know,
when you were, we were joking a few minutes ago about you say solution and I see margin, you know,
yeah, like open source software companies have become big for this reason, you know,
data bricks, confluent, Elastic.
These are big companies with big revenue based on open source
because enterprises are like, oh, I want that software.
But they're not just going to go get it themselves. If you're JP Morgan,
you're not going to go to GitHub and be like, great, I got it now.
You need solutions.
So to Jensen and NVIDIA, they see this as an opportunity to...
I'm sure this isn't going to be cannibalizing hardware customers for them. I think this is going to be incremental. But when your whole model is selling integrated solutions, and then you decide to start selling
that software and services a la carte, it's a strategy conflict. It's your classic vertical
versus horizontal problem, unless you are good at segmentation. And that's sort of what NVIDIA
is doing here, which is what they're saying, well, we're only going to license it to people
that there's no way that they would have just bought the hardware and gotten all this stuff
for free anyway. So if we don't think it's going to cannibalize and they're a completely different
segment and we can do things in pricing and distribution channel and terms of service that
clearly walls off that segment, then we can behave in a completely different way to that segment.
Yeah, and get further returns on our assets that we've
generated. Yep. It is a little Tim Cook though in, you know, Tim Cook beating the services
narrative drum. I mean, it is kind of, you hear public company CEO who has a high market cap and
everyone's asking where the next phase of growth is going to come from and saying,
we're going to sell services and look at this growing business line of licensing that we have.
Oh my goodness. But who else is going to do it wearing a leather jacket?
That is a great point. It's a great point. Frankly... Elon.
But well, we'll talk about cars in a second. Let's hold the Elon.
Okay. So a few other things just to talk about the business today that I think are important to know,
just as you sort of like think about, sort of have a mental model for what NVIDIA is.
It's about 20,000 employees. We mentioned they did $27 billion in revenue last year. We talked about this very high revenue multiple or earnings multiple or however you want to frame
it relative to FANG companies. They're growing much faster than Apple, Microsoft, Google. They're growing at 60% a year. This is a 30-year-old
company that grew 60% in revenue last year. If you're not used to wrapping your mind around that:
startups double and triple, but only in the first five years that they exist. Google has had this amazing
run where they're still growing at 40%.
Microsoft went from 10% to 20% over the last decade. Again, amazing they're accelerating,
but like, NVIDIA is growing at 60%. I don't care what your discount rate is,
having 60% growth in your DCF model versus 20 or 40 will get you a lot more multiple.
Inflation be damned.
Inflation be damned. Inflation be damned.
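Ben's point about growth rates and discount rates can be made concrete with a toy discounted cash flow sketch. Every number here (the 10% discount rate, 3% terminal growth, 10-year explicit horizon) is an illustrative assumption of ours, not NVIDIA's actuals; the point is simply how violently the implied multiple swings with the growth rate.

```python
def dcf_multiple(growth, discount=0.10, terminal_growth=0.03, years=10):
    """Toy DCF: present value of $1 of current free cash flow, i.e. an
    implied price-to-FCF multiple. All inputs are illustrative."""
    fcf, pv = 1.0, 0.0
    for year in range(1, years + 1):
        fcf *= 1 + growth                       # grow cash flow
        pv += fcf / (1 + discount) ** year      # discount it back
    # Gordon growth terminal value on the final-year cash flow
    terminal = fcf * (1 + terminal_growth) / (discount - terminal_growth)
    return pv + terminal / (1 + discount) ** years

for g in (0.20, 0.40, 0.60):
    print(f"{g:.0%} growth -> {dcf_multiple(g):.0f}x multiple")
```

Run it and the 60% grower commands a multiple an order of magnitude above the 20% grower, which is the whole mechanic behind NVIDIA trading so much richer than the FANG companies.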
Okay, a couple other things about specific segments of the business
that I think are pretty interesting.
So they have not slept on gaming.
Like we keep beating this NVIDIA data center enterprise machine learning argument.
Yeah, we haven't even talked about ray tracing.
Right, yeah, this RTX set of cards that they came out with.
The fact that they can do ray tracing in real time,
holy crap, for anyone who's looking for sort of a fun dive
on how graphics works,
go to the Wikipedia page for ray tracing.
It's very cool.
You model where all the light sources are coming from,
where all the paths would go in 3D.
The fact that NVIDIA can render that in real time
at 60 frames a second or whatever while
you're playing a video game is nuts. And one of the ways that they do that, they invented this
new technology that's extremely cool. It's called DLSS, Deep Learning Super Sampling. And this,
I think, is where NVIDIA really shines, bringing machine learning stuff and gaming stuff together where they basically
have faced this problem of: well, we either could render stuff at low resolution with more frames,
because we can only render so much per amount of time, or we could render really high resolution
stuff with fewer frames. And nobody likes fewer frames, but everyone likes high resolution.
So what if we could cheat death?
And what if we could get high resolution and high frame rate?
And they're sitting around thinking, how on earth could we do that?
And they're like, you know what?
Maybe this 15-year bet that we've been making on deep learning can help us out.
And what they discovered here and invented in DLSS,
and AMD does have a competitor
to this, it's a similar sort of idea, but this DLSS concept is totally amazing. So what they
basically do is they say, well, it's very likely that you can infer what a pixel is going to be
based on the pixels around it. It's also pretty likely you can infer what a pixel is going to be
based on what it was in the previous frames.
And so let's actually render it
at a slightly lower resolution
so we can bump up the frame rate.
And then when we're outputting it to screen,
we will use deep learning to artificially...
At the final stage of the graphics pipeline.
Yes.
Yeah, that's awesome.
It's really cool.
And when you watch the side-by-side on all these YouTube videos, it looks amazing.
I mean, it does involve really tight embedded development with the game developers.
They have to sort of do stuff to make it DLSS enabled, but it just looks phenomenal. And it's so cool that when
you're looking at this 4K or even 8K output of a game at full frame rate, you're like, whoa,
in the middle of the graphics pipeline, this was not this resolution. And then they magically
upscaled it. It's basically making the enhance joke a real thing. That's so awesome. I'm
remembering back to the Riva 128 in the beginning
of when they went to game developers and they were like, yeah, yeah, yeah. All the blend modes
in DirectX, you know, you don't need all of them. Just use these. Yes, exactly. Exactly. And they
have the power to do it. I mean, they have the stick and the carrot with game developers to do
it. Oh, I mean, at this point, every game developer is going to make their games optimized
for the latest NVIDIA hardware.
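The two information sources Ben describes for DLSS, neighboring pixels and prior frames, can be sketched in a few lines of NumPy. To be clear, this is a crude stand-in: real DLSS feeds these inputs (plus motion vectors) through a trained neural network, whereas this toy just blends a naive 2x spatial upsample with the previous full-resolution frame.

```python
import numpy as np

def upsample_with_history(low_res, prev_high_res, weight=0.5):
    """Toy DLSS-style upscale: combine spatial and temporal information.

    Blends a nearest-neighbor 2x upsample of the current low-res frame
    (spatial inference from neighboring pixels) with the previous
    high-res frame (temporal inference from prior frames). DLSS proper
    replaces this fixed blend with a learned model.
    """
    # Nearest-neighbor 2x upsample: each low-res pixel fills a 2x2 block.
    spatial = np.repeat(np.repeat(low_res, 2, axis=0), 2, axis=1)
    return weight * spatial + (1 - weight) * prev_high_res

low = np.arange(4.0).reshape(2, 2)   # current frame, rendered small
prev = np.zeros((4, 4))              # last frame at full resolution
high = upsample_with_history(low, prev)
print(high.shape)  # (4, 4): full-resolution output from a 2x2 render
```

The win is exactly the one described above: the GPU only pays to render the small frame, and the output still lands at full resolution and full frame rate.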
The other thing that is funny
that's within the gaming segment,
because they didn't want to create
a new segment for it, is crypto.
So because they have poor visibility into it
and before they weren't liking the fact
that it was actually reducing
the amount of cards that were available
to the retail channel
for their gamers to go and
buy, what they did was they artificially crippled the card to make it worse at crypto mining.
And then they came out with a dedicated crypto mining card.
Yes. And so like the charitable PR thing from Nvidia is, hey, you know, we really,
we love gamers and we didn't want to make it so that the gamers couldn't get access to,
you know, all the cards they want.
But really they're like, hmm, people are just straight up performing an arbitrage by crypto mining on these cards.
Let's make that more expensive on the cheap cards and let's make dedicated crypto hardware for them to buy to do those.
Let's make that our arbitrage.
Yes.
Your arbitrage is my opportunity. So magically, their revenue is more predictable now,
and they get to make more money
because much like their sort of
terms of service data center thing,
they terms of serviced their way
to being able to create some segmentation
and thus more profitability.
Love it.
Evil, evil genius laugh.
The last thing that you should know
about NVIDIA's gaming segment is this really weird concept of
add-in board partners. So we've been oversimplifying in this whole episode saying,
you go and you buy your RTX 3090 Ti at the store and you run your favorite game on it.
But actually, you're not buying that from NVIDIA the vast majority of the time.
You are going to some third-party partner,
Asus, MSI, Zotac is one.
There's also a bunch of really low-end ones as well
who NVIDIA sells the cards to,
and those people install the cooling and the branding
and all this stuff on top of it.
And you buy it from them.
And it's really weird to me that NVIDIA does that.
I love how consumer gaming graphics cards have become the modern day equivalent of a hot rod.
Oh, dude, as you can imagine, for this episode, I've been hanging out a lot on the NVIDIA subreddit.
And it's not actually about NVIDIA or NVIDIA the company or NVIDIA the strategy.
It's like, show off your sick photos of your glowing rig, which is pretty funny.
But like, it feels like a remnant of old NVIDIA that they still do this.
Like, they do make something called the Founder's Edition card, and it's basically a reference
design where you can buy it from NVIDIA directly.
But I don't think the vast majority of their sales actually come from that.
Oh, it's like one of the Android phones that Google makes, Pixel.
Yeah, it's exactly like that, the Pixel.
It's exactly what it is, yeah.
So I suspect that shifts more over time.
I can't imagine a company that wants as much control as NVIDIA does,
loves the add-in board partner thing, but they've built a business on it.
And so they're not really willing to cannibalize and alienate, but I bet if they had their way and
they're becoming a company that can more often have their way, they'll find a way
to kind of just go more direct. Makes sense. Two other things I want to talk about. One is
automotive. So this segment has been like very small from a revenue perspective for a long time
and seems to not have a lot of growth. But Jensen says in his pitch deck, it's going to be a $300 billion part of the TAM.
And I think right now it's something like, is it a billion dollars in revenue? I think it's
like a billion dollars, but it doesn't really grow. I don't even know if it's that much.
Don't quote me on that. So here's what's going on with automotive, which is pretty interesting.
What NVIDIA used to do for automotive is what everyone used to do for automotive,
which is make fairly commodity components that automakers buy and then put in there.
Every technology company has had their fanciful attempt to try to
create a meaningfully differentiated experience in the car.
All have failed.
You think about Microsoft and the Ford Sync.
Ford Sync.
Oh, wow.
You think about CarPlay, kind of, maybe a little bit works. And the only company that's really
been successful has been Tesla at starting a completely new car company. That's the only way
they're able to provide a meaningful, differentiated experience. NVIDIA is, my perception of what
they're doing is they're pivoting this business line, this like
flat, boring, undifferentiated business line to say, maybe EVs, electric vehicles, and autonomous
driving is a way to break in and create a differentiated experience, even if we're not
going to make our own cars. And so I think what's really happening here is when you hear them talk about
automotive now, and they've got this very fancy name for it. It's the something drive platform.
Oh, Hyperion Drive. Is that it? Something like that?
Something like that. But dealing with NVIDIA's product naming is maddening.
But this drive platform, it kind of feels like they're making the full EV, AV hardware software stack,
except for the metal and glass and wheels. And then going to car companies and saying, look,
you don't know how to do any of this. This thing that you need to make is basically a battery and
a bunch of GPUs and cameras on wheels. And like you're issuing these press releases saying you're
going in that direction, but none of this is the core competency of your company except the sales and distribution.
So like, what can we do here?
And if NVIDIA is successful in this market, it'll basically look like, you know, an NVIDIA computer, full software hardware with a car chassis around it that is branded by whatever the car company is.
Like the Android market.
Yeah. And I think we will see
if the shift to autonomous vehicles
is A, real, B, near-term,
and C, enough of a dislocation in that market
to make it so that someone like NVIDIA,
a component supplier,
actually can get to own a bunch of that value chain
versus the auto manufacturer
kind of forever stubbornly
getting to keep all of it and control the experience. Yeah. Which to do a mini bull and
bear on this here before we get to the broader on the company, you know, the bull case for that is
we were again, friend of the show, Jeremy messaging within Slack. Lotus is one of their partners.
Is Lotus going to go
build autonomous driving software? Like, I don't think so. Ferrari? No, you know.
Not at all. They're going to be NVIDIA cars, effectively.
Yeah.
Okay, last segment thing I want to talk about is how we opened the show,
talking about the NVIDIA Omniverse. And this is not Omniverse like Metaverse.
It is similar in that it's kind of a 3D simulation type thing,
but it's not an open world that you wander around in the same way that Meta is talking about,
or that you think about in Fortnite or something like that.
What they mean by Omniverse is pretty interesting.
So a good example of it is this Earth 2, this digital twin
of Earth that they're creating that has these really sophisticated climate models that they're
running that basically is a proof of concept to show enterprises who want to license this platform,
we can do super realistic simulations of anything that's important to you. And what their pitch is to the
enterprise is, hey, you've got something. Let's say it is a bunch of robots that need to wander
around your warehouse to pick and pack, if it's Amazon, who actually, Amazon is a customer. They
showcase Amazon in all their fancy videos. And they say, you're going to be using our hardware
and software to train models,
to figure out the routes for these things that are driving around your data centers.
You're going to be licensing certainly some of our hardware to actually do the inference
to put on the robots that are driving around.
When you want to make a tweak to a model, you're not just going to deploy those to all
the robots.
You kind of want to run that in the omniverse first.
And then when it's working, then you want to deploy it in the real world. And their omniverse pitch is basically,
it's an enterprise solution that you can license from us, where anytime you're going to change
anything in any of your real world assets, first model it in the omniverse. And I think that's
a really powerful, like, I believe in the future of that in a big way,
because I think now that we have the compute, the ability to gather the data and the ability to
actually, you know, run these simulations in a way that has an efficient way of running it and
a good user interface to understand the data, people are going to stop testing in production
with real-world assets, and everything's going to be modeled in the omniverse first before rolling out. This is what an enterprise metaverse is going to be. This is not
designed for humans. Humans may interact with this, there will be UI, you'll be able to be part of it.
But the purpose of this is for simulating applications, and most of it, I think, is going to run with no humans there.
Yep. Pretty crazy.
Yeah. It's a good idea. Sounds like a good idea.
All right. You want to talk bear and bull case on the company?
Let's do it. Analysis.
So, I mean, they paint the bull case for us when they say there's a $100 trillion future,
we're going to capture 1% of it. There's $300 billion from automotive. Here's the four or five segments that add up to a trillion dollars of opportunity. Sure, that's like a very neat way
with a bow on it and a very wishy-washy, hand-wavy way of articulating it. So the question sort of
becomes, where's AMD fall in all this? They're a legitimate second place competitor for high-end
gaming graphics, and I think
will continue to be.
That feels like a place where these two are going to keep going head to head.
The bear case is that there's a tick-tock rather than a durable competitive advantage for NVIDIA,
but most high-end games you can play on both AMD and NVIDIA hardware at this point.
The question for the data center is, is the future these general purpose GPUs that NVIDIA
continues to modify the definition of GPU to include specialized functions as well,
all this other stuff they're putting in their hardware?
Or is there someone else who is coming along with a completely different approach to accelerated
computing, where they're accelerating workloads off the GPU onto something new, like a Cerebras or like a
Graphcore, that is going to eat their lunch in the enterprise AI data center market?
That's an open question. You know, it's interesting. People have been talking about
that for a while.
The other big bear case that people have been talking about, again, for a while now, is the big, big customers of NVIDIA that are paying them a lot of money.
The Teslas, the Googles, the Facebooks, the Amazons, the Apples.
And not just paying them a lot of money and getting,
you know, assets of value out of that, they're paying high gross margin dollars to NVIDIA
for what they're getting, that those companies are going to want to say, you know,
it's not that hard to design our own silicon to bring all this stuff in house. We can tune it to
exactly our use cases, sort of similar to the Cerebras and Graphcore
bear case on NVIDIA. I think in both of these cases, you know, it hasn't happened yet.
Well, there have been a lot of people who have made a lot of noise, but there have been few
that have executed on it. Like, Apple has their own GPUs on the M1s. Tesla is switching, it hasn't
happened yet, but they're switching to their own: for full
self-driving, they're doing their own stuff on the car.
Yep, that is switched on the inference side.
Yes.
On device, yes, that has happened. But look, NVIDIA is probably strong in that,
but I think the real thing to watch is the data center.
And Google is probably the biggest bear case there.
Yeah.
It's interesting to talk about these companies, and particularly Cerebras,
because what they're doing is such a gigantic swing and a totally different take than what everyone else
has done. For folks who haven't sort of followed the company, they're making a chip that's the
size of a dinner plate. Everyone else's chip is like a thumbnail, but they're making a dinner
plate size chip. And you know, the yields on these things kind of suck. So they need all the redundancy on those huge
chips to make it so that... Oh my god, the amount of expense to do that.
Right. And you can put one on a wafer. These wafers are crazy expensive to make.
Wow. So you get poor yields in the wrong places on a wafer and that whole wafer is toast.
Right. So a big part of the design of Cerebras is this sort of redundancy and the ability to
turn off different pieces that aren't working. They draw 60 times as much power. They're way
more expensive. Like, if NVIDIA is going to sell you a $20,000 or $30,000 chip, Cerebras is going to
sell you a $2 million chip to do AI training. And so it is this bet in a big way on hyper-specialized hardware
for enterprises that want to do these very specific AI workloads. And it's deployed in
these beta sites in research labs right now. And not there yet, but it'll be very interesting to
watch if they're able to meaningfully compete for what everyone thinks will be a very large market,
these enterprise AI workloads. I mentioned Google that made a bunch of noise about making their own
silicon in the data center and then stayed the course and stayed really serious about it
with their TPUs. Their business model is different. So nobody knows what the bill of materials is to create a TPU.
Nobody knows really what they cost to run. They don't retail them. They're only available in
Google Cloud. And so Google is sort of counter-positioned against NVIDIA here,
where they're saying, we want to differentiate Google Cloud with this offering that depending
on your workload, it might be much cheaper for
you to use TPUs with us than for you to use NVIDIA hardware with us or anyone else.
And they're probably willing to eat margin on that in order to grow Google Cloud's share in
the cloud market. So it's kind of the Android strategy, but run in the data center.
One thing we haven't mentioned, but we should, is cloud is also part of the NVIDIA story
too.
Like you can get NVIDIA GPUs in AWS and Azure and Google Cloud, and that is part of the
growth story for NVIDIA too.
And NVIDIA is starting their own cloud.
You can get direct from NVIDIA cloud-based GPUs.
Data center GPUs.
Interesting.
Yeah.
It'll be very interesting to see how this all shakes out with NVIDIA, the startups,
and with Google.
I mean, all that said, though, look, NVIDIA is very, very, very richly
valued on a valuation basis right now.
Very.
With another very in there.
It depends if you think their growth will continue.
Are they a 60% growing company year over year over year for a while?
Then they're not richly valued.
But if you think it's a COVID hiccup or a crypto hiccup.
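[As a rough aside on what that growth assumption implies, here is a minimal sketch of the compounding. The 60% rate is the figure floated in the conversation; the year counts are arbitrary illustrations, not anything from the episode.]

```python
# Illustrative only: what sustained 60% year-over-year growth compounds to.
# growth = the 60% figure mentioned above; years chosen arbitrarily.
growth = 0.60

for years in (3, 5, 7):
    multiple = (1 + growth) ** years
    print(f"{years} years at 60%/yr -> {multiple:.1f}x revenue")
# 3 years  -> 4.1x
# 5 years  -> 10.5x
# 7 years  -> 26.8x
```

[So a valuation that looks rich today can look ordinary in a few years if, and only if, that growth rate actually holds.]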
But to the bull bear case and both the startups and the big tech companies doing this stuff in-house, it's not so easy.
You know, like, yeah, Facebook and Tesla and Google and Amazon and Apple are
capable of doing a lot, but we just told this whole story. This is 15 years of CUDA
and the hardware underneath it and the libraries on top of it that NVIDIA has built to go recreate
that and surpass it on your own is such an enormous, enormous bite to take.
Yes. And if you're
not a horizontal player and you're a vertical player, you better believe that the pot of gold
at the end is worth it for you, for this massive amount of cost to create what NVIDIA has created.
Yep. Like, NVIDIA has the benefit of getting to serve every customer. If you're Google, and their strategy is what I think it is,
of not retailing TPUs at any point,
then your customer is only yourself,
so you're constrained by the amount of people
you can get to use Google Cloud.
And at least with Google, they have Google Cloud
that they can sell it through.
Yep. Power.
Ooh, power.
So the way I want to do this section,
because in our NVIDIA episode we covered the first 13 years of the company,
we talked a lot about what does their power look like up to 2006.
And now I want to talk about what does their power look like today.
What is the thing that they have that enables them
to have a sustainable competitive advantage
and continue to maintain pricing power over their nearest competitor, be it Google, Cerebras in the enterprise, or AMD in gaming. Yep. And just to
enumerate the powers again, as we always do, counter-positioning, scale economies, switching
costs, network economies, process power, branding, and cornered resource. So there are definitely scale economies.
The whole CUDA investment.
Yes.
Not at first, but definitely now
is predicated on being able to amortize
that a thousand plus employees spend
over the base of the three million developers
and all the people who are buying the hardware
to use what those developers create.
This is the whole reason we spent 20 minutes talking about if you were going to run this
playbook, you needed an enormous market to justify the capex you were going to put in.
Right. So very few other players have access to the capital and the market that NVIDIA does to
make this type of investment. So they're basically just competing against AMD for this.
Totally agree.
Scale economies to me is like the biggest one that pops out.
To the extent that you have lock-in to developing on CUDA,
which I think a lot of people really have lock-in on CUDA,
then that's major switching costs.
Yep.
Like if you're going to boot out NVIDIA, that means you're booting out CUDA.
Is CUDA a cornered resource?
Oh, interesting. Maybe. I mean, it only works with NVIDIA hardware.
You could probably make an argument there's process power, or at least there was, somewhere along the way, with
them having the six-month ship cycle advantage. That probably has gone away, since people trade
around the industry a lot, and that wasn't sort of a hard thing for other companies to figure out.
Yeah, I think process power definitely was part of the first instantiation of NVIDIA's
power to the extent it had power. Right. Yeah, I don't know as much today, especially because
TSMC will work with anybody. In fact, TSMC is working with these new startup billion-dollar funded
silicon companies. Yes, they are. Yes. Yeah, it's funny. I actually heard a rumor,
and we can link to it in the show notes, that the Ampere series of chips, which is the one
immediately before the Hopper, the sort of A-series chips, are actually fabbed by Samsung,
who gave them a sweetheart deal.
NVIDIA likes to keep the lore alive around TSMC because they've been this great longtime partner and stuff.
But they do play manufacturers off each other.
I even think that Jensen said something recently,
like Intel has approached us about fabbing some of our chips,
and we are open to the conversation.
Yes, yes, that did happen.
So there was this big cybersecurity hack a couple of months ago by this group Lapsus,
and they stole access to NVIDIA's source code. And actually, Jensen went on Yahoo Finance and
talked about the fact that this happened. I mean, this is a very public incident.
And it's clear from the demands of Lapsus where some
of NVIDIA's power lies, because they demanded two things. They said, one, get rid of the crypto
governors, like, make it so that we can mine. Which may have been a red herring, that might have just
been them trying to look like a bunch of crypto miner people.
Right.
And the other thing they demanded is that NVIDIA open source all of its
drivers and make available its source code. I don't think it was for CUDA. I think it was just
the drivers. But it was very clear that, like, we want you to open up your trade secrets so that
other people can build similar things. And that to me is illustrative of the incredible value and pricing power that NVIDIA gets by owning not only the driver stack, but all of CUDA and how tightly coupled their hardware and software is.
NVIDIA, as we just discussed in our most recent episode with Hamilton and Chenyi, is a platform in my mind.
No doubt about it.
CUDA and NVIDIA and general purpose computing on GPUs as a platform. So
whatever, you know, all of the stew of powers that go into making that, that go into making
Apple, Microsoft, you know, and the like go into NVIDIA.
Yep. I think the stew of powers is the right way to phrase that.
Yes.
Anything else here or you want to move to playbook?
Let's move to playbook. So man, I have, I just wrote down in advance one that is such a big one for me. And I'm biased because I try to think about this in investing, particularly in public
markets investing, but like, man, you really, really want to invest in
whoever is selling the picks and the shovels in a gold rush, the AI, you know, ML deep learning
gold rush, uh, those years, gosh, oh my, 2015 into 2016. Like, duh, you know,
Mark Andreessen saying every startup that comes in here that wants to do AI and deep learning,
and they're all using NVIDIA. Like maybe we should have bought NVIDIA. Like, I don't know
if any one of those startups, any given one is going to succeed, but I'm pretty sure NVIDIA
was going to succeed back then.
Yeah, it's such a good point.
Kicking myself.
One I have is being willing to expand your mission.
So it's funny how Jensen in early days would talk about to enable graphics to be a storytelling
medium.
And of course, this led to the invention of the pixel shader and the idea that everybody
can sort of tell their own visual story their own way, in a social, networked, real-time way. Very cool. And now it's much more that wherever there is a CPU, there is an opportunity to accelerate that CPU, and to provide the best hardware, software, and services solutions to make it so that any computing
workload runs in the most efficient way possible through accelerated computing.
That's pretty different than enable graphics as a storytelling medium.
But also, they need to sell a pretty big story around the TAM that they're going after.
I think there's also something to the whole NVIDIA story, you know, across the whole arc of the company of, you know, it's sort of a cliche thing at this
point in startup land, but so few companies and founders can actually do it. Just not dying.
Yeah. They should have died at least four separate times and they didn't. And part of that was
brilliant strategy. Part of that was things going their
way. But I think a large part of it too was just the company and Jensen, particularly in these most
recent chapters where they're already a public company, just being like, yeah, I'm willing to
just sit here and endure this pain. And I have confidence that we will figure it out. The market
will come. I'm not going to declare game over.
One that I have is, we mentioned at the top of the show,
but the scale of everything involved in machine learning at this point
and anything semiconductors is kind of unfathomable.
You and I mentioned falling down the YouTube rabbit hole with that Asianometry channel,
and I was watching a bunch of stuff on how they make the silicon wafers and my God, floor planning is this just unbelievable exercise at this point in history,
especially with the way that they sort of overlay different designs on top of each other on
different layers of the chip. Yeah. Say more about what floor planning is. I bet a lot of
listeners won't know. So it's funny how they keep appropriating these sort of real world, large scale analogies to
chips. So floor planning, the way that an architect would lay out the 15 rooms in a house
or five rooms in a house or two rooms in a house on a chip is laying out all of the circuitry and
wires on the actual chip itself, except of course there's like 10 million rooms.
And so it's incredibly complex.
And the stat that I was going to bring up, which was just mind bending to think about
is that there are dozens of miles of wiring on a GPU.
Wow. That is mind bending because these things are like, you know, I don't know,
they're less than the size of your palm, right?
Right. And obviously it's not wiring in the way you think about, like, a wire, I'm going to reach
down and pick up my Ethernet cable. But it's wiring in the EUV-etched substrate on-chip.
Exposure is probably the term that I'm looking for here, photolithography exposure.
But it is just so tiny. I mean, you can say four nanometers all you want, David,
but that won't register with
me how freaking tiny that is until you're sort of faced with the reality of dozens of miles of
quote unquote wires on this chip. Yeah. It's not like to me that registers as like, oh yeah,
that's like a decal I put on my hot rod. Four nanometers. I got the S version. But yeah,
like that's what that means. Okay. Here's one that I had that we actually even talked about, which I think will be fun.
So I generated a CapEx graph.
Ooh, fun.
We'll show it on screen here for those watching on video. Obviously, there's a very high-looking
line for Amazon because building data centers and fulfillment centers is very expensive,
especially in the last couple of years when they're doing this massive build out. But imagine without that line for a minute,
Nvidia only has a billion dollars of CapEx per year.
And this is relative for people listening on audio relative to a bunch of other,
you know, Fang type companies.
Yeah. So Apple has $10 billion of spend on capital expenditures per year. Microsoft and Google have $25 billion.
TSMC, who makes the chips, has $30 billion. What a great capital-efficient business that
NVIDIA has on their hands, only spending a billion dollars a year in CapEx. It's like
it's a software business. And it basically is. Well, it is, right? Like TSMC does the fabbing,
NVIDIA makes software and IP.
Yep. So here, this is the best graph for you to very clearly see the magic of the fabless business model that Morris Chang was so gracious to invent when he grew TSMC.
Thank you, Morris.
Another one that I wanted to point out, it's a freaking hardware company. I know they're not a
hardware company, but they're a hardware company with 37% operating margins. So this is even better than
Apple. And for non-finance folks, operating margins, so we talked about their 66% gross
margin. That's like unit economics. But that doesn't account for all the headcount and the
leases and just all the fixed costs in running the business, even after you subtract all that out,
37% of every dollar that comes in
gets to be kept by Nvidia shareholders.
It's a really, really, really cash generative business.
And so if they can continue to scale
and can keep these operating margins
or even improve them
because they think they can improve them,
that's really impressive.
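[To make the gross versus operating margin distinction above concrete, a minimal sketch with illustrative numbers. Only the 66% gross margin and 37% operating margin figures come from the discussion; the revenue and cost values below are made up to match them.]

```python
def gross_margin(revenue, cogs):
    # Gross margin: what's left after direct cost of goods sold ("unit economics").
    return (revenue - cogs) / revenue

def operating_margin(revenue, cogs, opex):
    # Operating margin: also subtracts fixed costs like headcount and leases.
    return (revenue - cogs - opex) / revenue

# Hypothetical figures tuned to the margins quoted in the episode.
revenue, cogs, opex = 100.0, 34.0, 29.0
print(f"gross margin:     {gross_margin(revenue, cogs):.0%}")        # 66%
print(f"operating margin: {operating_margin(revenue, cogs, opex):.0%}")  # 37%
```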
Wow, I didn't realize that's better than Apple's.
Yeah. I think it's not as good as like Facebook and Google because they just run these like-
Well, those are digital monopolies. Like, come on.
Basically zero costs, digital monopolies in some of the largest markets in history,
but it's still very good. We want to thank our longtime friend of the show, Vanta, the leading trust management
platform.
Vanta, of course, automates your security reviews and compliance efforts.
So frameworks like SOC 2, ISO 27001, GDPR, and HIPAA compliance and monitoring, Vanta
takes care of these otherwise incredibly time and resource draining efforts for your organization
and makes them fast and simple.
Yeah, Vanta is the perfect example of the quote that we talk about all the time here on Acquired,
Jeff Bezos, his idea that a company should only focus on what actually makes your beer
taste better, i.e. spend your time and resources only on what's actually going to move the needle
for your product and your customers and outsource everything else that doesn't.
Every company needs compliance and trust with their vendors and customers. It plays a major role in enabling revenue because customers
and partners demand it, but yet it adds zero flavor to your actual product. Vanta takes care of all of
it for you. No more spreadsheets, no fragmented tools, no manual reviews to cobble together your
security and compliance requirements. It is one single software pane of glass that connects to
all of your services via
APIs and eliminates countless hours of work for your organization. There are now AI capabilities
to make this even more powerful, and they even integrate with over 300 external tools. Plus,
they let customers build private integrations with their internal systems. And perhaps most
importantly, your security reviews are now real-time instead of static, so you can monitor and share with your customers and partners to give them added confidence.
So whether you're a startup or a large enterprise, and your company is ready to automate compliance
and streamline security reviews like Vanta's 7,000 customers around the globe, and go back
to making your beer taste better, head on over to vanta.com slash acquired and just
tell them that Ben and David sent you.
And thanks to friend of
the show, Christina, Vanta's CEO, all acquired listeners get $1,000 of free credit, vanta.com
slash acquired. Okay, grading. So I think the way to do this one, David, is what's the A plus case?
What's the C case? What's the F case? I think so. And there's sort of an interesting way
to do this one because you could do it from a shareholder perspective where you have to evaluate
it based on where it's trading today and sort of like what needs to be true in order to have a
A plus investment starting today, that sort of thing. You mean like a Michael Mauboussin
expectations investing style? Yes, exactly. Or you could sort of close your eyes to the price and say, let's just look at the
company. If you're Jensen, what do you feel would be an A-plus scenario for the company,
regardless of the investment case? I kind of think you have to do the first one, though.
I kind of think it's a cop-out to not think about it. Like, what's the bull and bear investment case from here? As we pointed out many times on
the episode, there's a lot you've got to believe to be a bull on NVIDIA at this share price. So what
are they? Well, one big one is that they continue their incredible dominance, and they're, what are they growing,
like 75% or something year over year in the data center.
Yep.
And they just sort of continue to own that market.
I think there's a plausible story there around all the crazy gross margin expansion they've
had from sort of selling solutions rather than, you know, fitting into someone else's
stuff.
I also think with the Mellanox acquisition,
there's a very plausible story around this idea of a data processing unit and around being your
one-stop shop for AI data center hardware. And I think rather than saying like, oh, the upstart
competition will fail, I think you kind of have to say that NVIDIA will find a way to learn from them and then integrate it into their strategy too. Which seems plausible. Yeah,
but they've been very good at changing the definition of GPU over time to mean more and
more robust stuff and accelerate more and more compute workloads. And I think you just have to
kind of bet that because they have the developer attention, because they now have the relationships to sell into the enterprise, they're just going
to continue to be able to do their own innovation, but also fast follow when it makes sense to
redefine GPU as something a little bit heftier and incorporate other pieces of hardware to do
other workloads into it. Yep. I think the question for me on an A-plus outcome
for NVIDIA from the shareholder perspective is, do you need to believe that all the real-world
AI use cases are going to happen? Do you need to believe that some basket, maybe not all of them,
but that some basket of autonomous vehicles, the omniverse, robotics, one or multiple of those
three are going to happen? They're going to be enormous markets and then NVIDIA is going to be
a key player in them. I mean, I think you do because I think that's where all the data center
revenue is coming from is companies that are going after those opportunities. I'm wrestling with whether that is something you have to believe or whether
that's optionality. The reason it would be only optionality, only upside is if the digital AI,
we know that that's a big market. There's no question about that at this point. Is that going
to continue to just get so big? Are we still only
scratching the surface there? How much more AI is going to be baked into all the stuff we do in the
digital world? And will NVIDIA continue to be at the center of that? I don't know. I don't have a
great way to assess how much growth is left there. That is kind of the right question though. Yeah.
They're at an interesting point right now. You know, there was all that early company stuff that we talked about in the first
episode, but at the beginning of this episode, you know, Jensen was really asking you to believe
it's like, hey, we're building this CUDA thing. Just ignore that there's no real use case for
it or market. Now there is a real, real use case and market for it, which is machine learning, deep learning in the digital world.
Undeniable.
He's also pitching now that that will exist in the physical world, too.
Yeah, the A-plus is definitely that it does exist in the physical world, and they are the dominant provider of everything you need to be able to accomplish that.
Yep. And if the real world stuff, you know, these little robots that run around
factory floors and the autonomous vehicles, and if that stuff doesn't materialize,
then yeah, there's no way that it can support the growth that it's been on.
I think that's probably right.
That would be my hunch.
Although saying that, though, does feel like a little bit of
betting against the internet.
I don't know, man. Digital world's pretty
big and it keeps getting bigger.
Yeah, but I think we're saying the same thing.
I think you're saying that these physical experiences
will become more and more intertwined
with your digital experiences.
Yeah.
Autonomous driving
in electric vehicles
is an internet bet. In part, if you want to bet on
the growth of the internet, it'll mean you'll drive less. But it also means that you're just
going to be on the internet when you're driving. Yeah. Yeah. Or when you're in motion in the
physical world. That's actually, that's a bull case for Facebook, right? Is like,
is autonomous vehicles. Because if people are being driven instead of driving,
that's more time to be on Instagram, right?
It's so true.
Okay, what's the failure case?
It's actually quite hard to imagine a failure case of the business in any short order. It's very easy to
imagine a failure case for the stock in short order, if there's a cascading set of events of
people losing faith. I think maybe the failure case is this amazing growth for the past couple of years was pandemic
pull forward. It's so hard for me to imagine that that's like to the degree of a Peloton or a Zoom
or something like that. Right. By the way, I think they're a great company. They just got
everything pulled forward. I don't think NVIDIA got everything
pulled forward. They probably got a decent amount pulled forward. Hard to quantify, hard to know,
but it's the right thing to be thinking about. Yeah. All right. Carvouts. Ooh, Carvouts. I've
got a fun one, small one. Well, a collection of small things. Long-time listeners probably know
one of my favorite, I think my favorite series of books that have been written in the past 10 years is the Expanse series.
Amazing sci-fi, nine books. So great. The ninth book came out last fall. It was just,
even with, like, a newborn, I made time to read this book. That's awesome. Newborn plus Acquired. I was like, I've got to read this. That's how you know. Recently, last month. So the authors have been writing short
stories, companion short stories, alongside the main narrative over the last decade that
they've been doing this. And they released a compendium of all the short stories plus a few
new ones called Memory's Legion, and
it's just really cool. Like, I mean, they're great writers, great short stories to read even if you
don't know anything about the Expanse story. But if you know the whole nine-book saga, then these
just paint little, give you little glimpses into corners and characters that just exist
and you don't question otherwise, but you're like, oh, what's the backstory of that? I've been really enjoying that.
So it's like the Solo or the Fantastic Beasts and Where to Find Them.
Exactly. It's like nine or 10 of those.
Cool. Mine is a physical product. Actually, for the episode we did with Brad Gerstner on Altimeter,
we needed a third camera. And so I went out and bought a Sony RX100, a point and shoot camera,
and recently took it to Disneyland. And I must say, it is so nice to have a point and shoot
camera again. It's like funny how it's gone full circle. I was a DSLR person forever,
and then I got a mirrorless camera, and then I became a mirrorless plus big long zoom lens person.
But it's kind of annoying to lug that around. And then once I downgraded my phone from the massive, awesome iPhone with the 3x zoom,
and I now have the iPhone 13 mini, I think that's what it is, with the two cameras, no zoom lens
is really disappointing. So it's pretty awesome. It fills a sort of spot in my camera lineup to
have a point and shoot with a really long zoom lens on it. And of course, it's not as nice as having a, you know, full-frame
mirrorless with, like, an actual zoom lens, but it really gets the job done. And it's nice to have
that sort of real-feeling, mirrorless-style image that is very clearly from a real camera and
not from a phone. It is slightly more inconvenient to carry, because you kind of need another pocket.
Yeah. I was going to ask, can you put it in your pocket?
Yeah, I put it in a pocket. I don't have to have a sort of like a rapid strap around my
neck, which is nice.
Nice.
So the Sony RX100, great little device. It's like the seventh generation of it.
And they've really refined the industrial design at this point.
That's awesome. That's awesome. I actually just bought
my first camera cube, like a travel camera cube thing, for our, uh, Alpha 7C. Now that we have, uh,
literally, it's for Acquired, for when, after the Altimeter episode, I was like, oh, wow. We've got to do more in person.
Yeah. Yeah. Ben brought his down. I was like, for sure, I'm going to need to bring this somewhere.
These cameras are just, they're so good.
They're so good.
All right, listeners, thank you so much for listening.
You should come chat about this episode with us in the Slack.
There's 11,000 other smart members of the acquired community, just like you.
And if you want more acquired content
after this, and you are all caught up, go check out our LP show by searching acquired LP show
in any podcast player. Hear us interview Nick and Lauren from TrovaTrip most recently.
And we have a job board acquired.fm slash jobs. Find your dream job curated just by us,
the fine folks at the Acquired Podcast.
And we will see you next time.
We'll see you next time.
Is it you? Is it you? Is it you?
Who got the truth now?