Closing Bell - CNBC Special Report: Nvidia CEO Jensen Huang 2/26/25

Episode Date: February 27, 2025

In this CNBC Special Report, Nvidia CEO Jensen Huang joins Jon Fortt for his first interview after the company's quarterly earnings to discuss the numbers, the outlook, chip demand, and the AI landscape. Plus: investor and analyst reaction to Jensen Huang's comments and the impact on shareholders of one of the country's most important companies.

Transcript
Starting point is 00:00:00 Tonight, one of the most important stocks to the market just posted its quarterly report, and in just moments, we're going to bring you the CEO's first interview after reporting those earnings. Welcome to this CNBC special report. I'm Jon Fortt. With a market cap of more than $3 trillion, some on Wall Street are calling NVIDIA's quarterly report a pivotal moment for artificial intelligence. Coming up, my interview with CEO Jensen Huang, where we discuss demand for NVIDIA's Blackwell chips and the impact of DeepSeek on its business. Then we'll talk to our panel of experts, including NVIDIA shareholder Josh Brown and AI expert Cassie Kozyrkov. But first, our own Kristina Partsinevelos just got off the NVIDIA earnings call, and she joins us with the key takeaways. Kristina?
Starting point is 00:00:46 Thank you, John. Well, demand for NVIDIA's Blackwell products is absolutely exploding. Their CEO and CFO called it unprecedented in both speed and scale during the earnings call. And get this: their latest chip generation hit $11 billion in sales just in Q4, making it the fastest product ramp ever. And you remember those Blackwell products that were having all of those production headaches talked about in headlines, overheating issues, packaging problems, yield concerns. CEO Jensen Huang admitted that hiccup cost them a few months, but he was quick to reassure investors they worked through those challenges, and he promised they won't face the same issues with upcoming Blackwell products like the Blackwell Ultra. So basically, the supply chain problems are clearing up, easing.
Starting point is 00:01:28 These production challenges and the launch of their new products are weighing on margins, which dropped to 71% for Q1, lower than analysts' estimates, and that's why we saw the stock fall initially right when the press release came out. But several times on the earnings call, CFO Colette Kress reassured investors that margins would climb back up to the mid-70s range by the second half of this year, something she reiterated last quarter as well. There's one big question mark, though, hanging over the chip sector completely. It's what's happening with U.S. export controls and tariffs. CFO Colette Kress was pretty straightforward, saying it's still unknown what the Trump administration plans to do and when,
Starting point is 00:02:05 but she did mention that NVIDIA expects their data center sales to China to continue at the current pace, absent any changes to export controls. At the end of the call, CEO Jensen Huang teased the launch of their new products: Blackwell Ultra and the next iteration, the Rubin architecture. But all of those details and more are going to be revealed at GTC, which is dubbed the Woodstock of AI, and often a big catalyst, a positive catalyst, I should say, for NVIDIA shares, John. Yeah. On all of these topics, pretty much, we unpacked in greater detail. You're going to hear from Jensen in just a moment.
Starting point is 00:02:42 But I think there's still a question among investors about the pace of demand going forward. And he made a case for it continuing, not just because of the orders that they've got in right now and the indications, but the capital build-out, right? Precisely. He said that the increase in compute is only going to keep climbing with each iteration. So we know with DeepSeek coming out in China, there were concerns that you could use lower-end NVIDIA GPUs like the H800 and, you know, do really well with that. He's implying that in the post-training world, you know, when you're doing test-time scaling, you can continue to buy even more chips. And we saw that with Grok 3.
Starting point is 00:03:28 That was Elon Musk's AI large language model. They originally had planned 100,000 chips, and then they doubled that to 200,000 in their data center. So that's just proving that you need even more hardware in order to keep up with this compute, at least for the near term, that's what he's saying. All right. Kristina, thanks. Thanks.
Starting point is 00:03:45 And I started my interview with Jensen Huang, CEO of NVIDIA, talking about the Blackwell ramp, which he called extraordinary. Here's what he had to say. We had a fantastic quarter, a terrific ramp. Nothing was easy about it. And a couple of quarters ago, of course, people were worried about how successfully we'd be able to ramp something as complex as Blackwell. Blackwell, people don't, you know, maybe they just forget that it's not just a chip, but
Starting point is 00:04:14 it's a whole system. That system is a ton and a half, has a million and a half components. You know, it's incredibly hard to build. It's built in 350 plants around the world and 100,000 different operators, factory operators, contribute to building this thing. So I think it was logical to be concerned about the ramping of Blackwell, but we have now successfully ramped Blackwell. The other concern that people had was the Hopper-Blackwell transition. It might create a pocket, an air pocket.
Starting point is 00:04:49 And I think we're now well successfully behind the air pocket. We're going to have a good quarter this quarter. We had a great quarter. We're going to have a good quarter next quarter. And we've got a fairly good pipeline of demand for Blackwell. Yeah. The guide was also above, as you mentioned. Now, earlier today, speaking of demand, I was talking to Amazon CEO Andy Jassy. He told me that as of now,
Starting point is 00:05:12 if he had more AI resources to sell through AWS, he could sell more. That's kind of the short-term signal of demand that you talked about on the call. Tell me more about the midterm signals that investors should be aware of that give you confidence in the continued demand, the scale-outs of data centers, AI factories, relative to what you've historically seen. The short-term signals are just our POs and the forecasts. And on top of that, the things that are not forecasted are new startup companies that are spinning off. And some of these are quite famous. And, you know, at the risk of forgetting any of them, I won't mention any of them, but there's some really, really fantastic startups that have come out as a result of new reasoning AI capabilities
Starting point is 00:06:06 and artificial general intelligence capabilities that they have breakthroughs in. And several of them, there's several of them that are related to agentic AIs, really exciting companies. And there's several of them related to physical AIs. There's just handfuls of each one of them, and each one of them needs additional compute. And that's, you know, the type of things that Andy talks about, because they need to go to AWS, and they have urgent need for more compute right away. And so that's on top of what we already knew to have POs and forecasts and such. The midterm comes from the fact that this year's capital investment for data centers is so much greater than last year's. And of course, we had a very large year last year. We
Starting point is 00:06:49 had a great year last year. It stands to reason that with Blackwell and with all the new data centers going online, we're going to have a fairly great year. Now, long-term, the thing that's really exciting is we're just at the beginning of the reasoning AI era. You know, this is the time when AI is thinking to itself before it answers a question, instead of just immediately generating an answer. It'll reason about it, maybe break it down step by step. It'll do maybe some searching in its own mind before it creates and composes a smart answer for you. The amount of computation necessary to do that reasoning process is a hundred times more than what we used to do. So if you
Starting point is 00:07:31 could imagine, we thought computation, the amount of compute necessary, was a lot last year. And then all of a sudden, reasoning AI, DeepSeek was an example of that, ChatGPT 4.0 is an example of that, Grok 3 reasoning is an example of that. So all of these reasoning AI models now need a lot more compute than what we were expecting. Well, let me stop you there. Because some people took DeepSeek to mean actually that you need less compute, right? Because the initial report was that they were doing more with less. But you're saying, in fact, some of what came out of DeepSeek was the opposite, that there's going to be more compute demanded. Unpack that for me. There are
Starting point is 00:08:18 three phases in how AI works, how AI is developed, largely. Number one is pre-training. It's kind of like us going through high school. A lot of basic math, basic language, basic everything. That basic understanding of human knowledge is essential to do what is the next step, which is called post-training. In post-training, you might get human feedback. You know, it's like a teacher showing it to you. We call it reinforcement learning human feedback. You might practice and do thought experiments. You're preparing for a test.
Starting point is 00:08:53 You're doing a whole lot of practices. We call it reinforcement learning AI feedback. You could also do tests and practice, and we call it reinforcement learning verifiable reward feedback. So now, basically, it's AIs teaching AIs how to be better AIs. That post-training process is where an enormous amount of innovation is happening right now. A lot of it happened with these reasoning models. And that computation load could be 100 times more than pre-training.
Starting point is 00:09:28 And then here comes inference, the reasoning process. Instead of just spewing out an answer, when prompted, it reasons about it. It thinks about how best to answer that question, breaks it down step by step, might even reflect upon it, come up with several versions, pick the best one, and then presents it to you. So the amount of computation that we have to do even at inference time now is 100 times
Starting point is 00:09:55 more than what we used to do when ChatGPT first came out. And so all of a sudden, the combination of all these ideas, largely related to reinforcement learning and synthetic data generation and reasoning, all of this is just causing compute demand to go sky high. Now, tell me about price performance, because this is one of those things that I think a lot of investors are trying to figure out. They see the hyperscalers coming out with their own chips. They say, hey, boy, there are a lot of customers who don't have a lot of money to spend.
Starting point is 00:10:28 They want to get the most bang for their buck. So they view Trainium, Inferentia, Microsoft's chips, Google's chips as potential competitors for NVIDIA. But at the same time, you talked on the earnings call about performance per watt, right? And so it seems to me at GTC, and I know our Jim Cramer is going to be there with you next month, that's where you introduce new products and new powerful use cases. If your performance of what you're coming out with next is that much higher than what's otherwise available on the market, would you argue that your price performance actually ends up better than what's cheaper? The reason why we sell so
Starting point is 00:11:12 much is because our price performance is the best. And it's absolutely the case that performance per watt is incredibly important. And the reason for that is because a data center is only so large. That data center could be 250 megawatts, or it could be a gigawatt. But within that data center, however large it is, you want the amount of revenues you can generate to be as high as possible. So you want to do two things. You want to generate very high quality tokens. AI is ultimately expressed in tokens, and that's what you monetize, dollars per million tokens. You want to generate very high quality tokens because you could get better pricing on that,
Starting point is 00:11:56 better ASP. On the other hand, you want to get as many tokens out of that data center as you can. And in order to do that, your performance has to be excellent, and your performance per watt has to be excellent. And so the simple way of thinking about that is, if your performance per unit energy is the highest, the revenues you can help a company generate is the absolute highest. And if you look at the way we are driving our roadmap between Hopper and Blackwell, our token generation rate for these reasoning AI models can be as high as 25 times. That's the same thing as saying that factory can generate 25 times more revenues using Blackwell
Starting point is 00:12:42 than you could using Hopper before that. Right. Which is the reason why demand for Blackwell is so great. And then, of course, we're on a roadmap that's once a year. And every single year, we're increasing our performance per dollar, performance per energy, performance per watt, so that everybody's data centers become more energy efficient on the one hand, generate more revenues on the other hand. So let me ask you about your China business.
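[Editor's note: the fixed-power economics Huang lays out above reduce to simple arithmetic, and can be sketched in a few lines of Python. Every number below is an illustrative assumption, not an NVIDIA figure: a 250-megawatt site (one of the sizes Huang mentions), a hypothetical $2 per million tokens, and the roughly 25x generation-to-generation token-rate gap he cites.]

```python
# Back-of-envelope sketch of the fixed-power argument: a data center's
# power budget is fixed, so the revenue it can generate scales with
# tokens produced per watt. All numbers are illustrative assumptions.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def annual_revenue_usd(power_watts, tokens_per_sec_per_watt, usd_per_million_tokens):
    """Yearly revenue of an AI factory running flat-out at a fixed power budget."""
    tokens_per_year = power_watts * tokens_per_sec_per_watt * SECONDS_PER_YEAR
    return tokens_per_year / 1_000_000 * usd_per_million_tokens

POWER = 250e6   # a 250-megawatt data center
PRICE = 2.0     # hypothetical $2 per million tokens

old_gen = annual_revenue_usd(POWER, tokens_per_sec_per_watt=1.0, usd_per_million_tokens=PRICE)
new_gen = annual_revenue_usd(POWER, tokens_per_sec_per_watt=25.0, usd_per_million_tokens=PRICE)

# At a fixed power budget and fixed token price, a 25x token-generation
# rate translates one-for-one into 25x revenue.
print(new_gen / old_gen)  # 25.0
```

The token price and throughput figures are invented; the point is only the proportionality, which is why Huang frames performance per watt as a revenue metric rather than an engineering one.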
Starting point is 00:13:08 You talked on the call about how the percentage of revenue is half what it was before export controls. Does the emergence of DeepSeek, which some have cast as a workaround for some of those restrictions, tell us anything about the effectiveness of export controls? It's hard to tell whether export control is effective. The thing that I can tell you is this. Our percentage of revenues in China before export controls was twice as high as it is now. There's a fair amount of competition in China. Export control and otherwise, Huawei and other companies are quite rigorous and very, very competitive. And so I think that ultimately software finds a way.
Starting point is 00:14:03 Maybe that's the easiest way of thinking about it. You know, software is always, whether you're developing software for a supercomputer or software for a personal computer or software for a phone or software for a game console, you ultimately make that software work on whatever system that you're targeting, and you create great software. And so that's kind of the beauty of software engineers. They're incredibly innovative and clever in this way. Our architecture is about the flexibility of software, which is kind of nice. But ultimately here in the United States, if you look at where we are now compared to what is controlled, GB200 is probably something along the lines of 60 times the token generation rate of what is being shipped in China that's currently export controlled. And so the separation of
Starting point is 00:14:56 performance is quite high. And, you know, ultimately, what we know is we experience a great deal of competition there. Real quick: I think last quarter you said demand for Blackwell was insane. This quarter you said it's extraordinary. Are those about the same? Is one better than the other? I would say that my feelings about Blackwell are better today than they were last quarter.
Starting point is 00:15:22 And the reason for that is because we, of course, ramped up into production. We exceeded our target, and the teams did an amazing job. As you recall, we had a hiccup, a design flaw in Blackwell, that we found early on last quarter or the quarter before that. And we recovered tremendously well, and I'm very proud of the team for that. And so, for those reasons, I feel pretty great from an execution perspective. From a demand perspective, you know, DeepSeek was fantastic. It was fantastic because it open-sourced a reasoning model that's absolutely world-class. Just about every
Starting point is 00:15:59 AI developer in the world today has either incorporated R1, using what's called distillation, distilled from R1, or used techniques that have been open-sourced out of R1 so that their models could be a lot more capable. Across the world, AI has become better as a result of the last several months. And so I'm excited about that, and the demand for computation, for inference time, for test-time scaling, which is one of the reasons why Grace Blackwell NVLink 72 is so exciting. That feature is now more prominent than ever, more demanded than ever. And I think the demand side of it is more exciting too.
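[Editor's note: the test-time scaling demand Huang keeps returning to is, at bottom, token arithmetic: a reasoning model generates long chains of "thinking" tokens, possibly across several candidate drafts, before its final answer, and inference compute scales roughly with tokens generated. The sketch below uses invented token counts purely for illustration; real models vary widely.]

```python
# Sketch of why "reasoning" inference costs far more than one-shot
# generation: per-query compute scales roughly with total tokens the
# model must generate. All token counts are invented for illustration.

def tokens_per_query(answer_tokens, thinking_tokens=0, drafts=1):
    """Total tokens generated to serve one query."""
    return drafts * (thinking_tokens + answer_tokens)

one_shot = tokens_per_query(answer_tokens=200)  # answer immediately

reasoning = tokens_per_query(answer_tokens=200,
                             thinking_tokens=6_000,  # step-by-step "thinking"
                             drafts=3)               # several versions, pick the best

print(reasoning / one_shot)  # 93.0
```

With these made-up but plausible-shaped numbers, the reasoning query generates nearly a hundred times the tokens of the one-shot query, which is the order of magnitude Huang cites for the jump in inference compute.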
Starting point is 00:16:40 Jensen Huang, thank you, CEO of NVIDIA. Hope to see you soon. Thank you, John. Great to see you. NVIDIA shares now fractionally lower in extended trading. Joining me to discuss the interview and the quarter: Josh Brown, Ritholtz Wealth Management co-founder and CEO and NVIDIA shareholder. He's also a CNBC contributor. And AI expert Cassie Kozyrkov, CEO of IT services and consulting firm Kozyrkov and Google's former chief decision scientist. Cassie works closely with companies on their AI strategies. Great to have you here.
Starting point is 00:17:13 Josh Brown, NVIDIA has pretty much been at these levels. It was at these levels, where it's trading after hours today, about six months ago, right? But you were in this a long time ago. Do you buy the arguments as an investor here about why demand is going to continue to be strong, why margins are going to continue to recover, why the others aren't catching up? Yeah, so I think, I'm not a technologist, I'm an investor. And from an investing standpoint, I think I have to buy those arguments, because not three weeks ago, I heard from the five or six biggest customers of NVIDIA all confirming that absolutely nothing has changed and they don't foresee anything changing about their plans for this year. So I think when you think about what we just heard tonight, it's just confirmation of what we had already heard from the customer side.
Starting point is 00:18:07 That's number one. And then John, I think, by the way, you did a fantastic job with Jensen Huang. I think when you talk about how important this stock is to the market and why we're all here at 7 p.m. East Coast time discussing these results, again, this is the second largest name in the S&P 500. It's 6.3% of the index. It's 8.1% of the Q's. It's 18.9%
Starting point is 00:18:33 of the SMH semiconductor ETF. It's really important that what we just heard is kind of solidifying what the CapEx side told us. Like, I can't overstate how important it is. We don't need NVIDIA to run to 160 tomorrow. But just the fact that it's hanging in there after hours, and that the upside surprise was indeed enough to give it that buoyancy, is truly important to investors all over the world. Well, Josh, you said you're not a technology expert. I'm not either, but that's why we have Cassie. You were Google's chief decision scientist. So using this type of technology, and I know you've been tracking the way AI is evolving,
Starting point is 00:19:17 able to do multi-part tasks, take context into consideration. Amazon was just talking about this earlier today with Alexa Plus. Do you really think that the demand for NVIDIA chips for that really high-end performance remains strong based on where you see the software going? Oh, absolutely. I mean, AI is the backbone of the future.
Starting point is 00:19:39 Hardware is the backbone of AI. And I don't find myself thinking on the order of, you know, fluctuations of a day or two, but let's talk years or decades. AI adoption is absolutely in its infancy. And with AI, in many ways, I think we got the wrong view from Hollywood. AI is what software wished it could be a few decades ago. And now we're actually catching up to that. So the ability of anybody to talk to a machine, be understood, and get something done is incredible. It's world-changing.
Starting point is 00:20:16 And the participation that we're going to see from that as companies catch on to, wow, what could I do if everybody was an engineer of some sort? So are we like in the late 80s with the PC revolution talking about, oh, what's the market share? What's the market size really like for Intel? Is that kind of where we are? It's kind of like that. Or maybe we're in 2011 talking about what is mobile? Are people going to want smartphones? Josh, when you look at the landscape overall for potential AI investment, you look at NVIDIA's place in it.
Starting point is 00:20:53 Is your sense that the software and platform story that NVIDIA has been telling, that they're getting full credit for that yet? I think they're getting full credit for it. But what I think is fascinating, John, is that we're not wildly overpaying for it, at least not at today's prices. And I want people to have this context
Starting point is 00:21:14 because it's really important. I would pin the beginning of the AI era, I think Cassie probably would too, at let's say November of 2022. So just like call it January 1st, 2023. NVIDIA right now at a 29 forward PE, I'm not going to say it's a cheap stock, but it's not materially expensive relative to the earnings growth that we think they'll deliver. Over the last five years, it's had a maximum price earnings ratio of 89 times.
Starting point is 00:21:46 That was actually March of 2022, before anybody was talking about AGI or ChatGPT, of course. The lowest price that you were able to buy the stock, believe it or not, was February of 2023. It sold at a 16 multiple. People were worried about banks and Silicon Valley at the time. It's had an average price earnings ratio of 42 times earnings over the last five years. So again, at 29, no one's saying it's a screaming bargain, but we are not bidding this stock up to stratospheric levels. I think we're all being very responsible with its current valuation. And I do think there's opportunity to make money from today's price because of that starting valuation that we see today. OK, Cassie, put your old Google hat on for me, because I was asking
Starting point is 00:22:36 Jensen about these homegrown chips that the hyperscalers and I'm talking about Amazon, Google, Microsoft and sometimes Oracle, that they're trying to design their own chips and hardware to run AI software more efficiently. If you're at Google, are you thinking about how not to have to use so much NVIDIA? Or are these things in totally different categories where the uses that they're trying to come up with to make their infrastructure more efficient is just different from what customers are going to demand and need NVIDIA for. I think maybe the word is in the hyper and the scalar. You're trying to do as much as possible with whatever resources you have.
Starting point is 00:23:16 So you're going to want more of whatever is available as long as there is a market. And so when we look at cloud services, which is a fast-growing business, you have to have the hardware behind that, and AI is going to be everywhere. So yes to everything. Yes to everything. What are you seeing out of China right now?
Starting point is 00:23:42 It used to be 10, 15 years ago, I think a lot of U.S. talking heads underestimated the innovation, the creativity out of China. Oh, they can't make consumer apps that'll make it over here. Their enterprise stuff is lower quality. They've actually done quite a bit of interesting AI work. How do you see that affecting NVIDIA? Well, first, what I would like to see out of China is something that leads ahead of what U.S. companies are doing and not catches up to it, because a lot of the rhetoric really makes it sound like China is so ahead. So, you know, let's see that. But you're saying, let's see evidence of it, right? But at the same time, they're doing a good job compensating for what they don't have.
Starting point is 00:24:32 One could say that. Would one be right? Sure. I'm wondering, because you're the expert here. I would say that the U.S. is very much ahead. Look, I might say as far as a year ahead, perhaps, but I'm not a China expert. So you should ask the China desk. Well, you know, it's hard to get straight answers out of China, especially from the Chinese government. Josh Brown, there are a number of other stocks in the market, and I'm thinking about software mostly here, that follow NVIDIA's lead.
Starting point is 00:25:11 I'm putting Dell and Supermicro and some of those equipment makers aside. What is your expectation for how the AI trade overall digests the overall optimistic story that Jensen not only told to analysts, but told us here on CNBC tonight. What does that mean for the overall narrative of AI's growth and potential? So I think broadly speaking, people are going to wake up tomorrow and digest 48 sell side notes, most of which will be some form of overweight slash bullish slash
Starting point is 00:25:46 accumulate. I know they keep coming up with new ways to say we like the stock. But on balance, what they're going to read from the analyst community is, I think, once again, like a sober version of awestruck. I think the sell side is going to try its best to keep its enthusiasm in check. But they're talking about the second-half launch of Blackwell Ultra on the conference call. And somebody's like, well, wait a minute. You had a hiccup with Blackwell. It launched a little bit late while you dealt with that. Are you still seriously coming out with the upgrade, the new version, already in the second half of this year? And Jensen's like, yeah, we're doing this. So I feel like you're going to get a buoyant tone
Starting point is 00:26:31 from the sell side. And I think that will extend to the other names that you mentioned, John. But NVIDIA is really unique. I don't think there's a substitute. And again, at 29 times forward, there doesn't need to be. I don't think you have to buy the fourth best. Look at the people that bought AMD a year ago. It's in a 50% drawdown. You know how many people bought that stock saying, I don't know, Lisa Su and Jensen Huang, they're actually cousins. Jensen used to work at AMD. You know, I missed NVIDIA. I'll buy AMD. I always find that to be a mistake. It's not that none of the peripheral stocks can work. I guess what I'm saying is if you have
Starting point is 00:27:11 a portfolio without NVIDIA, you're trying to make a bet on autonomous vehicles, machine learning, augmented reality, virtual reality, data center, etc. Like, don't miss this name and don't go looking for the poor man's version. I got it. Pick your favorite and you got your favorite and it's working. Cassie, you're meeting with clients talking about AI. How much are they talking about the possibilities and trying to understand that and the performance that they need for those possibilities versus concerned about costs at this stage? Where is the conversation with these people you're talking about? The conversation is very little about costs, very much about possibilities, and also very much about
Starting point is 00:27:58 the completely different paradigm that some of these technologies require in terms of thinking about ROI. What does it mean to figure out the value of a generative AI system working at scale in the enterprise? And so what we see is, for individual users, it's enough that it feels useful. But scale demands to be measured. When we're going to do things seriously at scale in the enterprise, you have to rethink all your frameworks of value and of automation. And we are on the cusp of management getting the hang of that. And that is going to be such a phenomenal blossoming of use cases. And then we're going to talk about, oh no, how do I actually get enough of what I need, not just in terms of hardware, but also in terms of experts who can help.
Starting point is 00:28:52 All right. Josh Brown, last word for you here. Is this the place where you can buy NVIDIA still? Yeah. I mean, I would love it. It rallied 4% into the print tonight. So I would love it if it pulled back 4% for people that are not in it. I'm not buying more just because it's already big enough in the context. It's a 10,000% return since 2015. So I don't need more NVIDIA personally. But if somebody does not have this in their portfolio, I don't think it's a wild stab in the dark to be buying it here.
Starting point is 00:29:27 And, you know, quite frankly, if you're not in it, I think the lower it goes, the better. OK. Clearly said, Josh Brown. Thank you very much, Cassie Kozyrkov. Great perspective. And we just heard from Jensen Huang after the earnings call. Thank you for joining us for the CNBC special report. We'll have much more coverage of NVIDIA tomorrow, starting on Worldwide Exchange at 5 a.m. And Shark Tank starts now.
