Big Technology Podcast - Intel's CEO Shares His Plan To Win The AI Chip War — With Pat Gelsinger

Episode Date: December 13, 2023

Pat Gelsinger is the CEO of Intel. He joins Big Technology Podcast to discuss how the AI chip war is playing out and why it's growing more competitive. In this episode, we break down the various components of the chip business, the difference between CPUs and GPUs, who's ahead in the AI chip war, NVIDIA's underrated weakness, and why Intel is getting back into the foundry business after a couple of failed attempts. Tune in for a vibrant, deep discussion of the state of the AI chip business and the state of Intel's comeback. --- See how Intel is bringing AI Everywhere on Thursday, December 14 at 7:00 – 8:00 a.m. PST / 10:00 – 11:00 a.m. EST for a keynote featuring CEO Pat Gelsinger and other Intel leaders, livestreamed on the Intel Newsroom. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Transcript
Starting point is 00:00:00 The CEO of Intel joins us to talk about the company's AI plans and the broader chip war. All that and more coming up right after this. LinkedIn Presents. Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversations of the tech world and beyond. We have a very special show for you today because the CEO of Intel, Pat Gelsinger, is here with us today. Pat, welcome to the show. Hey, great to be with you, Alex. Look forward to having a chat together and talking to all of your listeners today.
Starting point is 00:00:35 Great. Well, thank you for being here. We're talking about Intel. And I think the question that people have about Intel, I mean, obviously, you're in the middle of a comeback, is we all remember the Intel Inside days, and that Intel was in everything, and then something fell off. So what exactly happened at Intel and what is the comeback looking like? As briefly as you can. Yeah. And, you know, over an extended period of time, Intel didn't have technical leaders in the CEO chair, you know, made some bad technical choices, you know, had sort of lost its way in its core product execution. And then probably the most critical one, Alex, was we lost process technology leadership. You know, we bet against this key technology called EUV and that ended up being critical. And we went from being multiple years ahead in process technology and continuing Moore's Law to multiple years behind. And boy, you know, when I came in to take charge two and a half years ago, it was rebuilding the company, setting a new strategy, getting the product execution, but most importantly,
Starting point is 00:01:40 getting back to process technology or silicon leadership in the core of the company again, and opening up a new strategy to become a foundry for the industry. Okay, great. So I want to talk about all of this, but let's just do a quick 101. to get started. So there's so many different parts of the chip world. I mean, can you talk us through exactly what they are? There's design, manufacturing, and not every chip manufacturer manufactures their own. Yeah. And, you know, we used to describe Intel as IDM, integrated design and manufacturing. And for the most part, the world has moved to foundry, you know, that there are
Starting point is 00:02:22 companies that do foundering or manufacturing for others and then fabless companies. So a fabless company would be Nvidia, AMD, Qualcomm. They don't actually build and run their own fabs. Intel's one of the few companies left that actually builds and runs our own fabs. But we had only manufactured our chips in the past. And this is what I've called IBM 2.0, that we're going to be designing chips, you know, like zions and, you know, our core chips for PCs, but we're also going to be a manufacturer of our chips and for the industry. And so you can sort of think about those two parts of the semiconductor industry, the manufacturing, being a foundry, and being a provider of the chip designs themselves. Right. So design. Some companies just design. Some companies just
Starting point is 00:03:11 manufacturer. Some companies manufacture other people's chips. And Intel wants to do all those. That's right. That's right. And that's the new strategy that I've laid out, that we're going to be this manufacturer for the world. But we've also said, hey, we have to rebuild Western manufacturing. And this has been the Chipsack in the U.S., the EU Chipsack, that all of a sudden the world had moved to only manufacturing in Asia. And when we saw in COVID, boy, you know, our supply chains got disrupted. Auto plant stopped because we didn't have a $1 semiconductor. So we to rebuild Western manufacturing at scale as well. So it's been another key piece of the strategy. And obviously with the passage of the Chips Act in the U.S. and Europe, we've been quite successful getting that underway as well. We're here with Intel CEO, Pat Gelsinger. So Pat, just to, you know, go back to basics, can you also tell us what the difference is between a CPU and a GPU? Well, CPU is this general purpose compute device. You know, as you think about it, It runs everything.
Starting point is 00:04:18 You know, it could run a web service. You know, it could run, you know, an application, you know, it runs Zoom, right? It runs, you know, every, you know, your recipe program. It runs everything and all the software. So it's called a general purpose CPU. A GPU instead is really built for a very specific class of workloads. So generally those have been called throughput workloads. So it does lots of floating point processing and
Starting point is 00:04:47 matrix operations, so it's very dedicated for things like graphics, matrix, and now, right, has worked out to be uniquely good at things like AI. And so it's a very specific set of apps that have become very important. Why is the GPU so good for AI? Well, AI tends to have very specific operations that is doing and that all it's doing is compute, compute, compute, compute, right? Whereas a CPU is sort of saying, oh, if then, and jump over here. run this application. So it's very specific, you know, and largely emerged from the whole graphics space where all is doing is vector graphics, you know, rasterization, a very narrow set of compute workloads. So it's designed, basically, you can sort of think about it, you know, as your general purpose sedan. That's sort of the CPU. And the GPU, all it does, it gets on the F1 track, and all it does is go fast on very specific workloads. interesting and obviously it worked really well for gaming and that's kind of was invidia's specialty is that how invidia just ended up running away with the game here was that
Starting point is 00:05:57 they built this GPU for gaming and it ended up being they kind of lucked into it being good for AI yeah and it very much is that way and jensen and i you know we've known each other for 35 years you know this general purpose workload and we always are adding more capabilities to the CPU but over here, it was always just go really fast for graphics. And then you got really lucky that the AI workload sort of looked a lot like the graphics workload. So as I joke with Jensen, I said, you know, you just were really true to that mission of throughput computing and graphics. And then you got lucky on AI. And he said, no, no, no, Pat, I got really lucky on AI. But now it's interesting because you have in video, okay, they're the clear leader. But every single day,
Starting point is 00:06:45 it seems like another company is announcing their own GPU. I know that Intel's had its own Pontavecchio chip in development, but also you have accelerators, right, which is basically ways that companies like Amazon and Google will modify chips in order to be able to run AI workloads. In fact, Google just trained it, apparently its entire Gemini model on its own accelerator, not needing Nvidia at all. So just take us into that race a little bit.
Starting point is 00:07:12 And does it seem like, I mean, Nvidia's lead for a long time, That seemed steep, but it seems like less so now. Yeah, and what we expect, you know, and when we think about AI workloads, you can think about training and inferencing. And you can think about that like a weather model. How many people create the weather model, that's training, versus how many people use the weather modeling. Oh, that's lots of people, you know, local forecasters, you know, scheduling, route maps,
Starting point is 00:07:40 all that kind of stuff, use weather models. For the training application, you now have what NVIDIA does, accelerators like what we're doing with Gowdy, but also then the TPU from Google, the Traneum from Amazon, what Microsoft just announced with Maya, what AMD announced, because the software there, right, you know, is very specific in this class. So if I can run that Python code, as it's called, you know, the key language in this case, then, well, I'm going to go compete at that. And those machines are getting big and fast, so a lot of people are pursuing that. But in the inferencing, then you sort of say, hey, how do I mainstream that application? That's an area that actually is just another workload, and we're going to do a lot of inferencing on our standard CPUs of the Zeon product line as well. So we expect that there's going to be a lot of competition in the AI space.
Starting point is 00:08:33 And finally, for Intel, you know, we're also going to be a foundry. We're going to be the manufacturer for many of those chips as well. So we want to be the manufacturer for NVIDIA, for AMD, for Google, for Amazon. We want to be their manufacturing partner, even if we're not using our design chips. Yeah, I'm definitely going to talk to you about the manufacturing in a bit. But let's just stick with the design here. So, I mean, it does seem like what you're saying, though, is that this landscape is going to be a lot more competitive than it has been previously. I mean, you have a company like Nvidia that added, what, $600 billion to its market cap in one year.
Starting point is 00:09:10 like there are going to be others that are going to be trying to get in. Does that sound right to you? Absolutely. You know, and I sort of put them into two classes, Alex. There's going to be those that build their own. You know, and that's what you see Amazon, Microsoft, Google are doing. They're going to say, hey, I'm going to own this and do this myself. And then there's going to be the general providers in the marketplace. And that's going to be Intel, AMD, NVIDIA, I think will be the three big ones in that space. so there's going to be do it ourselves we're going to own the full stack of hardware and software which is the big cloud guys and then there are those who say hey i'm going to sell my chips to everybody and i expect those would be the big three okay so wait which which part are you going to compete in then both or both yeah you know we're going to you know because i want to be a foundry
Starting point is 00:09:58 to what amazon does what microsoft does what google does and i'm going to sell my chips and i'm going to sell my chips to the enterprise customers who want to do this with their data on premise as well as to the big cloud guys as well and today you know biggest customers for invidia today are probably Microsoft who's putting up their big farms but they're saying hey no i'm going to build my own chip i'm going to build Maya so that i do just like what google is doing with their own tPU as well i want to own that margin and i'm going to do it on my architecture as well so intel i think uniquely has two bites of the apple here to pursue right it's interesting to hear you You talk about how the AI inferencing or the running of these models are going to be done on CPU chips.
Starting point is 00:10:41 I mean, you just kind of explain the architecture and the use of a CPU, and it seems like even still a GPU would be better for AI functionality. But are you saying that actually the CPU will be fine? It seems like it was built for something different. Yeah. And what's going to happen is AI is going to get added to every application. Right. So everybody's going to start saying, how do I bring AI? into my apps. So imagine I'm running SAP, right? I'm going to do a lot of my normal SAP and all
Starting point is 00:11:12 that runs my CPUs today, but then I want to add some inferencing capabilities into my SAP environment. You know, we believe, and we're adding these matrix functions onto our CPUs, so we're extending the workloads of our general purpose CPUs to do a better job at AI. And so if the workload is just running inferencing, oh, it'll probably run better on a GPU. But if it's running a lot of things, we're going to make it just run great on the CPU. We're finding great interest from customers to do that today. You know, and for my standard CPUs today in the data center, you know, we see about a third of the purchases are being based for AI workload. So we're already seeing that characteristic emerge quite strongly today.
Starting point is 00:11:59 So explain this one to me then, because Intel, has its own Pontvecchio trip, which is apparently, you know, a GPU trying to do some of this other stuff. But how are you, you know, how are you going to balance that with running AI on your CPU chips? Yeah. You know, so some, you know, if the workload is running on the CPU, it's just going to stay running on the CPU. Right. But then we're also going to, for these environments where all that you're doing is running AI, then we're going to offer our accelerators as well. And Pontavecchio and Gowdy, we're bringing those together into a single product line, you know, going forward because we're going to compete in that space as well.
Starting point is 00:12:35 You know, we're going to be building, you know, components, whether they're GPUs or CPUs to capture as much of the market as possible. Okay. So I know about Pontavecchio. It's a GPU. Gowdy CPU that runs AI functions. It's a, you know, it's called an accelerator. It really is designed for these unique matrix functions that are seen in AI workloads.
Starting point is 00:12:58 So just give us an honest assessment of where Intel. stands today compared to NVIDIA? Like, are you getting close in terms of like the volume or where is the where how do you compare right now to them? Yeah. No, NVIDA is the runaway market share leader today, right? We give them credit for that. You know, we're now seeing our growth rate and quarter to quarter we approximately doubled the growth rate, you know, but we're still small market share, you know, today. But we're rising quickly because customers are looking for alternatives, you know, today because this is demand and they also want better price and different you know, features. So our business here is growing very rapidly, you know, but it's from a much
Starting point is 00:13:36 smaller base, but we are now winning some of the performance benchmarks. So all of a sudden customers are saying, huh, you know, they're showing up winning some of the benchmarks. You know, I want an alternative. NVIDIA is short on supply and I'm getting much better TCO from Intel. You know, hey, let's go start testing this and we're getting a lot of interest in our value proposition. Yeah, in some ways this supply crunch, you know, really can end up working in your favor because people do need something. Yeah, and it's both a supply crunch for the supply chain, right? And some of our packaging and wafer capabilities, people are saying, hey, can you help
Starting point is 00:14:12 us people who might not have considered Intel as a foundry supplier or all of a sudden saying, hey, can you manufacture, right? Even people we compete with on the product side saying, hey, can I be your manufacturer? But, you know, if your chips are working today and I can build my next AI farm, you know, using you, a lot of interest there as well. So unquestionably, the supply crunch is working for us. Okay, great. So we've talked now about design. I think we've done enough on that. We can talk now about manufacturing. People are talking a lot about TSM, Taiwan Semiconductor, and the fact that A, like, I think during COVID, a lot of people realize that it was a strategic liability to have
Starting point is 00:14:50 core manufacturing for the U.S. be done offshore. Now, it's going to take a while for us to get to that point, but there's legislation, there's funding, the Chips Act, that's going to give companies like yours an opportunity to start to build some serious foundry capabilities in the U.S. One question to you to start. Intel has tried a couple times to build foundry capabilities for others, I think twice before, and it hasn't worked out. It's very different to basically manufacture your own chips than to manufacture other people's chips. It takes, you know, off-the-shelf technology, process, all that stuff. So what gives you confidence that this time is going to be different for Intel?
Starting point is 00:15:33 Well, several things that we're doing differently this time. And the first, you know, I'll say on the first attempts, they were hobbies, right? It was sort of like, let's go try. You know, we really weren't taking it seriously as a company. This time, I have met the future of the company that we are going to. Why not? Like, why was this a hobby? And then we can talk a little bit more about why this is so important now.
Starting point is 00:15:54 And, you know, fundamentally, the Intel business was going really well before, and this Foundry business model was still pretty nascent. So it was sort of like, ah, that model's emerging. TSM's doing pretty good. You know, let's go try it in a few places. But it wasn't taken deeply intentional as a core part of the strategy. And not very profitable, from what I understand, is that this is kind of like the least profitable part of the whole process. Well, hey, TSM has gotten pretty good profits. And how they figured it out.
Starting point is 00:16:25 Yeah. Yeah, they have figured out how to make good profits here. So this is now a very profitable business, you know, almost as profitable as the chip business itself in many respects. You know, secondly, the ecosystem has become much more mature. And Intel before was very proprietary. So if you wanted to use my foundry, you had to be proprietary on me. Well, now we have standardized our processes like the rest of the industry.
Starting point is 00:16:51 So it's much easier to use us. as a foundry. You know, third, I'd say everybody post-COVID realizes, oh, my gosh, you know, we desperately need a Western foundry at scale. You know, this is super important. And we're finding that interest from customers because they see their supply chains is very fragile. You know, and they have become so dependent on one company, one island, one port. You know, there's a lot of industry interest as well as government interest to build us as a world-class foundry. So we're well in the way. And it's become a key piece of the strategy that I've laid out. How would you assess the geopolitical risk to Taiwan? I mean, you said one country, one island. Obviously, it's Taiwan.
Starting point is 00:17:38 We've seen already, you know, Russia invade Ukraine that put a lot of people's antennas up. This might happen in Taiwan. What's your perspective on how serious we should take this? Well, you know, this is one where it's going to take years to rebuild these supply chains. Right. Like how many years? It took us three decades to have our supply chains move to Asia. You know, what we've said is, hey, we've gone from 80, 20 to 2080 in Asia. Wow. By the end of the decade. So, you know, seven, eight years, I think we can get close to 50-50 by the end of the decade. And if we accomplish that, right, over a seven or eight-year period, I think the world is going to
Starting point is 00:18:17 sleep much better at night, you know, because, hey, this is, you know, a blockade of the Taiwan Straits and all of a sudden the island browns out in 30 days, you know, this becomes very precarious, right? And we can fix it. And, you know, not just the economic, but the national security benefits of this are huge. Why is it going to take so long? It takes five years to build the new fab. Right. So, you know, what I've described, you know, we're a couple of years underway on this, But, you know, if we accomplish us by the end of the decade as we've laid out, you know, that is spectacular. You know, for a layman, why does it take five so long to build one of these factories? Well, you know, these factories, you know, first they are just amazing, right?
Starting point is 00:19:01 And I, you know, I just love people to come and visit the factories. These are the largest construction projects on Earth today, building the smallest things that have ever been built on Earth. Yeah. It really is amazing, the precision manufacturing, the chemicals, and so on. You know, it takes us about five years to have one of these factories up and running on a leading edge process technology. You know, the total project is about $30 billion, right, to build one of these factory complexes. You know, so it's an enormous capital investment, right? And, you know, I end up with like 7,000 tradespeople, you know, to work for almost four years to build one.
Starting point is 00:19:43 of these locations it truly is right a manufacturing marvel building the most advanced science that's ever been done on earth now pat i've spoken with people in the early days who were there at texist instruments and were part of this offshoring of chips to taiwan and basically what they said was um it was so it was the least the least profitable part of the whole process and they just didn't care they didn't think about it strategically um i'm curious I'm curious if you think that it was a mistake to let so much manufacturing leave the U.S. And then also, like, you know, it seems like a good hedge to have a plant here, but just in terms of a profitable business line and a good business, like it's going to be very expensive to run in the U.S., don't you think? Curious to have you weigh in on both of those.
Starting point is 00:20:33 Yeah, two things. One is, yes, it was a mistake. And I think the world realized how big a mistake it was in the middle of COVID, you know, that we allowed our supply to. change to become so fragile. And as I say, what aspect of your life isn't more digital? Everything's more digital going forward, right? And everything digital needs semiconductors. You know, where oil reserves are has defined geopolitics for 50 years. Where technology and fabs are for the future is more important. You know, so with that in mind, yes, it was a mistake. And now, you know, with the Chips Act, you know, we've taken the most significant industrial policy legislation
Starting point is 00:21:12 since World War II to correct that error. Now, part of the Chips Act was to level the playing field, right? It was to close that economic gap that we see with Asia today. So it's designed to, right, bring those back on parity so that the investments that we're making are competitive with those of Asia in the world. And I believe as the decade goes forward, we rebuild the ecosystems, we can systemically and structurally be closing those gaps. In addition to the aid that's been needed to immediately fix some of that huge economic gaps that we have today.
Starting point is 00:21:50 Okay, so there's some subsidy there that sort of makes the economics work. Yes. And so we are big technology podcasts, so I have to ask you about Apple, right? They've started designing their own chips and they're doing a pretty good job of it. So I'm curious from your perspective as a chip manufacturer, what do you think about the position that Apple's in today? And I mean, clearly the performance is quite good on the chips they've designed. Is that something that you, I mean, obviously not every company is going to do it. But how about you assess their effort?
Starting point is 00:22:20 Well, you know, they used to use Intel chips. And when Intel stumbled, Apple stepped in and did their own chips. So ultimately, my objective is build better chips that they want to use our chips versus doing it themselves. But it also shows that this idea of the foundry ecosystem has become very mature. that a company like them could step in and build very good chips. And remember, they build chips for their applications. So they highly optimize them just for the Mac and for the iPhone as well. They don't do everything like the Intel chips do across many different markets.
Starting point is 00:22:55 They optimize them solely for their applications and products. And they've done a super good job. And I'd say over time, hey, I'd hope to give them a better product that they could use my chips again. But I still want to be a manufacturer for them, even if they, choose to keep designing their own chips going forward. I want to become a foundry for them, just like they use TSM today. And have you talked to them about that? Are we still seven years away from that being a reality?
Starting point is 00:23:20 Of course I've talked to them. I've talked to everybody in the industry, you know, Qualcomm, Nvidia, AMD, Google, Apple, Broadcom, et cetera. I want them all running on our factories because that is better for them to have our technology, better for them to have more resilient supply chains and I'm going to make it a good business proposition for them as well. Okay. You have a big AI event coming up this week. Can you tell us a little bit about what people can expect there? Yeah. We call it AI everywhere, right? And in this sense, you know, AI isn't just going to be for these big high-end cloud and training environments,
Starting point is 00:23:58 but how do we make it available across every PC, across every edge device, as well as our chips for the data center. And we'll be introducing, you know, two new chips. One is our main CPU, right, Xion Gen 5, that we have further enhancements for the AI workload. And we'll be introducing Core Ultra, you know, which is for the client to put AI capabilities directly into your PC. Okay, sounds good.
Starting point is 00:24:27 Anything you think I missed or anything else we should know? Yeah, and I just say for this AI Everywhere thing, you know, Intel's showing up here saying, we're the volume provider, you know, and much like the Centrino event was, you know, 20 plus years ago that made Wi-Fi and access points and every coffee shop had to have, you know, Wi-Fi service. You know, it just changed the, not just the PC, but the entire way that people use computing. We see this AI PC having that same kind of shift where all of a sudden maybe I don't type to my computer anymore. I just talk to it in the future.
Starting point is 00:25:05 It knows when I'm there. It translate languages. It has new insights and capabilities. It becomes my personal bot. We just see it ushering in a new generation. And Andy Grove, one of the founders of Intel, described the PC as the ultimate Darwinian device. And we think we're about to go through a major evolutionary step in the life of the PC. And that begins today.
Starting point is 00:25:30 One quick follow-up. What is an AI computer? You mentioned an AI computer. Yeah. You know, think about your PC today that now has built-in AI capabilities. We're all of a sudden, instead of having to go to the cloud to get a model, all of a sudden, my PC is able to record, translate, summarize, you know, be vision tracking in flight, you know, where you could be speaking in Korean and I could be hearing you in English and vice versa. In real time, I'd look away from the screen, right? And it would summarize the conversation when I'm outside of the meeting, you know, before the call, you know, before my next call with you, it would say, hey, you know, on this date in December, you spoke to Alex and remember this is his birthday coming up and don't forget to remind him to bring flowers home. You know, all of those kind of things would be part of that AIPC experience as well as we see it shifting and changing the form factors as well. Cool stuff, Pat. Thank you so much for joining.
Starting point is 00:26:32 Hope to keep up this conversation as we go forward. Look forward to it as well. Thank you so much. All right, everybody, thanks for listening. We'll see you next time on Big Technology Podcast.
