Limitless Podcast - The $10T AI Scaling Race: Who's Winning, And Who's Certain To Lose

Episode Date: September 26, 2025

Over the last 30 days, more than $1T has been pledged to build 50–100 GW of power for AGI: an unprecedented, Manhattan-Project-scale race to convert energy into intelligence. We break down who's winning and why: xAI/Elon's Colossus 2 and deep vertical integration (Tesla Megapacks, in-house chips), OpenAI's Stargate deals with Oracle/SoftBank plus a reported $100B NVIDIA tie-up, Microsoft's Fairwater push, and Intel's geopolitics, while Europe lags and China stockpiles energy and homegrown GPUs. Is this an AI CapEx bubble, or the new reality where compute becomes the last thing we spend money on, and where being months late can cost you the race?

------

🌌 LIMITLESS HQ: LISTEN & FOLLOW HERE ⬇️
https://limitless.bankless.com/
https://x.com/LimitlessFT

🔐 KGEN - The Verified Distribution Protocol: https://bankless.cc/KGEN

------

TIMESTAMPS
00:00 Intro
02:25 Colossus
09:21 OpenAI Goes BIG
16:29 The $100B Bet
22:19 Intel & Apple?
26:01 Where Is Anthropic?
28:33 Europe Is Cooked
29:41 China Is On Fire

------

RESOURCES
Josh: https://x.com/Josh_Kale
Ejaaz: https://x.com/cryptopunk7213

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:03 I can't believe I'm saying this, but over the last 30 days, over $1 trillion has been committed to building out over 50 to 100 gigawatts of power to build AGI. xAI, OpenAI, Meta, Microsoft, and Google have each committed hundreds of billions of dollars to building out the next generation of data centers that are going to power the most powerful and transformative technology we've ever seen. Now, I'm just going to call it. I think Elon wins this one. Despite being over five years late to the race,
Starting point is 00:00:37 in the last two years, he's built out one of the best frontier AI models the world has ever seen and assembled some of the best engineers and builders to scale compute and data to train these models. Not only that, but Elon's leaving nothing to chance. He wants to own the entire table. Tesla, which started off as an electric car company, now produces Megablocks and Megapacks that can power cities. He's building robots; it's never going to end. Josh, I want to remind the audience that in order to build the best AI models,
Starting point is 00:01:08 you need tons, trillions of dollars worth of compute to reach what we perceive as AGI. I think xAI is going to win it, do you? I think they are so far currently. Just based on, again, the rate of acceleration, we always talk about this. They're moving the fastest. They have the most cracked team of engineers who are working on this. They have the most vertical integration. Colossus 2. Come on.
Starting point is 00:01:29 I mean, Colossus 2, it's going to be pretty amazing. The thing that I've really come to grips with recently is that this is the largest capital expenditure project anyone has ever done before, and everyone is doing it independently. This is like the Manhattan Project at a scale that we could never imagine, except instead of one entity working on this, there are several of them, with the largest companies in the world spending all of their additional money and even going into debt to pay for this. And there's a strong case to be made that, like, this could be the last thing that we ever spend money on. The transformer kind of created this way of converting electricity into intelligence. And there's a very strong case to be made that the more energy we can get, the more we should just funnel it into generating intelligence. And that's exactly what's happening with these companies. I mean, they're spending a huge amount of money.
Starting point is 00:02:16 It seems like xAI is in the lead. And I think you have some fun things to share about the Colossus 2 buildout, which is the new buildout for xAI. Can you please walk us through what's going on here? So just to remind the listeners, compute is the most important thing to determine how good your model is going to be. And I want to start off with xAI and Elon Musk's strategy. They recently announced this year something known as Colossus 2, which, as this tweet says, is going to be recognized as the world's first gigawatt-scale AI training data center. Simple translation is, it's going to be the biggest data center to train AI models that we've ever seen before.
Starting point is 00:02:51 And I think Elon has invested over $10 billion already into building this thing out. With the goal to scale this to 10 and 100 gigawatts eventually, which is enough to power multiple, multiple towns. And even a terawatt is in there, I see. I was almost scared to say that because I was like, can we do this? But, you know, trillions are being spent at this point. To give some context for the listeners, with Colossus 1, the team deployed 100,000 GPUs,
Starting point is 00:03:19 which is the thing that you need to kind of train these models, and it took them 122 days. With this new Colossus 2, which is much, much bigger than the original one, it took them 19 days. Two digits to build out the first iteration of this. So Elon's ability to scale these data centers is just actually insane. And I want to kind of point out this snippet from SemiAnalysis, which does analysis on infrastructure races, saying xAI built in six months what took 15 months for Oracle, Crusoe, and OpenAI to build. But I think, Josh, Elon has another secret strategy to winning this war. Can you tell me about it? Yeah. Well, I mean, the Elon strategy and the reason why all of these
Starting point is 00:04:03 things work so well is because he is amazing at manufacturing and production. And I think a lot of people overlook how difficult it is to actually make things in the physical world. It's really difficult to manufacture things. And he learned this over the decades through Tesla, through SpaceX, through actually creating physical hardware in the physical world, in a space in which a lot of these companies really only create digital goods or handheld products. A lot of this has to do with vertical integration and the unfair advantage of having this suite of companies that kind of complement each other. One of them, like we're seeing on screen now, is Tesla and the advent of the Megablock, which basically not only allows you to create a microgrid by having a transformer
Starting point is 00:04:42 connected to a bunch of batteries, but it allows you to smooth the electricity from the grid to your power systems that are training the models. So a big problem when you're training large models is there's this thing called power jitter, where every time the GPUs spin up, they require a lot of energy, and they spin down very quickly, and those jitters are very difficult for the grid to smooth out. So these Megapacks do that. They are also working on another effort where they're going to create their own in-house chips, starting with AI5 and then eventually AI6, and the assumption is that, well, eventually these chips will be powerful enough that they could just embed them in their own data centers and actually use their own in-house
Starting point is 00:05:19 chips to train this. So not only are they really great at the vertical integration, but they're just really hardcore at creating these products. And I mean, like you said earlier, Ejaaz, the speed at which they're able to deploy this stuff is just faster than everyone else. And whether it's a testament to the actual work culture or them doing it all themselves, versus a company like OpenAI, which is outsourcing a lot of the actual infrastructure to Oracle and a lot of the construction stuff to a third-party company, xAI is doing a lot of this in-house. And I think that's going a long way
Starting point is 00:05:47 in actually making them move so much faster than everyone else. He just, how old do you think xAI is? Like, they haven't been around that long. Has to be like, what, three years? They're quite young? It's like two and a half years. It's less than three years.
Starting point is 00:06:01 And OpenAI has been around for nine. That is a long time. And granted, a lot of the earlier years were spent just doing research and preparing the world for the world of AI. But xAI moves so fast. I think all of these things kind of converge together to come to their advantage. And as we move faster and further along on this path to AGI,
Starting point is 00:06:19 that rate of acceleration on an exponential curve starts to really make a big difference. And just the more I think about this, like these outrageously large numbers, the trillions of dollars, I mean, this is the biggest capital expenditure ever made in history. It's like seven or eight Manhattan Projects all going at once from all the largest companies in the world. And it kind of makes sense. Like, it seems like this is the last thing you'll need to spend money on. Yeah, it actually reminds me of something Zuck said in an interview last week, which was, I am willing to lose hundreds of billions of dollars on a massive mistake if it means I have a 10 to 20% chance of building out superintelligence. The point being, it is such a high-stakes game that everyone's playing right now that they can't afford to lose.
Starting point is 00:07:07 Let's say your projected estimate is that AGI is achieved in five years' time, and it's actually achieved in three years' time. That two-year lag is going to cost you your entire company, and that was the point that he was making. Yeah, absolutely right. This is the most important race ever. Yeah, things move so fast along the exponential curve. If you are off by a couple of months on that vertical line, you are just, you are in trouble. And we're going to talk about Meta and Zuck and OpenAI and all of the other crazy news.
Starting point is 00:07:31 But first, we do have to make mention of our sponsor, KGEN. KGEN is building the world's largest verified distribution protocol, aka Verify. What is Verify, you may ask? It focuses on ensuring only real, high-quality users participate in digital platforms, addressing issues like fake accounts, bots, and fraudulent activity. How does this work? Well, this protocol uses advanced biometrics and fraud prevention technologies to block fake users. The system is essential for AI and consumer applications alike, for developers who require high-quality, fraud-free data and engagement to train models and expand their products globally. It's
Starting point is 00:08:05 already used by over 200 clients across AI, gaming, DeFi, and consumer apps, and has reportedly delivered for them over a 99.8% reduction in fake accounts. So if you need to train your AI on real data, check out KGEN. We'll have a link in the show notes. Thank you, KGEN. Ejaaz, back to you. Let's talk about OpenAI, because this seems like a pretty big thing that's happening in the world of data and training that they just announced this week.
Starting point is 00:08:29 Okay. So we have xAI and Elon, which, as you pointed out, is the newcomer to this race, but somehow leading the front. But they have some history with Sam Altman, who is, of course, the founder of OpenAI. They go way back. They actually founded OpenAI together, right? But then a rivalry ensued when the two of them kind of broke up because they didn't align on the same types of business interests. Sam wanted to take it from a non-profit to a for-profit. He wanted to start building AI in a certain type of way that Elon didn't agree with. And so Elon broke off and founded xAI. And that's kind of like
Starting point is 00:09:03 how those two companies kind of have progressed since. But Sam has been equally, if not more, aggressive in building out data centers to train the best AI models. If you remember, Josh, I think earlier this year, actually the start of this year, they announced their Stargate project, which is basically the name that they're giving to the different types of data centers that they're building to train these different models. These Stargate data centers exist all over the world. Geographically, I think they have the highest concentration in the U.S. I think they've planned for five to ten of these Stargate sites to be in the U.S., but they're building one
Starting point is 00:09:43 in the United Arab Emirates, as well as multiple in Europe, just to have this presence and support for OpenAI users anywhere in the world. So the Stargate Project was initially announced to invest $500 billion over the next four years as part of a larger AI infrastructure bill that they committed to in tandem with the American government as well. But they then had some banger announcements over the last week, Josh. The first of the two being their deal with Oracle, which is a $300 billion compute deal, where Oracle will supply the GPUs and the compute to help train OpenAI's models. And this commitment is spread over five years.
Starting point is 00:10:23 This in turn resulted in Oracle's stock surging 42%, which led to a lot of critics saying, okay, so they don't have the compute. They announce this deal, Oracle's stock jumps up 42%, and they use the money to buy Nvidia GPUs, and it kind of results in this cycle. But the point is, OpenAI commits to buy $60 billion of compute per year, $300 billion total over the next five years, which results in 4.5 gigawatts of power,
Starting point is 00:10:53 roughly the output of two Hoover Dams. Enough for about 4 million homes, Josh. Yeah. Okay, so I went through the math. I actually queried Grok about how much energy one gigawatt is. Just for reference, because we always talk about, like, okay, it's about 834,000 homes, but it's kind of outrageous to think about exactly how much it is. So with solar panels, each gigawatt is equal to approximately 3.3 million standard residential
Starting point is 00:11:19 solar panels. So multiply that by five, and you are at over 16 million solar panels. Each gigawatt is also equivalent to 400 large wind turbines. So multiply that by five, and you're at 2,000 wind turbines. So the numbers here are staggeringly large. Like, this will become a significant percentage of the total energy output. And it's probably a likely trend that continues, where the more energy we actually throw, the greater percentage of energy we throw towards solving AGI, the more efficient everything else gets. So it makes sense that these numbers are going to keep growing larger and larger. I mean, think about the terawatt that was mentioned earlier in the post. That's what these companies are aiming for. Where they get that energy, I don't know,
Starting point is 00:12:02 but I think the downstream effects of this war are really cool. Like, if a company is able to conjure up one terawatt of energy, the assumption is that we've gotten some sort of technological breakthrough that allows some sort of downstream effect that will not only allow these data centers to get better, but also our cost of electricity goes down a lot. And the world always gets better when the cost per kilowatt decreases. And we see this across the board. There are no energy-poor countries that are rich. But Josh, like, some of the math isn't mathing, dude. Like, look at this. OpenAI will spend $60 billion a year, six times its current revenue. So I have to imagine that OpenAI is structuring some sort of deal,
Starting point is 00:12:42 and we're going to talk about one with NVIDIA actually soon, where they're kind of raising money in the future, or they're agreeing with, like, SoftBank and Oracle that, okay, you give me this amount of GPUs, and by that time we'll have raised the money to begin with. And we're kind of seeing this with a follow-on announcement that they made after this Oracle announcement, which is they're going to build five new Stargate sites in cooperation with Oracle and SoftBank. Masayoshi-san is putting up a ton of money to support this as well. And the details are pretty interesting and of course include a lot of large numbers: four hundred billion dollars pledged over the next three years, with locations in Ohio, Texas, and New Mexico, plus an unnamed Midwestern site.
Starting point is 00:13:21 So again, we're concentrating very much in the U.S. It's funny: AI compute is pretty much becoming a war chest, literally, between nations, and whoever has the most power basically wins, which is, I guess, what's instigating a lot of the USA versus China rivalry. And a statement from Sam I found really interesting is, he goes, AI can only fulfill its promise if we build the compute to power it. If we are limited by compute, we'll have to choose what to prioritize. No one wants to make that choice. So let's go build. And that's his reasoning and justification for spending this amount of money. And it's funny, Sam has mentioned twice publicly now, once in a blog post and once in an interview with NVIDIA CEO Jensen Huang about the deal that we're about to mention,
Starting point is 00:14:00 that they can choose between education for all or curing diseases. And I think that's a very interesting way of approaching the sales pitch to this is like, hi guys, you have to choose. Unless we get more electricity, you're going to have to make this very difficult choice. And I don't love that. And I'm starting to see the tactics that some of the CEOs are using in terms of messaging. Like, Dario of Anthropic recently came out this week. And he was talking down on open source.
Starting point is 00:14:24 Sam Altman is now making this critical emotional appeal to the public. So it's interesting to see how they actually deliver this and try to get the messaging across, so that it becomes appropriate to raise and deploy this much energy, this much power, and AI compute. Yeah, I mean, I think the frustration that he's probably facing is people have been promising AGI, including Sam, and notice how he didn't mention AGI once in that letter that you're referencing. And they haven't really been able to deliver it at the consumer level.
Starting point is 00:14:53 They're achieving it with coding. They're achieving it with math and a bunch of other nerdy, very niche things. But they haven't achieved it with the wider audience, where it gets the masses to start believing in it. I think he's therefore trying to supplement that with a very strong and purposeful vision. And it kind of gets the story or the point across, right, when he says, we either cure diseases or we have the best education for children,
Starting point is 00:15:15 which one do you want? We don't want to have to choose. And it makes you kind of think. But yes, moving on to the most recent announcement, which I actually think is the craziest. NVIDIA is investing $100 billion in Open AI. A billion. A hundred billion dollars.
Starting point is 00:15:31 This is after already committing, what was it, like $5 billion in one of their series, whatever the hell, whichever letter they did. This partnership will supply 10 gigawatts of GPUs to fuel OpenAI's data center growth. That is a staggering amount of GPUs. I think this is strategic. I think that Sam wants some sort of guarantee that Jensen and NVIDIA are going to deliver OpenAI GPUs and they're not going to kind of falter on their promise, because Jensen's supplying xAI. Jensen's supplying Meta. Jensen's supplying Google as well, right, to an extent. So there's kind of like these trust issues forming between the competitors with so much
Starting point is 00:16:15 reliance on Jensen. Obviously Jensen's just sitting back laughing and enjoying all of this, but I think this is a strategic alignment. Do you have any comments? I mean, it's like 10 gigawatts here, 10 gigawatts there. Suddenly, like, you're talking about some serious power, some serious infrastructure, some serious money. Where this is all coming from, I don't know.
Starting point is 00:16:33 it's very clear that there is a reliance on a single man, being Jensen Huang, that all of this is built on top of. So it's just, I mean, it's another big deal that we will see how it plays out. We have yet to see anyone really eclipse more than a gigawatt. So to go 10x, a full order of magnitude, in the physical realm where you actually have to build physical infrastructure, like, okay, this is great, go do it. Like, let's see what happens when you go do it, because so far no one's been able to figure this out yet. Now, if you're listening to this episode and this is starting to sound like a bit of a pyramid scheme or a Ponzi scheme, you wouldn't be alone. A lot of people are looking at all these recent announcements, and they're kind of connecting the dots, and they're realizing, hang on a second: if OpenAI invests $100 billion in Oracle to buy cloud computing services, and then Oracle invests $100 billion in Nvidia to buy the graphics cards, but then Nvidia just announced that they're reinvesting $100 billion in OpenAI
Starting point is 00:17:36 to build AI systems, isn't it just the same money that we're talking about? Money which doesn't even exist, which hasn't even been raised, which hasn't even officially been committed just yet, but all of their stock prices are massively soaring. So, like, which comes first, the chicken or the egg? And it's just kind of a funny point that I think is worth mentioning. Josh and I are equally as excited about this growth, and we think AGI and investing in compute is super important, but we're also not trying to put the cart before the horse. And we're admitting that this might feel a little bubbly, and it might actually be a little bit of a bubble.
Starting point is 00:18:07 But that's the risk that we're willing to kind of take, or investors are willing to take, to build this out. You know what, Ejaaz, I don't care for the big numbers anymore. They mean nothing to me. $100 billion means nothing to me. Go build the infrastructure. Go launch it. Go show me half a million coherently training GPUs.
Starting point is 00:18:24 That's what I'm interested in now, because everyone has a big deal. Everyone has a gazillion gigawatts incoming. Go build the damn data center, please. And I mean, like you mentioned earlier, it seems like xAI is very much on their way. But they're not the only ones. We have more companies that are working on this.
Starting point is 00:18:38 What have we got next? Microsoft. No. Another giant. We have a tiny company called Microsoft. So Satya last week announced this thing called Fairwater, which is basically Microsoft's data center, and they hadn't entered the race yet.
Starting point is 00:18:54 I want to point that out. They were working on a lot of this on site at some of their offices, or just slightly adjacent to their offices, but now they're fully committed to building out their own AI supercluster. He goes, if intelligence is the log of compute, it starts with a lot of compute. And that's why we're scaling our GPU fleet faster than anyone else. He goes on to explain how Microsoft had basically already built out 10 gigawatts of capacity, which is pretty big.
Starting point is 00:19:27 But he wants to go much, much harder than that, targeting another 10 gigawatts. I'm noticing, Josh, that all of these companies are touting 10 gigawatts. It's the new gigawatt now, dude. And if I remember, prior to that it was 500 megawatts. And so we've come a long way. We are going up by orders of magnitude almost every quarter now at this point.
Starting point is 00:19:45 Again, seems kind of bubbly, but this is basically Microsoft's attempt at building out a supercluster to train their own models. Now, what I find interesting here, Josh, is Microsoft was very closely aligned with OpenAI. In fact, they were the premium cloud and compute provider for OpenAI up until, I think, about a month ago, when they sort of went through a bit of a subtle breakup, and Microsoft is now supplying more of Anthropic's models to their Copilot users
Starting point is 00:20:15 and Google's Gemini models. And OpenAI has announced that after 2030, they're basically not really going to be relying on Microsoft as much. So this I see as a clear signal that Microsoft is going to be going out on their own and maybe even training their own model. Yeah, this is very much the AI Game of Thrones. This is how it goes. Everyone is out for themselves.
Starting point is 00:20:37 You have deals when it's convenient. You destroy those deals when it's not convenient. And the only person that matters in this, the king of them all, is Jensen and those GPUs, because without those GPUs, none of this happens. So you start to see how these dynamics work and play out. I mean, Microsoft was using OpenAI to the extent that it was valuable. The second there was resistance and lost value, see you later. We're moving on to the next big one.
Starting point is 00:21:02 And I think that's going to be the nature that we continue to see as we go through. But there's more. Intel is in the news. What does Intel do? I don't think I've touched an Intel product since they stopped making the chips for the MacBooks. So what are they doing here, please? Okay. So we've come a long way.
Starting point is 00:21:17 And by a long way, I don't mean a good way. Intel's stock price and general infrastructure or quality of chips that they were producing declined pretty massively over the last three years, I would say. To the point where they, I'm sorry to call it this, but it is, they needed to get a government bailout, Josh. Did you know that the US government currently owns 10% of Intel? Yes. In fact, we mentioned it on an episode a week or two ago,
Starting point is 00:21:42 which was a huge deal. Like, the government is now investing in private markets and they own a company. I've never seen nationalization on this scale since probably the industrial revolution. I've never seen nationalization of a core technology as aggressive as this from the U.S. government until today. And yeah, they own 10% of Intel.
Starting point is 00:22:03 And actually, Jensen Huang and NVIDIA invested $5 billion into Intel as well. And the breaking news this week is Intel has asked Apple to make a similar investment into them. Why I found this funny, Josh, isn't to do with Intel at all, but it's to do with Apple. Because we're giving a breakdown of the top companies and their strategy towards AI, and I'd be remiss if we didn't announce our friends
Starting point is 00:22:27 or talk about our friends. I wasn't going to mention Apple at all because they have zero strategy, zero data centers, zero gigawatts. But I wanted to throw them a bone. If they do end up going forward with this investment, I do think it will end up being a really good strategic move for themselves. As you mentioned,
Starting point is 00:22:45 they used a bunch of Intel chips for their MacBooks, and potentially they might end up using a chip that Intel designs for whatever Apple AI product. Maybe that's me being too optimistic, but I see it in the future. Intel also signed a really close partnership with Nvidia recently, in which both agree that they're going to be building out very specific GPU architectures and GPUs that work cohesively amongst their product suites. So I think, all in all, Apple has a chance to take a shot here. Okay.
Starting point is 00:23:16 It makes sense. I mean, Apple has their own silicon that they make, and they've been adding these neural cores into it to do the AI training. To me, my interpretation of this news is Intel is kind of becoming a matter of national security, where we need to onshore our chip manufacturing, and the government is getting involved and saying, like, you cannot die. We need you. Jensen, go help your friends. Tim, go help your friends.
Starting point is 00:23:38 Like, we need them to live. And that's kind of what we're seeing here. I'm not sure Apple has much use for them. I was looking at an iPhone teardown recently. They have the Apple chips. They have Qualcomm processors for cellular. And that's, like, kind of it. So this might just be like, hey guys, you've got to go help them, and, like, we'll spare you on tariffs and things like that.
Starting point is 00:23:54 It's my take at least. So I'm kind of bullish on Intel just through all the subsidy that I presume they will be getting going forward. What was the name of that ex-OpenAI researcher, Leopold Aschenbrenner? Oh, yeah. Dude made a killing. Yeah. Yes. He left OpenAI and he started a fund. I think he raised a billion dollars to invest in companies that were in accordance with this massive thesis that he wrote on how AI and AGI was going to pan out. And one of the biggest investments that he made with that fund was in Intel. This was before anyone was talking about Intel. And he is now up 120%. So one of the best investor returns ever.
Starting point is 00:24:37 I just, I have one last question for you. Where's Anthropic? They seem to be a big company. Yeah, right? Like Anthropic, the ones who make Claude, the ones whose leading code model was at the frontier for a long time. Kind of quiet.
Starting point is 00:24:54 Oh, you mean the one that's fudding AI every day despite them being an AI company? Sorry, yeah, okay. I've got to call it out, right? Okay, so I'll tell you exactly where they are through the words of Elon Musk himself, in a tweet that I have pulled up here: winning was never in the set of possible outcomes for Anthropic.
Starting point is 00:25:11 An absolutely savage sentence to describe that Anthropic hasn't really got any compute providers. They haven't really made much effort or as much investment in data centers, and they're running out of funding. They're running out of backing. Josh, I have a take that you may or may not agree with. I don't know how hot this is. Okay. I think your favorite company Apple acquires Anthropic. That's a hard take.
Starting point is 00:25:38 That'd be kind of crazy. I'm not sure what Apple would do with Anthropic. No? Well, they need a model that they can own themselves. But they need a small model. They need a model that can run on iPhones. They need a model that can, like, improve my MacBook. And I'm not sure Anthropic is worth that.
Starting point is 00:25:56 Probably not. Like, Apple's making pretty good local models. And to be honest, it's not that hard. In fact, open source models in the next year are probably going to be good enough where they can just grab one, make it custom for their workflow, and then it's good. I don't think Apple's going to be making
Starting point is 00:26:13 bleeding-edge LLMs. Okay, so fine. Then maybe they're just two loser companies. Because Anthropic, for some reason, doesn't have the funding. Yeah, the way I see the AI race currently is there's three players: it's Google, it's OpenAI, and it's xAI. Everybody not listed there is not really interesting and will probably have some sort of eventual demise or just will not exist at the frontier. That includes Microsoft. That includes Perplexity.
Starting point is 00:26:39 That includes Anthropic; that includes anybody else. It's really challenging. And as these numbers get bigger, as the infrastructure grows, it's only going to get harder. This is a tremendous scale that we've never tried to operate at before. And it's really going to be difficult. So to even have a seat at the table now is very impressive. It's going to be really interesting to see, I mean, just over the next six months, how all these buildouts are going to go. Because if you are late on a buildout, if you cannot actually complete this new 10 gigawatts of compute,
Starting point is 00:27:05 you are cooked. That's it. The frontier has moved on without you. Well, speaking of being cooked, I want to take a second to talk about Europe. What we've just discussed are all the companies that are in America, USA, USA. And you might be wondering, well, what's the rest of the world doing here? How are our friends across the sea? The answer is not very well, but they're celebrating incredibly tiny wins.
Starting point is 00:27:30 And I mean 4,000 GPU tiny. So we just spoke about a bunch of data centers that are being invested in by separate companies to the tune of hundreds of billions of dollars for millions of GPUs. Germany is celebrating setting up 4,000 GPUs to help them become digitally sovereign. 4,000. I used to do reselling where I would buy the hot new Nvidia GPUs when they first came out, using bots, and resell them. And I feel like I have accumulated a comparable amount to the entire country.
Starting point is 00:28:05 That's like very disturbing. 4,000 GPUs is like, we're talking about a million coherent GPUs. So like, I'm happy for you, but you have some work, my European friends. Sorry, they're just going to keep, they're going to keep over-regulating. That's the biggest gripe that I've had with Europe in general. Whatever new fancy technology takes place and gets created, they just over-regulate the hell out of it. So much so that founders want to leave the country or the region and they want to come over and build an American. And I don't blame them.
Starting point is 00:28:35 But there is a foreign adversary, Josh, which presents themselves as a real threat to the US winning this compute race. In fact, I wouldn't actually say that it's America versus America. I would frame it as it's very much America versus China. There are two real things that I want to highlight here, because there's a lot we can cover here. It probably deserves an entire episode to itself. So point number one is China has secretly amassed. Over 3,300, that's according to GROC and Open AI, gigawatts of energy supply over the last decade.
Starting point is 00:29:15 This is before AI became a mainstream topic and use case for this energy. They just built out this infrastructure. And as we know, China has been kind of the infrastructure and manufacturing kings for a while now. They are the kings of building out cheap tech that can scale, that's efficient, cost-efficient, energy-efficient, all of the efficiencies, right? And they are now funneling all this energy
Starting point is 00:29:39 towards training some of the best AI models. But they had a missing piece, Josh. They didn't have the best hardware, chips, GPUs specifically. So they relied on our friends at Nvidia, on Jensen Huang. And this tweet kind of highlights another major concern, which is the second topic. In 2022, the US banned high-end Nvidia GPU exports to China, wanting to slow down AI development in China.
Starting point is 00:30:07 So China invested in homegrown AI chip production and caught up in three years. Now they are banning Nvidia AI chips themselves. And this highlights the second story, which is China banning its tech companies from buying Nvidia AI chips. And I dug into this, Josh. In fact, I covered this in our most recent newsletter,
Starting point is 00:30:27 which we released last week. You guys should definitely go and check this out. It compared the chips that Alibaba and Huawei have created. And they are as good as, not the latest generation of Nvidia GPUs, but the generation just before it. So they have caught up so unfathomably quickly that they present themselves as a real threat.
Starting point is 00:30:54 Yeah, the thing about China, and we talked about this in a lot of previous episodes with DeepSeek and other open source companies like that, is they are so good at being resourceful, where they can take the limited resource constraint as their biggest benefit, and they're able to optimize within the confines. So the second they unlock additional resources, those optimizations bubble out into this huge explosive growth. And what we're seeing now is, I mean, a lot of these large language model companies, these AI labs, they've been accustomed to having these tremendous constraints on how they can actually train these models. But they've still been at the frontier because they've optimized the software so much. So now that China is starting to build their own in-house optimized GPUs, well, that's a
Starting point is 00:31:34 really big deal because if you think about the way they've been building GPUs in the past, I mean, they haven't been that good, but they've been kind of being resourceful because they don't have all the capics that Nvidia has. They don't have all the resources. Invita has. Now that they are starting to really ramp that up and they have, they clearly have the software side. Now they're getting the hardware side in addition. That becomes a serious thing to look out for. Like, Nvidia's monopoly suddenly starts to look a little shaky because I mean, at the same time, China is moving really quick. The benefit of China moving very quick means there is more Nvidia for the U.S. And because there's more invidia in the U.S., all of these
Starting point is 00:32:08 projects that we have grow much quicker. So I think it's good for China, it's good for the U.S. We just need to make sure that it is not bad for the U.S. and that we still continue to move quickly because, my God, China is moving very fast. And if we don't keep up this hardcore rate of acceleration, we are also going to be in trouble because, I mean, like we mentioned earlier, you are late on that curve, it moves very quickly. And if we are one cycle behind, if we get left in the dust, that's a huge difference that we're going to notice. It's funny. The way I picture America versus China is America in the West throw money, tons of money at the problem. China doesn't have as much money, so they need to be more resourceful, as you point out. And it's
Starting point is 00:32:50 not just chips. It's not just energy, right? Over 50% of the world's top AI researchers reside and are produced in China, right? So they have the talent density, they have the manufacturing and scaling capability, they have the energy, and now they might have the chips. I wouldn't be surprised if we saw some of the best AI models
Starting point is 00:33:09 in 2026 come out of China, but we'll wait to see. We'll probably see some signals, right? Like this first signal is China's banning external GPUs, where now they feel confident enough in their own GPUs that they will force everyone to go in-house. The next signal we probably get, which is when we should start raising some red flags, is when they stop making these models
Starting point is 00:33:26 open source. They've been making all the models open source because that's how you kind of get this iterative development. That's how you access more developers. Once these models start to become closed source, once they start to have their own in-house GPUs from China, that's when things get a little scary. That means that they have caught up. And that means that we are then, we are the neck and neck and they're confident enough to keep all of this now behind closed doors. And this is a very high-stakes race. This is probably the new Cold War, right? Because, I mean, you never really know what the downstream effects of reaching AGI will be until we get there. You can assume it's probably pretty important and pretty impactful.
Starting point is 00:34:00 So that is the breakdown of all the top AI companies and their scaling efforts. You might notice that we didn't mention Google. That's because Google keeps a lot of stuff under wraps. They buy a hell of a lot of Nvidia chips and GPUs, but they also have their own in-house TPUs, which they use to train these models. So once we dig up more information, we'll definitely talk about that on the show. But Josh, I want to kind of round this episode up with one final question, which isn't one that you're going to be unfamiliar with. Are we in an AI
Starting point is 00:34:31 CapExpo? And it's funny because I mentioned this in the last episode, but so much has happened since that last episode that I need to ask you again, are we crazy or is this valid? I think I probably have the same answer as I did last episode, which is compute will be the only and last thing we will need to spend money on. At a long time scale, at a short to medium term time scale, I don't know. How much money is left to throw at this before it starts to run dry, before you can no longer prove to investors that it is worthy, it is able to generate the revenues required to offset the costs. It seems, I mean, directionally, this is absolutely not a bubble. This is the real thing. This will be the only thing we spend money on forever. The conversion
Starting point is 00:35:14 of energy to intelligence is tremendous. But along the way, I don't know, we're moving quick. And, I mean, like we said, 10 gigawass is very hard. That's an order of magnitude bigger than where we are. If they can build it, then we are probably going to extend the length of this bubble, maybe not even see it. But in the world where we start to run up against walls in this development of these order of magnitude gains every couple of months to a year, that's when things can get a little bit shaky. But the numbers are getting big quickly. So I have to say, do you have any strong feelings either way? No, I think if you're not spending most of your time, money and effort on training the best AI models,
Starting point is 00:35:54 a large part of which is just simply spending money on computer infrastructure, you're going to lose. You're not even trying to play in the first place. So I think it's valid. Will this end up as a winning move? I think there'll be a lot of failure, if I'm being honest with you. I think we'll find out retroactively or in hindsight that a bunch of this money was misspent. We didn't talk about the potential red herring of having a completely new AI architecture be produced, which drives the cost of compute down even more to train a frontier model. And of course, that's kind of like not wishful thinking, but kind of with my tinfall hat on. But I think right now
Starting point is 00:36:32 it's the right move. Yeah, that sounds about right. And again, we'll see. We'll be here right on the frontier covering it all, keeping everyone up to date. I am very excited to learn more about Google. I am so optimistic on Google, but they are very private in how they handle everything. And you know what, to be fair, Anthropics are fairly private too. So there's a lot still left to be known, but there's only so many ways you can hide a 10 gigawad data center. Like, you can see that from the stars. So people will find this out. We will get the answers we need.
Starting point is 00:36:58 And we'll report them right back here on the show for everyone to hear. But yeah, I guess that said, Ejaaz, any parting thoughts before we wrap here? Nope, that's it. I hope that this was a useful overview for understanding the sheer effort and money that is going into building out these data centers and why they're building them out. I think it's the most important race to watch, and we're going to be covering it very closely as we produce more episodes on the show. Separately, we've been getting so much feedback from you guys.
Starting point is 00:37:29 We ask for feedback at the end of every episode thinking, oh, maybe we'll get a few more comments. Our YouTube comments and our DMs are piled up, and it is some of the most useful feedback that we've gotten, everything from the graphics that we're using on these episodes, to the way that we sign off these episodes, to the topics that we discuss, to things that we might have missed and things that we might be wrong on. So I appreciate all of the people that are calling us out. I appreciate all the people that are cheering us on. If you haven't liked and subscribed to any of our episodes, please like and subscribe. And if you have anyone that you think might be
Starting point is 00:38:06 excited by the episodes, stuff that we talk about, please share it with them. we will see you on the next one. Peace.
