Limitless Podcast - Elon's Recipe for Winning the AI Race: Grok5 and Colossus

Episode Date: September 23, 2025

In this episode, we examine advancements in AI through Elon Musk's xAI, focusing on Grok 4 Fast. We discuss Musk's claim that Grok 5 could achieve AGI (artificial general intelligence) and Grok 4's impressive benchmark improvements. We highlight Grok 4 Fast's two million token context window for enhanced efficiency at lower costs. The episode also explores the competitive AI landscape shaped by significant investments from tech giants.

------

🌌 LIMITLESS HQ: LISTEN & FOLLOW HERE ⬇️
https://limitless.bankless.com/
https://x.com/LimitlessFT

------

TIMESTAMPS
0:00 The Rise of Grok 5
1:41 Charting the Path to AGI
3:32 Breakthrough Techniques in AI Training
5:06 The Power of Plain Language
9:49 Grok 4 Fast: A Game Changer
14:21 The Future of AI Accessibility
18:14 Reinforcement Learning Revolution
21:58 Colossus 2 and the Energy Race
26:00 Global Investments in AI Infrastructure
28:15 Closing Thoughts and Future Episodes

------

RESOURCES
Josh: https://x.com/Josh_Kale
Ejaaz: https://x.com/cryptopunk7213

------

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:03 It's been a big couple of weeks for Elon. We had a few pretty hit episodes last week talking about Starlink, talking about the AI5 chip. And this week, it's just another big breakthrough, Ejaaz. This week, we're coming out with a lot of new Grok and xAI news, which is pretty exciting. I mean, one of the leading headlines, he said, I now think xAI has a chance of reaching AGI with Grok 5. Never thought that before. And now there's two things that kind of spawned this. One, which we'll get into a little bit later, which is the Grok Fast model.
Starting point is 00:00:29 It is remarkable. It is a full order of magnitude better than anything else for its size, and it is really, really impressive. But the thing we're going to start with, Ejaaz, is this chart that we're showing on screen right here, which is the single thing that convinced Elon: wait a second, maybe, maybe just maybe, Grok 5 could actually lead to AGI.
Starting point is 00:00:45 And it's because we're seeing this crazy anomaly on the chart where Grok 4 was kind of ahead, but somehow, without any major new release, Grok 4 is now way ahead. So, Ejaaz, can you explain to us, like, what's going on in this chart? How did they get so good so fast without a major new model release? I mean, this didn't even come from xAI, did it?
Starting point is 00:01:03 It's a good question. And no, it didn't come directly from xAI. It actually came from two random AI researchers, one called Jeremy Berman and the other called Eric Pang, who tweaked Grok 4's model, also known as fine-tuning, to basically make it a hell of a lot smarter. And so they put it to the ultimate test, Josh. It's this thing called the ARC-AGI benchmark. And for those of you who have not been spending all your time researching benchmarks, the ARC-AGI benchmark tests how good your AI model is at humanlike intelligence. What I mean by that is it presents the AI model with puzzles that it's never seen before,
Starting point is 00:01:49 that it can't possibly have been trained to solve and sees how good it does. Now, Josh, let me ask you this question. Before GROC 4 itself was released, what do you think the highest score was on this benchmark? A lower, but I'm not sure how much lower. I don't know this particular numbers, but maybe I'll guess 5 to 10% lower than what the best is now, kind of like an incremental improvement.
Starting point is 00:02:12 Nope, nope, no, no. It was way, way lower. In fact, the top models, from OpenAI, Google, and all those kinds of things, only scored between 5 to 8%. And then Grok 4 came along, and it broke that frontier and scored 22%. Guess how much these two models
Starting point is 00:02:31 that these two random AI researchers scored on this index. Wait, so you're telling me, I'm looking at the screen. I'm saying 79.6% is that right? Is this a 4x multiple on base GrogFord? 80%. And this had nothing to do with the XAI team at all. I want you to focus on this chart that I'm showing you right now. And look at my cursor circling around these two orange dots that are off into the distance.
Starting point is 00:02:57 You see Grok 4 Thinking over here, which was basically the heaviest, most expensive model that Elon and the xAI team released when they launched Grok 4, and they were just completely beaten by these two models. But I'm sure you're probably thinking, Josh, how the hell did these two researchers do that? And, you know, why aren't they being hired by Elon immediately? They don't have the resources of a giant lab. They're competing against, I mean, if you remember, these people are getting billion-dollar offers to come work for a single employer, and there's a collection of these.
Starting point is 00:03:29 So how is it that one individual? It's being a collection of these people. So these two researchers introduce two novel ways of training their models. One is called open source program synthesis, and the other is called test time adaptations. Before I get into an explanation as to how these work, I want to remind the audience that what really makes a model really intelligent is largely part in due to the data that it's trained on.
Starting point is 00:03:59 people spend so much money. I'm talking hundreds of millions to billions of dollars to acquire the best data to train their models. And the reason why this is so important is the model when it's trying to answer a question draws on the data that it's been trained on, right? So it's hoping that it can look back on the data that it's been trained on
Starting point is 00:04:19 and find the right answer somewhere in all of these tokens and characters, right, Josh? These researchers decided to flip that completely on its head. this thing called open source program synthesis where the model designs its own bespoke solutions in real time. So it doesn't even look at the data that it was trained on. It just looks at the puzzle that it's presented with and it tries to break it down into smaller components. So let's say the puzzle has 10 different steps to reach to the end goal, the correct answer. It'll break it down into 10 different little steps, whereas normally a model will just look at the complete set of 10 steps
Starting point is 00:04:57 and think, hmm, how do I get from step one to step 10? It just solves each step one at a time. And that was like the massive breakthrough that they made. And if this sounds familiar, you're probably thinking of this technique known as reinforcement learning, which basically has like the model like repeatedly go at a problem over and over again. This is pretty similar, but it's the next step up in that field. Okay, got it. Yeah, this news kind of really annoyed me because of how seemingly simple,
Starting point is 00:05:27 It was. I mean, Jeremy Berman, in the case of this, I got some examples of specifically how he did it. And he was originally writing in Python code, but then he switched to just writing instructions in plain English. And I think this is such an important thing that a lot of people forget, I mean, myself included, I'm speaking for myself here, that a lot of this really challenging, difficult work with engaging with LLMs is really just done in plain English. You're just writing sentences to a model in hopes that it produces a better output for you. It's not this crazy complex code base, although that exists deep down. But the way that they achieve this is actually just by writing plain English. So I did a little bit of digging. I have a few notes on how it works. And his system, it basically starts by having Grock 4. He chose Grock 4 as his model of choice. It produces 30 English descriptions of rules to transform inputs into outputs. So it takes that and then it tests these descriptions on training examples by pretending each is a test and scoring how well they match the correct outputs. And then the top five descriptions get revised individually with feedback on mistakes, like highlighting the wrong cells and stuff like that. And then it combines the elements into the top one to create these pool of description. So it kind of has this
Starting point is 00:06:29 iterative loop where it tests itself, it creates more examples, it gets better data, it confirms that it's the right output. And that's generally why you see the actual outputs of this model are a little more expensive. But the quality of it is amazing because it just continues to do this like self- iterative loop on itself and get better and better and better. Again, all in plain English. So if you are listening to this podcast in English, you are fully capable of doing this because you speak the language. And this isn't anything crazy. It's just very refined prompts that you're feeding to a model
Starting point is 00:06:58 that result in these unbelievable outputs that are now best in the world. That's the coolest part to me, E. Jazz. I don't know about you. No, no, I agree. And it reminds me of Andrew Carpathy's hit tweet three months ago where he goes, the new number one programming language
Starting point is 00:07:15 turned out to be English. It's English. Right? And kind of like to emphasize, again, how important this is. This isn't just another frontier breakthrough of another benchmark. I'm talking about the hardest benchmark that has just been three-xed by two random researchers, right? This is, again, puzzles that on problem sets that an AI model has never seen before. Typically, when you put an AI model up against a benchmark, it has some kind of context.
Starting point is 00:07:49 kind of think of yourself taking an exam at school or at university. You can look at pass papers. You can look at books. You kind of know what topics they're going to talk about. This is completely foreign to an AI model. And therefore, it is the hardest test. So to have something achieve this almost feels like, and Josh, I hate to say it, but I have to say it, like AGI. And I think the fact that none other than Elon himself was taken completely aback by this. I mean, again, to reiterate the tweet, I now think, X-AI has a chance of achieving AGI with GROC-5, never thought that before. And the fact that he is now saying, hey, by the way, GROC-5 starts to train in a few weeks.
Starting point is 00:08:30 And you know what? I think it's going to be out by the end of this year. I think just speaks to the importance of this development. Yeah, I think one of the things that was really startling for me was the realization of how little resources it takes to get so good. and then I was wondering, well, why clearly this isn't anything super novel, although they did do some unique training frameworks. And I think the reason that I, the conclusion that I came to was just scale. I mean, the cost per query, the cost per token of these new super high-end models that just came out is very high.
Starting point is 00:09:06 And you can't really scale that to a lot of people because the companies are just resource constrained. So it leads me to believe and leaves me to think. Well, what happens when a company with a lot of resources dedicates all of their brain power to, this specific type of reinforcement learning, like we're going to see with GROC 5, and they do so in a way that's compressed enough that's efficient enough to actually run it at scale on the servers without melting everything down without charging $1,000 a month per membership. And I think that's probably what we see with GROC 5 is this new juiced up reinforcement learning, but efficient and actually built for scale. And I mean, even if it just launches at the specs of these two individual researchers,
Starting point is 00:09:43 that's a huge win because that's an incredible model. Yeah. And it's open source. It's open. It's open. It's not available for everyone. It's pretty remarkable. Yeah. So I think very interesting things coming. If I was a betting man, I would be betting big on GROC 5. I think they very much see a solution that people really want. I was just thinking about why both of us are finding this development, both amazing, but really annoying.
Starting point is 00:10:06 And I think it's because to some degree, we both believe that in order for AI models today to get to AGI, we would need to completely re-architect how they're designed. you know, Transformers was the big breakthrough. That's why models that we know and use today are so smart. But it's not as smart as we expected it. And there was this kind of like lag of improvement. And now we suddenly see a 3x improvement where this model is kind of breaking this leading benchmark. And so now I think I'm starting to believe that maybe if we invest hundreds of billions of dollars in the post-training part, where typically we've been investing in the pre-training, in the compute, but if we invests,
Starting point is 00:10:46 in the post-training, we may clearly reach AGII before redesigning the entire thing up front. Does that resonate with you, Joshua? Am I just, do I sound crazy? No, it does. It does. It's funny because we frequently record the show and you expect to be surprised. And then something happened. You're like, oh, my, I wasn't expected to be surprised in that way.
Starting point is 00:11:03 And I think this is one of those things where, I mean, I wasn't expecting to see a new leader in between major model releases from an independent researcher. So the fact that this is even possible really just blows the doors off of a lot of expectations I had. And this isn't even the only interesting news this week from the XAI team because they released new model alert, GROC for Fast. Let me tell you. When I saw how this model worked, I was like, whoa, this is, again, blown away super impressed. Can you run us through some of the highlights, please? We have this spec sheet. Yeah. So first of all, the leading headline, 2 million token context window is outrageous. I think the current leader is Google with the Gemini model. They have 2.5 Pro and
Starting point is 00:11:45 flash, both of them, I believe, have a million tokens. This is two million tokens of context. For those that aren't aware, context is the basically active memory of a language model. It's the more context you can collect, the more clarity it has into the actual data that it's talking about and conversing with. You want that number to be bigger. This is the biggest by far, by a doubling. So that's a really important headliner. The second one, probably even more outrageous, 47 times cheaper than Rock 4, which is crazy because when you look at it. it on the scale below, if you can drill down just a little bit, Grog4 is right in line with every other great model. It is, uh, Grog4 Fast is just beneath O3. It's above deep seek. It's
Starting point is 00:12:26 above Cloud 4 sonnet. It's above Cloud 4 opus. It's just like this remarkable model that is better than a lot of the leading models, but 47% cheaper than the base model. And I think that's going to be a pretty interesting thing when we get into like scaling these models and using them for code. Ejiz, we talked last week about how good the Grom models for coding because it was so cheap and so effective. This is another case of that. And the way they did that, I was so interested in how they were able to come up with like this secret sauce to do it. And basically what they did is they taught to model to spend its brain power only on tools when it helps. So they use this like large-scale reinforcement learning to train rock
Starting point is 00:13:03 four fast to choose when to think in depth and when to answer questions quickly. So what that resulted in was about 40% what we're seeing here on the screen. Forty percent fewer thinking tokens on average than we've gotten from the previous model, which is a significant difference. Oh, and by the way, it's number one on LM Arena. So this was crazy. EJez, what were your reactions when you saw the team dropped this? I already thought these tokens were cheap. I thought these models were cheap enough. Do you remember when open air released GPT5? They kept flexing GPT5 mini saying, hey, you now have the power of our previous best model, but actually it's more intelligent. And it's like, I think it was something like for five times cheaper. And I was like, holy shit. Holy crap. I was like, that is like crazy magnitude. And now it's like, now we've got 47x cheaper than Grog 4. Grog 4, by the way, was already cheap compared to some of the frontier bottles. So I don't know how far this can go, but kind of zooming out, I have never been more confident than now that cutting edge superintelligence will be available for anyone and everyone. This isn't going to be some kind of closeted technology
Starting point is 00:14:11 where only the rich can buy devices and run it. I think anyone and everyone will have fair game access to this. And think about the dynamics that that changes up, Josh. Like, you can have someone in the complete middle of nowhere with a cell phone attached to Elon Musk's new 5G Starlink satellite that's beaming down to him. And he could kind of produce something that the world ends up using because he has access to this cheap bottle
Starting point is 00:14:37 that is actually super intelligent and can be used to create whatever crazy invention that he has or she has that dreams up. I just think this is insane. Yeah, the efficiency improvements are the thing that's always most exciting to me because, I mean, as we get more cheaper tokens
Starting point is 00:14:51 and as the tokens become more portable and lightweight, I mean, you could have the world of knowledge on your phone even without necessarily an internet connection because these models are getting so lightweight, so condensed, so effective. It's like, it's really, it's unbelievably impressive. And what I was really interested in is comparing this to the other models because I know Google was kind of doing a similar thing. They were leading
Starting point is 00:15:11 along the frontier. Oh, here. Here's this post from Gavin Baker that I love because it shows how Google has kind of dominated this thing called the Pareto Frontier. And on the chart, you can very clearly see how there's this kind of arc that hugs the outer bounds of all of the models. And it shows that, like Gemini Pro has been really good on a few things. So I briefly want to just talk about the Pareto Frontier concept because it's really interesting. And it will explain to you exactly why Grock 4 is way out there. It totally shattered what it is. So, I mean, basically, it's funny.
Starting point is 00:15:40 I was doing a little bit of research on this, and the Pareto Frontier is done by an Italian economist named Velfredo Pareto. So I just thought that's a fun fact because great name. Basically, it comes from the economist and decision theory. And it's a way to decide optimal tradeoffs when you have multiple objectives you're trying to achieve all at the same time. So imagine you're trying to optimize two things that might conflict a little bit. Like, you want to make product as powerful as possible, but also inexpensive as possible,
Starting point is 00:16:04 like these models. So in this scenario, there's this set of best possible solutions where you can't improve one aspect like the power without making the other aspect like the cost worse. And what we're seeing in this chart here is Google has made a series of those decisions, those tradeoffs that have led to the absolute Pareto optimal outcome along this outer band. What Grock has done is they actually made a new tradeoff that isn't necessarily a tradeoff. It's more of an innovation that allows them to unlock this perceived frontier, this limiting factor that was on the outer band, and just shatter it and create a new
Starting point is 00:16:35 Pareto optimal tradeoff using these best things. And they did that by doing a lot of magic, but basically what they have now is they have a really smart model that actually sits above Gemini 2.5 Flash and not too far below the pro model, but it is a order of magnitude cheaper. And I think that's where that outlier, that cost effectiveness is really unbelievable when it comes to distributing these tokens widely. So now if you're writing code, if you're creating an application, if you're just, if you're paying for tokens, this is very clearly the model you want to use. what you just described is Elon and XAI literally charting a new path, which is kind of like very behavioral of Elon in general.
Starting point is 00:17:14 And another thing that I thought was really cool about this is the reinforcement learning infrastructure team was kind of key behind getting this model as fast and as cheap and as efficient as we're describing it, right, Josh? They used this kind of like agent framework, which was extremely compatible with the infrastructure that they used to train. and iterate on this model in the first place. And what I wanted to point out here is there's a theme between the two topics that we've discussed so far on this episode, Josh. Number one, when we described the two models
Starting point is 00:17:45 that the researchers created that broke the RKGI benchmark, they specifically used a technique which used reinforcement learning, a new reinforcement learning technique. And the reason, if you remember, why Jeremy Berman picked Grockfall specifically was he said it was the best. best reasoning model because it in the way that had been trained via reinforcement learning.
Starting point is 00:18:09 And now we're seeing yet again this GROC fast model achieving what it can because of reinforcement learning. So I'm seeing a theme or noticing a theme here where XAI and Elon are basically the leaders in reinforcement learning, which I think is going to probably play in their favor. Maybe it's a hint that the models that are going to be closest to AGI that are the quickest that are the cheapest are embedded in reinforcement learning techniques that are just completely breakthrough. Yeah, it seems like the team really reasons. I mean, this is a core Elon notion, but they really do reason from first principles and what's important and what matters. And you're seeing that throughout the entire product as they advance. And I think what's really
Starting point is 00:18:48 exciting, what I'm most stoked about for this show in general is to compare this next round of models. Like, will Gemini 3 and GROC 5, like how are they going to? compete with each other because those are both going to be remarkable models. And it seems to me like those are like, those are currently the top dogs. I mean, as far as GPD5 was kind of a little bit of a miss, Anthropics been a little bit quiet. Gemini and XAI are on fire. But this also, there was, there was one last thing of news before we sign off today. Well, I was going to say, like, I'm highlighting this sentence here for those who are just listening. And it says, you know, we built this reinforcement learning infrastructure team with a new agent.
Starting point is 00:19:29 framework to help train GROC 4 fast, but specifically so that we can harness the power of Colossus 2. And if I remember correctly, Josh, there was some breaking news around Colossus too. Elon was getting into some fights. Can you walk us through it? Yeah, it's funny. There was this whole report from semi-analysis, which does a really great job. I highly recommend checking them out. And they released this report on the XAI data center buildout. And it was so funny to see, because a lot of times you just see satellite pictures or you read headlines and you're not really sure what's going on. The sole purpose of some analysis is to actually have boots on the ground, check the satellite images, and look at it with a scientific engineering point of view where they actually
Starting point is 00:20:06 understand what is going on. And they shared their findings in one of these articles. And I found one of these stories was so funny because it's such a testament to how the XAI team works, where they were having problems with their energy generation in Memphis, Tennessee, because people were complaining and they were having a tough time getting permits. And the core crux of every large AI data center is energy. So they were like, this is unacceptable. We need energy immediately. So what do they do? Well, they jumped over the state lines. They went over to Mississippi a couple miles down the road and they built these new generators right down the road across the state line. They got the permits they needed. They said, you don't want us Tennessee? We'll just
Starting point is 00:20:44 go right over to Memphis. You could see here. They took the power lines. They ran them back into Tennessee and now they're powering the data center. And part of the article was this funny story, but part of the article also is Colossus 2 being built in the sheer scale that Colossus 2 is going to be, and it's going to be over a gigawatt of energy, which is, I don't know how many hundreds of thousands of homes is going to power, but this is like a remarkable amount of power and a tremendous amount of GPUs. And they're planning to make these all coherent and they're using them exclusively, I believe, to train this new GROC5 model. So as this new training center comes online, they will be using this new cutting-edge world's largest supercomputer to train the world's
Starting point is 00:21:23 perceivably best model. But I found this funny because the day that this article came out, there was another post from another CEO of a very prominent company saying, hey, wait a second, we have something a little bit bigger than Colossus I currently. And that was from Microsoft's CEO, Satya Nadella. And he had this post where he said they just added over two gigawatts of new energy capacity. So, Ejazz, this is just a really crazy brawl between these people.
Starting point is 00:21:53 people who are building larger and larger AI data centers. And it eventually leads to the big news that just dropped a little earlier today. But do you have any commentary before? We get to the huge number. Yeah, there's actually one thing I wanted to point out, which is when Elon first announced that he was building out this Colossus 2 data center, it made headlines that it cost $20 billion. And everyone thought it was crazy. Everyone was yelling, this is an AI Kappex bubble.
Starting point is 00:22:18 There is no products that prove that all this investment makes sense. And now you have Satya Nadella, CEO of Microsoft, announcing that he's probably going to be investing twice as much of that to build two gigawatts of new capacity. Again, validating that there is a need for energy and compute to train these new models. Don't forget that Microsoft last week acquired a random European data center for, I think it was about, what, $10 billion, which caused its stock price to 3X because it itself was. Was it worth that much at the time of the reporting happening? And then it leads us to the even bigger announcement, which released this morning, which is Nvidia will be investing not one, not 10, not 20,
Starting point is 00:23:08 but $100 billion in OpenAI over the next couple of years. And you might be asking why. Well, it's because Open AI is going to be investing in so many data centers that is going to produce so much power. I don't know how many gigawatts. I think it's actually 10 gigawatts, which is 10x Colossus 2, 5x Fairwater,
Starting point is 00:23:32 which is Satya Nadella's thing, for all my mathematician fans out there. It is just crazy. Josh, are we in a bubble? Or is there a need for all of this? So here's the thing. I keep going back and forth about the bubble conversation
Starting point is 00:23:47 because $100 billion is such an outrageous amount of money to spend on... On making what is already a remarkable language model, even more remarkable. The product is great. And at least me personally, as a user of these products, I'm definitely getting closer to a wall of things that I use them for, where if a model is marginally smarter, my experience doesn't get that much better. But I was, so that's like one school of thought. And then the other is thinking, well, this is probably the only thing we'll ever need to spend money on going forward ever. So it makes sense to throw all of it at it now.
Starting point is 00:24:22 Because in the case that you do solve Asia, you get hyperintelligence, it solves all of your problems. And it gives you the better questions to ask in order to solve better problems. So it really, it would appear assuming that we continue on this trajectory of improvement, that it makes sense to take every disposable dollar you can to get better and better compute. And this will probably just extend forever, as we are able to harness more energy from the sun, from nuclear energy, a lot of that new energy and compute will just go to making better AI, which will then serve better downstream effects for how society works. So is it a bubble on the long term? I think absolutely not on the short term. I don't know. Where do you get the revenue from?
Starting point is 00:25:03 I don't know. I mean, it's a ton of money, but you know what? I think the reason why you and I feel this disassociation between the amount, how large these numbers are in investing in infrastructure versus what we're actually seeing is we're not going to be seeing AGI before some other fields or some other professions see it first, right? The clear example is coding. Coding has just been on an absolutely exponential improvement rate that has beaten out any other AI feature ever. You now have AI models that can code as well as a senior staff engineer, which is getting
Starting point is 00:25:38 paid like $300,000,000 a year, right? So my guess is this investment is worth it. my guess is the investment is going to come to fruition in professions, in use cases, in jobs that we won't see, but we'll maybe talk about or see the kind of like effects. Maybe it's in science where we create a new drug that cures cancer or whatever that might be, right? I think different types of professionals will see AGI and reap the rewards of this investments before average consumers see it. And then I think the other thing that I want to mention, Josh, is this isn't specific. to US or Western spending. In fact, our foes over the seas in China or in Asia
Starting point is 00:26:22 have been working on this for like the last five years. They've been building out massive data centers, which I think has like built up in aggregate of like 300 gigawatts over the next five years at least. And they've been investing in this so heavily. So it's not just a Western thing. It's an Asian thing as well. China is investing so heavily in this.
Starting point is 00:26:43 if this is a bubble, if we are completely wrong, this will be the biggest, most highest profile L that the world has taken. It's not just going to be a US thing. It's not just going to be a MIP something. It's not just going to be a Sam Altman thing. It's going to be an everyone's involved type of thing. Kind of like world ending event.
Starting point is 00:27:03 Yeah, too big to fail. So I do love this incentive structure where everyone is incentivized to make it work because everyone is equally at risk in terms of their exposure to the technology. So that I think I could be happy to sleep at night, where at least U.S. and China are aligned in one thing in which they want to achieve AGI. They want the smartest models. They're going to make their money pay off the best they can. So, hey, all the power to them. But I think, is that a wrap for today, EJS? We got anything else? That is it. That's a wrap on our little XAI mini episode. There was one fact that I wanted to just do a little, like, fun fact check, which is a gigawatt. And according to GROC, it powers approximately 750,000 to 850,000 to 850.5. 50,000 average U.S. homes per one gigawatt. So the scale we're talking is like a tremendous amount of gigawatts.
Starting point is 00:27:50 I mean, this, this InVedia project is 10 of those, which means that's about, I mean, on the high end, 8.5 million U.S. homes can be powered by a singular data center. So we're going to hope this works out. I think right now it seems like, I mean, GROC is cooking. The XAI team is on fire and they are in between models. I cannot wait until they get this new colossus strain cluster up or even Microsoft's. I mean, Microsoft's got a huge cluster. what are you doing with it, dog?
Starting point is 00:28:14 Like, let's see, let's see your stats. Let's see your numbers. Put a number up on the Arcage I leaderboard. But yeah, I think that's a wrap on all the fun, exciting new things about XAI. The comment section is by energy stocks. Yeah, buy energy stocks. We read all the comments. I read every single comment.
Starting point is 00:28:30 I try to reply to them too. So I would love for you to share either what you think about the show or who you think is winning this AI race currently. Do you, like, are we just kind of like, do we have Elon derangement syndrome? Are we just kind of like obsessed with everything he builds? Or is this act? It feels like it's pretty grounded. I feel like we have some good examples about how well they're doing. So I love to hear if you agree or disagree.
Starting point is 00:28:50 That would be a fun little thing for the comments. But anyway, that's wrap on today's episode. We have a couple more exciting ones coming this week. So buckle up. The next one, actually, the next one coming. I think EJA, myself, and we might even have a guest for that episode. We'll be probably in an all-out brawl. It's good that we're recording remotely because we might like,
Starting point is 00:29:05 blood could possibly be drawing next episode. Yeah, buckle up for that one. There's a lot to look forward to this week. But that's it for this episode. So thank you so much for watching as always. Please don't forget to subscribe, like, comment, all the fun things. Share it with your friend.
Starting point is 00:29:18 And we will see you guys on the next one.
