Limitless Podcast - THIS WEEK IN AI: Real Anthropic Revenue, Meta Muse Spark, OpenAI's Next Model

Episode Date: April 10, 2026

Meta's MuseSpark is a new AI model potentially impacting over 3 billion users, and worth comparing performance with Claude Opus and GPT. We also address Anthropic's revenue surge from a Google deal, highlight public unease about AI, and explore a new robotic lamp for household chores. Finally, we touch on SpaceX's TeraFab project with Intel, poised to transform AI chip production.

------
🌌 LIMITLESS HQ ⬇️
NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/

------
TIMESTAMPS
0:00 Meta's MuseSpark
0:43 Merging AI with Personal Data
1:35 The Idea of Personal AGI
4:06 Visual Intelligence in Action
7:16 Ethical Concerns of AI Models
10:10 Anthropic's Revenue Controversy
14:07 Accounting Practices Under Scrutiny
15:07 OpenAI's Next Big Model
20:15 Data Centers and Public Sentiment
23:30 Innovations in Household Robotics
26:55 SpaceX's AI Chip Manufacturing
31:23 Society's Divide on AI Progress
33:11 Weekly Tech Roundup Conclusion

------
RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213

------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:00 This week, Meta released MuseSpark, its new flagship AI model built from scratch by Alexandr Wang's Superintelligence Lab. And, noteworthy, for the very first time it's not open source. They closed-source the entire model, but the model isn't what caught our attention. It's the distribution. MuseSpark is rolling out across Facebook, Instagram, WhatsApp, and the Ray-Ban Meta glasses to over 3 billion users. That's more than every other AI company on Earth has. This is a really big deal. Meta has been calling it personal superintelligence.
Starting point is 00:00:29 You log in with your Facebook or your Instagram account, and then the model pulls from the social graph that it has built over the past, what, decade that you've been using these applications. So no other model in the world has the type of information that this new Meta AI model is going to have. Now, here's where it's interesting. Meta is also developing another research project in parallel called Tribe V2, something that almost nobody's talking about, but it's trained on brain scans from human beings. And it understands how you react to certain imagery and certain videos, so that it knows when a video or a piece of content is going to light up a part of your brain that makes you interested and engaged.
Starting point is 00:01:03 Now, they're claiming to do both of these separately and in parallel, but you have to ask the question: what happens when these things merge together? What happens when Meta knows everything about you? They know everything that's going on in your brain, and they're able to serve the absolute best content possible. They become the 21st-century drug dealers through this artificial intelligence. And maybe that's a doomer take. I know, Ejaaz, you have a very different take on this, perhaps. We're going to talk about that and a lot of other things on today's episode, including Anthropic's not-so-real annual recurring revenue. We have some news around OpenAI, and then some cool robotic arms; we actually had the guest on a few months ago. He is back with the final
Starting point is 00:01:37 products. There's a lot going on this week, but first, meta, EJAS. Meta's a pretty big story this week. They released Muse Spark, which is the first frontier AI model that they've released in over a year now, which is a crazy amount of time in the AI world. Now, I'm thinking about the story arc of meta, and it's hard to be over-enthusiastic about this model. Last year, Zuck burned around $75 billion on AI CAPEX. He then fired 600 of his AI staff and then hired 150 more staff spending $25 billion to do so. $15 billion of which he used to hire one guy, Alex Wang,
Starting point is 00:02:15 who leads meta-superintelligence labs, who helped build this model. So I'm expecting big things at this point. Unfortunately, the model underperforms, Opus 4.6 and GPT 5.4 across the most important benchparts, which is coding and reasoning. So it's easy to be bearish about this model, but I dug into it more. And what I realized is that's not what Zuck or META is going after at all. They're going after what I'm dubbing personal AGI. Personal AGI is basically super intelligent AI that's trained on your own
Starting point is 00:02:46 personal data. It'll become an AI assistant that basically lives and breathes you. So how is Meta going to be able to pull this off versus all their other competitors who are leagues ahead of them, well, they have something that none of them have, which is the personal data and very incriminating data of 3.4 billion daily active users. So not only do they have a crap ton of data, they also have the data being refreshed every single day. Now, I'm personally not a user of Facebook, but I use WhatsApp. I use Instagram every single day. So they have already collected a bunch of data around what type of conversations I'm having, what type of media that I'm liking, what kind of memes that I enjoy, and they can use this to engineer the perfectly crafted AI
Starting point is 00:03:29 assistant that no other lab can do. And this is what they're doing. So the benchmarks that do matter, if you look at this table over here, they excel, or this model excels in visual reasoning and multi-modality. So it can look at a picture and understand it and perceive it better than Opus 4.6 and chat GPT 5.4. And so if they're going to end up building a model that ultimately can feed you a better algorithm or feed you the best content, this is going to be the bottle that enables that, and that's what they're doing. Yeah, the personal AGI is, it's a good way to put it.
Starting point is 00:03:58 I don't think this needs to be everything for everyone. This just needs to be better than nothing, because the reality is that a lot of people still aren't using these AI tools in their day-to-day life. What I found interesting is the examples that they showcase. Now, we don't actually know what it looks like applied to these social media networks. We're not sure what that integration is going to look like. but they showed a few examples in the blog post that they shared, which is it has the visual intelligence. When you point your camera at something like a refrigerator, say, it knows what's in there.
Starting point is 00:04:23 It can calculate the amount of protein in that block of cheese or the amount of calories in that like hamburger that's sitting there. And I think that's really interesting. They had another example of someone who was doing yoga and you could actually analyze the positions of the arms and the legs and the body and you could see if they were doing it properly. So the visual element of it, I think the native multimodality is something that's pretty noteworthy and interesting. Yeah, here's the example of the food. Here's the yoga pose. I'm not sure what that looks like fully integrated into our social media experiences today,
Starting point is 00:04:54 outside of what could be just like a much stronger algorithm. But they did release meta.ai, which is a website that you can go to to actually try this out yourself and run some tests. So I would encourage anyone who's curious, who wants early access to the model before it hits your Instagram feeds or WhatsApp to go actually on the website and check it out for yourself. It's available for anyone with an account. Yeah, and it's been a labor of love for Zuck. I think he famously said on an interview about a month ago that he is willing to spend the entire wallet on AI Kappex and training the best AI model versus risking losing out on this race entirely.
Starting point is 00:05:27 He would rather make an incredibly expensive mistake than produce a sub-tier model. They spent the last nine months pre-training this model. Fun fact, typically pre-training takes around a couple of months and then you kind of go into post-training. So they've invested a lot of time, money and heart into this. So that's why I'm kind of like, I'm kind of split as to why this model isn't as good enough. They delayed it. They delayed its own launch about a month as well. But the KAPEx spend keeps ongoing.
Starting point is 00:05:53 They today announced a partnership with CoreWeave to the tune of $21 billion. So they're just pushing ahead with all the data center type stuff. But you mentioned a model earlier on, Josh, which kind of went viral a couple of weeks ago. and this is the scary part of this entire story. I would love to think that meta is in my best interests and wants me to become a better productive human. But stuff like this, this new model, Tribe V2, which basically reads your brain signals
Starting point is 00:06:21 and predicts what type of content is going to stimulate different parts of your brain and make you more engaged in certain types of content doesn't give me hope. Because presumably they're going to use this model to understand what types of content stimulate different people's appetites. for videos, reels, memes, or whatever that might be, and then AI generate this exact amount of content
Starting point is 00:06:42 that they can feed you in the timeline because they already have the distribution, right? They already have the social media platforms. They can create and perfect the content and own that without relying on creators. They now have a recursive dopamine loop that they can just trap you in. Doesn't give me hope.
Starting point is 00:06:55 Maybe that's a doom of take. Well, I'm curious what the stated intention of this model is, right? It's like, why are you trying to understand how the brain reacts to certain type of content? And share it's interesting. I know they're working on their meta glasses and they have that like wristrap that's kind of like a neural interface that detect your hand movements. And understand this information is probably more helpful to their future hardware products. But also, I mean, the clear and obvious use case for this is understanding what types of impulses trigger when you watch a specific type of content and then funneling that into your feed on a daily basis. And Luke who is behind the scenes, he was mentioning earlier that in a way they become like this high end drug dealer that can feed you like some. dopamine here or some cortisol here and they know exactly how a video is going to hit your brain. And in fact, they can optimize those videos through a very tight feedback loop to improve them to a point in which it can guarantee that the part of your brain that they want to fire will fire.
Starting point is 00:07:49 And that level of deep understanding, not only from the data and preferences they've collected over the last decade, but also now understanding the biological brain and truly at a deep level how it works. I can see the good use cases for it, but man, it gets scary quick because they have their vibes app. I don't know if anyone remembers it. I'm not sure anyone's even used it, but it's the short form scroller that is AI generated content only. And this is like injecting that with steroids. And I could see a world in which this content gets very good, very quick. And is it a good or bad thing?
Starting point is 00:08:18 That is TBD. You can argue that I would prefer to get good ads that are personalized for me that actually sell me things that I'm interested in versus nonsense. But at the same time, it's not my decision because they will have more understanding of how my brain works than even I will. And that's a little unnerving. I'm going to argue on the side of Skynet. It's a bad thing. Prove me wrong, Suck, if you're watching this or anyone from Meta, I would love to know. Tell me the counter argument.
Starting point is 00:08:43 But the good news is, even if they are going off to your brain and hijacking your dopamine circuits, you might be able to get paid for it. This was a story that broke a few weeks ago where this kid, or I think she's like a young adult now, basically won a court case against Meta and YouTube, where she's, She got paid $3 million for the effects of social media, specifically Meta's platforms and YouTube, had on her depression and personality, her core years that formed her adolescence.
Starting point is 00:09:14 She got a $3 million paycheck. I'm wondering if I could do this, because I definitely suffered from a lot of those growing up. Yeah, what a come up. I mean, it sets a somewhat dangerous precedent, right? It's like I spend a lot of time on my computer, and my mental health, sure, it could be improved. Am I eligible for a $3 million raise?
Starting point is 00:09:28 And if not me, I certainly know some other people nearby. that definitely qualify for this if that's the parameter that counts. So $3 million paycheck, I don't know, not a bad deal. Dangerous precedent if people are going to be able to start claiming that as they go because, I mean, again, this company has, it's done a lot of great things, but social media as a whole has generated a lot of damage, specifically around the meta-owned companies. And we'll see what happens. Meta has been disappointing. They continue to be disappointing, but I am hopeful that a founder-run company led by someone like Zuck can figure out a way to really turn this into something special and meaningful and positively impactful. So we'll see.
Starting point is 00:10:08 I think that is the meta story. Now we have another story that has left me a little unnerved, which is the annual run rate revenue story between Anthropic and Open AI and how they're actually counting their revenue. Because the headline is that what? Anthropic just went from 19 billion to 30 billion in annual revenue in like a month? Yeah, so the breaking news here was that Anthropic had signed a multi-billion dollar deal with Google to basically use their TPUs to train Claude, but there was a hidden nugget in this article, which revealed that Anthropics revenue run rate, that ARA, has officially hit $30 billion. Now, for context here, at the end of last year, it was $9 billion. At the start of the year, it had just about hit around $12 billion. Last month,
Starting point is 00:10:55 it hit $19 billion. In a single month, it has gained $11 billion. Now, that's the result of Claude Opus being amazing and the rumored Mithos model, which is now confirmed. We did a whole episode about this, being as good as people claimed. So people are obviously buying Claude subscriptions and running their revenue rate up to $30 billion.
Starting point is 00:11:16 I thought this was amazing until we had a conversation this morning, Josh, where you said that this is fake news? This is nonsense. The accounting is technically gap compliant, but the way that they get there is so vastly different that you really can't compare the two companies together. I mean, this example is showing Anthropic is at $30 billion of annual revenue, and Open AI is at $24 billion. But then explain to me why OpenAI is valued at twice the current valuation of Anthropic. It's an accounting problem.
Starting point is 00:11:43 Now, OpenAI has to deal with Microsoft, where OpenAI shares 20% of its revenue with Microsoft. And the financial statements, it counts those sales before that deduction. But for like Azure cloud customers buying Open AI models, OpenA only books 20% of that cut as the revenue. Anthropic, they have a deal with AWS, Google, and Microsoft, and all three of those cloud providers resell cloud to their customers. Anthropic books the entire thing as revenue, and then marks off the 80% cut as a line item in marketing expenses.
Starting point is 00:12:13 And that is such a huge different. I mean, both of those are technically gap compliant, but they produce very different top line numbers. And I think this is really important and a lot of people are missing this, that when they see Anthropic projecting a $30 billion annual recurring revenue, that's only projecting out what they've currently done for the last four weeks, and it's counting all revenue, including the amount that they're going to have to give back to AWS,
Starting point is 00:12:39 Google Cloud, and Microsoft Azure. And that's like a huge accounting difference. That is a big problem that I don't think a lot of people are taking note of. I have so many thoughts on this. Number one, how is this compliant and legal? like why are they allowed to do that? Number two, this is financial accounting crime. Like they shouldn't be able to do that.
Starting point is 00:12:57 But something isn't adding up for me, which is one thing that I see a lot of these articles comparing is how much revenue they're making, but also how much they're burning. So for example, with Open AI, they're making, let's say, $25 billion this year, but they're also burning $25 billion. And so if Anthropic is moving the line item for their cost center to just say marketing, shouldn't their burn rate still be higher than what is being reported on? Like something doesn't make sense. Like, is the financial times just wrong?
Starting point is 00:13:27 And they're not, they're missing this line item completely? Or is Anthropic genuinely just not burning as much still? And they're still on track to making a profit by 2028. Because that's what all the projections have it at, right? They're going to be making a profit or turning a profit much sooner than Open AI is, who are spending way too much on CAPEX. So there's something I'm missing, though. Yeah, I wonder if they're not counting it as burn because it's,
Starting point is 00:13:49 It's technically not. Like they are earning money on it. Just 20% of what they are actually putting on the accounting buck. So like they're not actually burning cash. They're just accounting for more cash than they're going to keep. Right. But the 80% is being put on the marketing expenses book, right? Well, I may I'm misunderstanding that?
Starting point is 00:14:08 That one we're going to have to talk to an accountant about. I'm not sure. I do know that we are comparing apples to oranges though when we look at open AI in Anthropics run rate. And I think this is something that an IPO is going to fix. Once we have all these documents publicly stated, again, a lot of this is insider reporting. You need a subscription just to access this document that we're showing on the screen. Thank you, Louma.
Starting point is 00:14:27 It is this like really messy article, also in the article, or really messy accounting. And also in the article, they mentioned how they count their revenue, which is just projecting their prior four weeks out for a multiple of 13 to account for all 52 weeks. So there's a lot of hand-waving going on in order to make these numbers go up into the right. I just think it's important when you look at the number. these headlines to really take it with the grain of salt and understand that Anthropic actually didn't gain $11 billion in revenue in like a week. That's not happening. They're just doing these funny accounting methods to make things look like they are going great. And they are going
Starting point is 00:15:02 great, just perhaps not as great as people perceive. And it might be getting a lot greater for open air. I mean, talking of like the IPO rumor mill and like boosting valuations, it's all marketing, right? It's all storytelling. And there was a story that Axios broke this morning before we started recording that Open AI plans to release their next model. Now, they're dumbing it spud. Some people are calling a GPD 5.5. Some people are calling it GPD6. But apparently, it's going to be so good that they have to do a limited release that they can't release it publicly because it's too good or too dangerous. Now, if that sounds similar, it's because that's exactly what Anthropic did this week announcing Claude Methos, their next AGI-I-like model, which they're not releasing publicly, because
Starting point is 00:15:46 it's a cybersecurity risk. It discovered a thousand plus security vulnerabilities. And what was interesting about that entire news cycle earlier this week is someone commented and said, well, if it's that good and if it's that expensive, we're probably not going to get to use this model for another couple of months, basically. And Open AI's head of model training, Tebow basically said, I wouldn't count on that. He said, um, dot, dot, dot, which implies that open AI is going to release a very similarly capable model very soon. So that's what, that's the news that basically Axios broke. But I just want to point out, I want to check myself here, that it might also be fake news. This post from Dan Shipper apparently spoke to someone within OpenAI and he basically said,
Starting point is 00:16:29 his contact basically said that we're talking about a different model here that is hyper-focused on cybersecurity specifically and it'll be a separate release to GPT6. So at this point, I have no idea what's going on, but I know that Open AI's valuation for their IPO is probably gone up in the space of time that this has happened. Yeah, I can't imagine they don't already have a model that is close to Mythos, if not already there. And if they don't, then they're, I'm sure, just weeks behind actually having something like that. What I found interesting about this story is they're working, it says that they're working
Starting point is 00:17:00 explicitly on a cyber product, which implies that it's for cybersecurity. And with Claude Mythos, they weren't actually training it on cyber at all. It was just a downstream effect of really powerful code. And when I think about the pivot that Open AIs had recently from retail to enterprise, a lot of that focus has been around code. So I wonder if they're just building a really strong coding model, and this is a downstream effect, or if they're genuinely training something explicitly on cybersecurity. And I have to imagine, if it's trained explicitly on cybersecurity, it probably will become better than Mythos. And then you have to ask the question, what happens three months from now when these tools actually do become available?
Starting point is 00:17:36 And also, who's going to decide when to release them? If Open AI is taking the Claude route or the Anthropic route and they're keeping it private, well, now suddenly we have everyone's worst nightmare, where the labs are kind of all working together, they all have the most powerful stuff. And there is no counterbalance to whatever they decide to do with it. And they, in a way, become those kingmakers. And when you think about the Department of War contract that we had this whole episode about is a big mess, Anthropic kind of has all the leverage.
Starting point is 00:18:03 And if Open AI moves with them, they have the ability to crack nation-state software at will. And that seems really powerful. And they are now the gatekeepers. And I don't know, Chris, a lot of interesting questions as we move into this next paradigm of Blackwell models that are unbelievably powerful. But as we mentioned earlier, Project Glasswing Anthropic mentioned their breaking tier model Claude Mythos, which is coming out. We did an entire episode about this earlier this week. Definitely go and check this out. Now, I want to end this segment on a bit of chart crime, Josh, because now that you've told me about the fake revenue numbers, I can't look at this Wall Street Journal analysis and think is Anthropics numbers fake? On the left here, we see
Starting point is 00:18:43 yearly AI model training costs and it basically shows that Open AI is spending way too much money, and Anthropic is spending a fraction of that. But now I'm realizing that that's probably in projection with their revenue. And if you look at the bottom left over here, it shows that they're kind of the same. So I think there is some chart crime or county crime happening with Anthropic, and I want that to be talked about more. But it's not all rosy with Open AI. They did have the UK Stargate project go down, right? The whole Project Stargate thing is really just a huge disappointment. It was supposed to be this giant grand buildout, domestically, internationally.
Starting point is 00:19:17 I think they also did this in Saudi Arabia, somewhere in the Middle East. They were planning to build one. They didn't do it. In the UK, they're not doing it. In the U.S., they're not doing it. Project Stargate was really just as big, like, Raura project that was initiated with the government. Elon famously said the post the day that it was announced, you don't have the money to do this.
Starting point is 00:19:35 Turns out he was right. no one actually wants to foot the bill for this. Logistically is very technical and challenging. And it turns out it's just really hard to build things at scale, particularly internationally in places like Europe and the UK, where there's a lot of regulatory issues, both environmental and energy related, that just make doing these things incredibly difficult. And we're seeing it here domestically. We have a story a little bit later about AI data centers and how they're just having a really tough time getting them online. And this is probably one of the future trends that we're going to look out for is just the idea that building these data centers is hard
Starting point is 00:20:07 and not always just because of the technicalities, but also because of the regulation and legislation associated with this and the public sentiment. It's not looking good. There is certainly a rift in the world right now between people who want to build these things and people who do not. Shifting gear slightly. I wish Anthropics Claude Mythos was the main headline, but they're still shipping other products somehow. Maybe they're using an AGI like model to do so, right? they announced a new product called Claude Managed Agents, which is basically, think of AWS, but for spinning up AI agents. It's a platform that allows you to design an architect an agent through a single prompt or a couple of prompts. You can amend the memory, a bit of its own custom
Starting point is 00:20:47 design, and then launch it, which may not sound novel, right? You could create AI agents before, but there's a distinct difference here. Typically, when you create an AI agent, you can't scale it to your millions of users, let's say if you're a Fortune 500 company, because you need to set up a bunch of other production and dev tooling environments in order to support that. That typically takes anywhere between three to six months. Now, I ran a bit of the math on this. Typically, an app development at scale for, say, a million users, costs around $50,000, depending on the specific type of feature or product you want to launch. This reduces it down to a hundred bucks. That's like a 500x reduction, and you can do it in under an hour. But there is a new cool thing
Starting point is 00:21:27 that this product unlocks. Josh, can you guess it? I'm going to guess the end of call it work or am I being dramatic? Well, that's also that. But they figured out a new revenue run rate for them. Of course they did. Of course they did. Right? Now, listen to this genius plan, right? Typically,
Starting point is 00:21:46 every single AI model provider charges you based on the amount of tokens you use. Tocons in, tokens out, you pay a subscription or you pay an API cost. Anthropic for this product is charging you for the amount of time that your agent takes to
Starting point is 00:22:01 think of a solution. So the tokens it's using to think is now being charged to the tune of eight cents per call. Now, if you assume that, and I actually don't need to assume, they used a live example of sentry processing one million bug reports. If each agent session is 10 minutes, that's 166,000 session hours, which turns out to be around $13,000 per run in fees, that all adds up massively. If you're a massive corporate enterprise where you distribute this platform to whatever, 50 plus teams. You end up making millions of dollars. Genius unlock Anthropic. Well done. I'm clapping for you. Also, you're going after all the other startups out that. You probably killed out a bunch of different agent harness startups that will value that billions of
Starting point is 00:22:46 dollars. Well done. If you use a computer, you just have to assume that you're months away from no longer needing to use a computer for anything you don't want to do. It will just be automated. You can have the computer watch your screen, emulate your decision making processes, do all of the clicking and thinking that you would need to do in order to accomplish whatever you're trying to do. And it's been a recent realization, particularly with mythos about how close we are to this reality. And I have to imagine that all these features are being built with that model and are therefore resulting in this incredibly fast iteration loop that Anthropics having where every day we get some like groundbreaking technological impact. And if you just push this out three months,
Starting point is 00:23:22 six months, even 12 months, there's no way that you're going to need to use your computer for anything you don't want to. It will just automate the entire process for you. So Anthropic with another big win, Quad is just on fire. The rate of acceleration is truly through the roof. Josh, are you a Black Mara fan? Huge fan. We watched every episode. Huge fan.
Starting point is 00:23:42 So what if I told you there is now a robotic lamp that you can buy that turns into pincers that can fold your clothes, make your bed, and maybe make you a cup of coffee? I say 100% chance it stares me through the heart when I'm sleeping. But I'm going to hope that's not the case because we actually did have this founder on the show a while back to talk about this product, which is now out. Yeah. Kind of. Exactly.
Starting point is 00:24:06 Explain what's going on here. Okay. So what you're seeing on your screen is a lamp, which basically extends into a robotic arm and it can do a bunch of chores for you. So what you're seeing on the screen is this lady is apparently putting her clean laundry on the front of her bed and her lamps are now activating. And now you can see their claws coming out. And they can start folding your clothes and making your bed.
Starting point is 00:24:30 or starting your record player. And the point is, robots are going to be pretty pervasive in human society. They may not necessarily look like humanoids, and that's the point that's sincere in Arantan, the founder, is aiming at. Now, the last time that we spoke about this founder, he only had a mock-up. And to be honest, the lamp looked pretty scary. The pincers were much larger. It's nice to see that they've now got like a kind of curved metal piece over the pincers,
Starting point is 00:24:56 so maybe they can't necessarily stab you. It's more colorful. It's more amenable. It looks kind of slow, but the good news is you can start ordering this right now. I signed up on the wait list and I got an email the other day saying, hey, you can now order this thing. I don't know when they're going to start delivering it, but it might be something that I try out.
Starting point is 00:25:11 Are you trying it? No, I will not be trying this out. I love founders who are trying cool things. And I think building a narrow use robot like this is so cool. The design is awesome. It's a lamp that does robot things. And that's very cool. When I look at the trailer here, and this isn't really showing many use cases, one of which is the folding of laundry, you kind of see the actuators. They're little like fidgety. They're not quite smooth. They're like, it doesn't seem that it's moving very quick. I can't imagine there's many use cases. Like, we're looking at a robotic lamp that is being manually adjusted and turned. Like, why is that happening? There's a lot of questions I have about what the actual use cases of something like this are and how effective it is at those use cases. But again, I love the idea that people are trying new things and trying to build something unique.
Starting point is 00:25:57 with beautiful design that's actually effective inside of a home. And as soon as you get yours delivered, I'm coming over and I'm trying it out because I want to see how all this thing actually works. I might be dead. So I don't know. Maybe you're recovering my corpse at that point, Josh. But in other news, SpaceXAI is back and they've signed a massive partnership with Intel, American made fabricator of AI chips. And the question becomes, why on Earth are they doing this? Well, there's a few different reasons. Number one, Elon Musk announced something called the TerraFab, which is pretty much the most ambitious AI chip manufacturing project that is ever going to be achieved if it does get achieved over the next, say, five to 10 years. The idea is to achieve
Starting point is 00:26:39 one terawatt's worth of compute. 80% of those AI chips are actually getting sent out to space to harvest the sun's energy to train AI models, presumably Grok. Now, if all of that sounds insane, we have a bunch of episodes that you should go and check out where we explain everything. But the point is, why Intel specifically? I think there are two main reasons. Number one, you need these AI chips to be American made and American manufactured. AI has become a huge geopolitical weapon, and the threat of Taiwan being taken over by China, with TSMC being within Taiwan, is a massive threat to the US production of AI models, GPUs, etc. Nvidia relies heavily on TSMC. Intel is the closest American-made lab or
Starting point is 00:27:22 manufacturing plant that we can get to building A-grade AI chips. But why is Elon signing up with Intel specifically? There was this little nugget that I saw Robert Scoble post about, which is there's this compound called gallium nitride, which basically makes these AI chips radiation-hardened, which, there you go, is going to make them perfectly suitable to launch into space. So Elon's already thinking way in advance. Robert got into a bit of trouble because he published that Elon had liked this tweet,
Starting point is 00:27:51 aka confirming that this was partially the reason why they did it. But yeah, this might be the next unlock for achieving the TeraFab. It's pretty cool. It's one of the most ambitious projects, I think, any company is undertaking on Earth. There's no one else who's really trying to offset the monopoly that exists on chip fabrication and production. And I think one of the most important and underrated things about the TeraFab is the fact that they have a staging facility, separate from the TeraFab itself, that has all of the
Starting point is 00:28:16 required pieces needed to make these chips. It has the lithography. It has the masking. It has the packaging. And what that allows you to do is iterate very quickly on the actual design of these chips. A lot of times a chip gets submitted, and then a year goes by, or even longer, until you actually have the full thing completed,
Starting point is 00:28:32 this compresses that iteration cycle because it's all under one roof, and it allows them to make chips that are far better, very quickly, because they can do all the testing in one place. And they can even do that prior to the TeraFab going fully online, because it's a small-scale sample of it. The TeraFab is going to be hard. I'm sure there's going to be a lot of negative press as they make mistakes and as things get delayed. But the outcome of the TeraFab is so profound that it's hard to imagine a world in which Tesla, or SpaceX AI, does accomplish this at scale and they are collectively not the most valuable company in the world, because that implies not only do they have the chips, but they have the
Starting point is 00:29:05 robots, they have the satellites, they have the spaceships, they have all the infrastructure required for this next generation of embodied AI, of space-trained intelligence and superintelligence, and I'm really bullish on it. Intel's badass. I'm glad they're helping them, and I'm just so stoked for the TeraFab in general. This is going to be a fun one to follow over the next few years. Yeah, I have a huge Intel bag. So please, please, please keep signing all these partnerships. Please.
Starting point is 00:29:30 Now, the reason why Josh and I started the show, the reason why we do Limitless, is we're very optimistic about the tech. Now, we know that there are a lot of doomer takes. But the fact is, we believe AI is going to change the world for good. And there are many different ways that we think that's going to happen, and we're going to track every single news story that supports that. But there are obviously people in the world, the doomers, that don't believe that's the case. And unfortunately this week, we had a pretty serious story
Starting point is 00:29:54 where someone fired 13 shots into the home of an Indianapolis councilor, with a note reading "no data centers" left at the scene. Now, we don't know the exact motivations of this person, because I don't believe they've been caught just yet, but you can hypothesize what the takes are, which is stuff that we've covered on the show before: people are worried that data centers are going to empower AI models that will eventually replace them or take their jobs. They're worried about the energy costs, and they worry about the water consumption. Now, the issue with this is, number one, AI data centers use less water than your average golf course, like the one in your own neighborhood. We did a whole episode covering this. Number two, for the electricity
Starting point is 00:30:36 charges that kind of increase for people's bills, a lot of governments and states have mandated that the AI labs responsible for this pay for that extra surplus so that it doesn't actually hit you. Also, we're working on different ways to deal with this electricity consumption, like launching GPUs into space. So it's just a sad and very concerning story to see. There's a growing contingency of people that are against data centers. And I understand the concerns, but this shouldn't be the way to deal with it. This is dark. You know what's way worse than not having data centers? I'm trying to get mythos first. And then using it to attack all of our infrastructure. And then using it to iterate and build even more powerful models that are even more dangerous,
Starting point is 00:31:13 more harmful, and then applying that back to us. And I think the impact of that is far greater than the impact of some what some patch of grass that is so detached from most towns getting turned into a data center and I think the the moral dilemma here is that people are saying they want one thing and then doing something else yes and it's it's disturbing to see the sheer size of the population that doesn't want to move forward as it relates to AI and progress totally unaware of the fact that this moves on whether we're a participant or not. And as these things become more powerful, there's a lot more profound downstream effects
Starting point is 00:31:55 of not having these models in our court, of not having the power on our side. And I hope that this becomes more of a realization for a lot more people. There's this clear divide happening now between people who are using these AI tools to further empower themselves, to do better work in their lives
Starting point is 00:32:13 or to handle more things in their personal lives, and those who don't. And the K-shaped curve that's going to come out of this, in the economy and society and just throughout our general day-to-day lives, is going to become pretty wide. And I hope a lot of people really reflect on what it looks like to actively be on the wrong side, to be on the slowdown side of history,
Starting point is 00:32:37 to be on the, what is it, the decelerationist side of history, and what the actual implications of that are as we continue to progress forward. I don't know. It makes me sad, but hopefully it's something that will change over time. And perhaps it's just a messaging thing. It's funny. A lot of people that hate SpaceX were very excited about Artemis. So perhaps the goals are the same; we just need to package them differently, in a way that's more digestible, that these mobs can get behind. I don't know, maybe it's a messaging thing, maybe it's a moral thing. But that's where we'll leave you at the end of this week, on the conclusion of the AI Roundup. That's four episodes in a single week about all of the hottest topics.
Starting point is 00:33:15 If you missed anything, you can go back and watch them. I think it's been a big week. I mean, we had a few huge models released. We had Mythos. We had OpenAI versus Anthropic. We had Gemma 4, which was very cool and a very powerful model. But that wraps up everything for this week. Ejaaz, any final thoughts before we let these lovely listeners go?
Starting point is 00:33:33 Nope. Thank you guys for watching and listening. I'm curious if you guys have any thoughts on any of the topics that we've discussed today or if there are any topics that you think we have missed or that you want to hear more of. We are trying to cover any and every breaking topic as well as some novel analysis into the actual tools. It's one thing announcing the tools. It's another thing using them. We're going to be doing more demos in the future. But yeah, that's the end of the agenda for this week. And we will see you next week. Thank you guys so much for listening. See you guys in the next one.
