Limitless Podcast - Elon vs Sam Altman: The $134 Billion Lawsuit Could End OpenAI

Episode Date: January 20, 2026

Elon Musk is full-sending into a $134 billion legal battle with OpenAI over its shift to a profit-driven model, revealing diary entries from OpenAI President Greg Brockman. We also cover OpenAI's pivot to advertising, turmoil at Thinking Machines, and Tesla's advancements in Full Self-Driving technology.

🌌 LIMITLESS HQ: LISTEN & FOLLOW HERE ⬇️
https://limitless.bankless.com/
https://x.com/LimitlessFT

TIMESTAMPS
0:00 The Elon and OpenAI Drama
2:40 The Lawsuit Escalation
7:40 OpenAI's New Ad Model
9:35 Monetizing Free Users
15:34 The Competition Heats Up
19:35 Tech Developments Beyond Drama
25:14 The Rise of XAI
28:05 Tesla's Full Self-Driving Update
33:27 A New Partnership with Cerebrus
38:47 Closing Thoughts and Future Speculations

RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:00 So everyone knows how messy breakups get when there's money involved. Now, imagine that breakup is between two of the most powerful people in tech. There's $134 billion at stake, and someone's personal diary just became court evidence. This is Elon versus OpenAI, the saga that just keeps escalating. And I guess maybe we'll start with the quick version, where Elon co-founded OpenAI in 2015 with a $38 million donation, thinking they were building a nonprofit to save humanity from Big Tech AI monopolies like Google. Flash forward to today, OpenAI is now worth half a trillion dollars, partnered with Microsoft, about to go public, and Elon is saying, hey, I got played. The receipts dropped. We have Greg Brockman, who's OpenAI's president. He had journal entries from 2017.
Starting point is 00:00:44 We're going to get into it. It was really fascinating. And Greg posted a rebuttal. The drama is kind of starting to get out of control. And there's a lot of new updates this week that we're going to dive into. So grab your popcorn and let's get into this. Yeah, okay, so let's start off with the drama between these two. By the way, I have to acknowledge, I'm reporting from the Batcave. It's a little dark, so you're going to have to deal with me being a bit of a silhouette. Yeah, yeah, Josh is on the West Coast for listeners here, and I'm on the East Coast right now, so it's past sunset. It's past my sunset over here. So let's jump straight into it. Elon has sued many companies and many founders in his time, but his favorite and juiciest lawsuit has been against OpenAI, who have wronged him, Josh.
Starting point is 00:01:23 He initially invested $38 million, or rather donated $38 million, to what was then a nonprofit OpenAI, committed to building AI for the open source good so that it wouldn't get into the hands of evil. Fast forward to today, and obviously Open AI's structure has changed into something that could kind of loosely, definitely be categorized as a for-profit. And so the lawsuit's been going kind of back and forth for a while, but it kind of culminated over the last two months. To give you the quick headlines, at the end of last year, the judge refused Sam Altman's request to dismiss the case completely, saying that Elon had sufficient evidence and that it was going to go to trial, which is now going to happen in April of this year. But Elon stepped up the gas over the last week and said that he's coming for it all. He's coming for what his original $38 million donation would be in today's Open AI's valuation.
Starting point is 00:02:25 Do you want to guess what that number is, Josh? Some astronomical amount. I mean, this has to be big. It is $137 billion. He's requesting, he's going for the neck, basically. Oh, rather, $134 billion. My bad. So here are the key details. Musk's expert lawyer, he goes by the name of Paul Wazen, values the damages
Starting point is 00:02:48 at between $79 and $134 billion based on his original $38 million donation. So the question then becomes, well, what's really changed? Why has Musk got, like, the upper hand now? And it's something as you alluded to earlier, which is Greg Brockman, the acting president of OpenAI. We got your diary, bro. Kept a diary of the entire sequence of events, or history, of OpenAI up until this day, which he had to legally give
Starting point is 00:03:18 to the court for review. Oh, that's so violent. Could you imagine your personal diary? Dude, dude, also why? It is like a 16-year-old girl's diary. And by that, I mean, he has documented everything, down to the line of whether he thinks it's morally ethical to do what he was doing back then.
Starting point is 00:03:38 So to give you an idea, look at this quote, Josh. Look at this tweet right here. He goes, it'd be wrong to steal the nonprofit from him, the "him" he's referring to being Elon, to convert to a B-Corp without him. That'd be pretty morally bankrupt. And he's not an idiot. And then he goes on to say, I cannot say that we are committed to the nonprofit, don't want to say that we're committed,
Starting point is 00:04:03 if three months later we're doing the B-Corp, then it was a lie. Not feeling so great about all of this, the true answer is that we want Musk out. Can't see this turning into a for-profit without a very nasty fight. So the long story short is Elon is suing very aggressively for what his original donation is worth as an equity stake in the OpenAI of today, to the tune of $134 billion. And he has pretty much a smoking gun, Josh. But there was a rebuttal from Greg Brockman himself saying that Elon Musk had left some important context out of his claim. And I'm showing this on the screen right here where, you know,
Starting point is 00:04:46 he mentions, we've got to figure out how we transition from a nonprofit to something, which is essentially a philanthropic endeavor. So what you're seeing in blue on the screen here is what Elon has claimed in court. He's saying, hey, I've always kept my notion that I wanted OpenAI to become a philanthropic endeavor. But he left out what was in red, which is him saying, I know that we need to transform this into a B-Corp or a C-Corp. So there's a bit of ambiguity and games being played by Elon. Which side do you take on this? Well, I'm trying to look at this and evaluate this as neutrally as possible. On one hand, we're seeing the journal entries, which, one, I have a lot of questions about how they got that journal, or the diary, and how they knew that it even existed.
Starting point is 00:05:25 Because that seems like a very personal thing you wouldn't really want to tell people about. So how the lawyers, one, discovered it existed, and then, two, got access to it. I'm sure there's some funny stuff going on behind the scenes of this case, just to kind of provide more evidence. But in terms of the evidence that's been provided so far, you have one party who is like, hey, we don't actually want this person to be at the company. I think we just want to remove him. But I kind of am siding with Elon in this instance for now, because it's very clear that they wanted him out. And even though it's clear that Elon observed,
Starting point is 00:05:55 it was probably necessary to become a B-Corp, he still was ousted. And in the case that it became a B-Corp, he still does rightfully own those shares of equity. So I guess for now I'm team Elon, but more than anything, I'm team drama, man. This is great content. And we're going to continue to follow this as we go through this court case
Starting point is 00:06:12 because I'm sure this is just the tip of the iceberg. Dude, I saw a hilarious tweet earlier this week, which said that my thesis for Anthropic winning the AI race is simply because they have zero drama. Seriously. All six co-founders are still there. No one's left. They've had the lowest employee attrition of any of the major AI labs. They've just kind of got their crap together.
Starting point is 00:06:36 And OpenAI is the complete opposite. It's an absolute wrecking ball, as we're going to find out with other things later this episode. The final point I'll make here is on the OpenAI and Microsoft partnership. Josh, if Elon ends up getting $134 billion, that dissolves the Microsoft OpenAI relationship completely
Starting point is 00:07:00 because that is the equivalent of Microsoft's stake in OpenAI as well. So Elon forcing his hand basically gets ahead of him. Like, Satya doesn't have the power that he originally thought he had. And now poor Microsoft is just caught up in this, even though they had nothing to do with the inception of the company, and they weren't involved until much later. And I think the reason why this matters beyond the drama, even, is because a lot of people believe that if Elon wins, I mean, it could fundamentally challenge how AI companies are valued. And some people are kind of whispering about whether this trial could be
Starting point is 00:07:29 a catalyst that could impact the actual AI bubble that we've been building. If you can remove almost like $150 billion out of that and move it into another entity, that's a huge swing that is like I'm not sure if the market's going to be able to handle that because Open AI has so many obligations to make money and pay people back. And I guess on that note, maybe we can get into one of the new things that they rolled out in order to generate some revenue this week, which is their new ad model. And Ijaz, if you remember, just last year, I guess two years technically now because we're in 2026, but in 2024, Samo was on stage and he said, ads are kind of the last resort. We don't really like the ad model. We don't believe in it. We don't
Starting point is 00:08:08 need it. Flash forward to today, there are ads rolling out in ChatGPT. So let's read, we'll start by reading what they announced, which says: in the coming weeks, we plan to start testing ads in ChatGPT, Free and Go tiers. We're sharing our principles early on how we'll approach ads, guided by putting user trust and transparency first as we work to make AI accessible to everyone. So they have, like, these four principles that they outlined. What matters most: responses in ChatGPT will not be influenced by ads. Ads are always separated and clearly labeled. Your conversations are private from advertisers. Pro, Plus, Business, and Enterprise tiers will not have ads.
Starting point is 00:08:42 Ejaaz, what do you think about this? Okay, I have many thoughts. Okay, so let me give you a kind of lay of the land. OpenAI today has roughly 800 million weekly active users. Josh, guess what percentage of those users pay for their subscription? I know this because we mentioned this in a previous episode. It is close to single digit percent. It's very low.
Starting point is 00:09:05 It is 5%. Yeah, that's not fine. So the lesson that we've learned from this is you could have subscriptions, but it's not enough to keep you afloat. OpenAI is projected to blow $20 billion this year alone. You need something else to pay for it. So you need to somehow monetize the free users. And the classic model that everyone's used for decades now is ads. So they're turning it on for two specific tiers: the free users who are paying nothing to get access to ChatGPT, and this new tier which launched at the same time that they announced ads, Josh, called ChatGPT Go, where it's a subscription where you pay $8 a month and you get access to not the best ChatGPT models, but, you know, just below the best. So let's look at some of the napkin math here. There's roughly 600 million non-paying weekly or monthly users for ChatGPT. If each of them were to pay $2 in ad revenue for the year of 2026, OpenAI stands to make around $1.3 to $1.7 billion, which sounds like it's a lot. But again, their spending budget is $20 billion. So it doesn't really move the needle that much. And they're projecting by 2030
Starting point is 00:10:23 to have made $15 per free user, which then pushes them up into the realm of $36 billion. But you can imagine that their spending budget by then, Josh, is going to be multiples of what they're spending this year. So to kind of put it into context, it's not looking great, but my argument against that is I think AI is going to be the ultimate form of selling ads in the future. Everyone is going to surface their intents, be it like, I want to buy something, I want to explore something, through some kind of AI chatbot. And if OpenAI, or ChatGPT rather, becomes the face or the front door of the internet, then they can charge whatever they want per user. And to kind of give listeners a context of how much money you can actually make,
Starting point is 00:11:06 let's look at Meta, right? In this tweet that I have up here, in 2025, Meta made $58 per user just purely from ads. And if you want to look at, like, the behemoth that is Google, they made $237 billion last year from ad revenue. That makes up 77% of their entire profit. So there's a lot of money to make from ads. But OpenAI needs to figure out a way to go from their $2 projection to something where Meta's hitting, like, $60, or even Google at around $80 per user.
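To make the napkin math concrete, here's a quick sketch of the revenue-per-user arithmetic the hosts walk through. The user counts and dollars-per-user figures are the episode's rough estimates, not official OpenAI or Meta numbers, and the annual_ad_revenue helper is purely illustrative:

```python
# Napkin math from the episode: yearly ad revenue = free users x ad revenue per user.
# All figures are the hosts' rough estimates, not official company numbers.

def annual_ad_revenue(free_users: int, per_user: float) -> float:
    """Total yearly ad revenue in dollars."""
    return free_users * per_user

FREE_USERS = 600_000_000  # ~600M non-paying ChatGPT users (episode estimate)

print(annual_ad_revenue(FREE_USERS, 2) / 1e9)   # 1.2 -- billions, near the quoted $1.3-1.7B range
print(annual_ad_revenue(FREE_USERS, 15) / 1e9)  # 9.0 -- the episode's $36B 2030 figure assumes a much larger user base
print(annual_ad_revenue(FREE_USERS, 58) / 1e9)  # 34.8 -- what Meta's 2025 per-user ad revenue would imply
```

Even Meta-level monetization of today's free users would not cover a $20 billion annual spend, which is exactly the squeeze being described here.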
Starting point is 00:11:38 Yeah. I think it's an interesting testament to, I guess, the human nature of how we think about sponsoring and advertising. I was very optimistic in the early days that advertising, the traditional ad model, the freemium model, was going to go away in the advent of AI. And that was just some hopeful optimism. Didn't really have any reasoning why. But I think it's become clear that that will not be the case, and this push to monetize free users will continue to be this durable thing that continues into this next iteration of technology, mostly based on the fact that people would much prefer to pay with their attention than their dollars. And particularly when it's good information. Now, the question that I have, as it relates to this story in particular, is how much data are
Starting point is 00:12:20 they going to be collecting and using in order to make these ads actually useful? Because one of the core principles that they shared is that your data is private, your data is being held back. It's not influencing the ads that you see. But the reality is that ads can be a good thing if they are hyper-personalized. So where are they going to draw that line? Where are they going to take that line in the sand in order to give people higher value? We actually have some information on that exact question that I'm showing on the screen here. They state in their official post, ads do not change ChatGPT answers. So we know that the answer that you're going to get isn't going to be influenced in any way specifically by an advert. And then it goes on to say your chats with ChatGPT
Starting point is 00:12:59 are not shared with advertisers. Ads will be clearly labeled. And chats that include sensitive topics such as health, mental health, or politics are not eligible for ads. So they're taking a kind of hybrid approach here where they're not explicitly sharing all the data, or rather the prompts, that you're sharing within your conversations, but they're taking kind of themes from your conversations and sharing them with advertisers, saying, hey, these are the general vibes that our users are speaking about with ChatGPT, maybe your product could be a well-suited fit for this. Another interesting point
Starting point is 00:13:29 to make here is, well, what do the ads actually look like? Is it going to be something subtle? Where it sits, this was shocking to me. Right? Well, I actually like this. Shocking in a good way? No, well, shocking in not really the most optimal way. When you look at the amount of screen real estate this takes up, for people who are listening, about a
Starting point is 00:13:45 third of the screen is an ad. That's a large percentage of screen real estate for a single advertising slot to go, which is probably going to be worth a pretty hefty premium and a very strong inconvenience to people who don't want to pay for chat GPT. So this, in a way, does degrade the experience, but should also make them a good bit of money because, man, if you're getting 25 to 30% in the screen for an impression, that's like a pretty, you're going to see it.
Starting point is 00:14:09 Like, there's no way you're blocking this out; your eyes aren't going to glaze over it and not notice, because it is such a prominent placement on the actual display. Well, it's also not your average impression, right? Because this is a kind of, like, hyper-personal impression where, like, the conversion rate, or the probability of it converting into someone purchasing your product or service, is probably way, way higher. Now, to give OpenAI credit, I like that they're excluding things like health, politics, and also people that are under the age of 18. Because AI is very, very persuasive, both in a good way, but in a really, really bad way. The other kind of lens that I apply to this is the competitors are going to eat this
Starting point is 00:14:53 up, Josh, because, listen, Google is getting plenty of cash flow from all their other businesses. So they do not need to turn on ads. Today, if you are a freemium user of OpenAI and you get tired of this real estate that you just referenced, these sponsored ads, you don't want to see that crap, you can just go and use Gemini. And there's no ads at all. And let me tell you, Google will be willing to subsidize no ads for as long as it takes to kill OpenAI. Anthropic, on the other hand, isn't a gigantic monopoly, but they're already making so much money from the people that want their Claude Code product. They've already produced something so valuable, Josh, from their subscription that they're making tons of money. They're projected to make
Starting point is 00:15:30 $70 billion by 2028, so they don't need to pull off this stunt with ads. So OpenAI really seems like they've got their back against the wall. There's no other way for me to describe it, to be honest. Yeah, they're a small fish in a very, very big pond. I mean, they're competing with companies like Google that have an infinite balance sheet relative to theirs, and they have the ability to subsidize and absorb any additional cost required to win additional market share. And that's what we're seeing as we look at these charts of weekly active users: decline for ChatGPT and increase for all the others across the board. While Anthropic sits cozy and sound with their incredible coding model and their
Starting point is 00:16:02 business-to-business service. Now, I think it's time to get back into the drama news. This is the gossip part of the show, because we have some more. Dude, I wish Sam versus Elon was the only catfight going on. But do you remember that angel from OpenAI that could do no wrong and actually called Sam out for being moralistically unethical? Mira Murati, who left and started her own company, her own AI lab, called Thinking Labs. Thinking Machines, rather.
Starting point is 00:16:33 They just lost a total of 50% of their founders, or their founding team, this week. They lost their CTO Barret Zoph due to, as this tweet says, unethical conduct, according to two sources familiar with the matter. The two things that were unethical were an in-work relationship, which he didn't disclose, and also, apparently, he was leaking information to competitors. Yikes. Josh, do you want to guess which competitor, if it was true? Well, if I had to take a guess, considering where they all came from, I'm going to guess they were returning back to OpenAI, which seems very clear to me.
Starting point is 00:17:12 Absolutely. Look at this tweet from Fiji Simo, the CEO of OpenAI Applications, who, at the same time Mira Murati announced that these guys were leaving, announced, welcome Barret Zoph, Luke Metz, and Sam. So basically the three exact people that left Thinking Machines this week joined OpenAI. So I have a feeling that they were already getting tired of their work. They didn't really feel focused under Mira Murati's leadership and decided that the best place to go back to was OpenAI.
Starting point is 00:17:42 They welcomed them back with open arms. Hey, I got a question for you, actually. What does Thinking Machines do? Dude, I have no idea. I'm not sure anybody has an answer. $2 billion seed raise. I'm not sure anybody has an answer. This company, as far as I'm concerned,
Starting point is 00:17:59 they have shipped one small product that I don't think, it was just kind of a research product, wasn't very impressive, and nothing else. But they've raised $2 billion in their seed. They're looking to raise even more money. And there is clearly a lot of drama going on in-house. There's this funny, funny post. I actually dropped it in our agenda.
Starting point is 00:18:14 If you don't mind opening it, it's titled Gym. And there is a post from someone who saw, looking into the offices, that it has, like, these weights that are custom branded with Thinking Machines. As a testament to how kind of outrageous the spending is, yeah, this photo, it's so funny. If you scroll down a little bit, you can see
Starting point is 00:18:30 the plates on the gym are Thinking Machines' plates. And ironically, there's no 45s in this picture because, I mean, maybe AI researchers aren't really lifting that heavy. Who knows. But I think it's a funny testament to the fact that there are bubbly things happening, and one of which seems like it's Thinking Machines. They've raised a ton of money. They haven't really come up with a product. There's clearly a lot of drama, where half of the co-founders now have returned back to where they
Starting point is 00:18:53 left from, so clearly they're not liking it. And it seems like there's this lack of direction. And you can give them the benefit of the doubt in the case that they are a research lab, and they're hoping to discover some novel algorithmic improvement. But we've discussed this in previous episodes, where an algorithmic improvement is very difficult to capture value from, because it very quickly becomes commoditized and democratized throughout the stack. So even in the best case scenario, it seems difficult to see Thinking Machines anywhere else besides either being acquired or just kind of fizzling out. And I mean, at this point, it wouldn't even surprise me to see OpenAI acquire them. Mira will come back to OpenAI. They'll get the remaining three
Starting point is 00:19:30 co-founders and they'll just continue to move on as they were previously. But that is to be determined. In other AI news, perhaps we'll go off the drama and get back to the hard tech, the people who are actually building stuff, not talking about stuff. And that is XAI, who has just stood up the first gigawatt-scale coherent training cluster in the world. And they have been around the least amount of time out of any other company in the space. So it's pretty ironic that the youngest company is now the first to reach this critical scale of one gigawatt of energy.
Starting point is 00:20:04 Yeah, this is a big deal. So Colossus 2 and Colossus 1 are XAI's specific data centers that they use to gather all their GPUs and train their AI models and inference their AI models. And the unique part around Colossus specifically is Elon is an infrastructure scaling genius. When he started Colossus 1, he scaled it to 400 megawatts of compute capacity in 122 days.
Starting point is 00:20:34 Do you know how long it should have taken him, technically? I mean, this is like a year's long project. Dude, four years. A year to just set up the infrastructure, about four years to get at life. He did all of that in 122 days. Jensen Huang called him something like short of an absolute genius that no one can compete with. That's one tenth the time. Four months, right?
Starting point is 00:20:57 Yeah. And so he did that kind of midway last year and he thought, what, I'm going to put my foot on the gas even more. And he started with Colossus 2. And by the by, sorry, by the start of this year, or rather not by the start of this year, but January 17th specifically,
Starting point is 00:21:12 he got one gigawatt of compute total live across Colossus 1 and Colossus 2. He is officially the quickest AI startup to get to one gigawatt's worth of compute, beating OpenAI, who started this stuff years ago. This is
Starting point is 00:21:28 a two year startup. Oh my God. We got to show the chart. Please, go to the chart at the bottom of this. Yeah, please, please, please. Let me go to this chart. Where is the, wait, where is the chart? Oh, it was at the bottom of the post. Oh, yeah, there you go.
Starting point is 00:21:37 Sorry, right. Look at this scale. It is just absolutely insane. To kind of like put this into context for people, we're talking about 555,000 GPUs. That's half a million GPUs worth $18 billion. He deployed it that quickly. And if you want to understand why he got the edge
Starting point is 00:21:56 over every other lab that's been working on this for multiple years, and somehow Elon's just kind of, like, pulled the rug out from under them: he thought outside of the box. He played in the gray area. And what I mean by that is, of course, he flew in gas turbines, Josh. He flew in gas turbines to power his data centers. If he couldn't get access to electrical grids,
Starting point is 00:22:15 he would set it up himself. He would literally thread the wires himself. And if he couldn't get any energy from the national electric grid or the state electric grid, he would fly in Tesla megapacks to power this. He's thinking way outside the box where other AI labs are just kind of waiting for regulatory buttons to be overcome,
Starting point is 00:22:32 which is going to take years, to be honest. Yeah, and so much so that these two data centers, Colossus 1 and 2, actually sit in different states, but right on the state line. So they could lobby in two states at once. They have access to double the amount of senators to help get the legislation passed that they need in order to build these things. If you think about how to build a data center from first principles, you want to optimize all of these traits, which is what they have actually been doing.
Starting point is 00:22:55 And they have that unique advantage that you mentioned, where they have partnerships with companies like Tesla to give them access to these Megapacks. And a lot of people don't realize, when you do these training runs, there's a couple hundred thousand GPUs that spin up very quickly and then power down. And there's a huge variance in energy that comes through the grid in order to make this work. And when there's a physical combustion motor that's actually spinning, it's very difficult to speed it up and slow it down that fast. So you need something quicker. You need things like Megapack batteries. And the only company that makes batteries this size, at this scale, with this type of software stack ready to go
Starting point is 00:23:27 is Tesla, with the Megapack, and they have this competitive advantage. And in a world in which the winner is the person who can deploy resources the fastest and most efficiently, it very clearly seems that XAI is going to continue to take this lead as they move forward, just because of the sheer rate of improvement. And if you look at the chart, I mean, it's a vertical line to get to Colossus 2. And the next person who's coming after this is OpenAI, but that chart looks like it's going to beat them sometime in 2027. So, I mean, granted, there will be variance here,
Starting point is 00:23:59 but it seems like XAI finally has taken the lead, and I don't see any world in which they don't continue to dominate this lead. Yeah, I agree with you. Elon's taking a very large gamble here, which is: the more compute I have, the better AI model I can build. And that was in contention last year, but Google proved it after they spun up a bunch of TPUs and produced a leading model.
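As a rough sanity check on the Colossus figures quoted a few minutes ago (555,000 GPUs deployed for about $18 billion, both the episode's numbers, not official disclosures), the implied cost per GPU lands around $32,000, which is in the plausible range for high-end datacenter accelerators:

```python
# Implied per-GPU cost from the episode's Colossus figures (rough estimates, not official pricing).
TOTAL_GPUS = 555_000   # ~half a million GPUs across Colossus 1 and 2
TOTAL_SPEND = 18e9     # ~$18 billion deployed

cost_per_gpu = TOTAL_SPEND / TOTAL_GPUS
print(round(cost_per_gpu))  # 32432 -- roughly $32k per accelerator
```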
Starting point is 00:24:21 And I have a feeling that Grok 5, whenever that releases in this quarter or first half of the year, is going to be an absolute beast. I can't wait. To put it into context, by the way, because we throw around a lot of these numbers, like one gigawatt, what the hell does that mean?
Starting point is 00:24:36 That is the total power that is needed to power the city of San Francisco. But not just during normal hours. I'm talking about peak hours. If you turn on all the lights and use up the most energy, that's how much power is coursing through
Starting point is 00:24:52 Colossus 2 right now. And I've got news for you guys. He's set up Colossus 3 already. And he has raised another $20 billion, $5 billion oversubscribed to buy even more GPUs. And he is the largest purchaser of Nvidia's Blackwells and their latest Vero Rubens as well. So again, it's a big gamble. If it pays off, Grok's going to be the best, dude. Man, when these varirubin chips come online and they have clusters this size at this scale, there's an impossibility we don't H. AGI at that point. There is so much intelligence, so much energy and compute power, solving problems. And this is happening quickly. I mean, this is by next year. We're done. Crazy. Things are moving quickly. But maybe on that note, particularly the Tesla one,
Starting point is 00:25:36 we have updates. Well, come on. We're talking about XAI. This is one of six companies that Elon manages. And you're telling me that he's at a banger a week for Tesla as well? Absolutely incredible. So there's a funny thing. I'm in L.A. right now and I've been taking Waymos around all week, and this obviously is Tesla's largest competitor, and they're fantastic. They work very well, but they work in this geofenced area, where I wanted to go to Santa Monica Pier,
Starting point is 00:25:57 and I actually couldn't get there, even though it was only four miles away, because it wasn't in the geofenced area. I wasn't able to get there. FSD is solving this problem. FSD is going to be fully autonomous everywhere, and in fact, just this week they announced that they are removing the ability
Starting point is 00:26:12 to purchase a license February 14th on Valentine's Day, which is kind of sad, because that's a heartbreak thing. And on Valentine's Day of all days, it's not a cool thing, but it signals a few things to me. One is since the beginning of time, it was made very clear that the price of full self-driving would only go in one direction, and that's up. Because the value of having a license to basically grant you unlimited full self-driving miles for the remaining value of the car is a very high premium.
Starting point is 00:26:38 And as they get closer and closer, that premium will get higher. Now, we've gotten so close to the finish line that they've removed it entirely. And that license no longer exists. So if you want to buy unlimited, lifetime full self-driving chauffeur miles, you have about a month to do it. And if you don't do it in that time,
Starting point is 00:26:56 that's it, you're done for. Wait, question for you. So let's say you buy a new Tesla after this deadline, Josh. Can you move your lifetime FSD subscription to your new car? There is a month grace period after the window
Starting point is 00:27:11 until sometime in mid-March where, if you are a current owner, you can transfer the license to the new vehicle. But after that, I suspect it's done. And it's not coming back. But dude, didn't people pay like $8,000 to get FSD for life? I own the license. So it's, it's disappointing, but it's, like, been expected for a very
Starting point is 00:27:30 long time. And the car's last forever. But now the car becomes much more valuable because it does have lifetime free full self-driving miles. And that amount of inference power is really impressive. On the back of this post that we're sharing right now, which is the hardware plan as they move forward and as they're actually powering these systems. So AI4 is currently, what's on the cars right now. That's what is driving, full self-driving. That is what is going to get to full level five autonomy where the cars will drive themselves. But AI5 is coming next. And AI5 is a new chip design that they've been working really hard on basically every week for the last three months. And AI5 is built entirely for full-self driving in a way that AI4 was
Starting point is 00:28:10 not. So traditionally with GPUs, the way they're built is for large, I think it's floating point matrix math, some like very elaborate data set. And it's good for just, general purpose, but basically what Tesla and the AI team are doing is designing, you could think of it like a TPU, but for full self-driving vehicles and for autonomy on machines. So that includes things like Optimus and things like any of the Tesla vehicles. And I'm sure it will find its way into the data centers. In fact, I was listening to an episode with one of the XAI employees, who I guess is ex-XAI now, because he's no longer with the company after that podcast. But he was mentioning how the plan is actually to take these AI5 chips and all of the chips moving forward and actually
Starting point is 00:28:48 place them into data centers because they're going to be so effective, so efficient relative to others, that Tesla will now soon become not only a large chip manufacturer, but will do the Google thing and will vertically integrate the chips into its own technology stack as it trains these models and starts to build its quest for AGI. He's already working on the future iterations of this chip as well. AI6, AI7, and then Dojo 3, he's resuming that product as well. He mentioned that he's going to be doing these iterations in nine-month cycles. So what is typically taken two years,
Starting point is 00:29:24 he's accelerating chip design to a new one every nine months, which is just an insane cadence and kind of reminds me of what Jensen is doing at Nvidia, which brings me to, because Jensen's like doing a new GPU model every year, and that's what he's targeting.
Starting point is 00:29:39 Which leads you to my point, we had a conversation you and I, Josh, months ago last year, where I said that I think Elon might be going down the path where he's not exactly trying to compete with Nvidia, but he wants to become independent from them eventually. I'm not saying this is a direct hit
Starting point is 00:29:55 towards it. I think your comparison to the AI models, the AI 5 chips rather being better compared to Google's TPUs is accurate. But if he's not just going to put these in cars, but he's going to put these in humanoid robots and whatever robots Tesla builds in the future, maybe even his spaceships, if they do a cross-collaboration, We already know that there's a lot of synergies between SpaceX and Tesla.
Starting point is 00:30:20 This ends up becoming one of the most valuable GPU providers that could be. And I wonder if Tesla starts to get valued similarly to an Nvidia-esque for maybe custom specialized chips for robots in the future. Yeah, their plan is to develop a terra fab factory. So their plan is to actually make these chips in-house to become someone like Nvidia, but do so in a way that's vertically integrated. And you could see in that post, he's kind of sharing what each one of those. ships are going to be used for and we could almost reverse engineer this to create a
Starting point is 00:30:49 timeline of when we'll get to AI 7 which he says will be the space-based AI compute and if you assume about nine months per iteration that takes us to 27 months so two years and change after AI-5 releases so maybe AI-5 comes out sometime later this year after that we have about two years so you're looking at like 2028 2029 end of the decade for space-based AI compute to become really a a super power. And you have to assume that by that time, starships will be working very, very well. They'll get the cost of kilogram down. And that's probably when we'll start to see this new AI in space-based narrative actually literally taking off and getting into outer space. So I found
Starting point is 00:31:29 that interesting too. So now we kind of have this loose estimate for timelines as well. And one final bonus on the full self-driving part of this show is the real live implementation of this right now, where we're seeing this post. This person drove 13,000 miles fully autonomously. from the west coast to the east coast and then partially back. This was all done using a production Tesla. There was no early access information, and it was actually verified through the software that's on the car that 100% of these 13,000 miles were driven autonomously.
Starting point is 00:31:58 That includes parking, that includes charging, that includes driving, detours, anything that needed to happen. It was done 100% autonomously. And I think that's why you're seeing this premium for these licenses. And the second they go away, you're on a monthly plan, and that monthly plan is going to change price. Right now it's $100 a month, but I suspect it will continue to go up
Starting point is 00:32:15 as the cost per mile goes down. And that's basically it for the autopilot section. I mean, it's exciting. It's happening quickly. It's out there in production. You could go sit in them today, and it really does work. You have your own chauffeur.
Starting point is 00:32:28 To be honest, the most shocking thing from this news update, Josh, is that you spent the week driving around in Waymos, dude. You know what? Keep your friends close and your enemy's closer, man. I got to know what's going on over there. And there's 2,500 of them out there right now, right? Oh, and it looks like it. It's unbelievable. Just a sidebar on L.A. There are robots everywhere.
Starting point is 00:32:48 There's robots rolling down the street delivering food. There's Waymoes that are driving in the streets. It feels like a futuristic place relative to New York City. Do you make it out to the Tesla diner this time? Yes, and enjoyed every moment of it. In fact, I was there the week that it opened up because I was such a fanboy. It's a cool experience. I would recommend anyone who is in LA. Is the food that good? It's really fun. It is very good. It's all locally sourced. It's high quality ingredients. It tastes great. It's a really fun and novel experience. So it's worth making a pit stop if anyone's around.
Starting point is 00:33:16 Okay, well, if you guys are based on the West Coast or better yet in LA, definitely go check out the diner or get a ride in a Waymo. Let us know, is it safe? Do you get into a car crash? I personally much to know. And share some picks. It's fun. You get an opportunity to actually sit in the future and live that before a lot of people do.
Starting point is 00:33:33 So I would highly advise. Now, final topic of the week here is a new partnership between Open Eye and Cerebris. I know, I, Jaze, you're a Cerebris Gep fan. Def fan. You're a surreber's guy. What's going on over here? I'm more kind of concerned that Open AI is spending money again, despite them burning so much money. How much money are we talking? $10 billion over the next three years, just $10 billion. By the way, they don't have that $10 billion. Lord knows what they don't have any dollars. They haven't made a profit. Yeah. They haven't made any profit, but with ads, of course. But yes, this is Open AI's latest and greatest partnership,
Starting point is 00:34:08 or rather investment, which is to the tune of $10 billion in this company called Cerebrus. Now, when I read this headline, Josh, I had a flashback because I'd seen the name Cerebrus before. And I remember I'd seen their name in the context of them moaning about another investment, which was NVIDIA's $20 billion investment in a company called GROC, which makes custom, let's call them custom GPUs,
Starting point is 00:34:36 but they're not exactly GPUs. GROC with a Q. GROC with a key. This is that $30 billion acquisition that just happened. $20 billion acquisition. 20 billion dollar licensing acquisition, exactly. And Nvidia made this acquisition so that they would get access to GROC with the Q's LSUs, their language-specific units or whatever the hell it stands for. And basically, it allows them to process inference at a much cheaper rate.
Starting point is 00:35:04 And the reason why that's interesting is, okay, trading the model is all well. done and you need GPs to do that, but afterwards, you have a ton of people like Josh and I just sending prompts every single hour, and it requires a different type of chip that you can make more performant, save you more money, and also make you more money, right? If you can lower the cost of inference, you end up making money on the back end, right? That's what we all want. And so opening, I thought, hmm, I don't have a grok. What's the next best company to invest in? And that will is Cerebrus. Cerebrus at the time of the Nvidia acquisition complained, saying,
Starting point is 00:35:41 GROC's not good enough, our chips are better, basically, because they wanted, it's kind of salty that I didn't get the investment. Now they're getting it from Open AI, Josh. And so why would Open AI do it at this time? Well, they received a big bit of criticism for one of their products, Josh, and that is their coding model called Kodax. They were told that it was too slow. And ClaudeCode, not only was specifically.
Starting point is 00:36:04 but a lot quicker. Well, Josh, a bit of news dropped in the last week, which actually you and I didn't even catch. We called it at the last minute. OpenAI dropped Codex max high. God knows why they called it that, but it's their latest coding model, which is a lot smarter than it was. It's almost at parity with Claude Code, but not quite as good. But most importantly, it's quicker. And the rumors state that the reason why it's 15 times quicker is because they're using
Starting point is 00:36:33 cerebrous chips. So they have this specific custom chip. It's huge. I've seen a picture of this thing. It's the size of a dinner plate. It looks absolutely ridiculous. But it's more performant. I'm going to allow Open AI to charge more on the back end whilst also delivering enough compute for anyone and everyone that wants to use it for coding. Yeah, this is a, it's a 10 times faster than GPU-based inference performance. So this is going to be a huge increase in performance. Now, they have the performance. Do they have the actual intelligence? that's the next question that remains to be seen. So is it going to be valuable if it's faster, even though the tokens that is generating are a bit inferior to anthropic? I don't know. But again, this is kind of a testament to the strategy of Open AI,
Starting point is 00:37:16 which is being fighting basically every war on all fronts. They're trying to win consumer. They're trying to win institutional and business. They're trying to compete on coding while also doing image generation while also doing video generation. So they're trying to be the best across the board. And it seems like as a result, They're becoming kind of not the best at anything.
Starting point is 00:37:36 And I'm not sure where that strategy leads, but it appears as if they're going to continue doing that based on their ad strategy and now this acquisition of Cerebra. So time will tell how this actually pans out. Do you want to hit it is worthwhile? Do you want to hear my tinfall hat conspiracy? Yeah, what you got, Josh? I also read earlier this week, Josh,
Starting point is 00:37:54 what is the Neuroblink competitor that Sam Sutted? Is it merge, merge labs or something like that? It's merge, right? I think that's probably right. I know he has a competitor. Guess which AI Lab made a massive investment in Mudge Labs this week as well? Who? Open AI.
Starting point is 00:38:11 No. So he's just buying his own bags across the board. It's annoying, right? Because he also secretly has equity in this thing. Yes. He's using investor money as exit liquidity. That's kind of... That's exactly what he's doing.
Starting point is 00:38:23 Yeah, it's suss, right? On the back of a $135 billion lawsuit. Yes. I think this is probably going to be a big trend for the year is just evaluating the ethics and strategy of open AI as they have gone from non-profit to for-profit and now continue to acquire the same companies that are on a CEO's balance sheet. It's interesting. There's a lot of good lore here that we still have to unpack, but I think that's enough unpacking for today. We covered quite a bit. That's going to be the end of the episode. We have another roundup
Starting point is 00:38:53 coming later this week. There's so much stuff to talk about. But yeah, that was another Well, I was going to say, you might have noticed if you're listening to this that we didn't mention two of the hottest topics this week. Anthropic releasing Claude, co-work, and Google taking over the entire AI world with four product releases, including personal intelligence. We actually made dedicated episodes to both of those. And if you don't know what we're talking about, you missed it. And you should definitely go and watch those. In fact, Josh wrote a banger of a newsletter essay in our newsletter. So you should sign up for that as well.
Starting point is 00:39:28 Turn on notifications wherever you listen or watch these things. I know some of you can't bear to look at our faces, so you just listen to our voices. That's totally fine. Turn on notifications, subscribe. It helps us out so much. And yeah, we'll see you on the next episode. We're filming a pretty interesting one. Can we leak a little bit of alpha here?
Starting point is 00:39:43 Yes, please. For anyone who subscribes to the newsletter, the piece that's coming out this week, EJES, I believe, is going to be on XAI. And about why XI is undervalued, how valuable it can get, kind of a continuation of the conversation we had today. So if that seems interesting, that's going to drop on Wednesday of this week. So subscribe in order to get access to that. And see a little bit more detail as to why we suspect XAI is going to have the success that we believe it will be, which will be very high. But yeah, that concludes the episode. Thank you all so much for watching.
Starting point is 00:40:12 We appreciate it. I appreciate your sharing with your friends, doing all the good things. And we will see you guys in the next one. See ya.
