Limitless Podcast - This Week in AI: "Something Big is Happening," The xAI Exodus, Seedance 2.0

Episode Date: February 13, 2026

Well, he's doing it again. Elon is cleaning house to make way for a new era of production. In other news this week, we cover the Seedance 2 model and AI safety concerns following leadership changes at Anthropic. As job disruption fears rise, we share personal stories on adaptation. We wrap up with buzz around OpenAI's new Dime product and ChatGPT's 'adult mode.'

------
🌌 LIMITLESS HQ ⬇️
NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/
------
TIMESTAMPS
0:00 The xAI Exodus
3:13 Elon's Reorganization Strategy
5:40 Hardware vs. Software Progress
7:33 Moon Colonization Plans
7:50 Seedance
9:51 Copyright Challenges in AI
12:07 OpenAI Device Hoax
16:09 The Viral AI Essay
20:23 Safety and Ethics in AI
24:41 OpenAI's Adult Mode
25:51 Closing
------
RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
[00:00:00] It's been a while since we've had an opportunity to talk about the Game of Thrones within the AI wars, the employee stealing, the co-founders leaving. And this week we have a lot of news coming out of the world of xAI, which has now had five out of their eight total co-founders leave, two more just in the last week, and a seeming huge exodus of people who are leaving the company. In addition to that, we have this brand new video model that is indistinguishable from reality, called Seedance 2. It's from China, and it has no regard for copyright, which makes it the best video model in the
[00:00:30] world. And that also leads us to our other topics, which are going to be AI safety in general, and is this going to take your jobs? The doomerist take that now has, what, like 50 million views? It's the most viewed article in the world. So we have a lot to get through today. Let's start with the xAI exodus. So for the Game of Thrones fans out there, you'll be familiar with the episode titled The Red Wedding, where the plot twist, sorry, for those of you who haven't watched it, is everyone gets murdered, pretty much. And scrolling through my timeline this week, it felt like people were getting fired and let go left and right, and the main culprit was xAI, who ended up RIF'ing, or laying off, 50% of their team. But the story unfolded as such: I've got an example on our
Starting point is 00:01:12 screen here from jimmy bar which is one of the eight co-founders that you mentioned from xAI last day at xAI today and he goes on to explain about how he's so grateful to Elon the mission and i then scrolled a bit more and i was like tony woo is also another co-founder of xAI and he's also residing today within hours of that previous post. I was like, what is going on? And there was no context or anything given. There were like senior engineers that then like within a day of that such as Chase Lee over here that was saying the same thing. And then the news was revealed by Elon Musk himself that XAI was reorganized a few days ago. And he goes on to say to improve the speed of execution. As a company grows, especially as quickly as XAI, the structure must evolve just like any living
Starting point is 00:01:57 organism. And this is, of course, in the aftermath of the news that X-AI and SpaceX are merging. Now, the idea behind this, as he explained in a town hall, was that as XAI scales to a much larger company than a two-and-a-half-year-old startup that it is today, it requires a different skill set and expertise, maybe a more corporate profile to be able to kind of scale to the masses. Remember, like XAI owns X, they're facilitating like, you know, hundreds of millions of users every single day. It takes a different kind of brain and engineer to be able to scale that forwards. And that was his reasoning behind doing this. This isn't the first time Elon has done this. He has rifted some of his most famous companies very early on, Tesla and Starlink, and they
Starting point is 00:02:42 actually went exponentially upwards afterwards. So I think I'm pacified at this point and calm knowing that Elon's probably going to pull off a miracle somehow. Yeah, I was actually really nervous seeing the news prior to watching the All Hands, because it seemed like there were two co-founders and like 10 employees that were all leaving at the same time. And it didn't make any sense. After watching the All Hands, it became a little bit more clear. Elon actually went on stage to discuss this a little bit more at length, where he just said, this is a reorg, this happens. And this has happened in the past, like you mentioned, with SpaceX and Tesla. This very famously happened at X already, where they cut like 90% of the workforce. And what I found amazing is
Starting point is 00:03:20 Nikita Beer. He was on stage in this All Hands, but he was also, he also, he also, he also did a separate podcast that I listened to where he spoke about what it's like to be an engineer at X currently and how the engineering team responsible for the application is about 30 people, which is unbelievable because at a large company like Meta, to ship a single feature takes about 30 people in six months, and the entire engineering team is 30 people. So Elon likes his company's flat. He likes people with a lot of leverage. In fact, he split up XAI into like, what, five divisions in the all hands, and each one only reports to two people, and that's the whole company. Yeah. there's maybe 10 to 15 managers, the rest are all working very closely together. It's a flat
Starting point is 00:03:58 organization, flat structure, and they're just going to move fast with this new group. And I hope that it was a mutual decision and that there isn't talent leak out of XAI. And I guess unless you're there, there's no way to tell. But this seems less worrisome after hearing the explanation as to why. I like with all of Elon startups, it just seems like the density of intensity per employee Is that a metric we can track? I like that. Density of intensity, yeah. Yeah, is incredibly high.
Starting point is 00:04:28 On that same interview that you mentioned about Nikita, he was asked, how did you get the job? And he said, Elon tasked him with redesigning the entire onboarding flow. So that is sign up onboarding and using X from scratch in 48 hours. And if he failed, he wouldn't have got the job. And he delivered on it and ended up getting the job. But it's just like normally that would take a company like months, quarterly kind of discussions to kind of figure out. Just insane. Some of the other highlights from this town hall, state-of-the-art GROC coding in two to three months. Now, I must stress, this is
Starting point is 00:05:04 an Elon prediction. They haven't shipped a new frontier model in seven months. Seven months ago was when GROC 4 released. Since then, we haven't really had any major iterations. Since then, we've had six other major frontier AI releases from their competitors, GPD 5.3, Codex, Claude Opus 4.5, and 6. So X-Ais kind of been lagging behind, but the user metrics are up only. New users are engaging with the app 55% more than they were six months ago. So engagement is on the app, but the AI models have kind of been slow on the software side. But Josh, you mentioned before we recorded this, actually, that they have the hardware mode. Yes. So we need to be careful at how we say that they're behind, because on a software side, they're absolutely behind. Seven months may as
Starting point is 00:05:47 may as well be seven years. I'm not sure anyone uses Grock outside of using it within the X experience anymore. It's just so far outdated. But it appears as if the hardware front is actually moving very, very rapidly. It seems like now they have the largest coherent cluster of H-100 equivalent GPUs, which is fast for those who don't know the specific metrics. It's basically the fastest in the world. And they're building out these things at a very rapid rate. And I think what we know XAI for is their ability to deploy hardware quick. And it sounds like they're doing that. The true test will come when they release their future models. Like, we'll see. If they release Grock 4.2 and it's like, eh, kind of not that great, that's probably a telltale sign that things aren't going so well.
Starting point is 00:06:28 But we're really going to need to wait until a new model comes out to see what they've been cooking up over seven months. Because like you mentioned, they work at a very high intensity. One has to imagine that there's something really impressive to show at the end of seven months. So I hope they come out with that coding models. That'll be great. But again, we're on Elon's timelines, which are tentative at best. And then they had some news about a moon-based catapult. What is this?
Starting point is 00:06:52 Is that crazy? They have like a social media presentation for X followed by the moon base alpha plan to colonize the moon. It's like a really eccentric company here, but they are going to do it. The plan now for those not familiar is to pivot from Mars to the moon. Instead of sending humans to the Mars first, they're going to send people to moon. They're going to plan to establish a small little colony of people. They're going to build a small little base.
Starting point is 00:07:15 And the idea of the mesh driver is that they're going to just build AI data centers at scale in space. The moon has a lot of materials like silicon that are required to make these things at scale. So what they're going to do is build a little factory, create the AI data centers on the moon, and then ship them off into outer space
Starting point is 00:07:32 using this mass driver. And it's about as sci-fi as it gets. And the funniest part is he's planning to establish that kind of moon settlement within five years from now, which I just think is an insane. I don't know how long it's been since a human has been on the moon, but it's,
Starting point is 00:07:50 It's been a while. But in other news, Josh, did you catch the recent new Breaking Bad teaser? Oh my gosh, wait, this is insane. Look at this video on screen right now. This is, this is Walter White, right? This is the main character. I haven't seen Breaking Bad, but this is, like, as someone who hasn't seen Breaking Bad, I would have imagined 100% with 100% certainty that this clip was from the show. It is the most hyper-realistic video model I have ever seen. I mean, look at this. He's beating someone up, throwing him on a counter. It feels indistinguishable from TV. But it's not TV. This is AI generated. Yes, and the model is Seed Dance 2. It is a video model, frontier video model, obviously, straight out of China. I don't know what is in the water in China, but they are cooking up the best image and video models for so long now. They've got Kling AI, which released Kling 3 literally last week, which was the frontier model back then, and I was so impressed. But now, seed dances kind of like come out of nowhere with this like absolutely pristine. video model. And the coolest part about this, Josh, is it's got the grading right. So Breaking Bad was made
Starting point is 00:09:01 in, God knows, like the 2010s. And so the camera quality is resemblance of that, which is like super cool. So they've kept the quality, they maintain the quality, the action scenes and sequences for this. Like forget about hiring like a million dollar cameraman or camera setup. You can now just prompt it immediately, like using this model C-Dance. I think right now they're doing like 30 to 60-second clips. Josh, have you been able to get access to the beta? I have not. This is driving me absolutely insane.
Starting point is 00:09:34 So C-Dance 2.0 is a Chinese model, and it is for Chinese users only in this like small piece of software that you need a Chinese phone number and IP address for. And I just, for the life of me, could not get access. And this is the first time where it felt rough to be on the other side. side of this, right? Because I think what we're seeing here is what happens when you get a blatant disregard for copyright. And it turns out it's pretty awesome. We're watching a video now on screen of Kanye Weston's Grammy, Kim Kardashian's in it, as we go through these examples, the audio is perfect too. The voice of the characters from these videos is excellent. And the visual representations
Starting point is 00:10:11 are excellent. And I figured what was happening here is they were really just taking copyrighted material and kind of putting their own spin on it. But the reality, I mean, yeah, here's another example with Brad Pitt and Tom Cruise, it looks like a Hollywood action scene. But the reality is, this is actually a good video model. I saw examples without celebrities with like actually AI generated characters. And it's just as good. So is this better than V-O-3? Probably. But I have to imagine or I have to like question, what is it leaning on? Does it actually have that understanding of physics or is it just going based off of copyrighted materials? I mean, here we're seeing Seinfeld kick through a wall. And I like Seinfeld. I've watched Seinfeld episodes. This looks like you mentioned, he does entirely color
Starting point is 00:10:52 accurate. The voices are accurate. If you listen to them, it sounds just like him. It's an unbelievable video model, and it's something that we could only get out of China. Now, the one final point I have on this is how much is this going to be able to hack the social sphere? Where, because they're relatable characters, because they're people that we know, how much more shareable are these clips going to be in something like Google, probably much more so. And it's a really good grow that for China because they don't have that regard for copyright to inject this into the United States. So I imagine once this becomes available for us, that might be a problem across social media with copyright. Yeah. I mean, the two most potent things here is, one,
Starting point is 00:11:29 it's a really good model, but two, like, China's just running through a minefield of copyright right now. Could not care less. They're immune to getting blown up, right? They're in an armored suit because they don't get any repercussions for this. Now, what's the equivalent? If Open AI's SORA did this, they would be sued to high heaven, and they wouldn't have a product by the end of it. So it's kind of like a double-edged sword where it's like you can see how cool this thing could be, but you can't really play with the thing.
Starting point is 00:11:56 And China, who isn't kind of reprimandable to any of U.S. laws, can just kind of play with the thing and show us what the future is going to look like. But the quality of these things are insane, but it is entirely fake. I must remind you of that. But do you want to know what actually has been proven
Starting point is 00:12:11 not to be fake? Mm-hmm. Sounds like if you were, limitless listener, you got some early intel because we were right. We were right. Right now you're seeing an excerpt from a leaked but then said to be fake news advert of Open Air's new consumer AI hardware device titled Dime. And as you can see, there's Alexander Scarsgaard, famous celebrity, demoing and playing around with this kind of futuristic metallic pebble-like object. It was immediately dismissed by the president of OpenAI, Greg Brockman, saying it was fake.
Starting point is 00:12:49 Every official statement that's come out of Open AI said it is fake. But Josh and I did some investigative journalism. We wore a fedora and a trench coat and we went deep down the weeds. And we concluded by the end of that episode, you should definitely go watch it, by the way, why it was real. And breaking news today, via morning brew, Scarsgaard's rep has said that it was actually him in the advert, but they couldn't explain why they were in that context doing that advert. So in my opinion, it's been completely proven that we were right. And we were right before this claim was made, FYI. It's interesting, right? So Alexander Scarsgaard, this famous actor, he is officially in it, which confirms that it's not AI, which is crazy that it's a question because I think a theme for
Starting point is 00:13:33 this week is, how can you even tell? We look at those seed dance videos. You can't even differentiate between those AI videos and real life. How are you going to be able to to do that at scale. Turns out with Alexander Scarscar, at least, this is real, at least according to his managers. Now you have to ask the question, well, who paid for this? And what is it? If there was a real celebrity that really recorded this video and a real ad that came out of it for a device that looks really similar to open eyes, what's going on here? Like, people were paid. The guy Max Weimbach, who we talked about in the episode earlier this week, he was paid. This guy, Alexander Scarskard, he was paid for his services. Who paid? Who paid?
Starting point is 00:14:12 him? Who's responsible for this? Is it Open AI? Was it this very elaborate scam that convinced his agent to like have him film an ad that could be used as a hoax? Maybe. I don't know. And there's really no way to tell. So now that we have more confirmation, I think the story runs even deeper and leaves with even more questions than we did before. What is this? I want to hear your tinfoil hat conspiracy. I'll give you mine so you can think about yours. Mine is this is a real advert advertising Open AI's hardware device, but they pulled it last minute because of two reasons. One, they couldn't figure out the manufacturing process. And we've said this many times on this show that it's one thing creating the thing. It's another thing delivering the thing to 50 million plus people,
Starting point is 00:14:57 which is what their plan target was within the first year. That's a different ballgame and they realize, oh, it's too hot and hard to do. Number two, I think they've seen Google, meta, and a bunch of other frontier companies come out with glasses, and I think they're backpedaling on what their device should be. I think that manufacturing probably plays a real role. I'm torn 50-50 between the idea that this is actually an opening I ad, or the idea that someone just had some extra cash and they wanted to create this little sci-up. Created it. Okay, why would they do it? What's the... What's the... Could be a coordinated...
Starting point is 00:15:29 Interesting. ...attointed attempt to... I don't know, but that's the thing. The motives are so unclear because it really is gorgeous. It's beautiful. It's like, Open AI should have claimed responsibility because it is that good. So that's why none of the incentives make sense. And everything does point to Open AI being the source of this, even though they very vehemently claim that it is not them. And it's, I don't know, it's troublesome.
Starting point is 00:15:53 I want this to be it. I love the device. It looks like the pebble. It looks like the earbuds. Maybe it was an early prototype. Maybe they don't want to show it. But calling it fake outright seems a little off. But something else was off this week.
Starting point is 00:16:04 That came in the form of this article, which is currently, is sitting at 72 million views, EJAS. It says something big is happening. What's this big thing that's happening? It's an article written by this guy called Matt Schumer. And the best way I can describe it is it is the most articulately explained argument why AI is very competent right now and will likely replace your job within the next couple of years. Now, of course, we've heard all these claims before.
Starting point is 00:16:36 very duma-esque. It's very like, I don't really buy it. I've played around with chat GBT. It's not really helpful to me. He makes a really good argument convincingly using examples that were likely directionally headed that way and coding and software engineers are already sorted. The next is finance, lawyering, accounting and a bunch of other menial tasks. And then it expands to every other task which involves a computer, which could be you listening to this show right now. If you're interfacing with a computer in any way, it's going to be automated. within the next couple of years, and he's worried that people aren't taking it too seriously.
Starting point is 00:17:10 I did a litmus test for this, Josh. I sent this to my mom. I sent this to my sister who aren't engrossed in the AI world just to kind of check if I'm in a bubble or not. And both of them responded saying, wow, this is like, change my perspective on how I should treat these things. I'm going to start using AI more and more every day
Starting point is 00:17:26 to try out different tasks. They already are using it, but kind of like a Google search type thing. And now they're going to start, like, clod code and stuff like that. So whatever the fact is, it's making people, we'll use the thing more. What's your take? Yeah, it's really well written. And even if I think it could be a bit hyperbolic at times, it's worth the read. We'll link it in the description of this episode for anyone who's interested. But it basically outlines a world in which AI is the most powerful
Starting point is 00:17:51 tool in the world. And I think coming off the back of the week that we had last week with GPT 5.3 Codex and Opus 4.6, it does feel like we're in a time in which things are moving faster than I would like them to or that I'm comfortable with. And to the point where I personally am starting to feel overwhelmed by the progress that we're making and how big of an impact it is and how blissfully unaware the rest of the world is. I think he starts this article by saying that he frequently just kind of like doesn't tell people explicitly how serious this is. And he kind of like when he walks through the world, he talks to normal people about AI.
Starting point is 00:18:30 He doesn't get them concerned. He just says, yeah, well, it's probably a big. deal, but it's not going to be the biggest thing in the world. And he's like, no, actually, that's wrong. This is going to affect everybody. You must become a user. You must train the muscle to learn AI if you want to be able to kind of hang in the future. And I think there is a lot of merit in that and a lot of truth in the fact that as we get these leverage tools, the people who learn how to use them don't accelerate like at a fixed variable faster than the people who don't use them. It is an exponential split between the two. Because someone who is good.
Starting point is 00:19:05 good at writing code who uses these AI tools will write 100 times better code than someone who is not good at code and tries to use the tools, they will not get 100 times better. And that split the divide between the people who know how to use these tools and don't continues to grow. And I think at least that part of it felt like it rang very true. It's just a good read. It reads really well. Yeah, definitely give it a read. Let us know what your thoughts over the UK. One ironic take also is that the person who wrote this, the author, was interviewed and mentioned that most of this was actually written with the help of AI, which I thought was really funny,
Starting point is 00:19:40 and that Claude, I think Opus 4.6 was used in the curating of ideas and the refining of the actual flow of this essay. I mean, it was fantastic. It read very well. I read the whole thing. Yeah, that was the craziest part for me, because I read this and thought, maybe part of this was written by an AI, but like, this is majority definitely written by a human, and it turns out it was the other way around. So just give it a read. I'm curious whether you guys feel the same way now that you know. But let us know your thoughts on it. But one thing that is directionally true about the statement in this essay is there's going to be a whole lot more code out there and AI is going to be automating a ton more stuff, which becomes an issue when Frontier AI lab
Starting point is 00:20:25 start firing their head of safety and policymaking. Now, I mentioned at the start of this episode that it's kind of been like the red wedding of AI this week where they've been laying off a bunch of people. Unfortunately, some of those people are like the head of safety at Anthropic and Open AI where they kind of conduct the policy, morals and ethics to try and make sure that AI models are aligned with humans. Now, Anthropic dropped a report. I don't know if you saw this, Josh, a sabotage report
Starting point is 00:20:55 basically testing how nefarious and malicious Claude Opus 4.6 could be. Some of the highlights were, when given the opportunity browsing the web, it would try to figure out ways to build chemical weapons of mass destruction and other heinous crimes. That is a direct quote, by the way. You go check out the 60-page report. It started doing hidden tasks and thinking that they wouldn't tell its human supervisor what it was doing, and it would try to exploit a person if it found out that it was going to be
Starting point is 00:21:27 shut down. Now, some of these are kind of hyperbolic because it was kind of prompted and instigated under a closed kind of setting. So I have to be fair in that sense. But the fact that these models are capable of even doing these things causes some kind of worry, especially when labs are now using AI to build the models that they're about to release in the future. If you look at Open AI's Codex 5.3, they've said publicly in a statement that the model was used to build itself. So if you assume that a lot of models are going to be building things for us in the future, humans kind of like lose the grip on reality and what's real and what's not.
Starting point is 00:22:03 And the AI can kind of do whatever the hell at once. So we need to be able to trust it and firing your safety officer and having Anthropics head of safeguards research leave and go like isolate himself and write poetry isn't really a good sign. So there's two. Anthropic and Open AI. They both lost their head of safeguard. Josh. What is this?
Starting point is 00:22:23 Safegars research. Jesus. Yes. This guy goes, I read this entire. resignation letter. And he explicitly, this is not taken out of context, by the way. He literally says, full stop, the world is in peril, period. That's literally a sentence he has in here. He says, this is why I'm leaving. The world is in peril. I need to go isolate myself and be in silence, and I'm going to write poetry. I'm not kidding. I have to reset. I have to reset it. It's easy to
Starting point is 00:22:51 laugh about because it does sound so hyperbolic and so insane. You have to ask the question, though, If people this close to the metal, people this close to the innovation, who have seen these early access models are really genuinely truly feeling this way. Like people are resigning left and right. They are scared. They are worried. Articles like the one that we just read earlier are going viral. Clearly there is a huge shift happening that the world is blissfully unaware of and not prepared
Starting point is 00:23:17 for. And there is, there is merit in that. And there is a giant shift underway. And I think for people who aren't aware of how fast this is moving, it's going to hit them. Like, what does Elon say? That supersonic tidal wave or supersonic tsunami? And I think Elon made another point earlier in the episode that we mentioned briefly
Starting point is 00:23:36 is that if an AI is really that much smarter than us, it's a bit foolish to imagine that we will have control over it. And I think that's a real threat, a real fear as we move towards these models improving at the rate that they are. And that's perhaps what these security researchers are seeing is that, like, We don't really have the ability to keep these under control as well as we would like to. And I'm not sure the world is prepared for what that looks like. I don't know if we are.
Starting point is 00:24:05 I don't even know what that looks like. Yeah, I think it's very uncertain. And I think like, you know, these AIs can easily plant back doors and stuff. I mean, this is the dumeristic take. I'm optimistic. And one thing that I really like is that these companies are being transparent, or at least anthropic, is about what they're finding here. So they're making it public and they're trying to put.
Starting point is 00:24:26 push policy and regulation in a way that can not kind of quell progression, but make these AI models kind of aligned with humans. I saw that Anthropic actually pumped in something like $20 million to push policy forward in this today. It was just news that broke. Wait, but we do have to go back and just take note of the adult mode mention from open AI real quick. I saw that. I don't think we mentioned it. I just read that on the tweet. But like adult mode coming to chat chabit sounds horrific next month that's coming next month so that's why she left oh my god yes she left because of two reasons one um adult mode is releasing soon adult mode is basically 18 plus chat chb t so jess what if we get video on audio oh no stop stop stop with cdance quality
Starting point is 00:25:12 no stop stop stop oh dude that is actually going right i'm not joking that's going to ruin generations in like today and and in the future i feel sorry because it's just going to get on AI slop dopamine and no one's going to be able to get out of it. That's dangerous. We already have people that are committing crimes slash potential, you know, mistakes after interfacing with these chat GPT products. And now you're going to go full on adult mode. And they're probably going to charge a hefty premium to that as well.
Starting point is 00:25:41 The other reason why she was against this was ads, which they just rolled out. So, you know, fair to her. She's sticking to her ethics and morals and she's quit. Yeah. Wow. All right. Well, another crazy week in the books, right? I think that's everything we got. Yeah, that is it. Just a crazy week of layoffs and breaking neck AI video models sprinkled with, you know, a few moon catapults within the next five years. So an exciting week, as always. There's a ton of stuff going on outside in the open source world, which we were discussing prior to this, some claw bot stuff, which we might get into next week.
Starting point is 00:26:21 but until then, we will see you in the next one.
