Limitless Podcast - Anthropic Just Got Hacked by China. These are the New Front Lines.

Episode Date: February 25, 2026

Anthropic came forward with a statement accusing China's open source AI labs of theft via distillation, taking data from 16 million fake conversations. With Google and OpenAI echoing similar concerns, we examine the ethical dilemmas of "distillation attacks" and the hypocrisy within the U.S. AI industry. As the Pentagon leans on AI for national security, we discuss the precarious balance between innovation and ethics. Perhaps the most important conversation of our lifetimes.

------
🌌 LIMITLESS HQ ⬇️
NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/
------
TIMESTAMPS
0:00 Exposing China's AI Theft
2:22 The Scale of China's Distillation Attack
4:26 Legal Boundaries and Ethical Dilemmas
5:50 The Pentagon's AI Dependency
7:05 Balancing Safety and Speed in AI
8:56 Hypocrisy in AI Practices
10:20 China's AI Innovations and Open Source
13:16 The Strategic Shift in AI Development
15:00 The Moral Dilemma of AI Warfare
19:57 Concluding Thoughts on AI Ethics
------
RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:00 China just got exposed for stealing our AI. In a new report from Anthropic, three top Chinese AI labs were exposed for having 16 million fraudulent conversations with Claude, with one specific goal: to try and steal its capabilities to train their own models. Now, the week before, Google said the same thing about China attacking their Gemini models. The week before that, OpenAI said the same thing. The top three American AI labs are blaming China for trying to hack their AI models. But here's the twist in the story. What China's actually doing may not actually be illegal in the first place. In fact, this is something that every AI company is doing to get ahead in the AI race.
Starting point is 00:00:39 In this episode, we're going to explore what all these reports confirm and whether distillation, the hacking vector, is actually a bad thing. Yeah, so it starts with this blog post that Anthropic published earlier this week, titled Detecting and Preventing Distillation Attacks. And I guess maybe it's helpful to just kind of define distillation as a concept before we get into what they're accusing China of. And basically, the way it works is there is a teacher and a student model. So the teacher is the large model. That would be Anthropic's Claude Opus model. It's this huge model. They've spent hundreds of millions, billions of dollars training it and turning it into the model that we use every day. That model provides these high-quality outputs to the student, which is the smaller model that the knowledge is getting distilled into. So basically, the smaller model, the distilled model,
Starting point is 00:01:22 learns to mimic the outputs of the larger model, but does so at a fraction of the cost because it's able to kind of cherry pick the types of outputs that it gets by prompting it very specifically. So anybody with sufficient access to a model and enough prompts can actually get enough information to emulate the large model with a much smaller data set. Now, the outputs are not always as good as the large model, but they're significantly cheaper and oftentimes very close. So in the case that you get an extra breakthrough or two on top of that, you can build a pretty impressive model and allegedly that's what's happening with these models from China, at least according to Anthropic. At least that's what they say.
Starting point is 00:01:56 But why is what you just described a good thing? It's because all the hundreds of billions of dollars that are invested in building out the best AI model isn't sustainable for the long-term future. In fact, if you want to have a model that's small enough to fit on your phone, but as intelligent enough as the top models, it needs to be distilled through that process that you just explained. So it's going from a big model to a smaller model that is just as intelligent in certain specific ways. The stats from this China hack on Anthropic, Josh, are kind of insane. So I mentioned 16 million exchanges, but they spun up 24,000 fake Anthropic accounts. Now, I have to specify, Anthropic does not allow Chinese users to access their models for the specific reason that adversaries to the U.S. could get access to superintelligence that they're building.
Starting point is 00:02:47 So Deepseek, I'm going to name some names now. Deepseek, one of the top AI labs, which caused the stock market to crash at the end of 2024, I believe, were responsible for 150,000 of those exchanges. Moonshot AI, 3.4 million, minimax, which is a favorite that you and I have spoken about on this show, 13 million exchanges of those 15-minute conversations. There is an argument here that the open source gold rush that has been happening in China was mainly because they were stealing U.S. secrets. Wouldn't that be funny if that was the case? And then if that's also the case, then what do you do about it? I mean, Anthropic kind of came out and they were very upset about this
Starting point is 00:03:23 clearly. But at the end of the day, it's like kind of on them, the onus is on them to protect their systems and prevent this from happening. There's a really great post that you have on screen. And it's a joke. It says, my son asking me a lot of questions. It's a distillation attack, obviously. And I think it's kind of funny where like the irony is, and we can get into the hypocrisy of the whole thing, is that Anthropic as a company very much has done this in the past in order to get where they are. And they are kind of the person who's crying wolf now saying, wait, we're getting attacked. This is not allowed. We should not be able to do this.
Starting point is 00:03:56 Yeah, I mean, Anthropic is doing this with their own models, right? They've distilled Claude Opus into their haiku model. Google's distilled Gemini Ultra into Gemini Nano. This is a common practice. So then the question becomes, which part of this is illegal? What is China done that is illegal? Well, it's two things that are. Anthropicus claim. Number one, they've got this fancy terms of service, which they're lawyers that
Starting point is 00:04:18 have been paid millions of dollars that have dropped it up, which says, hey, hey, hey, if it's our model that you're doing this too, you can't do that. We've patented this thing. It's going to be illegal and we're going to see you in a court of law. The issue is China is in China, and they don't abide to the U.S. legal system at all, which brings me to the second thing that they've violated, which is a geographical restriction. They don't let anyone in that region access clause. So the fact that China has been able to pull this off from the top AI labs means that they've illegally spun up accounts to do this. Well, you know who doesn't care about laws is China? Like, they could not care less. In fact, this is the time for wartime CEOs. Like, in very many ways, this is the largest
Starting point is 00:05:00 war that's being fought between the U.S. and China. And it's around AI. And I think for them to say, that's against our terms of service, this is wrong. Like, that is not a grounds for defending yourself because clearly they have no regard for any sort of law. I mean, you look at Seed Dance 2.0 and how it violates every copyright law under the sun. And yet, people don't care. It's the best video generation model in the world that exists. So it is a challenge to claim that because they're violating in terms of service, this is an illegal thing that you shouldn't be able to do. And they've not just cut off China, but I think it's important to know that they've also cut off other frontier AI labs. They famously had this beef with XAI recently where they cut off all
Starting point is 00:05:40 of the Claude Code access to other labs. So Anthropic has been very controlled and closed down in who's actually able to access their models. And it sounds like someone was able to bypass that and they just got pretty upset about it. Well, I mean, the Pentagon is relying on the likes of Anthropic, X-AI and Open AI to fund the warfare effort against China, right? To your point, we're in like a wartime position. These AI models are being used as a geopolitical weapon. And so whoever owns the best model per se can advance the quickest. So it's like an economically dependent thing. And this whole drama with the Pentagon has been,
Starting point is 00:06:17 the Pentagon has been using Claude for pretty much quite a lot of covert activity, including the recent capture of Nicholas Maduro, the former, I guess, president of Venezuela. And the issue now is that Anthropic is restricting Pentagon's access, like American-owned self-defense against these kinds of things. So the Pentagon is getting fed up and issuing them an ultimatum and saying, listen, if you don't figure this out, we're going to classify you as a threat to the country. Now, I have to give credit to Anthropic for maintaining their identity evenly across every single
Starting point is 00:06:52 facet, but I don't think it's the smart way to do it, because at the end of the day, there's going to be things that require more uncensored versions and you just need to be compliant with that fact, because to your point earlier, Josh, Claude, OpenAI, chat GPT, has become a national asset, and so it needs to be treated as such. Yeah, it's a matter of national security. And the thing about Anthropic that's unique to Anthropic, and I'm not sure many other companies in the AI space is their mission statement, where if you talk to any employee who works at Anthropic, they'll tell you the purpose of the company is safety and alignment. And I think while it's a valiant effort and incredibly important, it doesn't really bode well for the current state of affairs in which
Starting point is 00:07:31 velocity, momentum, and just raw speed to get to the best model possible. is actually beneficial. So I think what we're seeing here is there's just these increasing conflicts with, I mean, the secretive defense and the Pentagon wanting access to do things that they deem to be a matter of national security and like XAI wanting to go and build code using their tools. And they're like, no, no, no, no, no, that's not how we want this used. We're not going to allow that. And then the rumor is, is that apparently the Pentagon actually just kicked out Anthropic and now GROC and the XAI team is responsible for being the AI provider for the Pentagon. So I found that interesting too. It's just like a little side development. Well, I mean, like what you're getting
Starting point is 00:08:10 at there is that some of these AI models or AI companies in America are kind of being super hypocritical. This tweet actually explains it really well. Hey, did you hear about the little like $1.5 billion lawsuit that Anthropic had to pay out over pirating or illegally downloading seven million books to train their own models? Open AI is facing similar lawsuits against newspapers or across newspapers, code repositories and authors, I'm pretty sure Anthropic got sued for using Reddit data to train their models. Google trained their entire model over the index data that they took. Now, the question then becomes, is that fair?
Starting point is 00:08:48 Who are paying the authors and creators of the content where these AI labs that have like amassed hundreds of billions of dollars worth of valuation? Who's paying those creators? No one is, right? So you could argue that that is a form of distillation. Now, obviously, that's looking at it in a very black and white. right face, but I do think it's hypocritical. And most importantly, the memes are just so, so good here. You've got people asking Claude in Chinese, what model are you? And then replying, hey, I'm
Starting point is 00:09:16 deep seek. And then you've got this one here where it says, I can't believe someone would just steal from Anthropic like this. Anthropics spent millions of man hours, handwriting, code, text, art, and books. Obviously, you know, tongue and cheek, this isn't actually real. The point that's being made is that all information is kind of taken or stolen or interpreted in some way shape or form. So what makes it any different for China in this regard? The crux of the argument is that the same foundation that Anthropic built its models on is the foundation that Chinese models are building their foundation on. It's just one level kind of up where they clearly stole, maybe not stole content, but they clearly used the content that we've produced as humans over time to train their model.
Starting point is 00:09:58 What Deep Seek is doing is the next layer up. It's taking the, I guess, the quantized version of all the human intelligence that we've developed and then distilling that one layer up. It's easy to see why they would be upset, but it's also easy to see why everyone is kind of deeming them as hypocritical. It's like, again, you know that you are a nation state actor, like relatives the rest of the world, in one of the most important wars that's being fought. You know that you are going to be getting attacked. You know that these people are going to be coming for you to build their own models in the race for this AGI and beyond. And to think that it's not going to happen and to be upset when it does just seems wrong. And I think that's probably where a lot of the backlash is coming from is because,
Starting point is 00:10:37 I mean, again, it's on them to solve for these issues before they happen or accept the consequences if they don't. And that's just what happens here. I mean, there's like, this is a, this is a bar fight. There are no rules in this fight. It is the only thing that you're trying to do is get to AGI as fast as possible. And clearly, China doesn't care. Can I say something in China's defense? And maybe this is a hot take. Their models be banging recently. Okay, like they have been churning out new model updates from the likes of Alibaba with Kwen 3.5. By the way, if you haven't tried this model out, apparently it's really amazing with agents. It's absolutely crushed benchmarks.
Starting point is 00:11:14 Once again, open source. We've got Minimax AI that we mentioned earlier, which was the biggest perpetrator of this distillation attack against Anthropic. It's the most used model on OpenRouter. Also, what's interesting is like Minimax 2.5 is the most popular Chinese model for OpenClaw 2. And I personally used it. Like when I was running into the OAuth issues with Claude, because they were kind of threatening. Again, they were threatening to ban users for using OAuth, for going around things. They're just the hardos.
Starting point is 00:11:39 Like, they have no fun. But when they were threatening to, like, break people's accounts and ban them, I switched over to Minimax 2.5. And it actually worked very well. And it's a fraction of the cost. And I was like, hey, if you're going to push me away, I'm going to go here to these models that get the job done for me. And Minimax was that one. I have a question for you. Like, where are you geographically located right now?
Starting point is 00:11:59 Are you in China? No, I'm certainly not. Okay. So it looks like they're just giving you free access to do these things. There's no geographical jurisdictions that they're kind of like placing on your restrictions. They're just letting you do the thing. It's awesome. Like all of these models are open source.
Starting point is 00:12:13 These are kind of embellishments that America should be propagating, but they're not. They're playing the opposite. They're playing kind of secretive. And it's not working out in their favor. I mean, you've got minimax. All these latest Chinese labs, by the way, GLM5, KimiK2.5, minimax are crazy good at computer use and agentic tooling. Kimmy K2.5, actually, for the OpenClaw fans out there, released a browser extension, and it's actually really good because the major issue with using
Starting point is 00:12:40 OpenClau was that there was security issues. Well, they created a sandbox environment that you can now use it. So they're innovating at scale, and to the case that they might be stealing certain secrets, I don't think this is regarded as a hack or a stealing thing. I actually just think they're trying to get better models out to more people. And hey, if America can use it, it's hardly a geopolitical thing. So I don't know, I'm kind of in the defense of China here and maybe that's a hot take. And then kind of finally, I just want to put one thing on that China note. I'm not sure that it's out of their own goodwill of their heart. I think it's like the reason their open source is probably because they're behind. I have, I would imagine that if they did have an Nvidia equivalent
Starting point is 00:13:20 in China that was creating top tier GPUs and they did have the, yeah, they did have these leading models like Opus 4.6 and GPT 5.3, they would close it down because there is so much value in owning that. But because China's behind, there's value in being open and sharing it and gaining as much adoption as possible as quickly as possible. And it seems like it's more strategic and tactical than out of the goodness of their heart. But I mean, again, open source really benefits everyone. And as a U.S. citizen, I've used plenty of the Chinese models and they work awesome because they're just so cheap and effective. Well, to be clear, it doesn't benefit everyone. It benefits the users of those models, right? Because the American AI labs, their valuations are going to tank if you have a
Starting point is 00:14:03 Chinese open source much cheaper version that can run on much less expensive hardware. So it makes sense that the Chinese models are basically going the open source route so that they can kind of like chip away at American valuations and then, as you said, Josh, entrenched users in it. But it's not even just American LLMs. There's Chinese models that are specifically, just good in China and beat a lot of the American models. Like, what you're looking on the screen now is not Transformers 6. It is a 30 second video from C-Dance 2.0, which is a Chinese video model, which is just at the front of its own race.
Starting point is 00:14:39 It's basically at the top of its kind. And it's super cheap to produce like Hollywood cinematic effects right now. Seedance 3, the stats will leak the other day. 10 to 18 minutes of continuous cinematic video. So we're going from 30 seconds to almost like a casual episode on, I don't know, like on your network TV's worth in a matter of seconds. It's just kind of insane to see. And I don't think that, you know, this is a nudge against China. I just think like, you know, this thing is accessible to anyone and everyone should be at scale soon. When you're in a bar fight, the dude who like smashes the bottle over the counter and starts waving it around as a weapon, like that's the guy that wins. The person who are armed and willing to break the rules and willing to do whatever it takes. to win, that's the person that wins. And China time and time again has proven that that's what they're willing to do. And it creates this really difficult moral dilemma between companies like Anthropic that I genuinely do believe have people's best interests at hearts, but none of the
Starting point is 00:15:37 incentives align with that mission. There is no incentive for being safe when the opposers on the other side of the planet have no regard for it. Because should being safe slow down our progress that only allows them to catch up or accelerate ahead. And then we are living under a world in which it is run by Chinese rules from Chinese models. And it's this impossibly difficult dilemma that they're trying to navigate. And I really have a lot of empathy for that because it's a difficult place you want to create this safe superintelligence, this safe AGI that doesn't harm the world. But at the same time, you do need to be a wartime presence.
Starting point is 00:16:12 You need to lock down your endpoints. You need to have detection for 24,000 fake accounts that are extracting tons of data for you. Like this is a serious issue and I really hope that this is kind of like a warning cry or just like a refocusing for a lot of these AI labs in how important it is to keep your stuff locked down or just do whatever needs to be done to win this race. To round this up, I see a few things happening going forwards. And number one, I think companies like Anthropic and maybe even Open Air and Gemini or Google to an extent are going to start locking. down their APIs in a few ways. Google started locking down their thing to open claw, their API to OpenClaw this week. Anthropics started doing the same after announcing this distillation attack. Now, this is not going to be good for NetNet for users because, you know, they say that
Starting point is 00:17:05 they're preventing Chinese hacks, but really like it's the software engineer in America that suffers from this. And I would say it would have the opposite effect that they want, which is these software engineers who can't afford, you know, to spend tens of thousands of dollars every month to access top-tier models are just going to go to these Chinese models. So it's going to have the opposite effect of what you actually want. I think the other thing that we have to recognize, which is just the uncomfortable truth is, this isn't a conversation about AI models and the AI race.
Starting point is 00:17:29 I think this is a geopolitical discussion. This is America versus China, as it always has been. And to Darryor's point in, what was it? It's the name of it, Davos. He stated that, you know, giving or selling GPUs or selling model access to China is the equivalent of giving them the keys to nukes, right? Because if you assume that these AI models are going to become intelligent enough, they're going to be used against each other's adversary. So you can't necessarily or you don't necessarily want to give China access to these side of things.
Starting point is 00:18:00 The progress of AI and the safety of AI will fall to that lowest common denominator. Where like we want a good video model, well, China doesn't care for copyright. They go and create seed dance. Anthropic doesn't want to cooperate with the Pentagon. And it wants to make sure that the Pentagon does things a little safer than the Pentagon would like. well, Grock is there to step in and to fill that void. And the reality is that while these morals are so important to stand on, they're so incredibly difficult to enforce because the stakes are as high as they are. And I think when we look at the Game of Thrones, how to evaluate
Starting point is 00:18:35 all the positions of all these companies, it's becoming increasingly clear that the moral compass is going to become increasingly complex as the stakes get higher. And a company like Anthropic, who wants to be Anthropic, is going to have a very very important. very difficult time maintaining that, even though it's probably critical for the safety and well-being. The other thing I was thinking about is when these types of hacks, hacks using distillation, removes all the safety caps that American AI labs put in. So for example, if you had an uncensored version of clod, you could use it to create or help you create biochemical weapons. But Anthropic puts in safeguards so that you aren't able to do such things, right? Chinese model labs that
Starting point is 00:19:18 are distilling models to train their own models don't have that safety limit. You need to rely on China being able to do that and not adding any nefarious backdoors. So like I see the point around like, you know, model American model labs being responsible for their own thing and understanding that they are now a national level asset and they need to kind of respond effectively. But equally, we can't necessarily just be relaxed and let China do similar things like this. So it's, it is a tricky one. I think without doubt the frontier of modern warfare, again,
Starting point is 00:19:48 against these two nations looks like an AI model attacking each other. I don't think it's got anything to do with weapons. It's got quite the opposite. That's why the Pentagon cares so much. That's why they're signing deals with Open Air and GROC to create drone warfare technology and so much more. So I think this is in the end. We're going to see way more attacks from this.
Starting point is 00:20:05 Maybe even in switched roles, I don't know. But interesting to see, nevertheless. Yeah, it's, I mean, again, this Game of Thrones is just going to keep getting more interesting, higher stakes. People are going to start sacrificing more and more. and this is just the most recent example of Anthropic being the one in the crosshairs, but I'm sure it's just a matter of time
Starting point is 00:20:23 until others are as well. But I think that concludes the episode today. That is the update on the anthropic drama. That's everything we need to know about it. And I guess the problem for today, which I'm curious about, is like, kind of where do you stand on the issue? It's complicated because in a way,
Starting point is 00:20:38 everyone is right and everyone is wrong. Like, everyone is breaking the rules, but does, like, what are the rules actually, are they actually able to be enforced? I don't know. But yeah, I'm curious to hear just general takes on the issue here. It's a good one. Like, how do you feel?
Starting point is 00:20:53 Like, for those of you who are playing around with, like, Kimi K2.5 or Minimax, like myself, do you feel, like, more likely to pick them up now that you know what's going on? Or are you just kind of on the side of like, ah, this is happening and it's cheap for me to use and I can run it privately at home. Maybe it doesn't matter. I don't know. We didn't talk about that. Are you now more or less inclined to use these models?
Starting point is 00:21:13 Dude, I, okay, I'm just going to be very honest. I'm still going to use these models because I'm not exactly convinced even though I understand where Anthropics coming from that distillation is such a bad thing I think they need to figure out a way to prevent people from distilling them
Starting point is 00:21:28 if you can access it via an API you've got a security issue, not a national threat. Yeah, I think I'm probably in the same boat where I will continue to experiment with Seedance because Seed Dance is so much better than everything else and I'll just use the best products at the time and I hope that the American companies continue to provide the best products
Starting point is 00:21:44 And yeah, I guess that concludes today's episode. So if you did enjoy it, please don't forget to share it with your friends. That's a big way to help us grow. Liking, subscribing, commenting. If you're listening to this podcast, rating five stars goes a long way. And yeah, we have an amazing substack that comes out twice a week that you can also subscribe to. Everything is linked down below in the description. And EJAS, unless you got anything else, I think that's it for today.
Starting point is 00:22:07 No, that's it. So we'll see you guys on the next one. I'll see you folks. See you guys. a smile and call it progress for a while is friendly fast and bleep so getting all the free
