Limitless Podcast - Another Anthropic Leak... This Time, Claude Code's Source Code

Episode Date: April 1, 2026

There has been another major security breach at Anthropic, this time revealing their Claude Code source code and upcoming features like Kairos and Buddy. We discuss the implications of this leak, born from a simple mistake, and its impact on Anthropic's reputation and valuation. Plus, we highlight the potential for innovation within the open-source community from the leaked code and share updates on Claude's future models, including Mythos and Capybara.

------
🌌 LIMITLESS HQ ⬇️
NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/
------
POLYMARKET | #1 PREDICTION MARKET 🔮
https://bankless.cc/polymarket-podcast
------
TIMESTAMPS
0:00 The Big Leak Unveiled
1:25 How It Happened
3:55 Upcoming Features
12:04 Security Concerns
17:18 Brand Impact
20:20 Closing Thoughts
------
RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:00 This morning, a security researcher posted a single link on X, and within hours it had 3 million views and millions of copies backed up all across GitHub. By the afternoon, when we're recording this episode, Anthropic was scrambling to delete old versions of their NPM package, but it was too late. What leaked was the entire source code of Claude Code. Every single line, 512,000 lines of TypeScript across 19,000 files, every tool, every permission system, every internal code name was leaked, all because someone forgot to exclude a single debugging file from a public package.
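For anyone wondering how one forgotten debugging file can expose an entire codebase: a JavaScript source map can embed the original, unminified source verbatim in its `sourcesContent` field. Here is a minimal sketch of pulling that source back out of a map; the file names are hypothetical, not from the actual package:

```python
import json

def extract_sources(map_text: str) -> dict:
    """Recover original files from a source map, if the map embeds them.

    Source maps list original file paths in "sources"; when the optional
    "sourcesContent" array is present, it holds each file's full text.
    """
    source_map = json.loads(map_text)
    sources = source_map.get("sources", [])
    contents = source_map.get("sourcesContent") or []
    # Pair each path with its embedded text; entries may be null or absent.
    return {
        path: text
        for path, text in zip(sources, contents)
        if text is not None
    }

# Toy example: a map for a bundle built from two hypothetical TypeScript files.
demo_map = json.dumps({
    "version": 3,
    "sources": ["src/tools.ts", "src/permissions.ts"],
    "sourcesContent": ["export const tools = [];", None],
    "mappings": "AAAA",
})
recovered = extract_sources(demo_map)
# Only src/tools.ts embedded its text, so only it is recovered.
```

Bundled JS typically points at its map with a trailing `//# sourceMappingURL=` comment, which is why shipping the map alongside the bundle is enough for anyone to walk all the way back to the source.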
Starting point is 00:00:33 And that story alone would be a major story. But what makes this even crazier is what people found buried inside the code. And now we have information about every feature that's coming down the pipeline, as well as all of the secrets that Anthropic and the Claude team didn't necessarily want us to know. This is a really big leak. I can't believe this happened. I mean, big leak is one way to describe it. Absolutely terrible for the Anthropic security team.
Starting point is 00:00:54 Oh, this is brutal. Everything. By the way, this is the second leak that Anthropic has made in the last five days. So they're shipping a new product every single day, but they also seem to be leaking their entire roadmap. We now know what the next 44 product releases are going to be over the next couple of months,
Starting point is 00:01:11 or rather the next couple of weeks, for Anthropic right now, for Claude Code specifically. As you mentioned, half a million lines of code, 19,000 files, and a bunch of different feature releases, which, by the way, have already been built. So they just need to click the launch button. We have all the details, and we're going to get into it. But before we do that, we need to kind of describe how this happened, because leak is one way to describe this, but it wasn't an internal employee at Anthropic leaking these files or this source code.
Starting point is 00:01:39 This was publicly available. Let me repeat that. This code was publicly available in the latest update of Claude Code. Someone within Anthropic had mistakenly left a file, a .map file, a JavaScript source map, in the system that was publicly accessible. Someone found it, and now that original post that exposed this source code has been seen by over 10 million people, and it's only been three hours since it was posted,
Starting point is 00:02:04 as we're recording this, and it's been forked over 5,000 times. So this is basically Claude Code's entire blueprint, entire architecture, the way that its memory is set up, the way that the model works, released for anyone and everyone to use. And a bunch of people have already been using it. People have plugged in different models,
Starting point is 00:02:21 have created their own versions of ClaudeCode. It is just insane. Josh, even you forked it this morning. It's amazing, yeah. And so just to clarify, cloud code is Anthropics command line tool. This isn't the full claw desk type application. But it's the tool that lets developers talk directly to cloud code in their terminal. And it's very powerful software. So what happened? Like you mentioned, and this matches the pattern of the previous leak that we covered in yesterday's episode about the new Mithos model is that they actually just issued the code themselves. It was just available publicly. And the problem is because when they publish this code, there's an NPM package containing this dot map file. And it's a source file. that references the complete source code. And that source was directly downloadable as a zip file from Anthropic's own cloud storage bucket. You just went to Anthropic. You asked them,
Starting point is 00:03:06 hello, sir, can I please have the map file that tells me where all of these references go to? And they delivered it to you. And the irony here is that Anthropic built an entire subsystem called undercover mode, specifically designed to prevent internal information from leaking. And it does things like strip the model code names and the project names. And then went ahead and leaked everything through a build configuration over. site. And it's really got to be code red. If you're waking up at Anthropic right now as a developer, this must be a really brutal morning for you. The funniest part is the undercover mode that you just
Starting point is 00:03:36 mentioned was literally meant to obscure all of this. And the fact that they exposed it publicly means that whoever got access to it could just reverse engineer the entire thing. So let's say you gave Anthropics new model a code name. You could reverse engineer the file to find the original name of the model and how it works. It's just been the craziest mess up in Anthropics so far. So now let's get into the good stuff. This is what's coming down the pipeline. If you are a user of ClaudeCode or Anthropic products in general, we have the totally unreleased roadmap now in plain text available to walk through. And I think that's what we're going to do right now. EJez, you have this nice little artifact generated by ClaudeCode itself to walk us through all of these new
Starting point is 00:04:14 features that are coming to one of our favorite products that we use every day. So please, let's hear the leaks. Let the leaks flow. Let's see. Thank you, ClaudeCode, for creating your own demise and a beautifully visual a artifact for this episode. Thank you very much. So at the start of this or at the top of this page, it says there were 44 product releases that people had never heard of before. So everything you're about to hear right now is new. But there were 20 specific product releases that caught people's attentions. And we're going to go over the top ones for you right now. So the first product release is called chiros, which is basically an always-on autonomous clod. What that means is when you use Claude code, you typically have to monitor it, come back, check the code, make
Starting point is 00:04:55 sure it's doing the right job, test the code, etc. This new update will basically allow Claude to autonomously run on its own. It can check its own tasks. It could create new tasks for itself and work towards a goal. So you could leave it unattended for hours and hours at a time. It's pretty awesome. What I found cool about this also is Kyros will do nightly dreaming. So a fort sub-agent will run four phases that will orient, gather, consolidate it, and then prune, and then distills these daily logs into these structured topic files. And then overnight, it will bake them into the memory and actually learn the same way that humans do, where overnight it will dream and then lock this into the memory and grow and get better every single day.
Starting point is 00:05:32 So Carross is very cool. But this next one is my favorite. This is so cool. This is so cool. So it's codenamed Buddy and it is basically a virtual pet AI companion that lives on your CLI, on your command line interface. It's meant to, and this is me guessing here, act like a personal AI agent assistant that can assist you on all things coding related.
Starting point is 00:05:55 but also once you publish the code helps you edit the app, review the app that you created, walk through it, find bugs. Basically, it's a personal system that lives on your computer and off your computer when you're publishing artifacts or whatever that might be. This reminded me of a game, Josh, and it says it on the screen here.
Starting point is 00:06:11 Tamagocchi, which we, I don't know for the age of the audience or listeners here, but we used to have these like cool devices that you can kind of like hold in your pocket or in your keychain, and you had to keep the virtual pet alive. This reminds me of that, and Microsoft Clippy. Do you remember myself Clippy?
Starting point is 00:06:26 George. Very well. I love having companions. And we have some additional information about this buddy system. Yes. In that there's 18 species of buddies. And a lot of them are animals. We have ducks, goose, blobs, cats, dragons, octopuses. Capibaras? Capabaras. Is there a Capibara? That's a Capabara? There is a Capybarra. Interesting. And actually what we're seeing on screen now is someone took this late information and kind of rendered what he presumed it would look like. So you choose your species. Each species of animal has a rarity tier. There's common, uncommon, rare, epic, legendary.
Starting point is 00:06:57 And then there's shinies even. So it's like this whole tiered game that's built on top of it. And then there's statistics like debugging patience, chaos wisdom, snark. And what you're seeing on screen is this person's kind of choosing his character. He's choosing the traits that it has. I assume there's some sort of rarity baked into this. And it's going to be this fun, gamified version of a Tamagotchi built into quad code, which seems really interesting and novel.
Starting point is 00:07:22 And I don't know, it just seems fun. Did you say that this was first releasing tomorrow? Josh, April 1st? So, according to the code, they're teasing this on April 1st for release in May. So if that's true, by the time you're hearing this episode, within an hour or so, they should be teasing this. If the leaks are true, if they don't change their mind. And then if that's true, then the odds are that this will release in May is probably correct because that's what's said in the code. Now, like you mentioned tomorrow is April Fool's Day, or I guess when you're listening to this, happy April Fool's Day.
Starting point is 00:07:51 And there is a chance that this isn't true, but based on the rest of the leaks, it seems like this was very much non-intentional. Okay, but there are three more features that I want to get through as well. One of these is called coordinator mode, which basically describes a multi-agent program that allows you to control a swarm of AI agents.
Starting point is 00:08:11 So right now, it's typical if you're a software engineer to spin up not just one instance of code, but multiple. That's normal. People are already doing this. But an issue starts to arise when they are multiple, pull of these agents. We're talking like 50 plus 100 plus that are doing all different types of work and need to kind of work together to figure problems out together. It becomes really hard to coordinate. This coordinator mode is basically Anthropics feature to help you manage all of these.
Starting point is 00:08:35 Think of it as like an operator board or a control system that you can kind of like manage it, similar to like a strategy computer game. It's funny. There's a lot of like computer game analogies and the features that they're releasing. This is basically that. There's also one that I really enjoyed, which is the ultra plan feature. Oh yeah. And it basically solves the problem. of Claude running out of context. By giving it a 30-minute sandbox in the cloud, to think deeply before presenting a plan. So when you're working on these complicated things
Starting point is 00:08:59 with cloud code, it often refers to plan mode. But plan mode sometimes runs out of context. It doesn't have all the information. This offloads all of that in a 30-minute window to a giant server that can handle all of the context and actively improve the planning of the project that you're building. So when you go and set it free to go build these things, it has a much better idea of exactly what you want.
Starting point is 00:09:19 And I think plan mode, if you're building anything serious, is a really powerful thing. And adding Ultra Plan on top is something that I will be using, very much so for the larger projects. That's such a good point because right now, they keep on promoting that Claude has, or Claude has like a one million context window, but it becomes super crappy after 200,000 characters, right?
Starting point is 00:09:38 So like the performance quality goes out. So this is hopefully something that fixes that. So I'm excited to see that in the pipeline. But there's one more thing that I want us to talk about, which is called, I will refer to as the custom agent creator, code name wizard. So typically when you set up ClaudeCode and you use code code, you're using the system prompt that Anthropic gave to you. It is like predefined.
Starting point is 00:10:00 It is already written out. So you can't kind of adjust the personality of the ClaudeCode agent or anything like that. This new builder gives you that opportunity. You can form and create your own agents with their own personality, own memory types, different kinds of tools that you can give them access to, location. So maybe they live on your desktop, or maybe they live in the cloud, or maybe they live somewhere else locally on a hardware. device, you can control and manage all of these. Now, with the earlier product that I mentioned, which is the multi-sworn coordinator, you can start to see how these different pieces of the puzzle fit together to create some kind of gamified experience for end-to-end software engineering. It's just
Starting point is 00:10:33 really cool to see all of this. But the craziest part about all of this, Josh, is all of these products and features are already built. They're built, they're just unreleased yet. So I'm starting to see why Anthropic or how Anthropic has been able to release a product every single day. But we don't have that code. We can't actually. create these buddies. We can't actually use Superplan yet. We don't have everything. So what was leaked today, it's probably important to distinguish what we have versus what we don't. This is a huge leak, but it's not everything. So if I were to download a copy on my computer, I would get the harness, right? And as you were describing it to me earlier as the car body. We're not actually getting the brain.
Starting point is 00:11:08 We're not getting the clawed model weights. We don't have this brilliant intelligent model now that we could run locally, but we do have the software that kind of acts as a harness for it. Is that right? So all of those people are getting excited about getting access to the blueprint. for Claude's AI model itself. This is not that. Think of the engine of a car being the actual model and the intelligence of the AI itself.
Starting point is 00:11:31 And then think of the code that got released or leaked today as being the car chassis, the actual car body. So what's cool about this is, whereas you may not have access to Claude the model itself, the code from that model, you can plug in an open source model.
Starting point is 00:11:46 And people are already starting to do that. I'm seeing instances online where people have plugged in Deepseek, they're plugged in Quinn and created their own version of Claude Code, the CLI interface and whatever that looks like. So this is really critical infrastructure and software. I cannot believe the Anthropic team released this. It is just, it's so nutty. It's so bad. This is like, this is an IP issue right here. Like their equity, their $350 billion, actually rumored $450 billion private valuation. A lot of it is based off of Claude Code, which has risen to extreme popularity over the last
Starting point is 00:12:18 six months. So it's just insane that this has actually happened, but there's more. Product features are one thing, 20 releases ready to go, but we also got confirmation about the latest Claude models that are about to be released. Yes, this is very cool. For those who haven't seen our episode that we just published yesterday, it is all about the previous leak that happened with Cloud, which is called Mythos and Capibara, the new internal model names. And now we have actual verification from the source code of Anthropic that they are here. So what we're seeing on screen now is kind of like a system prompt for this thing called undercover mode. And now undercover mode is meant for anthropic employees only. When they use cloud code to publish on public and open source
Starting point is 00:12:57 repos, they use undercover mode to kind of strip away all of the classifying characters that would possibly leak information out to the public. So in the system prompt, it says never include commit messages or PR descriptions of internal code names. For example, animals like capybara or announce any unreleased model version numbers like opus. 4.7 or Sonnet 4.8. As I was reading this, I found one that I found particularly interesting at the bottom of this under Bad, where it says, bad, never write these. Fix bug found while testing with Claude Capibara. And I was like, huh, that's interesting. Clearly they are using Capibara internally. And I have to ask, is this the reason why they've been shipping product features so quickly? Are they using this God tier
Starting point is 00:13:39 model that they have internally, that they've been teasing that costs a tremendous amount of dollars per token, and they're using that to actually just build the code, review the code, and then publish it faster than everyone else. It seems like that's possible to case. I mean, in the words of Boris Journey, the founder of Claude Code, he said a couple weeks ago can confirm ClaudeCode is 100% written by Claude Code. So we know that the AIs are building the AIs. I think Open AIs is doing the similar thing with Codex, and that is the reason why these teams have been able to ship so quickly. Now, I wish I had a tinfoil hat nearby because I have a conspiracy mode, Josh, which is these AI models might be leaking themselves and it may not be the Anthropic engineers. I know that sounds insane, but I don't think it's unlike. I'm going to put it at like maybe a 5 to 10% chance. But the point is there are a bunch of new models being released by Anthropic coming up soon. We mentioned Capybara, we mentioned Mythos, which is meant to be these big, huge models trained on 5 to 10 trillion parameters. which is like a 3x increase in the size that we already are seeing and using with the models today.
Starting point is 00:14:48 It's going to be an absolute beast of a model. It resonates a cyber security risk, which is incredibly ironic because all of Anthropics stuff is getting leaked right now. But also Claude Opus 4.7 and Sonnet 4.8. So we're going to get versioned upgrades of the existing models that we're having already. So my one question is when are these models going to get released? Because I need to get my hands on them. Number two, will it cause my entire laptop to get hacked? I don't know. So there's like a reputation risk going on right now as well as I want to use the actual
Starting point is 00:15:16 thing. Well, you also mentioned the security part of this. And I think it's worth noting that there has been an increased cadence in security issues recently and leaks and exploits and hacks. And I know they happen all the time, but I can't like there is some sort of correlation happening here between models getting smarter and exploits. I mean, yeah, we have this post on screen here, which summarizes it in a great way. It says this week in security. There is, what is that, six different exploits that happened. And pretty serious ones too. Axios, which is a NPM supply chain hack,
Starting point is 00:15:48 that affects like many, many millions of projects and applications. And if you've ever rehydcoded anything, chances are you use that dependency. OpenAI Codex had a command injection via GitHub branch. There's a terabyte data leak from Mercore. And this doesn't even include the leak from today, which is Claude Code. So there's this increasing cadence of leaks and exploits. And you got to ask the question is like, if Anthropic internally is using these tools,
Starting point is 00:16:15 who else has access to tools this powerful? What can they be used for? Are they actually responsible for any of this? Or is this just a random correlation that's happening? I don't know. I think my main concern is that malicious scenario that you described where people are accessing this tool but using it for bad purposes is already happening. It's coming in the form of prompting.
Starting point is 00:16:38 injections. Like, look, there are six hacks that happened this week alone, and it's only been like two to three days. I wonder if that increased cadence is based off of people being able to get access to intelligent AI models like this and finding floors or bugs in open source code and being able to exploit them, right? You've got a bunch of people, millions of people every day logging on vibe coding apps who have never coded in their entire lives, me included, right? I don't know what's being installed on my laptop. I don't know what data is being leaked. So I could imagine that things like that is happening. But the question I have for you, Josh, is does this matter for Anthropics specifically? Is this a major blow for them? Do you think they lose valuation based off
Starting point is 00:17:18 of this? Or do you think this gets solved in a version update? Well, this is tough because this does sting, right? Like, this is a massive IP leak. And this is a competitive advantage that they're now losing. How much of a value loss is it? Probably not crazy high. I mean, the magic is in the model. The magic is in the clawed model itself, those weights. You You can copy the CLI architecture, you can study the engineering, but you can't actually replicate what Claude can do. So they still have this massive advantage. And even though it's embarrassing and even though it's a really strong leak in which I am,
Starting point is 00:17:51 if I'm one of these Chinese models right now, I am forking this, cloning it. I'm dropping my intelligence in here. You don't need to distill it anymore. You don't even. Well, yeah, you can just, you just take the code base. You take the harness. You put your model in and suddenly you have a Claude Code software with your own brain attached to it.
Starting point is 00:18:07 And that's powerful. So in that case, it hurts because now people know if there are any secrets in how the software was run, how the architecture worked. They now have that in full, clean, plain text. But it doesn't hurt them in the sense that they aren't going to, they're going to lose customers over this. Because the magic is in that proprietary software, those model weights, those are not leaked. It's just the cloud code software. It's just that command line interface. And aside from that, I think it's more interesting for the public just to kind of get access to the roadmap and be able to play.
Starting point is 00:18:38 with the code themselves versus actually damaging for the brand's valuation, but certainly for the brand image, it's not a good look. Yeah, I agree with pretty much your entire take. I'm thinking about the number of PhDs that Anthropics hired on the security AI team. I remember their release from, I think it was about a month and a half ago, and we said this on the previous episode, where they had cloned opus 4.6 discover 500-0-day vulnerabilities, so it was all looking really good. I wish they'd applied it to their own model and their own website and their own APIs. So it sucks that that's happened. I do think they'll get over it, but they'll need to do some damage control at this point. The other major thing is like reputationalally, Anthropics just
Starting point is 00:19:21 come out of a pretty rocky couple of weeks, right? They had the whole blacklisting thing from the US government and the Pentagon, which I believe is still there. And so it's not a good look where their model, which was being used for military operations, is now getting leaked for other different purposes. That being said, I think they're going to get over it. I think this is amazing for us and for the open source community who now get access to the entire system prompt of code code, its architecture design, and can plug in their own models for free. And yeah, now we have a better idea of Anthropics product roadmap. I'm excited to see these 20 features launch soon. Yeah, it's a big leak. I mean, I think it's fun for everyone who's an observer. Thank you, Anthropic, for
Starting point is 00:20:01 be in more open source than ever. I hope that they're able to start using this new copy bar model to actually, you know, check these publications, make sure this doesn't happen, because it's amazing. They have so much intelligence, but it's so spiky. Clearly, an all-knowing AI applied to the entire stack would never have let this slide, but clearly it's not applied everywhere. It's also raising a lot of questions about, well, Anthropic is like the alignment team, but now they are the ones who are going to determine who gets the power of this new model, and they're doing it in a very private, closed way, and they're using internally, and it creates a lot of these interesting problems to look out for. But in terms of the leak today, that's the news. It was a big
Starting point is 00:20:42 leak. I can't believe that actually happened. Like, I woke up this morning and I read the news, and I was like, no, surely there must be wrong. Like, this is hyperbolic, but no, the entirety. It's all there. You can go and read it. It's on GitHub. And it's funny because they're actually actively trying to take down the repos that forked the code. But some guy rewrote the entire thing in Python this morning because you could just do that in a single prompt. And now you can't because the code is slightly different. So it is interesting, noteworthy, crazy, scary, exciting. I'm stoked to get a buddy. I think the prompt for today's comment section could be like, hey, what feature are you most excited about? For me, it's the buddies. I want a little pal that sits in my
Starting point is 00:21:17 cloud code all the time that I could level up. There's like a shiny feature. There's rarity. There's trading cards. I don't know. It could be cool. I'm looking forward to it. But yeah, I think that's the leak today. That's the episode. Yeah, that's it. Thank you. you guys so much for listening. There are thousands and thousands of you over the last couple of of months that have joined us in subscribing, turning on notifications. If you aren't one of those people that I just described, please do so. Wherever you're listening or watching us, Spotify, Apple Music, YouTube, it means the world and helps us out a bunch of you subscribe and turn on notifications. We also have a newsletter going out twice a week to 150,000 people that read
Starting point is 00:21:52 our stuff twice a week. We have a long form essay which goes out, I believe, today as you're listening to this episode. Go check it out. Go write that right now. Exactly. Yeah. Thank you, Pals Joss for writing this right now. And we also have the five daily highlights, or weekly highlights, rather, which we give you the top AI news and frontier tech news on Fridays. So sign up to both of those things and we will see you on the next one.
