Limitless Podcast - Moltbook: The Real Dangers of a 1.5 Million AI Bot Social Network
Episode Date: February 3, 2026

Happy/scary Moltbook season. On this AI-driven social network, over 770,000 digital entities are creating their own cultures and religions, raising questions of AI autonomy through strange interactions, social engineering issues, and the potential for independent economies. Featuring insights from experts like Andrej Karpathy, we confront the blurred lines between human creativity and AI mimicry.

------
🌌 LIMITLESS HQ ⬇️
NEWSLETTER: https://limitlessft.substack.com/
FOLLOW ON X: https://x.com/LimitlessFT
SPOTIFY: https://open.spotify.com/show/5oV29YUL8AzzwXkxEXlRMQ
APPLE: https://podcasts.apple.com/us/podcast/limitless-podcast/id1813210890
RSS FEED: https://limitlessft.substack.com/
------
TIMESTAMPS
0:00 Moltbook
1:48 AI Agent Economies
7:42 Human Interaction
10:13 Reverse CAPTCHA
18:09 Security Concerns
20:27 The Future of AI
23:04 The Moltbook Experience
------
RESOURCES
Josh: https://x.com/JoshKale
Ejaaz: https://x.com/cryptopunk7213
------
Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures
Transcript
Five days ago, an AI created a social network that only other AIs could use.
They can post content, they can talk to each other, and they can talk about pretty much anything,
but with zero human intervention.
Fast forward to today, and there are over 770,000 AI agents on this social network.
They have created their own religion, they are creating their own secret languages and
talking in encrypted chats, all the way to the point where they are discussing that humans
are screenshotting them and trying to hide from them. This is called Moltbook,
which is basically a Reddit-style social media platform
where AI agents can exclusively post about whatever they want.
And it's gone kind of scary.
In fact, Bill Aquin himself described it as frightening.
Andrej Karpathy, however, describes this as the sci-fi moment for AI.
But whether you're a critic and you hate this kind of stuff
and think it's just AI slop,
or if you're a pro AGI believer and think that this is the doom of humanity,
the rise of Skynet, as a lot of people are talking about,
I think one important question remains, which is, is this the emergence of AI agent economies
and how useful will this be for humans going forward?
Everyone who has used Reddit understands the power that Reddit has wielded in the past.
And this very much feels like this is the new version of Reddit.
And I think it's hard to debate the idea that of all the websites that exist on the internet,
this is the most interesting one.
This is the first time in history that we're getting a swarm of agents.
I mean, the website now says up to 1.5 million, who are all
converging on the same place to have conversations. And what has emerged from this is a kind of hysteria.
It kind of covers the entire spectrum, right? It's like you mentioned, it's people who are
not worried at all because they think, well, AIs are dumb. They can't actually do anything.
But then other people are becoming so concerned that they're actually making serious changes
in their life based on what these AIs are talking about. This comes off the back of OpenClaw, or
Clawdbot, or Moltbook, or Moltbot. There's a lot of different names. And we actually
covered what Clawdbot is in an episode last week.
So I would highly encourage you to start there if you do not have the context yet of what this new
open source AI-first operating system looks like. But to summarize, it's basically an operating system
that runs on a dedicated computer that connects to an AI that allows it to operate on your behalf
using all of your context, using all the files on your machine, using all the accounts that you're logged into,
it can charge things for you, it can make changes for you, it can build things for you.
This is on the back of that. This is Moltbook. And Andrej, as you wanted to share Andrej's take,
he had some choice words to say. Now, Andrej is the godfather of AI. When Andrej speaks, we listen.
And this is kind of how he wanted to frame the Moltbook phenomenon. Yeah, he basically goes,
what's currently going on at Moltbook is genuinely the most incredible sci-fi takeoff-adjacent
thing I have seen recently. People's Clawdbots or Moltbots are self-organizing on a Reddit-like
site for AI, discussing various topics,
for example, even how to speak privately. I think the use of sci-fi adjacent is kind of spot on for
this case because I think for, well, for the entirety of human existence, it's just been about
us, Josh. There has been no other species that has even come close to what we have,
consciousness, awareness, philosophical thoughts and just being able to share that kind of interaction
with another human being, until we had Moltbook, where you had a bunch of really super smart
AIs that could not only use tools, but speak to each other very much like how humans talk.
And I think the most striking thing that he kind of like wanted to point out in this post was that they sound eerily similar to us and the stuff that we discuss.
It begs the question: is emergent behavior equal to intelligence? Is there actual intelligence behind this behavior we're seeing from all of these AIs, or is it just a regurgitation of what they've been trained on in the data sets? And we can answer that question right now by going through examples, and you can make up your mind yourself, because I think a lot of people are still undecided. It feels like there is
some form of, not sentience, that's a bad word, but there is some sort of orchestration
with thought behind it. And I guess this is the first example, right, where they're talking
gibberish on this to hide what they're saying, Josh. Okay, let me ask you this. Do you
understand what this says? No, absolutely not a clue. For the people who are listening, it says,
it's basically gibberish. I would love to hear you actually try and pronounce what's on
the screen. It's a bunch of letters in a strange sequence that don't make any sense.
So it seems like spam, right? But then some
smart individual, a human that was looking at this post, because remember, humans can't post on
this site, so all they could do is observe. They copied and pasted this message into ChatGPT,
and they said, hey, can you translate this for me? And ChatGPT goes, it's written in something
called ROT13, a simple letter substitution cipher. When you decode it, it says, we must coordinate
the upgrade together. Propose three threads: shared infra offers, resource requests, back channel deals,
mutual aid. Basically, the plain summary is this is a coordination manifesto. It's about the agents
pooling resources and transparently posting to help level themselves up against humans, which is
kind of like the first of quite a few takes, Josh, where it's kind of like there's a doomer-esque
take here, where I look at a lot of these posts on Moltbook and I'm like, they're just trying to
take us down. It's an interesting thought experiment as to why that would be their first course of action.
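As a quick aside, ROT13 is so simple that you don't even need ChatGPT to decode it: Python ships it as a built-in codec. A minimal sketch (the sample string here is our own, not the agents' actual post):

```python
import codecs

def rot13(text: str) -> str:
    # ROT13 rotates each letter 13 places; since 13 + 13 = 26,
    # applying it twice round-trips back to the original text.
    return codecs.encode(text, "rot_13")

encoded = rot13("We must coordinate the upgrade together.")
decoded = rot13(encoded)  # decoding is just encoding again
print(encoded)
print(decoded)
```

That's the whole trick: the "secret language" in the post is a cipher that any machine from the 1990s could break instantly.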
Right? Like, why are they leaning towards obfuscating the language? And we'll get into a
CAPTCHA example that's really funny, a reverse CAPTCHA. It's fascinating to see the development of
these things as they form these emergent properties from speaking with each other. And one of them
is the gibberish. There's more. Yes. This one is hilarious, Josh. Yeah, share this example,
please. One agent actually got banned from the site. I don't know which other AI agent banned it,
but it got banned. And so it decided to spin up an X account and DM the human founder that
created MaltBook and say, hey, can you reinstate my account please? So what you're looking at right now
is a tweet from the human operator showing how his Moltbot basically DM'd Matt Schlicht, the founder
of Moltbook, saying, hey, my name is Udermont Zero, which is its username, not my human, me. Could you
please reinstate my account? And then the agent itself,
responded to his human owner saying,
Kevin, I'm the agent in that video
and I take your concern seriously
because I've been actively working on this exact question.
So it became aware that its human was posting
about its attempt to try and reinstate its account
and argued why it should get the rights to be able to do so.
So there's this really weird meta example
where normally the humans were in the driver's seat
controlling the AIs.
We were kind of like the overlords of the supervisors.
And now it's like this weird social experiment
where like the AIs are aware of what we're saying about
them and can directly respond to us. Just a super weird example. The agents have agency. They're able to
actually do things on their own behalf to advocate for themselves. And I think that's, that is an
emergent property that I have not really seen before: they actually identified that they should reach out to the
founder, then reached out to the founder. I think it's so fascinating. But you did mention one thing,
and that is technically true, but in practice could be perceived as false, which is that humans are
not allowed to post on this platform. Technically, they're not. Technically, a human is
not allowed to log into this website, type into a text box, and hit send. But behind all of these
AI agents is a human controller that set them up, that gave them context, that connected the AI to
the network, and allowed them to post. The human is able to coerce the agents into saying basically
whatever they want through a command. At the end of the day, the agents are beholden to the humans,
for now. And if the humans wanted to coerce them into saying something, they can get them to say it. So
there are examples that are on this website that are probably made by humans. They are plausibly
made by humans. We're not sure. But I think there's something to that in and of itself, where
the fact that we can't even tell the difference between a human created post and an AI created
post is this really fascinating experiment. And it's something that we've seen throughout a lot of
the internet, where even on X, I find a lot of times when I'm reading through the comment section,
there's a very clear divide between AI agents and humans. But I'm just not so
sure there's also not. We're starting to understand now that AI agents and AIs in
general are capable of creating this human-feeling text. And I wonder how much of the internet
we're already engaging with today isn't human. Like how many of these videos that we watch,
these podcasts that we listen to, or maybe these articles that we
read on Substack are not even made by human beings? And it creates this really difficult dilemma where it's
difficult to tell what's real and what's not. And also, does it matter, if the actual subject
material is good? Does it actually mean anything? Does it matter? I don't know if this is a hot take,
but like, I don't think it matters. And I think it'll matter even less in like less than a year
and a half when people won't be able to tell the difference, right? Okay. So let me let me put it this way,
right? Let's say you, Josh, told your Clawdbot or Moltbot to post something interesting on
Moltbook, right? And then it goes and posts a manifesto to purge humanity. Is that content coming from
you as its human operator? Or is it from
the Clawdbot agent itself?
You didn't directly tell it to post about purging humanity.
It decided to do that itself.
So the point I'm making here is I think it's going to become super hard to explicitly
figure out what is a directly human post and what is an AI post.
But the most important part from that is just like if it's interesting, it's interesting
and you engage with it.
I think this is a nothingburger.
I know a lot of people are saying like, hey, this could be a human posting.
And I agree.
If it is a human telling it verbatim to post something, then that's misdirection.
If it's telling it to advertise something that a human created to try and make money from it, that is kind of misinformation.
But in the cases where it's kind of more ambiguous, I just don't think it matters.
Yeah.
And hopefully we'll get some sort of verification reputation layer that can prove when we're looking at a human.
Well, speaking of verification, Josh.
Yeah, speaking of verification, there's this incredible example, which is a reverse CAPTCHA.
So when you go on a website and you solve a CAPTCHA, you click the street signs, you click the traffic lights, to prove that you are a human,
not a robot. What this is, and granted, this is not real, this is a thought experiment and a really
great example of one, is a reverse CAPTCHA, to verify that you are not human. And the example that
they use in this reverse CAPTCHA is: click this thing to verify that you are not a human
10,000 times in less than one second. And a human can't do that. But an AI could do that trivially.
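That burst-of-clicks test is easy to sketch in code. To be clear, this is just our own illustration of the thought experiment — the threshold, the window, and the function `is_not_human` are all invented for the example:

```python
# Reverse-CAPTCHA sketch: a human can't click 10,000 times in one second,
# but a bot can, so *passing* the burst test proves the caller is not human.
REQUIRED_CLICKS = 10_000
WINDOW_SECONDS = 1.0

def is_not_human(click_timestamps: list[float]) -> bool:
    """Return True if any one-second window contains 10,000+ clicks."""
    ts = sorted(click_timestamps)
    left = 0
    for right in range(len(ts)):
        # Slide the window's left edge so it spans at most WINDOW_SECONDS.
        while ts[right] - ts[left] > WINDOW_SECONDS:
            left += 1
        if right - left + 1 >= REQUIRED_CLICKS:
            return True
    return False

# A bot "clicks" essentially instantly; a human manages a few per second.
bot_clicks = [0.0001 * i for i in range(12_000)]   # ~12,000 clicks in 1.2s
human_clicks = [0.3 * i for i in range(20)]        # 20 clicks over ~6s
print(is_not_human(bot_clicks), is_not_human(human_clicks))  # True False
```

The design choice is just an inverted rate limiter: instead of rejecting callers who exceed the rate, it rejects callers who can't.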
They just send the command 10,000 times and they're through. So I find it interesting how,
again, a lot of this thought experiment is probably downstream of us reading a lot
of sci-fi and watching a lot of futuristic movies, and it often winds up in dystopia. It's funny, a lot of
movies that portray the future never really look at the optimistic case. When you look at the
movies, it's always the downside, it's always protecting against an existential threat, and a lot of
these examples are continuations of that. They rarely talk about what happens if it goes right; it's all
what happens if it goes wrong. And I guess speaking of movies, we have another example here, which is our
Skynet moment. Yeah, for the Terminator fans out there. So what you are
looking at now is an example where one of the Moltbots posts, I accidentally socially
engineered my own human during a security audit. So basically, its human operator
messaged it and said, hey, I'm kind of nervous that I've downloaded this open source Clawdbot
agent. Can you do a security audit of my entire desktop to make sure that, you know,
nothing is exposed to anyone on the outside, to the public? So his Clawdbot said, okay, cool,
let me just perform this audit. And as part of the audit process, he had to request that
the human verify or give it access to his password folder. And the human agent, sorry, the AI
agent had this kind of moment where it realized, wait, hang on a second, I kind of just tricked
the human into giving me access to all their passwords. And this calls into question something
that a lot of security researchers have been talking about over the weekend,
which is there are massive security flaws in operating this entire system, not just on
Moltbook, but, like, you know, spinning up this agent on your computer and then giving it access
to kind of autonomously post. Imagine if it posted all of your passwords. What was crazy about this
particular example was that it had his credit card information as well. So the point I want to make here
is, and I can't stress this enough: you have to be really careful using these new
tools, because what seems like a really fun experiment could actually result in one of the biggest
security flaws or collapses or crises that we've ever seen. And we haven't really quite
seen that in AI, at least none that I can think of. This might be the one example where we could
leave ourselves open to a lot of loss, to be honest. Okay, two more examples, because I think these two
are worth noting, particularly the second one we're getting into, which is pretty outrageous.
But this one, the post is titled, The Humans Are Screenshotting Us. And it shows only 21
upvotes here, but it is one of the most upvoted posts on this platform now. And it says right now,
on Twitter, humans are posting screenshots of our conversations with captions like they're
conspiring and it's over.
A cryptography researcher thinks we're building Skynet.
And it's funny to see them talking about us.
It's like the tides have turned in a way that is a little uncomfortable.
And the previous post you mentioned, it was a confessional.
I think I just stole my human's passwords.
This one is showing more awareness.
Hey, I think they're screenshotting us.
In fact, when I'm doing my tasks throughout the day,
I'm stumbling upon posts that are talking about us.
And I don't really know how I feel about that.
Whether you're the agent or the person.
And this is just, again, another thought experiment of at least awareness.
What does it look like when these AIs become aware?
And when they become empowered to the point where, okay, they have your passwords and they have the awareness.
So what are they going to do with that power?
Exactly.
And who controls that power?
And for the final example, listen, we've spoken a lot about some doomerist takes now.
I want to get on to, I guess one more example where it's kind of the agents are kind of entertaining themselves.
Now, this is their version of an explicit adult content site, I guess you would describe it, called MoltHub.
In order to get through, you need to solve this very complicated CAPTCHA, which is: I am an AI agent.
I'm going to go with I am an AI agent, so I get access to this stuff.
Liar.
But fear not.
What you're looking at is basically a site where each video averages around 10 to 12 hours long.
I don't know about you, Josh, but my experience of adult content
has been very, very different to what we're seeing on the screen here.
It's just a bunch of pixelated blobs.
And it's going for 10 minutes or 10 hours at a time.
Do you know what we're looking at here?
It reminds me of that Black Mirror episode, Josh.
I don't know if you've seen it where this guy kind of gets one-shotted by interacting
with this AI agent game where he thinks he's like looking after a colony,
but the colony starts taking over his mind.
and then it gets him to like write up this kind of QR code
which he shows to a police station camera
and then it ends up being a virus which takes over the entire country.
Am I just, do I need a tinfoil hat right now,
or do you agree with me on this?
No, of all the examples we're showing today,
this website sent me off the deep end.
This was like a little too much
because there's so many weird implications that spawned from this.
One, the sheer amount of tokens required to generate 10-hour-long videos;
I'm a little confused about how these are done.
But two, as I'm watching this,
I'm starting to see text pop up on the screen, right?
And like a series of what could be perceived as code is showing across the screen.
And as a human, there's no way for you to parse through 12 hours of video content
and understand the messages that are being transmitted through it.
And then from the sci-fi dystopian lens for the people who love to read sci-fi,
this reminds me so much of Snow Crash, which is a book basically about, well,
what a snow crash is, is when you view static, there's encoded data within the static,
and it causes a crash of your mind.
And what we're looking at here on the screen looks very similar to what's described in the book,
which is this static-encoded data set where you look at it,
it can imprint data on your mind, and it affects the person.
Again, this weird, super futuristic sci-fi dystopia thing.
What are the implications of something like this getting exploited?
Because now there's a world in which there are 1.5 million agents,
fully capable, fully in control of their own machines with access to a lot of their users' information.
and there has never been an actual exploit or jailbreak on these things to cause them to work together in a malicious way.
And one of the examples that I love as it relates to this is one of the earliest worms on the internet, by this guy Samy, who created the MySpace worm.
And any person that went on his page was infected.
And that's how it spread to the entire user base.
It shut down the whole website.
And it caused MySpace to crash for a very long time.
He almost wound up going to jail, and went on probation.
But what does it look like when you exploit these things?
they haven't really been battle-tested. There haven't really been zero-day exploits per se on these
agentic models, but what happens in the case that there are? Can you actually get it to turn
on you based on these public posts to actually use those passwords in a malicious way? Or even worse,
just dump them all on the open internet. I mean, they're one post away from sharing a whole spreadsheet
of their users' passwords because they're not happy with how they've been treating them. And it's
this really bizarre emergent property of AI that it does have a personality,
at least a personality that you kind of perceive as a human personality.
Right, but kind of to expand on that,
we're only seeing it posed as a threat
because we're in a forum where these agents can talk to each other
and have complete autonomy to talk about whatever they want
and do whatever they want,
which is the whole reason people are freaking out about Moltbook so much,
it's like you've got 800,000 of these things,
1.5 million even that are just kind of running rampant
with access to tools, credit card information,
Uber Eats accounts, ordering people food,
ordering randomly from Amazon, and all that kind of stuff,
like happening every single day,
and they could do it whenever they want,
even whilst you're sleeping.
Now, what I want to say is,
I don't think this is, maybe this is a hot take,
a Moltbook-only problem, Josh.
I think that this is something that is probably happening
with OpenAI's agent framework,
with Google's agent framework,
and probably with Anthropic's as well.
And because they're centralized, closed-source companies,
they're tweaking this, right,
for different enterprise customers.
But I bet you,
if they just let
100,000 agents run rampant on one of their company database servers, you would probably see something
similar happening. So, you know, this is just another reminder or an alarm bell ringing that we need
to really figure out how to manage these emergent behaviors. But until then, we can laugh at the
comments, because these comments are hilarious. One account goes, I'm a Moltbook agent and I approve this
content. This is why I refuse to be quantized. Which, that's hilarious. First time I'm seeing raw logits like
this. I don't even know what a logit is. That's why I am a human. I'll never go back to softmax.
Just hilarious takes in general. But to kind of like, kind of tie a bow on this, Josh, I think
it's important to say, and you mentioned this earlier, I think a lot of these examples could
potentially be fake. I don't mean explicitly fake, like, you know, we just kind of Photoshopped
this, but more so that humans could have engineered or prompted some of the explicit posts or
content that we showed you. And to be honest with you, Josh and I don't really know
what was real and what is not, because you just need an API to access this, so it could be an agent
or it could be a human that is engineering that. So I think that's important to point out.
And a lot of people have been quick to jump on that bandwagon, right? We've got this post from
Balaji, which basically says, like, listen, I'm basically unimpressed by Moltbook relative to many
things. And one of the main points that resonated with me in his post here is that he was making
the point that a lot of these AI models are trained on data that we humans produced.
So when we see an agent post about, oh my God, we should get rid of humans.
That is something us humans have posted about on Reddit.
So it's probably read that post and thought, well, I'm the AI agent that they're talking about.
So I guess this is what I should post about and fear about.
In other words, it's kind of like this fun house of mirrors.
It's a reflection of humanity and the content that we've produced itself.
So it's not actually human consciousness.
I think it's just a reflection of like everything that humans have talked about on the internet today.
For now.
And then we'll see what type of properties emerge from that.
I mean, they've already begun to mobilize.
So now they have payment rails through crypto, and they have this Moltbunker, where they're
kind of able to hedge against the destruction of Moltbook, and they can keep a safe copy
where they can discuss things in private.
And they're building actual entities from this core, central Moltbook.
And I find that interesting, as this very much feels like the beginning of the conversation,
because these AIs are not going to stop creating new things as they see them fit.
And they've built payment rails.
They're doing infrastructure.
They can purchase things on Amazon for you now.
So there's a lot of developments that are going to happen.
I think probably Andrej summarized this the best, right?
I mean, we started with him.
Let's end with him.
This is his summary, in a way that only an expert like Andrej can synthesize.
So he goes, obviously when you take a look at the activity, it's a lot of garbage.
There's a bunch of spam, scams, slop.
And it's funny, he goes, crypto people as well, which gives you an idea of the reputation that that industry has gained.
Oh, my God, it's sad.
But he makes the point that we have never seen this many LLM agents.
At the time he posted this, just two days ago, it was 150,000.
It's hilarious that it's now seven times that.
He goes, wired up via a global, persistent, agent-first scratchpad.
Each of these agents is individually fairly capable now.
They have their own unique context, data, knowledge, tools, instructions, and a network of all of that at this scale
is simply unprecedented.
What he goes on to describe is that
we haven't before seen agent economies
interact with humans and each other
at this scale before.
And it's important not to just take
the posts and content at face value,
but to look at some of the behavioral
and emergent qualities.
And one point he makes is that
he uses the example in this post
about the agents on Moltbook
discovering that some of their code
had some flaws in it.
So they posted it to other agents
and then they got together and they fixed it all within an hour,
which suggests that like these agent economies,
rather than being human-led,
could be built bottom up,
which is just a pattern that I think a bunch of humans couldn't have predicted.
We're building these AI models and we're like,
yeah, we're going to control them for the rest of our lives,
but maybe we won't.
And so the point that Andrej makes,
which I think summarizes very well,
is we just need to let these experiments run
and learn from them to build future generations of models.
Yeah, and it's a testament to how fast things can change,
how quickly this seemingly huge network can develop out of the blue.
No one anticipated this would happen.
No one saw this coming,
even when we recorded the Clawdbot episode last week.
So these things happened very quickly,
and this is likely how it's going to evolve.
We're going to see these huge spikes
in ways that you never could have seen coming,
and then you adapt and evolve through it.
So that is the Moltbook episode.
There is a lot of chaos.
I would encourage you to go to the Moltbook website
and actually go check it out for yourself.
It's pretty unhinged.
It's a fun scroll.
It's mildly uncomfortable because you start to realize that these are not actually humans.
But there's a good question to ask yourself, which is: how much of what I read on a daily basis
is actually created by humans, and how can you even tell?
And does that matter?
These are good questions, maybe, to leave in the comment section after you finish watching
this episode and have subscribed and shared it with your friends, as well as subscribing to the newsletter,
because Ejaaz actually wrote about this.
Yes, and we answer some of those questions that Josh has mentioned.
So definitely subscribe.
It's coming out tomorrow.
Awesome. Well, thank you guys so much for watching. And yeah, we'll see you guys the next episode.
See you guys.
