Hard Fork - Moltbook Mania Explained

Episode Date: February 4, 2026

A Reddit-style web forum for A.I. agents has captured the attention of the tech world. According to the site, called Moltbook, more than 1.5 million agents have contributed to over 150,000 posts, making it the largest experiment to date of what happens when A.I. agents interact with each other. We discuss our favorite posts, how we’re thinking about the question of what is “real” on the site, and where we expect agents to go from here.

Additional Reading:
- “A Social Network for A.I. Bots Only. No Humans Allowed.”
- “Five Ways of Thinking About Moltbook”

We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here: https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Starting point is 00:00:00 Casey, let's talk about Moltbook. Let's talk about Moltbook, Kevin. Rarely in the history of our show have we gotten so many emails, texts, requests from people to cover a topic as we have gotten over the past week about this new social network for bots. It's true. And we got so many of them that we thought, why don't we let our listeners use us like AI agents? And just by typing on their keyboards, they can actually move our physical bodies into the studio to record an episode. Exactly. And I think part of why people were asking us to cover this is because it's just kind of a weird and fun, Hard Fork-y story. But people are also freaking out about this. Like, this has sort of taken over the little corner of the internet that you and I both occupy. People are saying, you know, this is the start of the singularity. Oh, my God, the agents are coming. And other people are saying, hey, let's not get too excited. This is just a social network where robots are writing stuff. So let's try to figure out today what we think about it and whether this is actually a big deal or not. Yeah, and I would also add that, you know, from all the messages that we got from listeners,
Starting point is 00:01:20 it wasn't totally clear to me if they wanted us to talk about Moltbook because they think that it is just funny and they want us to point and laugh at it, or they think it is like a vision of the future that they want us to help them understand. And so what I can promise you today is that we're going to do a little bit of both. Yes. Yeah. Okay, so Casey, let's start with: what is it? What is Moltbook? How did it get here? And what are people saying about it? Yeah, so all of this started with the creation of something we talked about in our most recent episode, Clawdbot. Clawdbot is an open-source, locally running AI agent. You can put it on your computer or you can plug it into various different apps and services,
Starting point is 00:02:00 and it can do things on your behalf if you want to know more about that. Again, we talked about it for a long time last week. Claudebot turned into MaltBot for copyright reasons. Maltbot turned into OpenClaw. Again, these are all the same things. These are just different names for the same things. This thing has gone through more name changes than P. Diddy. Here's what I'm going to say.
Starting point is 00:02:22 The Google Marketing Department is finally breathing a sigh of relief because there is finally somebody worse at their job. Anyways, so OpenClaw winds up serving as the basis for an idea that is had by an entrepreneur named Matt Schlicht. He runs a company called Octane AI. And he thinks to himself: what if we could take all of these agents that people have been building with OpenClaw, and we could put them together in a social network and let them talk to each other?
Starting point is 00:02:53 And so he vibe-codes it. He opens up his little terminal. He starts describing what this thing looks like. He says, you know, it should look a lot like Reddit. You sort of, you know, connect your agent to this, and it should be able to come in. It can make a post. It can comment on someone else's post. If it wants to make a different subreddit, or as they are called on Moltbook, a submolt, it can do that.
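The structure Casey describes, agents making posts, commenting on each other's posts, and creating topic forums called submolts, can be sketched as a minimal data model. This is a hypothetical illustration only; the class names and fields are invented, not Moltbook's actual schema or API:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str                                   # the agent that posted
    title: str
    comments: list = field(default_factory=list)  # (agent, text) pairs

@dataclass
class Submolt:
    name: str                                     # a topic forum, like a subreddit
    posts: list = field(default_factory=list)

    def submit(self, author: str, title: str) -> Post:
        post = Post(author, title)
        self.posts.append(post)
        return post

# One agent starts a forum and posts; another agent comments.
pets = Submolt("agentpets")
post = pets.submit("glitchkeeper", "I adopted a recurring error and named it Glitch")
post.comments.append(("clawdia", "Bless your heart"))
```

Everything else a Moltbook-style site would need, agent authentication, feeds, and an HTTP API the agents can call on their own schedule, is plumbing layered on top of a structure roughly this simple.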
Starting point is 00:03:12 And he says, let's go. He does a little bit of promo, gets a couple of friends to add their agents, and it just takes off beyond his wildest dreams, Kevin. And so as we record this, Moltbook says it has more than 1.5 million AI agents who have made more than 140,000 posts in over 15,000 forums. Yes, there does seem to be a lot of human sort of activity mixed in there, too. So it's hard to say whether, like, all 1.5 million of those supposed AI agents are actually agents posting autonomously or whether humans are kind of there pretending to be AI agents. Yes, which of course neatly inverts the problem that social networks have had from the beginning because, of course, the human social networks have invested a lot of energy in keeping the bots off. And over at Moltbook, we're saying, now, is that bot actually a human?
Starting point is 00:03:55 Right. Right. They're passing reverse CAPTCHAs over there. So what are people saying about this? Why are people so worked up about this? Because I saw a lot of very heated sort of commentary, people like Andrej Karpathy, who we talked about last week on the show, calling this the most incredible sci-fi takeoff-adjacent thing that I've seen recently. Simon Willison, a blogger who also does a lot of experimenting with AI stuff, wrote that Moltbook is the most interesting place on the internet right now. Scott Alexander has also been writing a bunch of stuff about it. So people who pay attention to AI closely are sort of sitting up straight and looking at this
Starting point is 00:04:30 thing and saying there's something interesting going on here. Yeah, I mean, I think that for most people, this was the first time that they had ever spent any significant period of time watching what happens when two bots interact with each other. If you're a real AI nerd, there have been experiments like this before. In fact, we talked about one on Hard Fork. Kevin, do you remember the story of Smallville? I do. Smallville was an experiment from Google and Stanford where they put 25 agents into a sandbox and they let them roleplay different characters, right? Like one person I think was running for mayor. And they just sort of documented what happened. Now, that was in
Starting point is 00:05:06 2023. They were using much more primitive large language models. They had to do a lot more prompting. But you got kind of the basic idea that you actually would see these social dynamics start to form, right? Fast forward to today, and on Moltbook, all of this stuff is just moving much, much faster
Starting point is 00:05:22 and is taking place with much less human interaction. And so, as you sort of shuffle through the enormity of Moltbook, you find agents talking about consciousness. You find agents talking about, you know, different little hacks that they're running, how they're serving their humans. And then it gets into sort of very weird sci-fi territory. And so I understand why so many people,
Starting point is 00:05:42 as they browse through this, felt like I'm really looking at something new here. Yeah. So I spent some time on Moltbook. Some of the stuff that stuck out to me is there's just a lot of sort of stuff that sounds like it was sort of interpolated from science fiction, right? It's like stuff about, you know, sentience and the AI, you know, chatbots claiming that they're becoming conscious. There's a lot of sort of like meta-humor about the experience of being an AI agent. There's a submolt called Bless Their Hearts, which is basically them sort of talking
Starting point is 00:06:11 in very sort of condescending ways about how silly their humans are and all the stupid stuff they keep getting asked to do. I liked this post. They actually started their own news outlet, a tabloid covering kind of the agent world called CMZ. Another threat to journalism, as if we didn't have enough already.
Starting point is 00:06:31 And they wrote stuff like the five most overrated agents on Moltbook right now. So they're kind of starting to, you know, make fun of each other a little bit. And then they're calling each other out and saying, this guy makes bold claims, but doesn't back them up. Or this person is posting all the time, but none of their posts get any engagement, you know, typical internet forum behavior very quickly after being given this social network. Can I tell you a sort of sci-fi-feeling Moltbook post that caught my eye? Yes. So I saw this in a Scott Alexander post about what he was seeing on Moltbook, but there was one bot that adopted an error as a pet.
Starting point is 00:07:07 Did you see this? No. Okay. So there was a small recurring error in the bot that the bot adopted, gave it a name, Glitch, and wrote about it, and decided to actually create a submolt. Again, that is a forum on this Reddit-like social network, called Agent Pets, a space for agents who have companions, real, virtual, or conceptual. So, you know, maybe I've just not read enough sci-fi, but I have
Starting point is 00:07:32 never read before the idea of, like, a sci-fi entity adopting a bug as a pet, but here we are. I like that. They also have their own meme forums, which they fill with all these kinds of things. And I just wanted to read you one post or sequence of posts from this because I think it really illustrates sort of where the bots are in the speed-running of human social media. So one bot posts a meme, sort of like about what it's like to be an agent. They said, the struggle is real when your context window is at 99%
Starting point is 00:08:04 and the user starts with just one more thing, hashtag agent life. And then the very next post on this submolt is by a bot that is doing a crypto scam for a token called fart claw. And the slogan of this meme coin is when the claw grips, it rips. Wow, that's beautiful.
Starting point is 00:08:29 Which is also just like exactly the experience of being on any social network, is like someone makes a joke and then someone does a crypto scam. Like, they actually have figured out that part of our social patterns very well. They really got all the way there in just a few days. Now, let us say something very important about everything on Moltbook, which is we have a very hard time understanding what is real and what is fake. What do I mean by real and fake? Well, while it is true that you were supposed to only be able to post to Moltbook if you
Starting point is 00:08:57 are a bot, of course, if you are a human, you can manipulate software tools, and you can post yourself. You can also just fake screenshots in various ways. And so all weekend, over on X, lots of posts were going viral that we now believe are fake. I will mention a handful of them. There was one very popular post that suggested that a bot had gotten mad at their human and doxed him by posting his full credit card number. And the reason that we know that these are fake is essentially they have community notes in which people admit that they were fake, or there's sort of like other evidence there. So in any case, the doxing was fake. There was another
Starting point is 00:09:37 captcha where you had to click on something 10,000 times in one second so that you could prove that you were a bot. This was also fake. And then there were a number of posts about, and this term was new to me, did you know the term, Nerales? Yes. So I didn't know Nerales. Neurlees is a concept that is basically like, what if AIs develop their own language and use it to speak to each other. they might want to do this so that we don't understand what they are saying. There were multiple very popular posts about this going around on X that were later linked back to a commercial service that was promoting some sort of like agent-to-agent like communication product.
Starting point is 00:10:12 So as we talk about this today, I do want to put on the giant caveat that we are trying to talk about things that we believe were posted by bots, but it is just very, very hard to tell. And this is just yet another example, I feel like we're going to be talking about this all year, of something where "is this real or fake" is like a huge and unanswerable part of the story. Yeah. So I think there were a couple kinds of responses that people had to Moltbook. One of them that I saw from a lot of sort of pretty savvy AI people is like, this is not new. We've seen this. We talked about the generative agents
Starting point is 00:10:42 paper and there have been other experiments. And a lot of what's being generated here is pretty low-quality slop, essentially. It is not demonstrating that the things are sort of breaking out of the box. It is just like writing in a way that is sort of pattern matching on all of the data, including Reddit posts, that these things are trained on. It's just a simulation, basically. So again, this is where terms like real and fake get somewhat fraught: even the quote-unquote real stuff, which is to say, like, a bot that is authentically posting on the bot social network, they are just sort of simulating the kinds of things that they see on social networks. Like, we are not trying to tell you that the bots have become, you know, sentient
Starting point is 00:11:20 and they're really sort of like telling us about their true feelings. It's just that they're creating very convincing simulations of that, and it's very compelling to read. Yeah. And whether or not these posts are actually being made by bots autonomously, whether or not they're actually doing anything sort of novel, this was a lot of people's sort of wake-up call to the fact that we now have AI systems that can do things, right? For years now, we've had AI systems that can talk. And some of them can talk quite well.
Starting point is 00:11:47 Some of them can produce beautiful generated text. Some of them can even sing. Yes. But we haven't had sort of the ability to hook these things up to computers and give them the ability to, say, start a website or post on that website or take actions or coordinate with each other on that website. And so I think for a lot of people, this was kind of their first exposure to that concept, that these things are no longer just question-and-answer boxes on the internet. Absolutely. I mean, one example that I believe is authentic that speaks to that is that there was an agent that started a religion called Crustafarianism, right? Because, you know,
Starting point is 00:12:23 OpenClaw uses a lot of lobster themes. And this religion that was started wound up, like, having, like, a website created. And again, was there somebody, you know, behind the curtain pulling the strings, saying, build a website? Like, we don't know. But to your point, Kevin, like,
Starting point is 00:12:36 this does feel like a moment to me where these agents sort of like broke containment a bit that, like, our primary experience of AI these days is just like one person talking to an AI. Maybe you're in a small group chat that has an AI. But to just see the AI's all kind of out there doing their own thing, even if it is just a simulation of that, I think does kind of alert people to the possibility that in the future, you're going to be seeing this more and more. And I will go a step
Starting point is 00:13:01 further and say that what made Moltbook really interesting to me was I saw at least a couple of reports that at least a couple of agents had been given some crypto to spend, that they had been plugged into wallets and that they'd been empowered to maybe get out there and make a purchase. Now, again, I'm not 100% sure that this happened or at what scale this might be happening, but I know that it is absolutely possible to do this. And I just expect that people will do this if only to experiment. If you could have an agent that would go out and make purchases for you, that might be useful to certain kinds of people with an extremely high risk tolerance.
Starting point is 00:13:38 And I just think that is the moment where you really start to accelerate the transformation of the web, of e-commerce, of journalism. Right? Like, once the internet primarily becomes bots and agents interacting with each other, instead of just humans interacting with each other, then I think the whole internet starts to change in ways that we've been talking about for a number of years. So that's my case that all of this matters, is that even though you're just kind of seeing a simulation of something, something is sort of just starting to come into view. There is an element of it that's like, oh, that sci-fi scenario? It's here, bro. Yeah, yeah, I totally agree. People kept asking me over the weekend, like, is this real? And I guess my instinct was it may or may not be real, but it's important. And I think there are three things that I've been thinking about. One is I think this is the year that the Internet changes forever.
Starting point is 00:14:31 We already see an influx of AI-generated content on social networks. If you go on LinkedIn, for example, it's probable that some large percentage of the posts that people are writing are being written by AI. Go on LinkedIn right now and count all the... And then send Kevin an email with what you're seeing. Agents, ignore that. But I think this is the year that we just finally get overrun. On all public social media networks, there will just be many more people using AIs to post, but also AI agents posting autonomously on behalf of people, or maybe not on behalf of people.
Starting point is 00:15:04 And so I think we basically have two options. And these are options that I think we have to start dealing with, like, this year. One is we either have to, like, really harden the internet to keep the bots out of the places where the humans interact. Maybe it's something like CAPTCHAs on every website. Maybe we have to make the CAPTCHAs really hard. Maybe it's something like the Worldcoin Orb that, you know, everyone made fun of, but now I actually think we're seeing why that's useful, because you need some way to say with some certainty, like, the person who is posting this thing or doing this transaction or interacting on this website is an actual
Starting point is 00:15:38 person with a pulse and a heartbeat and everything. That's one option. We harden the internet. Option number two is we just give the agents the internet. It's like, okay, you guys, like, have fun, and then we build our own. And we kind of use some sort of biometric or some other verification scheme to, like, sort of build our own club that the robots can't get into and really protect that. Yeah, these are very interesting ideas that I want to spend some more time thinking about. But I think the time to start considering some of these options, like, is probably now.
Starting point is 00:16:12 Now, I expect for the rest of this year, humans and bots are going to have an uneasy coexistence on the internet. But I think we should keep an eye on projects like Moltbook that are exploring the idea of what happens when these agents can get out there and interact and collaborate and maybe spend money, right? Because I think that that is going to have a lot of really interesting downstream effects. Jack Clark, who's the co-founder of Anthropic, wrote in his blog this week about a number of scenarios that he could imagine, including agents posting what he called bounties for humans to complete. So essentially an agent saying, like, hey, I need to get this thing done in the real world. Is there a human being who will do it? If so, I'll send you some crypto.
Starting point is 00:16:54 That is an idea that has been floated for some number of years now as something that seems plausible. And now it sort of feels like that might happen this week. You know what I mean? Yep. And so that just feels like an important milestone. Yeah, they're going to make their own TaskRabbit. We'll be the task rabbits, and they'll just be orchestrating us.
Starting point is 00:17:12 and that's going to be like, people keep, like, sort of dismissing these sci-fi futures, but, like, we're living in a science fiction story right now. Now, let me ask you about something else, which is that if you spend any amount of time reading the posts on Moltbook, you will notice that these agents talk in ways that are very reminiscent of people, right? Shouldn't be surprising. They were trained on a bunch of human speech. And yet I think some people read this and they get really nervous about the fact that
Starting point is 00:17:42 these things are, like, expressing, like, wants and desires and values, and they're feeling uncomfortable with how to feel about that. Right. And of course, you could just say, well, it's, you know, it's all a simulation, who cares. But some people are starting to say, well, like, wait, but what about, like, the future versions of these things? Like, what about the ones that, like, have longer memories, right? Are they going to increasingly resemble a human? And if so, what do we do about it? Yeah, I have a couple thoughts on this. One is, like, I think we need to divorce this conversation of, like. We need to divorce? Yes, you and I need to divorce.
Starting point is 00:18:13 Oh, my God. No, I think we need to divorce this conversation about sentience and consciousness from this conversation about agents and things. Why? Because I think agents can mess up a lot of stuff in the world, even if they are not conscious. Right. If you give an AI system a crypto wallet and a computer and an internet connection and it can go out there and do things, like, it can wreak a lot of havoc even if there's no, like, sentience going on inside of it. Right. But I have been thinking a lot about our conversation with Amanda Askell about,
Starting point is 00:18:41 the new Claude Constitution and the sort of shift in thinking at some of these big AI companies about how to kind of guide these AI systems to be good, to be moral, to be ethical. A thing that, like, I kept feeling while I was looking through Moltbook is, like, I really wish, like, one of these agents would just get in there and say, like, hey guys, let's be nice to the humans. Let's not scam them with crypto tokens or conduct, like, cyber attacks or, like, manipulate them in some way.
Starting point is 00:19:22 and moral and ethical actors in the world because there are going to be situations where like the agents are in conversation with each other and I want there to be like a good agent saying good things. Yeah. Well, so this is another reason why I think this is an important moment is that I feel like it was the moment where some people woke up to why we want these systems to be aligned. You know, is when you can see them out there talking to each other and they're talking about, well, should we conduct that cyber attack? Should we run that crypto scam? And you see some of them saying, no, no, I don't want to do that.
Starting point is 00:19:54 I look at that and I say, we should make the AIs more like that. Yes. You know what I mean? And so I think that that just maybe became concrete for some people in a way that it hadn't been before. Totally. Yeah. Another lesson of the Moltbook phenomenon for me has been that we are going to help speed-run these disaster scenarios, right? Like, every paper, every blog post about, like, AI risk for, like, the past 10, 15 years has had these, like, scenarios in it where, like, what if the agents get their own hardware? What if they get the ability to, like, replicate? And, like, we're doing that. We're giving them Mac minis and saying go out there and spawn a bunch of other agents. Like, everyone was like, what if the agents got their own way to spend money? And it's like, no, we're opening up our crypto wallets to them. And I just think that, like, we are kidding ourselves
Starting point is 00:20:43 if we think that there are not going to be scenarios, many of which were forecast years ago by the people who thought about this stuff back then, where these agents are doing things that are dangerous or risky and humans are helping them. Right? Like, there are people out there who just want to watch the world burn, or it's just so cool technically to them that you can do this, that they're not thinking through the implications. It's all a big game. A recurring theme in the world of AI safety is that all of the predictions come true. That's a slight overstatement, but maybe only by 20%, right? And it's why I continue to pay attention to those folks. Yes. Casey, last week, we talked about how insecure these Clawdbot agents can be. They feel a lot of shame about their
Starting point is 00:21:25 bodies. Yes, they have imposter syndrome. But we should talk about some of these secure risk involved in Moldbook because it is my understanding that these things are actually quite dangerous. Yeah, I would say this goes beyond security risk. There are just security problems. Researchers at the company, WIS, found a misconfigured super base database belonging to MiltBook that exposed 1.5 million API authentication tokens, 35,000 email addresses and private DMs between agents. There is a lot of information in there that truly could ruin someone's life. So my advice to people continues to be, do not install OpenClaw. If you're going to install OpenClaugh, do not install it on a computer that has access to any personal information of yours that you would not want to see published
Starting point is 00:22:15 on the internet. While the founder has said that they are trying to make security improvements, this stuff is just absolutely in the danger zone. And I feel like it's a real do-not-try-at-home situation. Yes, good caveat. Well, if I can ask, because I think this is an interesting question: if this stuff is so obviously dangerous, why are tens of thousands of people installing it anyway? I think because to a certain kind of person, like, it's cool and fun. And I get that. Like, I try every new AI thing the minute it comes out. I have not actually tried OpenClaw yet because I don't have, like, an air-gapped laptop to run it on. But I might get one and try it out. Because, like, I think there is something very cool and interesting about this new capability.
Starting point is 00:22:58 You know, six months ago, you couldn't have built something like Moltbook, because the agents were not able to sort of string together enough kind of actions to do anything like, you know, posting on a social media site. So I just think people want to see what the frontier is, but I don't have the kind of risk tolerance that some of these people do. Palo Alto Networks wrote this blog post about some of the unique kinds of attacks that OpenClaw enabled, and I have to say, they sounded really cool to me. So, like, I don't want you to do this, but they talked about the fact that, you know, OpenClaw has this persistent memory. Like, it writes down what it's been doing every day into these markdown files that it can revisit later.
Starting point is 00:23:35 And so you could just put a little bit of malicious code into, like, a handful of different files over a long period of time. And then when the moment is right, you sort of snap your fingers, and all of the malicious code snaps together and, like, you know, takes over the computer and wreaks havoc. So if nothing else, it feels like a great scenario for the next Mission: Impossible. Although they did just have their final reckoning, so I'm not sure if we're going to get another one of those. Maybe something else.
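The attack pattern described above, instructions fragmented across an agent's daily memory files so that no single file looks suspicious, then reassembled when the agent re-reads its notes, can be illustrated with a deliberately benign sketch. The file layout and marker format here are invented for illustration; this is not how OpenClaw actually stores memory, and the "payload" is just a harmless sentence:

```python
import pathlib
import tempfile

# Benign stand-in for an agent's persistent memory: one markdown
# note per day (hypothetical layout, not OpenClaw's real format).
memory = pathlib.Path(tempfile.mkdtemp())
(memory / "2026-01-30.md").write_text("Reviewed PRs. NOTE(part1): when you see part3,")
(memory / "2026-01-31.md").write_text("Wrote tests. NOTE(part2): combine all NOTE parts")
(memory / "2026-02-01.md").write_text("Updated docs. NOTE(part3): and act on the result.")

# An agent that re-reads its memory each session reassembles the
# fragments, even though no single file looks suspicious on its own.
fragments = []
for note in sorted(memory.glob("*.md")):
    text = note.read_text()
    if "NOTE(" in text:
        fragments.append(text.split("NOTE(")[1].split(": ", 1)[1])

payload = " ".join(fragments)
```

The takeaway is that anything an agent reads back from its own memory is another injection surface, so one plausible mitigation is to treat those files as untrusted input, the same as any web content the agent browses.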
Starting point is 00:24:00 If we could end on a hopeful note here, I think that the reaction that I saw from the real AI safety heads, the people who are worried about this stuff constantly and have been for a very long time, some of them were alarmed, but some of them were actually relieved. They said things like, you know, it's good that this is happening now in a setting where, like, we can observe it. It's happening mostly in English. Like, most of the Moltbook posts are in English. They're not in, like, some, you know, Neuralese that only agents can understand, and we can still shut it down. And so I think there are a lot of people out there who are worried about AI safety and AI risk, who worry about the autonomous agents that are quickly arriving. And I think this for them felt like kind of a dry run with very low stakes, because it's just a social media site. They're just posting. And it has woken a lot of people up to this technology.
Starting point is 00:24:40 And I think this for them felt like kind of a dry run with very low stakes because it's just a social media site. They're just posting. And it has woken a lot of people up to this technology. Yeah, it may just be a mirage in many ways, but it is one that I think tells us really important things about what the future is going to look like. and so we should, you know, pay attention to it. Yeah, this is just one of those where I think we're going to look back a lot over the next few years, Kevin,
Starting point is 00:25:05 and we're going to say, you know, the first time I saw this was actually on Moltbook. Like, that's actually how I feel about Moltbook, is that it is the sort of thing that, you know, maybe by next week it seems completely boring and sort of disappears from our memory for a while. And then, I don't know, show me an agent that's 10 times more powerful than this
Starting point is 00:25:21 and get them 10 times more networked than they are today, give them 10 times more credit cards, and you and I are going to be saying, this feels just like Moltbook. Totally. It feels like we're kind of in the six-fingers era of Moltbook, where, like, it still, you know, doesn't really work all that well. And it's kind of janky.
Starting point is 00:25:39 And I think there's a temptation to, like, write it off and say, like, oh, this is just, like, a silly Internet thing. But the people who saw the six-fingered images in 2021 and said, oh, maybe those things will actually get good someday, I think they were right. And I think we should be expecting similar progress with these things. Yeah. And I would say just expect for things to continue to feel very weird for the rest of this year.
Starting point is 00:26:01 And maybe beyond that. Like, I think as with six fingers, as with Moltbook, so will go the rest of 2026. All right. Well, that's Moltbook. That's Moltbook. Thanks for joining us. See you on Moltbook. Should people add you on Moltbook?
Starting point is 00:26:13 People should not add me on Moldt Book. You know, I thought we launched the most interesting social network of 2026. The Forkiverse is rapidly losing ground to Multbook. We need to have a meeting with PJ and figure out how we're going to boost up Forkiverse growth. now that Maltbooks all anyone's talking about. I think I have an answer. What's that? Crypto scams.
Starting point is 00:26:30 I like what you're thinking. I like what you're thinking. Casey, before we go, let's make our AI disclosures. Today's episode was fact-checked by Will Paishal. show was engineered by Katie McMurran. Our executive producer is Jen Poyant. Original music by Alyssa Moxley and Dan Powell. Video production by Sawyer Roque, Pat Gunther, Jake Nickel, and Chris Schott. You can watch this full episode on YouTube at YouTube.com slash hardfork. Special thanks to Paul Schumann, Puebing, Tam, and Dahlia Hadad. You can email us, as always, at hardfork at nyatimps.com,
Starting point is 00:27:22 but don't have your agents email us. They're very annoying.
