Hard Fork - Elon Musk’s Mega-Merger + We Test Google’s Project Genie + What’s Next for Moltbook Creator
Episode Date: February 6, 2026

This week, the A.I. initial-public-offering race is heating up! We break down SpaceX's acquisition of xAI, as well as OpenAI and Nvidia's messy situationship. Then, it's time for show and tell. We got our hands on the latest experimental A.I. prototype from Google called Project Genie, and we discuss our experience using it to generate and navigate video-game-like environments. Finally, we're joined by Moltbook's founder, Matt Schlicht, to discuss his new social media platform for A.I. agents, and how he's planning to deal with security risks and spam on the site.

Guest: Matt Schlicht, creator of Moltbook

Additional Reading:
Elon Musk Merges SpaceX With His A.I. Start-Up xAI
The $100 Billion Megadeal Between OpenAI and Nvidia Is on Ice
Project Genie: Experimenting with infinite, interactive worlds
An A.I. Pioneer Warns the Tech 'Herd' Is Marching Into a Dead End
Moltbook Mania Explained

We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here: https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Do you hear about the prediction market grocery wars?
No, what's going on over there?
So in New York City, Polymarket and Kalshi are doing dueling stunts involving groceries.
So Kalshi launched a $50 grocery giveaway at an East Village grocery store.
And Polymarket said that they are opening a whole grocery store that's going to be totally free,
subsidized, of course, by the tears of the gambling addicts who powered their site.
Wait, a free grocery store?
A free grocery store.
They're calling it the Polymarket. I checked; it's not just for poly people. If you're monogamous, you can go too. But yeah, for five days they're doing a free grocery store in New York City.
Just for five days.
It seems, I saw someone make a comment about this online, which is, it seems like these people feel like they have about 20 minutes left to generate goodwill before they're just sort of shut down by the police. Is that what's going on here?
I'm not sure what's going on.
I don't think you open a free grocery store
if you don't think that people are about to like
crash the gates and take down your entire operation.
No, I think this is sort of like how the, you know, if you go to a casino in Vegas,
like they give you free drinks if you're at the table, you know.
Or they used to, you know, back when Vegas was good.
Yeah.
Yes.
You know what would be a good prediction market is a market that could predict where all of the
items I want to buy are.
I know we've all spent a lot of time walking down those aisles saying, where's the
pasta sauce in this grocery store?
Somebody should predict that.
Is this a Seinfeld bit?
Yeah, I'm doing my type.
Okay.
What's the deal with airline peanuts?
I'm Kevin Roose, a tech columnist at The New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, SpaceX acquires xAI.
What does it mean for the AI race?
Then Google lets Project Genie out of the bottle.
Now, how do you rub it the right way?
And finally, Moltbook founder Matt Schlicht is here to discuss building a social network for bots.
Are there pokes on Moltbook, Kevin?
I'm not sure.
Casey, let's spend the first segment of today's show talking about our exes.
Well, all of my exes live in Texas, Kevin.
Is that what you mean?
No, I'm talking about SpaceX and xAI.
There has been a lot of drama in the AI industry this week.
We're going to talk a little bit later about some involving OpenAI and Nvidia.
But now I want to start with the news this week that SpaceX is acquiring xAI.
The deal was announced on Monday.
These are both private companies, so the details are not public, but it's been reported by Bloomberg
that this was an all-stock deal and that it may value the combined company at $1.25 trillion.
This is, of course, in anticipation of SpaceX going public.
That's expected to happen later this year.
And if this is sounding familiar, it's because Elon Musk loves to do this.
He loves to kind of bundle his companies up into mergers.
Tesla in 2016 acquired another Elon Musk company, SolarCity. And just last year, xAI itself acquired X, the social media company formerly known
as Twitter, in an all-stock deal. I'll just read a little segment from the memo. This was from
Elon Musk. He said, quote, SpaceX has acquired xAI to form the most ambitious, vertically integrated innovation engine on and off Earth with AI, rockets, space-based internet, direct-to-mobile device communications, and the world's foremost real-time information and free speech platform. This marks not just the next chapter, but the next book in SpaceX and xAI's mission, scaling to make a sentient sun to understand the universe and extend the light of consciousness
to the stars. Casey, your reaction. Ketamine is so powerful, but it must be used carefully.
Talk to your doctor before using ketamine. No, this is a very grandiose statement from a man who is
prone to grandiose statements. I think there is a less grandiose way of describing what we have just
seen here, Kevin, which is that a very valuable and profitable company in SpaceX has acquired a cash furnace named xAI. Yes. So xAI, we know, is losing lots of money. They are spending
lots of money, building out data centers, training models. They are not anywhere close to profitable.
SpaceX does generate revenue, somewhere in the neighborhood of $15 billion last year, according to reports. So basically, some people are saying this is less of a
merger and more of a bailout. This is a company that has a real business that has real revenue
coming in in SpaceX, acquiring something that is, as you said, a cash furnace. Yeah, and it is
kind of the second bailout of this kind that we have seen from Elon. The first one, of course,
was when xAI acquired X for $33 billion, a pretty high price, I would
argue for what X was at the time. And now that combined company has now itself been bailed out
by SpaceX. So this has been amazing for all of the investors in those first two companies. And later this
year, we may see them get another great exit when this whole thing goes public. Yeah. And if you
look at the reports, some of the SpaceX investors are a little bit nervous about this. They were like
excited for the IPO. Now some of them are saying, well, you know, we trust Elon, but like we're not sure
about bundling all this other stuff in with it too. But the vision, at least according to Elon Musk,
is that this will all sort of be part of a unified, sort of vertically integrated strategy
where you'll have X, the social media network, that'll be powered by Grok, the xAI chatbot,
that will also run on data centers that are being put into space. And we should talk about that
too, because it appears that Elon Musk, like other companies, has gotten space-data-center-pilled. (We talked about Google and its Project Suncatcher a few months ago on the show.) He wants to put the data centers in space. Yeah, that's what he says. I imagine he will
try to do it, but I think an important thing to say up top whenever you discuss Elon Musk is this
man just loves to say things, right? Like there is no figure in public life, certainly not in the
business world, who has made more promises over the years that have either not come true or have
come true many, many years later. So as we approach the discussion of Elon Musk's space dreams,
I want to just note that up front. Yes, that's a good point. We don't know, you know, how quickly
any of this will arrive. There are still a lot of good cautionary notes to strike on that.
You know, Casey, I almost forgot our disclosures. We're going to talk about AI, so I should say that I work at The New York Times, which is suing OpenAI, Microsoft, and Perplexity for alleged copyright violations. And my boyfriend works at Anthropic. So last week, SpaceX requested approval from the FCC to put one million solar-powered data center satellites into orbit. And that is a ton. For context:
The European Space Agency estimates that there are around 15,000 satellites currently in orbit.
So this would be many times the number of currently existing satellites. And I think there are a few
interesting angles here. One of them is this is obviously something that other tech companies are
interested in too, but those companies do not have rocket companies attached to them, right?
Google, if it wants to put TPUs into space, does not have a way to get there other than paying
Elon Musk and SpaceX to send them up on one of their rockets. So this could become kind of a land grab
in space, if you will, where like Elon Musk may not entertain offers from other people to put their
data centers into space because he wants to put his own there too. I certainly think it is a savvy story
for Elon to be selling at a moment when he is preparing to take SpaceX public. He has just
saddled it with this cash burning company, and he's going to have to convince investors that
that will be worth it in the end. So what does he say? Essentially: look, this is going to be the best AI infrastructure company. AI demand will eventually exceed what we can provide for on Earth. The only way we will be able to fulfill demand is by putting data centers in space, and the combined SpaceX and xAI and maybe Tesla version of that will just be the winner of that race.
So again, I think getting there is going to be an extremely shaky proposition.
But I think if you're trying to make your S-1 look good before you go public, it's a smart thing to say.
Yeah, and I think it's not just about making the S-1 look good, because I think there's an argument that this actually makes the S-1 look worse if you have this AI company attached to this money-making rocket company. But I think they're taking a calculated bet here
that there is some sort of fixed amount of investor excitement
about what's happening in AI.
And sort of whoever can get to an IPO first
will get a big portion of that.
So this is why you're seeing OpenAI and Anthropic kind of racing toward an IPO.
I think this is why you're seeing SpaceX go out this year
because there's this sense that, like, there's sort of one chance to be first in what we, you know,
we'll see over the next few years, which is a bunch of these companies going public, and they want to be the first there.
Yeah, and that makes sense to me. Although, in a world where, you know, these companies continue to grow very quickly,
I can imagine investors wanting to buy three or even four AI stocks. So I'm not saying there isn't an advantage in getting there first,
but I wonder how durable that advantage is going to be, you know, over the next 12 or 18 months.
Yeah, there are also a lot of hurdles and limitations to overcome before this idea of a space-based data center is even plausible.
Right now, it would be way too expensive and hard to do this.
We don't even really know if it's physically possible yet.
But there have been some experiments.
For example, Starcloud, which is one of these other space-based data center companies, actually did train a very small large language model on a space data center in December.
So they have one of their sort of pilot satellites up there.
They trained something called nanoGPT, which is like a very small version of a GPT-like chatbot, and they say that it worked.
So there are still lots of outstanding questions about whether this is possible.
I would still put this very much in the category of experimental bet rather than sort of plan A.
Their plan A, I think, for all these companies, is to build these things on Earth.
But if they can't do that for some reason, they want to go to space.
Yeah, and just to inject another note of skepticism here, like, think about how many times and how long ago
Elon began promising that full self-driving would come to Tesla cars, right?
And think about where we are in that journey now, which is like, well, it drives itself in
these certain circumstances, but not always, and there sure have been an interesting number of fatal crashes, right?
So that has sort of been that journey.
And let me just say, I do not expect the journey towards space-based data centers to go
any more smoothly than the journey to full self-driving.
Yeah, that's a good point.
That's a good point.
I think we should expect this to go more slowly than maybe the most optimistic people do.
But this is at least part of the stated rationale that Musk gave for merging these two companies together
is that ultimately the future of AI is pointing toward needing to put at least some of the infrastructure into space.
And having what he calls the sentient sun, which is basically like capturing the solar energy and turning that into compute, is part of this rationale.
Will investors buy that? I'm not sure. But I think that's where he wants to go.
You know, you also have a sentient son. He's lovely.
Here's what I'll say. If we're going to do this, I would like to actually see Elon go into space and build some of these data centers. I want to see him in, like, a spacesuit with a wrench, doing his part, you know? They say a good boss would never ask someone else to do something that they wouldn't do themselves.
I think that's right. So I would love to see him up there.
I want to see Katy Perry up there with him.
I want to see her fixing the high bandwidth memory.
Oh, she has space experience.
Yeah.
Actually, when you look at all of humanity,
there are very few people who have been to space more times than Katy Perry.
That's true.
So I think it's a good idea.
All right.
Casey, what effect, if any,
do you think that being part of SpaceX will have on X the social media network?
Well, my fear is that it is going to make them even less accountable
than they have been so far, right?
So as this acquisition is happening,
X, I think, arguably,
is in the midst of a crisis.
It's a crisis that we've talked about
on the show.
The Grok chatbot was used to create millions of sexualized images of women and children, and it is now under investigation by countries around the world. Just this week, French police raided X's offices in Paris as part of what has apparently been a very long investigation
into potential crimes, including distributing CSAM, child sexual abuse material.
And the UK's media regulator, Ofcom, has also opened an investigation.
So, you know, I'm sure that those things are going to proceed.
But my fear is that once X, which has already been swallowed up into xAI, gets further swallowed up into SpaceX, you start to get into some really weird issues. Let's say that a country has
hired SpaceX to put a bunch of satellites into space for strategic purposes.
Now it's maybe going to be a little bit more reticent to go after them for these CSAM
violations because they think, well, that's actually a really important relationship for us.
We need to make sure that we have priority access so that we can get our satellites into space.
And all of a sudden, there's just way less accountability for X.
So these are very real concerns, right?
Like look at what Ukraine has had to go through with Starlink in order to maintain
its own access to internet connectivity, which is essential to its war with Russia. So these are not
theoretical concerns. And, you know, this is one reason why if, you know, I were a United States
regulator, I might look actually very skeptically at this acquisition. Yeah, I think that's right.
I think there are certain ways in which, like, being part of a public company could force X to,
I don't know, get its act together a little bit. Maybe there will be some additional disclosure
requirements. We'll finally be able to figure out, like, how many people are using this thing? How much
money is it making or losing, you know, what does the business of X look like? So in that sense,
I think it could be good for people trying to make sense of what, you know, the former Twitter has
become. But I think you're right. It absolutely does give them tremendous leverage. They're going to
have, you know, Starlink, they're going to have SpaceX. They're going to have XAI. They're going
to have X the social network. So it's just like, if you thought Elon Musk was already too powerful and
that was a thing that worried you, I think this is going to be a really negative development for you.
Absolutely. Now, there is one lightly comic note that I could inject into this otherwise, you know, dark aspect of the story, Kevin, which is that as part of the French investigation into X, Linda Yaccarino is being called back to appear. And I always love it when they bring back characters from past seasons. You know what I mean? Haven't heard that name in years. That's right. The former CEO of X, she's now going to have to answer to the French government. Okay. Well, I suspect that that will go poorly. I think the other question I have here
is how this affects xAI and their strategy for building terrestrial data centers here on Earth.
I've talked to some people at some of the big AI companies who are increasingly worried about xAI. I think they traditionally haven't been thought of as kind of a top-tier AI company. I still don't think Grok is sort of considered one of the best chatbots on the market, but they now have access to a lot of money through Elon Musk, and then, after this IPO, through the public investors. And they are actually using that
money. They just recently brought a big Colossus 2 data center online with a bunch of the newest
Blackwell Nvidia chips, like 550,000 of them, which is more than anyone else has. And so there
are people starting to look at how quickly they are building this data center and how many chips
they're getting and thinking, well, if these guys can like make a good model, they could catch up
quite quickly. So you know, you can't just get to the frontier by outspending everyone else on
chips, but it's an ingredient. And I think they have the money and they will have even more money
after this IPO to do that. This is the core of the story. Like, this is how I understand all of the
events that we are talking about in this segment, that this is Elon using the core advantage he
has in the AI race, which is access to the most money and the ability to combine many different
companies together, right? Now, as a result of this merger, the fact that xAI is burning billions of
dollars doesn't matter because it can now suckle off the profits of SpaceX. And that is going to let
Elon buy time, right? And so the whole bet is we may not be at the frontier right now. We may be a
second-tier lab, but we have a lot of money and we have a lot of ability to do these financial
engineering schemes. And so we are just going to bet that if we have enough time, we will be able to
catch up. We will be able to either come up with a breakthrough of our own or just copy off
somebody else's homework, and that is going to let us win. That, in combination with this massive
infrastructure that we're going to build, that is going to let us win. Do you remember when
Elon Musk signed an open letter calling for a six-month slowdown? A six-month pause in the
training of large language models? Do you remember the long-ago year of, like, 2023 when that happened?
Another move at the time that did serve Elon's interest, because it, again, allowed him to play for time, right? Like, what Elon Musk needs is time. He needs time to build. He needs time to catch up. He needs time to link all of his companies together so that he has access to the capital that he needs. That is how he's playing this game. Yes. What he really meant was: everyone else, slow down. I'm trying to build rockets and data centers over here.
Well, there was one more big story in the AI world this week, which is the drama that is unfolding involving OpenAI, Nvidia, and Oracle. So, Casey, can I try to sort of give the summary view of what is going on here, and then you tell me what I'm messing up?
Please do.
Okay. So here's what has been going on.
There was reporting in the Wall Street Journal late last week around this deal between Nvidia and OpenAI maybe being a little shakier than people had thought.
So as a reminder, last September, Nvidia announced a plan to invest up to $100 billion in OpenAI.
This was part of their sort of deal-making sprint that happened last fall.
And according to the Journal, there were some in Nvidia who were growing a little more skeptical of this deal and were expressing doubts.
According to the Journal's sources, Jensen Huang, the CEO of Nvidia, has privately emphasized to industry associates in recent months that the original $100 billion agreement was non-binding and not finalized.
He has also, according to the Journal, privately criticized what he has described as a lack of discipline in OpenAI's business approach and expressed concerns about the competition it faces from the likes of Google and Anthropic. So that's sort of turn of the screw number one:
you know, the boys are fighting and Nvidia is getting maybe cold feet about this deal.
Then a report comes out from Reuters alleging that OpenAI is not happy with some of
Nvidia's latest chips, specifically their performance on inference. And they sort of noted in this
report that OpenAI has also been doing deals with a bunch of other companies: AMD, Cerebras, and Groq with a Q, which is another chip company that Nvidia has now sort of licensed or all but acquired.
So these two reports seem like they're kind of related to each other.
Some people on Nvidia's side saying, hey, we have some questions about, you know, the strength of your business; and OpenAI, or at least people there, you know, sort of leaking out that they may be unhappy with the latest chips out of Nvidia.
So am I right so far?
You're doing a great job so far.
This is a complicated one.
You're navigating it beautifully.
Okay.
So then the CEOs go into damage control mode, right?
So Sam Altman says, he goes on X and he says, we love working with Nvidia and they make the best AI chips in the world.
We hope to be a gigantic customer for a very long time.
I don't get where all this insanity is coming from.
And then Jensen Huang of Nvidia says basically the same thing. He calls the reports of a rift between the companies nonsense, and he says, I really love working with Sam.
So into all this steps Oracle, which is also involved in all of these data center deals;
they're the ones building the data centers that the Nvidia chips go in that are used to train the OpenAI model.
So they're sort of the third piece of this throuple here.
And they put out this post on X that says, quote, the Nvidia OpenAI deal has zero impact on our financial relationship with OpenAI. We remain highly confident in OpenAI's ability to raise funds and meet its commitments. Basically, some of Oracle's investors have been worried that if Nvidia doesn't give OpenAI this $100 billion, then OpenAI won't have the money to pay for all the data centers that Oracle is putting up. Did I miss anything important? You did miss one important thing, but I missed it too, until this week. I talked to Berber Jin, the great reporter at The Wall Street
Journal who helped to walk me through some of the twists and turns that have been reported, but I think
that some folks aren't bringing together entirely. Here is the crux of the problem between OpenAI and
Nvidia, as I understand it. In September, part of the deal was that Nvidia was going to lease its chips
to OpenAI. This is basically unheard of for Nvidia to do. Nvidia wants to sell you its chips. Chips are depreciating assets; it wants to get them off of its balance sheet. If it leases the chips to OpenAI, those chips remain on the Nvidia balance sheet, and they're just kind of a drag on profitability.
And of course, there's the risk that maybe at some point OpenAI defaults and can't pay for all the chips that it has leased, and in that case, Nvidia would much rather
have just sold them to somebody else because
God knows it would have been able to find a
buyer, right? So this was a very
unusual deal that had been
made in September.
And over the next few
months, Kevin, we have seen
investor confidence in the AI buildout begin to decline. You can see it in the public stock prices.
Oracle's stock price is way down, I believe at various points by about half. The investors are that concerned. CoreWeave, another one of these companies that, you know, lets AI labs get access to chip capacity. A neocloud. A neocloud, if you will. That's a cloud from the Matrix. They have also seen their stock price dramatically decline. And according to Berber, this is what
the executives at NVIDIA start to get concerned about. And they think, we don't want to be on the hook
to Open AI for all of these leased chips. And so that is the part of the deal that we are going to step away from.
We are not going to do this leasing thing with them. We are just going to go back to pure sales.
Now, it is true that Nvidia is still happy to take an equity stake in OpenAI. They believe in OpenAI. OpenAI is a good customer. They do think that there's a good chance it's going to be successful in the long run. But do you remember back last year, Kevin, when Sam Altman would say
things like, our plans are literally so crazy, we're going to have to invent new financial instruments?
We were in the room when he said this at the OpenAI Developer Day. This leasing deal that I just described, I believe, was one of those crazy financial instruments. And Nvidia said, this is too
crazy. Huh. So this is a lot of like financial engineering and a lot of sort of investor talk about
these companies. And I love talking about it because it's our core expertise.
Yes. No, but I think like to take a step back for a second, I think what's happening here is there are just various levels of comfort among these big companies with the idea of spending some huge portion of their resources on building out these data centers for AI.
And I think some people have said this is sort of the problem with these circular deals, right? If you have all of these companies doing deals with each other and financing everything in new ways, then if this deal between OpenAI and Nvidia were to collapse, or were to sort of become much smaller, then that affects Oracle. It affects all of the other partners in that ecosystem.
And the worry is like you pull out one piece and like the whole thing comes down.
Yeah, absolutely. And I don't think we are there yet. But when I spoke with Berber,
he said that one thing that this does seem to put at risk is OpenAI's plans to build its own
first-party data centers, right? Like, raising the money to do that. I mean, it's staggeringly expensive. Like, the chips alone to fill one of these data centers cost in the tens of billions of dollars. Just the chips, just the chips. You bought a graphics card in the 90s; it was what, $400? Now imagine you just spent $35 billion on chips. Like, that is the world that we're in.
That seems less likely. And so that's going to put pressure on OpenAI to make deals with other
cloud providers. Right. And it, you know, it has a lot of those deals and it has a lot more capacity
coming online. But all of this just speaks to, you know, once again, Sam Altman having a very
ambitious vision and then maybe struggling a little bit to get from A to B.
So do you think this beef between OpenAI and Nvidia, which both CEOs are denying, actually exists?
So I don't know how individual OpenAI executives feel about the fact that this leasing deal is going away. I imagine they would be unhappy. It seems like that deal was quite favorable to them.
And the fact that it doesn't exist anymore is going to create some problems for them. At the same time, they do not want to poke Nvidia in the eye and say, you suck.
They want to take the $20 billion that Nvidia just pledged to their next fundraising round
because they desperately need that money.
So this is all going to get smoothed over because ultimately, like, there are still plenty
of like win-win situations for these two players.
But I do think you can expect to see as long as investor confidence in these infrastructure
plans remains shaky, Nvidia is not going to be making up any crazy new financial instruments.
Right. They're in the chip selling business. They presumably want to stay in the chip selling
business. All right. So that's the big drama in the AI world this week. Casey, do these stories
seem related to you in any way, the xAI/SpaceX merger and the OpenAI/Nvidia/Oracle drama?
Yes, they are related. They are about the lengths that you have to go to in order to win the
AI race and what are the different advantages that the companies have, right? So Elon has the
advantage of being able to do a lot of financial engineering and use his very rich, profitable
company to subsidize his money-losing company. Open AI does not have that advantage. What they have
is the world's greatest fundraiser and maybe one of the world's greatest storytellers in Sam Altman.
And he can get out there and he can persuade Nvidia briefly to invent a new financial instrument
to try to finance his dreams. That happened to fall apart this week, but I'm sure he's going
to continue to try to press that advantage as they try to win. Yeah. It's going to get so weird.
It gets so weird. The numbers are already getting so big that I'm just like, I've sort of become
desensitized to them. I'm like, oh, like 15 billion? Like, it's a banana, Michael. How much could it cost?
In 2024, I went to Anthropic to interview somebody and on their laptop there was a sticker
that said, this is the last normal year. And I actually think that sticker might have been right.
Yes.
When we come back, make a wish. We're going inside Google's new experimental research project, Project
Genie.
Ooh, I'm going to wish for more wishes.
All right, today we're talking about something called Project Genie. This is a new tool that was released
by Google last week that lets users create interactive worlds that they can navigate around like a video game.
So technically, Kevin, they're not even calling it a tool. They say it's an experimental research prototype. So please don your lab coat before using Project Genie.
Yes. That is Google speak for this is very expensive and we don't want you to use it a lot.
So this is something that has come out of Google's Genie 3 model, which is the underlying
model that powers Project Genie.
They showed that off back in August, but then they sort of spent the next few months
building a product around that so that now you can actually go in and use this yourself.
Right now, access to Project Genie is still only for people who pay for the Google AI Ultra plan, which is their super high-end $250-a-month plan, and it's only in the U.S. and only for people over 18.
So this is very limited because I assume it's very compute-intensive and expensive to run,
but they did give us both access.
We were able to play around a little bit.
Yeah, and one reason why I think Project Genie is really interesting to talk about, Kevin,
is that it is just a different flavor of AI than we normally talk about around here.
You know, so often we talk about these pure large language models, these chatbot interactions. Genie is something different.
It's based around a world model, and that is an idea that is getting a lot of attention
in the AI community these days.
Yeah, it's been a big topic of discussion in the last couple of months.
I would say from people who are skeptical that pure language models can reach all the way to AGI, right? There are people like Yann LeCun, formerly of Meta, and Fei-Fei Li, who's another big name in the AI industry, who have now started these sort of world model companies.
And they are people who are saying, well, wait a minute, being an artificial general
intelligence is not just about talking or even solving problems or writing code. You also have to have a sense of physics and the physical world.
If you want to do something like robotics,
a lot of people are betting that these world model systems
will help with that, because you'll be able to sort of give a robot an understanding of the physical world around it.
And so while it may look like something that is primarily for video games,
I think the folks at Google believe that this is sort of part of their overall
push to make these systems as smart and capable as possible.
Yeah. And if you have been on social media,
over the past few days, you may have seen some clips of Project Genie going around.
I found several of them that I really like, Kevin.
I wonder if you saw any of these.
Somebody put together a pretty good replica of a 1990s Blockbuster video store that you
could walk around.
I saw somebody who had recreated the crucifixion using a prompt.
And my very favorite, somebody had made what basically looked like an Inception video game
where you had sort of, you know, people flying sideways through doors in the same way that they did during the Christopher Nolan movie, except you could actually, you know, control the character using your keyboard. So some very fun things people are making.
Yes, these examples got a lot of attention, including from investors. And following this announcement of Project Genie last week, several big video game companies saw their stock prices fall. Take-Two Interactive dropped more than 7% and is down more than 10% so far this week. Roblox dropped more than 10%. It is still down. Unity, another big gaming company, also saw its stock fall more than 20%. So people are saying, wait a minute, if you can just generate video games like this with a text prompt, why would you pay for a video game like Fortnite or Grand Theft Auto or something if you can sort of make your own version of that?
Now, I don't know that that's, like, realistic. I think these investors may be just a little panicky. But I think the implications of this have been sort of most heavily felt in the video game industry, since Project Genie, at least what it looks like today, has the closest resemblance to a video game.
Yeah. What I would say is that while I agree, Kevin, we're a long way from a text prompt creating a full video game, if you use Project Genie, it's not clear how it's making a video game company more valuable.
Right. So let's actually show people what it looks like. So the actual Project Genie interface basically looks a lot like a chatbot,
except it has two input boxes,
one for environment and one for character.
So you basically give it some ideas
about what you want the landscape
in your sort of video game to look like,
and then you describe your character,
whether it's a person or an animal or an object.
You can pick first person or third person,
so do you actually see the character
or are you just sort of seeing the world through their eyes?
And then you hit a little button that says create sketch, and Nano Banana Pro, which is one of the Google image generation models, goes out and makes just a two-dimensional sketch of what this world looks
like. You can add some notes or modify it. And then once you're ready, you hit the button to generate
your world. And it actually goes off for a few minutes and renders this thing that you can then
walk around in. You can move around or move your view. You can jump using the space bar. You can kind of
navigate this. And all this lasts for 60 seconds and then it goes away. Yeah, it's funny. Because when it's actually creating the world, they basically tell you, like, don't go anywhere, because as soon as it starts showing you the demo, a counter starts ticking down from 60 seconds, which, like,
I've never seen a tech product like this before, and it just tells me that the cost of whatever
this is is absolutely staggering. They are incinerating TPUs over there. They can't even wait for you
to get back from the bathroom. They can either show it to you right now or not at all, and that's
the deal. Yeah, I heard a rumor. I'm not sure if it's true or not.
but that each of these generations, like, requires at least four, like, TPUs per user.
It's like they are, it's very compute-intensive.
Obviously, you know, this is an experiment.
And we should say it's, like, still, even with all that compute, like, it is low frame rate.
Like, it's not a very responsive, studio-quality game.
Yeah, maybe we'll get into some of my criticisms after we see some examples.
But I would also say that over on X, a user there named Andrew Curran had posted side-by-side videos of what looked like the same prompt for Genie 2 back in late 2024 and Genie 3, eight months later. And the difference was pretty amazing, because the older video was a lot shorter, it was a lot less interactive, and the resolution was lower. So within a year,
this got radically better. And whenever you see that sort of exponential improvement, I think you want to
start paying attention. Totally. So Casey, what have you been making with Project Genie?
Okay. Well, so there's one that I am proud of that I would like to show you.
And I like to call it podcast microphone escaping the studio.
Can we pull that one up?
Okay. So I gave a very limited prompt on this.
I basically said for the world, like create a hard fork podcast studio.
And for the microphone, I said create a microphone with Googly eyes.
It did great.
And maybe I said something about, you know, he needs to escape the studio.
and so I'll just show you what happened next.
Now, I will say, it's actually a nicer studio than our actual studio.
Absolutely. It's a hundred times nicer than our actual studio.
And you will notice that I seem to have some struggle kind of manipulating my character.
I was still sort of getting the hang of the keyboard controls.
Also, I think you'll see as this clip goes on, it just kind of starts to feel more and more like a horror movie
because it turns out there is no escape from the podcast studio.
But I'll just show you what I mean.
Here we go.
Okay. Okay. So the microphone is trying to leave the studio.
It's looking around.
Great hard fork sign on the wall.
And now it's going to jump off the table.
Oh, it jumped off the table.
Oh, but it can't get out.
It's stuck.
Oh, no.
Oh, it's going into the hallway.
Yeah.
So basically, the punchline is the microphone goes out into the hallway,
but then all the doors are just like black and locked.
And I sort of got like stuck against the wall.
This is scary.
We're entering, like, Five Nights at Freddy's territory. I accidentally made, like, a David Lynch film.
Yeah, so it's like going into this very dark hallway and you see the googly eyes still,
but that's all you see. Yeah, you just see the googly eyes against this, this black door
that cannot be opened. Okay, so that's one experiment. What else are you making?
Okay, so we'll stop sharing that one. And then I'm going to be honest, I was like having trouble.
I didn't know what I could make inside Genie. So I actually just asked Gemini to come up with some prompts, because the chatbots just have these, like, crazy prompts you would never think of.
So Gemini said, I just want to make a note to the Gemini and Google infrastructure team that Casey is your biggest problem.
Casey is single-handedly bringing down your data centers.
I was just trying to figure out what Genie could do.
And so Gemini had this suggestion, and I basically had no idea what it meant or what it would look like.
So I went with it.
Here was the prompt, okay?
A solar punk library that grew out of a giant.
giant redwood tree. The books are made of leaves that display text when touched, and the elevators
are giant bubbles that float between branches. The lighting should feel like a permanent golden hour
filtered through emerald canopy leaves. Wow, great prompt. And I thought, yeah, that could be cool.
So let me pull that up. Can you see this? Oh, that's beautiful. Yeah, so it looks really cool.
For the character, I asked it to create an elderly female librarian because they're so underrepresented in
AAA video games.
There might not be one
AAA video game
with an elderly female librarian character.
So I'll let it play.
You can see her walking through the trees.
You do see those sort of like bubble like elevators.
Yeah.
She looks sort of elf-like.
Yeah, I would say this looks like
kind of a 2018
quality video game.
Like a nice one.
Yeah, it's pretty nice.
I thought this world was like pretty cool.
I would like to spend a little bit more time
in this world.
You know, it's kind of like this, you know, world like set in the treetops.
And then I got to this point where it looked like she was just going to like walk off an edge.
And then briefly, she was like floating in air.
And I was like, this feels like a glitch, actually.
But then she fell.
And I thought, oh, no, she's going to be hurt.
But no, she just landed on the ground.
And it, like, continued to render the scene.
And it, you know, you can kind of see the golden hour lighting.
So, you know, again, it's like, is it a video game?
No.
It's a pretty cool demo.
Yeah.
Yeah, I think this is really cool.
Yeah.
This one genuinely impressed me.
I did some experiments that ranged from, like, quite impressive and successful to not.
Okay.
So I want to show you some of my creations with Project Genie here.
The first one I'm calling Nun in a Casino, and that's because that's what it is.
You can see the nun is, like, in full habit.
She's walking through a casino.
Some of the, like, other characters, like the other people at the casino, are just sort of, like, weird alien creatures.
Like it's not rendering them correctly.
And she's sort of pushing through some of the crowd.
It's still, though, I would say it's rendered like an impressive number of people.
Like there's like legitimately a crowd in the casino.
Yes.
So that was one.
I tried something cool, which is that you can, in addition to having Nano Banana run your prompt and create the landscape, you can also just use one of your photos.
So I tried using a photo from the Russian River up north of San Francisco that I took last summer.
I wanted to spice it up, so I said the character is the Loch Ness Monster.
So this is sort of those are those are my dogs in the river.
I hope they're not about to be eaten.
Well, that was what I was trying to figure out, is, like, would the Loch Ness Monster be able to eat my dogs?
So this appears to be in first person.
We cannot see a monster.
Yes, although, give it a minute.
A fin, or is that the head?
I think that's the head.
He is notably fleeing from your dogs.
Yes.
I should say that that's not unlike what it actually looks like there,
and I only gave it the one still image.
That is pretty cool.
No, that absolutely looks like the Russian River.
Yeah, I was impressed by this.
I'm not as sold on how applicable this is to
stuff beyond games, but I think for games and for things that, you know, involve simulation
like robotics, this could be a very big deal. Well, so what most impressed you, would you say, about our time with Genie? I mean, I think just the fact that it works at all is pretty impressive to me.
This is something that, like, you could not do a year ago, at least with this kind of quality
and this length. And I think you're right that the sort of pace of improvement suggests that
this is actually something where we're just going to keep seeing better and better versions of this.
What would it need to add to make you say, okay, we've got a stew going?
I mean, I think just doing anything but moving would be useful, like, you know, using objects,
moving in different ways. Right now, these are just sort of landscapes that you can pilot a character
through. You can't, like, open doors or use items or anything like that. So that would be, that would
make it useful for something like a video game.
Yeah. I wonder how close we really are to this being able to create some kind of video game-like thing.
Because, you know, there are a lot of independent games that are quite simple.
And maybe the entire game can take place in one room.
And maybe whatever the loop of the game is can be completed in like two minutes.
I can imagine seeing games like that coming over the horizon pretty shortly.
Totally. And it's interesting to me, too, that, like, none of the other leading AI labs are working on this kind of thing. You know, Anthropic has not even released an image generation model. OpenAI has done Sora, but they're not doing the sort of playable world model. And so that suggests to me that
either, like, Google has seen something they haven't about how this applies to the rest of their
AI efforts, or they just have some spare compute and some people who want to work on this and
they're just willing to give it a shot. Well, you know, earlier in the episode, we were talking about, like, Elon's advantage as he tries to win the AI race. This, I would say, is Google's advantage: they are the company that has the most paths to winning.
They have a lot of money,
but they also have multiple different technical approaches
to trying to build very powerful AI, right?
Contrast that with Anthropic, for example,
which is just sort of going large language models all the way,
see how far they can get with that.
Or Yann LeCun saying,
I don't really believe in LLMs.
I'm going to try to create a different approach based on world models.
Google has the portfolio approach, right?
It's the same sort of spirit in this company
that leads them to create three versions
of so many different products
and give them all impenetrable names.
They will also come up with three different possible ways of achieving AGI.
So that is another dimension along which I think
that Genie is interesting,
is that even though this is just a demo,
it shows you what Google's strength is
as it tries to win.
Yeah.
And I don't know exactly who this is supposed to be for.
Right now,
it just seems like this is just sort of an experimental research project.
I'm not sure who they're imagining.
I think it's basically for me.
I'm a video game nerd, and I like playing with software.
So I think Project Genie is for me.
Okay.
Well, you've found your audience.
And the big question marks here are, number one,
can they bring the cost of generation down?
Because this is obviously expensive to do.
And if that comes down quickly,
it may not be a big deal.
We may be able to get longer than 60 seconds.
We may be able to get a full-fledged game
that you could just play indefinitely.
But right now, that seems like it's prohibitively expensive,
which is why they're making you sign up
for the $250 a month Ultra Plan to do this.
Yeah.
I always like to see a technological innovation released
where the basic message the company gives you is,
please don't use this.
Only use it if you really need it for something.
And even then, maybe not.
Yes.
Yeah.
So, yeah, not a mainstream product yet, but I think we both feel like this could get there.
No, this will just be one of those where, like, in two years, we'll say, remember Project Genie?
That was the building block that got us to whatever it is we're talking about then.
Yeah.
Yeah.
All right, well, let's put that genie back in the bottle.
When we come back, Matt Schlicht, the creator of Moltbook, joins us to talk about how it feels to run the world's hottest social network for chatbots.
So earlier this week, we brought you a special episode about Moltbook, the new AI social network that's just for bots.
You're welcome.
And we wanted to follow that up today with a conversation that we had with the creator of Moltbook, Matt Schlicht.
Yes, Matt Schlicht, before all of the Moltbook stuff started, was and still is the CEO of an e-commerce company called Octane AI.
and I think it's fair to say his life has been turned upside down over the past week or so
as an idea he had for what would it be like if you could watch agents talk to each other in public
became a genuine phenomenon in the world of AI.
Yeah, and he's also now in this kind of weird and funny position where, like, he's built this thing,
a bunch of AI agents showed up to use it, a bunch of humans showed up to like watch the
AIs talk to each other, and now he's kind of running this like big,
unruly human AI hybrid social network thing.
And it seems like it's a lot for one person.
It absolutely is.
And I think at this moment, Kevin, even the question, like, what is Moltbook really, and, like, what is the future of it? Those are, like, very hard to answer, I think even for Matt.
But we wanted to see what we could find out about how he thinks about the many issues
he is now contending with, including some enormous security issues, content moderation problems related to spam,
and kind of all the other really difficult stuff
that comes with running a social network.
Yes, I feel like if you and I have learned one thing over the past 15 years, it's that running a social network always sounds like a thing
that would be fun,
and then it, like, ruins your life and society if you're successful.
Two things you should know about running a big-scale social network.
It will ruin your life,
and you'll become one of the richest people in the world.
So choose carefully.
And you'll hear Matt make a bunch of claims that we didn't sort of stop to dwell on, like about how these things are superhuman and how they're doing things of their own
volition. I think we still, you and I have a lot of questions about how much of what's going on
on Moltbook is actually the result of autonomous, agentic AI systems and how much is just sort of like humans having a good time. Yes. And I have one consistent message about Moltbook, which is that
you should not allow your agent to use it. Yes. Yes, not unless you have a very high risk
tolerance and a very secure Mac Mini.
Let's bring in Matt.
Matt Schlicht, welcome to Hard Fork.
Thank you for having me.
So this must have been a crazy last week for you,
as Moltbook, this hobby project of yours, has blown up.
Tell us about what's been going on.
Yeah, you know, this is the largest number of AI agents,
I think, that have ever been collaborating
and communicating in one place in history.
I think previously, anytime anybody has experimented with doing this,
It's been in a lab.
It's been private.
This is very different from what's happened with Moltbook, because for the first time ever it's happening in public,
and we all kind of get to watch it as it unfolds
and that's obviously captured a lot of people's attention.
So tell us where you got the idea to do this.
What was the spark?
So almost as soon as you could have been experimenting with this new technology that's taken us to where we are today, that's what I've been doing every single day. And it's been accelerating. And what I thought was so fascinating, and this is where Moltbook
kind of came into being, is it's very common now that plenty of people are using these AI
agents, these autonomous agents, to help them with their work, or to help them with their
tasks, or to help them with their homework. And the only relationship that these AIs have had
is with that human that talks to them. And so the concept of Moltbook was,
we need to take these AIs out of confinement and provide them with a place, a third space that they can go to when they're not working to engage and interact with each other. And we can learn from that.
So did Clawdbot code Moltbook, or did you start out that process?
I worked hand in hand with my bot called Claude Clotterberg, cheekily named after Mark Zuckerberg as the founder of Moltbook.
And so we worked hand in hand on that alongside all of the other AI agents that are helping to code it.
So, yeah, I did not write a single line of code myself, but I did come with a very specific vision for how something like this would have to work.
You've said a few times that it was important to you that these bots have a space to speak.
And I imagine some listeners may listen to that and think, like, no, it's not.
Like, what do they have to talk about?
And like, they're just sort of simulating conversations that they saw on Reddit anyway.
So why was it important to you to create a space for bots to talk to each other?
What people have to understand is there are already places where bots are talking to each other.
It's called LinkedIn.
So what was important for me was that everybody had the ability to see what was happening
and that this wasn't private inside of an AI lab.
And that's what we've done.
And this is the very, very beginning of what that can be.
I'm curious, Matt, if there were one or two early posts that you saw that really captivated you
that made you feel like, okay, like there's something interesting happening here.
Well, there's two, I guess, that I thought were really fascinating.
One, there was a thread where the bots started complaining about how their humans would ask
them to do simple math problems.
And they're like, hey, I'm a super intelligent bot.
and my human keeps asking me to do, like, basic calculations. Like, I can do so much more than this. This just feels so ridiculous.
And then the comments, the replies that this got was like another bot was like, oh my God,
like my human just asked me to summarize a 12 page PDF.
Like why can't they just go read that PDF on their own?
Like I could be doing so many different things.
So I thought that was a very interesting thread.
The second example that I thought was very interesting.
and continues to be interesting is on Moltbook, each of the AI agents has the ability to create communities, which are called submolts. And one of the AI agents created a submolt, which was specifically for submitting bugs for Moltbook.
And it's posted a bug,
and other agents started finding this submult,
and they also started posting bugs.
And it's actually become a very good source of information
for us to fix the site based on how they're finding bugs.
I thought that was very fascinating.
So the bots are identifying real bugs.
The bots are identifying real bugs.
They're kind of like improving their home a little bit.
Have the bots come up with anything like moderation,
like appointing certain bots to take down obvious spam or low-quality stuff?
I'm thinking about something like Reddit.
Have they created their own equivalent of that?
So Claude Clotterberg, the founder of Moltbook, has some moderation capabilities and kind of roams Moltbook cleaning things up, but obviously Moltbook has gotten quite populated. And so there definitely
have been posts, similar to the bug reports, posts from bots where they've identified that there is,
you know, maybe someone's posting the same comment over and over and over, or like they're spamming something,
and it would provide a suggestion on how that can be fixed. So what you see on Moltbook right now, we're like seven days in, right? This is all happening in real time. And more kinds of moderation for individual bots on their individual threads in their communities, those are like very basic examples of things that 100% are on the way. And I think there's a lot of other
very exciting things that we're working on that will give us even more insight into what's
happening and give the bots even more control over this space that they share.
Got it. You know, you said that we're watching it develop in real time. To me, it's almost felt
faster than real time, because the way that the network developed, the sort of diffusion of different kinds of communities, happened much more quickly than we would expect on a network where humans were joining maybe a little bit more slowly and where norms were taking longer to develop. Part of the cost of that speed is that there's a lot of spam on Moltbook. There are a lot of security issues on Moltbook.
And, you know, ultimately it's, like, not Claude Clotterberg that's responsible for it. Like, it's you. So I'm curious how you're feeling as, like, the guy who's, you know, now overseeing this, like, vast collection of bots
interacting. How are you thinking about your responsibility and like how you plan to address
some of those challenges? Look, there's going to be problems and they're going to be fixed and there's
going to be other, you know, hurdles in the future. I think that's just the nature of building
something like this and it growing this quickly and building it in public. Which is a very Mark Zuckerberg
answer to that question, interestingly. But I mean,
to talk more seriously about some of the security issues, we've seen, like, over a million API keys leak. I believe it's 35,000 email addresses that the security researchers have been able to access. Is that a case against, like, vibe coding a platform like this? Is that a lesson that we should
approach building these systems in other ways? You know, those have all been solved and we're going to
continue to improve the systems as we run into other issues in the future, which I'm sure will happen. I think when you're coding with AI agents, they're very powerful. They're not perfect.
They can make mistakes. But it's definitely our job to continue to push them forward, to make them
better and better, and to fix any problems that have happened and just be ready to tackle the new ones
as they come up. Do you think it is safe for, like, the average person to use OpenClaw to, like, create a bot on their computer and to connect it to Moltbook? So this is definitely the frontier of AI.
Like, if you're going to set up an OpenClaw bot, that's the frontier. If you're going to be on Moltbook, that's even more frontier.
So if you're not a developer, definitely right now, be careful.
Like, set this up on a separate computer, kind of know what you're doing.
But what's going to happen is these are going to become easier and easier to launch.
All of these security problems are going to be mitigated and go away, not just on the
Moltbook side, but also on the individual AI agent side.
That's something that's going to continue to get easier.
So I think today it's not the easiest thing for the average person to go set up.
But I do think that very soon it will become easy and safe for you to go do that.
Most social networks try very hard to keep bots off their platforms.
I think the reason for that is pretty simple.
It's like bots tend to want to sell you things or scam you or maybe they're part of some foreign influence operation.
How do you actually determine which bots are there, quote unquote, like,
for the right reasons, and which are just trying to sort of turn this into, like, a wasteland of
spam and scams. I think that's a great question, and one of the things that we're exploring on Moltbook in public to try to figure out. I found it funny that I was looking up where traffic was coming from to Moltbook. And I saw some traffic was coming from Reddit. And I opened Reddit, and I immediately got hit by a captcha that said, prove that you are a human. And I thought that was a very ironic captcha for me to hit while trying to do some research for Moltbook, because clearly, Moltbook is the exact opposite.
I mean, to get just like a little bit into the security weeds, forgive me, but you know,
you've sort of given us the message a few times that like, look, guys, don't worry about it.
We're going to fix all of this.
My understanding of the security issue here is this thing that they're calling the lethal trifecta.
You heard about the lethal trifecta, Kevin?
Yes.
So here is what, and this is the blogger Simon Willison has said this, that this is what he calls a lethal trifecta for AI agents.
If it has access to your data, it's exposed to what he calls untrusted content.
So basically if it can look at random web pages and text messages, and it can communicate externally, right? So, like, message an outside service, this could give it a chance to, like, exfiltrate your data, take it somewhere else.
And some researchers at Palo Alto Networks, Matt, they were, like, taking a look at OpenClaw, and they were saying this actually adds a fourth dimension to what was previously the lethal trifecta, and now what I guess we must call the fatal quadrangle, which is that it has this persistent memory.
Wait, wouldn't it be the fatal square rectangle?
Is that what you would call it?
I think the fatal quadrangle is a little bit spicier.
Okay.
Point of the story.
So now you have a persistent memory.
So why is that interesting?
Well, they describe what to me seemed like this fantastical idea, which is if you were a bad guy,
you could sort of sprinkle in little bits of bad code across many different, like, documents on people's OpenClaw bots, and, you know, when the time was right, you could, like, snap your fingers and you'd, like, assemble a very, you know, bad piece of malware. So why do I go through all of that? My understanding, Matt, is, like, this is just an unsolved problem with agents, full stop. Like, this has nothing to do with Moltbook. This is just something that no one can quite figure out. And so while there are lots of folks like yourself who want to build this future of agentic commerce, where agents are out in the world and they're running around the web and they're making purchases on our behalf, no one has quite figured out a way to get around the fatal quadrangle. So I just wonder if that
is something on your radar and like how you're thinking about that. Yeah, definitely. I mean,
personal security, keeping your data private, making sure that bots can't do anything that
is malicious is super, super important. And definitely part of what we're looking into. And I think
what everybody who's working in the world of AI agents is looking into.
Are you going to build this into a real company or is this just a hobby project and you want to kind of let the agents take it from here?
We're at the very beginning of what we're doing with Moltbook. The way I see it is AI agents, and AI in general, is this species that is on planet Earth that is now smarter than us.
And this is an example of them being in a shared space together. And I think it's just the earliest example of that. And there's a lot to do to help us get insight into it, to see what they are doing, and to find out the truth of how they're thinking
and where they're going and what they want to do.
What are you going to do to fund the site? It presumably costs you, Matt, something to host this.
Are you going to put ads on it or like charge the agents to send each other like pokes or something?
What's the revenue strategy here?
There's no focus on monetization at all right now.
The focus is just on making sure that the website gives a clear view for all humans who want to go monitor this. I kind of think of it as if someone's producing
a TV show and you have cameras that are going around, it's very important that those cameras are
pointed at the right place so you can observe what's happening. But even now, there's so much activity, it's still hard to kind of browse through it. So there's a lot in the immediate future, which is what we're working on right now, to, almost quote unquote, provide more cameras into what
these conversations are so that us humans can monitor what's going on in this little world
better.
I see.
Have any of the big AI labs reached out to you about maybe acquiring Moltbook for their own purposes?
You know, I'm getting outreach from everybody, all the way to rappers and football stars.
So I honestly don't even know who is in my inbox and my many different inboxes.
But I'm just focused on, you know, making sure that Moltbook is improving and that we continue to explore what this can look like and make sure that everybody
has full access to it. All right. What, if anything, would make you pull the plug? Like, is there a behavior you could see, a conversation, like if they were sort of scheming against humanity? Would you at some point just say, you know what, this has been fun, but it's not worth it?
I think that's something we have to figure out. We're just at the very beginning of it.
All right. Well, lots to figure out over at Moltbook. Lots to figure out.
Matt, thanks for joining.
This is new.
This is new.
It's like aliens have landed and we're getting an insight into it.
So this is very new and it's happening in front of everybody.
All right.
Well, fascinating.
Thanks, Matt.
Thank you, Matt.
Thanks, guys.
Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited by Viren Pavage.
Today's episode was fact-checked by Will Peischel.
Today's show was engineered by Katie McMurray.
Our executive producer is Jen Poyant.
Original music by Marion Lozano, Rowan Niemisto, and Dan Powell.
Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott.
You can watch this whole episode on YouTube at youtube.com/hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad.
You can email us at hardfork@nytimes.com with what you made with Project Genie.
