Hard Fork - Tech Grapples With ICE + Casey Tries Clawdbot, a Risky New A.I. Assistant + HatGPT
Episode Date: January 30, 2026

This week we're talking about the tech industry's response to the killings by federal agents in Minneapolis and the federal government's strategy to control the narrative on social media. Then we follow Casey through his trial of a new open-source A.I. assistant called Moltbot (formerly Clawdbot) and consider whether it is worth the security risk. And, finally, it's time for a look at the rest of the week's tech news with a round of HatGPT.

Additional Reading:
- False Posts and Altered Images Distort Views of Minnesota Shooting
- A Moment-by-Moment Look at the Shooting of Alex Pretti
- It's a War: Inside ICE's Media Machine
- Some Amazon Employees Get 'Project Dawn' Calendar Invitation Discussing Upcoming Job Cuts
- TikTok Data Center Outage Triggers Trust Crisis for New U.S. Owners
- Former FTX Crypto Executive Caroline Ellison Released From Federal Custody
- Anthropic C.E.O.'s Grave Warning: A.I. Will "Test Us as a Species"
- Inside the White House Screening for Amazon's 'Melania' Doc
- App for Quitting Porn Leaked Users' Masturbation Habits
- Alaska Student Arrested After Eating A.I.-Generated Art in Protest
- Steak 'n Shake Adds $5 Million in Bitcoin Exposure, Deepening Bitcoin Commitment
- SpaceX Weighs June I.P.O. Timed to Planetary Alignment and Elon Musk's Birthday
- LinkedIn Will Let You Show Off Your Vibe Coding Expertise

We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here: https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.
Transcript
I'm going monk mode.
That's what I'm calling it.
Monk mode.
When I, because I need to finish this book.
So you're embracing celibacy?
Yeah.
No, I'm locking myself in an Airbnb and I'm writing 14 hours a day.
And now that's actually the premise of The Shining, the Stephen King novel.
So how is it working out for you?
It's not great.
Are you seeing any rivers of blood flowing down the hallways yet?
Redrum.
But no, it's going well.
I'm in the home stretch now.
I can sort of see the light at the end of the tunnel.
It's very nice.
And tell us about the experience of like,
because I've, you know, been in similar situations
where you sort of look yourself in the mirror
and you're like, all right, Newton, shape up.
Now, it's time to buckle down.
No more visiting your little comfort websites anymore.
You actually got to, you know,
make the clicky, clacky noise on the keyboard.
Does that actually work for you?
Yes, I've figured out a productivity stack that works for me.
Let me tell you the stack.
So I bring my big-ass curved monitor, I sit myself down, I put on my noise-canceling headphones.
You have a silent disco.
I listen to "Somebody Told Me" by The Killers on repeat, like an infinite loop,
and I imbibe dangerous quantities of the Celsius energy drink, and I write until I fall asleep.
Now, what is it about The Killers' "Somebody Told Me"?
I don't know.
If you're not familiar, younger listeners, you know, if you haven't heard "Somebody Told Me,"
it's a song about how somebody told Brandon, the lead singer of The Killers,
that they had a boyfriend who looked like a girlfriend that he had in February of last year.
Yeah. And that's like a code phrase and it triggers something inside you.
It like activates you like a sleeper agent.
Yes, exactly. Honestly, so here's the thing.
Yeah.
This is a tip that I picked up from Michael Lewis, the famous author.
He said that he listens to "Let It Go" from Frozen over and over again while he writes his books.
Because, you know, Michael Lewis is a five-year-old girl.
Yes. And so I thought, well, that can't possibly work. But it's true. I tried it out and it works. It doesn't matter what the song is. What matters is that it has the right tempo. And after about the seventh time through the song, the words just melt away and it just becomes white noise and it puts you in kind of this flow state.
I can't wait to repeat this story and tell people that's effective. And they'll say, well, how do you know all this? And I'll say, well, somebody told me.
I'm Kevin Roose, a tech columnist at The New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, phone-to-phone combat in the streets.
We'll talk about the tech industry's response to the ICE killings in Minneapolis.
Then it's the reason Mac minis are sold out across San Francisco.
We'll tell you about the sensation now known as Moltbot.
And finally, it's time once again for HatGPT.
So today we want to start
by turning our attention to Minneapolis
and the ongoing confrontation
between ICE officers and protesters.
Kevin and I have both been horrified
by the fatal shooting of Alex Pretti,
the second fatal shooting of a U.S. citizen
by federal agents this month.
And while we don't often turn our attention
directly to matters of politics
and protests on the show,
we both believe that the tech industry
is playing a role in this story.
and that we would be remiss if we did not look at the elements of what is going on in Minneapolis that are ours.
Yeah, I've been really horrified by what's been going on in Minneapolis.
I have family in Minneapolis.
They're scared.
Some of them are leaving.
It is a really tense and hard time in American civic life right now.
And I think we should talk today about not just how the big tech companies and their leaders are acting right now,
but also about what role technology itself is playing in some of the reaction to these shootings.
Yeah, and I think the tech industry is playing a significant role in what is happening right now.
It is an infrastructure provider behind many of the key tools that are being used in this fight.
I would say primarily a lot of the surveillance tech that ICE is relying on to identify and detain migrants,
but also the social platforms, where once again we're seeing violence being turned into content
and influencers on both sides trying to make it go viral.
And then you just have the dimension of the fact that there is actually now state power,
the government that is using these tools in ways that we have not seen before in America.
So there's a lot to talk about here.
Yeah.
So let's start with the CEOs.
At the start of this week, Sam Altman of OpenAI, Dario Amodei of Anthropic, and Tim Cook of Apple
each made statements or sent internal memos to their employees,
kind of lamenting what was going on.
These were slightly different in the way they were worded, but they all sort of came after employees at some of these companies have been urging their CEOs to speak up to say something.
Sam Altman sent an internal Slack message to OpenAI employees that said that he had spent some time talking with Trump officials, and he described ICE as going too far.
He also said that he thought that part of loving the country is the American duty to push back against overreach.
Dario Amodei also discussed it in a post he made on X, where he called the events in Minnesota
a horror, and Tim Cook sent an internal message to Apple employees saying that, quote,
this is a time for de-escalation.
So we're not seeing the kind of full-throated denunciations that we might have seen with the
first Trump presidency from some of these leaders, but they are saying at least to their own
employees that they are not happy about what's going on. Yeah, and I would say that in several of
these cases, we are seeing these CEOs say about the least that they possibly could. Like,
you can tell, I would say particularly in the Altman and Cook statements, they are very afraid
of irritating the White House. And at the same time, they're facing a lot of pressure from their
employees to say something. And so I think this represents their best efforts to find a middle
path. Personally, I found it a little weak. Yeah, I, like, as a general rule, I don't like when CEOs or other
business leaders talk insincerely out of obligation. I think we saw some of that several years ago
from the left. I think we saw some of that during the sort of first months of the Trump administration
aimed at appeasing the right. Like, I just don't like when CEOs talk in ways that are not their own
views. In some of these cases, I'm sure these are sincerely held views, but it just seemed to me like
why are we doing this?
Well, I think the reason that we're doing this
is because these guys are politicians too.
They are in their own way, heads of state.
They represent hundreds of millions of users
and giant employee bases,
and so they have to get in there and play politics.
And I'm sure they hate it,
but the fact is that in these cases, they have to.
Yeah, and I think we should say,
like, it is risky to do this
in this current environment.
Like, there was a post from Chris Olah,
who's one of the co-founders of Anthropic,
who posted a very heartfelt,
I think sincere message about how horrified he was by what was happening in Minneapolis.
And Katie Miller, the wife of Stephen Miller, the sort of White House advisor, reposted it and
basically said, look, this is the AI companies. They're building this bias into their models.
If they talk like this, this is how their AI models are going to talk. So there is a cost
politically to saying something right now. And so I don't want to be totally dismissive of the people
who are doing that. I think they are displaying some backbone. But as you said, it's like,
in some of these cases they're saying about as little as they could.
So, Casey, one of the last times we talked about political violence on the show was after the
assassination of Charlie Kirk when we talked about some of the online commentary and how weaponized
it was feeling out there how these sort of platforms were becoming places full of rage bait
and incendiary provocative content and how that was sort of like bad for us as a society.
I'm curious, like, does this feel like another case of that to you, or what are you seeing out there?
I think there are definitely ways where it's similar, Kevin. People are drawn to conflict, and so are social networks, right? This is why Charlie Kirk was so successful in his way, is that he would create these conflicts by visiting college campuses, staging debates, and those conflicts would then, you know, go viral in ways that served him. We're seeing a similar thing here where the Trump administration uses spectacle as one of its main weapons. And if it can create a crisis that begets a bunch of video, that can sort of serve
their interest too. I think the real difference here is that Charlie Kirk did not have the power of the
state behind him and Trump does, right? Like Trump has an army behind him. He has thousands of ICE agents and other
federal agents. And so we have just seen, ever since he returned to power, the way that various
Trump administration officials would bring along influencers in their entourages when they would go about
their business. So like last year, Kristi Noem brought a bunch of right-wing influencers with her when
she visited an ICE operation happening in Portland.
And another important thing here, Kevin, is the way that the Minneapolis operation began after a video alleging fraud in daycare centers went viral.
So in this very real way, a social platform is setting the president's policy agenda.
So that's just very different from the Kirk stuff.
The stakes are a lot higher here.
Right.
I think in a lot of public spaces, like what's happening in Minneapolis is not a 50-50 issue.
Like, I think it is actually a pretty widely held belief that federal agents should not be shooting protesters in the streets.
But on X, it's like you see people defending it.
You see people spreading these, like, conspiracy theories.
You see people circulating this manipulated footage and these, like, deep-faked images that purport to show, like, you know, more of a violent standoff than there was.
And it just seems like a platform that has either decided to put its thumb on the scale in favor of the administration and its
responses, or where just a lot of this is going on unchecked.
Yeah, I mean, you know, you predicted after Trump won that X was going to become state media.
And I think that it basically has. This is where the Trump administration goes to get their
policy ideas and to advance, you know, whatever operation they're working on at the moment.
And so you have just seen all across X administration officials making their case that this
ICE campaign is justified or necessary. Yeah, and I think another difference between what's
going on now and the Charlie Kirk assassination is that the government itself is getting involved
in actually producing some of this content. There was a great story by two reporters at the Washington
Post last year about ICE's media strategy, which was based on a bunch of internal agency
documents and messages they got. And what I found remarkable about this is that ICE, like some
other federal agencies, now essentially has like a team of content creators who work at
the agency and who are spending every day trying to sort of like steer the narrative using, you know,
the same techniques that a brand or a celebrity would. They're out there, you know, using paid
social media tools. They have like a team of producers and editors and video makers who are
out there making clips. It seems like these agencies are not just thinking that their jobs are
enforcing the law. It seems like they are also trying to control the narrative around how they
enforce the law. And I would say almost to the point where like winning on
social media has become almost the entire point, right?
Like, yes, there are policy objectives here, but in a very real way, they seem secondary
to getting the most retweets.
Yes.
Yeah.
So that feels new.
Right.
So in addition to the government actively working to kind of seize and control the narrative
online about these events, we also have the rise of AI tools that are making it very hard for
people to know what's real and what's not.
So my colleagues at the New York Times had some reports on the way that AI was being used to
alter images of Alex Pretti, the victim of this shooting. One was doctored to make him look like he
was pointing a gun at an agent, although it was actually a phone. Another was used to enhance a
blurry freeze frame from a video of this shooting. And we should put enhance in air quotes,
because it didn't actually enhance it. It just made a bunch of stuff up. Right, exactly. So
it was sort of upscaled in this way that like introduced some errors into the image.
So Casey, I can't imagine this was surprising to you. We've seen this
kind of thing after most major news events in the past couple of years. But did anything stick out to
you about the actual media that was being circulated? Well, I mean, when it comes to like online detectives
who want to participate in the story of the day, that is not new. That's been going on for well
over a decade. But Kevin, the White House released this AI-altered image of a civil rights attorney
named Nekima Levy Armstrong, who went to an anti-ICE protest last week and was arrested. And in the
aftermath of her arrest, the White House puts out this image to make her look like she's crying
her eyes out when in fact she was not. And it's kind of a weird story because, like, DHS had already
released the original unaltered image, like before this altered one came out. But once the one
of her crying, which again she wasn't doing, comes out, like J.D. Vance, the vice president retweets
the image. So the thing that this just has me thinking about is this phrase that got associated with
Russia in the 2010s, which
is nothing is true and everything is possible. And it just referred to the way that Putin and his
lieutenants would try to shape a world where you just sort of couldn't trust anything that you
were seeing anymore, that the state invested huge resources into just blurring the line between
truth and fiction. And if you wanted to know what was really happening, you were just going to have
to invest enormous amounts of resources. And the way that this serves the administration
is this concept we've talked about a few times on the show that I love, which is called the
liar's dividend, which means that because people know that evidence can be fabricated, no matter what
evidence you see now, you're always wondering, was this maybe fabricated? And that just erodes trust
in our society more broadly. So I think it's hugely irresponsible for the vice president to be
sharing these sorts of images. But when the administration has been asked, what are you doing here?
Like, why are you sharing these obviously doctored images, a spokesman for the White House
just said, the memes will continue, which I honestly found chilling. Yeah. What do you think the
platform should be doing about this? Should they be taking down doctored images? Should they be applying
some sort of label? Like, what is the correct response here? Well, so I go back to the way that platforms
handled this during the first Trump administration, where they were much likelier to act. You know,
during the 2020 election when President Trump was sharing misinformation about mail-in voting ballots,
Twitter put a label on his post that directed people to high-quality information about how mail-in voting
actually worked. Now, of course, Twitter doesn't exist. Elon Musk owns X. And if anybody is going
to correct anything, it's going to be in this community notes feature, which is just sort of this
volunteer crowdsourcing effort. And I'm glad that it exists. You know, like, I'd rather have it
than not have it. But of course, it's unpredictable. It doesn't always appear in a timely fashion.
It's my view that platforms have a strong interest in labeling obviously AI-generated content.
I think that if you want people to trust your platform,
you should take steps to help them understand
whether what they're seeing is real or false.
But of course, in many cases,
they're going to be terrified to do that
because there might be a mean tweet about them.
Right, exactly.
And I think for me, what I've been thinking about
is less the platform response and more, like,
I've been thinking a lot about the AI companies.
Obviously, I have AI on the brain sort of permanently.
But I think this is actually why these companies
should support regulation for the technology that they're building.
Because right now they're in kind of this fragile state where they can't really speak up against the administration in very forceful terms without risking some kind of retaliation, some new law, just a bunch of mean tweets.
Like there are actual political risks to taking a stand on something.
Even if they were to come out now and say, oh, we're going to, like, apply a label to AI-generated images that are misleading, or we're going to have some kind of watermarking system.
I think some people in the Trump administration would interpret that as like a response to what they were doing and they would get mad about it.
Yeah, they'd call it censorship.
Exactly.
So I think this is actually the case for like regulation that is passed by Congress, that is signed into law and that persists across administrations so that you don't have to kind of operate in this like weird limbo where you're trying to do the right thing, but you're scared that the administration will punish you for that,
and anything that you do is going to be seen correctly or not as a reactive countermeasure
instead of just having one set of rules that apply across all administrations.
I think one set of rules is a great idea.
I think we should pursue this.
I also think that if you find yourself in a situation where there's a presidential administration
that decides it wants to lie about what is happening and is willing to fabricate evidence to support
those lies, platform policy is only going to help you so much.
I want it to exist.
I want to be able to trust in it.
I just want to also point out there are limits to it.
Exactly.
I want to talk about this other dynamic here, Kevin,
which in my head I've been calling the battle of phone versus phone, right?
Smartphones are playing a central role in this conflict,
and while, of course, phones have long been part of protests,
I feel like we're seeing a direct confrontation between the state
and the protesters involving the phone
in a way that we haven't quite seen before.
Yeah, this stuck out to me, too, as I was looking at some of the awful videos of the shooting
of Alex Pretti, everyone's holding phones.
Yeah.
Right?
This is now like a standard part of any confrontation with law enforcement in this country
is if there's anyone around to witness it, there's going to be phones pointed at it.
Someone is going to be filming that.
Absolutely.
Alex Pretti was holding a phone when he was shot.
Renee Good's partner was filming when
Renee was shot, and videos of their killings have spread on social networks and I think have significantly
shifted public opinion against the Trump administration. You know, Governor Tim Walz of Minnesota
said, carry your phone with you at all times in order to help us create a database of the
atrocities against Minnesotans. Okay. So this is a dynamic I think we have seen before. You can go
all the way back to the Arab Spring and look at the way that people were posting on Twitter,
images of, you know, state violence. Here's, I think, the twist. The Trump administration has been doing
two things. One, they've been putting pressure on American citizens not to film, right? The DHS secretary,
Kristi Noem, said in a press briefing in July, quote: Violence is anything that threatens them in their
safety. So it is doxing them. It's videotaping them. It's videotaping them where they're at when they're out
on operations. Also, Tricia McLaughlin, who's the DHS Assistant Secretary for Public Affairs, said
in a statement, quote, videotaping ICE law enforcement and posting photos and videos of them
online is doxing our agents. We will prosecute those who illegally harass ICE agents to the fullest
extent of the law. So there's this recognition in the Trump administration that being filmed
and having your video put on social media is dangerous to you. And we've started to see a lot of
threats against people who are doing this. And we should just say like, it is not illegal
to film law enforcement. It is not doxing to post the name or
the image of a federal agent who is operating on behalf of the state,
like these people wear badges, or at least they're supposed to,
because we want this sort of public accountability.
And if they are acting in ways that are illegal or unethical,
we want to be able to hold those individuals accountable.
But I agree that there's this sort of feeling
among certain members of the administration
that these phones present a danger.
Absolutely.
And when you consider the fact that all of these ICE agents
are just permanently masked when they're out
in the world, it makes you wonder, Kevin: what are they more scared of? Is it the guns that some of these
protesters may be legally carrying, as Alex Pretti was? Or is it the phones that are in their hands?
Is it the threat that someone will find out who they are and what they're doing and link it back to
their actual identity? There's this final thing that we're seeing in this phone versus phone battle,
though, Kevin. And this is where I also think that we're in just kind of some new territory,
which is the Trump administration is bringing their own phones to the battle, right? They're bringing
along those conservative influencers. They're giving them talking points and ideas for what sorts of
videos they might post. There's a long tradition of the government using celebrities and influencers
to promote their worldview or, you know, various things that they're doing. But the thing in
Minneapolis that just seems really different to me is that it is almost like phone-to-phone combat
between Americans, right, where you have the protesters who are trying to document what is happening.
And you have the administration, which is trying to shoot photos and videos to suggest
that these are sort of like very effective anti-immigration raids.
Yeah, and I think for me the question is what happens when the videos,
the artifacts that these phones are producing,
themselves become untrustworthy.
You know, when, as you mentioned, people were filming the Arab Spring
to try to bring accountability to their leaders,
when people started taping their encounters with law enforcement 10 or 15 years ago,
it was seen as sort of verifiable proof
that what you saw filmed from these cameras
was what actually happened.
And I don't think we can make that assumption anymore.
I think we are no longer in the sort of era of video being proof
that something happened or pictures or any other form of media.
I think we have entered this kind of postmodern state
where things need to be interpreted in context
and there is no one canonical source of truth.
That is true, and yet at the same time, like, I have a more optimistic view of where we are, at least in this moment, because this particular shooting, the Pretti shooting, was shot from many different angles.
Journalists put a lot of work in to verify the videos that were posted. And interestingly, as they spread on social media, there was not a big movement to say these videos are fake. There just wasn't.
And in fact, I think the video of Pretti's killing shocked the conscience of many Americans,
some of whom had previously been fully aligned with the Trump administration in these anti-immigration raids, right?
We saw the NRA criticize the administration's portrayal of Alex Pretti's killing, which I think really shocked a lot of people.
And so in a very grim moment, this was the silver lining for me, was that actually Americans
do still trust the video that they are seeing,
including the video that's being shared on social media.
And they are not immediately leaping to,
I am being lied to,
this is just sort of manufactured evidence from partisans.
Now, of course, the question is,
how long will that state of affairs hold as AI tools improve?
Does that bargain sort of break?
But I don't know.
There's something that is happening here
where it's like, if enough people see it,
if the video is captured from enough different angles,
and if the administration isn't willing to just lie at an ever greater scale and manufacture ever more evidence to support their version, maybe the truth survives here.
Yeah.
Well, that's a hopeful note to end this discussion on, and I'm glad we talked about this.
Next up, why Casey is giving over his entire digital identity to an app called Moltbot.
Casey, did you watch the Alex Honnold free solo of the
Taipei 101 on Netflix?
I was able to watch about 60 seconds of it,
and my palms started sweating so much,
I thought I was in 8 Mile.
Yes.
It was very nerve-wracking.
But speaking of extreme risk-takers,
daredevils, people who take their lives into their own hands,
I hear you have actually been experimenting with some new technology this week.
I have, Kevin.
I have installed Moltbot, formerly known as Clawdbot,
onto my laptop, and I've said,
damn the consequences. Yes, and this is
kind of the AI version
of a free solo up the Taipei
101. My
understanding is this is like insane
behavior. So please tell me
about Clawdbot, which we should
say is not affiliated with
the Anthropic Claude
chatbot. It is spelled,
or was spelled, C-L-A-W-D.
Clawd, like a lobster claw.
Yes. And it has since changed its name
to Moltbot, I guess, because of some copyright concerns.
Well, when lobsters shed their shells, they're molting, and so now the former Clawdbot
is Moltbot.
And this thing was the talk of the town.
Over the past weekend, I saw many, many people who work in tech in San Francisco becoming
obsessed with this tool, talking about how it was running their entire lives.
And I am glad that you experimented with it so that I didn't have to.
And now you can tell me about it.
But let's just do our disclosures real quick, because we are going to talk about Anthropic
and other companies in this.
Well, my boyfriend works at Anthropic.
And I work for The New York Times,
which is suing OpenAI, Microsoft,
and Perplexity over alleged copyright violations.
Yeah, well, so I first saw Clawdbot
in a post on MacStories
by the great Federico Viticci,
and Federico, like me,
is always ready at the drop of a hat
to make some new piece of software
his entire personality,
and Clawdbot looked like one of those things.
It is a free, open source,
personal AI agent that you can run on your computer,
plug it into various systems, services, and AI tools,
and if all goes according to plan,
you will have something resembling a little genie inside your computer
who can work for you.
So, Casey, who created this bot?
Moltbot was created by a developer named Peter Steinberger.
He has worked on many, many kind of cool, hacky, tinkery projects.
You can go to his GitHub and see, like, dozens of projects that he has worked on that are all in this kind of do-it-yourself vein.
He started Clawdbot to scratch his own itch.
Basically, he just observed that there was no great personal assistant agent yet on the market,
and he wondered if he could figure out a way to use Claude Code through WhatsApp,
and that's how the whole project began.
Okay, so you saw this and thought, I should try this.
So how easy or hard was it to set up?
Well, I would say the setup is pretty easy.
If you've installed Claude code, it will feel quite familiar.
You go to the Multbot website.
You grab a little one-line piece of code.
You put it in your terminal, and it will begin the setup process.
And I saw a lot of people over the weekend talking about how they were buying Mac minis
to install this thing, sort of like give it its own computer.
What is the deal with that?
Why are people doing this on separate computers rather than their main computer?
Yeah.
Well, like when you free
solo a giant skyscraper, Kevin, there is some risk involved with installing an AI agent onto your
main machine. A few of the risks I would name: there is the risk that somebody can just kind of
hack into your computer. Moltbot can connect to a messaging app like Telegram. If somebody
hacks into your Telegram account, theoretically, they could actually just take over your entire computer
and do anything on your computer that you could. Yeah, so we should just say, like, don't do this.
Don't do this. Do as Casey says, not as he does. This is a very bad idea. And you like to, I don't know, live on the edge a little bit, or what were you thinking?
Yeah. Well, what I was thinking was, first of all, I have, like, disabled the Telegram integration. So I have, you know, gotten rid of that particular attack surface. And I'm also being careful.
How are you buying your drugs?
No comment.
Yeah. So anyway, that's one of the risks. You
have the prompt injection attack risk, which is that you will visit a website that has some hidden
instructions buried in it, and the agent will see it and will follow those instructions.
That's very hard to protect against. And so basically, if you're going to use something like this,
you would be better off putting it in what they call a sandbox, some sort of contained
environment where it only has access to a narrow range of tools and nothing that has access to,
for example, your bank account. So that's the right way to do it. I tried to do it in a way that
is undeniably risky, but, you know, I've tried to take the precautions I can. I'm only
connecting it to services that, you know, are not life or death for me. You know, as I'm sure we'll
get into, the main thing that I've been doing with Moltbot is just trying to build a personal
daily briefing for myself. I do not have it working for me as like a full-time AI employee who's,
you know, out browsing the web and doing a bunch of, you know, errands on my behalf every day.
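For readers who want to picture the sandboxing Casey is describing, here is a minimal Python sketch of the tool-allowlist idea: the agent can only invoke tools you explicitly expose, so a prompt injection buried in a web page can't reach anything sensitive. Every name below is hypothetical, a sketch of the pattern rather than Moltbot's actual internals.

```python
# Sketch of an agent tool allowlist: only harmless, narrowly scoped tools
# are exposed, so even injected instructions can't reach anything sensitive.
# All names here are hypothetical, not Moltbot's actual API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]

# Only harmless, narrowly scoped tools go in the sandbox.
ALLOWED_TOOLS = {
    "weather": Tool("weather", lambda city: f"(stub) forecast for {city}"),
    "calendar": Tool("calendar", lambda day: f"(stub) events on {day}"),
}

def call_tool(name: str, arg: str) -> str:
    """Refuse anything outside the allowlist, no matter what the model asks for."""
    tool = ALLOWED_TOOLS.get(name)
    if tool is None:
        return f"refused: '{name}' is not an allowed tool"
    return tool.run(arg)

if __name__ == "__main__":
    print(call_tool("weather", "San Francisco"))
    # Even if injected instructions request this, it never runs:
    print(call_tool("bank_transfer", "all my money"))
```

The design choice this illustrates: the gate lives outside the model, so it holds no matter what the model is tricked into asking for.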
Right, which is the kind of thing that I saw people experimenting with, or at least, you know,
posting about this weekend. And we've been talking a lot about like agentic AI systems. How is this
different? Is it just that it can like connect to stuff that Claudecode or Codex or all these
other coding tools can't? Yeah. So the first thing to say is that yes, this actually is a lot like
Claude Code or maybe Codex from OpenAI. And you can get a lot of the same benefits out of one of
those tools as you can with Moltbot. Moltbot, though, tries to offer a few things to distinguish it.
Number one, it runs locally on your computer.
That's important to a lot of people.
There are actually ways to restrict the amount of data that it is sending back into the cloud.
Two, it has a better memory system than Claude Code does currently.
In Claude Code, you're always trying to cram everything into a context window,
and often you'll run out of context window, and it has to compact the conversation,
and you often feel like you're just kind of starting from scratch,
and Claude can't remember what you did yesterday.
What Moltbot does is it just writes memories to
a markdown file, and then it continuously revisits that. And in my experience, it has been a little bit
better at understanding, for example, if you built a tool with it the previous day and you want
to make a tweak to it, it knows where to look to go find that project and edit it, which sounds
very simple, but it's actually something that Claude Code is very bad at and has frustrated
me constantly. Like if North Korean state hackers are like infiltrating your machine and stealing your
money, like, it'll like sort of leave them a nice little markdown file that they can return to the next time they
want to steal from you. That's right. It'll say here's exactly how to break into the vault.
Okay. Okay, good. You're really reassuring me here. So give me a sense of like what
Moltbot is doing for you. Give me, like, one example of something that it has done that other
AI tools have not been able to do. Well, yeah. So, you know, I did, again, danger zone,
but I wired it up to my email and my calendar. Oh my God.
I didn't know I was in the presence of Evel Knievel. You are truly doing the most out here.
What happened to you?
I'm always on the hunt for something interesting to write about.
That'd be the honest answer to that question.
And I wanted to see if maybe this would be one of those things.
As one of my mentors used to say, it'll either be a good time or a good story.
There you go.
There you go.
And I would say this was probably more in the realm of good story than good time.
Yes.
But yeah, so, you know, I connected it to my email, my calendar, a few other services.
And I thought it would be great if, in the morning, when I visited my Moltbot and I just wrote good morning,
I would get a nice briefing
that would have things like,
here's the weather,
here's some important emails you should answer,
here are the calendar events
that you have coming on today.
And then I thought,
okay, now I want to make it extremely specific to me, right?
Because like this is maybe where AI
can help me get beyond the very basic stuff
like email calendar.
And so I said,
I want you to tell me
whenever there is a new pro wrestling pay-per-view on TV,
I want you to tell me
when there's a new episode of RuPaul's Drag Race,
every Thursday,
I want you to tell me what movies are coming out this week
and who stars in them and give me a one-sentence synopsis.
And so I just start, you know, again, you're typing into the box.
You're saying here's what I think the best daily briefing would look like for me.
And by hook or by crook, over a handful of days,
I got something, Kevin, that works about 70% of the time.
This is why you can't buy RAM.
Because you need your RuPaul's Drag Race briefing.
Yeah, it's definitely me and not the, like, you know, 800
data centers that are being built all over the world.
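Here is a rough Python sketch of how a "good morning" briefing like Casey's could be assembled: one trigger phrase fans out to a handful of personal data sources and stitches the results into a single digest. Every data source below is a stub, and none of this is Moltbot's actual code.

```python
# Sketch of a personal daily briefing: each function stands in for a real
# integration (weather, email, calendar APIs) that the agent would call.

def weather() -> str:
    return "58F and foggy in San Francisco"  # stub

def important_emails() -> list[str]:
    return ["Reply to editor re: deadline"]  # stub

def todays_events() -> list[str]:
    return ["10:00 podcast taping"]  # stub

def drag_race_airs_today() -> bool:
    return True  # stub: in the real version, true only on episode nights

def briefing() -> str:
    lines = [f"Weather: {weather()}"]
    lines += [f"Email: {e}" for e in important_emails()]
    lines += [f"Calendar: {e}" for e in todays_events()]
    if drag_race_airs_today():
        lines.append("New RuPaul's Drag Race episode tonight!")
    return "\n".join(lines)

if __name__ == "__main__":
    # The agent would run something like this when the user types "good morning".
    print(briefing())
```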
I have not seen this in action yet.
Can you just show me or tell me how you actually use this thing?
Yeah, I can show you.
So let's see.
I'll open up my Moltbot here.
And this is running,
you're going to see this in a browser window,
but this is running locally.
So let's see.
Can I talk to it?
No, I haven't trained it on your voice,
so it's not going to respond to anything you say.
Okay.
But, you know, I can just type, good morning in here, and it'll think for a second, and then it'll, like, pull up a briefing.
But, you know, you can see it's like, it's like a plain beige chat window, nothing fancy.
A lot of people are messaging Moltbot through third-party messaging apps, so there are ways to hook it up to WhatsApp, Discord, Telegram, various other messaging apps.
That's very convenient. It means you can easily access it from your phone.
But, of course, then there's a lot of risk involved.
What can it do?
Show me one thing it can do.
Yeah, well, so it's going to pull up the briefing that I've been putting together.
So you're going to see it's going to come up here.
It's going to show you like, here's the weather.
Here's some emails you might want to look at.
Here are some overdue tasks that you have in your to-do app.
You're also probably noticing that it's taking a really long time.
Now, I want to say something about this thing that I did,
which was I wanted to be able to get this
sort of daily briefing that I've described just by typing the words
good morning into Moltbot.
And now I've already done this a few times this morning as I was testing and preparing
for this segment.
So that might be why this broke.
But I just want to point out that in this moment, the thing that I built did not work.
So, you know, one reason why I was excited to talk about Claude Code on a recent episode
was I felt like this had gotten to a point where if you told Claude Code to do something,
it basically would just work.
And I am telling you,
I do not personally believe
in my own experience
that Moltbot is yet at that level.
I now run into this every day that I use it
where I will tell it to do something
and something will break along the way
and it won't quite work.
So what is going on under the hood?
Here's the great part of vibe coding.
I have no idea and I never will.
Yes.
This thing is wild
and I think we should just make,
again, a little caveat
that you should not install
this on your machine unless you know what you're doing and are willing to take some risks.
Put it on your ex-boyfriend's machine.
So, I mean, look, here's the thing. I entered into this thinking this might be something really
fantastic that could be a new way that I used my computer. And over the first few days of using
it, my expectations have been managed, I would say, very far downward. Yes. The security risks,
as you mentioned, are, like, not at all resolved.
This is not a safe tool to use for a lot of things.
But honestly, in some ways, just as important to me,
it just does not enable that much new stuff, right?
You can spend a lot of time hacking around,
wiring up various services,
having that feeling that you're being productive.
And then when you look back on it a day later,
you're like, okay, well, I have a very, like,
complicated new widget that's telling me the weather.
So why are people so worked up about this?
Like the way that people were talking about Clawdbot, Moltbot over the weekend,
it was like the rapture had happened and Jesus had come back.
So why are people so excited about this?
So here's my best guess.
I think that Moltbot is a very compelling vision of the future, right?
We have lived in a world where to do anything, basically since the dawn of personal computing,
you've needed a dedicated piece of software
that had a discrete set of capabilities
and Claudebot and AI come along
and they start to chip away at that vision of the world
and they say what if instead of having
a bunch of apps on your computer Kevin
there was just a genie who lived inside your computer
and every time you had a wish you could go to the genie
and say, genie, I wish for you to make me a website
and then abracadabra, to quote Lady Gaga, it exists.
That is something that is really interesting.
And while it is only barely beginning to become possible,
I can imagine what a good version of Siri would look like in the year 2036.
Yeah, I mean, that was...
Yes, it may arrive a little sooner than that.
Maybe.
But that was my first thought.
I was like, this is actually the thing that Apple said they were building into Siri.
It seems like this one random developer has actually gone out and built it.
And maybe it doesn't work all the time.
Maybe it's got all these risks.
but it does feel like people are figuring out
how to kind of stitch these tools together.
I'll tell you about one example of an interaction with Moltbot
that got me thinking somewhat excitedly
and also nervously about the future.
This is a user named Alex Finn
who posted this screenshot of his Moltbot
and he says,
can you make a reservation for me at this restaurant?
And then it said, the bot says,
I can't book directly through OpenTable right
now, but if you want me to just call them, I can call and book under your name. And the bot actually
did sort of use a synthetic voice from ElevenLabs to call the restaurant and place this reservation.
So that to me was like, you know, and I have not verified that this example works. I have not
tried this on my own machine. I will not be installing this bot because I value my security
and my data. But this is the kind of thing where, like, if it does work in a sustained way, it
would actually, I think, be a very useful tool for a lot of people.
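Here is a purely illustrative Python sketch of the fallback flow in that screenshot: if the booking API isn't available, synthesize a voice call instead. The text-to-speech and telephony calls below are stubs; the real setup reportedly used ElevenLabs, whose actual API is not shown here, and the restaurant details are made up.

```python
# Sketch of an "API first, phone call as fallback" reservation flow.
# Every function here is a stub standing in for a real integration.

def book_via_opentable(restaurant: str, time: str) -> bool:
    return False  # stub: no API access, as in the screenshot

def synthesize_speech(text: str) -> bytes:
    return b"<audio>"  # stub standing in for a TTS service like ElevenLabs

def place_phone_call(number: str, audio: bytes) -> None:
    print(f"(stub) calling {number} and playing {len(audio)} bytes of audio")

def make_reservation(restaurant: str, number: str, time: str, name: str) -> None:
    if book_via_opentable(restaurant, time):
        return
    # Fall back to a synthetic-voice phone call, booking under the user's name.
    audio = synthesize_speech(
        f"Hi, I'd like to book a table at {restaurant} for {time} under {name}."
    )
    place_phone_call(number, audio)

if __name__ == "__main__":
    make_reservation("Example Cafe", "+1-555-0100", "7pm tonight", "Alex")
```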
Yeah, and this is honestly just what it is often like to cover technology is that you are
writing about stuff that is directionally correct, way too early, and barely works.
And I think a lot of people's relationship to stuff like that is they don't want to
know about it because it's irrelevant to them.
But I think for you and me, it's really important to pay attention to that frontier because
it's been our observation, particularly in the AI
era, that stuff that barely works in January might be pretty freaking good by November.
So that's why I have my eyes on this one is I think you're seeing it starting to capture
people's imaginations and say, well, what if there was a safe version of this?
What if this was just natively part of the operating system?
What if I could trust it, you know, with all of my personal information?
Maybe I would never have to visit an open table ever again.
Right.
And what a world that would be.
Yeah.
Casey, I want to talk a little bit about this tweet that I got a lot
of heat for over the weekend. A heat tweet. A heat tweet. So I was sort of watching all this happen,
everyone, you know, experimenting, tinkering with Clawdbot, Moltbot. And, you know, I started
feeling like the gap that we are seeing between the people who are early adopters on this stuff
is really diverging from the people who are more cautious, who are more skeptical, who really
are only sort of using chatbots
in the way people were using them in
2023. And
I would say people got, like, very
mad at me for making this observation.
They thought I was sort of endorsing this early
adoption mindset where you like turn over
all of your decision making to an AI chatbot
and you let sort of these, like, Claude
swarms, like, run your life.
And I guess I'm curious
whether you think I was
right there or wrong. Like do you think
there is something notable going on here?
Well, let's go to the text.
Listen, if you
ever want to know what's a gift you can give any man, ask him to read one of his tweets out loud. So,
Kevin, why don't you read us this tweet? Okay. I wrote: I follow AI adoption pretty closely and I've never
seen such a yawning inside-outside gap. People in San Francisco are putting multi-agent
Claude swarms in charge of their lives, consulting chatbots before every decision,
wireheading to a degree only sci-fi writers dared to imagine. People elsewhere are still trying
to get approval to use Copilot in Teams, if they're using AI at all.
And the backstory here is that I had just been meeting with some groups about their AI policy.
You know, these are schools, nonprofits, people who sort of want to know, like, how should we be using this stuff.
And I have been struck recently that so much of what people are struggling with are the same things they were struggling with a couple years ago, right?
It's like, what is my enterprise IT policy about letting people use AI?
Should people be allowed to use AI to take notes in meetings or write
performance reviews? It is like kind of these institutional rules for the sort of last generation of
AI products. And what really has been striking to me is like how little people seem to believe
that the future and the past are going to look very different from each other. Yeah. That we are like
heading into something new and strange and something that requires us to shift the way we think about
these things. So I think that what you're hitting on
is a dynamic that has long been predicted among those who say that the diffusion of AI technologies
will be slower than the accelerationists think, because it will just run into a lot of
institutional roadblocks and bottlenecks along the way.
Like here in the real world, we do have IT policies.
We do have companies that aren't going to let you install whatever AI tool you want
on your laptop.
And we have a startup ecosystem full of wireheads that doesn't have any IT policies.
And the minute that some crazy new, insecure, open source project comes out, they put it on their
laptop to see what they can do.
And so that dynamic, I think, is going to persist.
And I think the question is, is there any real alpha to be gained by these wireheads who have access to
Moltbot and who can try any new AI tools and can lean all the way in?
Or do they wind up just kind of spinning their wheels a lot and running into a bunch of security
problems and get their GitHub account compromised and have phony crypto coins associated with their
names. There's going to be some variation. People are going to have different outcomes, but I think
that's what we want to watch. Yeah, and I'm worried about this sort of new axis of polarization that I
think is starting to happen right now. It's like, do you think this technology is real and important,
or do you think it is fake and overhyped? And I think we are starting to see the results of that polarization,
where the minute something new comes out,
you have thousands of people rushing to try it on GitHub
and install it on their machines,
and you have other people saying, like, stop this train,
you're all huge nerds and losers,
and we should focus on the problems that people really have.
And look, right now, I think this is debatable,
whether these tools are actually supercharging people
outside of maybe coding, which I think is its own special domain.
Maybe this is all productivity theater,
and people are going to, you know, regret installing these things,
and they're going to go back to, like, using, you know,
Google Calendar and Notes app and whatever else they're using, like, the rest of us.
But I worry that in the world where this stuff does actually continue to improve,
where it's not just productivity theater,
where the people who are using things like whatever the, you know,
17th version of Moltbot is that actually works,
like those people may be way more productive than
the people who are still doing things by hand.
In the same way that I think if you are a coder today who writes your code by hand and doesn't
use any of these AI-assisted coding tools, I think you're actually just much less competitive
and much less well-suited to succeed than the people who are using these
tools.
Maybe.
I think that if these tools wind up being as productive as you are suggesting, they will sell
themselves to the mainstream, right?
You don't have to convince anyone to get an email account or buy a smartphone.
They know that's going to be helpful in their lives.
I think the concern that you're seeing among people who are more skeptical, and part of it is
just, yes, sort of like reflexive skepticism and maybe a resistance to change.
But part of it is just it's actually not clear to me that this is going to help me in my life.
And maybe this is actually just a tool that is being built by people who do not have my best
interests at heart, who are mostly just trying to take my job away for their own purposes.
So to the extent that these tools can help empower people, help them make money, help them get and keep their jobs, help them find work that they like to do, I think you're going to see very enthusiastic adoption.
But if it just remains, this is kind of something that's getting better and better at doing the thing that I'm currently being paid to do.
You better believe we're going to continue to see a lot of resistance.
Yeah, I don't like, I'm not saying that people should be excited or enthusiastic or to adopt every new tool the minute it's out.
I think there are very good reasons to be worried about and skeptical of some of the, you know, the hype that's out there.
At the same time, I do want people to understand what the tools are capable of doing.
And right now we have a situation where, as you said, like, if you are at a big company, if you are at a sort of more traditional organization, you are probably not able to use this stuff, at least not at work on your work devices.
If people are experimenting with this stuff and decide this is not for me,
like that feels like a totally fine outcome to me.
But I think if people aren't even aware that these tools are out there,
that people who are using them are getting some benefits from it,
I just worry that they're going to get left behind.
And I was thinking about this in part because of something else that happened this weekend,
which is that Andrej Karpathy, the sort of legendary AI researcher,
kind of came down off Mount Olympus to give his sort of, you know, one of his regular
pronouncements about the state of these AI coding tools.
And he said that programs like Claude Code and Codex are easily the biggest change to my
basic coding workflow in two decades of programming, and it happened over the course of a few
weeks.
This is one of the most accomplished programmers in the world, someone who has built and
maintained, like, huge, complex systems, saying that he is sort of trying to adjust to this new
era, that his job, his old job, the job that he spent his whole career doing, has essentially
been taken over. And, like, if that is true for him, it is also true for other programmers.
And if that is also true for programmers, it is probably true for people in other areas and or will
be soon. Yeah. Well, the good news is everyone on the Hard Fork show is now fully aware of all this.
So if you listen to the show, you're caught up.
And if you don't, once again, you're out of luck.
Listen to podcasts.
That's the moral of the story.
When we come back, it's time to pass the hat.
We'll play a round of HatGPT.
I'll call my milliner.
Is that a hatmaker?
That's right.
Did you go to college or something?
Casey, it's time for HatGPT.
HatGPT, Kevin, of course, the game we play, where we put a bunch of the week's top
stories into a hat, draw them at random, and discuss them until the other person gets
bored and says, stop generating.
Should we use our hard fork hat?
Please produce the hard fork hat.
This is an official piece of hard fork merchandise.
Very nice.
Now, is this still available for people to buy?
I don't think so.
Good.
I want to gatekeep it.
You can't have it.
But there's a different hard fork hat that you can buy at the New York Times shop.
Oh, well, there you go.
Yeah.
All proceeds go to supporting journalism.
It's about time somebody supported journalism.
Okay.
Here we go.
You're up first.
All right.
Well, Kevin, this story comes to us from Business Insider:
some Amazon employees got a calendar invitation titled Project Dawn discussing upcoming job cuts.
Now, of course, on Wednesday, Amazon announced 16,000 layoffs.
The day before that, Amazon seemed to mistakenly send employees a calendar invite about those very layoffs.
It was for an event at 5 a.m. Pacific time on Wednesday, but was canceled shortly after it was sent.
What do you make of this?
And so the layoffs were called Project Dawn.
Yeah.
That's bad.
We shouldn't do that.
That's like I'm pretty sure I have played like a sci-fi video game where Project
Dawn was the name of an effort to wipe out all of humanity.
God, just say Project layoffs, Project Hatchet, Project Cost Cutting.
Project By-bye.
Project Efficiency.
Much better names.
Let's not call this something that sounds like a science fiction movie.
Now, here's what I hope.
I hope that reporters stay on the case and figure it out,
which AI tool is responsible for that calendar invitation being sent.
They're deploying Moltbot and it's sending people random notices about their layoffs.
Okay, stop generating.
This one comes to us from The Guardian.
Former FTX crypto executive Caroline Ellison has been released from federal custody.
This is about 14 months she served for her involvement in the multi-billion dollar FTX fraud scandal.
She was originally sentenced to two years.
And her release comes as the whole FTX saga is set to be
turned into a Netflix series this fall called The Altruists, starring Julia Garner and Anthony Boyle.
Here's what I will say about the icon Caroline Ellison, Kevin.
Never have I been more confident in my life that someone was about to start a substack.
So we want to wish Caroline well on her upcoming journey into content creation.
And I would be very excited to read her substack because she is actually like a good writer.
Yeah.
And I enjoyed the Tumblr posts of hers that I read.
during the sort of whole conflagration at FTX.
And I think we should just officially extend
Caroline Ellison the invite to come on Hard Fork.
Absolutely. You know, in the gay community,
we love a problematic queen, and we believe everyone deserves a second chance.
So Caroline, get in touch. We'd love to chat.
Stop generating.
Now, this one really has been the talk of the town, Kevin.
A TikTok data center outage triggered a trust crisis
for its new U.S. owners.
This comes to us from Wired.
So over the past week,
Bight Dance officially transferred control of its American business
to a new subsidiary that has a group of majority America investors
and things immediately went haywire, Kevin.
We've seen celebrity after celebrity,
along with a bunch of politicians raising claims of censorship,
saying that as they have tried to post in protest of ICE,
their posts are getting zero views or not appearing at all.
Gavin Newsom, the governor of California, has said he is launching an investigation because he tried to send the word Epstein in a direct message and it did not appear, which was something that other users reported as well.
So what do we think is going on with this new TikTok?
So I have tuned out most TikTok-related news because I think I made the logical conclusion that nothing was ever going to happen
to TikTok and just sort of said, like, wake me up when it happens.
Yeah.
But I'm awake now.
Wake up,
sheeple.
There are two things that have happened.
One, TikTok was sold.
And then the second thing is this sort of theory about the fact that these new owners
might be trying to sort of put their thumbs on the scale, suppress certain content
they don't like.
I don't know if this is happening or not.
I always like go back to the line, like never attribute to malice what could be attributed
to stupidity.
I think probably during the transfer, like some stuff has gotten shuffled and some cords got pulled.
But like, do you think there's anything real here?
I would be very surprised if there is.
And it pains me to say this because this controversy has roped in some of my favorite celebrities.
You know, Billie Eilish and Meg Stalter are out there posting about this.
Meg Stalter said she's going to delete her TikTok account.
I would just urge a little bit of calm here.
If you've ever moved apartments, you know that some of the dishware gets shattered in the move.
I think something like that is happening here.
We know there was an outage at one of the company's data centers.
And I've seen stories in the past where for some limited period of time, YouTube videos were showing zero views.
It didn't actually mean no one was watching them.
It meant that the view counter was broken.
So I think, you know, I'm not saying trust TikTok US forever.
I am saying in this particular case, which did admittedly happen at a very bad time for them,
it probably just was some sort of technical bug.
Yeah.
They installed Moltbot on their servers. It's just been ordering stuff.
Stop generating.
Okay.
Next up, we have Anthropic CEO's grave warning: AI will "test us as a species."
This comes to us from Axios,
and it is about a new 38-page, 19,000-word essay by Dario Amodei, the CEO of Anthropic, titled The Adolescence of Technology.
This is a follow-up to his hit essay,
Machines of Loving Grace,
that we talked about with him on the show.
This one is kind of the flip side of that.
That one was all about how AI might create this amazing, you know,
acceleration of scientific progress, all the optimistic things.
And this is sort of Dario saying, well, you know,
let's not get too excited here because there are also all these scary things
that could happen as a result of AI too.
Yeah, I think he should have titled it Machines of Violent Domination, since that seemed to be what most of the essay was about.
And, of course, we always love to read essays like this
from the CEOs of the companies
that are actually building
the torment nexus.
It's like we're all trying
to find the guy
who built the country
of geniuses in a data center.
I think his name was Dario something.
Anyways, we'll look that up later.
What I don't understand
is like how these people
have time to write.
Like, writing a 19,000-word essay would take me several weeks.
I don't run a large
multi-billion dollar AI company.
And so whatever kind of, you know, productivity-enhancing, focus-enhancing drugs Dario Amodei is on, I would like to get some of those.
On a bit more serious note, I feel like the last essay got a lot more attention.
Yeah.
No one seems to want to read about any of the potential downsides of AI.
Well, I think there are a couple things here.
I think one is, I think it was somewhat out of character for Dario to put out an optimistic essay about AI.
You know, he's known within the AI community as something of a worrier about the risks.
He's been writing papers
about risks from AI
for many years.
Anthropic sort of built itself up as, like, the AI safety company.
And so I think
it just was a little more notable
when he was saying,
well, actually,
you know,
people think I'm this doomer,
but I'm actually quite excited
about this technology.
I think this latest essay
is just more in line
with kind of what
he's been saying all along,
which is like these systems
are getting better quite quickly.
They present all these new dangers
and we don't know where it's all heading.
Here's an essay that would have gotten more attention: The Five Worst Decisions Made by My Rivals. Something to think about.
Stop generating. All right. You hate to see this one, Kevin. An app for quitting porn leaked users' masturbation habits. This came from 404 Media. The name of that app, Salesforce, is being withheld to protect the privacy of the masturbators, Kevin.
But this was an app that purported to help people stop consuming pornography,
and it exposed highly sensitive data.
Some of the data included users' ages, how often they masturbate, and how viewing pornography makes them feel.
You know, I have to say, this is a sticky situation,
and I think it's gotten entirely out of hand.
What? You don't agree?
I wonder why you wanted this one in the segment, and now I know.
It was to make that joke.
I didn't even put it in here.
But here's what I will say.
I'm serious. I'm calling for a full investigation, because I think it's important that, at the end of this, we're able to finger the culprit.
What?
Why are you laughing?
This is serious.
These people had their data leaked.
Stop generating.
All right.
Okay, this one is a doozy, Casey. This one comes to us from
Dig Watch, and it is titled,
Alaska student arrested after eating
AI-generated art in protest.
On January 13th,
Graham Granger, a film and
performing arts major at the University of Alaska
Fairbanks, was arrested and charged with
criminal mischief after ripping AI-assisted
artwork from a campus gallery wall
and eating around 57
of the images as part of what he
describes as a protest and performance piece against the use of AI in art.
Now I've heard of consuming content, but this is taking it too far.
I love this so much. Move over Banksy.
Listen, you know, maybe I will reflect further on this and not feel this way, but here's how I feel right now.
I think it's a great protest.
Honestly.
No, because this was another student's artwork.
Now, let me be clear.
I don't actually know anything about this art or how it was made.
But what I can tell you is often people save a backup to their desktop.
So maybe that could come into play here.
Maybe it could.
Maybe it couldn't.
Maybe this was great art.
We'll never know because now it's sitting in Graham's intestines.
And it was delicious.
They make them different up there in Alaska, Kevin.
Stop generating.
All right.
Now here's a story that I think we all saw coming and that now it's here and we're like,
oh, finally.
Steak 'n Shake has added $5 million in Bitcoin exposure, deepening its commitment to Bitcoin. You might be surprised to learn that this one comes from Bitcoin Magazine.
Steak 'n Shake, which is one of the greatest triumphs of the Midwest, I think we could say, is a place where you can get a delicious hamburger and a milkshake.
It's continuing what it calls its burger-to-Bitcoin transformation.
In a post on X, the restaurant chain wrote that it had increased its Bitcoin exposure,
by $5 million in notional value, or, as it's sometimes called, value, and that "all Bitcoin sales go into our strategic Bitcoin reserve."
So, Kevin, anything you could tell us
about what you think Steak and Shake should do
with its Bitcoin holdings?
I mean, I think they should expand to the West Coast, right?
Obviously, in-and-out has sort of a monopoly
on that kind of experience out here.
But I grew up eating Steak and Shake
in my Midwestern town, and it slaps.
It's really good.
They got a good steak burger.
Unfortunately, you know, many, many years ago now, the chain was acquired by a private equity guy named Sardar Biglari, who has made some very strange decisions, including putting his own picture up at every Steak 'n Shake.
Oh, my God.
Like a Chairman Mao now?
Yes.
Opening, you know, versions of Steak 'n Shake in, like, European cities. You know, there was one in Cannes the last time I was there. I was like, what's a Steak 'n Shake doing in Cannes?
This guy for just some context also, you know, was the owner of Maxim Magazine.
This is part of his like sort of lifestyle brand.
And now he's become a big crypto guy and wants to, you know, increase Steak 'n Shake's profits by investing some of its money in Bitcoin.
Well, let me say that I hope that someday we get a big sorry from Mr. Biglari.
Stop generating.
Okay, this one comes to us from The Information. Apple is developing an AI-powered wearable pin the size of an AirTag that is equipped with multiple cameras, a speaker, microphones, and wireless charging, according to people with direct knowledge of the project.
The device, this report claims, could be released as early as 2027.
I think this is really interesting, because this shows Apple pursuing what I like to call their Vision Pro approach to hardware, which is making things that cost $3,500 and you're not sure what they do.
But they are available.
This is total vindication for the Humane pin.
Really is.
They died too young.
You know, are they going to be upset that they let the Humane AI Pin go to work at HP
and didn't snap them up when they had the chance?
No, I think they'll be okay.
Okay.
But I think that they are trying to compete with OpenAI,
who is also reportedly developing some sort of pin-like hardware thing.
And I, for one, cannot wait to buy and then immediately lose this.
Yeah, well, I'm just excited that AirTags are going to have a second use beyond illegally stalking your ex.
You got to stop doing that.
Yeah.
Stop generating.
Okay.
You want to talk about the social event of the season, Kevin.
Let's go inside the White House's screening for Amazon's Melania doc.
This report comes to us from the Hollywood reporter.
On Saturday, there was a black-tie event, which was not promoted or advertised, but it took place in the East Room of the White House and attracted about 70 VIP guests, including Queen Rania of Jordan, Zoom CEO Eric Yuan, Apple CEO Tim Cook, the CEO of the New York Stock Exchange, AMD CEO Lisa Su, and Mike Tyson. Kevin,
is that giving nightmare blunt rotation or what?
Yes, I'm sure there are many perks associated with sucking up to a sitting president and groveling at his feet.
One of the downsides is that you have to go to the White House to dress up in black tie and sit through a screening of what I can only imagine is a terrible documentary.
And you probably can't even look at your phone while you're doing it.
You really can't. I have still not seen any reviews. I guess Tim Cook is still working on his Letterboxd post.
So keep an eye out for that. But here's what I'll say. When I think of all of those players in one place, it has all the makings of a great
reality TV show, but
unfortunately we already have one called The Traitors.
So we'll have to come up with something new.
Okay, stop generating.
SpaceX is weighing a June IPO, Kevin,
that is being timed to planetary alignment
and Elon Musk's birthday.
This comes to us from the Financial Times.
The Rocket Maker is targeting mid-June for its IPO,
when Jupiter and Venus will appear very close together, known as a conjunction, for the first time in more than three years.
Now, I heard that boys go to Jupiter to get more stupider. Is that a factor in this as well?
I'm honestly surprised he didn't plan some sort of Uranus joke here.
Honestly, don't give him any ideas. Edit that out of the show. We don't want him to hear that.
So what stage of capitalism is it when the world's richest man starts factoring planetary alignment and his own birthday into his IPO plans?
It's just so dumb.
I'm sure the SpaceX IPO will be huge, whatever. It's a company that makes lots of money.
I'm sure investors will be excited about it.
Can we just, like, stop doing the, like, 420, 69, Jupiter's in retrograde, epic bacon, LOL thing?
Like, it just feels like we should have left that one behind in, like, 2013.
Yeah, your IPO is not a Tumblr post.
Knock it off.
Okay, stop generating.
Last one.
Last one.
Last but not least.
LinkedIn will let you show off your vibe coding expertise.
This one comes to us from Engadget.
LinkedIn is partnering with Replit, Lovable, Descript, and Relay.app on the feature and is working on integrations with fellow Microsoft-owned GitHub as well as Zapier.
LinkedIn is allowing the companies behind the AI tools to assess an individual's relative skill
and assign a level of proficiency that goes directly to their profile.
We're getting vibe-coding badges on LinkedIn, Casey. What do you think?
You know, I have to give it up to LinkedIn because every year, they invent levels of BS that
have never been seen before in corporate life.
The idea that there's a badge on your profile that says you can type in a box,
like there might as well be a little trophy you can get for having computer access on this website.
What are we even doing here?
Yeah, LinkedIn, I think, is sneakily, like, the first all-AI social network.
Like, I recently went on there.
God knows why.
I think I had blocked access to my other social networking apps.
And, like, every post on my feed was, like, clearly generated by Claude or ChatGPT. It was just, like, "This isn't just a transformation. It's a revolution." Yeah, the vibes over there are extremely strange.
This is a little bit unrelated, but am I right that anyone who asks to connect with you on LinkedIn,
you accept the connection? Correct.
Do you not? No, because like when I grew up, you would connect with people on LinkedIn because you
knew who they were. Oh, no. And it seems like that norm has just collapsed. Well, I don't really use
it, but I like having a big network for unspecified reasons.
Is it weird to you when you're just seeing a feed of like utterly random posts for people you've never heard of?
No, it's fine.
Do you ever congratulate people for getting a new job and you have no idea who they are?
I don't actually type that much into LinkedIn, but I will say that one of my tests that I give to new AI models is can they autonomously respond to my LinkedIn messages for me?
That's very good.
Listen, just to wrap this up, here's what I'm going to say.
If you find yourself tempted to put a vibe coding badge on your LinkedIn profile,
Here's what you can do instead.
Put on a rainbow wig and some white makeup
and like a big red nose
and look in the mirror and there's your vibe
coding badge. Okay, congratulations.
Congratulations.
All right, Casey, that's HatGPT.
And that's HatGPT.
Close that.
Closing up the old hat.
Hard Fork is produced by
Rachel Cohn and Whitney Jones.
We're edited by Veer and Pavich.
Today's episode was fact-checked by Will Peischel
and was engineered by Chris Wood.
Our executive producer is Jen Poyant,
original music by Elisheba Ittoop, Rowan Niemisto, and Dan Powell.
Video production by Soya Roque,
Rebecca Bland Dunn, and Chris Schott.
You can watch this whole episode on YouTube
at YouTube.com slash hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad.
You can email us at hardfork@nytimes.com with your Moltbot setup.
