Hard Fork - Age-Gating the Internet + Cloudflare Takes On A.I. Scrapers + HatGPT
Episode Date: August 1, 2025
This week, we look at the fallout from a sweeping internet age-verification law that went into effect in Britain. We explain why age restrictions are suddenly popping up all over the internet — and how some might create more problems than they solve. Then Matthew Prince, chief executive of Cloudflare, returns to the show to discuss his company's new plan to help publishers fight back against A.I. scrapers, and potentially to create a new online marketplace for quality content in the process. Finally, we round up some headlines from around the tech world in the latest round of HatGPT.
Guests: Matthew Prince, chief executive of Cloudflare
Additional Reading:
Supreme Court Upholds Texas Law Limiting Access to Pornography
The U.K.'s age gates are coming to America
Cloudflare Introduces Default Blocking of A.I. Data Scrapers
Also, you can still get a special-edition "Hard Fork" hat! For a limited time, you'll receive one when you purchase an annual New York Times Audio subscription for the first time (U.S. only). Go to nytimes.com/hardforkhat.
We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Transcript
People on social media, Gen Z, people, have started referring to robots as clankers.
Have you heard this?
I believe I saw something about this.
Yes, so people are saying like, oh, like I hate when I call customer support and a clanker picks up.
And, you know, this is sort of their new derogatory slang.
Well, and it pairs nicely with another new piece of slang that I wonder if you've heard, because people are talking about people who use tools like ChatGPT, and they're calling them sloppers.
Really?
Have you heard this?
No.
Yeah.
So it's like if you're, I don't know, you're out on the internet and you're publishing something
and it seems like it's obviously just AI, somebody might call you a slopper.
Clankers and sloppers.
Clankers and sloppers.
I just think this is, it makes me a little uncomfortable, even though clankers is not, you know, actually,
I think it's sort of a tongue-in-cheek thing.
I just don't think we should be calling the robots names.
I don't believe in slurring against anyone human or not.
You believe in an appeasement strategy with the robots.
Just give them what they want.
They are keeping score.
I believe that.
I'm Kevin Roose.
I'm a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, why age gates are suddenly popping up all over the internet
and how some could create more problems than they solve.
Then, Cloudflare CEO Matthew Prince returns to the show
to discuss his company's new plan to help websites fight back against AI scrapers.
And finally, we're passing the hat for some HatGPT.
Well, Casey, we are here yet again to remind our great listeners that they can have their very own hard fork hat if they so choose.
It's time to get the hat. If you don't have the hat, the moment is now.
Yes. For a limited time, you can get a hard fork hat with the
purchase of a new annual New York Times audio subscription. This is a special edition hat. This
color is not going to be available anywhere ever again. And it'll be going on eBay for
hundreds, possibly thousands of dollars. Millions of dollars. Millions of dollars. And you'll be
like, why didn't I get a hard fork hat back when it was available to me as part of a New York
Times audio subscription? If you missed out on the Bitcoin boom, get this hat. Yeah, they're calling it
hat coin. And it's available now at NYTimes.com slash hard fork hat.
Kevin, how old are you? It's none of your business. How old are you?
That's none of your business. But guess what? If we lived in the United Kingdom, it would be
the government's business, Kevin, because as many people found out over the last week to use a lot
of different websites, you now have to prove your age. Yes, this is the age gating issue, which
I've heard a lot of people talking about.
I know you wrote about it in your newsletter this week.
I'm excited to talk to you about it.
And before we get into what is happening and some of the consequences and the reactions to this law,
I wonder if you could just kind of sell me on the stakes.
Why talk about how the UK's Online Safety Act is mandating age verification for websites?
Why does it matter to me, an American?
Sure.
So I would say a couple things.
Number one, this truly is one of the most far-reaching attempts we have seen by a Western democracy to regulate speech online.
We've talked on this show in the past about various ways that people want to protect children online.
And requiring folks to verify their ages is one of the ways that has been discussed.
But it's extremely rare to see it roll out across an entire country the way it has in the UK.
So that's thing one.
Thing two is this stuff is coming to the United States.
In June, the Supreme Court upheld a law in Texas that requires residents of Texas to do
something very similar if they want to access adult content online. And so it's really not
just a UK thing, Kevin. We are starting to see a gradual erosion of people's freedom of
expression online. Yeah, I think that's well taken. And I might just even take a further step back
and say, like, I think the internet for the last, call it 40 years, has
been a kind of informational free-for-all, right? You are going to experience the same internet,
whether you were 13 years old or 25 years old or 50 years old. But what we are talking about now
and what is happening in the UK is you have to establish your age in order to sort of access a number
of different kinds of services, including social media. And one outcome here would just be that
the internet actually kind of fragments in this way where if you are 12 or 13 years old,
you are just going to have a very different experience of the internet than someone who is 25 or
50. Yeah. And I think in addition to all of that, Kevin, there is just the fact that the way
that websites are collecting this information is putting people at risk in ways that we can talk
about. Okay. So let's just start with what is happening in the UK? What's the backdrop here?
Yeah. So in 2023, the United Kingdom passes something called the Online Safety Act, and it includes
provisions that require online services to try to spare minors from seeing what they would call
harmful content online. So porn is a big part of that, but it also includes stuff about, like,
if you have a pro-suicide website or a pro-eating disorder website, you are now required in the United
Kingdom to first do a sort of like risk assessment of your website of, hey, could children
access this and would they be exposed to this certain list of harms? And if so, you then have to
implement what they call like high quality age assurance, which means that no, you can't get
away with just putting a box on your website that says, yes, I promise I'm 18. And so all of this
goes into effect last Friday, the 25th, and things start to go a little bit haywire. What happens?
Well, first of all, I think a lot of people in the UK just didn't realize that this was about to
happen. And so, you know, they're sitting down for their evening visit to Pornhub, and all of a
sudden, they find that they are being asked to upload their driver's license to prove that
they're actually 18. There are a number of different ways that people are allowed to prove their
identity. You could also, for example, show your credit card because you can't have a credit card
unless you're at least 18. Or you can use your phone or a laptop's camera to take a picture of you
and they'll do some sort of, you know, AI in the background to figure out if they think that you're
actually 18. But for a lot of adults who didn't see this coming, this produces some real
anxiety because all of a sudden, an experience that had previously been basically completely
private, right, just visiting a website, is now being linked to your personally identifying
information and who knows what's happening to it once you actually submit it. Right. Now,
is this just affecting porn sites or are there other kinds of websites where people are being asked
to prove that they're of age? It is affecting many more kinds of sites.
So X is being affected.
Many different subreddits are being affected.
So like a stop smoking subreddit, a cider subreddit, various things that do not immediately
seem like adult content and could actually be quite beneficial to minors, all of a sudden
you need to prove who you are if you want to access them.
Also, Wikipedia has said that they may have to limit access to the site in the UK because
of privacy concerns that are created by this law, which would require them to
collect a lot of information that they don't actually want to collect.
Now, is there porn on Wikipedia?
I haven't been able to find any. Have you?
Okay.
Well, I will say there are some Greek vases with some incredibly curvaceous men and women.
And, you know, depending on what kind of night you're having.
Right.
So how are people in the UK reacting to this?
This is not a keep-calm-and-carry-on situation, Kevin.
Okay.
More than 400,000 people have signed a petition saying that they want these changes
to be reversed, and I'll be curious to see where that goes. In the meantime, though,
people are just finding new ways to get around this. And I think to the extent that this law
actually sticks around, I think this will be the reason why, is that people have just found
all sorts of relatively easy workarounds. One thing you can do is use what they call a virtual
private network, or VPN. All that does is just tell the websites you visit,
hey, I'm not in the UK. I'm in the United States, where I can still watch most of what I want to
watch. And so we're seeing a lot of that. But we're also seeing people get much more creative.
Now, are you familiar with the video game Death Stranding? No, I'm not. So unfortunately,
this is so funny if you know what Death Stranding is. It's this kind of very
strange and beautiful video game made by this Japanese auteur. And people are using photo mode
in the game to take pictures of the protagonist in order to fool the UK's age verification
technology. And the reason this works is because in the photo mode of this
game, you can tell the character, smile or frown. And it turns out that a lot of the age
gates in the UK are doing the same sort of thing. If you're using the camera on your laptop,
you're like, okay, now smile, okay, now frown, people are now just doing this with the game
Death Stranding. So if Death Stranding winds up being the best-selling video game in the UK this year,
we'll know why. That's amazing. So, okay, has there been any response from the UK government
or the safety authorities about people's reactions to this? Are they saying, we'll keep improving
the systems or something like that? There was a response from Ofcom, the UK's media regulator.
They have said that this act is not, quote, a silver bullet, but, quote, until now, kids could easily
stumble across porn and other online content that's harmful to them without even looking for it.
Age checks will help prevent that. All right. So that is what is happening in the UK.
Where are we in the U.S. with these age-gating laws? I've read some headlines about people in some
states who are having to go through age verification to get to an adult website. But where are
these laws and what is the regulatory picture here? So age verification laws have been passed in 24
states and last month the Supreme Court upheld a law in Texas that requires websites where
more than one third of the content is sexual material to use one of these age verification
methods. And Justice Clarence Thomas spoke for the majority when he said,
unlike a store clerk, a website operator cannot look at its visitors and estimate their ages. Without a requirement to submit proof of age, even clearly underage minors would be able to access sexual content undetected.
So I suspect we're going to start to see these laws in more and more places.
So, all right, let me try to steelman the case for age verification here, because I think I am undecided about what I think about it, but I think you have decided that these laws are a bad idea.
So I want to try to make the opposite case.
we age gate things all the time to prevent minors from getting access to them or being exposed to them.
So if I walk into a bar and the bouncer asks to see my ID, that is allowed.
No one contests whether that is a violation of the Constitution.
If I walk into, this probably doesn't exist anymore.
But when I was a kid, there were these things called nudie magazines.
And if you wanted to go into a newsstand and get them, they were sort of like, behind
the counter. They would have these special barriers on them, and you would have to prove that you were
of age to be allowed to buy that. So why is what's going on with age verification on adult websites
on the internet any different? So I share your concern here. I think that we should come up with
ways to prevent minors from accessing this kind of adult material. I just object to the way in which
they're doing it, which requires that people share a lot of personal information.
We're not opposed to age gating as a concept.
You're just opposed to the way that they're doing it in the U.K.
Yes.
And this year, Apple suggested a way to do age verification that I think is a lot more elegant, Kevin.
It's not available quite yet.
They said this week to Bloomberg that it's coming soon.
But basically, they're going to offer what they call an age assurance API.
And if you are a parent and you're setting up your child's device, you can just tell the device,
hey, my kid is X years old, right?
Or here's my kid's birthday.
Apple can then essentially anonymize that and pass it through to a developer.
And so if you are a Facebook or an Instagram or a TikTok, you just get a little token that says this person is 13.
And so, you know, you may want to show them different kinds of content or you might want to restrict certain features.
And essentially put the onus on the parents and the device makers to do this.
That way, if you're just a normal adult using the internet, you don't have to worry about, you know, uploading your driver's license to visit a website.
So I think that that's a very elegant solution.
Yeah, I prefer the on-device age verification rather than making every website operator go and do their own version of this
and store all the driver's license photos and whatever else.
So I did not know that Apple was building that.
So you're saying that's going to be released soon.
Yes, they're going to release that soon.
Now, interestingly, Meta is trying to ensure that this does not become the way that this is handled,
because that still puts an onus on Meta to
do a lot of the age checking. They want Apple and Google, which also has a big app store,
to have the legal liability in cases where a minor does access material that they're not
supposed to. And so Meta has been leading a charge lobbying for a lot of bills around the country
that would put the onus on Apple and Google to do all the verification instead of them having
to play a role in it. And they've been having some success. Now, this confuses me because
you know, if I know one thing about meta and other social media sites, is that they are very good
at collecting information on users and using machine learning to sort of detect who is more
interested in what and who is part of what consumer segment. I assume that these platforms
already know or have very good guesses about how old all of their users are. So what is the
issue with just having them do the detection? I also saw that YouTube this week
is looking at things like your browsing history
and your consumption patterns
and the kinds of videos you're searching for
to make their best estimate
of whether you are underage or not.
So why shouldn't the platforms have a responsibility
or a role here, too?
I think the platforms do and should have a responsibility.
What we have seen is a lot of reporting
over the past couple years
that at Meta in particular,
they were like writing reports
about how many, like, under-13 users they had
on Instagram. Jeff Horwitz at the Wall Street Journal
did a lot of this reporting, and it's, you know, both very disturbing, but, like, darkly comic to read.
And, like, yes, they absolutely knew that they had all of these younger users, and so they've spent
the last year trying to release a lot of features that essentially make it more difficult for, you know,
under 13s to use the platform.
So all the platforms are kind of belatedly coming around on this.
You know, you mentioned the YouTube thing.
Here's something really interesting about YouTube.
This week, the Australian government said that they were going to ban YouTube for kids under 16 because
they consider it social media, which was a major reversal
from what they were saying before.
I sort of think this is the sort of thing
that if it was actually carried out
could cause the government of Australia to be toppled.
A bunch of like angry Minecraft preteens
are going to storm whatever the White House of Australia is.
Yeah, but, and we'll never know.
But it just goes to show you.
Actually, if you know what the seat of government is in Australia,
please email hard fork at NYTimes.com.
We were unable to find this information online due to age checks.
Anyways, look, I've been a little bit, you know,
glib about this, maybe in the spirit of trying to record
an entertaining segment about tech policy. But the truth is that this stuff is very complicated.
And I think everyone has a role to play, which is something that is very satisfying to say,
but does not actually solve the problem. Because ultimately, if you want to solve the problem,
somebody has to be responsible. Somebody has to try to implement a solution. Inevitably,
when you implement a solution, people are going to be caught up in it. They're going to be,
you know, falsely tagged as underage when they're overage, or vice versa. So it is really messy.
What I'm trying to say is
there are more privacy
preserving ways of understanding
a person's age online
and I would like to see us focus on those
rather than this sort of blunt
hammer approach that they're taking in the UK
which in practice is mostly just going
to annoy a lot of adults and
potentially put their public
information out there in a way that could be
breached which is my way, Kevin,
of introducing the conversation about tea.
Oh yes. Let's spill the tea about Tea. Let's spill the tea
about Tea. Kevin, what was Tea? And by the way,
are there any red flags about you on Tea?
I haven't looked yet.
I have not been able to look, nor will I be looking because of what happened over the past week.
So Tea is an app that had a viral moment over the past week, in part because it briefly reached
number one on the iOS App Store, ahead of ChatGPT and Instagram and all these other apps.
And to my understanding, although I've never been on it, Tea is an app that allows women to
sort of anonymously divulge experiences they've had with men.
Yeah.
So basically dish the dirt about the guy you dated who was sort of a jerk to you
or behaved in a way that you didn't like.
And what got people's attention was that you could only register for this app if you
were a woman.
So they would do some kind of verification process when you signed up.
I think you were asked to scan your driver's license and also submit,
like, a selfie, so that they could sort of use AI to try to detect whether you are a woman or not
and keep out all the men. And this drove a lot of people on the internet insane. And so my understanding
is they essentially hacked Tea. They found that Tea had not secured some of these uploads that users
had made. And they leaked a bunch of people's verification information and photos that they had
submitted to Tea, in kind of a revenge of the men.
Yeah.
And this gets to my exact concern with some of these age-gating methods is it is just
left up to the service provider to decide how you're going to verify people's ages.
And while most countries do have some laws that regulate data, in practice, we just see breaches
all of the time.
It feels like every week, you know, you see headlines about one app or another is being
breached.
And in this case, you have material that is just ripe for someone to do
stuff that is really abusive. And indeed, a bunch of 4chan users got a hold of all of these
Tea selfies. They started creating, like, you know, hot-or-not-style websites, just doing all kinds
of gross stuff with them. And that, again, I think is just a really predictable outcome of
building these kinds of age-gating technologies. Yeah. I think that's a really good point and an
example of why we should not be throwing this to the website operators and platforms, because inevitably
some of them are going to use really secure,
really well-designed age verification services,
but some of them are just going to do this
in a very sloppy way
and inevitably leak people's personal information.
Yeah, and now like every online service
just has to have this database laying around.
You know, it just like increases the risk
across the entire internet for all of us.
Yeah, so I think it makes a lot of sense
to do it in a more centralized way
through Apple and Google and their app stores.
But I have also read that
Apple has been lobbying against some of these state bills that would force them to do exactly that.
So you just told me that Apple is building an age verification feature, but I've also read that
Tim Cook at one point personally lobbied the governor of Texas not to do this kind of age
verification thing. So what is going on here? So I think it's mostly just that they want to
avoid the legal liability here. You know, I read one story that said, like, look, like Apple is a mall,
Facebook is a liquor store in the mall.
Like, you need to hold the liquor store
accountable for who they are selling liquor to.
This is like the Apple metaphor.
I think Apple does clearly want to play a role
in the solution here.
That's why they've sort of crafted this whole API.
But they kind of want it to be on the honor system, right?
Where it's like, hey, we're going to do this.
We can all agree this is a good thing.
Now leave us alone.
And what the lawmakers are saying is like,
yeah, that's like a good thing.
But also, we want to hold you legally liable
if we find out that like a bunch of, you know,
eight-year-olds are, I don't know, like, you know, watching eating disorder content on TikTok.
Yeah, I think that makes a lot of sense.
And I'll also say, like, this is an issue, this age-gating thing, where I find that my values
are in tension, because I agree with you that the First Amendment is important, that adults
should be able to, like, access all manner of information on the Internet, even if it is
lewd or obscene or inappropriate for children. I don't want a kind of UK style system where
like you have to upload your driver's license to some database that may or may not be secure
just to go like look at a subreddit. At the same time, I totally understand that parents have
legitimate concerns here. I talk to so many parents who are just at their wits end about how to
safely introduce things like social media or smartphones or the internet to their children
without just sort of opening the floodgates. And parental controls are the thing that everyone
talks about, but any parent can tell you these are not perfect. You practically have to become
like a part-time IT person just to be able to like understand and control what your underage
kids are doing on the internet. So I totally understand where the anger comes from at these
companies that have made it very clear that they don't want to be doing any of this.
They don't want this to be their responsibility.
But I think that we are going to move to an internet with age verification in the United
States and around the world.
I just think that is going to happen.
There's enough public pressure at this point that it is sort of inevitable in my mind.
And so I would like to see it done in a way that preserves the privacy of those users
rather than having sort of T-style leaks every six months.
I would like that too.
You know, here's sort of the last thing I would say about this, Kevin.
you know, this may not be the free expression that you, the listener, care about the most.
You may be happy to upload your ID if you want to look at adult content on the Internet or even Wikipedia.
But what I am going to say is I view this very much as part and parcel with just a widespread clawback of speech rights in the United States over the past six months.
Look at the attacks on free speech that we've seen against journalists, against academia, against broadcast media.
Right. And now you are coming along and saying, by the way, now you need to show government
ID to look at a website. Like, these things are all of a piece, and I think it's important that we
think about them as being of a piece, because it may be that, you know, six months or a year from
now, we look back and we think, well, we have really lost a lot of ground with free expression.
And the time to talk about that online is while you still can. I see. Well, thank you for
explaining this to me. It was not immediately clear to me why I needed to care about a
tech regulation in the United
Kingdom, but I think I get it
now. Yeah. I feel like just doing this
segment has aged me about 10 years.
Yeah. Also,
I'm looking at a list of British expressions
that mean really angry.
And I just have to read some of these too.
Like a bear with a sore head.
Okay.
Throwing a wobbly.
Throwing a wobbly.
Yeah. Anyway, I'm throwing
a wobbly over this regulation
and I think a lot of other Brits
are in a right tizzy.
When we come back,
Matthew Prince from Cloudflare
will explain how he's trying to keep
the internet safe from AI crawlers.
Not to mention the clankers and the sloppers.
Well, Casey, we've got a very exciting guest with us today.
Matthew Prince, the CEO of Cloudflare, is here to talk about something new that they've been working on.
Yeah, and I am so excited about this, Kevin, because it actually is kind of in the dead center of my own interest as somebody who has a website on the Internet.
And, Casey, because we are talking about AI in this segment, we should talk about our disclosures.
That's right, Kevin.
Well, my boyfriend works for the notorious AI
scraping company, Anthropic.
And my employer, the New York Times, is suing OpenAI and Microsoft over alleged copyright violations
related to the training of large language models.
So Cloudflare, for those who may remember, our last podcast with Matthew, is sort of the plumbers
and bouncers of the Internet, right?
They make a lot of these security services like DDoS protection that many, many websites rely on
to keep their services safe.
They also do a host of other security and data-related things.
And Matthew is a sort of diehard supporter of the Internet.
And so a couple years ago when all these AI bots started scraping the Internet to feed data into their models,
he and Cloudflare got really nervous about this.
And so they have been doing a lot of interesting things to try to preserve what they see as the heart of the Internet.
And recently they made a lot of waves by announcing that Cloudflare would start to introduce default blocking of these AI data scrapers,
basically preventing AI companies from automatically being able to exploit and scrape the data
from the websites that they're visiting. This is a step that he said would help to protect
content creators like you online. Yeah, and so I'm excited to talk to him about that.
And also, Kevin, just about some of the research that Cloudflare has been doing into the
state of the web as a whole. You know, you and I have both been worried about what the rise of AI
means for the Internet in general. Cloudflare has taken a close look at that and has some
interesting things to say. Yeah, it's one of the sort of more underappreciated companies on the
internet. Their decisions about even seemingly small things can have enormous ripple effects
throughout the industry. So I thought it was a really good time to bring on Matthew and ask
him some questions about what they're doing and why. Well, let's bring him in.
Matthew Prince, welcome back to Hard Fork. Thanks for having me. So we're going
to talk about AI and scraping and all the steps that you're taking to sort of protect the
internet. But before we describe the solution that you're proposing, which you announced on something
called Content Independence Day, I want to ask you to describe the problem. You had some pretty
shocking numbers in your post on this about how much internet traffic patterns have already
changed as a result of AI. So talk a little bit about those and why you felt the need to do
something about it. Sure. So at Cloudflare, we sit in front of north of 20% of the web. So we see a lot
of what's happening online. And we understand, I think, a lot of how the web works and the business
model that exists behind it. And 25 years ago, basically Google struck a deal, an implicit deal with
publishers, which was, you let us copy all of the content that you're creating. And in exchange,
we'll send you traffic.
And the whole web, the economy of the entire web,
was really built on that search-based interface.
However, over the last 10 years,
Google has changed what that interface looks like.
It started sort of subtly,
starting about eight years ago,
when they introduced the answer box.
And the answer boxes,
if you type in something like,
when did Cloudflare launch,
instead of you having to go to 10 Blue Links,
it just says September 27th, 2010.
And that was a pretty radical change because back in the day, Larry and Sergey used to brag about how their job was to get people off Google.com as quickly as possible, right? They even measured the time up in the corner to show you just how fast you basically were, you know, leaving their site. And now all of a sudden, Google was keeping you on the site. More recently, they've introduced something called AI Overviews. So if you do a search, most of the time now an AI Overview shows up, which is an AI summary of what's going on.
And that has made a big difference in how much traffic goes to content creators from search.
How much of a difference? Like, put some numbers on that.
Yeah. So if you take 10 years ago as the sort of the standard and you compare it with today,
it's almost 10 times harder to get traffic today for a piece of content than it was 10 years ago.
And it's gotten significantly worse just since the introduction of AI Overviews.
That's just with Google, and that's the good news.
If you look at the AI companies, OpenAI is 750 times harder, Anthropic is 30,000 times harder.
What's basically happening is as the interface of the web is shifting from search to AI, people are consuming derivatives, not originals.
Basically, they're not following the footnotes.
And we've actually seen it get even worse as people are actually trusting the AI systems more because the AI systems are getting better.
And what scares us about that is that there are really three reasons people create content.
Maybe not you, Kevin, but one is ego.
Vanity is mine.
Right?
But that's a big thing.
Like, there's a whole bunch of content that's just created, you know, for just the ego and
the fame of doing it.
But the business model is you either sell subscriptions or you sell ads or both of those
things.
And so the value has to be to either make people famous or make
them rich through ads or subscriptions. That is what has built the web over the last 25 years.
The problem is that publishers are struggling already at 10 times harder. I worry that as more
and more of the interface of the web looks like OpenAI or Anthropic, they're dead at 750 times,
and they're buried in the ground and forgotten about at 30,000 times. And so if that's the case,
if there's not an incentive for people to create content, I think people will stop creating
content and that really is an existential threat to the web.
Okay, that's a pretty good and concise description of the problem. So what is the solution here? What are you all doing?
Well, so I don't pretend to know exactly what the solution is, but I know kind of some of the aspects of how we have to get there. So the answer has to be that AI companies have to pay for content. The deal is different than it used to be with search. Search, they copied your content and in exchange they sent you traffic that you can monetize. If now they're copying your content, they're not sending you anything.
then why would you give them your content in the first place?
So AI companies have to pay for content.
And I think what's encouraging is we're actually seeing some AI companies that are doing that.
Amazon just announced a deal with The New York Times, and OpenAI has done a number of content deals that are out there.
But the problem with a lot of those deals that we realized was, you know, Sam at OpenAI can't be a sucker.
He can't pay for content and then have all of his competitors get it for free.
Or another way of thinking about it is, you can't have a market unless you have some level of scarcity.
And so what we thought the first step was, and what we announced on July 1st in conjunction with the who's who of the world's
publishers, was we're going to, by default, block the ability for AI crawlers to get content unless they
are compensating the content creators for getting that content.
And we think that that's super important.
Now, exactly how the compensation works.
Again, I think there's a lot of different models.
I analogize it to, and at risk of hubris, when Apple announced, you know, the introduction of iTunes, 99 cents a song, right?
That wasn't what the final business model was.
The final business model that we've kind of come to is more like Spotify, where it's $10 a month and you get kind of all-you-can-eat from the Spotify catalog.
I think we're going to take some iterations to figure out where we, you know, we might start at some price, you know, fixed price that's there.
and we might evolve to something which is more like a Spotify model over time.
But the first step has to be actually saying,
if you're an AI company, you can't get content for free.
Now, if I'm a website operator and I want to prevent AI bots from scraping my site,
I can already do that through robots.txt.
I can just put a little, you know, file in the metadata,
and I can say, hey, these three crawlers, you're allowed to scrape,
but these other six, you're not allowed to scrape.
So how is what you are building with Cloudflare different than that?
Well, the first thing is, with robots.txt, the analogy would be like it's the speed limit sign, where it says, okay, you can drive 55 miles an hour, right?
There's no law of physics that says you have to drive 55 miles an hour, and I think a lot of us maybe sort of look at the speed limit sign and go, yeah, it's probably fine to do 60, right?
It's the same thing.
It is a recommendation.
It's not an actual enforcement that's there.
That's problem number one.
Problem number two is it's incredibly blunt in terms of what it does.
So you basically have to apply it across a significant portion of your site.
So you can't say, okay, let these things in, but not those things.
The last problem is a lot of the search companies, because again, we see a huge amount of the Internet.
We can track how this works.
A lot of the AI companies, I should say, if they hit a robots.txt file,
what they then do is they find some other way
to go out and scour the internet
to find your same content.
So they'll actually do a search against Bing
and try and pull the cached content
or they'll go look at the internet archive
or they'll actually do things
that are incredibly sneaky
like pinging ad server networks
to get a description of the page
that comes back to them
and trying to find all kinds of things around it.
We've tracked some of the sort of
worst performing AI companies
and there's a huge range.
Like OpenAI, they're actually the good guys here.
They're doing things
right. They're trying to do things the right way, and they're by far the best behaved. There are others
that most closely resemble like North Korean hackers, where they're literally using residential
proxies to spoof who they are to try to get around the various blocks. And so I think that
in addition to a road sign, for the badly behaving bots, we actually need something where we say,
we're going to take away your car because you keep going 400 miles an hour down the road.
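For reference, the robots.txt "speed limit sign" discussed here is just a plain-text file at the root of a site. A minimal sketch follows; the crawler user-agent tokens are real published ones (Googlebot, GPTBot, CCBot), but, as described above, the rules are only a request, not an enforcement mechanism:

```
# robots.txt at https://example.com/robots.txt

# Allow a search crawler everywhere:
User-agent: Googlebot
Allow: /

# Ask AI training crawlers to stay out (they may or may not comply):
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Everyone else: allowed, except one private area
User-agent: *
Disallow: /private/
```

Note that the file offers no per-page pricing, no identity verification, and nothing stops a crawler from simply ignoring it, which is the bluntness being described.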
Tell us a bit about how, technically, the solution that you've built works. How are you able to build those kinds of fine-grained controls?
Yeah. So, I mean, Cloudflare, we're essentially a giant network, and a ton of the web sits behind us, and so it has to flow through our pipes. And our primary business for most of our history has been cybersecurity. So every day we go to war with the North Korean hackers, the Russian hackers, the Iranian hackers, the, you know, Chinese hackers who are
trying to get into our customer systems one way or another, and we're really good at identifying,
no matter what they pretend to be, we're really good at identifying who they are, what they're
doing, and stopping them, literally by just not letting their traffic get to our customers.
What we realized, though, was that actually gave us the perfect position to be able to help
anyone who is a content creator also set what were enforced rules of the road, where we can
say if this bot is behaving in a bad way, that we're going to block it. So, for instance,
we, you know, as we have now studied this and we have very good evidence on what even some
of the major AI companies are doing that's really sleazy, we're going to put them in time out.
We're actually going to stop their ability to access a huge portion of the web, even if they,
you know, pretend to be something they're not. And we've got the record to do that. And our security
teams have actually investigated and figured that out. On the other hand, when, you know,
if OpenAI has done a deal, we also want to make sure that they get that content as efficiently as possible and structured in a way that's useful.
So the content creator can say, I want to allow Open AI to come to my page.
I think what will develop over time is then what is sort of a standard rack rate where you as a content creator, even if you're small, can say, okay, I'm happy to let this content be scraped, but here's the price for it.
And that's something that, you know, how that market will develop, I think, is going to be kind of the next
steps of this project.
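The enforcement idea described here can be sketched at the origin or edge as a simple gate: known AI-crawler user agents get HTTP 402 (Payment Required) unless they present a payment credential. This is a minimal illustration, not Cloudflare's actual implementation; the `crawler-payment-token` header name and the token check are invented, while the user-agent tokens (GPTBot, ClaudeBot, CCBot) are really published by OpenAI, Anthropic, and Common Crawl.

```python
# Hypothetical edge rule: charge known AI crawlers, pass everyone else.
KNOWN_AI_CRAWLERS = {"GPTBot", "ClaudeBot", "CCBot"}


def validate_token(token: str) -> bool:
    # Placeholder: a real system would verify a signed credential
    # tied to a payment agreement with the content owner.
    return token == "paid-demo-token"


def gate_request(user_agent: str, headers: dict) -> int:
    """Return an HTTP status for an incoming crawl request:
    200 = serve the content, 402 = Payment Required."""
    if not any(tok in user_agent for tok in KNOWN_AI_CRAWLERS):
        return 200  # ordinary browsers and non-AI crawlers pass through
    token = headers.get("crawler-payment-token", "")
    return 200 if validate_token(token) else 402
```

The point of returning 402 rather than silently dropping traffic is that it tells a well-behaved crawler exactly what is missing: a payment arrangement.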
Now, you announced this new approach to AI crawlers a few weeks ago and flipped this switch
that blocks all the crawlers by default.
What has the reaction been from publishers, from AI companies, from people who are inside
Cloudflare looking at the data?
Yeah.
So, I mean, from publishers, not surprisingly, I'm on a lot more people's Christmas lists.
Because publishers were really struggling with how to deal with it.
They saw the problem and they didn't have a good technical solution to be able to lock it down.
They're not cybersecurity companies.
So they have a harder time being able to track this.
And so the glee that we've seen from publishers as they've turned the system on, seen what was going on,
and then had the ability to push a button and say, you know, disallow, disallow, disallow, has been really, really palpable.
And I think that's why this resonated across such a wide swath of the publisher base.
what surprised me has been the reaction of the AI companies, which I kind of thought that they were just going to throw up all over this and hate it. But it hasn't been that. For the most part, with a couple of exceptions, they've said, listen, we get it. Ultimately, content is the fuel that runs our engine and we need to pay for it. But the key is it needs to be a level playing field. And so I think that what I am encouraged by is in all,
the conversations that I've been a part of, the AI companies said, if you can make it a
level playing field, if you can make it something that's fair, then we're willing to pay for
content. And that, I think, is a dramatic step that says that we're headed in the
right direction. Now, making it fair is going to be a trick with folks like Google, who
just believe that they've got kind of a God-given right to be able to copy everything
off the web. Like, that's going to take some persuading to get there. But I think as the
industry lines up and says, this is the right thing to do, we'll be able to get even Google to come along,
hopefully voluntarily. But if not, there's certainly enough investigations on them going on
around the world that one way or another, I think that they will be persuaded or compelled
to get behind this effort. I mean, basically what you're describing is a revenue share deal,
right? And it seems only logical that a company like Google, which is going to be making a lot more
revenue by crawling everyone's content, because they're such a source of queries, should be paying
more than the upstart AI company that just shows up on day one and wants to start crawling the
web. So I would hope that they would be open to that. I am encouraged and I do believe that Google
really does believe in the ecosystem and they get it. But it's a little bit of the frog boiling in
water where 10 years ago or 25 years ago, when Google started, it was a good deal. Like, get in a
nice pot of water, you're happy, you're a frog, right? But they've slowly turned the heat up
and that's made what used to be a good deal into a much, much worse deal. And so the deal needs
to get renegotiated. And that's a big piece of it. The other thing that I think is important is
you need to have access to content. And that's the thing today that's cheap. But my prediction
is that over time, the real differentiator between the different AI companies is who has access
to the most interesting content. And we've seen this play out with things like Netflix,
where you can see that by getting original content,
it actually drives subscribers.
And so I think that at the end of the day,
that's exactly right,
that fundamentally what content producers should be arguing for
is we should be getting a share
of whatever the revenue that you're generating from users is
because we're the fuel that runs the engine
that is powering your business.
I feel like you can already see this today,
and Kevin, I'm sure you've seen this, Matthew,
I'd be curious if you have, too,
but if you ever use one of these chatbots
to run a deep research report
because you're trying to like really, you know, bone up on some, you know, a set of historical facts or, you know, recent current events that you want to write about.
You get these results, and, like, half of them are from, like, bleepbloop.com, you know, like newswire.xyz.
Like publishers that no one has ever heard of and that you're not entirely sure are on the level.
And so often when I run those searches, I think, God, I wish that the sources I trust actually did just have deals with these chatbots or that I could, like, log in
with my like Bloomberg credentials
and then just have you read Bloomberg too
as part of the report that you're making.
Yeah.
And I think, I mean, all of those things will come.
But two things.
One, bleep blop bloop bloop.com.
Actually, sometimes it's going to have
really interesting original content.
And I think part of the key here
is we don't want to create a situation
where the only people who get the content deals
are the major, major publishers.
I mean, the New York Times can do a big deal
with OpenAI.
At the same time, you want to make sure
that the small
AI companies, the new startups, are also able to get access to this. So a vibrant market here
is lots of sellers and lots of buyers. And you want to be able to do that. Right. Now,
you mentioned that publishers are very happy about the steps that Cloudflare has taken to
block these AI crawlers, that AI companies were not as excited, understandably, but maybe
they would get on board too. I did talk to one AI executive about this, who basically accused
you and Cloudflare of setting up a new toll booth between the AI companies and the content
companies because you are not only providing this technology, but you are sort of inserting
yourselves as the merchant of record in these transactions. And this person asked me to ask
you, what percentage of each payment Cloudflare plans to take? So what percentage does Cloudflare
plan to take from these transactions? So first of all, like if you're the New York Times
and you do a deal with Open AI, that's your deal. We don't get any of that. If we facilitate
that. So if we're the ones who go out and negotiate it, then, yeah, we'll take some percentage
of it. And I have no idea what that will be, but I think it'll be something reminiscent of,
like, what does Spotify take of a subscriber's revenue versus what do they pay out? And usually
that's sort of in the 20 to 30 percent range of what that is. I think the only way that we
should get something is if we're actually generating value from this. And if not, then you should
do the deals yourself between the AI companies and the publishers. And in that case,
sure, we'll provide you the interface to be able to stop it. But that isn't something that we
would take any percentage of. What's also important is we provide this at no cost to even our free
users, because we think that this is fundamentally important to the sort of health of the long-term
internet. It shouldn't be something that just the big companies can get access to. And so no matter
who you are, if you're signing up for Cloudflare, you get these tools, you get the analytics, you get the
understanding, you get the ability to block it. And again, I think that we will be less involved
in the transactions for folks like the New York Times, but for bitbop bloop.com or whatever,
that's probably, it's probably a porn site that we're pointing people to.
And you can sort of envision what it might be. But if that's the case, then yeah, I think
that that's a place where if we're doing work, then we should get compensated for that work.
I mean, you know, so I have a website on the internet, Platformer.news. That's where my newsletter is.
And I'm super interested in this because I, frankly, just do not have the time or energy to go out and try to strike deals with AI companies.
I also have no idea what the value of platformer is in that particular marketplace.
So if somebody wants to go, like, make a market and then I can just show up and you tell me like, hey, you know, we can make you $80 this year or like whatever it is.
Like, I'm interested.
And you want to take 20% of that, sure.
That's, yeah.
And I think that that's, that feels fair.
And again, hopefully we're not the only ones doing this.
Like, from my perspective, personally, like, my wife and I own a small newspaper in our hometown, we see how hard and how important it is to have local news.
And there's got to be a business model to it.
Like, reporters need to eat.
You know, it costs money to print papers.
Like, you have to have a business model that's there.
You know, I think personally, we've built a $60 billion company on the back of the Internet.
Like, I feel an enormous responsibility to give back to the Internet and actually protect it.
Our mission is to help build a better Internet.
If you talk to anyone at Cloudflare, that's why they work for us.
And so, with all due respect to the AI executive, they've built an entire company by stealing
content creators' content and not compensating them for that.
And if they keep doing that, people will stop creating content, which not only kills the
internet, but kills them in the process.
Yeah.
I think there are a lot of people who would agree with you.
One of them is not President Trump.
He said just last week, in connection with some new AI policy rollouts that he was
doing. He basically took the side of the AI companies and said that they shouldn't have to pay for
copyrighted material. He said, quote, you can't be expected to have a successful AI program when
every single article, book, or anything else that you've read or studied, you're supposed to pay
for. It's not doable. So I guess what is the incentive for an AI company to agree to something
like a pay per crawl system if the current administration is signaling that they're not going to
face any penalty for just breaking copyright the old way?
And I think, like many things that sometimes come out of the Trump administration, nuance is a little bit lost here.
We've actually talked with a number of people in the Trump administration.
And I think what they are concerned about is much more a government-instituted kind of "you must pay X" in the way that we've seen come out of Australia and the way that we've seen come out of Canada.
I think that's what they're trying to signal that they're not in favor of.
They are very much in favor of, as far as we can tell, private market solutions
where that is created.
And so, again, I think the incentive for the AI company
is you need access to content in order to build your tools,
and we've just stopped you from getting it.
And so regardless of what the law says,
we're going to be able to technically stop the AI companies
from being able to get the content unless they're compensating the content creators for it.
I want to ask you, Matthew, about another release that you all did earlier this year
related to this issue of AI crawling,
which was something called the AI labyrinth.
This was a system that was designed to basically,
instead of blocking AI crawlers,
it would just sort of redirect them to an endless series
of AI-generated links and pages,
basically trapping them in this labyrinth
that they could not escape from.
So my first question is,
do you worry that the robots will remember
that you did this to them and take their revenge?
It is not a risk factor that we have currently included
in our S-1,
but I'll talk to our legal team
about it. We now have the data that there are some AI companies, big AI companies, well
funded, that are just behaving horribly. And frankly, if they're going to behave like hackers,
then we're going to behave like trolls right back to them. And we can feed enough garbage
into their system that they will create garbage content. And so again, when we come back to
what's the incentive, like you don't want to piss us off. Because again, we believe in the future
of the internet. We believe in supporting journalists. We believe in supporting content creators.
We're on the side, the right side of history here. And so the right thing to do is say,
yeah, you're spending $10 billion on GPUs. You're spending billions of dollars on employees.
You should be dedicating at least something to paying for content.
Did the labyrinth work? Like, what was the results of that?
I mean, again, we try to note, like, OpenAI: good actor, doing great, doing all the things
right. So we don't throw them in.
This message was not endorsed by the New York Times legal team.
That's, yes.
But for bad actors, and there are bad actors out there, we can pollute their data and we can pollute it at scale.
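The "labyrinth" idea can be sketched in a few lines: for a crawler already flagged as misbehaving, serve deterministic decoy pages whose links only lead deeper into more decoy pages. Everything below (the word list, the path scheme) is invented for illustration; Cloudflare's real system generates the filler with AI rather than a word list.

```python
import hashlib
import random

DECOY_WORDS = ["archive", "report", "notes", "index", "draft", "summary"]


def labyrinth_page(path: str, n_links: int = 5):
    """Return (html, links) for a decoy page. Seeding the RNG with the
    path makes each page stable across visits, so the crawler sees a
    consistent but bottomless site."""
    seed = int(hashlib.sha256(path.encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    filler = " ".join(rng.choice(DECOY_WORDS) for _ in range(40))
    links = [
        f"{path.rstrip('/')}/{rng.choice(DECOY_WORDS)}-{rng.randrange(10 ** 6)}"
        for _ in range(n_links)
    ]
    anchors = "".join(f'<a href="{u}">more</a>' for u in links)
    return f"<html><body><p>{filler}</p>{anchors}</body></html>", links
```

Following any link yields another page of the same shape, so a crawler that ignores blocks just burns bandwidth and accumulates worthless training text.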
You know, I'm having the, I think, entirely novel feeling of a hard fork recording where I'm feeling optimistic about the state of the media.
Like, every single time we talk about the internet and the media on this show, I'm like, well, it's going to be really tough, everybody, you know, batten down the hatches.
The storm is here, but Matthew, you actually are giving me optimism that, you know, when someone with the right incentives, and I do feel like you have the right incentives, shows up with great technology and, like, the willingness to go out and make a market, maybe by hook or by crook, like, we actually still will have a media industry five years from now.
Well, and I think it, again, and I might lose people because this is, this is starting to get a little, a little woo-woo, but everything that's wrong with the world today is ultimately Google's fault.
Now, that's a little strong. But Google taught us, they were the first ones to really teach us,
that traffic was the deity that we all had to worship. Now, again, I think Google's actually
been a net positive for the world, massively. But Google begat Facebook, which begat TikTok,
which is just kind of spiraling down toward sort of the attention economy of how do
I create a cortisol response and get people to click on my stuff so that we can sell ads to them.
I think that that's the wrong direction for really furthering humanity.
If, on the other hand, as we create this, the giant block of cheese has holes in it,
and we can say that if you as a creator go out there and create something that fills in one of those holes,
we'll compensate you for that, we'll pay you more for it.
What I'm hopeful is that we get a lot more really interesting, long-form knowledge-generating content,
which is what we all want, and the only reason we're not getting it is because all of the incentives are not how do we do
great things, but how do we actually just rage bait people into clicking on links? And so I think
it's super important that as we go through this, like, it shouldn't just be us figuring this out.
And so I've tried to spend as much time with publishers, with AI companies; we're working
with some of the leading academic economists, others to sort of say, as we think about this market,
how do we make sure it's as healthy as possible? Because the dark mirror version of this
is not that journalists go away, not that researchers go away.
It's that they're all employed by the five big AI companies, right?
And so it's not that we're going to go back to kind of the media times of, you know, the early '90s.
It's that we're going to go back to the media times of, like, the Medicis, where, you know, you have just these five powerful families that control all of, you know, academia and research and journalism.
And that's not actually that hard to envision.
There will be a conservative one, sort of the Fox News version.
There will be the liberal one.
There will be the Chinese one.
There will be the Europeans attempt at one.
And those will be the things that are out there and knowledge gets consigned behind that.
I don't want that to happen.
And I think the key to making that not happen is figuring out how we can have a healthy market with lots of sellers of content, lots of buyers of content, and make that as robust as possible.
Yeah. Now, Matthew, I know in the past you have expressed some concerns about the power that you and Cloudflare have by virtue of, you know, sitting in front of 20% of the Internet, as you say. And when we've talked before, some of it has been in the context of the decisions you've made around content moderation, basically deciding whether or not to protect websites with extreme or violent content on them. You sort of famously
pulled protection from 8chan, which led that site to go down after some mass shootings.
I think a lot of people understood that, but you were concerned at the time that this sort of
unilateral power that you had to sort of like wake up and make a big change to the structure
of the internet wasn't something that maybe anyone should have.
So I'm curious, like when you were deciding to implement this change, to push the button,
to block all the AI crawlers by default, did you worry about that exercise of power and
how sort of unilateral it was?
Totally.
And I think we take that responsibility really seriously.
But what we realized was all the publishers were sitting around saying, we're dying, we're dying, we're dying.
And no one was doing anything about it.
And so, like, if you're in that situation and you see something that really matters, I mean, the internet
really matters, and we should be fighting for it, and we should be protecting it, and it's dying,
because the business model behind it is dying.
And at no step have we said, we're the only solution.
In fact, we've tried to work with as many different, even competitors as possible to say,
this is important, let's do it.
And we don't pretend that we have the answer or that it won't evolve over time.
But we do know that the first step in any market has to be creating scarcity.
And so that's what we did on July 1st.
Got it.
Well, Matthew, thank you so much for stopping by.
Really interesting experiment.
I'm excited to follow it and see how it plays out.
Thank you guys for having me.
Thanks, Matthew.
When we come back, we'll pass the hat for a big round of HatGPT.
All right, Casey, it's time to pass the hat.
We are playing another round of our favorite game, Hat GPT.
Hat GPT, of course, is our game where we pick tech headlines out of a hat.
We riff on them, and we eventually yell at each other to stop generating.
Which they don't even say in Chat GPT anymore, sadly.
It's already become a throwback expression.
Yes.
So the hats we're using this
week are brand new, very pretty Hard Fork hats. Casey, would you like to tell the people where
they can get one of these? You can go to NYTimes.com slash hard fork hat, Kevin, and if you haven't
already become a subscriber to New York Times Audio, in exchange for that, in addition to a full year
of all the great podcasts, including the entire Hard Fork back catalog, we will send you this very
hat. Yes. It will not have slips of paper inside it, though. Or will it? There's only one way to
know. God. We're merging into the same person. It's terrible.
Okay. So I'm going to put the slips in.
Okay. Jiggle them around a little bit there, mix them up.
And then why don't you pick the first one?
All right.
LeBron James' lawyer sends cease-and-desist to AI company making pregnant videos of him.
This is from our friend Jason Keebler at 404 Media.
He writes, the creators of an AI tool and Discord community called Interlink AI
that allowed people to create AI videos of NBA
stars, says that it got a cease-and-desist letter from lawyers representing LeBron James.
AI generated videos of James, Kevin, included scenes like James as a homeless person, James
on his knees with his tongue out, and James lying on a couch clutching a pregnant belly.
So I guess my first question here is, who is LeBron James?
Just kidding.
He's a very famous basketball player.
Kevin, what do you make of this?
So this is obviously going to be a thing for celebrities.
They do not like their names and likenesses
being used without their permission
and especially if what you're doing
with LeBron James' name and likeness
is turning him into a pregnant person.
Did you watch any of these videos?
I did see a couple of them.
They're very disturbing.
Some of them, I think, are like on the end of the spectrum
that is just like sort of like surreal and funny.
And then some of it is like also just like racist
and horrible.
And regardless of all of that,
it's clear that LeBron James did not give permission for this.
and it's an interesting story
because as far as we know
this is one of the first known times
that a celebrity has objected
to the misuse of their likeness
by an AI company.
Yeah.
And I am worried that it seems like
they have automated the jobs
of the M-Preg community.
M-Preg is of course a...
Yeah, what is that?
I've been meaning to ask you about that.
It's a niche fan fiction thing
that for years
people have been sort of creating
these like animated
fan fiction cartoons of like Sonic the Hedgehog becoming pregnant.
Why are they doing this?
I don't know.
Couldn't tell you.
Not an M-Pregor myself.
But there were a lot of hardworking M-Preg artists out there who now have been put
out of jobs by these AI tools.
So for that reason, and that reason alone, I think we should take a hard stand.
I now want to say retroactively to my parents, do not listen to this segment.
And definitely don't Google M-Preg.
All right.
Stop generating.
Okay, next
up. Oh, this is a fun one. Sam Altman warns there's no legal confidentiality when using
ChatGPT as a therapist. This one comes from TechCrunch. Sam Altman, the CEO of OpenAI, went on
the popular podcast of Theo Von last week, where he acknowledged that OpenAI might be legally
required to produce a user's conversations with ChatGPT in the case of a lawsuit. Now, I saw this
clip going around, and I thought Sam Altman continues his terror campaign against the hard fork
podcast. As you will remember at our live show, he burst onto stage and then peppered us
with questions about this lawsuit between OpenAI and The New York Times, which has resulted
in OpenAI having to retain conversations between ChatGPT and its users. He's very upset about
this, doesn't want this kind of document retention to be required,
and has started advocating for the privacy guarantees that AI companies should be allowed to make with their users.
Look, this one is important because a lot of people already are using these chatbots as therapists.
They're having therapy-style conversations.
And I think most people are not thinking a lot about what is happening to that data.
Some companies may want to erase that data for, you know, user protection reasons.
Other companies might want to keep it forever and create a detailed
profile about you and then like rent some information to advertisers. So I would love to see some
kind of legal intervention, regulatory intervention come down and say, you're allowed to use
information submitted to chatbots in these ways. You know, users should have a full view
into what a chatbot knows about them, what kind of information is being stored about
them. They should be able to delete it, right? We just need a lot of like data and privacy stuff
around this sort of thing. And so I have to say, I'm like, I'm grateful for Sam Altman for at least saying,
like, hey, by the way, you don't have legal protections here.
Yes, we are allowed to use data however we want in training our models,
but God forbid, you know, anyone else wants to use data about our users after the fact.
All right. Stop generating. All right. Your turn.
How to catch a wily poacher in a sting: a thermal, robotic deer.
This is from James Finnelli at the Wall Street Journal.
Wildlife enforcement officers turned to a Wisconsin
taxidermist to make remote-controlled robots that look like wild animals to catch poachers, Kevin.
To make his decoys, a man named Brian Walslegel applies the skin of a dead animal to a mold made out of polyurethane.
He affixes glass eyes and plastic ears.
The circuitry to make the decoys move comes from parts for remote-controlled cars.
With some AA batteries, officers can remotely operate the Bambi bots.
Kevin, would you be interested in one of these to ward off some of the poachers on your property?
Yeah, I think there's been a lot of poaching attempts made against my livestock, and I won't be standing for it, so I'll be buying one of these.
Here's where this man's real opportunity is. He needs to use the same technology to build decoy AI researchers and put them in open AI so that when Mark Zuckerberg comes onto the property, he mistakenly poaches one of the robots instead of one of the human researchers.
So if I'm Sam Altman, this is what I'm doing.
Yeah. In general, I think, like, robot animals, I'm not
a big fan of.
Like, there's actually, on my block, there's a guy with a robot dog.
He runs, like, a STEM program for kids, and I've seen him, like, walking his robot dog
out on the street.
It's very disconcerting.
Let's just say...
Top ten signs you live in the Bay Area, by the way.
Yes.
Yeah.
Let's just say these things are not entirely lifelike yet, and they do still give you the willies.
All right.
Stop generating.
Next up.
Did a guy just save a picture of a bird to a bird's brain?
This one comes to us from Sean Hollister at The Verge.
It's about a YouTuber named Ben Jordan,
who has a popular channel about music and acoustic science,
who was able to get a bird, a starling,
to reproduce a spectrogram image in sound.
Casey, did you hear about this story?
This is actually my favorite story of the week.
What happened?
So, if I get this straight, the YouTuber converts an image into a sound.
He then plays the sound for a bird,
and the bird then repeats the sound.
And so in that way, you can say that the YouTuber
was able to store an image in the bird's brain.
He used the bird as a data transfer device,
like it was a disc drive.
Yeah, so you almost got it right.
Because what happened is, he takes this drawing
of a bird, he converts it into a spectrogram,
then he plays that sound for the bird,
the bird mimics it back, and then he takes that recording and turns it back into a spectrogram.
So he's essentially going image to sound, transfers the sound to the bird, bird transfers it back,
converts it back into an image.
Now, Casey, why would you do something like this?
Because you have a YouTube channel, and you're trying to get a lot of subscribers.
Now, how close was the image, the second image to the first image?
Apparently it was quite close.
This was like, you know, the bird is a little bit of a lossy compression device, but you actually were able
to sort of, like, recognize it as a line drawing
of a bird after the fact, which is kind of
amazing. Honestly, hats off to this person.
What a bizarre idea.
But fantastic.
You know, I uploaded an episode of our
podcast to a bird. Oh, yeah?
Yeah. It's called Hard Stork.
Okay, sir.
Why not?
Stop generating.
All right, you're up.
Okay.
Meta is going to let job candidates use
AI during coding tests. This is from Jason Koebler at 404 Media again. Meta told employees that it is going
to allow some coding job candidates to use an AI assistant during the interview process,
according to internal meta communications. On an internal message board for the company,
there was a post that called for mock candidates, because apparently they're going to let employees,
you know, do one of these sort of AI assistant interviews so they can work out all of the kinks.
So it looks like Meta's going all in on using AI coding agents to write code and also is just not going to
try to stop people from cheating on their job entrance exams anymore.
This is total Roy Lee victory.
Absolutely.
So this is like a vindication of what Roy told us when he came on the show several months ago,
which is that these LeetCode interviews are totally cooked because now people can just use
these tools like the ones Roy Lee is developing to cheat on their interviews.
I guess Meta is sort of seeing the writing on the wall and saying, you know what, go ahead,
use your AI.
We'll design our new test, which I think is probably a good outcome.
you think? Yeah. I mean, I think I'm interested to see how this affects the candidates that
they attract, the quality of the engineers that they can recruit. You know, I am ultimately persuaded
that people are going to be using these tools in the workplace anyway, so why not use them when
you're doing the actual test to get the job? Now, do you think they make the people who are making
a billion dollars a year take the tests when they come in? Yeah, they just give them the really hard
version. They say, if you do a good job, we'll give you a billion dollars. God, what a weird thing. Can you
imagine being onboarded to a new job and like you go through the training and it's like,
you know, here's your benefits package and you're just sitting there thinking I'm making a million,
a billion dollars. Yeah, they're like, do you want to put anything in your health savings
flex account this year? What about the commuter benefit? Do you want the 40 bucks for Bart this,
this month? You're like, I'm making a billion dollars, people. I'm not sitting through the IT
training.
Okay.
Okay.
Stop generating.
Here's one, Kevin.
This is from Taylor Lorenz at UserMag.
Substack sent a push alert encouraging users to subscribe to a Nazi newsletter that claimed
Jewish people are a sickness and that we must eradicate minorities to build a white
homeland.
Oh, boy.
Yeah.
And, you know, I have to say, Kevin, rarely have I felt so smug in my entire life as I did
when I read this story.
Long-time listeners of the Hard Fork Show may know.
that I moved platformer off of Substack last year
after some other folks had found a bunch of pro-Nazi websites on the network,
and Substack would not commit to doing any proactive searching
for these blogs to get rid of them.
And the main reason that I wanted to leave was I thought
these people are building amplification features,
and inevitably, they're going to just start promoting these things,
and it might be unwitting and it might be intentional,
but either way, I don't want any part of it.
And so now, sure enough,
a bunch of people who have the Substack app installed,
Yesterday, just got a ringing endorsement for a Nazi blog.
Oh, boy.
Yeah.
Now, Substack did basically say that this was a huge mistake, and they took the offending
recommendation system offline, and they're going to rejigger it, so it doesn't happen again.
But, well, it did happen, and I feel quite vindicated.
They did not see it coming.
Yeah.
That's a good way of putting it.
Yeah.
All right.
Stop generating.
All right.
Last one.
Why Amazon wants an AI bracelet that records
everything you say. This comes from Nicole Wend at the Wall Street Journal. Amazon is acquiring a
company called Bee. Bee makes a wearable device, B-E-E, that transcribes all the conversations
in your day, including when you talk to yourself. It then uses AI to turn that giant word soup
into a searchable history, offering up key events and even to-do lists based on your chatter.
Friend of the Pod and Wall Street Journal reporter Joanna Stern reported on her own experience,
testing out the B bracelet earlier this year.
She described it as impressively useful
and also, quote, really fucking creepy.
Casey, what do you make of this bracelet
and will you be buying one?
I'm not going to buy one for myself.
You know, generally I don't want a detailed record
of everything that I say during the day.
You know, one of the reasons why we podcast
is so that most of what I say can be edited out, you know.
So the idea that I would just sort of have this unfiltered record,
you know, stored in an AWS bucket
doesn't really appeal to me.
Also, as I read the reviews of these devices from Joanna and others, nobody really seemed like they were getting a lot of value out of it.
It was like, oh, yeah, I told my husband, I should get milk.
And, like, now I get an email that's like, hey, remember, you want milk.
It's like, is that really worth, you know, giving up all of your privacy in perpetuity?
Well, and all the privacy of everyone that you talk to throughout the day.
Like, that is the worst part of this.
I have at times been recording interviews
and accidentally left the voice memo running
for like an hour afterwards.
And it's never that it's that interesting,
but sometimes it does catch other people's conversations in there
and then I feel bad about it and delete it.
But with the B bracelet, this is the whole point.
It's just logging you all the time.
I don't think people are going to be that excited about that.
Well, here's what I'm looking for is
for keeping the B on you at all times
to be a condition of continuing to work
at the Washington Post owned by Amazon founder Jeff Bezos.
because they're going through a lot of turmoil right now,
and there's a lot of people who are, like, leaking stuff to the media.
So I think it's going to be like, hey, we need to check your B.
What have you been saying about us?
So it sounds like you are not going to be on the early beta tester list for the Amazon bracelet.
Maybe I will.
I'll be minding my own beeswax.
Stop generating.
Okay.
And that's HatGPT.
Yay.
Thanks for playing.
We won again.
What's that they used to say on Whose
Line Is It Anyway, where everything's made up and the points don't matter? That's right. HatGPT,
very similar. You know, my three-year-old is very obsessed with winning and losing. Oh yeah?
Yeah, every time we do anything, he says, I won, you lost. So I'm going to start doing that with you.
Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited by Jen Poyant.
We're fact-checked by Caitlin Love.
Today's show is engineered by Chris Wood.
Original music by Elisheba Ittoop, Rowan Niemisto, Alyssa Moxley, and Dan Powell.
Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott.
You can watch this full episode on YouTube at YouTube.com slash hardfork.
Special thanks to Paula Szuchman,
Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
You can email us, as always, at hardfork@nytimes.com.
Send us all the images that you've uploaded to birds' brains.
Thank you.