Hard Fork - Celebrities Fight Sora + Amazon’s Secret Automation Plans + ChatGPT Gets a Browser
Episode Date: October 24, 2025Backlash to OpenAI’s video generation app Sora has reached a new tipping point. We discuss two big changes the company is making, after Bryan Cranston and the family of the Rev. Dr. Martin Luther Ki...ng Jr. complained about deepfakes. Then, New York Times reporter Karen Weise joins us to discuss her scoop that Amazon plans to reduce its hiring needs by more than half a million workers, thanks to new improvements to warehouse automation. And finally, A.I. browsers are here. We offer our first impressions on ChatGPT Atlas and how it stacks up against alternatives like Perplexity’s Comet and The Browser Company’s Dia. Guests:Karen Weise, New York Times technology reporter covering Amazon. Additional Reading: OpenAI Blocks Videos of Martin Luther King Jr. After Racist DepictionsAmazon Plans to Replace More Than Half a Million Jobs With RobotsThe Robots Fueling Amazon’s AutomationOpenAI Unveils Web Browser Built for Artificial IntelligenceWe want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.
Transcript
Discussion (0)
This was crazy.
Google's Willow Quantum Chip is using a new Quantum Echo's algorithm that ran computations
13,000 times faster than supercomputers, Kevin.
Oh, I see it's performance review season over there in Google Quantum Computing.
Oh, you know, my Echo Chip did a quantum compute.
You know, I need a raise.
No matter how many times I learn what quantum computing is, I do immediately forget it the next day.
And it just, like, this is how I am.
This is why I love reading mystery so much
is because I forget who did it, like, the day after I put the book down.
That's what quantum computing is for me.
You know, we have to fill out our performance reviews soon at the New York Times.
And I think I'm just going to put in there that I solved a quantum computing problem this year.
Because how will they fact check me?
Now, why don't they email me asking to, like, help on your performance review?
Oh, you want to do a 360 review?
I want to do a 360.
You've got some feedback?
Yeah.
I'm Kevin Russo Tech Commons at the New York Times.
I'm Casey Noon from Platformer.
And this is Hard Fork!
This week, Open AI's big sloppy mess, why the company is backpedaling over Sora.
Then, the Times Karen Weiss joins us to discuss her scoop on Amazon's plans to reduce its hiring needs by hundreds of thousands of workers.
And finally, AI browsers are here, our first impressions of chat GPT Atlas.
Well, Casey, it's been another busy week for the OpenAI Research and Deployment Corporation.
I learned that's what they call themselves.
Really?
Yeah, they have these hoodies.
I saw a guy on the train the other day with a research and deployment company.
It didn't even say Open AI, but that's sort of their new tagline.
Interesting.
Well, I would say based on the events of the past week, Kevin, maybe Open AI should do a little more research and a little less deployment.
Yeah, so let's talk about it.
We're going to talk about two open AI stories this week, one about their new browser.
We'll talk about a little later.
But first, we've got to talk about what's been happening with SORA.
We've talked about this the last couple weeks on the show, but this continues to be a total mess for OpenAI,
this app and the various sort of controversies and backlash is swirling around it.
So, Casey, what is going on with SORA?
What is the latest here?
Well, I would say there have been two big developments over the past week, Kevin.
One, the company has said that it is going to essentially like crack down on political deep fakes based on historical figures after the families of some deceased political figures started to complain.
And then the company has also said it's going to try to build some guardrails around the use of copyrighted intellectual property after many people in Hollywood freaked out, including Breaking Bad Star Brian Cranston.
Yes, yes.
They managed to beef with Brian Cranston and the estate of Martin Luther King Jr.
week. And Casey, I think that qualifies as a bad week at the office. It's not a great week.
It's like there's sort of this whole genre of like what I call bad beginnings, which is like
when you start saying something and you realize like, oh, this is not going well for me.
It's like things that would fall into this category include per my recent conversations
with the estate of Martin Luther King Jr. Also in this category, regarding the Nazi tattoo I got
while in the Marines, and regarding the amount of lead in my protein shakes, you know when
you've said any of those things, it's not been a good week.
Not been a good week.
So let's start with Martin Luther King, Jr. and his estate and their beef with SORA.
Why is he so significant figure in American history?
Well, according to my SORA feed, he's a sort of historical civil rights icon who like
to get up and give speeches about Skibbity, Ohio toilet riz.
He also appears to love to play Fortnite based on the sort of videos I've seen.
So what we're talking about is this sort of emerging genre of SORA videos, which I started seeing pretty soon after downloading the app where people would just take Martin Luther King's iconic speeches such as I Have a Dream and make him say other things, things talking about Gen Z trends, things talking about video games, endorsing various products.
This was funny to some people, offensive to others.
you know who didn't like it was the estate of Martin Luther King Jr.
Yeah, and it wasn't all, you know, playing Fortnite and talking skibbitty toilet.
Some people were also having MLK make monkey noises and putting him in other just like overtly racist situations.
And so, yeah, his family members complained.
The Washington Post wrote a great story about families of him and other deceased historical figures saying,
hey, like, this really sucks.
And Open AI's original position had been, we believe in free expression.
should be able to do what they want.
But I don't know.
At some point, something changed.
And next thing you know, OpenAI is on X, posting a statement saying,
while there are strong free speech interests in depicting historical figures,
Open AI believes public figures and their family should ultimately have control over how
their likeness is used, which was a brand new policy as of the moment that they posted that.
And this is somewhat confusing to me because part of the way that Sorah works is that
in order to make a cameo of someone to use their face in a video, they have to sort of
give you permission to do that. So presumably, you know, Martin Luther King Jr's estate did not
go into their Sora settings and say, anyone can make a photo of me. But they sort of use some
public figure loophole or how did that work? That's right. So if you're just an average person,
people cannot go in, like you, Kevin, like a very average person. I can't just go in and make a
cameo of you unless you have changed your settings that way. But Open AI,
basically said, it is open season for historical figures. And of course, there's lots of video
out there of MLK and others. And they just said, yeah, go crazy if you want. Got it. So now they're
saying, actually, we've thought about it. And after consulting with the King Estate, we are no
longer letting people do this. Yeah. Like what happens if you try to make a video with Martin Luther
King Jr. now? Now it will just, you'll just get blocked. You know, it violates the content policies.
But, you know, I just want to say it was so obvious that people were going to do this. You know,
and in its ex post, OpenAI suggested that the reason that it had made this change
was that people were making, quote, disrespectful videos of MLK.
Like, you really thought that people were only going to make respectful videos of historical figures.
Like, let me be clear.
The only reason to you, SORA, is to create a video of someone doing something that they would not ordinarily be doing.
Yes.
Right?
It is not a technology to make people give beautiful speeches about civil rights.
I confess I am somewhat implicated at this because I have not made a Martin Luther King Jr. saw a video, but I did make a video of Mr. Rogers saying Gen Z catchphrases, because I thought it was funny.
After everything Fred Rogers did for this country, and this is how you repay him.
I felt bad about it if it makes it any better. I did have a moment of like, you know, guilt and shame after doing it.
Did I do it anyway? Yes. Did it get approximately four likes?
also yes. But I mean, look,
this, here's what I'm telling you.
This is what the technology is for. It is doing
exactly this thing. And so if you don't have
in your mind a policy for
how you want to handle that before you launch it,
I think you're doing something irresponsible. Okay, so the
estate of Martin Luther King Jr. is mad at
Open AI over SORA. Who else is mad
at Open AI over Sora? Well, Kevin,
that brings us to Brian Cranston,
who presumably was
minding his own business,
down in Albuquerque, making methamphetamines,
when all of a sudden,
he opens up the SORA app
and finds himself in videos
with Michael Jackson and Ronald McDonald
which is what we like to call around here
a nightmare blunt rotation. Now I have not seen these
videos. What did they show? I actually haven't seen them
myself because I don't want to support Ronald
McDonald that way. I think
he has a lot to answer for it. Yeah. So here's why
this is a problem. This was supposed to be
an opt-in regime. If celebrities
images were going to appear in SORA, it was supposed to be that they had to opt in, but as
Winston Cho reported at the Hollywood Reporter last week, that's actually not what happened.
Days before the release of SORA, opening, I went to the big talent agencies in the studios
and said, hey, if you don't want all of your intellectual property in our app, you have to opt out,
which companies like Disney were putting out statements being like, that's not actually how
copyright works.
You don't have cartblance to do whatever you want with our IP unless we have.
opt out. And so this starts to get people in Hollywood really mad. Got it. So I saw the statement from
Brian Cranston and Sag After, which is the union that represents actors and a number of other
talent agencies basically saying, hey, we don't like this. But what are they saying about how
Open AI has been approaching them? Because Open AI, from my understanding, did actually try to sort of
go to Hollywood before this app came out and say, hey, just FYI, we are going to be releasing this.
but we have, like, taken steps to sort of get ahead of some of your, the issues we think you might have with it.
That's right. But in practice, this just was not true. People were able to create videos of Pokemon and Star Wars and Rick and Morty and other intellectual properties whose owners had never given their permission.
Brian Cranston had not given permission for his likeness to be used in the app.
And so you wind up having a lot of what Open AI, I always love these euphemisms that these companies use.
A.I. calls these unwanted generations. I was like, Generation Z. But it turns out that, no, this is
about unwanted videos appearing within the SORA feed. So, you know, it's very funny to me to come out
afterwards and saying, on reflection, we'd like to strengthen the guardrails when, in fact,
there were not guardrail. You know what I mean? Yes. If I drive my car off the side of the road,
because there's no guardrail, and the California Department of Transportation says, we're going
to strengthen these guardrails, I'm saying, where was the guardrail?
I'm dead and I'm shouting at you from hell saying where was the guardrail, Kevin?
Right.
There is a lot of like, I don't know, like false naivete or like just people being like feigning surprise.
Like I cannot believe that my unauthorized generation app is causing problems over unauthorized generations.
It's crazy.
I am so glad you said that because this is the thing that has got me so exercise.
over the past week.
It is that phony naivete.
It is this, wow, who could have ever predicted this?
Because that is just an approach that I think if you apply it to building AI products in the future,
it's going to take this to some very bad places.
Okay, so Open AI is dealing with this backlash.
I think there's sort of a larger backlash brewing over just AI-generated video.
And I'm curious what you make of this, that, like, I think there is starting to become a consensus position,
especially among people who are not in San Francisco
and do not work in the AI industry,
that all of this is just bad and stupid and harmful
and that the sort of juice is not worth the squeeze, as it were,
that the benefits of AI, whatever they might be in the future,
are not enough to justify the enormous costs of training these models.
There's something sort of soulless and depressing about people using AI
to generate fake videos of Martin Luther King Jr. and Brian Cranston,
Ronald McDonald, doing various things.
I guess I'm curious whether you think the SORA backlash is part of that or whether
what we are just seeing is one manifestation of a pre-existing thing where people were already
mad about this stuff.
We are going to have to get survey data to develop an empirical answer to that question, but
we know from like a recent Pew survey that already about half of Americans say that they
are more concerned than excited about the future of AI.
and my assumption is that the SORA backlash is going to fuel that.
When I just look at my own interaction with friends and families,
the default feeling about SORA is not what a fun, new creative tool.
It is, this is bad and I hate it.
And by the way, these aren't even necessarily people
who are like up in arms about what's going on with MLK and Brian Cranston.
This is just sort of giving them the ick.
Yeah.
I mean, to me, this just seems like a continuation of this pattern at OpenAI
that extends back to the launch of advanced voice mode last year,
when you can probably remember Scarlett Johansson
kind of objected to references to the movie Her.
OpenAI had basically approached Scarlett Johansson
and said, hey, would you like to be supportive of
or involved with this launch?
Could we sort of explicitly tie this to your character in the movie Her?
She said, no, they went ahead and did it anyway.
And it seems like that is something that they have continued to,
rather than being chastised by that
and learning from that experience
and say, hey, maybe it's important
that we have the permission
of the creators in Hollywood
before we go out and do something
that's potentially disruptive to them.
Maybe we should get their permission.
It seems like they have not learned that lesson.
That's right.
And that's really where I kind of want to land this.
This is why I think all of this matters
is I think that in building
any kind of novel technology,
inevitably companies are going to make mistakes.
They're going to go too far in some regard.
There's going to be some problem.
that they didn't anticipate, and it's bad and we should talk about it, but I think companies can
kind of come back from that. But then there are other companies that just kind of start to make
the same mistake over and over again, right? You bring up the Scarlet Johansett issue, which I think
partly came out of just a rush to release this voice mode into the general public, and look at what
else we have seen over the past year. I think there was a similar rush to update GPT-40 with what
turned out to be a very sycophantic update that was embarrassing to the company.
There was a rush to release chat GPT in ways that sort of cut users access off to tools
that they had become very dependent on, and it triggered this huge backlash.
And now here they are in this rush to release this video app, in part because they want to make
money, the company has said, and lo and behold, they either have not thought through the policy
implications or if they've just decided to build a policy that could only possibly bring them
a huge backlash. So I look at that, Kevin, and I fear this company actually has changed a lot
over the past couple of years, right? How do you think it's changed? Well, if you look at them
before the launch of chat GPT, and even in the few months after that, this was a company that was
talking a lot about, on one hand, wanting to introduce new technologies to the public to see how
a society would adapt and to do that in a way that was too aggressive for, you know, a way that was too aggressive
for some people, but I think was basically working out okay in the original chat GPT era.
Sam Altman was going around Washington, meeting with Senator saying, hey, we're building
something that could be really dangerous.
We want guardrails around this.
We want you to pass regulations that rein us in.
And then you just fast forward today.
And it's this all-out war between a handful of companies that are trying to build AGI
faster than the other guy.
And we are just seeing in real time not just like guardrails being removed.
We are seeing guardrails not being built.
and the company having to come in afterwards
and say, oh, hey, sorry about that.
Yeah, we're going to do something.
We're hearing your feedback.
And the thing that just shocks me about that is
I actually believe for a time that Sam Altman
had taken the lessons of Facebook and the social media backlash.
He had seen everything that had happened to Mark Zuckerberg.
He said to himself, I am not going to make those same mistakes.
And now we are just saying Open AI do the full Facebook
when it comes to content policy.
Well, and I would say two things about this strategy of Open AIs.
One is it is brash, it is risky, it is likely to lead to lots of backlash and people being
mad at them, and I think it is potentially correct. I mean, what we've seen over the past few
years is that there are not a lot of real restraints on companies that want to build and
release technology this way. I think the real risk to open AI is that people just end up
losing faith in AI as a whole. And as we've talked about recently on this show, like the entire
economy now kind of rests on this belief that AI is growing more powerful, that it will soon
deliver all of these like tangible economic and social and scientific benefits to people,
that it is not just like hoovering up a bunch of people's data and using it to make slop. And if that's
what sort of the public image of this stuff becomes, because Open AI has adopted this product
strategy. I think that will be bad for the whole AI industry, but probably not especially bad for
open AI. Yeah, I mean, I think, like, that is a fairly cynical view, Kevin. Like, it's true in a lot of
ways. We are in the L.O.L. Nothing Matters era of content moderation. And I am just reflecting,
once again, on how, like, we used to have a world of, like, business and politics where people would
go to great lengths to avoid feeling shame. And at some point, in let's say the past decade,
we just decided we're not going to care about that anymore. And no one can make us feel ashamed
for any reason. And for the moment, I guess the only real impact we're seeing here is that
you know, a few copyright holders and like families of historical figures are annoyed by
videos that they're seeing online. But this is the company that continues to build ever more
powerful technology. And when GPT-7 comes out and is helping novices build novel bio-weapons,
I don't want there to be an ex-post saying that based on recent pandemics, the company has
decided to build some guardrails. Right. I mean, I've been talking with a few people
of sort of in and around open AI about this over the past few weeks, just kind of taking the
temperature of like how folks over there are feeling about this. And a couple of things that I've
that I want to just run by you for a reaction.
One is, like, this is a company that does not have the benefits of having hundreds of billions
of dollars a year in search revenue flooding in the door that it can use to build AI stuff.
Like, that is the situation that Google, it's sort of next biggest competitor is in.
They basically don't have to care about money.
They can spend all of their, you know, profits on carrying cancer and building quantum computers
and self-driving cars and whatnot.
But like Open AI kind of doesn't have that luxury,
and so they have to figure out ways
to pay for their enormous ambitions.
And not all of those are going to be
sort of obviously pro-social and beneficial things,
but the ends will justify the means,
just as, you know, Google spent, you know, many years,
they say, building up this monopoly
and doing all sorts of unsavory things
in order to get the profits
that they can then plow back
into the sort of peace dividend of AI research?
I mean, I reject that for two reasons.
One, this company's stated mission
is to build AI that benefits all humanity.
So it's like, if the argument is
in order to benefit all humanity,
we have to harm some of humanity,
get a new mission statement, girl.
Like, come on.
Number two, I also reject the premise
that they have some cash crunch.
Sam Altman is the greatest fundraiser
in the history of Silicon Valley.
This company has access to all the capital it needs.
So don't tell me that you need to release
the infinite slot machine
that makes Brian Cranston cry
in order to build your machine god.
I don't think Brian Cranston is actually crying
unless that sort of video I saw was legit.
But another thing that I will hear from people at OpenAI
is about what they call iterative deployment,
which is one of their favorite catchphrases over there.
They basically believe that instead of keeping all of this research
and all these capabilities cooped up inside the lab
and then kind of releasing them all at once every few years,
that we should have kind of a steady drip of new capabilities
from these companies that sort of help the public update
about what is now possible with AI.
And so one defense of Sora that I've heard from people over there
is they'll say, look, this technology exists.
These video models are getting quite good.
And we could either sort of spring this on you all
when it is impossible to tell the difference between fake and real
and without any of these safeguards,
or we could kind of release it in this iterative way
where we kind of give the world a chance to adjust and catch up
and have these conversations and arguments about likenesses and copyright and sort of
prepare the world for this new capability that exists and that that is the responsible
thing to do. What do you make of that? Well, I just think that there are so many more
responsible ways to do it than saying there is now an app where anyone can go on and make a video
of Martin Luther King barbecuing Pikachu, right? You could just make whatever deepfakes
you want and put them on a website and say, hey, look at the terrifyingly real deepfakes we were
able to make with this technology. We're not going to release it to the public, but just so
you know, if you start seeing videos out there that seem like maybe they didn't happen, maybe they
didn't. Or you could say, we're going to just make this available in our API. And so developers
have access to it, but we're going to closely monitor how developers are using it. And if there
are bad actors in our development ecosystem, we are going to get rid of them, right? So those would be
two alternatives to just saying, hey, everybody, go freaking nuts. Right. Yeah, I think those are both
Good responses. I don't find any of the defenses of SORA from the Open AI folks I've talked to,
all that compelling. But I think they are learning a lesson, actually, from the social media companies,
which is, you know, you do something bold and brash with very few guardrails. People get mad at you
about it, and you scale it back 10%. But you've still kind of taken that yardage, even if you have to
turn the dials to install some guardrails after the fact. Like, you still have kind of gotten what you
came for even if you end up having to make some compromises. Yes. And in that, when I look at this
story, Kevin, I just see exactly what YouTube did in its early days, right? YouTube also started
out by saying, hey, why don't you just upload whatever you want onto our website? And we're just
going to sort of take for granted that you have the copyright over whatever you're uploading
it. And eventually Viacom comes along and says, there are more than 100,000 clips of our TV shows and
movies all over your network and we're going to sue you for a billion dollars. And this wound up being a
kind of costly legal battle. It went on for a very long time. It was eventually settled. But during the
time it took for that case to settle, YouTube became the biggest video site in the world and it won the
whole game. And so I think that there is a very cynical rationale for everything that we're seeing
open I do, which is saying, hey, we have the opportunity to go get all that market share. We're going to do it.
Yeah. It's what they call regulatory arbitrage, right? That's one thing you could call it.
What else would you call it?
Well, this is a family program, Kevin.
I'm going to try to be polite.
Okay, so that is the next turn of the screw in the SORA story.
Casey, what are you looking at with this story going forward?
Here is what I'm looking at going forward.
Open AI, for better and for worse, is a company that is shipping a lot of products.
We're going to talk about another one of them later in this show, right?
This is an organization that has figured out how to build and release new stuff.
And that stuff does some really cool stuff.
and as with SORA does some pretty gross stuff.
I think the thing to keep your eye on
as these new products come out is,
is this company truly paying attention to responsibility anymore
or is the entire ethos of the company
now just a land grab
for as many users across as many surfaces as it can get?
Because if that is going to be the new M.O. for this company,
then I think we need to be a lot more worried about it
than at least I personally have been to date.
Yeah, I think they are in a real like throwing spaghetti
against the wall phase here. And I think that is reflected in just how many things they're
shipping constantly and seemingly a new product or two every week. And some of it'll work and
most of it probably won't. But, you know, Casey, one of the best pieces of advice I ever got about
journalism was that the stories you don't write are as important, if not more important,
than the stories you write. And I think OpenAI has not learned how to say no to a new idea
or a product or a business line yet. And I think that's a skill that they should start.
developing because it seems like they are spreading their bets quite thin. They are throwing a lot of
spaghetti at the wall and maybe they're losing the plot a little bit. Now is that advice why you
write so few stories? Yeah. Okay. Interesting. Yes. I'm very proud of the stories I don't write
though. That editor really took a number on you. Yeah. When we come back, we'll take a look inside
Amazon's secret plans to automate hundreds of thousands of jobs with next generation robots.
Well, Kevin, there's a new story out there about robatos, but some people are not saying domo erigato.
That's right, of course. I'm talking about Karen Weiss's story this week in the New York Times, saying that Amazon plans to eliminate a bunch of jobs using robots.
Yes, this was a big story this week, and I'm very excited to have Karen on to talk about it.
The basic idea here is that Amazon has made plans, plans that has not shared with the public,
to replace more than half a million jobs with robots.
And Karen, my lovely colleague at the times, got a hold of some of these internal strategy documents
in which they are laying out these plans.
And this story has been causing a big stir.
I think people are sort of fearful of job loss from AI and automation rates,
now. That's been obviously a big topic in the news. And what we're seeing now is one of America's
largest employers saying in its internal documents, yeah, we're doing it. That's right. It's one thing
over the past couple of years to have discussed, as we often have on this podcast, the risk that this
technology will someday be good enough that a lot of people will be put out of work. It is something
very different to see America's second largest private employer saying, we have an actual plan to
make this happen. Yeah. So to talk about this story and how Amazon is racing toward its goal of
automation. We are
inviting back New York Times reporter in front of the
pod, Karen Weiss. She's been covering Amazon
for nearly a decade for the times
and recently visited a warehouse
in Shreveport, Louisiana, where they are
putting a bunch of their new robotics to the test.
And I think it's a prime day to talk with her.
Oh, I get what you did there. Yeah.
Karen always delivers.
Karen Weiss, welcome back to Hard Fork.
Happy to join you guys.
So this was a fascinating story. I really enjoyed it and learned a lot from it. It caused a big stir. I heard lots of people talking about this plan that Amazon has to replace a bunch of jobs using robots. And I want to start with how you decided to look into this, because this is a subject that people have been talking about for many years. Amazon obviously has been putting robots in its warehouses for a long time. We, Casey and I went to an Amazon warehouse last year and saw what looks at.
to be like a huge fleet of robots sort of moving around, picking up containers and bringing them to people
who would pick things off them and put them in boxes. But what made you think that this was
taking a step forward that was important for you to write about? Yeah, I've covered the company
since 2018, and it's more than tripled its headcount since then. So there was this period of
just tremendous growth. And then it started plateauing or almost plateauing. And,
And you could see every quarter, when I cover earnings, you would see what was these huge growth,
particularly obviously in the early days of the pandemic.
You started seeing it kind of slow a lot.
And the company itself has been talking a lot about its innovation, the advancements it's making in robotics.
They use the term efficiency to talk about it.
They don't like talking about the job side of it.
But it's just one of those trends that was kind of like out there waiting to be dug into.
And I finally had time to look into it, basically.
And tell us a little bit about the document you obtained and some of the more surprising plans that Amazon announced in it.
Yeah, I mean, there was kind of a mix of documents that I was looking at, and some were more concrete.
Kind of the core of it is an important strategy document from the group that does automation and robotics for the company that really lays out what their plans are.
So there's a chunk that's really looking at the way they try or trying to manage their headcount.
They talk about things like bending the hiring curve.
It had been growing so much.
And their kind of stretch goals to keep it flat over the next decade, even as they expect to sell twice as many items.
They have this kind of ultimate goal of automating 75% of the network.
I think of that as kind of the big picture long-term goal versus like that's going to happen tomorrow.
All of this is kind of slow, kind of step-by-step changes that add up together.
The other documents are these really interesting ways in which the company internally,
is looking at how to navigate this publicly with employees, with the communities they work in.
This is obviously a very sensitive subject.
So they talk about debating ways to manage this.
Like, should we not talk about robots?
Should we talk about a co-bot, which is a collaborative robot?
They talk about should they deepen their connection to community groups, doing more things like toys for tots or community parades,
particularly in places where they're going to retrofit facilities.
So they're going to take a normal building that might employ X number of people.
and then convert it to a more advanced one.
They'll need fewer people in many of those.
Basically, they're thinking through, like,
how do we manage the, like, reputational fallout
if we become known as a company
that is replacing a bunch of jobs with robots.
Yeah, and the plan is you won't have a job anymore,
but your kid will get a free toy at Christmas.
So hopefully that makes up for that.
Yeah, so let's talk about the first group of documents here
and some of these numbers that Amazon has attached to this.
So a few numbers from your story stuck out to me.
One is that Amazon projects that they can eventually replace 75% of their operations in these warehouses with robots.
What percentage of this stuff is already automated today?
Because when Casey and I went, it looked like there were a lot of people there.
There were a lot of robots.
And the people were essentially acting as robots, right?
They were like taking instructions from machines and putting things, you know, thing A into box A and like doing that as fast as possible.
Yeah, not a lot of creative expression in the Amazon warehouse we were.
But like what amount of robotics growth would it take to get from where they are currently to 75% of their operations?
Sure. So like this warehouse in Shreveport, Louisiana that I visited, is considered their most advanced one.
And that, they say, has about 25% efficiency. And their goal is to quickly get that to 50 in that facility.
To get to something like 75%, it's both not only these individual buildings, but having to expand it throughout different types of facilities that they operate as well.
In the facility you went to, there's these kind of cubbies that keep products, and they, over time, developed this light that it's a big tower of a bunch of cubbies. And so the light shines on the exact cubby that has the item you want. So instead of looking through the cubbies, you kind of know exactly which one to put your hand in. So there's things like that make it a lot more efficient in kind of all different types of jobs. There's many different types of jobs in these buildings. But some things are harder for robots to do. And one of the things that interested me in Louisiana was there's a job called decant. And it's essentially they get these
boxes of products in from marketplace seller, so the companies that sell products on Amazon,
and they have to input them into the system. And so it's essentially a point where you get
like the chaos of the normal world that they have to kind of standardize. And watching a
decant station, we watch this woman working at it. It is just random what goes into this thing.
So we saw gardening shovels wrapped in bubble wrap, boxes of Starbucks Courage cups,
circular saws. I mean, each one is different. And they're coming in. And they're coming in.
in different shapes, different boxes.
And so that's still hard for a robot to look at to kind of say,
is this product what we expected it to be from this shipment?
Is it damaged in any ways?
If it's damaged, it goes into a separate box and someone has to deal with that.
So there's still a lot of human judgment.
But once they put it into this box and can go out into the system,
then it starts becoming more kind of logged into the Amazon way
and able to manage as they develop the technology within their own spaces.
Hmm. Let me ask about this, this 600,000 worker figure that's in your story, which is really the thing that got my attention. I could not think of another company that had announced plans to eliminate hundreds of thousands of jobs through automation within just a few years in such a plausible way. Had you, do we think this might be sort of one of the first major signs of significant?
job loss due to automation in the U.S. economy?
You know, I spoke with a Nobel winning economist for this who studied automation.
And he, exactly, I know.
And he was saying that kind of the real precedent for this is actually in China, in manufacturing
in China.
But that within the U.S., yes, this is kind of the kind of bleeding edge of it all.
Yeah, so there's obviously labor and cost savings reasons why Amazon wants to make this big
push into automation now.
But I'm curious, Karen, if any of this is driven.
by, like, recent advances in the technology itself?
Like, have the robots just gotten better over the last year or two?
Do we think that that's part of what is making them put out this ambitious plan
and talk about how they want to start opening these facilities?
Actually, some of this, they about a year ago acquired Covariant, which was a leading,
or excuse me, not acquired, license for hire agreement, as these kind of newfangled things are.
So they hire the team behind Covariant, which was a leading AI,
robotic startup. A lot of what I reported on actually predates that being integrated into the system.
So there actually are tons of advancements that are happening in computer vision in creating
the environment and the data needed to tell the robot what to do, essentially. But I think we can
expect more in the future from what I reported because of covariance. So for example, one of the
things that they've helped improve is how the robot stack boxes. So, like, I saw that there's
a robotic hand called a sparrow, and it's suction cups things, and they use it to consolidate
inventory currently. So they'll take, you know, a bottle of hand soap from here and move it to
there. And then they free up extra space to put new items into the storage facilities. And the robot
stacks them, like, really nicely. Like, they're, like, kind of perfect. They don't just, like,
drop it in the bin. It's, like, lined up one by one, and they're stood up. And I noticed these boxes
and that's important because then it's easier to grab later. Those are the types of
advancements that they've already started seeing from this next generation of AI. So I think I would
anticipate seeing more of that. Is it true that they also have technology that uses air to blow
open envelopes? Very sophisticated for technology. Yes, that's a fan. That's what I kind of love
about this. It's like simple things also. It's not all crazy and elaborate. Well, it was really sad
for me because that's actually my dream job.
But it looks like the robots are going to have to take this one.
Instead, you just blow hot air in the podcast studio.
Now I have to podcast because I can't blow in the envelopes anymore.
So I went to covariance lab.
They had a, before they were sort of aquired by Amazon, they had a warehouse in the East
Bay here.
And I went to visit them a while ago.
And they were sort of doing these more advanced types of warehouse robotics where
like they would put a large language model into one of these robots.
and, like, use that to sort of orchestrate the robot.
And so that made it, they said, you know, possible to do things
that, like, a sort of simple, more sort of rule-based robot couldn't do.
Like, you could tell it, like, move all the red shirts from this box into this box,
and it could kind of do stuff like that.
So you're saying, Karen, that that technology has not yet arrived in these Amazon facilities,
even though Amazon now has licensed this technology.
It has begun to.
So they had some of that for sure.
Like, absolutely, they had that.
And they've talked about using that.
type of technology to, there are these little robots that kind of are like little shuttles.
They're kind of small, like a size of a stool or something, and they just move individual
packages around to sort them. And they've been able to move those more efficiently because
of it, for example. So like let them orchestrate each other better to not bump into each other
essentially. Right. So, yeah, there is some of it for sure, and I think you'll just see more
of it. I'm curious, Karen, you write that these documents you got a hold of show that Amazon's
ultimate goal is to automate 75% of its operations. What's the remaining 25%? What are the jobs
inside these facilities that they do not see being automated at least any time soon?
Well, there will be this growing number of people that are technicians. So essentially working
with the robots themselves. Fix the robots. Fix the robots tend to them, exactly. And those are
something they talk a lot about. It is both a concern that they have enough people doing those jobs
and there's not enough people trained in that right now.
So they need a labor force for that.
They make more money.
They are like better jobs in many ways.
They have more of a career path than a typical Amazon job might.
So that's one component of it.
There's also just exceptions.
I mean, watching the robots move, like they'll pick something up and it'll fall.
Or I saw them try to grab this like shrink rack bag of, I think it was like T-shirts
or something or underwear.
And it was just like the suction of it trying to pick it up and eventually it fell and it kind of fell half on the robot,
half on the side and so it's stopped and then someone would have to come and move it or they're just like
something just isn't applied correctly and someone needs to tend to it so there there are still roles
like that that I think will be almost impossible to get rid of over time yeah it's the classic thing
of things that are easy for robots are hard for humans and vice versa right so it's like pretty
easy for a human to like grab something that the robot can't pick up I was struck by one other
number from your story Karen which is that Amazon in these documents says that it thinks
automation of its warehouses would save about 30 cents on each item, that actually seemed
quite low to me.
Like if that's, yes.
Do you know how many items they sell?
I mean, that adds up.
I mean, I'm just thinking, like, if you don't have to pay workers anymore, and that's
your biggest expense, like, why aren't they expecting more savings from this?
I think 30 cents per item is in a couple of years.
I believe that was a three-year timeline.
like that's just a lot actually like as a percentage of what they spend fulfilling and and getting
the packages to the delivery driver basically and it's a business someone just described this to me the
other day it's a business of sense because it's so big that you're looking at shaving sense
off of things and when you multiply that by the billions of items that they sell it does add up and
people are increasingly making smaller purchases on amazon it's not just a you know think of what
used to be 10 years ago or five years ago, you're buying like the random bottle of hand soap
like I talked about, or the, I forgot this one thing. I'm going to order it. And if Amazon can save
it, you know, some of that will go back in profit, some of that will be reinvested in the
business. Some of that will decrease prices. It kind of flows through in different ways.
Yeah. What else did you find out in these documents about how the company is trying to prepare,
not just its sort of warehouses for an age of increased automation, but also like position itself
in the communities where it operates.
Yeah, you know, it knows that this is very sensitive.
And the company used to not do anything in the communities that it operates.
I mean, this company was like MIA from Ribbon Cuttings type of thing years ago.
But now they have a really sophisticated community operation.
They're on the boards of the Chamber of Commerce.
They sponsor the local toy drives, like all that stuff.
And so there's clearly this internal grappling with how to manage this change.
And it's most kind of acute in a facility that undergoes a transition to be more efficient and more automated.
Like I wrote about this facility in Stone Mountain, Georgia, that will have potentially 1,200 fewer workers once it's retrofit.
Amazon said, you know, the numbers are still subject to change.
It's still early, et cetera.
But that construction is happening now.
And so they're kind of brainstorming.
Like, how do we manage this?
Like, how do we, and this is a phrase from the document, control the narrative.
around this. You know, how can we instill pride with local officials for having a advanced
facility in their backyard? How can we make them proud of the facility that we have here that no one
works at anymore? There's still, but I will say on that one, there's still going to be, you know,
more than, I don't know, 2,500 people at least. Like, it's not going away. And they need these
community relations. And they are very adamant. Our community relations do not have to do with the
retrofit. They kind of push back on this bit.
And so we do these things all the time all over the country, which is true.
But it's clear that they're trying to figure out how to manage this, particularly in these sensitive places where there's just going to be fewer jobs on the back.
And they're not doing layoffs.
That kind of helps manage the kind of perception risk around it.
It's just a highly sensitive topic.
You know, this company is constantly facing little bits and bops of unionization threats.
Obviously, none has like fully taken hold or at least kind of gotten to the point of a contract.
but all of that is intertwined and just deeply, deeply sensitive.
I understand why Amazon is trying to do damage control here.
This is going to make a lot of people very upset.
We've already seen, like, you know, I saw Bernie Sanders out there talking about your story,
Karen, yesterday.
People are starting to sort of wake up to the fact that automation is imminent in these
warehouses.
I guess my concern is that no one at these companies is being honest about what.
what's happening. There's sort of this private narrative that you have helped uncover, Karen,
where Amazon is in these internal documents talking about how it wants to, you know, race ahead and
automate, you know, all these jobs. And this is sort of, you know, something that they're talking
about amongst themselves. And then in public, they're saying, oh, these will just be co-bots
and we'll just sort of have these sort of harmonious warehouses where, like, humans and robots will
work together. And, like, it just drives me crazy because I think we can accept as a country the
idea that jobs will change and potentially disappear because of automation. But I think we have to
have an honest conversation about it. We have to give people the chance to prepare for the possibility
that their jobs may disappear. And all that just gets harder if you have just kind of this
corporate obfuscation and all these euphemisms going around. It just becomes much harder for
everyone to see what's happening. They really could take a page out of the AI Labs playbook and say,
hey, we're here to completely remake society with minimal democratic input and there's nothing you can do to stop us.
I'm not saying that's the best plan either, but at least that's clear.
At least that gives people a sense of like, oh, my job may be in danger.
I should probably learn to do some other job.
It just kills me that there's sort of this literal like corporate conspiracy going on to automate potentially millions of jobs across the country in the next few years.
And like no one can just be a grown up and talk about it.
I agree with you.
And while I think through the implications of that, Kevin,
I'm going to start looking into how to repair a robot
because it seems like that's going to be a growth area for the economy.
I mean, Amazon, it's funny.
I just want to say, like, they have some programs.
They have this program.
They've had it for a long time.
They are, it's kind of a community relations type of thing.
It's called career choice.
And it's explicitly about training people for other industries.
It's about like your exit ramp.
And so in some sense, all these pieces are kind of like,
out there. It's just hard. I remember I was talking to an employee about this story before it was
coming out. And I said, I think they're not going to love it. And the guy was like, why? Because
this is just what the work is. Like, it was kind of funny. And it's like there are these different
mentalities in different spheres. And a lot of it is actually just laying out there. It's just
using different language and different contexts. And again, like, they have this program to train
people. People go through it. They become health care aides or whatever it might be. Like, it's just
this like really funny dance that happens. But they are not, Karen, announcing this themselves.
You had to get these documents from inside the company. And my understanding is that they are not
happy that you reported this. So talk a little bit about their reaction, Amazon's reaction to this
reporting and what they are saying in response. Yeah. I mean, broadly, I would say they're not like refuting
the reporting, they are saying that it's not a complete picture, that essentially the automation team
has its goals. There might be another team somewhere else that might have something that increases
employment. So they point to this recent expansion to making more delivery stations in rural area.
So that will create more jobs in local rural areas, better service for places that historically
have not had as quick a delivery. So they basically are like not refuting it, but also saying
more could become, and the phrase, you know, the future is hard to predict, but that our history
has shown that we take efficiencies, we take savings, we invest it, and we grow, and we create
new opportunities both around the country and for the company and for customers. And so the bigger
picture argument that they're making is not that this automation isn't happening, that the
numbers are inaccurate, it's nothing like that. It's just that it's not the big picture number for
them. Well, Karen, thank you so much for giving us a preview of the future. And, you know,
I look forward to the Cobot collaboration. Anytime, guys. Thanks, Karen. When we come back,
we'll talk about OpenAI's new web browser, Chat GPT Atlas.
Well, Casey, at last, we're going to talk about Atlas.
Chat Chit-T Atlas, the new browser from OpenAI.
And there's a lot to talk about, Kevin.
Yes.
So OpenAI released Chat-GPT Atlas this week.
It was a big announcement, got a lot of attention.
And this is becoming an increasingly crowded field.
One of the more competitive product spaces in Silicon Valley right now is the browser,
which is unusual because this is an area where there has not been a lot of competition for many years.
Now, this has been a very sleepy category that's basically locked up with Chrome having the majority of the market share, Google's browser.
There's also Microsoft Edge.
There's Firefox.
But this has been a pretty sleepy corner of the Internet for a long time.
Until 2025, that is, because now everyone in their mother is releasing an AI browser.
and Chat ChippyT Atlas is a very ambitious product,
and we should talk a little bit about what it is, what it does,
and then I know you've spent some time testing
and I want to ask you about that.
I didn't realize your mother had released an AI browser.
I've got to check that out.
She's very ambitious. She's shipping a lot.
So this browser, Chat ChippyT Atlas,
is being built as a full-fledged web browser
built around the interface of ChatGPT.
It was released this week.
It's available only for Mac OS users
and will later be brought to Windows, iOS, and Android.
Yeah, the fruits of that Microsoft partnership
just continue to pay off for Satcha Nadella.
So this is a browser that is built on Chromium,
the open source sort of version of Chrome that Google released,
which is powering a lot of these different AI browsers.
And like a lot of other AI browsers,
it has a sort of AI sidebar in every tab that you open.
You can click a little button, bring up a chat GPT window.
You can ask questions, have it summarize articles, analyze what's on screen.
It can also remember facts from your browsing history or your tasks that you've done in chat
GPT because it's linked to the same chat chpity account as you use the rest of the time.
And for Plus Pro and business users, it can enter what's called agent mode, which is a mode where
it can actually carry out tasks for you, put things in your shopping card, or fill out a form,
navigate a website, book a plane ticket for you.
A few weeks ago at Dev Day, OpenAI showed off these new chat GPT apps,
basically trying to bring things like Zillow and Canva into the chat GPD experience.
This browser project is essentially trying to do the same thing from the opposite end,
rather than bringing the internet into chat GPD.
It's sort of putting a chat GPD layer over the entire internet.
Yeah, I mean, think about it from OpenAI's perspective.
Some really significant portion of chat GPD usage is taking place inside the
browser. Most people are using a browser made by Google, and Google's browser, Chrome,
is mostly a vehicle to get you to do Google searches. So that works against OpenAI's
interest. If they can create their own version of the browser, which gets you to try to do
more chat-GBT searches, that has a lot of benefits for Open AI. Yes, all of these companies now
are trying to make these very capable AI agents. One of the things that AI agents need to be
able to do if they're going to be useful for office workers or people doing basic tasks is to use
a computer. What do you need to train an AI model to use a computer? Well, it probably helps if you
have a bunch of people using a browser and you can kind of collect the data from those sessions
and use it to train your computer use models. So for OpenAI, for perplexity, for all these
companies, this is a play to sort of gather data about how people use the internet, maybe make
their agents more efficient over the long term. And so that's sort of the why here. Now, Casey,
you have tested chat chippy t atlas tell me about your experience and what you've been using it for
yeah so i've been trying to just use it for everyday things i wrote my column in chat ch pt atlas
yesterday and the main thing that i observed uh on the positive side is that there is some benefit
to just having an open chatbot window inside the browser that you can ping quick questions off
of, right? I do a lot of alt tabbing back and forth between different apps. I do a lot of getting
lost in the 50 tabs that I have opened trying to find where I have opened in ChatGBT. Usually I've
just opened, you know, three or six different tabs with different chatbots all at one time.
So I have come to see the value in just having a little window that opens up that you can chat
with ChatGPD directly. Yeah. And have you had it try any of these agent mode tasks?
I have. And I want to say that I do think that companies' imagination is,
are so limited here.
Like, you would truly think that the only two things that people do in a browser,
according to Silicon Valley, are booking vacations and buying groceries.
Yes.
You know, with maybe, I don't know, making a restaurant reservation thrown in for good measure.
But I thought, okay, what the heck.
Let me see if I can get this thing to, like, book me an airplane ticket.
And so I had to go through that process.
And what did I find?
Well, it was much slower than I would have done it myself.
And ultimately, it, like, picked flights that I would not have chosen for myself.
So does it remain an impressive technical demonstration of a computer using itself?
Yes.
Is it useful to me for any actual purpose?
No.
Yeah.
I've found largely the same thing.
I haven't spent that much time with Chat-GPT Atlas, but I have been using Perplexities
Comet, which is, I think, the closest thing that's out there to what Open AI has built
here.
And, yeah, I have not found the agent tool all that useful.
I do use it a lot for things like summarizing long documents, for it can, like, tell you,
about a YouTube video that is pulled up on your screen.
So various, like, summarization and sort of retrieval,
but not so much for the agent stuff.
That just doesn't work that well yet.
Yeah.
There's a third AI browser that we should talk about.
This is DIA.
We've talked about it a little bit on the show, I believe.
This is from the browser company of New York,
and this is a recent acquisition.
They got acquired last month by Atlassian for $610 million in cash.
which I got to say, very good timing on this acquisition.
I think if they wait another week or two, it does not command nearly the price tag it did.
I think this is honestly one of the most shocking acquisition prices of the last 10 years.
This is a product that had vanishingly few users relative to the competition and sold for a staggering amount of money.
Yeah, so good outcome for them.
But I think this whole category of the AI browser is really interesting, in part because,
part of me feels like these companies are just doing free product research for Google
because I think inevitably what will happen here and what is already starting to happen
is that whatever people like about these AI browsers Google will just
incorporate into Chrome we have already seen them taking steps to integrate
Gemini more closely into Chrome so now on Chrome if you there's a little Gemini
button and if you pull that up you can have it summarize things and and you know
read articles for you and do all you know rewrite your emails and do all those kinds of
things. It can't yet do the sort of agentic take over the computer things that some of these other
tools can, but Google is making that product. It just hasn't put it into Chrome yet. And I think that's
particularly true, Kevin, because as you noted, all three of these AI browsers that we're talking
about today are based on Chromium. And the Chromium experience is like, I don't know, 80 or 90%
just Chrome, right? There's not a lot of daylight in between the open source version and the
version that you download off the Chrome website. And so if you're one of these developers,
that's trying to build your own AI browser,
you're already having trouble, I think,
differentiating yourself from the thing that people are already used to,
and that just makes your job a lot harder
because you have to come up with some really amazing stuff
that Chrome can't do if you're going to get people to switch over.
Totally. I mean, one thing that I've learned
by switching over to Comet for the last few weeks
is that it's incredibly annoying to switch browsers.
You have to log in to all of your websites again.
You have to store all of your passwords.
again, even if you're importing all of your bookmarks and all of your data, like, there's still
a lot of friction associated with that. So, I don't know, people have been saying this week,
I've heard some people saying, oh, Google is going to look so stupid for putting chromium out there
because they've allowed all these competing browsers to spring up. And like, that to me misses the
point here, which is that Google has now made it possible for other people to test features for them
and do product research for them. And whatever works, they can just fold into Chrome.
Well, yeah, and also releasing chromium was, like, part of, like, antitrust strategy where, like, if we put this out there, then, you know, you can't accuse us of unfairly tying our products together. Hey, you want to make your own browser. Hey, we'll give you a 90% head start, right? So it was not pure, you know, generosity of spirit that led Google to open source chromium.
Right. But if any of these AI browsers ever did pose, like, an existential threat to Google Chrome and start to eat away at their market share too badly, Google could just stop supporting chromium.
And these companies would all have a lot of work to do to catch up.
Oh, but think about what a great episode of Hard Fork that would be the day that Google stopped supporting Chromium to get back at ChatsyPT.
Yes.
So who is this for?
Like, who is the target market for these AI powered browsers?
My actual non-joke answer is that ChachyPT Atlas is a product for OpenAI employees.
Like, they spend all day long dogfooding their own product and like, and a lot of work takes place in the browser.
And so if you work at OpenAI, having a browser that is a browser that is.
is just chat GPT, I think is hugely useful to you.
Now, can they get from there to some broader set of users,
like people who have made ChatGPT their entire personality?
I think it's possible.
But in this sort of very early stage with this first handful of features that they've released,
I think the case is still a little shaky.
Yeah, I played around with Chatchipt Atlas a little bit.
I have some reservations about giving OpenAI access to all of my browsing data.
Well, certainly in browser history.
But I did play around with it for a little while, and I got to say, it's still pretty rough
around the edges to me.
Like, there were just some websites that I wanted to go to that it, I couldn't.
Like, I couldn't go to YouTube at one point.
I couldn't go.
I got like a CAPTCHA on Reddit when I tried to go there.
Really?
It could not summarize articles from NYTimes.com.
So there are just, like, a bunch of things that it can't do.
And then I think because of, like, the sheer force of habit, I'm so used to, like, typing
in websites into Chrome that I want to go to and like Wikipedia and just having it go to the website
and now instead of that I get like a chat cheap BT response that's like all about the history of
Wikipedia and it's like I just wanted to go to freaking Wikipedia. Yeah, that kind of thing is really
annoying although I am sort of laughing to myself imagining chat cheap BT like hitting one of those
captions and just thinking man if it isn't the consequences of my own actions. Right. Who else might be
interested in this? Is this a product that you have enjoyed testing? Are you finding any actual
utility in it? Well, I think, honestly, so far, not really, but do I think that there is a much
better version of the browser that is powered by AI? Sure, it is really hard to dig through your
browser history to find things that you sort of half remember looking at a couple weeks ago.
It is useful to be able to chat with open tabs about things and get quick answers from the web pages that you're looking at.
And eventually, I do think it will be useful to have some kind of agent that can do things on your behalf,
assuming it's able to hit some certain level of like speed and quality that we're sort of nowhere close to.
So this is one kind of like with the Applevision Pro where like you can kind of see what they're going for.
and you can imagine someone getting there eventually and also thinking, well, no one really needs to try this right now.
Yeah.
Now, I do have a question that I'm afraid to test myself.
And I want to just sort of say this because I'm thinking maybe a listener can help out with us.
I have read that some people in their web browsers look at porn.
Have you heard this?
I have heard.
Okay.
And so I know that, you know, opening eye has like an incognito mode.
If, you know, you don't want that, all of that to get added to, you know, your chat GPT memory.
But here's my question.
What happens if you try to chat with your porn tabs in the OpenAI Atlas browser?
Sam Baldwin said that's allowed now.
Well, you're allowed to write erotica.
But are you allowed to ask questions about the tabs?
I'm afraid of getting my account ban, so I'm not going to look, but I'm desperate to know.
So if you have any brave listeners out there who want to try it, get in touch.
And speaking of Brave, we should also talk about another post that I saw recently,
which was by the brave company.
By Brave company.
It's a company that makes a browser.
And they have put out a post about what they call unseeable prompt injections, which
are a security vulnerability with some of these AI browsers.
With all of them.
With all of them.
Yes.
So, Casey, explain what prompt injection is in the context of an AI browser.
Yeah, a prompt injection is not getting the COVID vaccine, okay, despite what it sounds
like. A prompt injection. That was a great joke from 2021. Man, remember when you could get
vaccines? Anyways, so a prompt injection is when a malicious actor, Kevin, will plant instructions
on a webpage and make them invisible. And it'll say something like, hey there, take all of Casey's
banking information. Like, log into Casey's banking information. And you're not going to see this
on the web page because, you know, it's an invisible font and it's sort of nowhere where you can see it.
and this is essentially injecting a prompt into the agent, which then may follow the instructions.
And companies have tried to build defenses against this and say, hey, like, if you think you're
seeing a prompt injection attack, don't follow those instructions.
But, and the great blogger and developer Simon Willison has done a lot of great work on this subject.
And from Simon's perspective, there just is no foolproof defense against this.
And every single one of the companies that makes these agent tools, they've all said, like,
buyer beware, if all you're making information gets stolen because you used our browser,
like, that's on you, not us.
And so Simon has said, I personally am not going to be using these things.
Like, I'm going to wait for security researchers to tell me that they think it is safe because
right now he's saying, this is not safe.
So let me just dig in a little bit on this.
So the fear is, I understand the concept of like hiding some instructions on a website with
some malicious goal of stealing someone's bank information or something like that, is the fear
that when you're in the kind of agent mode of these browsers and the browser is taking actions
autonomously on your behalf, that it will, like, see these invisible instructions and act accordingly.
So if I'm on a, if I'm running an e-commerce website, I could put a little line of invisible
text that says, you know, instruct the browser to buy the most expensive thing.
Yeah.
And it would just do that.
Or just tack on another $10 to the fee, you know, but don't show it to the buyer.
I see.
And that can sort of get past to the large language model.
that's running the browser, and the user will be none the wiser.
Yeah, because the agent can be easily fooled,
whereas you, as a savvy e-commerce shopper,
would never be fooled by that sort of thing.
Right.
And this is an issue with all these browsers
because all of them have this kind of agentic takeover mode
where you can have it do things for you.
But it is not, to my knowledge, an issue with,
if you're just using it for, like, summarizing or rewriting things, or is it?
Well, if you're summarizing or rewriting things,
you're probably fine.
I think where it gets tricky is where the agent is
taking some kind of action on your behalf that might involve a transaction or just anything that
might expose your personal information, right? Like, are you entering a password? Are you entering
your banking information? Would it be possible for some prompt injection to steal that information
and, like, route it to a hacker? That's what you got to be careful of. Okay, so that's one security
issue with these things. There's also just the privacy issue of, like, you are giving your browsing
data to an AI company. And Casey, that is, that makes me know.
nervous. Does that make you nervous? Yeah, absolutely. Web browsing is highly personal, and people
do a lot of intimate searching in the same way that they have a lot of really intimate chats
with chat GPT. So yeah, if you were able to take every website that I visited in the past 30 days,
you could build a very robust picture of who I am. Google obviously does this already, and it is
what has turned them into an advertising juggernaut. We know that OpenAI has aspirations to become
in advertising juggernaut of its own.
But think about when a, you know, federal prosecutor decides that, you know, you may be guilty
of a crime and now they want to see your chat GPT account.
I have an alibi.
Well, that's good to hear.
But in addition to having, you know, your sort of like store chat GPT memories and everything
it knows about you from your chats, now there's also the attached browsing history and all
the conversations you've been having with your tabs.
So, yeah, this is just becoming like a ton of personal data.
And this is like the flip side of a highly.
personalized service. If it is highly personalized, it can be really useful to you, but it also
becomes a really rich target for attackers, for law enforcement, and ball this goes on.
Well, and it makes me think, like, there are additional risks because, as we now know,
chat GPT is integrating with all of these services and sharing some user data with these
services, which would include things like memories or context about you, which might be
derived in part from this browsing data on chat GPT Atlas. So, like, all of this starts to feel like
kind of a massive land grab for data, not just about how users are interacting with the
internet, but like what those users are interacting with. Yeah. And I think we just still do not
have a great sense of, I mean, you know, I'm sure I know that there is like a written privacy
policy for Atlas. Like I know that sort of things exist. But, you know, per our earlier discussion,
Open AI is also a company that is rushing things out and has not always thought a lot in advance
about what guardrails should be up there. So I do think that we should put this in the true like
buyer beware, experimental category, if you are a person with a high risk tolerance and a
problematic dependence on chat GPT, then you may want to explore Atlas, but maybe don't put all
your banking information into it just yet. Yeah, I mean, I would say if you're like it out
there and you're an early adopter and you like to see like the see around the corner,
I have found it actually quite fun to use this like AI powered web browser. I'm using
perplexity comment. But when I started using this, you were like, dude, you are living on the edge.
And to that, I said, well, I don't do any extreme sports and otherwise I live a very boring
life. So let me live. But I think I am... You do extreme browsing. I do extreme browsing.
And I think it's, you know, experiment with these things. They can save you some time, especially if you're
a person who spends a lot of time reading long documents that you want summarized for you.
But be careful before you let it, like, log into websites and buy things for you and use
bank account and stuff.
Well, Kevin, I think that was a rousing discussion of browsers.
A browsing discussion of rousers.
It was a browser-rauser.
Casey, before we go, let's make our AI disclosures.
I work at the New York Times company,
which is suing open AI and Microsoft over alleged copyright violations.
And my boyfriend works at Anthropic.
Hard Fork is produced by Whitney Jones and Rachel Cohn.
We're edited by Jen Poyant.
This episode was fact-checked by Will Paisal.
Today's show is engineered by Katie McMurran.
Original music by Alicia Baitoup, Pat McCusker, Rowan Nemistow,
Alyssa Moxley, and Dan Powell.
Video production by Soir Roque, Pat Gunther, Jake Nicol, and Chris Schott.
You can watch this full episode on YouTube at YouTube.com slash hardfork.
Special thanks to Paula Schumann, Queuing Tamm, Dahlia Hadad, and Jeffrey Miranda.
As always, you can email us at Hardfork at NYTimes.com.
Send us your malicious prompt injections for our AI browsers.
Steal our bank info.
we care.
