Hard Fork - A.I. Goes to War + Is ‘A.I. Brain Fry’ Real? + How Grammarly Stole Casey’s Identity
Episode Date: March 13, 2026
A.I. is changing the ways war is waged. This week, we explore how the U.S. and Israel are using A.I. to identify targets in the conflict with Iran — and why data centers and fiber optic cables are targets on the front lines. Then, researcher Julie Bedard breaks down “A.I. brain fry,” a new condition she and her colleagues studied among A.I. users at work. And finally, Casey shares his battle with Grammarly after the company used his identity in a new A.I. feature, without his consent.
Guest: Julie Bedard, managing director and partner at Boston Consulting Group who is also the lead author of a survey of “A.I. brain fry” in the workplace.
Additional Reading:
U.S. at Fault in Strike on School in Iran, Preliminary Inquiry Says
How A.I. Is Turbocharging the War in Iran
Anthropic’s A.I. tool Claude central to U.S. campaign in Iran, amid a bitter feud
A.I. Fatigue Is Real and Nobody Talks About It
Token Anxiety
A.I. Doesn’t Reduce Work — It Intensifies It
Grammarly Is Using Our Identities Without Permission
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here: https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Well, I'm having sort of a weird day.
How so?
Well, I woke up this morning and I checked my social media feeds and I saw messages like the following.
You're garbage and I hope you lose your job and become homeless.
God, what a waste of sperm you are.
And if you have never seen a message like that before 8 a.m., you might not work for the New York Times.
Well, I suspect that I know what this was about, but tell the listeners what made
people so mad. So my colleague, Stuart Thompson, and I recently published this quiz, which is basically
a set of AI written passages next to unlabeled sort of works from masterful human writers.
Yeah. And it was sort of designed as kind of a blind taste test where you'd pick which one you
liked better, and then it would tell you, you know, which one was generated by AI and which one was
written by a human. And Casey, people did not like this quiz.
Well, what were the findings of the quiz?
Well, so the big headline finding is that, like, it's basically a coin flip. Like,
slightly more people, at least so far, have preferred the AI written passages. But when you tell
them that they prefer the AI written passages, they get very mad. Because they think that they're
too smart to fall for AI writing. Yeah, or they just don't like the way that the test was constructed,
or they just, it makes them uncomfortable, or they think, you know, we're cooked now that AI can write
passable versions of this thing, or they just start saying, you know, oh, it's just, it's because
it was trained on all these books, so obviously it can sort of mimic them. So I think there's a lot of
different emotional reactions, but mostly the emotional reaction has been to get mad at the people
who made the quiz. I have to say, you seem excited about this. Like, whenever a large group of people
gets mad at you, you experience a glee that I rarely see in people. It's not a glee. It's just like,
yeah, you're right, maybe there's a little bit. Okay.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, how AI is reshaping the war in Iran.
Then, researcher Julie Bedard joins us to discuss the discovery of a strange new condition they're calling AI brain fry.
And finally, I was turned into an AI editor against my will by Grammarly.
Here's how I stopped it.
It involved overwhelming physical force.
All right, Kevin.
Let's get into the biggest
news of the week, which is the war in Iran. Specifically, we want to talk about what we know about
how AI is being used in this fight. Yeah, and I think the reason to talk about this is not just because
it's happening and it's the biggest story in the world, but also because I think this is really a
turning point in the use of AI in the military. We've been hearing for years and reading science
fiction books and listening to people talk about the use of AI in military applications. But now I think
we are starting to see exactly how these tools are being used on the battlefield and what kind of
effects they might be having. We are. And I'll say up top that any time you're talking about
the use of technology in war, there is always the risk that you are just passing along propaganda,
right? Because both the military and the contractors have a vested interest in telling you,
hey, we have some real gee whiz new stuff and it's totally changing the game, right? Everybody has an
incentive to tell you that. And yet,
as you and I have dug into it, we do believe that there are some notable ways that AI is being used,
and I think it is worth mentioning them.
If for no other reason than I think it's been the experience in the United States over the past couple of decades,
that tools that are deployed abroad during times of war sometimes come back home after the war
and wind up being used against American citizens.
Yeah.
So I think we should tease apart a few things here, one of which is like let's talk about how the actual AI tools are being used by the military,
what the tools are, what the kind of ramifications of using them this way are.
We should talk about how Claude in particular seems to be a key part of the war in Iran so far,
and at least from what we know, seems to be behind a lot of the strategic decisions and operations that the military is making.
And finally, about how this conflict is or isn't going to reshape the future of AI by doing things like taking aim at data centers,
by interrupting the supply chains of things like semiconductor materials,
all the larger questions about how this conflict is playing out.
And before we get into it, let's briefly do our disclosures.
My fiancé works at Anthropic.
And I work at the New York Times,
which is suing OpenAI, Perplexity, and Microsoft over alleged copyright violations.
Okay, Kevin, so where should we begin?
Well, let's talk about how AI is actually being used in the war in Iran
and what we know about the actual deployment of this stuff.
Casey, what do we know? Yeah, so I read a great overview this week in the Wall Street Journal
by Daniel Michaels and Dov Lieber, who go into good detail about what we know about how
the United States and the Israeli militaries are using AI. They're up front about the fact
that the military is trying to keep a lot of this secret. They are not apparently going into a lot
of detail, but there are some things that we know. One is that Israeli intelligence for years
had been monitoring traffic cameras in Tehran that they had hacked into and also eavesdropped
on senior officials' communications. And this is a big theme, Kevin, that runs through all of the
coverage of AI in the war in Iran, which is that the military is saying that it is very effective,
as you would probably imagine, at processing large quantities of information. Yeah. So you've got all this
data coming at you. If you're, you know, running a military in the year 2026, you've got data
from drones and sensors and maybe security cameras that you've found a way into. And you can kind
of use AI to process all of that to put it onto some kind of like a real-time dashboard so that you
can just like open a screen and kind of see where all your supplies and all your troops and where all
the enemy combatants are and like use it to sort of make sense of this wave of information that is
coming at you every day. Yeah, you know,
Recently on the show, as we've been talking about the conflict between Anthropic and the Pentagon,
we've been talking about the potential eventually to have autonomous weapons out in the battlefield,
potentially killing people without human intervention.
And the big message that I'm reading in the coverage so far is we are not there yet, right?
The AI tools that are being used, we're seeing them in fields like intelligence, mission planning, logistics,
actually pretty far away from the battlefield,
doing things like helping to find a target to send a missile at,
and then after an attack, trying to do some kind of quick analysis to see,
hey, what exactly did we hit and maybe what should our next target be?
It's also really clear that what's happening in the military is what I would call,
like, shrinking the haystacks where there's sort of these massive troves of data,
where it's like we have, you know, hundreds of thousands of phone calls or audio recordings
or emails
or intercepted traffic to Iranian websites,
and we can use AI to kind of narrow down the bits of that that might be useful to us,
because in all intelligence gathering situations since the dawn of eternity,
like 99 plus percent of what you're collecting is totally useless,
and there have been entire divisions of humans who have been employed to, like,
dig through all that stuff and find the stuff that's actually useful,
and now AI can do that pretty well.
Yeah, and military leaders are saying that there are many, many missions that just never happened
because they didn't have the manpower to do exactly what you just said, and now they do.
And I would point out, Kevin, that again, you know, in our whole discussion of Anthropic
versus the Pentagon, we were talking about, you know, the risk of this technology being deployed
against Americans and how effective that could be and, you know, all sorts of surveillance operations.
So I think it's important to highlight, like that exact thing that we were talking about, like,
sort of like a bad scenario in the United States, if the government was doing it to its own people,
is just sort of absolutely happening right now in Iran.
Yeah, and we probably won't know the extent to which it's happening
because most of it is classified and, you know,
nobody in the military wants to give away their secrets
to any potential adversaries.
But my best guess, and from the people that I've talked to
who have been working on this stuff,
is that this is happening pretty rapidly,
that we are seeing many, many divisions of the military
that are essentially using this stuff every day.
Yes. Now, one question that is coming up a lot is
to what extent, if any, is the military starting to offload decisions to AI, right?
Is it the case that there is some military commander that is typing into a chatbot,
hey, should I send the missile here or there?
And the military's public statements are that they are not doing this, right?
They are sort of taking care to say, no, like, humans are in the loop here.
We are relying on human judgment.
But there are other experts that are saying, you know, at some point, if you're going to be
consulting with a chatbot, and the chatbot is getting smarter and
smarter, before too long, it's probably not going to feel very different from the AI actually just making the decision for where to shoot a missile.
Yeah, I think that's a really good point. I think there is a difference between a fully autonomous weapon that can sort of do everything from selecting the target to like firing the weapon all on its own with no humans in the loop.
But I think what you're talking about is sort of a system that can do everything except fire the weapon. It can sort of select the target. It can tell you the right timing. It can identify all the objects in the surveillance footage.
and it can kind of give the military officials the confidence they need to go ahead and push the button.
And there's some worry that this is starting to happen with the help or the encouragement of AI.
There was a missile strike in Iran that hit an elementary school the other day.
And according to Iranian officials, killed over 175 people, mostly children, horrible thing.
And people have been wondering if that was related to Claude or some other AI system
telling the military, maybe erroneously, that this was a legitimate target. Now, we should say that
particular incident is still under investigation, and initial reports from the military have said that it was
unlikely that AI was responsible in that case. But I think this is the kind of thing you're going to
start seeing more and more of is like when there is an attack that, you know, kills civilians or doesn't
hit its intended target, people are going to be asking, oh, was that a human who made that mistake?
or was that an AI system?
Yeah, and I have to imagine, Kevin,
that there is just going to be more and more pressure
within the military to more fully defer
these decisions to AI systems, right?
Because at some point,
there will at least be some contingent in the military
saying these systems are more trustworthy.
They can make decisions faster and let's do it.
So I think that's just something
that we need to be very much on guard for.
Yeah.
So that is what we know about how AI systems
have been deployed so far.
But Kevin, as you mentioned,
there's also been a lot of discussion
about, well,
what some particular models may or may not be doing during the war.
Yeah, and I think Claude and Anthropic have come up a lot in recent weeks for obvious reasons.
They had this big fight with the Pentagon.
But it's also the case that right now in this war in Iran,
Claude is the only AI model that has actually been deployed inside classified military systems.
So to the extent that AI is having an effect in Iran, it is probably Claude.
Yes, and The Washington Post had a story about AI and the war in which they said that
Claude was so essential to operations that if, for some reason, Anthropic said, hey, we want you to
stop using Claude, the military would push back and say, we're actually going to force you
to continue to use this product. So just, again, the continued strangeness of the situation,
the Pentagon has now formally declared Claude and Anthropic to be a supply chain risk this week.
Anthropic sued over that.
Yeah, and there's also been a lot of reporting coming out over the past week or two about
the actual ways that Claude is being used and deployed in the military.
There's been some reporting on this system built by Palantir called Maven Smart System,
which from what I can tell is kind of a real-time dashboard for intelligence
that basically allows you to pull in a bunch of drone footage and sensor data
and track a bunch of supplies and troop movements and things like that.
And by the way, this is the system that caused a huge controversy at Google in the late
2010s, and, you know, Googlers, like, quit over this. They did not want the company involved with
Project Maven. And eventually Google dropped the contract. When they did, Palantir stepped in
and eventually brought on Claude. Right. And so Claude has been integrated into Maven Smart
system since 2024. And the reporting that I've seen over the past week, including in this
article in the Washington Post, said that this combination of the Maven smart system built by Palantir and
Claude has already suggested hundreds of targets, issued precise location coordinates, and
prioritized those targets according to importance. And according to this same article, it says that
the use of Maven and Claude has turned weeks-long battle planning into real-time operations.
So this is not just like a kind of tool that people in the military are using for handling
like routine office work. This is actually sort of a core part of their strategic decision-making
process. Now, Kevin, do you know if this is a, like, specialized model of Claude? Again, I'm thinking back to our
conversation with Amanda Askell, where she talked about all these efforts to make sure that, you know,
Claude is really good. I'm sort of imagining that version of Claude being told like, hey,
analyze all this footage and decide like where to send a missile to kill a bunch of people.
It's hard for me to imagine that version of Claude being like, yeah, yes, sir, right away, right? So do we
understand at all how that is working? So my understanding is that it is largely the same model that
consumers and enterprises would use, but that there may be some additional fine-tuning to make it work
inside these classified systems on these sort of military applications, that it may sort of refuse
different prompts or fewer prompts than a model aimed at consumers, and that there may be some
additional kind of changes around the edges, but that it's basically the same Claude that you and I have.
I see. Well, so this appears to be a very temporary phenomenon. We know that OpenAI has signed a deal
with the Pentagon, and presumably its systems will be onboarded onto classified defense systems
soon. Gemini was approved for non-classified uses at the Pentagon. So I think pretty soon the Pentagon
is going to have more options to choose from as it deploys these systems.
Yeah. So that is how AI is being used offensively by the United States and Israel, Kevin. But we should also talk about what Iran is doing offensively against some of these AI systems.
Yeah, this is a part that I have not spent as much time looking into. So tell me what you're seeing.
Well, so as you know, there's been this huge buildout of AI infrastructure throughout the Middle East over the past several years. We've seen these multi-billion dollar projects being signed and built in
Saudi Arabia and the United Arab Emirates and Qatar.
And these deals involve basically all of the big American tech giants, Amazon, Microsoft, and Google.
And I would say there are sort of like two major pieces of infrastructure that are relevant here.
One is data centers, right, which are, you know, being used to run AI systems and also just provide basic cloud hosting and storage services to all sorts of companies.
And then you have fiber optic cables,
which connect those data centers to the rest of the world.
So let's maybe talk about the data centers first.
Sure.
So the Guardian reported that on the morning of March 1st, which was the day after the initial U.S. attacks in Iran,
Iran responded by striking a couple of Amazon data centers in the UAE, and they also damaged a third one in Bahrain.
And in the immediate aftermath of that, people in those countries were opening up their phones
and they couldn't check their bank balances,
they couldn't order a taxi.
It seems like a lot of services in those countries
were being hosted on AWS,
and they just didn't have access to those services anymore.
Afterwards, Iran put out a statement
that said that they had gone after the data centers
over the role that they played
in supporting the enemy's military and intelligence activities.
That's so interesting.
So they were basically targeting data centers
rather than, say, troops
because they thought it could actually
be more disruptive if it turned out that the U.S. or Israel or any of the other allied nations
were running their services on data centers located in the Middle East. Yeah, well, I mean,
and also, like, data centers are a great target. Like, they're just sitting there. They don't have
any defenses, right? So you can just send a few missiles over there and do an asymmetric amount of
damage. And so now, Kevin, people are starting to question the logic of doing all these
multi-billion dollar deals in the Middle East. They're saying, hey, should this really
be a linchpin of global AI infrastructure if it's just kind of a rough neighborhood and all of the
investments that you're going to build there are just going to be kind of perpetually at risk?
Yeah, I think that's a really interesting sort of tactical shift that just speaks to how central
all of this AI stuff has become in military conflict. And then you have all these other risks
of disruptions to the supply chain. And right now there are lots of ships stuck that can't get
through the Strait of Hormuz because it's been blocked off.
And we now have people and companies saying that some of the raw materials that you need to make things like semiconductors might be delayed for weeks or months or however long the conflict lasts.
And that prices might go up and it might get harder for companies to build new data centers here in the U.S.
So all of these ripple effects we're starting to see are downstream from the fact that we're at war with Iran.
So that's what's going on with the data center infrastructure.
Kevin, you're also probably wondering what is going on with these undersea cables, right?
So there are very important fiber optic cables that run through the Strait of Hormuz
that are responsible for transporting internet traffic from that region to the rest of the world.
As of press time, as we record this, these lines have not been attacked or disrupted,
but everyone is keeping a really close eye on it because were they to be disrupted,
there is just simply no obvious way to fix them in the middle of a live war.
Casey, how does this all make you feel, that AI is playing such an important and central role in an ongoing war in Iran?
I mean, this to me just feels like the frog is being boiled, right?
Like, when I think of all of the potential violent uses of AI, data analysis is not among those that gets me most nervous.
Although, of course, I do have concerns about, you know, domestic surveillance.
But I also know how rapidly these systems are advancing.
I know the pressures that are quite apparent in our military to use AI for ever more things.
I worry that there aren't going to be appropriate safeguards on those things.
And so, yeah, I just have a high degree of concern about where all of this is going.
I'm open to the idea that AI systems could be used to wage war more
safely and to maybe even prevent casualties, but I am not sure that we have built systems that
will actually do that.
Yeah.
And I would just say, like, I keep thinking about how all of the companies that are building
frontier AI systems today at one point in their existence had decided that they didn't
want their stuff being used by the military.
You know, back in 2014, when DeepMind was a sort of little known AI startup in London, they
sold themselves to Google. And one of the major sticking points in those negotiations, one of the
reasons they sold to Google and not to what became Meta and was at the time Facebook was that Google
had allowed them to have this prohibition on using their technology for military applications or
surveillance. As recently as a couple of years ago, Google's AI principles said that we are not
going to allow our technology to be used for the military. And in 2025, it quietly took that
language out. OpenAI, same thing. They had language in their terms prohibiting their models from
being used for military applications. They took that language out quietly in 2024. Meta, same thing.
Anthropic, interestingly, is the one sort of frontier AI lab that never had an explicit prohibition
on military applications, but they did have a bunch of language in their original terms
that they have amended to make it more possible for the military to use this stuff.
And so, like, I understand strategically why you would make the decision to sell your
AI tools to the U.S. military.
But I just don't want us to forget that, like, all of these companies were run by people
who at one point thought this was all a bad idea, to be selling these very advanced
AI tools to the military.
And then they changed their minds.
And they did that because of some combination of pressure or just maybe market opportunity to get these big military contracts.
But they did at one point have a principle that involved, we don't want our stuff being used to kill people.
And I would like them to at least reflect on the fact that that has changed.
Yes. And for everyone else, the next time one of these companies tells you about some unshakable principle that is the foundation that the entire company is built on,
it should make you wonder whether that can hold up to pressure as well.
Yeah.
When we come back, are you experiencing AI brain fry?
If so, you may be entitled to compensation.
We'll talk to researcher Julie Bedard about this strange new AI psychological phenomenon.
So, Kevin, I feel like there is this new genre of blogs and social media posts all devoted to the idea that using AI is making people feel completely exhausted.
Yes, and insane.
There's a spectrum, and it starts at exhausted, and it goes all the way to insane.
Sidant Carr, who's an engineer who builds tools for AI agents, wrote a blog post that I saw all over social media recently called “AI Fatigue
Is Real and Nobody Talks About It.” And he said that on one hand, he felt like he'd had the most
productive quarter of his entire life as he uses all these new agentic coding tools. But on the other
hand, he said he had felt more drained than ever before in his career. Yeah, I think people are
starting to sort of use these tools more and come to grips with not only the effect
that's having on their productivity, but also like on their brains and on their ability to kind
of make sense of how quickly things are shifting. I really liked this essay that a venture capitalist
wrote a few weeks ago about what he called token anxiety, which was this feeling that like
if you don't have a bunch of, you know, Claude Code agents, like, running parallel tasks for you
while you sleep, like, you're feeling like you're missing out. And people at dinner parties in
San Francisco are now bragging about how many agents they have running at all
times. So there's, like, something psychological happening to the people who are using this stuff
a lot at work. Absolutely. And recently we have begun to see some actual empirical research on
the subject. So last month, researchers at UC Berkeley published some findings in the Harvard
Business Review from an eight-month study observing workers at one 200-person tech company, and they found
that AI was just making work a lot more intense. Workers were having to multitask a lot more.
They felt like if they were not using a lot of AI tools, they were not keeping up with expectations,
and that they used to have little breaks during the day where, you know, you go to the water cooler and
talk about, you know, what's going to happen on Survivor this week. Well, that doesn't exist anymore,
at least not at this company. And then last week, a group of researchers at BCG shared some similar
findings in the Harvard Business Review. And this one really caught our eye because they found that
under certain conditions, workers are experiencing what the researchers are calling AI brain fry.
And to be clear, that is different than AI brain rot, which is what you get on TikTok when
you start looking at videos of Ballerina Cappuccina.
That's right.
You know, and actually, they thought that Emmanuel Macron might have this, but that turned
out to be AI French fry.
So, anyways, here's what AI brain fry is, Kevin.
They're defining it as mental fatigue from excessive use or oversight of AI tools beyond one's
cognitive capacity, which I think is kind of a funny idea.
It's almost like you got a new coworker and they're really, really smart,
and it's sucking your life force out of your body.
Yeah.
So we want to know more about this study
because I think it gives shape to a conversation
that we're seeing rippling out across the economy
as more and more managers are telling their workers
to start using AI tools.
It is clear that not all is well out there.
People are starting to feel kind of bad
and they're maybe going to be less productive
and more likely to leave their jobs as a result.
So to learn more about the findings in this study,
we've invited the lead author, Julie Bedard.
Julie is a managing director and partner at Boston Consulting Group,
as well as a fellow at the Henderson Institute,
which is an internal research group and think tank at BCG.
So let's bring her in.
Let's do it.
Let's get fried.
Julie Bedard, welcome to Hard Fork.
Thank you.
Thanks for having me.
So let's talk about the study.
You surveyed 1,488 workers
in January of this year from all different disciplines, lots of different companies.
What kind of questions did you ask these workers?
Yeah, we asked them all kinds of questions around how they use AI, how they feel at work,
you know, traditional burnout metrics.
We asked some, you know, sort of proxy questions for cognitive ability.
And we did throw in a question about AI brain fry.
We said specifically, like, what do you think about this thing that could be AI brain fry?
Like, are you feeling that?
And tell us how you define AI brain fry and what the survey results told you about it.
I mean, we defined it as really like a type of cognitive strain.
So we said it was mental fatigue.
It was related to excessive use of, interaction with, or oversight of AI.
And it was about being beyond one's cognitive ability.
So it's sort of like, I'm using the tool, but it feels beyond my ability to process it.
So 14% of people who use AI
said that they felt this.
And I was especially surprised by the extent to which they told us about it.
We asked, you know, open-ended, like, just tell us, what is this thing?
Where does it show up?
How does it feel to you?
And people wrote a lot, right?
Like, they wrote all these things about, it feels like I have 12 browser tabs open in my head.
Or it feels like I'm working so hard to manage the tools.
I'm actually not really doing the work.
Like, I'm not actually managing what I'm supposed to be doing.
I thought this was so interesting because on paper, if you told
me, hey, we're going to give you a brilliant new assistant. They can answer all of your questions.
They can do many of the tasks that you prompt it to do. That would sound very exciting.
Sometimes I think, what would it be like to have like a really great podcast co-host?
You know, somebody who kind of came in, really prepared, asked a lot of great questions,
had a great energy. You'll never know, buddy. And I'll never know. Okay. But some of these people
at work are now having that experience. But what you're saying is that that is not an energizing
thing for them. It's draining them in some way. So what do you
think is the mechanism by which people are coming to feel so exhausted by working with these systems?
Yeah. Well, I do think it's particular to these two things that we found, which is the oversight
of the tools and the intensification of work due to AI. And what people reported specifically
is they put in more mental effort, they felt more fatigue, and they felt information overload.
And, you know, we need more research, right? Like, this is new and we're learning. But my hypothesis,
from working with a lot of different companies on this kind of thing, is it is fun and exciting, combined
with the fact that we feel more pressure. Everybody's talking about AI and AI productivity, right? And I think it's just
human nature: okay, one more thing, let me just sort of try this out, see what I can do. And we're not re-centering on,
like, what was I actually trying to achieve today, right? We're not getting focused on some of the
most important aspects of our work.
Yeah, I'm curious how much you think this really boils down to
fear. Because when I talk to people who are anxious about using AI at work, they circle around
this issue that, like, maybe it's materializing as burnout or feelings of overwhelm, but like at
its core, what they're nervous about is that we now have these systems that can do parts of
their job and they're worried about losing their jobs. Did anything in your study sort of
get to any of the economic or sort of survival anxiety that these workers might have been feeling,
that might have been registering to them as burnout, but deeper down was something else?
Yeah.
So this is probably a good time to separate the two because the brain fry is the cognitive piece.
Burnout is physical and mental exhaustion.
It's more emotional.
It's more about how I feel about work and do I feel like I'm doing a good job at work.
Burnout, we did not find a correlation with brain fry.
So I just want to be really clear.
It was very interesting.
I thought we would.
We did not.
Brain fry is distinct.
And then what we found is, actually, you could use AI to reduce burnout.
So there's a lot of nuance.
Maybe the last thing I would say is we did look at, you know, how positive or negative you feel.
But typically the people who are afraid are not the people who are doing heavy oversight work
in my experience.
Right.
So there's sort of the people who are, you know, leveraging it more like a search tool, right?
They're not necessarily getting up that learning curve to more of the intense
interactions. In your study, you found that people in certain industries tended to experience
AI brain fry more frequently. I was struck by marketing seems to be the place where people are
feeling it the most. And people in areas like management and law and compliance reported significantly
less brain fry. Do you have a theory on why that is? Yeah. So the short answer is,
unfortunately our survey, at least scientifically, was not designed to answer that question.
But I have my theories based on other work that I've done.
And, you know, three years ago, I worked with some of the models to try to predict skill disruption.
I was trying to figure out, like, which jobs will change the most.
And one of the jobs that changed the most from a skill perspective was marketing manager.
A marketing manager was 90% disrupted from a skill perspective.
So that's sort of the first fundamental piece about marketing is, like,
they've tended to adopt it, and it is a really different way of working because of the power of the tools.
The next thing, if I really just think about like what is brain fry, like it's about the iteration,
it's about the oversight. A lot of marketing lends itself to that. Like in the field, we see stories
of folks who are doing image creation. They're doing synthetic consumer panels, right? They're spinning
up a bunch of campaigns at the same time. And it really lends itself to that definition of, like,
when do they know they're done? When do they know the image is ready?
Like, have they defined those success thresholds for themselves?
I'm guessing they haven't yet, right?
Like, they haven't figured out how do you do all the things to the right level of quality
based on the outcome that you're trying to drive for?
It makes sense to me that, like, the more your job is changing, the more kind of vertigo
you're going to be experiencing as these new tools are introduced into your workplace.
You know, Kevin, you just observed that managers seem to be experiencing this less.
One of my theories was that, well, the reason is because they're already used to overseeing a bunch of digital abstractions in their human employees, right?
They're mostly just sending them Slack messages and sending them emails, you know, hopefully meeting in person, you know, fairly regularly.
But I think if you're a manager, you've already been used to sort of overseeing a bunch of stuff and those people just sort of may have skills that people who have not yet been in management roles don't have.
I think there's something to that.
And I also wonder, Julie, if you think there's anything that
is sort of inherently isolating about these tools.
One thing that I've found with using AI for my own work is like,
it's a single player video game, right?
You're going back and forth with a machine.
Very rarely am I in a room with other people using AI with them.
And I wonder if part of the brain fry is sort of this siloing effect
that these tools tend to have in the workplace
where it's like everyone is chatting with their chatbots and their agents
and no one is talking to each other.
I'm glad you brought that up, Kevin,
because back to this point around there's ways to use AI that actually reduce burnout.
The people who are using it for repetitive tasks, they actually were doing those types of things.
Like we found that they felt more socially connected at work.
And so it's interesting, like in all the companies that I go to, I do various types of, you know,
AI enablement and workshops.
And one of the questions that I always get a lot of engagement on is, what could you use AI for?
Which is, like, the three worst things on your to-do list, like the procrastination things,
like the things you really hate to do.
I mean, people love to talk about using AI for those.
And my hypothesis is sometimes that's probably the repetitive work.
And when you use it for that type of repetitive work,
you actually reinvest the time in things that give you energy.
So more work needs to be done,
but I think I've seen that a bit in the field.
And that's what our data would suggest as well.
I want to ask about the three-tool cliff,
which was a funny part of your study, basically.
You found that the sort of
number of AI tools that people are using at work has some sort of bearing on their productivity
or their feelings of productivity. And that actually when you switch from using three to four
AI tools at work, there's something that happens where you start experiencing
these things not as, like, a productivity enhancer, but actually just as more of a stressful thing.
Do you have a theory on why that is or why there seems to be this threshold?
Well, I mean, classically multitasking is not very productive, right? Like, we all are, you know, seduced by the idea that we can do more and more and more.
Yeah, Casey's playing Balatro right now. Exactly. I am not.
So, yeah, no, I think multitasking is part of that. But it's back to this point of, like, I'm overseeing more things.
Like, I'm actually doing more things. I'm starting more things. I'm stopping more things. I have more output to govern.
And, you know, my advice for leaders and managers is to help people understand this.
Like, one of the things I'd love to see: AI fluency right now has mostly been defined by technical skills.
Maybe in the last six to nine months we've started to talk about the human skills that persist.
I actually think cognitive sort of health should be part of defining AI fluency as we go forward.
So both, again, like individuals, like I can start to work differently with the tools, but also, again, managers and leaders
can help protect against that.
Let me ask one objection that some people might have to the research.
You work for a consultancy.
Consultants have an interest in making AI seem difficult so that companies will hire them to
help manage it.
Is there any chance that we're over-pathologizing what is going on here or sort of, you
know, giving a scary sounding name to what might just sort of be a temporary adjustment process
as people, you know, start to use AI tools in the workplace?
Yeah.
I'm glad you've asked that. Maybe what I would say just first about kind of how I look at this
and why I'm doing this research. So I am a consultant. Yes, I do advise companies. It's sort of the
bread and butter of what I do. However, I'm also a researcher and I care really deeply about the data.
And what's been very hard is our clients have wanted answers. Answers that we don't necessarily
have all of the playbook for because it's so new and is changing so rapidly. So I'd say just, you know,
we really designed this to be a data-driven intervention. But beyond that, I think I've been, like I said,
for the last three years at the rock face. Like, I've talked to more than 100 companies. I've actually
trained teams myself. I've been in the room with software developers, marketers, et cetera, trying to
use these tools. And I see that, like, there's something there. Like, there's a real strain where I'm
trying to do the right thing, but something's getting in the way of me being productive with the
tools, and we need to redesign work, hopefully, and particularly, you know, within teams,
to do that better. And like, if you're a worker out there, if people are listening to this
and saying, yes, I am a worker, I am using AI tools at work, I am feeling the brain fry that
you are describing, what can they do to help themselves? What has shown itself to be effective
in your experience? Yeah. So if you're an individual worker, I think first, just acknowledging that
this is a risk is the first thing. The second thing is really focusing on what you're trying to
achieve. It's like back to that outcome piece. I mean, I know this is really basic, but if we were
very clear about we're measuring outcomes, not output, and we're trying to get to the right answer,
and what are those steps to help me get there? And so, you know, from our data, we would say the
things you could do is, one, engage your manager. So with managers who engaged in these questions, we saw brain
fry go down. And I think it's about creating that sort of open dialogue about how should I
use AI? When is it valuable? The other thing is to engage your team on this. So interestingly,
when teams were using AI together and they had better integrated it into their workflow, so like
how I hand off work to Kevin and Kevin does to Casey, we also saw brain fry go down. And, you know,
I don't have the data to say exactly why, but my hypothesis would be that we're not bottlenecking work
in one person, and we're creating actually like a much more effective system where we're getting
the work done with the right outcomes together. It seems tricky to me, though, because I think
there is just so much thrashing around in organizations right now. I think that the amount of knowledge
that any given manager or worker has about AI right now is highly variable. Whether their knowledge
is like keeping pace with the capabilities of the latest models.
That seems like an open question to me.
So I have to say like in the near term,
I actually feel quite pessimistic about this.
I'm sure there are going to be individual managers and teams
that are like doing a great job.
But at a like economy-wide level,
I think people are just absolutely all over the map on this.
Yeah, I think so too.
And I think it's also not clear to me
that people are going to feel comfortable talking to their managers
about how they're feeling about it.
Because I think a lot of people have these reasonably well-founded fears that, like, if you tell your manager, like, I'm using AI to do this part of my job, the manager's first thought is going to be, well, maybe I can lay you off.
Maybe I don't need all these humans anymore.
And I think we're seeing enough of that happening at big companies now where they're laying off big percentages of their workforce and attributing that to productivity gains from AI that I think people are sort of feeling like, well, if I discover how to use AI for my work, I'm going to keep it to my damn self.
Absolutely. Or, Kevin, I think we also see the reverse of that, which is you go on social media and you see people bragging about the insane lengths that they are going to to be using AI at all times, to have their, you know, Claude swarms up and running and coding, you know, while they sleep.
And I feel this sort of deep insecurity embedded in that, which is if I'm not out there constantly telling you how much AI I'm using, you know, I might sort of be next on the chopping block.
My reaction to that is this is why leaders play a really important role.
Because I think, Kevin, your point is well taken.
I think there are things individuals can do.
There are absolutely things managers can do.
But this is about systemic redesign of work.
So, Casey, to your point, like, I don't think AI brain fry is going away unless we tackle it head on.
Like, I don't think this is something that we can sort of just democratize and let everybody figure it out.
Although I think there are things they can do to mitigate.
But I'm really interested in actually like, okay, let's rethink how we get the job done.
Like, you know, we are really bad at stopping work.
Is all work valuable?
Like, if we had leaders engage more meaningfully in these questions, that's the work we need to do if we really want to address some of this.
Julie, I'm wondering how much you went back and looked through sort of historical precedent here.
When I was researching my last book, I was doing a lot of reading about the 1970s when a bunch of manufacturing workplaces like auto plants were getting
all these new automated robots to help them do things like assemble cars. And there was this whole
sort of nationwide panic about this. They called it Lordstown syndrome because the first sort of
GM plant to have this level of automation was in Lordstown, Ohio. And, you know, Congress held
hearings about this like sort of new wave of worker alienation that was happening in these blue collar
manufacturing workplaces for a lot of the same reasons that to me seem like they rhyme with at least
this AI brain fry idea.
Workers were just saying basically, like,
I don't feel like a human anymore.
I feel like I just push buttons
and the robots do all the work.
I don't talk to people at the office anymore.
My managers have all these crazy productivity expectations of me.
And I think what was interesting in that
beyond just the parallels to what people are feeling
in white-collar workplaces today
was that the way that they sort of got out of that
was through striking and through organizing
and unionizing and getting a bigger share
of the profits
that these companies were making from all this productivity.
So I guess I'm just wondering if you could riff on maybe some of the historical parallels before
and where this may all be heading.
Well, I always get the question around Excel and accountants, right?
Like, did the rise of Excel lead to more or fewer accountants?
Or even if you think back actually to do the Industrial Revolution.
One thing I actually think is a really interesting parallel there is, you know,
the rise of technology at that time, in many cases it wasn't until there
was actually a re-architecture of the shop floor that we actually saw the productivity gains.
And to me, that's an interesting parallel to what we need to do with redesigning work.
Julie, one of the questions I wanted to ask you was like, you know, it is the role of the consultant to come in and say, I have talked to people all across this land and I understand the best practices and I will bring them to you. And you can redesign your shop floor so that you can get back to being maximally productive. But I feel, Kevin
and I, we feel like the ground never stops shifting under our feet anymore. And that every few weeks,
some new model comes along where the level of capability goes up. And maybe even something that I would
not have been able to do in November, I actually can now. And before too long, maybe that's
going to be a core expectation for me that is part of my job. So part of me wonders, like,
is this actually even a good time to be redesigning your workflows if, you know, three months from now,
six months from now, the landscape might have completely changed all over again.
Yes, and I have tackled this question many, many times.
Here's my take.
For companies who didn't do anything two years ago, they would have said the exact same thing to me, Casey.
They would have said the tech is going to change.
I'm going to wait.
I want to be a fast follower.
And honestly, there is some smart truth to that, right?
Like, pick your bets.
Like, I definitely wouldn't be doing this everywhere.
but I think this is about learning a new capability and muscle as an organization.
This is about teaching us how to change.
So I would say, like, if you're on the sidelines, yes, it's just going to keep moving.
So you could have had that excuse, you know, a year ago, two years ago, and in two more years.
But you're also going to be missing out on that opportunity to build capability as leaders,
to build that in your teams, to start upskilling people.
I think there's actual things that you can do to support your talent to go on this journey
with you. Yeah. And I would say, like, also if I could add something to that from 1972,
which is apparently where I love going on this subject, there was this sort of team at GM when the
Lordstown syndrome was taking over that had to figure out how to bring back the striking workers.
And one thing they did was that they set up these new humanization councils where basically
workers, people from the assembly line, were invited to give their thoughts on how the robots were being
used and how the machines were set up and how the assembly lines were laid out. And feeling like
they had some input and some control over their situation and were not just like passive bystanders
actually seemed to help. So I don't know whether that's directly applicable to white collar
workplaces that are going through this today. But I do think that having some of the energy
and ideas come from the quote-unquote bottom, from the actual workers doing the individual
contributions, seems to matter. Yeah, I mean, Kevin, that's absolutely right. Like, how do we have more
agency in this? And if you do that, you're going to be really user-centric. You're going to think
about, like, what work do people enjoy doing? What work do they not enjoy doing? What are some of the
barriers, cognitive or otherwise, to getting actually that work done? I think that's exactly right.
Well, Julie, thank you so much for giving us a lesson. Now, if you'll excuse us, we have to go deal with
our AI brain fry. I actually have AI brain freeze.
It happens if you use ChatGPT while you're drinking a Slurpee.
Well, as long as it's not AI brain rot, we're fine.
Yeah, yeah.
We got there a long time ago.
Thanks, Julie.
Thanks, Julie.
Thanks.
When we come back:
the worst AI feature we've ever seen.
It makes you more like Casey.
Well, Casey, I heard you got an exciting new job last week.
I did.
And it was the sort of job, Kevin, that I didn't even know that I had or was doing.
So you had this crazy experience of being selected, against your will and without your permission, as one of the experts for Grammarly, the AI kind of writing assistant.
They have an expert network of people whose voices they have borrowed for the purposes of, I guess, making people's writing better.
So A, congratulations.
Thank you.
I assume the royalty checks are just overflowing your mailbox.
But what actually happened here?
You had a fascinating newsletter about this this week.
Well, thank you.
So this story I first learned about from The Verge.
Their reporter Stevie Bonnefield wrote about this.
And it turned out that last summer, Grammarly had added this feature called Expert Review.
I had not actually used Grammarly until this.
Have you ever used it?
No.
So I decided, you know what?
Why don't I sign up for the free
trial and see what Grammarly can do for me. And if you go to the support page for this feature,
it says that Expert Review, quote, is designed to take your writing to the next level with
insights from leading professionals, authors, and subject matter experts. That sounds pretty cool,
right? Well, scroll a little further down, Kevin, and you see the following disclaimer.
References to experts in Expert Review are for informational purposes only, and do not indicate
any affiliation with Grammarly or endorsement by those individuals or entities.
And so I read that and I thought, when you say that these insights come from leading professionals,
what does the word from mean to you?
Because it sounds like what you're telling me is they don't come from those experts at all.
Yeah, it's like when you see, like, a tub of margarine and it says, you know, “butter-style product” in very small type.
They had sort of an expert network with an asterisk.
None of the experts were actually consulted, and we didn't actually hear from them in any way.
Absolutely.
So Stevie over at The Verge put a bunch of writing through expert review to see what sort of expert names would pop up.
I was one of them.
Congratulations.
Thank you.
You know, as you might imagine, Grammarly also picked a bunch of, like, actual famous people.
So Stephen King, Neil deGrasse Tyson, Carl Sagan.
And I decided to put this thing through its paces and loaded up some recent columns that we published in Platformer and pasted them in to see what sort of experts it would suggest.
And while I was never able to get my own name, Kevin, I did see a succession of people that sort of felt like if you made a list of people who would hate this idea the most, that is who Grammarly had picked.
So Timnit Gebru, a very vocal critic of AI systems the way they are built and deployed, she showed up as a quote-unquote expert.
So did Julia Angwin, who is an investigative reporter.
She writes for New York Times opinion.
And it used her writing, even though she has written a lot about how tech systems are used for privacy and surveillance in ways that are contrary to how we probably want them to be used.
Julia, by the way, filed a class action complaint against Grammarly's parent company on Wednesday
seeking to stop them from, quote, trading on her name and those of hundreds of other journalists, authors, and editors,
and to stop them from, quote, attributing words to them that they never uttered and advice that they never gave.
Wait, can I ask a question about the mechanics of this?
Yes.
Okay, so you're writing in Gramerly, which I gather is sort of like a bolt on to like a word processor.
Yes.
And it sort of detects the topic you're writing about and then pops up a little like,
Clippy thing that's like,
would you like Julia Angwin
to edit this for you? Would you like
Casey Newton to give this one a pass?
Exactly. I'll actually show you an
example here. If you want to look at my laptop,
you can see
that here is the text that I wrote.
And then in this little left-hand column,
in this case, it just says
Kara Swisher. Carra Swisher,
my good friend, pass hard forecast,
legendary Silicon Valley journalist
and podcaster, and someone who
has absolutely no involvement with
grammarly. But her name just sort of pops up there with no disclaimer at all, right? And then when you
sort of click in, it will offer this sort of Kara-inspired advice. And this is the point, Kevin,
where I would like to talk about the kind of advice that this thing actually gives.
So you might expect, given that they were, you know, allegedly trying to borrow the expertise
of real humans, that that expertise would seem like incredibly specific to that person, right? Instead,
what you're getting is just a bunch of very generic advice about something that you might do.
So I noted, for example, that, you know, my colleague Ella Marciano wrote a story in Platformer last week where she went to a protest at OpenAI.
And there was a suggestion that Grammarly had said was inspired by John Carreyrou, the legendary investigative journalist who brought down Theranos.
And the advice basically boiled down to try opening with a colorful
scene and use a lot of rich details and characters, right? Like sort of the most absolutely generic
advice that you would ever imagine getting. And nothing like I would imagine the actual experience
of sitting down with John Carreyrou and saying like, hey, how did you write Bad Blood?
Yeah. How did it say that Kara Swisher would edit a story? So I will just read you the piece of
advice that it gave me. This was also a piece of advice about this protest story.
The fake AI Kara said, could you briefly compare how daily AI users versus AI skeptics articulate risk, creating a through line readers can follow?
A synthesizing sentence here may tighten the narrative arc.
I'm laughing because that is the exact opposite of how I imagine Kara Swisher would edit someone.
It would just be like a string of like four-letter words and like, you know, this sucks, do it over again.
Yeah, it would say stop wasting my time.
you know, like that that would be the advice.
The thing that I just read, I just wanted to acknowledge, like, it is word salad.
Do you know what I mean?
Totally.
Like, you can tell what, I don't know what underlying model they're using here.
I'm guessing it is not a frontier one, right?
It's reading very, like, GPT-2 to me, you know?
So this advice is so bad.
But let's bring this into what I actually find upsetting about this, Kevin.
Yeah, let's make this about you.
No, well, here's the thing.
I'm actually not going to make it about me because I have sort of just long since
accepted that all of these companies have stolen all my intellectual property and are having their
way with it. Where I really feel bad is for the subscribers to Grammarly. These people are paying
$144 a year to be able to use this glorified spell checker, okay? And they load this thing up
and then Grammarly gives them this service. And so if you are a paid subscriber to Grammarly,
you are paying a subscription to get Grammarly to hallucinate on your behalf, right?
To make up a bunch of stuff that is not true, right?
This is not the actual sort of advice that any of these experts would provide,
and you are paying for that service.
When you just as easily could have taken whatever text you had written and pasted it into a free chatbot and gotten generic advice that is just as not-great as what you were getting here.
Right, and the truly crazy thing about this is that despite charging all this money for people to use this substandard AI product, they are not, to my knowledge, passing any of this along to you or Kara or John Carreyrou or any of these authors whose identities they have purloined for the purposes of selling this product. No, they're not. And,
you know, look, I think that all of the AI companies just have a huge entitlement problem in general.
You know, I think that they think, look, if it's on the internet, it is in the public domain and it
belongs to us, and they don't spend enough time thinking about how they are destroying the incentives
for anyone to create a public open internet, right? You just feel like you're kind of getting screwed in this way. So I do think that that is really unfortunate. Yeah. So what did Grammarly say when you
started writing about this? Well, when I reached out to them, they thought about it for a while,
and then finally came back to me on Monday and said, you know what, we've thought about it. And if you're
one of our experts who we didn't consult and we're not paying, you can now opt out of this feature.
How nice of them. So you can now send an email and say, I don't want to be a part of this system anymore.
And so, you know, I wrote the story and got a lot of comments on social media like, you know, geez, it really seems like the least they can do.
But Kevin, as we record this, I actually have some breaking news. What's that?
So I got an email from the spokeswoman over at Superhuman today. Superhuman is what
Grammarly now calls itself. They did a rebrand last year, and they're now sort of a bundle of mediocre
products. And they sent me a note that said, after careful consideration, we have decided to disable expert review as we reimagine the feature to make it more useful for users while giving experts real control over how they want to be represented, or not represented at all... thanks for holding us accountable. We're committed to getting it right next time, and we'll be transparent about how we improve from here. Wow. Results!
Newton gets results! Newton getting some results. I mean, look, it's clear to me that they are embarrassed about this. But this is one where, the whole time I was using this thing, I was like: who was the product manager? What were the meetings? Imagine the meetings. Was there a lawyer involved in that? Who was the lawyer that signed off and said, yes, feel free to misrepresent that you are getting inspiration from all of these different editors? So this feature is such a, like, spectacular misfire. And it really made me wonder,
like, what is the future of a product like Grammarly? And, like, that's kind of where I want to end this. You just finished writing a book. You presumably could have used some sort of AI writing assistance. Did it ever occur to you to use Grammarly? No. Why not? Because I don't know anything about it,
and I don't need it, and I have other tools.
Well, so talk to me about these other tools,
because this is what I think the real story is,
which is like in 2009, when Grammarly launched,
you didn't have a lot of options for writing assistance, right?
You had like whatever spell checker was in Google Docs,
and like that was, you know, probably going to be the best tool available.
Fast forward to today, though, you've got ChatGPT, you've got Gemini, you've got Claude.
There are free versions of these services.
If you want a quick grammar check, you can get it.
My guess is that's the experience that you just had.
Yeah, if I want a grammar check,
I'm just copying and pasting into one of the AI models.
I'm not using like a purpose-built thing for that.
Or it's now built into Google Docs.
Yeah.
And to, you know, emphasize a point, when you're using Claude, as you did in your book,
you're using the latest and greatest version of Claude.
If you are using some sort of startup that is like using the API of Anthropic,
they're not actually incentivized to give you the frontier model most of the time, right?
Because that's going to be very expensive.
So they're going to give you a model that's a couple generations old, because they can get a lower price and their margin is going to be better on it.
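To make Casey's point concrete: the tradeoff he's describing is literally one string in the API call. Here's a minimal sketch using Anthropic's Python SDK; the routing logic and the specific model IDs are illustrative assumptions, not anything Grammarly has disclosed.

# A sketch of the margin math Casey describes: a wrapper product chooses
# which model its paying subscribers actually get. The model IDs below are
# real Anthropic model names, but the routing itself is hypothetical.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

FRONTIER_MODEL = "claude-3-5-sonnet-20240620"  # latest and greatest, priciest per token
OLDER_MODEL = "claude-3-haiku-20240307"        # a couple generations back, far cheaper

def editing_advice(draft: str, protect_margin: bool = True) -> str:
    """Ask a model for editing suggestions on a draft."""
    response = client.messages.create(
        # The middleman's incentive lives on this one line.
        model=OLDER_MODEL if protect_margin else FRONTIER_MODEL,
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": f"Suggest edits to tighten this draft:\n\n{draft}",
        }],
    )
    return response.content[0].text

Swap that one model string and the middleman's per-token costs can drop by an order of magnitude, which is the whole incentive Casey is pointing at.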
So we've talked a lot in recent weeks about the potential for a SaaSpocalypse where these
companies that are selling these sort of, you know, businessy prosumer services are going to get
crushed by the fact that there is now just a cheaper way to do it. I wonder if you think that Grammarly might be one of those. No, I think it's going to be part of the asspocalypse, which is for software that absolutely sucks, that there's no reason to be using in the first place. And I think that that software has a hard road ahead.
I just do not think there is a future for this product.
When I saw this, yes, I did have the moment of, like... outraged is too strong a word. I felt supremely annoyed. I did feel, like, very annoyed that this was happening.
But again, it's like, I know all these companies have, like, all read my stuff. You could go into Claude today and say, draw inspiration from Casey Newton and edit my piece.
Claude is not going to refuse and say, I don't have the rights to his intellectual property.
It's just going to do it.
And it's not going to notify me and it's not going to pay me, right?
So I do think that there is a distinction between what these companies are doing,
but I just want to point out that in some way, like, the violation is the same.
The bigger thing to me was, this really feels like desperation.
You know, and I think that more and more of these consumer sort of internet services
that have been able to get away with offering a pretty subpar product
and selling it to you for more than $100 a year,
I think the rude awakening is showing up, you know, where all of a sudden, if you have a subscription
to your Claude or your Gemini or your chat GPT, you're probably going to be able to get more
from that and do more things. And you're just not going to need the subscription anymore.
It's exactly like when we were talking about vibe coding and being like, why are we paying
Squarespace all this money, right? I think the, why are we paying Grammarly all this money moment
is coming. Yeah. And I should say, if you want to rip off Casey Newton's editing style without his permission or without compensating him, you should just do that in a free chatbot. His advice is not worth that much.
Trust me, I have seen his edits and I would not pay $140 a year for them.
I'm a great editor, okay? Ask around. You should really ask around. I offer very detailed, thoughtful feedback.
No, this is horrible. I'm very glad you exposed it. I'm very glad they went back and said, we're not going to do this anymore.
But I think this kind of thing is going to keep happening, unfortunately,
because there is money to be made.
And if you can get away with it, you're going to do it.
Yeah.
You know, an interesting question might be, like,
is there a good version of this feature?
And what would that be?
Do you think so?
If they had come to you and said,
Hey, Casey, we're starting this new expert review feature.
And every time someone edits their emails to sound more like Casey Newton, we're going to give you 10 cents. Would you have done that? I mean, I don't know.
In general, I am in favor of AI companies trying to strike deals with creative people that say,
like, we are going to give you some sort of, you know, we're going to essentially share the
revenue that is based on the creative work that you have done. So certainly, I would like to see
some kind of explorations like that. But, you know, I think about some of the editing I've done.
You know, I can remember, like, working with one writer once, and she was working, like, on a kind of narrative feature story. And it just made me think of Katherine Boo, the great features writer for The New Yorker for a long time, who wrote this incredible book Behind the Beautiful Forevers.
And I was like, go read Katherine Boo. Like, go read Katherine Boo pieces in The New Yorker. She's the GOAT. And see how she, like, evokes characters and see how she kind of structures her narratives. And like, so can I imagine an AI tool that, like, you were having a conversation with, that also said, like, you need to read some Katherine Boo, and click here. And hey, you already have a New Yorker subscription. Maybe you can log in right here and we'll sort of bring up some of the relevant passages.
So, yes, I do think that there is value in sort of guiding writers to actual excerpts.
The key is you have to guide them to the actual expertise, not just what your LLM is hallucinating.
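The good version Casey is sketching is really a retrieval problem rather than a generation problem: match the writer's draft against a corpus of real, licensed passages and surface the actual excerpt, with attribution. Here's a toy sketch of that idea; the corpus entries and the matching method are placeholder assumptions, not any product's actual design.

# A toy sketch of "guide them to the actual expertise": retrieve a real,
# attributed excerpt instead of generating advice in an author's name.
# The corpus entries below are invented placeholders; a real product
# would license real passages from real authors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

CORPUS = [
    {"author": "Example Author", "source": "Example Magazine",
     "excerpt": "An opening scene, told in concrete detail, before any argument."},
    {"author": "Another Author", "source": "Example Weekly",
     "excerpt": "Let the characters carry the structure of the narrative."},
]

def suggest_reading(draft: str) -> dict:
    """Return the real, attributed excerpt most similar to the writer's draft."""
    texts = [entry["excerpt"] for entry in CORPUS] + [draft]
    tfidf = TfidfVectorizer().fit_transform(texts)
    doc_vecs, draft_vec = tfidf[:-1], tfidf[-1]
    scores = cosine_similarity(draft_vec, doc_vecs)[0]
    # Surface the actual passage and its author -- no generated impersonation.
    return CORPUS[scores.argmax()]

The point of the design is that everything shown to the user is a real quote with a real byline, so there is nothing to hallucinate and nothing to misattribute.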
Right, that would be my worry if they had come to me, which they did not, which I'm a little bit offended by, frankly.
They put you in the feature.
Was I not worth ripping off?
I'm right here, grammarly.
I'm pretty good
But no
Had they come to me and said
Hey we want to make you part of this
I would have said well how good is your model
Because
You know my worry about something like that
Would be that someone would you know
Open up their word processor
And start writing their business memo and say make it sound more like Kevin
Ruse and then it would make it sound terrible and generic
And people would blame me and I would kind of get the bad rap
For allowing my reputation to be laundered in this way
I was texting with my friend Matt Honan, who's the editor of MIT Technology Review, and he found that he was also being used as an expert. And when he clicked on the expertise that he was allegedly providing the user of expert review, he looked to see what source they were citing, and it was a speaker bio that he had submitted for an event. So it was, like, based on Matt's speaker bio... you used to work at Wired? Like, I don't even know what it said. But again, it's just like they just did not think this through.
Yeah.
Well, now, as a result of Grammarly pulling back this feature,
if you want your emails to sound like Casey Newton,
you're going to just have to put a bunch of typos
and random punctuation in yourself manually.
And if you really want to know what I think of your writing,
it's that you should start a podcast.
That's where the future's going.
Okay. Casey, I'm glad you escaped Grammarly servitude.
Hard Fork is produced by Whitney Jones and Rachel Cohn.
We're edited by Viren Pavich.
We're fact-checked by Caitlin Love.
Today's show was engineered by Chris Wood.
Our executive producer is Jen Poyant.
Original music by Marion Lozano, Diane Wong, Rowan Niemisto, and Dan Powell.
Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott.
You can watch this whole episode on YouTube at youtube.com/hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad.
You can email us at hardfork@nytimes.com with whatever fake advice I just gave you in Grammarly. And I'm sorry for it.
Holy shit. They just
disabled expert review in Grammarly.
Whoa!
Yay! You're free!
