The Daily - The Workers Letting A.I. Do Their Jobs
Episode Date: April 14, 2026
Since the release of generative A.I., questions have been raised about how it would change our lives and jobs. Now, many software developers who were early adopters of the technology have outsourced so many tasks that they barely program at all. Clive Thompson, who writes about technology and science, interviewed about 75 software developers at major tech companies, small businesses and start-ups. He explains what it looks like when programmers invite A.I. to help them do their jobs. Guest: Clive Thompson, who writes about technology and science for The New York Times Magazine, Wired, Smithsonian and other publications. Background reading: Coding after coders: It's the end of computer programming as we know it. Photo: Adam Glanzman for The New York Times. For more information on today's episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday.
Transcript
For the New York Times, I'm Natalie Kitroeff. This is the Daily.
For the past few years, people all over the world have been asking how AI will change their lives or affect their work.
And the answers range from total salvation to absolute doom.
At the front lines of all of this are software developers, who are using artificial intelligence so much that it's already taking over many of their day-to-day tasks.
Today, I talked to Times Magazine writer Clive Thompson about his recent survey of the tech industry to find out what it looks like when people invite AI to do their jobs.
It's Tuesday, April 14th.
Clive Thompson, legendary tech reporter, a person whose work I have admired for a very long time, welcome to the Daily.
It's good to be here.
So you are here because you've been covering extensively the question of how much AI is affecting the workers who are really the backbone of Silicon Valley programmers, the people who write the code that powers every piece of software we use.
This is a group of people you know well, not least of all, because you wrote a book about them.
You spent a lot of time talking to them in recent months.
So what did you find?
Walk us through that reporting, what it entailed, and what it unearthed.
Sure. Well, I'd been following the arrival or the advent of AI as a tool that can write code for a couple of years now.
But it started to accelerate a lot last year. And I really just wanted to find out, you know, what was going on in the everyday trenches of software development.
So I just hit the road and I talked to about 75 different software developers all around the country.
Yeah.
75.
Yeah.
That's a lot.
That's a lot of them.
Yeah, I might have overdone it.
But I really wanted to know what was going on kind of across the board because different software
developers have very different types of jobs, right?
So I wanted to talk to people who are doing consulting work for regional banks in Tennessee,
people who are doing buzzy little startups, just the two of them in Silicon Valley,
trying to like make something new.
And then, you know, the people that are working at the big software giants like Google and
Amazon and Microsoft, where you've got tens of thousands of developers having to take care of these code bases that have been around for 20 years, right?
And what do they tell you?
Well, it was really interesting because what I found was
that a lot of the coders, they're writing a lot less code, some of them are writing no code at all.
They're having the AI write it for them. Wow. And this transition has happened really quickly.
I would say it began heavily in the last six months. It accelerated in the last three months,
as these AI coding tools have just gotten a lot better.
And they have started to gain the trust of a lot of programmers,
including ones that might have been a little skeptical before.
And it's a really stark change from what things looked like even a year or two ago.
So you're saying coders aren't coding.
Is that right?
Well, not all of them.
It's a gradation.
But of the people that I spoke to,
a majority of them were outsourcing a lot of their day-to-day programming to AI.
There are definitely coders who are writing very little to zero code.
Wow.
It's a sea change. It's a big sea change.
And how do they feel about that?
When I first started the research, I kind of wondered whether some of them were going to be uneasy or unhappy about it, right?
Because I had known from decades of talking to software developers that they often derived enormous pleasure from writing lines of code.
It was like solving a bunch of little puzzles, and it was just delightful when they did it.
And so I thought maybe this was going to be deflating or demoralizing, you know?
But in reality, the great majority of everyone I spoke to was really kind of jazzed and excited
about the new powers that the AI was giving them to be able to just say, in plain language,
here's what I want created.
And then five, ten minutes later, have the working code back and they're looking at it.
What they all said was that they've always loved building things.
That's the fun of being a developer.
You take an idea you have and through sweat and work, you turn these magic words into a machine that does things for you. And that feels like magic, right? That feels like
something from Tolkien. Totally. And they still feel that, even though they're not writing as much
or any of the code themselves. They still feel like they're a sorcerer who's thinking about
what needs to be made and then using these tools to bring it into being really quickly.
Some of them said they feel that loop of success more quickly because the AI moves faster than
they would have done. So it was interesting. A lot of them were really, really pumped, really, really
stoked. Okay. I want to talk about that. I want to ask you about this excitement about this
magic that also happens to be this massive disruption in their industry. But first, I want to just
understand how big this shift actually is. Yeah. If you want to understand just how big this change is,
you sort of have to understand a bit of the history of software development. So it's been around for maybe
50 or 60 years. It's essentially kind of a new field. Ever since we invented computers, we've been figuring out ways to talk to them, and we've made those ways to talk to them get a little easier and a little closer to human language.
So what's happening now with AI is probably the biggest change it's undergone yet.
So in the 1940s, you have the first computer, the ENIAC computer here in the US.
And the people who programmed it were a team of women.
And they had to literally rewire the entire machine to do something different.
So they're crawling around, sometimes crawling inside the machine and rewiring it to create a new
logical system to solve a problem, right?
Amazing. Labor-intensive coding.
Very labor-intensive, yeah.
And, of course, when computers started to go into industry,
it was impractical to require people to crawl around inside them
and rewire them for every single new problem.
Sure.
So they started making computer languages that you could type these commands
and they would get translated into the instructions
that are essentially like digital wiring.
But honestly, the early languages, I mean, I've poked around in them
and I've talked to people that had to write them,
And they were really hard.
Because back in those days, every single little thing in the process of asking a computer to do something, you had to be very specific.
Like, put this number in your memory here.
And then put this other number in your memory here.
Now, you're going to add them together and put the sum in here.
And so it was really like a nightmare, honestly.
It was like juggling 900 balls just to do the most basic piece of math.
And then over the years, coders said, let's automate some of that labor.
Let's make that easier.
And they have a funny phrase for it.
They call it adding a layer of abstraction.
So all these little finicky steps that slowed you down a lot in the 60s and 70s, those are kind of gone.
Like, I can write code much more quickly.
It's much easier to write.
It's kind of more like in human language.
Right.
Every time they realized that they wanted to accelerate the pace at which they wrote code, they would add another layer of abstraction.
So things just kept on getting a little easier every decade.
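To make the contrast concrete, here is a rough sketch in Python of what those layers of abstraction buy you. The explicit "memory slot" steps are just an illustration of the old style Thompson describes, not real machine code:

```python
# The early, step-by-step style: every move spelled out by hand,
# simulated here with explicit named "memory slots".
memory = {}
memory["slot_a"] = 2      # put this number in your memory here
memory["slot_b"] = 3      # put this other number in your memory here
memory["slot_sum"] = memory["slot_a"] + memory["slot_b"]  # add them, store the sum

# After decades of added abstraction layers, the same intent is a single
# line of something much closer to human language:
total = 2 + 3

assert memory["slot_sum"] == total == 5
```

Each new layer hides more of the finicky bookkeeping, which is exactly why each decade of coding got a little easier than the last.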
It sounds like what you're describing is a process where you move from actual human beings doing real grunt work to make coding happen, people putting wires into different slots in computers, to something much more sophisticated over time, but it's still sounding quite tedious. Like, you still have to write much of this and deal with your problems, et cetera.
Yeah, absolutely. I mean, coding was really a grind.
Okay. So what about now?
Well, what's happening now is that people are using AI agents to write the code for them.
So they've generally gotten sufficiently trustworthy that a lot of the programmers I was speaking to were just using them to automate a lot of their writing of lines of code.
So with AI coding, they will have teams of agents.
Like you will ask the main agent, hey, write this feature for me.
Let's work out a plan.
The AI will say, here's our plan.
And when I say go, it will spawn other subagents to do different parts of the task, right? You'll have one that's writing code and another one that is looking at that
code and testing it. And if there are errors, and you know, there's almost always errors the first
time, another agent will look at those error messages and go, oh, okay, let's change the code and test
it again and keep on going in this little loop, like a little team of agents all working in a swarm.
And only at the point in time where...
It's amazing. Yeah, where they're like, okay, it's passing its test, it's not producing any errors,
then it'll go back to the human and say, okay, here's the code, here's the test we wrote, here's
the results, you can see our work. And that's really interesting. There's really very few people
having experiences like that other than computer programmers right now. So what you're describing
sounds like an enormous leap from the progress that we've seen over the course of time
in coding. How is it changing things practically for coders? Yeah. Well, it's making it a lot
quicker to do a lot of things, right? A lot of software developers will tell me that it's just
dramatically accelerating the pace at which they work. And to give you a sense of just how fast it can
be, if you're like a small startup, these two-person shops and I visited some of them,
they would tell me that they were moving up to 20 times faster than they would have if they
were trying to build that company two years ago, right? Whoa. That's crazy. As someone said to me,
you know, a request from a customer for a new feature that might have taken like a full day
could be done in maybe half an hour. That's wild. 20 minutes to write it, 10 minutes just to look it over and make sure it's good, right? Now, I should say that this is true of a smaller
company where you're kind of writing entirely new code and you don't have to worry about breaking
something that already exists. So when you go to the really big mature companies, they're using AI,
but their metabolism is a lot slower. They're being a bit more cautious. And so when I went to
Google, they were saying, like, at a small startup, you know, 100% of the lines of code are written
by AI. At Google, it's more like maybe 40 or 50%. And it has sped up their overall metabolism
by really only like 10%. Although, as the developers at Google told me, like, hey, 10% for a company
our size, that's a huge win. So the AI is making them faster at coding, in some cases
doing a lot of that work for them. But it sounds like they still have their jobs? So for the coders who aren't doing coding anymore, what do their
jobs actually look like now? Well, their job in a way is to think about what the software
ought to be doing. Now, that was always their job, right? They always had to think about the shape
of the software and then slowly, painstakingly make it happen. Because they don't have to spend so much
time on the slow painstaking, they can spend more time experimentally iterating, right? Like,
I talked to a lot of developers who would say,
now that I'm talking to the AI and it's doing the coding,
we'll run through like 10 different possibilities,
and I'll pick the absolute best one.
They said it feels like being Steve Jobs,
where you go to your minions and say,
bring me nine designs of the iPod,
and I will handle them each and pick the best one, right?
Several of them literally said the Steve Jobs comparison.
So they're kind of becoming less like construction workers
and more like architects.
But on a deeper level,
what they're really doing is just talking.
They're having a lot of conversations.
With AI.
Yes, with AI.
They're having conversations with AI.
And having to be very clear, the thing about AI is, like, it will go off and do the wrong thing if you're not incredibly clear.
And how did they find the process of communicating all day long?
Are they good at it?
Do they have to develop that skill?
Definitely, I had some of them say that they were surprised to find that they were becoming better communicators.
Like better communicators in English, right?
Like they're writing emails better.
The fact that they've had to become better communicators to the AI
has made them better communicators to the world in general.
The bot has made them better at communicating
at having human relationships, potentially?
I don't know if human relationships would be the way I'd put it, although...
That's a bridge too far, got it.
Well, although there is one coder, Manu Ebert.
He's a software developer with a small new startup, a company called HyperSpell.
He sort of said, you know, kind of jokingly, but he said it, like, I wonder if we're finally
teaching all the nerds empathy, because, you know, they're having to deal with it.
I know, I know nerds have empathy. I'm a nerd, I have empathy. His point was, you know,
I think correct, that the job didn't use to require quite so much communication, and now it does.
Talk to me about how Manu's work is changing. He's interesting. So Manu has been a developer for a long time,
and he's worked at very large companies.
He's done his own startups.
And when he first started using AI,
he wasn't really sure about it.
He was worried that it was going to hallucinate things,
that it would produce code that would be too flabby or inefficient.
And what he said is that over the months,
his concerns began to boil away.
He got more confident when he could see that it could do things reliably.
So Manu does something that I actually heard several developers
have settled on as an interesting technique,
which is when they want to write a new feature
or write a new function or improve some aspect of code,
they will essentially get into a conversation,
like a Socratic dialogue with their agent.
They'll say, okay, ask me questions
about how this software feature should work.
And the agent's like, okay, what is this going to do?
Should it do it this way?
Is it going to be written in this language
or in this language?
by having it interview the coder, as it were,
it got them to think about what the software should really be doing.
And then it was off to the races with having the AI agent do things.
The problem is that I don't mean to anthropomorphize it,
but it can sort of misbehave.
Manu would tell me there are times when the agent would go off
and come back and say, oh, well, I didn't do those tests.
I didn't think they were that important.
And he's like, wait a minute, those tests are completely important, you know. And so he would have to figure out ways to sort of reprimand
it or ways to cajole or punish it in some way. How do you punish AI? Well, you yell at it,
basically. What Manu would do is he would write this very stern list of instructions, like a Ten Commandments. And he would have this file. And he would say, every time you do anything, you look at this file first and you always follow these commandments, very stern commands, like you must test the code in this way or that. You must do these things. You must not do that.
And software developers, they would show me these commandments.
And they would be in uppercase, like they're yelling at it.
And they would repeat things over and over again, like they were trying to hypnotize the AI
agent by sheer repetition.
Or they would say things like, if you don't do these tests, I will be fired.
They would have this very...
Wow, really raising the stakes.
A very emotional language.
One of Manu Ebert's prompts would say that failure to do these tests is unacceptable and embarrassing.
Embarrassing.
Embarrassing, yeah.
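A "commandments" file in that spirit might look something like this. This is an invented illustration of the pattern the developers described, not Manu Ebert's actual file:

```text
READ THIS FILE BEFORE DOING ANYTHING ELSE. ALWAYS.

1. YOU MUST run the full test suite after every change. EVERY change.
2. YOU MUST NOT skip a test because you think it is unimportant.
3. YOU MUST NOT mark a task complete while any test is failing.
4. Failure to run these tests is unacceptable and embarrassing.
5. If you are unsure, STOP and ask the developer. Do not guess.
```

The uppercase, the repetition, and the emotional stakes are all tactics from the interviews: ways of making the instructions loom large enough that the agent actually follows them.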
I asked him, I said, does that work?
Well, yeah.
I was going to ask, aren't we supposed to be being nice to these AIs
just in case the robots end up taking over the world?
Yeah, exactly.
Don't eat me.
I mean, in one sense, I understood why it works.
It's because large language models, I mean, they're language machines.
And so they understand the meaning of language based on the company it keeps.
So if they see the word embarrassing, they understand that like, oh, that word, that comes from a bad neighborhood, right?
Like, there's bad things there, and there's embarrassing, humiliation, you know, whatnot.
And so that just helps raise the stakes so that it grasps the import of those words.
So it turns out that actually emotional language and stern, even harsh language probably does have an effect, right?
Like, it seems kind of nuts on the surface, but when you think about the way large language models work, it makes sense.
Okay.
You have done a very good job of describing why coders like this new way of working.
From what you said, it seems pretty obvious that there are many upsides of this for the people doing this work.
It sounds like they just feel much more powerful overall.
Yeah.
Most of the coders I spoke to were just sort of astonished at how much more productive they felt.
They would say things like, you know, I've had this to-do list of things haunting me for years, and I'm knocking it all off. I'm getting it done.
So overall, there is a real sense of excitement, even giddiness at times.
But there are some concerns.
They're worried about losing their skills if they rely too much on AI.
They're worried about jobs being taken away by AI.
And they worry that they're moving too quickly.
We'll be right back.
Okay, Clive, I want to talk about those fears. Can we dig into them? Are they right to be worried about the things that you laid out?
Yeah, absolutely. The concern about deskilling is really interesting because a lot of the people that I was talking to for this story were a little more senior. So they're the generation that knows code really well because they still had to do it by hand. And so they would tell me that it's great for us to have AI agents, because if they produce something wrong or flabby, we have the experience to look at that
and go, that's not good, do it again. And they would all say, well, you know, what about the next
generation? Are we going to discover five or ten years from now that the next generation of software
developers, you know, simply don't have that deep code sense that lets them be really, really good
engineers? And a related concern there is that because they're not writing code so much anymore or at
all in some cases, they worry that they're losing a bit of that code sense, right? If you don't
use it, you're going to lose it. One person I talked to who was worried about the way AI coding
tools were deskilling her was Pietorian. She was a relatively new developer. And so in some of her early jobs, her employers were like, we want you to use Copilot by Microsoft to write code. And she was doing hundreds of prompts a day for months. And she started feeling, wow, this is actually degrading my own knowledge of code.
I feel like I'm losing my ability to code.
That's what she told me.
Okay, that sounds unsettling for her.
But why is it a problem that she's losing her coding skills if AI can do that for her?
I don't know how to typeset.
I don't do calligraphy, right?
It's never been an issue for me.
Aren't there certain things that we can just say, okay, we've moved past this as a society? Is this one of those things? Well, this is the great
debate, and I don't have an answer for that. I can tell you that there are two camps of this
in the world of software, and they are heatedly opposed to each other. There is what I would say
is probably the majority of the developers who are a little less worried about that. They think
that the AI is good enough and will continue to be good enough, that it will actually be better
at doing a lot of this code than the humans are
because it won't make the stupid human mistakes.
You know, people think of code as like this magical thing
where you have to work really hard at it,
but a lot of it is just doing the same thing
over and over and over again, right?
Right. Extremely rote, tedious work.
It's rote and tedious, and when humans do it,
we make mistakes and the robots don't, right?
So you could make an argument,
and this cohort of developers do,
that the software will actually be more reliable
because the agents are doing all of it.
But then there's a very strong argument by a smaller cohort that say, no, when you write new code, it's not just something you build and it stays up. You have to maintain it as you add new things to it adjacent to it that interact with it. There might be interactions that are bad or weird. And so the code that the AI is writing might look good right now, but there's a potential that down the line, it could cause really difficult or nasty interactions with other parts of the code base. There could be subtle bugs.
that we don't see right now that really start to pile up in five years now, you've got a huge mess.
So there's a cohort of a coders who are saying we shouldn't be using this stuff, not at scale, not the way they're using it right now.
Okay, I want to return to the question of job loss, which was another fear that you mentioned.
Do you think that all of this means that junior coders right now are actually somewhat replaceable in this field and that maybe in the near future there will be no such thing as a junior coder at all, because that entire job category will disappear in the same way that
the job of typesetter did?
I think there's definitely a danger that the demand for new junior hires is going to soften.
And I think we've already seen that.
You know, if you look at the research of Erik Brynjolfsson at Stanford, he analyzed job postings, job hirings, and he found that for software developers, it was down by 16 percent, right?
And that was already happening just in the last, you know, year or so.
So if that's happening when the AI coding tools are really just going from a crawl to a walk
to a run, what might happen when they're sprinting?
And the other problem, of course, is, you know, this is capitalism, right?
I mean, all of these large firms are always looking for ways to save money.
And at high-tech companies, some of the biggest expenses are the salaries of these developers, you know?
So the idea of, like, oh, we can replace even a chunk of them with AI. That's really compelling. And we're seeing this across all forms of white-collar labor right now, right? All the C-suite folks love the idea of being able to either lay
people off because they can replace them with AI or threaten to do so, right? Because even if you're
not replaced by AI, if you deskill and devalue the job, it just gets easier for the owners to
push you around. Clive, if coding is at the vanguard of AI affecting work, the amount of it, the quality of it, what does all of this mean for the rest of us who don't work as coders? Will white-collar or blue-collar workers in other fields see AI take
over in a similar way? What developers are experiencing right now is something that maybe seems a little
paradoxical, which is that they had spent years developing these very, very hard technical skills.
And it turns out those are some of the easiest things to automate, right?
The hard stuff to automate is like talking to our colleagues and our customers and figuring out what should we be building, right?
Setting priority, setting strategy.
AI can't do that, right?
There are still truly human skills.
There are still truly human skills.
And so I began to wonder if that's a pattern we might see in other forms of white-collar work.
Like back in the 80s, it seemed like chess was so hard to play.
There was no way a computer was going to do it.
But it seemed like speaking, that's easy.
Surely a computer should be able to just speak.
But it turned out that chess was actually easier for computers to do than to learn to speak.
They conquered chess in the 90s.
And it took like two decades more to learn how to talk like a human.
So one of the things I think we see with AI is that things that we thought maybe were like,
oh, this is my big skill.
Yeah, that's not really your skill.
Your skill lies elsewhere.
Okay, so the process that you're talking about is essentially one in which in different fields,
we kind of learn the thing that is not automatable,
that a bot can't do,
and where people sort of focus on that.
How long do you think it's going to take
for those transformations to hit jobs en masse?
My crystal ball says that it's going to be longer than we think,
for the following reason.
What we learn from the history of computers
is that it can take things a lot longer
to have an impact on corporate life than we would expect.
So back in the 80s and early 90s, you get the advent of the personal computer.
A company can now, instead of having someone type memos on a typewriter, they can do it on a
computer.
And there's this assumption that it's going to dramatically increase the productivity of companies.
And at first, it doesn't.
And economists are kind of baffled by this.
And it was because to actually increase productivity or efficiency of companies, the company
had to reorganize the way it did business around the computers, right? They had to start going, well, we don't need you to just treat the computer
like a typewriter and print up a memo and then send it to everyone. Just email it to someone.
That means everyone at all our regional offices can all be reading the same stuff at the same time.
And so once they began to reorganize the way that information and decisions flowed around
the affordances of computers and the internet, then you began to see changes in efficiency,
productivity and GDP.
But it took a long time.
And my suspicion would be that it's going to be the same way with AI.
Well, yeah, I want to ask about that because it feels to me as though everything has been
on a really accelerated timeline recently.
I mean, the pace of innovation, it truly feels like super speed on a different level.
Is that wrong?
Is that just me?
I mean, it has, but a lot of stuff that has changed doesn't have industrial impact, right? Like whenever someone lays off a bunch of people saying we're going to replace them with
AI, often they discover if you follow them six months later, they had to rehire a bunch of them
because it just didn't work. So yes, there is a massive acceleration on a cultural level that can
often be quite alarming. But if you go around and talk to companies, big companies, like the ones
that are moving the economy, it hasn't really happened there. And you can even see that, of course,
in my reporting, right? Small little companies, startups, yeah, they're moving really fast.
Google, it's moving 10% faster.
And I think that's closer to the impact you might see if you look at white collar work writ large.
There is something comforting about that.
I have to just admit.
So we've been talking a lot about the shift in the work of the people who are making software.
I want to ask you about what they're actually creating and how the innovation that we've been talking about might change that.
Like, what is the upshot of all of this for those of us who will mostly interact with it by interacting with the products that these people make?
In other words, what is the upside for the rest of us of all of this?
I guess one of the upsides is that it feels like we live in a world where there's tons of software, right?
Just stuff that didn't exist 20 years ago.
Sure does.
Social networks, you know, text messaging.
But if you look at the world of work, there is just a massive amount of things that have really never been helped out by software at all.
I can't tell you how many companies I talk to.
They're like $50 million, you know, concrete mixing firm.
And they would like to have better software to run their company.
And they don't really have anything, because, and this sounds weird for a $50 million company, they're not big enough to be able to hire like five software people to make custom software. They can't afford to do that. So they just toddle along running their entire company on three Excel spreadsheets
on a Windows XP computer that they're afraid to update because that will break everything that they're
using to track all their expenses. An astonishingly large number of these mid-sized firms are
horribly underserved by technology. And if you work at one of these companies, you know what it's like.
So if it becomes easier to write software and if you could have a world where like, okay, you know,
I've got my, you know, Staten Island concrete mixing company, 50 million a year.
I'd love software to make life better for my employees, but I've never been able to have it
because I can't hire five people at $200,000 a year.
But what if one person came along and said, okay, for 60 grand, 70 grand, I can build that for
you and maintain it because that's just how much more productive I am as a software developer.
You could start to see a lot of improvements in these kind of actually good ways in everyday life
for a lot of people at work.
That's one area where I think you might see that happen.
And I guess at the highest level, what's going to happen is that software stops being something that is precious and rare.
It reminds me maybe a little bit of what happened with, like, this is going to sound really weird, but with paper.
So paper used to be incredibly rare.
You'd go back to pre-revolutionary Pennsylvania, and the average person had access to like four pieces of paper a year.
And then suddenly it becomes a lot cheaper and all over the place, and you've got weird things like Post-it notes,
which are these really weird forms of paper
that just transform the way that you live your life
in this funny way.
I love post-it notes, by the way.
I wonder if people were scared
of this proliferation of paper at the time.
Oh, they definitely thought it was bad.
I mean, that's why the Comstock laws existed, right?
They were worried about young people
writing smutty letters to one another.
So I think something very weird
is going to happen as software
stops being something that is special and difficult
and becomes almost like a post-it note,
where it is ubiquitous,
we call it into being for short-term reasons.
It changes aspects of the way we communicate
and the way we deal with other people
in ways that I can't really predict,
but I do think that that is what we're looking at.
But here's another parallel.
Word processing back in the 70s and 80s
was really hard to do.
You had a machine and, you know,
someone spent hours designing a document
and you went through many iterations
to make sure it was right before you hit print.
And then in the 1980s and 1990s,
the Macintosh said,
okay, anyone can make a document.
And suddenly people are like creating really ugly flyers for their birthdays and zines and
this weird explosion.
And if you'd said to someone in 1983, what exactly is this word processor going to do?
You would not have predicted riot grrrl zines in 1996, right?
Totally.
And so that is kind of, I think, this transformation that's about to hit us.
I don't really have the ability to predict what that's going to mean for us, but I think it's
going to be incredibly weird in the same way that those previous transformations are incredibly weird.
No, I mean, I think your answer is kind of, this is going to be both awesome and weird and
potentially bad in some ways. We really don't know. And that's just kind of the deal.
You know, there's a phrase in the world of technology that more is not just more. More is different.
Different behaviors emerge. And so that is something that I,
I think we are likely to experience socially, civically,
as software goes from being something that is hard and difficult
to something that is trivially easy to summon into being.
What I'm hearing is that whatever happens,
this change is going to reflect us.
We will get the technology that we deserve.
Yeah, exactly.
It is going to catalyze all of the human desires,
the malign ones, the delicious ones,
the terrible ones, the beautiful ones.
We've got, you know, Shakespearean sonnets about your birthday.
We've got people who are using them to, you know, analyze and understand personal medical records.
And we've got disinformation and wholesale cheating on essays at college and high school.
All that stuff is going to be on offer at scale, as they say, in Silicon Valley, as the AI coding revolution goes forward.
Well, Clive, you know what is still truly a human skill?
Being a great guest on The Daily.
So thanks for being here.
Glad to hear that I still have that human edge.
We'll be right back.
Here's what else you need to know today.
Two congressmen, representatives Eric Swalwell of California and Tony Gonzalez of Texas,
resigned within hours of each other on Monday.
Both men had faced allegations of sexual misconduct
and calls for them to step down or face expulsion from the House.
Swalwell, a Democrat, announced his decision after the San Francisco Chronicle and
CNN published the accounts of several women accusing him of sexual assault or misconduct on Friday.
His resignation comes on the heels of his decision Sunday to drop out of the California governor's
race. Gonzalez, a Republican, was accused of a coercive relationship with a staff member who
later killed herself.
Gonzalez first denied that there'd been a sexual relationship, then later admitted a mistake,
and had been fighting calls to quit his post for months.
And a brewing conflict has ratcheted up between President Trump and Pope Leo the 14th,
who's been one of the most powerful critics of the U.S. war with Iran.
On Sunday, Trump lashed out at the Pope in a social media post, accusing him of being, quote,
weak on crime, terrible for foreign policy, and catering to the radical left.
I have no fear, either of the Trump administration or of speaking out loudly about the message in the Gospel.
The Pope responded on Monday, saying he wasn't scared of the Trump administration,
and that he would continue to speak out against the war.
Too many innocent people are being killed, and I think someone has to stand up and say there's a better way.
Today's episode was produced by Diana Wynne, Nina Feldman, and Michael Simon Johnson.
It was edited by Brendan Klinkenberg with help from Paige Cowett,
and contains music by Dan Powell, Pat McCusker, and Michael Simon Johnson.
Our theme music is by Wonderly.
This episode was engineered by Chris Wood.
That's it for The Daily. I'm Natalie Kitroeff.
See you tomorrow.
