Hard Fork - Meta Bets on Scale + Apple’s A.I. Struggles + Listeners on Job Automation
Episode Date: June 13, 2025

This week, Meta hits the reset button on A.I. But will a new research lab and a multibillion-dollar investment in Scale AI bring the company any closer to its stated goal of “superintelligence”? Then we break down Apple’s big developer conference, WWDC: what was announced, what was noticeably absent, and why Apple seems a little stuck in the past. Finally, a couple of weeks ago we asked if your job is being automated away — it’s time to open up the listener mail bag and hear what you said.

Additional Reading:
Meta looks for an A.I. reset
Apple Executives Defend Apple Intelligence, Siri and A.I. Strategy
This A.I. Company Wants to Take Your Job

We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Transcript
Let me ask you about this.
There's this startup called the Browser Company, and they have a new browser called Dia, which
is sort of based around AI.
And so you have like an AI chat.
And I was reading David Pierce's story about this in The Verge, and he was like, there
was a point in using Dia where I came to understand that it knows what my social security number
is because I had like turned it onto a website. And when I think about all of the things that I,
you know, put into a web browser, some very sensitive information, Kevin.
Yeah.
I don't know that I want a cloud service to have total knowledge and memory of what I've been
browsing.
Yeah, that sounds to me like a bad idea.
Perfect. Cut, print. We're moving on. I'm Kevin Roose, a tech columnist at The New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, Meta hits the reset button on AI.
But does it actually believe in super intelligence?
Then, Apple's big developer conference was this week,
and it still seems a little stuck in the past.
And finally, we asked you if your jobs
are being automated away.
It's time to hear what you said. Well, Casey, we have a live show coming up.
Boy, do we.
My God.
So on June 24th, we are going to be at SF Jazz in San Francisco for the first ever Hard
Fork Live.
And boy, do we have some special guests to announce.
Now, do I have to do anything for the show? I would like you to do the following things.
One, show up.
Okay.
Two, stand on stage with me.
And three, help me interview
some of our amazing special guests.
All right, you drive a hard bargain, but I'll do it.
So Casey, tell the people who's coming to Hard Fork Live.
Let me tell you about the show, Kevin.
You are going, if you're coming to Hard Fork Live,
you're going to be hearing from the co-founder and CEO
of Stripe, the big payments platform,
that's Patrick Collison, who will be at the show.
You will be hearing from and seeing the work
of the founder of Skip, a mobility company
that makes exoskeleton pants,
Kathryn Zealand will be on the show, Kevin.
And finally, to cap it off, we have from OpenAI,
the CEO, Sam Altman, returning to Hard Fork,
and he's bringing along Brad Lightcap,
his chief operating officer.
We're gonna have a big conversation about AI.
And so that's the stuff we're gonna tell you about,
but if you can believe it,
there's actually other stuff that we're working on
that we're not ready to tell you yet,
but suffice to say, this show is packed.
Yes, our cup runneth over.
When we set up the show,
we sort of booked like a medium-sized venue,
sort of expecting that, you know,
some people would want to come out.
The demand was overwhelming.
We sold out very quickly.
And so the show, you cannot buy tickets to it
unless you're scalping them on StubHub or whatever.
Don't do that, by the way.
Yeah, that's right.
But here's what, if you can't come to the show,
but you just want to stand outside the building,
I'm going to come out during intermission
and just tell you what happened.
Casey, I don't know how to break it to you.
There's no intermission.
There's no interm...
What if I have to pee?
So if you did not get a ticket to the show, don't worry.
We will be bringing you the interviews from Hard Fork Live
on this very podcast feed with not too much delay.
That's right.
You'll be able to take part of the show
even if you were not there physically.
Exactly.
Yeah, but we're super excited for all of those of you
who did get tickets to come say hi.
Yeah, it's gonna be incredible.
See you there.
All right, Kevin, let's dive into the story
that I think you and I are both most excited about this week, which is what is happening over at Meta's AI division.
Yes, they're having a big reorg and they are making big moves to try to catch up in the
race to powerful AI.
So Casey, what has been happening?
So the big headline news is that as of this recording,
Kevin, multiple sources, including myself,
have reported that Meta is about to make
a huge investment in Scale AI,
which is a startup here in San Francisco.
They're gonna take 49% of the company
for somewhere between $14 and $15 billion.
A lot of money.
That's kind of thing one.
Thing two is, as part of that investment, the co-founder and CEO of Scale,
Alexandr Wang, is going to come to Meta.
He's going to leave Scale, come to work at Meta,
and lead a new AI team that is devoted to creating super intelligence.
Yes. And what caught my eye about this announcement was not
only the dollar figure and the new super intelligence team,
but the fact that Meta is also going out and trying to
aggressively recruit a bunch of top AI talent to
come turn their ship around when it comes to AI and help
them catch up to companies like OpenAI, Google, and Anthropic.
Yeah. So recently on the show,
you and I had a conversation about the somewhat botched
rollout of Llama 4, the company's latest AI model,
and what it told us about the state of AI over there.
Today, I want to go through what happened over the past year
that led Meta to this place, and what do we make of this new plan?
Do we think that this will put them back into
the conversation with some of the real frontier AI labs?
So before we get into that,
is there anything we want to disclose to our dear listeners?
Yes. I work at The New York Times,
which is suing OpenAI and Microsoft over
copyright infringement related to
the training of large language models.
And my boyfriend works at Anthropic.
So let's dive into this story, Kevin.
I think the first thing to do is
kind of lay out the state of play.
When you think of Meta's place in the AI ecosystem,
where are they right now
compared to some of the other big players?
So right now, I would say Meta is considered
a second tier AI research company.
They've had a bunch of internal turmoil
and disorganized sort of messy strategy decisions
over the past couple of years.
And so I think a lot of people feel like
they have kind of fallen off in AI.
And if you're Mark Zuckerberg, why is that a big problem?
Well, because AI is increasingly the thing
that people in the tech industry are pinning
their hopes on, not just as the future of large language models, but as really the future
of social media, the future of lots of other things that Meta is interested in doing.
And Meta has spent tons and tons of money trying to build these powerful AI systems
and buying up a bunch of GPUs. They sit on one of the largest stashes of GPUs of any company in Silicon Valley.
And I think the feeling is that they have just not been doing a lot with that.
That's right. And you compare that to some of their peers, like look at OpenAI,
the incredibly rapid growth of ChatGPT.
Look at what Google is doing and how those products are gaining
tons and tons of users. Anthropic is building a huge enterprise business.
Meta is not yet part of that conversation.
So let's talk a little bit how we got here because Meta has been working on AI basically
as long as any of these people.
What is the history of AI development at that company?
It's a really strange and interesting story because I think people who are just coming to this story may not know that
Meta was once considered one of, if not the, leading AI companies in the world. So here's the
capsule history. Back around 2012, Facebook tried to acquire DeepMind. Mark Zuckerberg thought Demis
Hassabis and his co-founders were doing cool and interesting things, thought this could be strategically important
for Facebook, and so he made them an offer.
Now, they did not sell to Facebook, obviously.
They decided to sell themselves to Google instead.
But so, Facebook, around this time,
set up its own research division, FAIR,
which was led by Yann LeCun.
And tell us about Yann LeCun.
So, Yann LeCun is a big deal in AI research.
He is one of the people who is considered a godfather of deep learning.
He won the Turing Award several years ago.
So he's a big deal in the world of AI.
And he was able to recruit a bunch of other really good,
well-respected AI engineers and researchers to come work at Facebook.
I would say during the 2010s,
Facebook did a bunch of really solid AI research.
They were pretty instrumental in building this thing called PyTorch,
which is now used by most of the big AI companies still to this day.
They did a bunch of foundational work that
led to the models that we have today.
But then in 2017, something happened,
which is that Google published this transformer paper that
outlined this framework for building
these so-called large language models that we see today.
Would you call that a transformative paper?
Yes. It did end up being transformative
because for basically the next five years,
OpenAI and to a lesser extent, Google and DeepMind were just building these bigger and
bigger large language models and finding that they were actually getting better with scale.
And as that happened, Facebook and Yann LeCun did not really head down that same path, right?
Facebook had a bunch of other priorities.
This was right after Donald Trump's election.
They were still worried about misinformation on Facebook.
They were making bets on things like crypto and later the metaverse.
They were trying to compete with TikTok.
So there was just a lot going on at Facebook.
And I think people that I've talked to say that the AI research division just didn't
really get a lot of attention from the top.
Yeah, well, and to the extent that they were shipping AI features, it was machine learning
that would help them identify bad content that needed to be removed or improving a recommendation
algorithm.
So, stuff that was useful to them but was not the sort of large language models like
chat GPT that wound up, I think,
being a lot more interesting to people.
Yeah, and one of the reasons that they pursued that direction
is because Yann LeCun,
the guy leading their AI research division,
didn't believe in large language models
and still doesn't to this day.
He is one of the sort of foremost critics and skeptics
of the scaling era of large language models.
Yeah, if you want to know why ChatGPT didn't come out of Meta, like Yann LeCun is sort
of the reason they were never going to build that kind of product under him.
Yes.
So in 2022, after ChatGPT came out, Meta, like every other company in Silicon Valley,
started to freak out.
Mark Zuckerberg says, oh my goodness, we may be behind.
We don't have our own sort of version of this that is ready to go.
And so they kind of go into panic mode.
They start buying up a bunch of GPUs and start working on what becomes Llama,
which is their version of an AI language model.
Yeah. And the first version of Llama actually winds up, I think,
being more successful than some people might have guessed.
Yeah. And at this time, Meta still has a lot of really good AI researchers,
and Yann LeCun doesn't believe in large language models,
but a bunch of other people there do,
and so they start building Llama and they make this decision to open-source Llama,
and so it does actually get widely used because unlike
ChatGPT which you have to pay for if you're a developer,
you can just build on top of Llama for free.
This, by the way, was a hugely important decision, Kevin,
because it was meant to be
a strategic move that would blunt the momentum of OpenAI.
The idea was, we will take
this product that you are selling for $20 a month,
we will give it away for free,
it will put cost pressure on you.
It will make it harder for you to innovate.
So that was the idea behind Llama.
And I think it's important to remember
because whenever you hear Meta talking about open source,
it's always like, well, open source will save the world.
It was like, no, open source was meant
to slow down OpenAI and Google.
Right.
And so I think during the last few years,
this sort of post-ChatGPT era of AI research and development,
a lot of Meta's top AI researchers have left.
Everyone's got their reasons for leaving,
but one of the things that I've been hearing from people who
left Meta during this time is that the company just did
not believe in AI the way that some of the other big AI labs did.
Yeah. We should talk about why that is.
I think if you are a researcher at a company like OpenAI, from the very start, you
have been trying to build the absolute most powerful AI that you can, essentially, like
almost without regard for how much that changes society, right?
You believe that this thing is inevitable, you're going to build it, you're going to
try to steer it in a positive direction, but you think this thing is going to be hugely
transformative.
If you work at a giant tech incumbent with a trillion dollar valuation,
there is no obvious reason why you want to disrupt all of society, right?
Because if all of society is disrupted, that might not necessarily be good for you.
So I could understand why if you're running a company like Meta,
you're incentivized to think a little bit smaller.
You're thinking not, how do we build super intelligence?
You think, how can we create a slightly better advertising recommendation algorithm?
Totally. And that's fine as a strategy goes.
But if you are an ambitious AI researcher who's really committed to this idea that this is a
transformative technology, you want to do that at a place that actually believes what you do,
that believes that what you are working on is not just a better way to sell shoes to people
or make chatbots that go inside Instagram.
You want to be building superintelligence.
And so a lot of their top AI talent
did leave and go to other places.
Yes, and around that time, Kevin,
the company's playbook stopped working.
And that playbook, which we've seen so many other times
across so many different products,
is essentially the fast follower model.
You let somebody else figure out something interesting, then you reverse engineer it, put it in your own products and take over. This is what Meta did, for example, with Snapchat stories. It put stories everywhere, was hugely successful for them.
They start to think they can do the same thing with AI.
We will let the frontier labs go spend all the money, figure out all the innovations,
we'll read all of the research they publish, we'll build our own version of that, we'll give it away
for free.
We might be a little bit behind the state of the art, but it won't matter because we'll
be basically there.
That's good enough for our purposes.
And this works up until about Llama 3, but then they start building Llama 4.
And an interesting thing happens, which is that the latest Frontier models, Kevin, turn
out not to be as easy to copy
as the ones that came before.
Yes, I think a lot of people who were impressed
by the first couple versions of Llama
saw Llama 4 come out recently and thought,
this is a company that has lost its way
and they are no longer considered a frontier AI lab.
Yeah, and so the last thing I want to say
as part of this capsule history
before we move into the present is
that while Meta is making
some big moves now, it's important to remember they also
tried to make some big moves in January 2024, when they also
did a big reorganization of their AI teams in recognition
of the fact that they weren't getting the results that they
wanted.
They didn't go out and make a huge investment
or try to bring in a bunch of new talent.
It was sort of more on the order of, you know,
reshuffling a few teams.
But, you know, Mark Zuckerberg went out
and did an interview about it.
He started talking for the first time
about trying to reach AGI,
so artificial general intelligence,
one notch down from super intelligence.
And he said explicitly that he had to do that
because he knew it was going to attract more researchers.
And then a year went by and
that reorganization did not get the job done.
And so that is what finally brings us today,
this investment in scale and this once again,
hitting the reset button,
trying to find a path forward for them in AI.
Yeah. So I want to ask you about two possible ways to
interpret this week's news out of Meta.
One way is that this is basically a sign that Meta has kind of come to its senses after many years of betting on these directions for AI research that did not pan out,
that it is, you know, sending Yann LeCun to sort of research Siberia,
and that it is essentially trying to buy its way back into the race to AGI by bringing on Alexandr Wang and Scale AI,
and that it's going to spend whatever it takes to actually get back to the frontier of AI research and development.
The other way is that Meta is basically pretending here, that they have realized that if they say that they believe in AGI or even in super intelligence, that
might allow them to recruit these engineers who would otherwise be going to work for OpenAI
or Google or Anthropic or somewhere else.
And that it still wants to do what it has always wanted to do, which is to use AI to,
I don't know, build companions into Instagram or develop sort of things for the metaverse, but that it has
essentially changed its posture toward AGI as a recruiting strategy and that it is not actually
trying to build super intelligence. Which of those two explanations do you think is closer to the
truth? I think I'm going to cut that out and say I think that the answer is somewhere in between.
I yesterday as part of my reporting was going through the evolution of the way that Zuckerberg has talked about powerful AI.
And it is true that his desires to build more powerful AI have scaled along with what some might call a desperation to get back into this race.
Right. I think back when he thought that he could use AI as a very practical tool to enhance a bunch of his current business
Objectives he felt no need to talk about super intelligence whatsoever
But once he noticed that all of the best talent in the world did not want to come work at his company
That's when he said, okay
I am gonna have to change my tune on this front.
Where I think your first explanation resonates with me the most is, it's still not really
clear to me how superintelligence benefits Mark Zuckerberg and Meta in particular, right?
I think that if you talk to the researchers at the Frontier Labs about they want to build
superintelligence, it's like, well, they want to usher in a world of abundance.
They want to cure disease.
They want to solve poverty.
And so a lot of people think that those claims
are sort of too grandiose.
But I've talked to the real believers there.
I think they really believe that.
That's not what Mark Zuckerberg wants to do.
Mark Zuckerberg wants to rule over Meta
and have Meta be among,
if not the most powerful companies in the world.
And in a world where superintelligence exists,
I'm not sure Meta will have much of a role to play.
Yeah, I wanna ask you about one other angle here
that I saw people discussing,
which was actually about Scale AI more than Meta.
So Scale AI, for people who are not familiar,
they are not sort of an AI R&D lab, right?
They are essentially a data provider to the big AI labs.
So Casey, how would you explain what Scale AI does and
how that might fit into Meta's strategy here? Sure, so the bulk of their business
works like this. They have a couple of subsidiaries. Those subsidiaries hire
people for pretty cheap and then they show them a bunch of content. For example,
they might show them content that might violate Meta's standards because it has
like violence or nudity, and the content moderator will go in. They will say, okay, yeah, this violates the standard,
and I'm going to categorize it and I'm going to feed that back to
Scale AI and then Scale AI is going to label that data and
clean it up and send it back to Meta so that
Meta can then build a machine learning classifier
to sort of create automated content moderation systems.
So it's that kind of service
that has been really important for them.
Now, it's not just content moderation,
some of the other big labs like OpenAI or Google DeepMind are customers of theirs.
And they will have people out in the world labeling,
let's say like a picture of a car or something,
sending that back and that helps to train a large language model.
So we know that to make large language models more powerful,
you just need a lot of not just data,
but like clean, structured, labeled data.
And Scale AI has been one of the biggest providers on that front.
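The labeling loop Casey describes can be sketched in a few lines of Python. This is a minimal hypothetical illustration, not Scale AI's actual pipeline; the function names and sample data are invented for the sketch.

```python
# Hypothetical sketch of the human-labeling loop described above:
# raw items go out to human annotators, labeled records come back,
# and the cleaned, structured records are what train a classifier.

raw_items = [
    {"id": 1, "text": "buy cheap watches now!!!"},
    {"id": 2, "text": "family photo from the beach"},
]

def human_annotate(item):
    # Stand-in for a human moderator's judgment call.
    return "violating" if "buy cheap" in item["text"] else "ok"

def clean_and_label(items):
    # The Scale-style step: normalize the raw content and attach labels,
    # producing structured training records for the customer's model.
    return [
        {"text": item["text"].strip().lower(), "label": human_annotate(item)}
        for item in items
    ]

training_data = clean_and_label(raw_items)
print(training_data[0]["label"])
```

The point of the sketch is the flow: the valuable output is not the raw content but the clean, consistently labeled records, which is why that data stream matters so much to whoever controls it.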
Right. So one hypothesis that I saw floating around this week online is that by acquiring a stake in Scale AI,
Meta was essentially trying to lock up that valuable data for itself and keep it out of the hands of its rivals.
Now, I think that there's probably some multi-year contracts in place.
I don't think it's actually going to be the case that Meta can just sort of unilaterally
decide to shut down Scale AI's business with all these other AI companies.
But I do think it will give them privileged access to a pretty important ingredient in
training these large language models.
Yes, which is one reason why a person I spoke to yesterday, who's sort of like close to this deal,
said that they fully expect that the biggest customers
of Scale AI are going to stop being customers precisely
because they assume that their usage of the product
will flow back into Meta's hands,
and they do not want them to have
that proprietary information.
Ben Thompson wrote an interesting column on Wednesday
saying this might actually trigger some regulatory concerns,
because even though Meta isn't trying to buy all of Scale AI,
it may effectively be removing a very important player from the market at a time when Meta is already under a lot of antitrust
scrutiny. We just wrapped up an antitrust trial that is trying to force them to divest WhatsApp and Instagram.
Yeah, so let's talk a bit about what is going to happen now.
Assuming that this does go through,
here is what I've been able to piece together
about what this new team is gonna be doing, Kevin.
The first thing to say is,
these people are going to be sitting next to Mark Zuckerberg.
So this is something that Zuckerberg does from time to time
is he just will clear out everyone who sat next to him
during the last crisis, and he brings in people to work with him during the current crisis.
So for example, during the Cambridge Analytica crisis, he brought in a lot of his like communications
team to sit around him to like tell him about, you know, all of the, you know, breaking news.
Now, presumably those people shuffled off long ago. Cambridge Analytica was like in
2017, but now they're bringing in the AI team.
And so, you know, if you've always wanted to like bounce ideas off Mark Zuckerberg, that's maybe something you could do. We should also say the
people sitting around him are going to be really rich, not like Mark Zuckerberg rich, but you know,
the Times reported that these pay packages that they're offering are stretching into nine figures.
That's $100 million. I heard one credible report of an engineer being offered $75 million to go
work for Meta. Which we should just say, that's a lot of money, right?
That's like what a star pro athlete would make.
Yeah, and by the way, if you ever say to somebody, how much would it take me to give you for
you to come work with me and the person says $75 million?
Reflect on yourself.
What choices did you make, right?
So they're going to have that team. Now, I've also been trying to figure out: what is this team going to do? Because look, the way that Meta has rolled out this announcement has basically felt like a help wanted ad, right? Officially, they're declining to comment, but read these stories. I'm getting strong hints that someone inside Meta very much wants the world to know that there's a hundred million dollars on the table for the right person, right?
It is basically a help wanted ad saying, come work here.
Okay.
Well, so what happens when people actually take that deal?
This is what I've been trying to figure out.
It's like, okay, let's say you take a hundred million dollars and now you go get your desk
across from Mark Zuckerberg.
What does day one of your work look like?
Is there a plan?
There actually isn't.
The plan is we have to figure out the plan. We have to figure
out how to take the best practices of the companies that we came from, bring those practices into Meta,
and somehow get back in this game. Alexandr Wang is going to be leading that effort. I think
Wang is a capable leader. Like, Scale is a very successful company. The way that they've been
successful is by always kind of pivoting to where the money is.
They've been very good at that Silicon Valley
startup thing of just staying alive
by being very resilient and resourceful.
I want to say though, that building super intelligence
is a very different prospect than building scale AI, right?
Because when you look at what Scale AI actually does,
they help you scale AI, they do not build the AI.
Right, they're sort of like a classic
like picks and shovels company
that is making money by building the inputs to AI,
but not actually training their own frontier models.
Yeah, and so, you know, Wang is 28 years old.
He's going to be now leading a team
of supposedly around 50 people,
some of whom might be making as much
as $100 million a year.
I think that's just gonna be
a very difficult management challenge.
Think about some of the big teams
you may have worked on at your job.
What is the fastest it ever gelled?
Was it less than six months?
If you're somebody who believes
that we are on the precipice
of super intelligence already arriving,
or maybe just AGI already arriving,
you're talking about what?
Six months, a year and a half before this team
has actually been able to maybe ship
their first major project.
So, you know, I am sympathetic to Meta here
in the sense that they don't have another choice.
They had to do something significant
if they were going to get back in this race,
but we should not understate the challenge
of what they are attempting to do
because they just lost the last year.
Yeah. I'm skeptical that this plan of Meta's is going to work.
And there are a couple of reasons for that.
One is that while there are many people working on AI and many
talented researchers and engineers,
the universe of people who have actually built and trained
the biggest language models on the biggest supercomputers is still
quite small. It might be a couple hundred people worldwide. Unfortunately for Meta,
all of those people are already rich. They can work anywhere they want. They can make
whatever they want. These people are writing their own checks. And so I'm not sure that
there is a sufficient amount of money you could pay some of these
people to give up their jobs and come work for Mark Zuckerberg.
The second reason I'm skeptical is that I think that even if Meta does manage to sort
of assemble this Avengers super team of AI researchers, I still don't think they have
an attractive or coherent AI strategy that is going to motivate
these people to work hard there. If you actually look at what Meta has said so far about what it
is doing with all of the AI stuff that it has built, it has basically said two things. One,
it wants to make AI companions. The second thing it has announced is that it is going to build
weapons for the military, right? This came out of a recent story where Meta is going to partner with Anduril, the sort
of military technology company, and they are going to build something like an augmented
reality headset for soldiers in the battlefield.
That might be a worthy project, it might even be a profitable project, but that is not the
kind of thing that top AI researchers
want to spend their time working on,
at least the ones that I'm talking to.
I will close my analysis of this situation by reading you
a text that I got from a leading AI researcher who I texted
this weekend to go ask if they were going to work for
the Meta AI Superintelligence Lab.
All right, let's hear it. LOL, LMAO.
So Casey, I think that tells you about how successful this new recruiting push by Meta
is going to be.
Yeah, I would be more optimistic about this if this was the first big reorg that Meta
was doing in its AI division, but it's not.
The big reorg they did in January 2024 was also not the
first reorg that they had done in this division. You know, you
mentioned a couple of the key ways that Meta has been using AI.
And to your point, this is just like not really inspiring stuff
for a lot of those researchers. But more importantly, I don't
see a way to get from here to the there that they are
envisioning, which is superintelligence.
So look, this is one of the most interesting stories in tech to me right now,
for this reason. Mark Zuckerberg is on many days the most competitive person in the entire industry,
and he's now legitimately behind in a race that he might not be able to afford to lose.
So for that reason, Kevin, I think we just want to keep our eyes on this story
because I suspect this will not be the last big move that Meta makes
as it tries to get back in this game.
All right. When you come back, there's another big tech company
that is struggling to find its AI future.
We'll talk about Apple and what it announced this week
at its annual developer conference. Okay, Casey, let's talk about the other big tech news this week, which is also about a
large technology company that is on the AI struggle bus.
This week was Apple's annual developer conference, WWDC.
And unlike last year,
when the two of us were invited to Cupertino
to take part in the festivities,
we were not invited this year.
We were.
And whenever I get uninvited to something,
I think this company's in trouble.
Yeah, I don't think it is because we were rude or ate too much food at lunch or smelled bad.
I think what's going on here is that Apple is embarrassed about what has happened since last
year's WWDC when they announced a bunch of new AI features under their banner of Apple
intelligence and then many of those features did not actually ship.
Yeah. Last year, they had a story about AI that they were
really excited to tell this year that was not the case.
Yes. So the big thing that people were excited about at
last year's WWDC was this new and improved Siri
that would not only be able to
respond to more complicated questions on your iPhone,
but would be able to kind of pull things
from all of your apps and your data and your text messages
and cross-reference your email with your messages,
with your calendar, and sort of do all that seamlessly.
The classic example was like, hey, send an Uber
to go pick up my mom at the airport
when her flight gets in, right?
Which is like a very complicated multi-part query
that involves communicating with many apps.
And we saw that, we're like, oh yeah,
that'd be really cool if that worked.
Yes, and that did not work, apparently, because Apple still, a year later, has not shipped that version of Siri.
And I still have to pick up my mom from the airport in a regular car, like an animal.
It's a disaster. So we were not there, we were not able to grill Apple executives about what the heck was happening with Siri
and why it has been so delayed in its new and improved form.
But friend of the pod, Joanna Stern from the Wall Street Journal was invited and she did
interview some Apple executives about what was going on with Siri and all these delayed
features and I want to just play a clip from that because I think it really shows you how
defensive they are.
In this clip, Joanna is talking to Craig Federighi,
who is Apple's senior vice president
of software engineering.
Let's hear it.
So many people associate Apple and AI with Siri, since 10-plus years ago now.
Sure.
And so there is a real expectation
that Siri should be as good, if not better,
than the competition.
I think ultimately it should be. That's certainly our mission.
Yeah.
We set out to tell people last year where we were going.
I think people were very excited about Apple's values there,
an experience that's integrated into everything you do,
not a bolt-on chatbot
on the side, something that is personal, something that is private. We started
building some of those and delivering some of those capabilities. I, in a way,
appreciate the fact that people really wanted the next version of Siri and we
really want to deliver it for them, but we want to do it the right way.
When's the right way going to come along?
Well, in this case, we really want to make sure that we have it very much in hand before
we start talking about dates for obvious reasons.
So Casey, they have a mission, they have a vision, they have values.
What they do not have is a date when any of this will be available.
Yeah.
So bad news for anybody whose mom is still stuck at the airport.
I shouldn't keep coming back to that joke.
But no, that's, you know, look, on some level,
it's like, what can they say?
They tried to build it, it didn't work.
It's better not to ship it and to delay it
than to ship something, you know, that doesn't work.
There has been some great reporting
over the past couple of months
about what happened inside of Apple that led us to this point. Mark Gurman at Bloomberg has done a ton of amazing reporting
on this. And the gist is like, there just were not a lot of AI true believers inside of this company.
It really kind of rhymes with the story that we just told about Meta. Apple is working on its own
thing. They have an incredible business. The last thing that they want is to be disrupted by some coming wave of AI. And so
they just kind of gave it short shrift. AI systems don't work
like the systems they know how to build. They know how to
build these rigid deterministic if this then that type of
systems, very polished, very predictable, and they do an
incredible job at it. But AI isn't like that.
It's chaotic.
It's messy.
It's probabilistic.
It doesn't work the same way every time.
They've had a lot of trouble
wrapping their arms around that.
So I want to diagnose more about what is going on with Apple
when it comes to AI.
But first, let's talk about what they actually did announce
at WWDC.
Casey, what were your top highlights
from their announcements?
Well, Kevin, obviously we have to talk about liquid glass.
Now, I don't know if you've seen the YouTube video
of WWDC where they promoted liquid glass,
but the YouTube play button sort of appeared
over a couple of the letters,
so it looked like Apple had announced liquid ass.
So if you're still thinking that that's what they announced,
I wanna correct that, it's actually called Liquid Glass.
Now, what is Liquid Glass?
Liquid Glass is a redesign of the operating system.
And on one hand, I don't want to underrate
the significance of a redesign.
These devices are used by hundreds of millions,
if not more than a billion people.
And when you give something a new look,
it is kind of a big deal, right?
You might have to relearn how certain things work. On the other hand, when that's your
marquee announcement after a year of development, when the last year you were like, the AI future
is here. And this year you're like, control center is a different color. It really speaks
to the kind of difference between the two presentations, Kevin.
Yes, it was such a small ball presentation. I did watch the event from afar.
And I gotta say, it was like very strange
to watch these Apple executives get on stage
and like express like delirious enthusiasm
over like adding polls to iMessage.
You can now start a poll with your friends in the group chat,
which, you know,
I gotta say, cool feature, I'll probably use it a bunch,
mostly as a joke, but that is not the sort of marquee
futuristic vision that I was expecting out of Apple this year.
No, and you know, because Apple made these new features
available to developers basically right away,
we've started to get some early feedback
about how they work.
And some fair number of people are complaining
that this liquid glass look in particular,
it kind of just makes everything harder to read, right?
The basic idea in here is that all of the operating system
elements are like literal glass,
and they'll sort of slide over each other.
And of course, the presentations were like very beautiful,
but then you put it onto your phone
and it's like you find yourself squinting a lot.
Yeah.
And I found myself thinking, Kevin,
about this old Steve Jobs quote that I like.
And I want to acknowledge it's very hacky
and cliche to quote Steve Jobs, but he has this quote,
and it's actually from the New York Times
in this interview he did in 2003 about the iPod.
And the thing that he said was, essentially, design is not how it looks, design is how it works.
And as I found myself looking at liquid glass, I thought, this is a design that is about how it looks.
It is not about how it works. I don't know what this design is supposed to do that it didn't before.
All Apple really said was like, everything is more beautiful than ever, you know, but it's still very familiar, but it's more beautiful.
Yeah, you know, I don't want to tell people, don't make things that are beautiful for their own sake. I appreciate beauty as much as the next fella.
But on the other hand, I thought, this doesn't actually really seem in keeping with the Apple design spirit of the past.
Yeah
Okay, Casey, I want to bring some light to this discussion by quoting another Steve Jobs quote
that was sort of lost in the archives where he said, what if we made a phone where everything
was transparent and you couldn't see anything? Oh, wow, I missed that one.
And so I think the Apple design team really found that and ran with it.
So that's liquid glass. Let's talk about some of the other stuff that came out of this.
Yeah, what caught your eye? Yeah, so the place where it seemed like they put the most engineering into a feature that
might help people just get things done a little bit more efficiently was Spotlight.
Spotlight is the feature if you press Command Space on your MacBook that brings up a search bar.
It's great for finding files.
Hasn't evolved much over the years,
been around a long time.
This year they were like,
well, we're gonna start to convert this
into a little bit more of what they call a launcher app.
We talked about launcher apps on the show before.
I love and use one called Raycast.
And the basic idea is,
this could be kind of the command center for your Mac.
So instead of just searching for a file
or like, you know, opening Keynote,
it's now gonna be about actually using it
to take some actions, run some shortcuts,
that sort of thing.
Like what could you do with the new spotlight
that you couldn't do with the old one?
What's an example of something they might type in?
So for example, you could like trigger a shortcut.
Shortcuts are these like automated routines
that you can set up on your Apple devices.
So maybe you have one that's like, okay,
I'm like, you know, going to bed for the night,
like turn off all the lights in my house, and you can just open up Spotlight, run that shortcut
and do that without, you know, having to do it some other way. The main benefit of doing it this way
is that it just becomes second nature to hit command space and then do something as opposed to
grabbing your mouse, looking for the icon somewhere on a desktop, double clicking, opening it up,
right? It's just you're just trying to take a few steps out of it to get things done slightly
faster.
Now, I'm very conscious as I describe this of like, this does not sound that interesting.
I didn't say it.
Yeah, but and I say that as somebody who like loves little, you know, productivity hacks
and like getting stuff done faster on my computer.
But that said, it was like at least in the spirit of the Apple I love, which is like,
help me get more stuff done,
make me a more creative and effective person.
Okay, so new spotlight, what else caught your eye?
There are a couple of like lightly interesting new features,
like there's live translation,
although we're not exactly sure which languages
that's gonna be available in.
Something I'm excited about is there's apparently
a phone app that's coming to the desktop,
so you can like start calls from your Mac, which I think is probably something that I
will do a lot.
They are also, yet again, rethinking how the iPad works, right?
Like how iPad should operate has been a kind of longstanding unresolved question where
it's like, it looks a lot like a Mac, but it doesn't work quite like a Mac.
This year, it's starting to feel ever more like a Mac because, Kevin, you can resize the windows on an iPad now.
Thank God, every day for the past 10 years,
I have woken up in a cold sweat thinking,
when can I resize the windows on my iPad?
One feature I'm not particularly excited about
is you will now be able to change the backgrounds
in your iMessage chats.
And I am in some group chats with some real jokers and I feel
like this could potentially wreak havoc in my group chats. I also saw
they're introducing a typing indicator for group chats, so you can now
see the little bubbles that say, like, you know, someone's typing.
Yeah, you can already see that on a one-on-one chat; for some reason you
couldn't see that in the group chat. So, by now, I feel like most of our listeners
have been like, one, I can't believe
they're still talking about this,
and two, how is that everything
that Apple announced this year?
But I think it's important just to mention, for this reason.
For the past, call it a decade,
I feel like Apple's main priority
has been trying to figure out
what is a seventh
subscription we can sell you on this iPhone.
Right.
Yes.
And while that was happening, the future was being born across town and they were not paying
attention and they haven't really started to pay the price for it.
But you come to the end of this presentation and you can kind of start to see the cracks
in the armor of a company that has looked pretty invincible for a long time
Yeah, I watched this presentation and I thought
This is a company that has not yet
Admitted that it made a bad bet when it came to AI
This is a company that is still not bought into the idea that language models are important or powerful or useful,
or that they might unlock
new ways of interacting with computers.
I think you're right that it rhymes with our last segment on
Meta, because Apple had its own version of Yann LeCun,
a senior AI researcher who was brought in
to lead the strategy of AI at Apple.
This guy named John Giannandrea, or JG as he's called,
was brought in from Google years ago to
oversee all of Apple's AI research.
According to Mark Gurman at Bloomberg,
JG did not believe in large language models either.
He thought they were a distraction.
He was convinced that consumers were turned off by chat bots.
He didn't think that Apple should
be putting a lot of efforts and investment
into developing its own language models.
And I think we're really now seeing
the fruits of that decision coming out or not coming out,
in Apple's case, on stage at WWDC.
Yes.
Now, here is what I will say in Apple's defense, Kevin.
For everything that we have just said,
it is also true that if you were to pick up a Pixel phone,
which is the phone made by Google,
which has access to all of the much more advanced AI features
that Google offers,
I still don't think there is one feature on that Pixel phone
that would make the average person
say, oh, wow, I got to ditch my iPhone for this. The way that Google has figured out AI, I am so
excited to ditch, you know, iMessage and become a green bubble over in this other ecosystem. And I
think that speaks to the fact that for as advanced as these systems are getting, there has been a
surprisingly long lag in turning them into really good products
You know, just this week Amazon said that its new version of Alexa,
which is sort of souped up and AI-powered, had finally reached 1 million customers. Now, Amazon has a lot more customers than that.
They had been rolling this thing out at a glacial pace
because they're still so uncertain about the reliability that they're trying
to make sure that it doesn't blow up in its face. So while
we're being hard on Apple here, I just want to point out that
really, it's all of the tech giants that are having this
problem. Folks like you and I are having a pretty good time
figuring out how to slot AI into our lives, and it mostly just
involves using chatbots. The other big companies, though, have not figured out
how do we bolt this on to what we're doing
in a way that is gonna make people really excited.
Yeah, there's one more Apple related story
from the past week that we should talk about
and it is not something that was discussed at WWDC,
but it is something that a lot of people
have been emailing us and that a lot of people I know
have been talking about.
And this is this research paper that came out of Apple's machine learning research division.
And this paper was called "The Illusion of Thinking: Understanding the Strengths and
Limitations of Reasoning Models via the Lens of Problem Complexity," which I'll say
could have used an Apple iOS rewrite on that.
All right, well, so try to describe, Kevin,
concisely what did this paper say?
So this paper was basically an attempt to pour some water
on the hype around these so-called reasoning models,
which are kind of like large language models with
an additional step performed at
inference time to improve the outputs.
So we talked about this before,
OpenAI's o1, the latest versions of Gemini and Claude,
they all have these reasoning features built into them.
What this publication,
this research paper said, is that this is not actually reasoning,
that these systems are not actually doing anything like thinking,
that there are some big limits to how much this approach
to improving language model performance can scale.
And basically, they released this and it was immediately seized on by
a bunch of people who said,
aha, there is proof that the AI companies are on the wrong track, that all this is hitting a wall,
and that these models are not actually getting us closer to general intelligence.
Yes, this paper was beloved by what I have come to think of as the AI cope bubble.
So people who are looking for reasons not to worry about AI, this paper was manna from heaven. Yes.
So Casey, why is this paper so controversial
and so beloved by what you call the cope bubble?
Well, I think one issue here is essentially semantic,
which is the paper is trying to make the case
that as you put it, this is not actual reasoning,
which is to say that large language models
are not reasoning in the way that human beings are doing.
I think everyone involved would stipulate like, yes, that is the case, that large language models
do not work in the exact manner that the human brain does, even if there are maybe some interesting
parallels. So it's presented as this gotcha. Aha, these things are not reasoning like human beings,
when in fact, again, anyone who's paying attention could have told you that from the start.
The second problem with this paper
does relate to just the limitations
of the way that these models are constructed,
which is they can only output a certain number of tokens.
And so in order to reason through
the most difficult problems given to them by the researchers,
they simply did not have enough room.
Now, if you want to say that is a reason why large language models are bad, okay, fine.
Yeah, there are like some problems that they can't solve.
But that is not how this paper has been received within the AI cope bubble.
Within the AI cope bubble, it is, oh, well, this proves that LLMs can't reason like human beings,
and therefore we should just junk it, because it is essentially not real and it is not going to have any meaningful impact on my life.
Yeah.
So I would say this paper did not change my sort of view of large language models, or the kind of reasoning models that have become popular recently.
It did, however, help me understand what is going on inside Apple, where you simultaneously
have a company that is trying to be seen as being on or close
to the AI frontier, but where a lot of the intellectual
firepower and research is still being directed
at trying to prove that all of this is just hype and fake
and it doesn't actually work
and we should maybe stop investing in it.
Yeah, I think we should say this is like probably like
Apple's like highest profile AI paper,
at least in the last year, maybe ever.
And I think it had a lot of problems.
Yeah.
So let's tie that back to WWDC, Kevin.
What does it all mean?
I think what it means is that Apple is still undergoing
this kind of identity crisis about what it wants to be.
Is it a hardware company that wants to make phones?
Is it a software company that wants to sell subscriptions
to put on those phones?
I think both of those business models are being challenged right now. Apple's
iPhone sales have been sort of flat to declining over the last few years. They really haven't
gotten that much different from model to model. We may kind of be reaching the sort of pinnacle
of what a smartphone can be. And its service business is being challenged by all these antitrust actions and
these court decisions that say things like you can't
stop people from paying for
things outside of the Apple App Store anymore.
So I think they are still struggling to find
the next gusher of cash that could
replace declines in some of these other areas.
I don't think they have sort of come up
with a solution yet, but it sounds like they are still
trying to make up their mind about AI
and how big a deal it is.
I agree with all of that.
Fortunately, Kevin, you know, as you know, on this podcast,
we always try to be problem solvers.
We like to come up with solutions for the companies
that we talk about.
And I think I know what Apple could do
to turn the ship around here.
What's that?
They have to hire Alexandr Wang.
I don't care how much it costs.
I think they go to him right now.
They say 49% stake.
We'll take all.
How much money do you want?
We can afford it.
Just name your price, Alex.
And, you know, not only would that turn around
their fortunes in AI, Kevin,
think about how mad it would make Mark Zuckerberg.
Oh boy, he would blow a gasket over that one.
Siri, throw to commercial.
Didn't even work.
Siri, pick Casey's mom up from the airport.
She's been there for a year.
Actually, can I tell you what happened on my computer when I said just now,
Siri, throw to commercial, it opened up a map to something called
the Commercial Coverage Insurance Agency.
No. Why?
You're looking at it right now.
This, you can't.
When we come back, it's time to pass the mic.
We'll hear from you, our listeners,
about how your jobs are changing as a result of AI.
Well, Casey, in the past few weeks we have been talking a lot about a different topic
related to AI, which is what is happening with AI and jobs.
Yes, you recently wrote an article saying that we were starting to see the early signs
of AI job loss.
And so we threw it out to our listeners to say, what have you been experiencing?
Yeah, so today we're going to go through some of the many, many responses we got to our
call out for stories about AI and whether it's taking your jobs.
And I think we should start with a question that I think captures a common frustration
that we hear from listeners.
Oh, that we say "like" and "and" and "um" too much?
That we're too handsome?
No, here is listener Christian Danielson.
Hey, Casey and Kevin, this is Christian from Hood River, Oregon.
I've noticed in a lot of interviews, yours and others, with tech executives,
that almost all of them seem to think there's going to be a categorically different level
of job displacement due to this technology rolling out.
And yet, almost all of them also don't seem
like they have any real concrete plans,
or aren't putting nearly the amount of energy they put
into their products, into how to mitigate that.
It just seems like they don't feel like
it's really their responsibility
or it's someone else's problem to manage that side of things. So I'm hoping you might pose the question
why it is that the government shouldn't really frankly just tax the shit out of their technology
both as a way to potentially compensate people for all this wealth that's going to be concentrated into the hands of a very small number of people, and also to slow the technology down a bit until our aging policy process can kind of catch up. Thanks.
Yeah, so why is there no sort of plan from these executives, Kevin? And what do you think about the idea of taxes?
Yeah, I think it's a really useful and important point.
I think many of the executives and the companies building
with this technology, their goal is just to automate the jobs away, right?
They are not thinking or talking much about what will happen
on the other side of that to all the people whose jobs are displaced
if they are successful.
And, you know, some of them have done
some studies or made some suggestions.
Sam Altman actually funded
a big research project where they gave people
these unconditional cash payments and studied,
what would UBI or something like UBI do?
Dario Amodei from Anthropic has actually
proposed something like our listener is suggesting.
He called it the token tax.
And basically the idea is if you have
all these AI models out there,
generating billions of dollars of
revenue by automating people's jobs,
some portion of that should go back to fund
the sort of welfare programs and
social safety net for the people who are displaced.
But I will say that most people I've talked to about
this issue inside the AI industry
are not even getting that far.
They are not even proposing solutions
or they're just kind of doing hand waving
about how the government will have to step in
and take care of people who lose their jobs this way.
I would like to see a lot more people
not only coming up with ideas,
but actually advocating for those ideas with policymakers.
Yeah, I mean, the main thing I would say is that, like, it's not up to the corporations
to run our society.
Like, that is the job of our elected officials, who should absolutely have plans in place.
They should be developing them right now for a world where we do experience significant
job loss through automation.
I think most lawmakers are probably getting on board
at this point with the idea that this is,
if nothing else, a real threat.
And so it's unfortunate that there has just been
so little movement in this direction,
because I do think a lot of this is gonna come true,
and we're gonna wish we had better plans in place.
Yeah.
Now for some listener stories.
This first one is from the perspective of a young person navigating a tighter labor
market.
Listener Sarah writes, Hey, hard fork, I'm one of the junior software engineers who was
thoroughly depressed by the latest episode on the AI job apocalypse, mostly because it
was exactly in line with my current experience.
Oh, I graduated in 2022 and felt very lucky to get an amazing job straight out of college where I felt very
supported and valued by my team. That entire team was laid off
last year to be replaced with cheaper human labor, not AI. And
after a grueling job search, I ended up in a very large
company that's a well respected household name. They're not
really a tech company, but the leadership wants us to embrace
that culture and has proclaimed us to be an AI first company.
Developers are evaluated based on what percentage of our code we say is written by AI,
and those with low scores are laid off.
Obviously, we all say that most of our code is written by AI now.
It's been thoroughly depressing working here,
and I've been looking to move jobs since about my second week,
but there are almost no openings for someone with only two years of experience.
I think my only real chance is to stick around for a year
and hope that my career still exists by then.
With some luck, maybe I can make it
into a mid-level position
before the ladder is pulled up behind me.
I feel terrible for the people just now graduating."
Wow, does this one break my heart?
Woo.
Yeah.
Can I just say,
this is what we've been talking about the whole time.
Yes.
Is people like Sarah having this exact experience?
Yes.
And what makes this particularly bleak is this is something that I actually do think
is going to become a major problem for these companies, is that they are just going to
lose their pipeline of their future leaders, right?
If you are replacing your junior workers with AI or just forcing everyone to use AI, you are really neglecting your own future because you are not
doing the kinds of skill building and training and
mentorship that is going to allow people like Sarah,
who may be your next executive,
to build the skills and the experience that she
needs to come in and do that job. Let her cook.
Yeah. But here's the problem.
I think it's so silly that companies like this
are creating incentives for their workers
to lie to them about how they are using AI.
You're just going to get a very distorted sense
of what AI is doing in your company.
And then if you lay off those people
because you're thinking,
oh, AI is already doing 80% of everything,
then you're gonna find yourself in a lot of trouble.
So this just seems like a classic self-defeating, like corporate thing.
And these people need to get a better sense of what's really happening.
But in any case, Sarah, thank you for writing in.
And, you know, here's hoping that your next job is better than this one.
All right.
Here's a story we got from an executive.
This is from listener Joseph Esparaguera. He writes, I'm the CFO
of a $150-million-plus home remodeling business.
Wow. Okay, brag. I'm
in the wrong business. I'm reaching out because I think I'm living in the awkward
middle of the AI transformation story. Not at a tech startup, not at a Fortune 500,
but in the trenches of a mid-sized company where AI could and should have massive impact,
especially in accounting and HR.
He continues, I'm trying to get ahead of the curve.
I want my current staff to be the ones who survive and thrive as AI reshapes their fields,
but I'm hitting resistance.
They'll use AI to clean up an email or write a job posting, but they don't seem to grasp
or want to grasp the bigger opportunity.
I believe AI should let us do more with fewer people and the ones who adapt will stay, but
if my current team doesn't evolve, I'll be forced to hire different people who will."
Casey, what do you make of this email?
So I suspect that this is playing out at a lot of companies where you have managers who are more excited about AI than their workers
are.
I think this is true of lots of different kinds of software, by the way.
I remember I used to get really excited about project management software, like Asana, and
I would try to get my old company to adopt it.
And then that actually happened.
The company adopted it and no one wanted to use it because it was like, why do I want
to go fill out a new form every day
saying what my tasks are, you know?
So it's like a lot of times software has more obvious value
to the manager than it does to the worker who, you know,
in many cases is just trying to get to 5pm
so they can like get home to their family.
So I think this is like kind of a durable tension
in workplaces.
At the same time, I think that this is gonna be part
of like the rough part of this transition is more and more managers being like, No, really, like you actually have to use this thing. Because if you're doing it another way, it is going to make you slower and worse at your job. And so I expect that there are going to be a lot of clashes. By the way, I think this opens up a lot of opportunity for listeners like Sarah who can show up at the front door and say,
Yes, I know how to use AI and you're not gonna have to twist my arm into doing it.
But I think there's gonna be a lot of pain along the way.
Yeah, I think this is a really important moment for a lot of companies that are starting to think about how to use AI.
And my intuition on this is that the companies that are having the most success with AI right now,
are the companies that are doing this in a very bottoms up way.
They are soliciting ideas from workers about how they could
use AI to maybe improve
the parts of their job that they don't love doing,
or maybe eliminate them altogether.
They're holding hackathons or having days set aside
to just get together in a room and figure out how to use this stuff.
They are not imposing it from the top down.
They are not the ones sending memos out saying,
everyone must use AI,
and we're going to be tracking how much you're using AI.
If you don't use AI,
we're going to replace you with someone who will.
I think that is a short term solution.
And that's the direction, unfortunately,
I think a lot of companies have chosen to go.
But I don't think that's a strategy for durable transformation.
You really need to get people excited about this and thinking about what it could do for them.
Well, so what does Joseph do here?
Because, you know, it sounds like if he doesn't act, there isn't going to be any bottoms up enthusiasm
for AI at his company.
I think what you do is you basically start a competition among your employees.
You say, we're going to set aside a day or a half a day, or we're going to do an off
site sometime in the next few months.
We're going to give everyone access to all of the tools.
We're going to buy them subscriptions to all the tools they might possibly need to do their jobs using AI.
The person who comes up with the best idea or
the team that comes up with the best idea.
Gets to live. We'll call it the Hunger Games.
No, they get a bonus.
They get a reward of some kind.
You make it a thing where people are excited to contribute
because it is in their best interest to do so.
That's what I would do if I were the CFO of a company, which let's say it, we're all glad I'm not.
Well, but the day is young. Who knows what might happen to you later, Kevin.
All right, now let's hear from a listener who feels critical of the approach that some executives are taking to AI.
So this person writes, Hey guys, while my job isn't being replaced by AI yet,
my boss is completely obsessed with it
without actually doing anything meaningful with it himself.
He's effectively put a hiring freeze on all process jobs
because he believes that AI can do them better
and more importantly, cheaper.
I'm in charge of the sales and marketing teams
and my very meager head count ask as we grow rapidly
is challenged or ignored
because there's an AI tool
he heard of somewhere.
I get messages at all hours from him
with links to hacky LinkedIn posts
full of emoji bullet points about how Excel, Word,
PowerPoint or insert program here will soon be obsolete
thanks to these new AI tools.
Or here are 20 AI miracles to revolutionize your workload.
You know, our listener says,
I'm far from being an AI skeptic.
I make use of it daily.
But honestly, maybe I will lose my job by my own hand soon
because his attitude is exhausting.
And right now, I just need a few more human people
without spending all my time going down rabbit holes
of half solutions or privacy nightmares.
I think the time spent on reading up on AI
and testing bad AI right now isn't considered enough
when looking at the cost benefit analysis. So Kevin, what do you make of this listener's dilemma?
I think this is really interesting. It does sort of hit on the fact that there's like a new kind of
boss emerging in the halls of corporate America, which is like the AI-addict boss.
We've heard a lot of stories along these lines of like, my boss is completely obsessed with AI.
And I think it's tough, right?
I think this is a very good point that, like,
businesses have immediate short-term needs that AI cannot meet yet.
And maybe by thinking about sort of where this stuff is all heading so much,
you are actually not listening to your employees who are telling you,
just give me three people so that I can solve this problem.
And I don't know what to do about that because a manager's job, an executive's job is to
think about and plan for the future, but you also do have these very short-term needs that
need to be addressed.
Yeah.
I mean, here, my question to the big boss here is, what is the actual objective that
we're trying to hit?
Right? It seems like maybe there's too much discussion about tools in this workplace and not
enough discussion about goals and what is the best way to get to those goals.
You know, it sounds like this person has a pretty informed perspective that AI is not going to be
the thing that gets them to the goals that they have. And the manager needs to listen to that.
Yeah. Have a conversation or post on LinkedIn.
They'll probably read it there.
All right, finally, let's hear a voice memo
from listener George Dilffy,
who is trying to find some short-term solutions
to keep the staff he trains employable
in this changing market.
Hey guys, my name's George Dilffy.
I live in Stamford, Connecticut,
and I work at a high-growth B2B SaaS startup called G Clay. And in my role, I actually head up the support team.
So one of the things that I've sort of leaned into is trying to hire really, really good
people for our support team, but also sort of like turning those folks into kind of expert
generalists. So the idea being that like, they're rotating through different parts of
the company sort of learning about product or learning about engineering or learning about marketing.
With the hope that they've sort of gained a number of different skills across the company
and can sort of just generalize into any other departments.
So just wanted to share. That was pretty interesting. Love the show. Thanks so much.
Tell me, what do you make of this one?
I like this one. I think that support and customer service are always talked about as being
the first jobs to go under the new AI regime.
We've talked about some companies that are trying to
develop these AI customer service chatbots.
But I think if you are working in customer service,
you don't want to just be reading off the script on
a computer trying to help people solve their problems.
You really want to offer a more bespoke,
personalized high touch kind of service.
One of my long-term complaints about tech companies is
that they just do not take customer service seriously.
For many years, people have said,
there's no way to get someone on the phone if something
happens to your Facebook account,
or your Instagram account, or your YouTube account.
I think people at the senior levels of
these companies should be doing
a rotation through customer service just to get
a sense of what their customers
and users are actually experiencing,
and maybe that would lead them to
invest more in these areas.
So I think this is a good idea.
I think that the experience of doing customer service,
if you are good at it and are not just reading off a script on
a computer is useful in many, many jobs.
I think that in the future,
that will become very important,
especially as the more rote and
routine parts of the job get automated. What do you think?
Yeah, I think that people who work in customer support roles
often have a much better sense of what's
happening in the business at the ground level than executives do.
And so I love the idea that we're
creating new opportunities for those people.
I think that those folks can often just bring experiences
to the roles that you're just truly not
going to get with an AI system.
All right, Casey, have we said enough on AI and jobs this week?
I think we have.
We thank all of the listeners who
wrote in to share their stories.
I imagine this will not be the last time
we return to this subject.
But it's very clear, Kevin, that already we're
starting to see the effects of AI on the job market.
And I imagine that's only going to accelerate from here.
Yeah, and I think we're going to have some more conversations on this topic coming up soon.
We won't spoil them now, but let's just say this is an area where I think we are going to spend a lot of time,
because this is something that many, many people out there are starting to experience.
Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited this week by Jen Poyant.
We're fact checked by Anna Alvarado.
Today's show was engineered by Alyssa Moxley.
Original music by Marion Lozano, Alyssa Moxley, and Dan Powell.
Video production by Sawyer Roquet, Pat Gunther,
and Chris Schott.
You can watch this full episode on YouTube
at youtube.com slash hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad,
and Jeffrey Miranda.
If you liked this episode and you found any of it useful or interesting or maybe a little funny,
you can share it with a friend or leave us a review on your favorite podcast app.
You can email us as always at hardfork@nytimes.com.
Send us your job offers with nine-figure salaries.
I'd settle for eight.
Such a humanitarian.