Factually! with Adam Conover - An AI Expert Challenges an AI Skeptic, with Ethan Mollick
Episode Date: December 31, 2025
Even if the AI bubble bursts, the technology won't just disappear. We're going to live alongside some version of AI, so we have to ask: what does our future with AI look like? This week, Adam invites Ethan Mollick, AI expert and professor at the Wharton School of Business, to challenge his skeptical view on AI and look at how it might impact our daily lives. Find Ethan's book at factuallypod.com/books
SUPPORT THE SHOW ON PATREON: https://www.patreon.com/adamconover
SEE ADAM ON TOUR: https://www.adamconover.net/tourdates/
SUBSCRIBE to and RATE Factually! on:
» Apple Podcasts: https://podcasts.apple.com/us/podcast/factually-with-adam-conover/id1463460577
» Spotify: https://open.spotify.com/show/0fK8WJw4ffMc2NWydBlDyJ
About Headgum: Headgum is an LA & NY-based podcast network creating premium podcasts with the funniest, most engaging voices in comedy to achieve one goal: Making our audience and ourselves laugh. Listen to our shows at https://www.headgum.com.
» SUBSCRIBE to Headgum: https://www.youtube.com/c/HeadGum?sub_confirmation=1
» FOLLOW us on Twitter: http://twitter.com/headgum
» FOLLOW us on Instagram: https://instagram.com/headgum/
» FOLLOW us on TikTok: https://www.tiktok.com/@headgum
» Advertise on Factually! via Gumball.fm
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Transcript
This is a headgum podcast.
I don't know the truth.
I don't know the way.
I don't know what to think.
I don't know what to say.
Yeah, but that's all right.
That's okay.
I don't know anything.
Hey there, welcome to Factually.
I'm Adam Conover.
Thanks for joining me.
Again, you know, if you've listened to the show or any of my YouTube videos, you know that I've taken a pretty skeptical stance towards AI to say the least.
The reason for that is that, first of all, I'm a skeptic at heart.
And when I hear a lot of people saying one thing, I want to ask: is the opposite true?
And, you know, if you look at how we're currently basing our entire economy and the stock market and GDP around infinite AI growth, I think the position of "what if that's not a good idea?"
is, let's just say, under-explored.
You know, when 98% of people are betting on black,
I say, well, we could put a little money on red, just for fun.
That's the kind of thing I do strategically as a communicator,
and because, you know, you as the audience seem to enjoy hearing those arguments.
But that doesn't mean that maximal skepticism about AI
is going to be right all the time.
It's not the case that the only possible outcomes are an AGI that replaces all people
and an AI that is totally useless and does nothing at all;
we need to think about what the middle ground outcome might be
because that is, of course, the most likely scenario.
And think about this: 800 million people currently use ChatGPT every week.
So it's not crazy to think that some of them are getting something useful
for their jobs and/or lives out of it, right?
So as important as it is to push back against the ridiculous levels of AI
hype that we hear from people like Sam Altman and others who are profiting off of this incredible
investment in AI technology, we have to also consider, hey, this technology might not be
totally useless. It might actually change our society. And so we need to start asking,
what can AI actually do? What are the outcomes that are actually likely? What does AI do that is
potentially better than humans? And what does it do worse than humans? And how is it actually going
to impact jobs, work, and the economy at large?
And, you know, I'm not someone who wants to retreat inside of my assumptions or my
preordained conclusions about what I think is going to happen.
I want to be challenged on this shit.
And that is what we're doing on the show this week.
I want to bring on someone knowledgeable about AI, what it is being used for, and what
it is likely to be used for in the future, to challenge my skepticism.
And for this task, we have an absolutely perfect guest.
His name is Ethan Mollick.
He's a professor at the Wharton School of Business
where he studies AI and innovation
and he writes a fantastic Substack
called One Useful Thing.
I found this conversation incredibly enlightening and challenging.
He really provoked me to think more deeply about AI.
I know you're going to get a lot out of it.
Before we get to it, I want to remind you
that if you want to support the show,
head to patreon.com slash Adam Conover.
Five bucks a month gets you every episode of this show ad-free.
We also have an awesome online community
that we would love to have you join.
And if you want to come see me do stand-up comedy with my human mouth straight
to your human ears, well, coming up soon, January 8th through 10th, I'll be in Madison, Wisconsin
at Comedy on State, one of the best clubs in the country. January 15th through 17th, I'll be in
Fort Wayne, Indiana. After that, Louisville, Kentucky, Houston, Texas, and San Francisco, California
on February 19th through 21st, where I will be recording my brand new hour of comedy as a special.
I would love to see you there.
Head to Adamconover.net for all those tickets and tour dates.
And now, please welcome Ethan Mollick.
Ethan, thank you so much for being on the show.
Thanks for having me.
I'm glad to be here.
I love the wall of board games behind you.
First of all, I feel like that's very personable
and our audience is really going to relate to it.
So fantastic.
I actually play them as well, so it's not just for show.
That would be probably the nerdiest possible thing
to try and impress people with.
Oh, you'd be surprised with our audience.
I think they might really respect it.
But let's jump in and talk about artificial intelligence.
And I just want to, can I start with an anecdote from my own life?
Because, look, anyone who watches this channel knows that I have taken a pretty skeptical position on AI in most of my communications.
But, you know, the technology is still neat.
I'm, you know, trying to make sure that I understand what it does.
I spent the last month or so saying, you know what, let me just use ChatGPT a little more often to do the sort of
things that I hear other people doing to see what effect it has.
And I've had some, like, interesting conversations.
Oh, that was maybe a little bit helpful.
It helped me work through a financial decision or two, right?
Helped me break something down in an interesting way.
But I got in the habit of just like, oh, I'll use it when I want to answer a question.
So I was doing that for a couple weeks.
I was watching the movie The Godfather 2.
And I get to a scene and I'm like, is that Bruno Kirby?
Looks like the actor Bruno Kirby.
I'm not sure.
I asked ChatGPT,
is Bruno Kirby in The Godfather Part II?
ChatGPT says, no, Bruno Kirby is not in The Godfather Part II.
He's only in deleted scenes of The Godfather Part II.
So I start going, huh, well, I got this movie on Apple TV.
Did I get the extended cut?
You know, did Coppola do another cut?
And is that what I'm watching?
Huh, I wonder what's going on.
Gets to the end of the movie.
Bruno Kirby's name is in the fucking credits.
I look it up at IMDB.
His name is in the credits.
He's in the movie.
He's a main character in the movie.
This is the easiest question to answer.
90% of the audience could tell you
fucking Bruno Kirby's in The Godfather Part II.
And not only did it say no, incorrectly,
it invented a more specific false answer
that he's not in the movie,
he's only in the deleted scenes.
And that moment made me go,
how can I ever use this technology ever again, right?
If it answered such a simple question so poorly.
So this is where my skepticism is founded from.
And I'd love for you to give us a more positive
case, right? In the face of that kind of error-prone hallucination shit, which I understand
cannot actually be removed from the technology to some degree, what is the real use case
for this technology other than really bad personal therapy that eventually tells you to
kill yourself if you talk to it long enough? There's so much to unpack. Okay.
First of all, part of this is just the fault of how these things are built, even how
their use cases are. So you may have run into a classical case of hallucination, right? The
AI says something and then justifies it, um, with more lies, which is sort of an outgrowth of
how these AI systems work, which is they can only go forward. If it says the words, no, he's not in
the movie, um, it can't go back and delete that. So then it tends to create an elaborate
justification for its false answer. However, um, were you using a paid or free version of
ChatGPT for this? In this particular case, I was using the
Siri ChatGPT integration, where you ask Siri to ask ChatGPT, is Bruno Kirby in The Godfather
Part II? And it gives you back a shorter answer than it would have on the web.
Okay, so I can tell you that one of the things, there's a lot of things that change as
AI models have gotten more advanced and larger and bigger scale, which has all sorts of
consequences. And one of those is hallucination rates have dropped. And we have some information
on that. There's been some good studies in medical journals and other stuff showing
hallucination rates drop as model size gets larger, often below human level.
However, if you're talking to chat GPT and you're just using their auto feature,
and I don't even know what model Siri is using,
it's probably one step down from there.
There are basically two versions of GPT-5 or 5.1.
One is the chatty version that's designed to be your friend,
and the other is the serious version that answers questions and looks things up on the web.
And the auto router decides whether that's important or not.
So it probably decided this is a trivia question, send it to the dumb model.
The dumb model makes lots of mistakes, but it's supposed to be very fast and chatty.
You don't know that.
They don't tell you that.
which is another reason why these systems are very confusing.
I'm very convinced that if you went,
and we can even try it later,
if you went to GPT-5.1 Thinking
or one of the more advanced models
or Gemini that has web access,
and you ask that question,
you'd probably get the right answer.
But there's a reason to be suspicious
because why would you know all of these things, right?
And, you know, why would you take my word for it next time?
So there's a lot of things happening at once in AI
that are both good and bad.
There are things that are kind of miraculous
and things that kind of suck,
and the user interfaces are complicated
in ways that are hard to
explain to people in a first pass.
So I would be willing to bet that you'd get the right answer out of any modern model
asking the question, but you're going through Siri to ChatGPT, which is probably routing
to a worse model, and things change very quickly.
So I get your suspicion.
There are tons of positive use cases, too, and we actually know about error rates overall
and have information about that.
So it's a complicated problem.
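For anyone who wants to try what Ethan is describing, here is a minimal sketch of bypassing the auto-router by requesting a specific model directly through the OpenAI API. The SDK call pattern is real; the model name is an assumption, so substitute whatever advanced or "thinking" tier your account actually exposes.

```python
# Minimal sketch: pin a specific model rather than rely on auto-routing.
# Requires the official OpenAI Python SDK and an OPENAI_API_KEY in the
# environment. The model name below is an assumption, not a guarantee.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5.1",  # hypothetical "thinking"-tier name; swap in your own
    messages=[
        {"role": "user", "content": "Is Bruno Kirby in The Godfather Part II?"}
    ],
)
print(response.choices[0].message.content)
```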
Well, thank you for explaining my particular problem.
And I do think it's a marketing problem
on the part of these companies
to create these interfaces
where they want AI to be something
that you don't need to know anything about.
You just ask the magic box
the question, and the magic box gives you the magic answer,
which is kind of divorced from almost every other piece
of technology we've ever had,
which sort of asks you to know a little bit
about how it works,
like a normal computer, you know,
or a computer 20 years ago,
be like, you should understand what RAM is.
And now they're like, don't fucking worry about it.
So part of that is, I think, a bad choice
on the part of the companies.
But let's jump into it.
What are some positive use cases for AI right now?
And from there, let's get to a discussion of how this might change the future of work in our society.
Sure, there's tons of them.
I mean, look, at the time we're recording this, it is three years and a day since ChatGPT came out, right?
So by any stretch, we're in the early stages of this technology.
And it is, by the way, the fastest-adopted economically consequential technology in history, as far as we know, but still early days.
And so we have a lot of good and bad stuff happening all at the same time with AI.
I could tell you, for example, on the use case side, there's a nice controlled experiment
out of the World Bank that shows that kids in Nigeria using AI as a tutor with teacher
help had huge improvements in performance.
We've seen similar things tested at Stanford and Harvard and other places.
But if you just use it on your own for education, it tends to just give you the answer.
People tend to cheat, and they learn less.
But using it in the right kind of setting, you do more.
We have some nice controlled experiments showing that the most recent AI models actually often outperform physicians in giving diagnoses, but again, they don't do everything a doctor would do. So you might want to use it as a second opinion, but I wouldn't use it as a first opinion. We're now starting to see the first original science being done by these systems. They're capable of doing math that we weren't able to do before. As somebody who uses these systems a lot, they're extremely helpful for doing real science. I don't do all the work autonomously with AI, but certainly
for coding. You get huge improvements in performance and outcomes. I mean, across a wide swath of
economically valuable areas, AI is a good expert advisor with caveats, right? So, I mean, it's weird
to me to have the question, is there value in this when there's a billion people using these
systems on a weekly basis? We have survey results from companies showing that they're getting
positive returns on investment. I guess the anecdotal failures are absolutely true, but that doesn't
preclude the fact that there's actually lots of value for lots of people in these systems.
Well, I guess the problem is, and look, some of the news I look for is self-selecting, right?
I'm in a community online that is generally more negative on AI, and so negative stories get shared
further. But there was the MIT study recently that has been quoted all over the place that
says that most companies aren't seeing financial benefits from LLMs. You're shaking
your head. Tell me why. Well, okay. So, you know, not to criticize the study
too much. But it was interviews of 52 people at a conference by one person who coded it
themselves. And as somebody who cares a lot about data, right, I worry that people who want AI to
not succeed. And there's lots of reasons why you'd be worried about AI, corporate control or
changes to our future or the way they're trained. Like, there's lots of legitimate problems,
but they all get grouped into a mega thing, which is like, AI sucks. And let me pick out cases
where AI sucks. I think if you looked at that study, you'd be like, what's going on here?
And there's a dozen other studies since then that have found different results.
I mean, we did a controlled, so I've done two controlled experiments at large companies
where we found large performance gains from AI use.
When I talk to corporate people in corporations, they're finding use cases.
My colleagues at Wharton just completed a large tracking survey.
75% of companies say they're getting positive ROI from AI, only 5% negative.
Like, I just feel like we're looking for these cases, like this is definitely going to fail.
This sucks.
Look, I found a sucky case.
And it's really funny that you kind of illustrate with a case of failure,
which is legitimate and real, but you also just said you had a lot of useful conversations too,
which is interesting, that we fixate on one versus the other. It's both good and bad. There's good
stuff and bad stuff. There's failure cases and useful cases. And it doesn't help that AI makes it
hard for us to tell which is which. Well, I'll tell you the useful conversation that I had is still
one that I doubt, right? Because what that was was, I had a financial decision I was trying to consider
making. I was like, oh, do I do this or should I not do it? And so I put all the stuff and, you know,
here's how much it's going to cost, and here's the benefit I'd get.
And ChatGPT broke it down and said, no, this isn't a good financial decision, right,
for such and such a reason.
Then a month later, I was thinking about it a different way.
And I was like, well, actually, maybe would it be a good idea?
And I plugged it in again, right?
And it goes, oh, no, actually, this makes a lot of sense for you to do.
Right.
And I really felt in both of these cases, A, I could have done the work myself of the cost
benefit, right?
It was really just sort of rewriting what I had put in. You know, I read it and I was like,
yeah, I know all this, but thank you for outputting it to me.
And B, it was really reflecting what I had typed into it.
And so I left that going, okay, that was like a minor aid to my thought process, you know,
but it didn't really give me an answer that I felt was reliable.
And then the rest of what I've done with it has been, uh, toying around, you know,
playing, getting interesting output out, you know, and occasionally answering a question
that I could have had answered via Google.
And I feel like, yes, the adoption rates are really high,
but when people are just going to ChatGPT
and asking it stuff that they could have asked Google
in the first place, I'm like, is that a huge benefit?
I'm not sure.
You mentioned programming.
I've seen plenty of programmers.
You know, I read programmer blogs who write about, here's how I'm using an LLM
to, like, make stuff quicker.
And then I read other programmers who write,
this is a fiction.
This is a fantasy.
And what it's going to do is, you know, create so much bad code that it's actually going to, you know, harm our overall, you know, the quality of our code and our company, right?
And so we, you know, this is a, this is a dead end to go down.
And so I'm not really able to evaluate that.
And so what I'm looking for are the more specific, like, use cases that that we can point to.
You said you're, please.
I guess my objection is to the idea that we can't evaluate this, right?
Like, there's a nice study that just came out of the University of Chicago showing that when
agentic coding systems were added to cursor, which is one of the main coding tools people
use, there was a 39% increase in the amount of code being merged into systems with no
increase in error rates and actually quality seemed to go up, right?
Like, we have numbers on this.
So I worry that we're in this world of anecdotes.
And, like, you know, again, where we have evidence, we have information about this.
Like, we've done the randomized controlled trials.
We have data on this.
And part of this is also the complication of using these systems.
Again, if you were just talking to the free version of ChatGPT or the auto version of it,
that is a system that is very bad at giving you consistent answers and is very sycophantic.
It will give you the answers you want.
If you use the more advanced ones, the sycophancy goes down and the results are higher quality.
It beats, you know, it cracks business school cases.
It does pretty well on finance problems.
There's a whole bunch of studies in everything from retail to finance, to health care.
So it's kind of hard to have this conversation because on one hand, I can say, well, we've got all the data that this actually is helpful.
And there are certainly edge cases of failure because the system is weird and is good at some stuff and bad at some stuff.
But it feels like I can give you data.
But if there's always an anecdote of like, yes, but this person says it's bad, there is bad stuff, right?
Like, what can I tell you?
There's absolutely like both things can be true at the same time.
We are three years into a new technology that is evolving very rapidly.
And I feel like in most cases, the people who are very critical of AI would be listening to the data more if it weren't so uncomplimentary of their view that this is garbage.
And it's very easy to make this system produce garbage.
And so to me, as somebody who's been trying to think, for example, about how you democratize education at scale for a long time, the fact that every kid in Mozambique has actually the exact same tool you do, and that we can actually make it a good tool
and find tutoring results, seems like one of the most exciting things in the world to me, right?
And like, then we have to do the work to make sure it does a good job and not a bad job.
But to start with the default that, oh, it's garbage and always produces the worst output,
when we know it can produce good output, it just feels like a strange position to be in
when this could be a liberating technology as well as a negative one.
Okay. You said so many great things there.
First of all, if you're coming to me and saying, hey, we have studies, we've done the research and we know this,
I'm not going to argue with you.
You're the type of person I want to have on the show for that reason, right?
And especially if I'm like, hey, I heard about this MIT study that was widely cited and you're telling me the, you know, the deeper story about the study.
That's great.
So, but what I'm also looking for is the positive anecdotes.
I want to come back to education in a second.
But first I want to return to, you said that you've used it to do science.
And so please just tell me what that means for you.
How is it doing science?
So, I mean, I do a lot of fairly complicated, quantitative work.
I'm an economic sociologist, right?
So at a business school.
So I study things like, I've studied things like crowdfunding.
I've studied things like, you know, adoption rates with technology, gender differences in, you know, in all kinds of stuff.
And, I mean, to be honest, writing code takes a very long time.
Checking data takes a very long time.
AI does a great first pass.
It does a good second pass.
I mean, one of the most crazy things is I recently took an academic paper that's highly cited.
I wrote it.
It was published.
It had people who were some of the top people in the world read this through, was peer reviewed
at a top journal.
And I was able to throw it into GPT-5 Pro, and it spotted an error that no one's ever spotted
before.
It was a minor error that didn't change the direction of the data.
And then it rechecked it, read the math, gave me the right results from it.
So in the hands of an expert, I easily save 40 or 50 hours on working on a paper using AI help, right?
My expertise matters, and it doesn't work for everything,
but it works really well for a lot of those cases.
Or writing.
I do a lot of writing.
All my writing online is my own.
People think it's AI; that's probably my fault.
I tweet too much or whatever, or Bluesky too much.
I do both.
And I write a blog post, and I do all my own writing,
but I absolutely have the AI read through everything I do afterwards,
and it usually finds errors or mistakes that I make.
It helps me solve problems that I didn't have solutions for.
I use it for all sorts of things, right?
and both professionally and personally.
And using it, lets you figure out what it does.
On the anecdote side, for every anecdote about, you know, a bad thing, I can tell you,
you know, I know people who personally have found solutions to diseases that they had,
that they didn't realize, that doctors had been stumped on, and the AI was able
to solve the problem for them and put together a picture.
I'm sure there's plenty of cases of misdiagnosis as well.
I mean, so we kind of have this dueling anecdote problem, which is like, it helped me
here, it cheated here. That's why I have difficulty with this, because it's a universal,
like, this is a global phenomenon. So AI is both absolutely causing mental health crises and
from controlled experiments seems to be helping some people with mental health crises. Like,
how do we deal with the situation where all of it, like a billion people are using this?
I can find you terrible anecdotes or positive anecdotes. It's why I find the anecdotal discussion
to be kind of less useful than talking about what we know about this technology.
Folks, this episode is brought to you by Alma.
You know who it's not brought to you by?
Chatbots.
You wouldn't let a chatbot watch your kids.
You wouldn't let a chatbot manage your bank account.
So why would you let a chatbot handle anything as precious as your mental health?
Chatbots are awfully good at sounding convincing,
but as we have covered on this show, they frequently get things wrong, very wrong.
That is why it is so distressing to me that an increasing number of people
are turning towards large language model chatbots
thinking that they're getting an actual substitute
for real mental health care.
Now, maybe that's because they think
that finding a real therapist is out of reach for them.
But the truth is, it's not.
You know, in my own mental health journey,
it would have helped me so much
to know how easy it actually is
to access affordable quality mental health care,
care that is provided by a real person
with a real connection to you
who actually understands what's going on.
That is why, if you are on your own
journey of mental health, I recommend taking a look at Alma. They make it easy to connect with an
experienced real-life human therapist, a real person who can listen, understand, and support you
through whatever challenges you're facing. Quality mental health care is not out of reach.
99% of Alma therapists accept insurance, including United, Aetna, Cigna, and more. And there are no
surprises. You'll know what your sessions will cost up front with Alma's cost estimator tool.
With the help of an actual therapist who understands you, you can
start seeing real improvements in your mental health. Better with people, better with Alma.
Visit helloalma.com slash factually to get started and schedule a free consultation today.
That's hello-al-m-a.com slash factually.
I think part of the problem is that what AI does cuts so close to the core of what we humans
ourselves do. It feels like it's a very intimate thing that it does,
and so people's stories about it
tend to be intimate
and that I think for a lot of people
tends to transcend data to a certain extent
because it's human experience
and experience has a different truth value
than data does
and I think that's why you see that
and you don't see that with other technologies perhaps
And it's a deeply uncomfortable technology
I mean I start my book with this idea that
like I think when you use AI
you have an existential crisis
like I certainly did
I call it three sleepless nights
Like, I stayed up being, like, worried, like, all night, like, what does this mean to have a, you know, why does it seem like it's thinking?
What does it mean to be a professor that it's doing a good job doing an initial teaching run?
This is something I devote my life to.
I mean, I think that if you're not uncomfortable with this, like, that's weird too.
Like, this is a deeply uncomfortable technology in many ways, right?
Whether you view it from, you know, what does it mean for humanity or corporations, whether you view this as an existential threat.
But even if you don't view it those ways, and I'm generally pretty technologically optimistic.
Like, it is an uncomfortable technology.
and there's lots of reasons why people legitimately feel uncomfortable with it.
And I don't think that, you know, and that intimacy that you mentioned is one of the major reasons.
Like, this is a weird thing.
People are willing, like the first, I remember the anecdotes people were telling me initially
was the things that they were having AI write for them were, you know, bedtime stories for their kids,
eulogies for their parents, right?
Wedding notes.
Like, that was the first thing they were doing, the most intimate of all things.
And that was what they were giving to AI, not, like, work stuff, not, like, Godfather trivia.
But like, help me with this difficult time.
That's a very strange technology.
And it is a very strange technology.
It's very uncomfortable that it pretends to be a person.
Like, that's upsetting, I think.
Yeah, and it's an interesting choice
to make it behave like a person,
or depending on how much of a choice
that is on the part of the makers.
But I want to quickly, though,
just dwell on the science use
a little bit longer because you wrote in one of your blog posts
that you felt that AI could help
solve some of the reproducibility crisis
in the social sciences and you had
an anecdote about using AI to, like, reproduce the results of a paper using an AI agent.
And I don't know enough about how one conducts an experiment in, you know, your field to know
how exactly that works.
So what does that mean to have AI reproduce the results or is it literally doing an experiment?
So a lot of what we do is, like, working with data, right?
Survey data or census data or, you know, economic data or data that we gather ourselves,
So a typical experiment, just to tell you about one of my AI experiments, for example: with my colleagues at Harvard, MIT, and the University of Warwick, we went to a consulting group and we did a randomized trial where some people got access to AI and some did not.
And we measured performance on different tasks, found very large increases when people used AI systems versus when they didn't.
So we'd have a data set available for these kind of things.
And what I found was I could just point the AI at the PDF document and at the reproducible data set they made available
and the instructions on how to do it,
and it was able to reproduce the numbers,
tell me where they differed
and whether they differed systematically;
it was able to do the research work.
Again, the more sophisticated models, right?
If you use GPT-5 Pro or 5.1 Thinking,
they can do this kind of work
where they actually write the code, do the math,
download the files, look up the references.
Execute the code that they wrote?
Sorry?
Do they execute the code that they wrote?
They execute the code that they wrote.
I got all the graphs.
I could check all the data
to make sure the code was right
and had a good source of truth, and it was;
I didn't find any errors in doing that work.
I mean, part of the interesting issue is that checking the work takes time,
but less time than doing it.
So you might save 10 or 20 hours,
but it might take you an hour to go through the data.
It's still a good trade-off.
So, I mean, that to me is really impressive, right?
And the models are getting better at these kind of things.
We just keep finding these objective things that the systems are really good at,
you know, and we're in early days still
of trying to figure out how to use these things in the right reproducible way.
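To make that workflow concrete, here is a rough sketch of the kind of replication check Ethan describes: recompute a statistic from a paper's public dataset and compare it to the published number. The file name, column names, and reported value are all hypothetical stand-ins, not details from his actual study.

```python
# Hypothetical replication check: recompute a paper's reported effect from
# its public dataset and flag any mismatch. All names and values are made up.
import pandas as pd

df = pd.read_csv("replication_data.csv")  # hypothetical replication dataset

# Difference in mean task scores between the AI-assisted and control groups.
treated = df.loc[df["used_ai"] == 1, "task_score"]
control = df.loc[df["used_ai"] == 0, "task_score"]
recomputed = treated.mean() - control.mean()

reported = 0.38  # hypothetical number copied from the paper's results table

print(f"recomputed: {recomputed:.3f}, reported: {reported:.3f}")
if abs(recomputed - reported) > 0.005:
    print("Numbers differ; flag for a human to investigate.")
```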
Okay, this is a very strong use case, and this is what I want to hear from you.
I want to talk about another one that you brought up, which was education, and that we have research results in education.
You said from Nigeria, about AI being used to help tutor kids.
I'm thinking about, again, anecdotally, you know, plenty of writing I've seen from college professors saying, you know, I'm flooded with AI-produced essays,
my students can't write, they refuse to write,
and they're worried about this, you know,
creating like a huge crisis in education
that, you know, students simply won't,
it'll be like a calculator,
but instead of just for adding and subtracting,
it's for like all schoolwork, right?
And so the fear is that people won't strengthen their minds anymore
because the AI will do everything for them.
I'm curious, what are the results that you're citing find?
I mean, first of all,
that's completely legitimate. It
is absolutely happening, right?
Like, I mean, AI didn't create the cheating problem, but it creates a universal cheating tool.
So we actually knew there's a cheating crisis for a long time, right?
There's actually these nice tracking studies that show the percentage of people who benefit
from doing homework has been dropping over the last decade because they're no longer doing
the homework.
They're just copying it.
So they're not, you know, and the problem with education is it's just like any other form
of exercise, it has to kind of hurt a little.
Like if you're not being pushed, you don't learn.
Right.
And unfortunately, nobody likes to be pushed, right?
And so the AI undermines a whole bunch of really valuable educational approaches.
Like nobody asked for the essay to be dead as a concept, right?
Which is the take-home essay.
We do know how to solve this stuff.
We can do more at-school, blue-book kind of work.
We can do active learning in class.
So, I mean, there's positives and negatives, but there's absolutely, you know,
and my colleagues at Wharton have a controlled experiment that shows, and this
was with students in Turkey, if you just let students use AI, they get answers to
questions, they think they learn something, but they're just regurgitating AI information.
and they learn nothing, right?
On the other hand, if you ask the AI to be like a tutor,
because part of the problem is the AI is helpful,
so it just gives you answers to problems.
You don't want that.
A good tutor actually challenges you.
It makes you answer the question, it guides you.
And when you make the AI work more like a tutor
with either prompting, better prompts to do that,
act like a tutor, or now all the AI companies have these learning modes.
They haven't been independently evaluated,
but they do some of that same kind of thing.
Then you get positive results.
So there's this nice World Bank study in Nigeria
that did after-school
exercises using AI and with teachers in the classroom.
The teacher part's really important because they're providing guidance in the after-school activities.
And they found, like, a 0.31 standard deviation increase,
which is actually a really large one from six weeks of after-school things.
That's equivalent to like an extra year of sort of extra classroom learning to get that kind
of impact.
Now, it feels like a bigger number than I would have expected, but that's what they found.
And we're finding similar results.
There's a nice set of studies out of Harvard looking at computer science learning,
and a physics study, I think it is, out of Stanford,
if I remember correctly.
But well-built AI tutors have big impacts on education,
which you'd expect them to do, right?
If they're not factually wrong,
having a universal on-demand tutor who can give you feedback,
explain a concept you don't get well,
give you new examples to exercise from,
help you diagnose your learning problems.
Like, that's pretty great.
And the tools can do that.
But again, you kind of need to use the most recent model
and you need to want them to act like a tutor
as opposed to just give you answers.
Yeah.
So if you have shaped the AI such that it is reinforcing learning rather than replacing it,
it can have a positive effect. I guess my next question is,
how likely is it that it will be widely deployed in such a fashion, you know,
especially when I think about how much the education system in America itself already
was not particularly well designed to produce good educational outcomes, right?
Why do students feel like school is just about jumping through hoops?
Because for a lot of students, it is because that's unfortunately the way their school was designed.
You can't fault them for wanting to shortcut that with an AI.
But unfortunately, they're shortcutting the last little piece of value they were going to get from their education.
So, I mean, I am not going to put my education scholar hat on here too much, because we can have a whole debate over
learning and where the issues are.
But there's no doubt there's lots of things broken in education.
Part of this is having the willpower to do something, right?
I mean, or the direction to do it.
So one way to do it is you have to use an assigned AI that's set up as a tutor.
Another is you don't ask it for results or you use the study mode.
All of the AIs have them; that makes it less likely for them to answer your question
and more likely to ask you a question.
I mean, if you just ask the AI, help me understand this concept.
Don't give me the answer.
You will have a good experience with a large-scale AI system to help you learn concepts, right?
Like, it's not actually, it is really a tool that you can use if you want to use it.
If you want to cheat, it's really good at cheating.
I guess the divergence that I'm thinking of here is, in these controlled studies where they've developed an AI that does a specific thing in this particular controlled way, in this controlled study, they find a good outcome.
But it seems like on the whole, we have not deployed AI in that controlled way.
We've deployed it in a sort of, you know, maximal everything everywhere way and the overall effect on education, not in a small study, but like if you look at, say, all educational outcomes across the United States currently seems to be bad, or do we have research that points in the other direction?
We don't know.
I mean, we have survey results that teachers appear, actually, when they use AI, at least there's a Walton Family Foundation-Gallup survey, suggesting that teachers generally are feeling
positive about it after they start using it, or at least that it's helping them do better teaching;
they report six hours of classroom time saved a week in doing prep and help.
It's a complicated picture, right?
And so we're looking across hundreds of millions of users, and a primary use case of
AI is cheating for students, at least.
But all they need to do to not cheat is to not let the AI cheat, right?
And basically say, help me understand this.
We can motivate people to do that just like we do in other cases.
You give people in class tests, and the people who learn will do better than the people who don't.
I can give you a prompt; like, if you paste in a prompt to the AI saying,
be a good tutor, don't give me the answers, that's all you need to do, right?
It's not like you need an entirely separate AI system.
You could do that in ChatGPT, and it will be a good tutor.
But you have to say, act like a tutor, don't give me answers, help me understand concepts.
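As a sketch of that "act like a tutor" idea, here is roughly what such a prompt looks like wired up through an API. The prompt wording and model name are illustrative assumptions, not a tested recipe; the same text could just as well be pasted into a chat window.

```python
# Sketch of a tutor-mode system prompt: steer the model toward guiding
# questions instead of direct answers. Prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()

TUTOR_PROMPT = (
    "You are a patient tutor. Never give the final answer directly. "
    "Ask one guiding question at a time, check my reasoning, and give "
    "me a fresh practice example once I explain the concept correctly."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model; this name is an assumption
    messages=[
        {"role": "system", "content": TUTOR_PROMPT},
        {"role": "user", "content": "Help me understand standard deviation."},
    ],
)
print(response.choices[0].message.content)
```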
So, I mean, part of this is we have a tool that can do many different things.
If you want to use it to cheat, you absolutely can use it to cheat.
And that is probably the default for many people for the reasons that you gave.
And that is really destructive to education.
just like it's destructive that, you know,
we don't know how to teach about using AI properly.
Again, three years in, right?
Like, this is, I'm an educator myself.
Like, it's chaos out there right now.
We don't know what to do.
Everybody's teaching different things.
Everyone has different information.
We're all figuring out as we go along.
So I think it's fair to say the impact initially has been really negative,
but there's a lot of broken stuff already.
And it's sort of weird because on one hand,
you're saying the system's deeply broken,
and on the other you're like, AI made it worse.
There's also some possibilities that AI's making it better.
we just don't know yet.
Yeah, that's the case.
Like, I understand we're in that chaotic early period,
and I've lived through other, you know,
I lived through the basic mainstream birth of the internet,
which was very chaotic.
But I do notice the difference that there's, yes,
AI has been massively adopted on a wide scale.
But I've also never seen a new technology launch that had so much negative sentiment about it,
by the people using it and
by the culture at large,
and people saying this is a bad thing.
Do you feel that that is larger than, say, the internet or mobile phones or anything else that big?
I mean, I think that there's a lot of reasons people feel negative.
And I think a lot of them are very legitimate.
I mean, they are based on, you know, everyone's IP goes into these systems one way or another, right?
And we could have a discussion about that.
It's a problem here in Hollywood.
Yeah.
Right.
And like, that's that people didn't ask.
I think that there's a lot of suspicion of technology companies having our best interest in mind, for some reason.
Who knows where that came from.
I think that, you know, like, and I think there's a lot of existential angst, right?
I mean, there's quite a number of people talking about how AI will one day wake up and kill
us all, another group of people are saying it's all a scam.
There's a lot of coexisting things, and I think it's anxiety producing.
Like, this is a large social change.
I don't know whether it looks different than other industrial revolutions, right?
So, like, you know, we always talk as economists about how, you know, in the long term,
the industrial revolution is really good: people get new jobs, everyone does better.
But if you actually lived through the industrial revolution, it kind of sucked, right?
Like there was a lot of chaos happening all at one time.
So, I mean, I think there is negative sentiment in the U.S., less negative sentiment in other parts
of the world by far in surveys.
China is much more optimistic.
But, I mean, I think that there is, you know, there is reason to feel negative.
And I think that the job threat feels real, right?
I mean, you know, it is disturbing to see AI make art that seems good, right?
And we want that, like, that feels uncomfortable.
There's lots of things that are uncomfortable about AI,
you know, in how it's controlled and how it's done, that I think are legitimately things to
worry about. My concern is that I think we need to be thinking about how to mitigate the
negative and embrace the positive. And I find a lot of absolutist kind of conversation, right?
"We want AI to be really bad for education." But honestly, you just told me how broken education
is. And if I can increase outcomes in Nigeria with an off-the-shelf AI system for pennies per
intervention, it feels like one of the most exciting things on the planet to do. And so
we can embrace both; at the same time, we have to mitigate the negative impact
while thinking about the positive impact.
And I think that that's going to be the broad case.
This is a general-purpose technology.
It's going to affect almost every part of society in ways that are good and bad.
Well, I think that the broken nature of education, the way that it was pre-existingly broken,
I think the solution is to fix it.
I have concerns about saying, hey, we're going to use AI to fix the fact that we deprioritize
education, that when we do have it, you know, we don't really focus on what education does
and what it brings people.
We educate people for skills and careers rather than to grow as people and to become more powerful mentally and intellectually, yada, yada, yada.
I do have a concern about that.
But I think your point is really well taken that, you know, folks like me who are of a more skeptical frame of mind tend to look to confirm our skepticism with every piece of data that we see.
We try to build an argument for why this is bad and why it's not going anywhere.
and it can amount to sticking your head in the sand
and, like, refusing to see what's actually happening around you.
And, again, that's why you're on the show.
And I'm very happy with, you know, everything you're bringing us.
Because we, you know, we can't allow our suspicion
to blind us to what's actually going on.
Well, I mean, the worst thing to me,
the worst argument made, is that this is useless, that it's going away.
And that's sort of the position we started with, like,
oh, look at all the failure rates.
That's just not what's happening.
And, you know, that's not what's happening with companies using these systems.
That's not what's happening when scientists are using these systems.
Like, they're not useless, right?
They are actually quite valuable.
They do lots of actual valuable work.
They probably could be useful to you when used properly.
And they're getting more useful over time.
So my fear is that everything you're talking about,
I want the people who are very negative on AI to be part of the conversation about,
not assuming it's going to go away, but thinking about what do we do to mitigate these negative
impacts, right?
I agree.
AI should not be subbed in to solve all the problems in education, but it might be part
of a solution. So how to redesign or think about the education system getting through this
crisis of AI and the pre-existing crises to create something better rather than just seeing
what happens and hoping that this all goes away. So my concern is that the people I want part
of this conversation really are just hoping they can ignore this whole thing. And that's just not
the case, right? This is an epochal technology according to every survey that we have of forecasters.
This is a very big deal. And pretending it's not a big deal is, I think, a really big
problem, because the critics' voices don't get heard, because they just ignore it.
And because the critics are, in a sense, sometimes removing themselves from the conversation
by refusing to have it.
And, I mean, you know, muting anyone who has this conversation.
I mean, I'm sure there's a lot of YouTube comments here on, you know, on AI is bad.
And it is okay to think AI is bad.
I'm not going to try and convince you that there's no bad impacts because there are bad
impacts.
I just worry that if the answer is, AI is bad,
it's going away,
it's all a scam,
that's the most dangerous way to think about this stuff, I think.
Yeah. I will say, when you say, we need to talk about how to mitigate the negative impacts.
That does make me go, well, hey, I didn't ask for anybody to release the fucking technology in the
first place. Why do I have to mitigate the negative impacts? You impose the negative
impacts, Sam Altman or whoever. You fucking mitigate them. Not you, Ethan. Not you. I got nervous
there. I am in agreement, right? But like, nobody asked for this. And nobody really
knows what's going on, by the way. I talked to all the AI labs. I talk to nonprofits. I talk to
government officials.
Like, you should be clear, there's not like somebody steering the ship.
It just turns out the AI models do this stuff.
We didn't know they would do the stuff.
When they released ChatGPT, it never occurred to them that they were destroying a lot of
college education because people would just be cheating on essays.
They never thought of its use in medicine.
It turned out it was pretty good at diagnosis.
Nobody expected to do that.
Nobody expected the impacts that these things have had.
And the truth is, is that, you know, the AI labs are, you know, going to keep releasing
what they do.
If they didn't do that, there's Chinese labs that are releasing
models that are available to everybody on the planet.
I mean, the, you know, the chance that this is just disappearing doesn't
seem a realistic one to me.
We could argue about how far the technology gets and does it hit a wall or what, right now
it hasn't, right?
We're seeing pretty steady exponential growth on almost every measure we have that's long-term
measures of AI ability, long-term being three years at this point or five years, but we didn't
ask for this, but that doesn't mean that we can bury our heads in the sand.
You know, there are reasons
to be hopeful as well as reasons to be negative.
So there's reasons to not take the maximally,
it's all going away, it's all bullshit position, right?
But part of the reason I do take a skeptical position
is the hype position is so dominant, right,
in our culture, in the business press,
of this is going to replace every job,
It's going to, you know, you mentioned in one of your pieces or maybe one of your interviews that, you know, a little while ago, AI boosters were saying that radiologists were going to be made extinct as a profession like within six months or something. And of course they have not been. And that this was going to replace every job under the sun. And so that's, you know, the maximal position in the other direction. Obviously, we'll probably fall somewhere in the middle. What do you think that middle will be? Like when you're saying,
hey, there's a lot of use cases.
They're not all clear.
It's good at some things, not good at others.
What are the effects you think we will see on people's jobs over the next couple of years?
So I think that, first of all, things take longer than people think, right?
Actual work is complicated, right?
There's all kinds of, like, processes have to change, approaches have to change.
Adoption takes time.
Nothing happens as instantly as people think they're going to.
That doesn't mean, there's something called Amara's Law, when you predict the future,
which is people overestimate short-term change
and underestimate long-term change.
I think we're going to see that same thing happen here.
But, I mean, a nice way to think about AI and jobs
is think about jobs as being exposed to AI or not.
Like, that doesn't mean replaced.
It means overlapping.
And I think about this a lot because in the initial work
on what jobs overlap most with AI,
business school professor was number 22 on the list of 1,016
most disrupted jobs.
And I think about this a lot because, but think about my job, okay?
I'm a, you know, I'm a business school professor.
I teach.
And what I do is many things.
Like I go on podcasts like this one.
I have to create assignments for my students and grade them and teach them and provide
emotional support to them, help write a letter of recommendation, be an administrator, do academic
research, write books.
My job is a bundle of many tasks.
And if the AI is better at some of those tasks than me, say it's better at grading,
which turns out it is actually better at grading, though I do my own grading for social
reasons, right?
My students would be angry if I didn't do that, even if the AI would give better grades.
I still grade myself.
But let's say grading gets taken away or part of my writing does,
then there are other parts of the task bundle I will do more of
and expand it and change what I do.
So I think transformation is a more likely outcome for most jobs than destruction, right?
What we do changes.
If the AI helps you do podcasts quicker,
maybe you launch a second or third podcast series,
or maybe that you, you know, the research side,
you can now expand to do a wider swath of things
or translate to other languages.
So I think that some of that would be negative, right?
And there's obviously corporations have an interest in removing jobs.
But I think because the AI is jagged, it's good at some stuff and bad at some other stuff.
And it's hard to know a priority what those things are going to be.
I think we'll see a lot more change than we will see replacement in the short term.
Now, it is worth noting, as you did, that all the AI companies really do think they can replace all human labor in like the next 10 or 15 years.
I'm more doubtful about that.
They're literally saying that.
There's reasons to be anxious about that because they think they could do it.
I'm more doubtful as somebody who studies technology and change in work.
But I think it is worth noting that there are going to be jobs that change and transform
and some jobs that disappear.
And we just don't have a clear picture of that future.
Like, I would, you know, I'm talking to Nobel Prize winners about this stuff.
Like, we don't know, right?
We can only make projections about this.
I think change is very likely, you know, wholesale replacement of jobs is less clear at this point.
Yeah, you used a really great analogy in one of your pieces that,
I forget what job you were talking about
but you were talking about
AI can do this or that, right,
and then you pointed out,
and this is what somebody does for their job,
but you pointed out, that's not the person's job,
that's just a task, right?
That's a single task that they do.
Replacing a task doesn't mean replacing a job,
like grading a paper, right?
Or, I use this argument a lot
during the writers' strike,
which, you know, happened along with
the first release of ChatGPT.
People said, aren't you all going to be replaced
by ChatGPT?
And I would say, no, no, no, you misunderstand.
The job of a Hollywood writer isn't to output text.
The job of a Hollywood writer is for the executive to call you on the phone and yell at you.
Like, that's the job, right?
And then to go talk to the director and they yell at you and talk to the actor and they don't like their lines.
And could their character be more likable?
And then to go to set and go, hold on a second.
That prop doesn't actually look like what's in the script.
We need to change it because otherwise the two episodes from now isn't going to make sense.
Blah, blah, blah, blah, blah, right?
And that's even apart from the question of, is AI good at outputting the type of text a writer does,
which is purely creative, never-before-seen text that is going to move people emotionally and make them excited to watch something?
So I take that metaphor really well.
Maybe 50 years from now in Hollywood production, AI is more involved in the process in some ways.
but I don't see the job of a writer going away.
I don't see the job of a comedian going away fundamentally.
I think that's probably the case for a lot of occupations, right?
Yeah, I mean, I think there are cases where it will.
I mean, AI writes a pretty good press release at this point.
I mean, you know, it's getting better.
It's getting better at creative writing,
but it's nowhere close to the top 1% of creative writers.
I mean, one of the most interesting things we found in repeated studies of AI use
is that low performers get the biggest boost from AI use.
And it's actually a task-based problem.
What happens is you have to do many things in your job as a podcast host, and probably
you're really good at a couple of those and you're probably pretty bad at a bunch of
other ones.
So if the AI is at the 80th percentile in the stuff that you would have been bad at, that's
actually a huge net gain for you, because those are probably the parts of your job that
stressed you out the most, that you got the least joy from, and where you were in the most danger
of being in trouble.
I'm a former entrepreneur.
I teach entrepreneurship and innovation and entrepreneurship is all about being really good
at, like, the 99th percentile in something and then being terrible at everything else,
and hoping the things you're terrible at do not destroy the thing you're good at.
So the fact that the AI helps you support the things you're less good at
and brings those tasks up a level doesn't necessarily mean destruction, right?
And this is another thing I wish people would get more of, which is this idea that, like,
sure, if it helps you with some of this, it doesn't have to be 100%
to give you a lot of support in areas that you have less expertise in
to help you do the things you do well better.
Yeah, ideally, you know. And I could imagine, look,
I have not found AI useful at all for the actual creative work that I do because I'm always trying to get better at this.
My actual job is to like sit and fucking think about what I actually think and think of something that no one else has thought of before and like do the hard effort.
And then also as a comedian to make myself into someone who people want to hear from.
You know, like on a level, people like comedians because there's an actual real person with real opinions who they feel connected to, right?
Maybe there's some automation down the road of like, you know, some emails I don't have to write or something.
But I guess the fear is, though, that like, is AI going to make it harder for people to climb to the highest level of task, right?
If you are not able to write a eulogy for your dad, right, are you ever going to be able to do the more difficult creative work of, like, writing as thinking, you know?
Yes.
So, I mean, I think that's a huge problem, right?
One of the things, like, I am actually not that worried about schools in the long term, except for the reasons you're worried about schools anyway.
But like, we'll figure out how to use AI and it.
It'll take a while.
There's already models.
We know from pedagogical research that we should be doing more flipped classrooms
anyway, where you read books or watch videos or interact with the AI outside of class for tutoring.
And then inside of class, we have interactive experiences and show what you know and active learning
experiences. We'll figure it all of that out. I mean, it might take a while too long for most
people, but I think we'd figure it out. I'm deeply worried about what happens to my graduates,
right? Because I send them off to places as generalists and learn to be specialists just like you said.
Like they start off as an intern working for you as a writer and they write over and over again
and you give them feedback. And there's a deal, right? The same way we've taught
people to do jobs for 4,000 years, which is apprenticeship, for white-collar work, which is they
work for you and you correct them. You're like, no, this is not actually a really good show
pitch. Here's how to do a good show pitch. Or, you know, even at a company where they don't care
about people and they're just yelling at them, they're still picking up how to do the work. And the
deal is you get paid a little bit, or maybe not at all, but you're an intern and you prove yourself
by doing the work, and you learn as you go, and the more senior person gets somebody to do the
grunt work for them. And what's happened over this past summer is the complete destruction of internships
and initial apprenticeship
because every middle manager
would rather turn to an AI than a person
because the AI is going to do the work faster
more accurately than an intern
and will probably not complain.
And every intern is just turning in AI work
because the AI is better than they are at entry-level work,
and why would they be dumb enough
not to use the AI to do things?
So it's just AIs talking to each other,
and no one's learning in that apprenticeship way.
And that is a real crisis
that I think we're going to have to deal with
in the near future: thinking more about
how we apprentice people
in intellectual and white-collar tasks.
That used to be a natural process,
but we may have to actually teach it formally,
which is a weird thing to have to do.
I think there's an even bigger problem
or I want to heighten what you just raised
because it's not just that the middle managers
look at the AI and say,
oh, it can do it better.
A lot of middle managers actually don't know
the difference between good and bad.
They don't know what the people
who work for them actually do.
And so when they see AI,
they just see it as an opportunity
to fire somebody and get some okay output.
And in so doing, they reduce the quality
of the work that they're getting,
but they actually don't know that
because they're too stupid to realize.
And I'm someone who I'll talk shit about managers
all day long, right?
But like, you know, there's many businesses out there
where you look at them, you're like,
you made your product worse
and you, the fucking CEO,
don't realize that you've done it, you know?
And that to me is a huge risk, you know,
that I have a friend who said they were laid off from their job of doing customer service
to be replaced by AI. That's what they were like literally told by the company. Whether or not
that actually happened, I can't verify. But I'm like, there's no way an AI does a better job
than a person at customer service. This company has reduced the quality of the service that
they offer willingly in order to save a couple bucks. So, I mean, I agree completely on the threat.
I will keep pushing back a little bit on the idea that the AI is going to produce worse output than
people in all of these cases. Because I think that is, I mean, I get why you'd have that
concern. I don't know if that's the case. And I want to be realistic about this, right?
Like we could say AI isn't capable yet or may have issues or it's not as good as the best
customer service person. I mean, you know, I don't have customer service stats on the top
of my head that I can throw at you, right? Erik Brynjolfsson at Stanford has an interesting
study showing customer service people who use AI get better results than those who don't. But, you know,
we don't have full information on this. I worry a lot about us going back to
that underlying idea, which is AI is pretty bad at everything. So the only reason to use it is
cynically. And I think the danger, I want everyone to know that the danger is that AI actually
can be good at these jobs. And it might be worth thinking about those kind of concerns about
what do we do if it actually is good at customer service? What do we do if it is actually
producing better work? Because, like, you know, there's a really
interesting study that was done by OpenAI with some outside researchers. They've made everything
open. So, you know, we can take corporate research one way or another, but I know enough
about the methodology to say it's at least interesting and valuable. It's called GDPval. And what
they did was they had a whole bunch of experts with an average of 14 years of experience
representing 5% of the economy. So retail and financial services and nursing and chemists and everything
else. And they had them each create tasks that they would do in their job. And then they hired
other outside experts with a lot of experience to do the tasks. Took them about four to eight
hours of work to do those tasks.
They had a third set of experts judge the results and judge the results of the AI doing
the same tasks and then blindly vote on which one they liked better.
And when this came out this past summer, the best model in the world was something called
Claude Opus 4.1.
It's now obsolete.
But the experts preferred Claude Opus's output in their own fields, after spending an hour
evaluating the results,
checking factual accuracy and everything else, 48.5% of the time.
Right?
And every new generation got closer and closer to that 50% mark.
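For readers who want the scoring of a blind pairwise evaluation like GDPval made concrete, here is a minimal sketch in Python. This is not OpenAI's actual code, and the task names and votes are invented for illustration; the headline statistic is simply the fraction of blinded comparisons in which judges preferred the AI deliverable.

```python
import random

# Hypothetical blinded votes: for each task, a judge compared a human
# deliverable and an AI deliverable in shuffled order and picked the one
# they preferred, without knowing which was which. All data here is invented.
votes = [
    {"task": "quarterly sales brief", "preferred": "ai"},
    {"task": "nursing shift handoff", "preferred": "human"},
    {"task": "lab safety protocol", "preferred": "ai"},
    {"task": "retail planning memo", "preferred": "human"},
    {"task": "loan risk summary", "preferred": "human"},
]

def blind_pair(human_output: str, ai_output: str) -> list[str]:
    """Shuffle the two deliverables so presentation order can't reveal authorship."""
    pair = [human_output, ai_output]
    random.shuffle(pair)
    return pair

# The reported number is the AI's win rate across blind comparisons:
# approaching 50% means judges can barely tell the difference on average.
ai_wins = sum(v["preferred"] == "ai" for v in votes)
print(f"AI preferred in {ai_wins / len(votes):.1%} of blind comparisons")
```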
I do think we need to take into account again the idea that these systems are quite good.
And it's one thing to talk about writing, I mean creative writing at the highest levels, which is
what you're at.
You've had a TV show.
You have a podcast.
Like, you're not a slouch in this.
I do worry that we really want to go back to that idea that AI
kind of sucks at everything.
I just don't think that that's true.
And I think we need to deal with the world where that's not the case.
It may not be best at whatever you're best at, but jobs are bundles of lots of tasks.
Like, I'd love us to think about customer service as something like:
how do you make the job more dignified and human than it was before?
How do we have AI do the grunt work that makes people miserable in customer service?
How do we have it escalate so that you can have deeper personal relationships with people?
That strikes me as a better answer than just asking, you know, is it terrible at automating or not?
We have to have augmentation be the counterpart to automation.
Like, it can't just be replaced the job.
It's how do you make jobs better for people?
And again, I know it sounds weird and, you know, fantastic: starting with the idea that AI sucks and it's only being
put in as a pure cost-saving measure, sometimes it absolutely is, but the danger is that it gets good at
stuff. And I want us to take that possibility seriously. Yeah, I agree. We should. I guess the
problem is that capitalism as an economic system does not seem set up to make people's jobs
better or augment them or make their lives better. It's to make profit and to save money. And so I guess
I'd ask you this as a little thought experiment. If you imagine your insurance company or an
airline, right? Let's say an airline. Let's say you're, you're at JFK and a bunch of flights are
canceled and you need emergency help, right? And imagine you're calling American Airlines seven or eight
years from now, right? And you get the AI customer service on the line. Are you actually
happier in that case? Or are you going, fuck, fuck, fuck, fuck, representative, representative. And then you
finally get Cheryl in Atlanta on the phone and she says, yeah, oh, I can help. Yeah, let me help you
out, you know, like that. Because that's the experience most people have had, right?
It's like, you actually need to get that person who's on the other end, you know?
It's why people hate it when they call and they get the call center in India. And they're like,
no, no, no, you don't actually have the power. I need the person who works for the
company. I mean, there's a lot of stuff going on there, right? One of them is the incentives in
society and, you know, in capitalism in general, to cut costs, right,
and, you know, be competitive in various areas.
Like, there's a whole societal issue that we have to face about, I mean, look, Sam
Altman and company, they are telling you that we need UBI, which I think is kind of funny
because, you know, on the more left side of things, that's the desired goal:
everybody's free from want in some way.
Like, their vision is this weird utopian vision that in some ways is, you know, very
anti-capitalist.
I mean, I guess it's capitalist for them, they make tons of money, but nobody else does.
But in any case, leaving aside that bigger question, I think, again, when you talk to a customer
service AI tool right now, you're almost certainly speaking to a pre-generative-AI or older-generation
tool. When you talk to really recent AI systems, people like having conversations with them.
They find them helpful. They can do agentic work. Again, I think that the idea that seven years
from now we should plan on AI systems being just as bad as they always have been for the future,
like that is the fundamental disconnect, I think, that you and I have in this conversation, which is
you believe this technology is crap.
I'm going to say crap, or at least that's the argument you're making.
And like, that's just not what the data is showing.
That's not what we're going to get; it's not where the direction's heading.
It might happen, but I don't think so. And by the way, there is an advantage.
Delta would like you to get incredible customer service for cheap, right?
Like they're balancing both of those things.
They want you to call them and, you know, love your customer service interaction because that makes them more money.
As long as the customer service bot doesn't help you too much.
In fact, one of my favorite examples of getting around AI guardrails:
Claude Opus 4.5 just came out. And one of the things the AI companies do is they
test them for different kinds of behavior. And they did a customer service call with the AI. And
somebody asked, they were in a basic economy seat. They wanted to, you know, change seats.
And the AI system realized that you couldn't do that. It said there's a policy that we can't do
that. But it found a loophole. It found that if you changed the class of seat in a legal way,
you could then upgrade the person and move them. And it actually did all of that automatically,
you know, and figured out the way to use the loophole in the rules, which hurt
the airline and helped the person. So, like, these systems are quite capable in a lot of ways,
and I thought it was just interesting that people were hacking it in the other direction.
So, I mean, again, if I had one thing for your listeners and watchers here,
it's that assuming these systems suck and will always suck and it's all hype and they're never going
to get better and they're not very good at jobs is, I think, a dangerous position to have.
That may be true, and then we're in a better situation as workers than in a world where they actually
are good at jobs. If I could, I mean, I'm conflating a couple of concerns I have. One of them is, yes,
exactly what you're saying. And I can accept that maybe I'm wrong about that. And especially
time will tell, right? But I think the other concern that I have is that my experience with
companies like this is that when they do things, it is not to delight me and make my life better,
right? It's to save money and to often make my life worse. There's a lot of companies that make
the deliberate decision to make the lives of their customers worse, right?
And they don't care.
And airlines are one of them.
If anyone's ever taken a Greyhound bus, they treat their customers like shit because
their customers are poor, generally, or on the lower end of the income scale.
And they feel like they don't have to do any better than that.
And I think that's why my imagined future of me dealing with an AI is me going, fuck, fuck,
fuck, fuck, fuck, fuck, right?
Like, in the same way that I'm used to being fucked by these companies.
And so it's more of a criticism of like the social structure of capitalism rather than it is
the technology itself, perhaps, and who is excited to be deploying it, you know?
Well, I mean, on the other hand, when I talk to sort of corporate leaders, they're deeply
worried that somebody's going to come along with a better solution to their problem, one that's
friendlier thanks to AI, without their corporate bureaucracy, and, you know, take away their business by
delighting customers, right?
So, I mean, we can argue, I mean, I think that there is a pressure on companies to both
cut costs, but also to retain customers.
And those things are cross-cutting problems.
In a lot of ways. But their incentives are not your incentives.
It's not to make you, you know, to make your life better.
It is to find the right balance to make the most profit, right?
That is their job.
And I think there's every reason to be suspicious that companies, when faced with a profit
incentive, are not going to do the kinds of things that are best for society.
I think it's, in my experience, again, I teach in a business school, right?
I'm a sociologist by training, but, like, I, you know, I teach in the halls of capitalism.
And, you know, I think that there is, I think that there are lots of people who are concerned
and want to make a world better.
I think there's people who want to make the world worse.
I think there's a lot of things happening all at once, just like any other group of people
you talk to.
So, you know, just like we can't solve all of education, we can't solve
all the problems of capitalism and, you know, late-stage capitalism here in a podcast,
though if we could, I would be happy to work that out with you. But I think that the main thing
I'd kind of push back on is the idea that you're going to be
swearing at your phone and hitting buttons. I want you to consider a world where
you're actually delighted: the Delta representative calls you, speaks in a perfectly
normal human voice, knows everything about you. Because
in some ways, the first company to give you amazing customer service, you'll fly with them forever.
So there is a model where an AI bot that's credible and talks to you and knows your entire history
and can have a reasonable conversation and can take action in the world, which is what all these AI
companies are trying to build, would be the most amazing thing in the world.
They'd be like, hey, listen, you'd get the same service as somebody who has a billion miles of flying.
Like, companies are working on that piece too.
So I'm not saying it all works out under a capitalist system, right?
We've just talked about some issues.
But I think the idea that everything is going to always be crappier, that's not necessarily
the only outcome right here, though it is one possible one. But again, I think the assumption that this
is a static technology is the most ominous one. Yeah. Every point you're making is very well taken.
And this is, again, exactly why I wanted to have you on the show. Just as you were talking,
and we'll get off a customer service in a second. But have you seen the videos of people asking
ChatGPT to count to a million? Have you seen these videos? Yeah. Of the new voice mode where
they ask ChatGPT to count to a million.
It goes, okay, that'll take a long time.
But I can start: one, two, three, four, five, and so on.
And then the person goes, okay, keep, keep going.
And ChatGPT is like, oh, oh, okay, six, seven, and more numbers after that.
And it has this cheery note in its voice as it refuses to do the thing that you're asking it to do
and also won't tell you why, and speaks in this sort of, you know, almost conversational legalese.
And I think that is what a lot of people, you know, that's what they imagine, right?
When they imagine the customer service AI, it's this thing that's always so friendly and chipper,
and it's always acting like it's helping you, but it's never actually helping you.
I mean, it might be reasonable, right?
Like, companies want to cut costs.
They will have the chipper AI agent that, in an ideal world for them, makes you happy and makes you spend more money and doesn't deliver any additional service, because that would be the most profitable thing they could do.
I think there's every reason to be worried about systems that are hyper-persuasive, that try and convince you that you
actually had a great experience. I mean, again, I think that the incentive structures are cross-cutting
here. There are some pointing in positive directions and some in negative directions.
And I think that assuming inevitable negativeness sort of, again, leaves us in a place where we feel
no empowerment. I can tell you we're in the early days of this stuff, right? There is an opportunity
to move, whether that's through policy or starting alternative organizations for this sort of stuff
or pushing back. I mean, this is the time where there's, you know, options to do that. I think
AI disappearing, though, is the least likely option.
I 100% agree with that.
But let's talk about the business case a little bit, because a more recent argument you hear is about the AI bubble: these companies are investing so much money that, and I believe the stat I've used in past videos, in order to make back their investment,
they'll have to sell services worth something like five times the revenue from all business software in the world currently.
It's such a massive amount of revenue that it'll be almost impossible to make up,
and even a middle-ground case where AI is somewhat useful is going to fall far short of that
and result in economic devastation. Given that you're, you know, teaching at a business school,
what do you think of this argument? I'm sure you've read it. Yeah, I mean, so I should make it clear.
I'm not that kind of professor, I'm not a finance professor. So I will answer as an informed amateur rather than
from the ivory tower on this one, to the extent that that's useful one way or
another. I mean, so there is an investment boom going into AI. The question is whether it's a bubble
or not. Most of that investment is going out to building data centers that would be used, and we can
talk more about data centers. The easier way to think about it is rather than data center,
call them supercomputer centers. Like, they're basically giant computing centers that have lots of chips
that allow you to both build bigger AI models and let the AI be able to answer questions
and do all the work of the AI. There is an open question about whether or not
there will be enough revenue to sustain this buildout. And there's both bull and bear
cases, right? And you could make an argument either way. I mean, to be honest, if,
like, OpenAI reaches the scale of a Google or an Apple, this is not a huge amount of
investment relative to a Google or an Apple, right? So if you're betting that that's the case,
then they will make $100 billion in revenue in 2030. That's not actually that much revenue
compared to Google or Apple or Facebook or Meta, whatever. And so, like, there's a possible way that
they win there. There's also possibility that people are overbuilding and we have a dot-com
style crash. In the end, like, I think that stockholders in that case would lose a bunch of money,
but again, AI wouldn't disappear as a result. Those data centers would be used just like all the
dark fiber that was built in the late 90s would be used. So in some ways, whether there's a bubble
or not is important for the economy. It's interesting. I don't think it's a definitive answer one way
or another. You know, I think there's a case to be made that if AI turns out to be
as big a deal as people think, as the hype believes, then we're underbuilding. If it's the middle case, it's probably not that far off. But if there's
a decrease in value of AI, if new cheaper models come along, there could be a burst bubble
and a financial crash in the AI space. I don't know from the perspective of people who don't
own a huge amount of stock or, you know, have the normal economic up and down where that changes
the long-term future of AI, but it certainly is worth paying attention to. I mean, good, so
in your view, if there's a crash, maybe it slows down the AI revolution, maybe the
AI revolution was oversold, but it's still going to be with us to some degree even if we're
on that end of the graph of possible outcomes. Yeah, I mean, look, there are Chinese companies,
there's at least five really good companies out of China, one out of France. They're releasing
models for free on a regular basis, right, that keep getting better and better.
Google is not going to go away as a result of a stock market decline, if there is one. And they still
have a huge amount of build. Meta's not going away. I think a lot of this comes down to
debates about, like, oh, is OpenAI spending too much? Are they too overvalued? I mean,
there's a lot of obsession over OpenAI and Sam Altman, and I totally get it, but like,
from a macro perspective, I don't see any reason why we'd see a slowdown. Now, there's a possibility
there's some sort of research wall ahead, but people will keep predicting AI is going to stop
developing. On all of the long-term tasks that we measure AI ability on, it just keeps increasing,
whether that's, you know, Humanity's Last Exam, how long a task the AI can do, how well it does
certain hard problems, whether it can win the Math Olympiad.
Like, we just keep seeing better systems.
So I don't think a crash ends this.
AI's not going to go away, right?
You have a billion people using it.
There's hundreds of millions of dollars, hundreds of billions of revenue,
that will be generated one way or another.
Whether they're generated by OpenAI or a successor company
is not going to matter that much to most of us.
I don't want to take up too much of your time,
so I want to run through one or two more things quickly.
Another broad criticism you hear of AI,
similar to that it's going away or that it's a bubble,
is that it's fundamentally uncreative as a technology,
that it's, you know, based on the work of other people
and that it's, you know, remixing, it's mashing it up, right?
It's chewing it up and spitting it out.
I've certainly experienced that trying to use it
to do creative work in my own life, right?
When I say, try to write something in the voice of Adam Conover,
right?
It writes something generally that I've already done before
because it's basing it literally on my past work,
which it got from reading subtitle files of my past television shows.
And I've had trouble
getting it to do anything that I felt replaced the creative work that I have to do.
It feels like it's regurgitating.
Now, I think when you're talking about something like medical diagnosis, right,
where it's hearing about a novel problem and suggesting a solution,
that sounds more creative.
I'm curious what you think of creativity as a property of these systems.
So a lot of complex stuff there to unpack.
So one thing is just how AI works, it's just worth two seconds on this.
A large language model doesn't, like, have a database
it's pulling things from.
Like, there's no database of Adam Conover quotes
that it's pulling from.
Instead, it's been trained on all of this information,
including all of your pirated subtitle files
or whatever else.
I don't know which models train on which things,
but huge amounts of the closed internet.
I know my books, my previously copyrighted books,
have all been trained on by these AI models
without asking me.
I have different feelings about that, maybe, than you do,
but, like, you know, it certainly has happened.
But what they're doing is they're finding statistical patterns
in language.
So they're statistically producing, one word at a time,
whatever the next most likely word is.
And so Adam Conover, there are tropes online
about what Adam says and it will say things
that sound like what you do,
but it's not pulling an exact quote, right?
Yeah.
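For listeners who want to see the idea Mollick is describing in code, here is a toy sketch of next-token sampling. The vocabulary and probabilities are invented; a real large language model computes a distribution over tens of thousands of tokens with a neural network, but the generation loop is the same in spirit: no database lookup, just repeatedly sampling a statistically likely next word.

```python
import random

# Toy "model": for each context word, a probability distribution over
# possible next words. In a real LLM these probabilities come from
# statistical patterns learned from huge amounts of text; these are made up.
next_word_probs = {
    "<start>": {"the": 0.5, "a": 0.3, "today": 0.2},
    "the": {"truth": 0.4, "show": 0.35, "internet": 0.25},
    "a": {"comedian": 0.6, "skeptic": 0.4},
    "today": {"the": 0.7, "a": 0.3},
}

def sample_next(word: str) -> str:
    """Pick the next word at random, weighted by its probability."""
    dist = next_word_probs.get(word, {"<end>": 1.0})
    return random.choices(list(dist.keys()), weights=list(dist.values()), k=1)[0]

# Generate one word at a time, exactly as described: there is no stored
# quote being retrieved, only repeated sampling from learned statistics.
word, output = "<start>", []
for _ in range(10):
    word = sample_next(word)
    if word == "<end>":
        break
    output.append(word)
print(" ".join(output))
```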
Because it remixes, because it pulls other information together,
a few things.
One is, if you really want to give it a real try
at making it sound like Adam Conover,
you should probably use, you know,
the latest Claude Opus 4.5 model,
give it a bunch of your writing to work from
and then criticize it the way you would a person.
Like, actually, you're being really corny.
This is not how Adam talks.
Really, you think he talks like this?
And see if you can get it to write better.
I mean, that's a worthwhile experiment to do because, again, we don't want to bury our head
in the sand to be like, this is ineffective in every case.
I would like more critics to try to make the maximalist approach of trying to make AI do stuff,
right?
In science, we'd say that you actually want a really stringent test.
You want to actually test the hardest possible case of trying to make AI work rather than walking
away before it does.
So that's another side note.
Well, based on my past experiments from like a year and a half ago, that's fair, I haven't
done it recently.
But, you know, I can't help but have the experiences I've had using the technology and having been underwhelmed.
Yes.
And then the other point on creativity.
So look, we are very bad at measuring creativity, right?
There's a bunch of tests that we do when we measure people's creativity.
There's the remote associates test, where you ask people to find
connections among words that seem unrelated to each other.
There are tasks where you ask people to come up with as many uses for a cheese grater as possible.
We use these as proxies of creativity.
On all of those tests, AI beats almost all humans, right, on creativity.
When my colleagues at Wharton did an experiment, they had their MBAs in their design thinking class,
and you can say whatever you want about MBAs, but these are really creative people,
a lot of them from creative industries.
A lot of startups have come out of that group, and they asked them to generate 200 startup ideas.
They had the now-obsolete GPT-4 generate 200 startup ideas.
Outside judges judged the ideas by willingness to pay.
Of the top 40 ideas, 35 came from the AI, only five from the humans.
So, like, on one hand, we find AI is quite creative.
On the other hand, it's kind of creative like one person who has a whole bunch of themes
that it kind of gets stuck on.
Like, if you talk to AI, it's kind of obsessed with other AI concepts.
It talks about VR a lot.
Like, it's sort of like what somebody who was online a lot in 2022 would think about.
Crypto comes up way too often.
But we found that if you actually ask it to think more diversely, you can get more diverse
ideas out of it.
So again, another thing I'd be careful about is saying that the AI can't be creative.
I don't think it is as creative as you, but again, you're at the very top of a field of, like,
writers who get paid to write and have had television shows, and that is an elite group of
people. And I think with creativity, there are a lot of things you may
not have creative ideas on, or where people might want creative help. I think we creative types
tend to elevate creativity as this very human task. I mean, AI shows apparent creativity in most
tasks that we give it, and when you compare it to groups of people, it does a very good job. I think
there are downsides to AI creativity. I think it could replace human creativity, like all
kinds of other work where you outsource things to the AI. But again, I want to be cautious about
saying it's not creative because it can't write like Adam Conover, the well-known television
personality and writer and stand-up comic. Like, I'm talking to the most elite
people in a field. And it's like, yes, if it could do everything you're doing, we'd be in a very
different world right now, right? Like, if it could do what you're doing at the top 1%
or 0.001% that you're at. But I also think it's a mistake to say it can't do creative work or
that it just regurgitates the work of others, because it doesn't quite work like
that. It's absolutely doing inspired, you know, remixing in some deeper way, but it's not, you know,
directly regurgitating other people's work. Image models are a little weirder and
different than language models, but otherwise, that tends to be true. That's a great
answer. Another argument I've heard about AI is that there's so much focus on language models
as being, you know, the frontier of AI. And yet not all human thinking is linguistic, right? Or not all
tasks that need to be done are linguistic.
And something that I've always tried to get a better handle on is, is there a
frontier of AI?
I know there's image models.
There's language models.
Is there some other kind of model that is not predicting, like word tokens that we should
be aware of and that's coming?
Yeah, I mean, first of all, all the models now are multimodal, which
means they can take audio tokens and they can take visual tokens and they can take word tokens.
Like, all of those can go into these systems.
There's also an attempt to build what's called world models,
which are sort of like video games that the AI can operate in,
and so it learns how the physics of the real world works.
The truth is, like, the weird mystery at the center of all this
is we don't know why large language models are as good as they are.
Like, we know how they work technically,
but we don't know why a system that just produces the next token in a sentence
can produce seemingly novel information that we haven't seen before, or do new science.
Like, that is weird stuff, and we don't have great theories on that.
Stephen Wolfram, the famous mathematician, his view was that by
making a model of human language, we made a model of human thought, or some segment of human thought,
and they can think in a humanish way. We don't know if that's the case. So, but there are, I mean,
I think one thing for a lot of people is, you know, there are a lot of critics of large language models
as an approach. And if you look at today's large language models, they're quite different than they
were three years ago. They have all of these other features bolted onto them that make them more
than just large language models. My very first academic paper I ever published was on Moore's Law,
the famous law that has held since, like, the 1960s, that the speed of computers and the
number of transistors on a computer chip doubles every two years or so. And if you actually zoom into
Moore's Law, it's not one thing. There are a thousand different processes that have evolved
to make chips operate. And every time we almost hit a wall, a new technology comes along. But as a
consumer, you don't care. Your next year's chip is just better than the last one. I think we're
going to see the same thing with AI. There's so much money in this and so many research paths forward
that if large language models sort of hit some sort of wall, and they have in the past,
people will figure out another way forward. So I think that people fixate on this: your current large
language model has a reasoning model bolted on. It has tool use bolted on. It has web search
bolted on to it. Increasingly, they do a lot of different things in one go. And I think there's
going to be a lot of exploration of different approaches, but to somebody who doesn't care about the
details of token prediction, it's just going to look like steady progress in most cases.
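As a rough illustration of the doubling law Mollick mentions, steady doubling can be written as count(t) = count(0) * 2^(t/T) with T of about two years. A purely illustrative sketch:

```python
# Simplified Moore's Law: capacity doubles every `doubling_period` years.
def moores_law(initial_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years` of steady doubling."""
    return initial_count * 2 ** (years / doubling_period)

# Example: a 1-billion-transistor chip projected 10 years out:
# five doublings is a 32x increase, about 3.2e10 transistors.
print(f"{moores_law(1e9, 10):.2e}")
```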
Got it. So, like, each AI company, as they want to make their model a little bit
better, they'll come up with some new, maybe AI-adjacent thing for it to do. They'll bolt on some new
bit of functionality that'll make it more capable in a way that might have unpredictable results that we
don't even see now. Right. Or that might just keep the capability going. Or one of these companies does
hit a wall and then a successor company comes up with a different thing. I mean, this is the flip side
of the capitalism piece is we're very good at creative destruction in capitalism, right? Like, if you fall
behind the technology, other people will come along as long as there's money and incentive to be made.
So I just, again, going back to our big picture view, I think this idea that we're going to hit some sort of, you know, impossible point, and everyone's just fixated on large language models and there's no other way forward: half the people who left OpenAI, the senior leadership, who included the former CEO, are launching companies that offer competing, different kinds of models or approaches that are ready to step in if large language models break.
So, you know, again, I think if we conflate OpenAI and large language models and ChatGPT with AI, you're kind of missing the bigger picture that this is not
just a one company kind of town at this point.
Yeah.
Okay.
Thank you.
I feel like you've been batting away all these pitches I've been throwing you very effectively,
and I really appreciate that.
I guess I'd like to ask you for yourself, what are you most excited about and what are you
most frightened about, alternatively, about the future of this technology that maybe I wouldn't
even know to ask about?
I mean, I think there's a lot of things that worry me.
I'm worried about persuasiveness.
There's early evidence that these models are more persuasive than most people.
And I don't think we've started to think about what that means.
Certainly people are thinking about false information.
I don't think people are thinking enough about the fact that video models are going to be good enough to replicate any scene you want by the end of next year, basically, at least in short segments, such that most people couldn't tell the difference.
These open-source models, the ones competing against the closed companies, are generally available.
Anyone can overcome guardrails.
I think we're not ready for a world, whether you call it AI
slop or whatever kind of name you want,
where we're not going to be able to tell what's real
and what isn't anymore. We're not ready for that
world. We haven't set ourselves up for that.
I think we're not ready for a world where AI is hyper
persuasive. I think in general, the
maximalist case that AI turns out to be really good at
stuff is something
that we need to be spending a little bit more time worrying about.
Not hype levels of replacing every
human in labor in the next two years,
but in general, we are in the
early days of an industrial revolution that
we have to be paying attention to.
But I'm also excited by a lot of things.
Like, I have seen personally, like, I teach classes using AI help.
I mean, I still remember one of the earliest examples, and people have probably
found this themselves by now.
The week after ChatGPT came out, I showed my
students how to use it.
And I had a student who grew up in, you know, an impoverished background,
a very smart guy, but he told me he never learned to write very well.
And he'd taken writing classes, but it just hadn't taken for him.
And his cover letters would always get rejected from places.
And he said, since I started using AI, I've gotten three interviews for jobs,
and I landed a job.
Like, I think there is some really exciting stuff
about giving capabilities to more people.
I wish more people would view this
as a profoundly democratizing technology
because it is.
Like, even if OpenAI's profit motive goes away,
these open models are available.
They can do a lot of things
that have held people back
in education and medicine
and, you know,
and helping you do daily tasks
and getting good advice.
I mean, my standard has tended to be
BAH, best available human.
Is the AI worse or better
than the best available human you have access to?
And, you know,
if you move in very elite circles and you know the smartest people in the world,
maybe the AI's crappy compared to what you know.
But for a lot of people, the AI is pretty good as a solution to many problems.
And I think that has to be our standard of comparison.
I'm very excited about what that means for a lot of people in a lot of the world
who don't have access to these sorts of tools.
I think we're about to see a lot of good things, a lot of bad things happen all at once.
And, you know, they're already happening all at once.
Like a billion people are using these things.
They're using it for everything from companionship to cheating on tests to, you know,
doing cyber attacks to figure out their diseases to doing new science.
And we're going to have to make some thoughtful, you know, decisions about what that means
and what matters to us, and how do we start emphasizing the good stuff?
Because the bad stuff is going to happen as well.
And I think, you know, it's all going to be at once.
I think one of my biggest concerns, and I hope that this, what I'm about to say, like,
even makes sense.
Right.
So I hope you'll work with me on it, right?
But, you know, there's times when I'm sitting in front of my computer and I'm thinking about all the shit that I can do and I'm thinking about your argument.
I'm like, this is going to transform everything, right?
And then I walk down the street and I see a guy mowing the lawn because a lawn needs to be mowed, right?
And that guy needs to go to the store and get some food, right?
And that guy needs to find love in his life and he needs to raise his children.
And on weekends, he needs to, you know, take his wife out and get a drink somewhere, right?
And when he does that, he wants to see, like me, a stand-up comedian who is maybe somebody who he's aware of in the world, who's an actual person, right?
And I think when he calls customer service, he might be happier to get a real person on the phone because at root, society and the entire world is made of people, right?
Like, and we're social animals.
We need each other.
And there is a sense to which we're irreplaceable.
I'm not trying to put that in some spiritual, oh, our humanity is the most important thing about us.
I'm talking about, like, we're fucking animals and we have emotional, physiological responses to each other.
And we have physical dependencies on each other.
And what I worry about the AI revolution is that when we focus on that, we tend to forget that that's the case.
You know, we tend to forget how important, like, all of our human needs.
Like, even in Hollywood, you're like, oh, people are going to be watching AI movies.
I'm like, well, maybe, but don't people also want to see a hot person
who really exists in the real world, whose name they know, and they know who they're dating?
You know what I mean? And then, oh, you could see them at a restaurant, and oh, I happened to see
a famous person, and there's gossip about them, and like all that other shit, right? Like that, in my line
of work, that's what's actually motivating people. It's not just I want to see colors and images
on a screen that, you know, that distract me for a couple seconds. It's, it's part of the fabric
of human experience. When your, when your student is putting in a job application and they're
having AI write it and someone else is a real person who's going to read it on the other end and like
when that person is hired they're going to need to go into the office you know what I mean and be flesh
and blood and I worry that like so much innovation in this level of like automation like distracts us
from our from our fundamental dependence on each other in this way that there's a whole suite of
problems and solutions that we could be addressing that that were not I does any of that make
sense to you. Absolutely. I mean, I think we're in a complete agreement on this here. I think
that it's a Silicon Valley view, right, that the world is, you know, replaceable by bots. I don't
think it is. That's part of why I'm, you know, less on whatever you'd call the
maximalist hype view, that AI is going to replace all human work. I just think, there's,
first of all, AI is not that good at everything, right? It's good at some stuff, bad at some
stuff. But on top of that, there's a lot of human need here. Like, I don't, if you think about a job as
producing a PowerPoint, then the job is pretty meaningless. If you think
about it as the debates and conversation
that you have to come up with a decision about
what product you're going to launch. Like, that's
a human thing. And I don't think that goes away.
And by the way, my
student who wrote that cover letter, absolutely, right?
By the way, a new paper just
came out showing that cover letters are no longer valuable.
It used to be that if you were a good writer, you
could get a better job because you were a better cover letter writer.
Now cover letters have no information in them, right?
We're going to have a lot
of transformation, and valuable stuff
is going to vanish. Like, it's important to
recognize that, right?
Blind optimism is a mistake here, right?
But I do think in the end, the human stuff is what keeps us together.
I think we'll adjust to a world with AI because we still do want those human interactions and
connections.
I think that people are foolish if they think that AI will replace all of those human connections
or that it should replace all those human connections or that we will want it to do that.
But, you know, I think that that is going to be an ongoing debate about what those lines are.
I don't know the answer to whether people, let's say, AI movie making gets really good in three years.
You could create whatever movie you want on demand.
I don't know whether people gravitate towards that or not, right?
I don't know whether I don't have an answer to those questions.
We're going to find out together, I think, to a large degree.
But I think humanity wants to still be human.
And I think that that is a thing that gets missed a lot in the world where, you know, of AI hype.
I'm glad we're ending here on a position of agreement because I agree with everything that you said.
And by the way, I want to take that back.
I think everything that you've said is eminently reasonable
and is a challenge that I and I think probably
a lot of people listen to this show
and a lot of people in the world need to hear.
And, you know, the future
will be the future and we're going to see what it is.
And I think we need to be open to all versions
of what it becomes.
I want to go further than the future will be what it is
because I don't like that kind of powerlessness.
Like there is the opportunity to steer the future.
And the biggest problem I have is for the people
who are listening here who are like, AI is going away,
and who have just been yelling at this thing, like,
I can't believe this guy, probably.
Hopefully it hasn't come across that way,
like, I'm not a representative of AI.
I take no money from any AI lab.
I, you know,
and I think there's a very mixed future ahead of us.
And I don't think their vision of the future
is what everybody wants to live in.
But that means there's a chance for agency and change, right?
Like, we do have an opportunity to use these technologies
and shape them for good,
whether that's organizing around them, or
using them personally in your life in a way
that enhances creativity and ability
that helps you solve problems that you're facing
or your friends or other people are facing.
Like, to me, the democratizing aspect is something we need to take advantage of.
And I don't want to just let the future be what it will be.
I want the people watching this to think about: are there policies I need to be advocating
for?
Are there legal issues I need to be dealing with?
Is there a way to use this technology to build the future that I want to build
rather than waiting for other people who I don't agree with to build it for me?
And to me, the biggest concern I have is people sitting this out and listening
with comfort to, AI is stupid,
it's going to go away.
It isn't.
It isn't.
It's not going to do that. So please help make the future you want to see rather than
trusting the people who believe AI is real to do that for you.
Beautiful. I love that you're ending with agency, and not agentic AI, but human agency, our own
human agency. This is something I preach on the show all the time, that we always have the ability
to do something to affect, you know, what tomorrow is going to be. And I love that you're not
just saying, oh, learn to use AI to shape it. You're saying, no, be active in the policies that are going
to control what AI becomes and determine what AI becomes, that it's still something that is
being deployed by a society that we participate in, that we can have power over. Do you have any
specific examples of policies that you'd like to see that people could be advocating for
to take us out? I mean, a few things I think that are really important. I think we need really
strong protections against deepfakes. I find those really worrying as an approach. I think that we
should be making, we should be thinking about how do we establish both societal guidelines
about what's acceptable use. Like, you know, I feel like shunning people for all AI
use is kind of a mistake, because everyone's secretly using it while publicly shaming other
people for using it. I think that ends up being a problem. So I think we need to be
drawing lines about what is acceptable use and what isn't. And enforcing those rather than all
AI or no AI, there has to be some sort of guidelines there. I think at the governmental level,
I think we need to be thinking hard about, let's say, the AI companies are right. And over the
next 10 years, a lot of jobs are under threat. What do we want to start to do by thinking
about retraining? Do we want to put limitations on how things are used? Is this a problem that
unions can help address, or worker groups can help address? Is this something where we have to step in
at a societal level? Like, you know, if we are indeed watching a transition happen between
forms of kind of, you know, capitalist competition, this is a great time to intervene if you want
to see an alternative viewpoint. Like, government policy still matters in these cases. And I think
there's an interesting chance to advocate for doing those things. At the same time, I'd also
say building stuff that matters.
Like, I'd like to see more people build educational tools and tutors that are not-for-profit.
Everything we do at the generative AI lab that I help run is all open weights and
open source, and everyone can copy those and change anything they want on them.
I think we need more people doing that not-for-profit kind of work of building AI
tools that matter for more people.
There are lots of ways to make a difference here.
Similarly, in your own field of expertise, warning people what AI is good at or bad at,
helping people become more expert.
Like, this stuff is helpful as well.
Ethan, I can't thank you enough for being on the show, and I appreciate you pushing past it.
I sensed a little frustration sometimes in your voice, and I appreciate you being very clear
and making your points in a very considerate way and bringing all this to us.
It's like exactly the conversation that I, and I think so many people need to be having right now.
So I'm grateful to you for coming on the show.
Thank you.
And it's funny because, I think, in this audience, I feel like I'm the big AI-is-coming-and-it's-good guy.
But, like, you know, I feel like I very much represent the middle ground here, which is, like, you know, I think you do too.
And I, and I, and I appreciate it.
I think part of what, like I said, my frustration continues to be the same one, which is we can't bury our heads in the sand.
Like, this is the time to participate, not to walk away.
Well, if folks want to participate, you have a book out called Co-Intelligence.
What's the full title?
It's got to have a subtitle.
Co-Intelligence: Living and Working with AI.
Folks, go get that right now at our special bookshop, factuallypod.com slash books.
Where else can people find your work?
You have a Substack?
Yes, oneusefulthing.org is my main Substack.
If you want teaching prompts that we've actually tested,
that help turn AI into a tutor or give you simulations
to practice negotiating, if you go to the Generative AI Lab at Wharton,
we have a whole bunch of free prompts available that you can modify or use,
and a lot of research there also, if that's interesting to people as well.
Ethan, thank you so much for coming on the show.
Thanks for having me.
Well, my God, thank you once again to Ethan for coming on the show.
Once again, if you want to check out a copy of his
book, you can do so at factuallypod.com
slash books. Every book you buy
through that link will support not just the show
but your local bookstore as well.
Of course, if you'd like to support the show directly,
that URL is patreon.com
slash Adam Conover. Five bucks a month
gets you every episode of the show, ad-free.
For 15 bucks a month, I might read your name in the credits.
This week I want to thank Thorntron,
John Crump, David Snowpeck, Aaron Explosion,
Robert Fuss, Game Grumps, I love you, Game Grumps,
Paul McCollum, Rick J. Nash, Howard
and Kevin, Fatim Merrickon, Darren Kay, and Ed.
Oh, and don't forget, a screaming Batman.
Thank you so much for your support of screaming Batman.
If you'd like me to read your name or silly username at the end of the show
and put it in the credits of every single one of my video monologues,
once again, that URL, patreon.com slash Adam Conover.
I want to thank my producer, Sam Routman, and Tony Wilson.
Oh, my tour dates.
If you want to come see me on the road, adamconover.net.
Madison, Wisconsin; Fort Wayne, Indiana; San Francisco, California;
so many other places,
Adamconover.net for all those tickets.
Now, I want to thank my producer,
Sam Routman, and Tony Wilson.
Everybody here at HeadGum
for making the show possible.
Thank you so much for listening.
And I'll see you next time on Factually.
