Freakonomics Radio - 554. Can A.I. Take a Joke?
Episode Date: August 24, 2023
Artificial intelligence, we’ve been told, will destroy humankind. No, wait — it will usher in a new age of human flourishing! Guest host Adam Davidson (co-founder of Planet Money) sorts through the big claims about A.I.'s future by exploring its past and present — and whether it has a sense of humor. (Part 1 of "How to Think About A.I.")
Transcript
Hey there, it's Stephen Dubner. Today on the show, a rare occurrence and a welcome occurrence.
We have got a bona fide guest host. This is a person whose name will be familiar to many
of you, Adam Davidson. Adam, welcome.
Thank you so much, Stephen.
So Adam, for Freakonomics Radio listeners, you are almost certainly best known for having, is it, co-created and then hosted the NPR show and podcast Planet Money? Is that correct?
Yeah, I sort of had two careers. I had a career doing more human interest, more narrative stories for This American Life, and a career doing very straight business stories. And with my buddy, Alex Blumberg, who was at This American Life at the
time, we thought, well, what if we put them together? What if we made-
Peanut butter and chocolate.
Peanut butter and chocolate, yeah. You may be the one other person on the planet who can fully
identify with that thought. And so, we first did this big hour about the housing crisis called
The Giant Pool of Money. So, that led to Planet Money, which Alex and I ran together for about five years, and then
I eventually left for The New York Times and then later left for The New Yorker.
Planet Money, we should say, is still alive and very, very well.
And you, Adam, as I mentioned, are coming in to guest host this Freakonomics Radio episode,
but not just this
episode. This is a three-part series on essentially how to think about artificial intelligence. Is
that about right? That is about right, yeah. And actually, I was in my mind thinking of
that giant pool of money show we did so long ago, which is, you know, if you remember back in 2008,
all these things nobody had been
thinking about, the mortgage market, subprime housing, interest rates, the Fed, suddenly it
was this massive force that was going to, we didn't know what it was going to do, but it seemed scary
and big, and spending the time to just figure it out.
Like, what is this thing?
How can I think about it?
How can I make it life-size enough that I can just engage it?
What would you say was the main thing or a main thing that you wanted most to understand about, let's say, the next year or two of AI?
I would say the fundamental question is, is this time different? Is this just the
latest or is this a new kind of thing? You know, certainly in my life, I'm finding a few people
are all in on AI. A lot of people are saying, I don't know, it seems creepy. I don't want to have
anything to do with it. And I would encourage people, it doesn't mean you have to love it,
doesn't mean you have to hand your life over to it. But the more people who are involved in thinking about how it should
be used, probably the better outcome. I would like to think that one good way to get more people
engaged in it is to make a three-part series for the show. So I'm glad you did that. And most of
all, I'm just so happy to have you playing on our team. So thanks for joining.
Thank you.
It was so much fun.
I hope that comes across.
Thanks, Adam.
The thing I want, the thing I've been searching for for about a year now, should be simple.
At least I think it should be.
Like you, like everyone, I keep hearing about AI, artificial intelligence, and I want to know how to think about it.
I want a simple, clear, middle-of-the-road explanation.
Here's the deal with AI.
Here's how to use it.
Here's how not to use it.
But the problem is that the idea of AI inspires people to start talking about the future in extreme ways. AI is the most existential threat
to humanity. Serious people say it will kill us all. But other serious people, they say different
things. They say that AI is ushering in a new age, maybe a better age, where humanity can achieve
things never before dreamed of. It will eliminate disease
and poverty and allow us to live for centuries. I don't know about you, but I find that my brain
sort of shuts down when I hear these huge pronouncements. It will kill us all. No,
it will bring about heaven on earth. I've spent months now talking to as many smart people as I can find about AI, and I learned a lot. The main
thing, the big headline, nobody knows where AI is heading. That's why there's such a crazy range of
predictions. As one expert told me, there are no experts yet. We're still figuring this out.
So over the next three episodes, we're going to take a little tour
through the world of AI as it is now. We start today with the basics. What is AI? Why is everyone
talking about it? How does it work? What can it do now, not what might it do a decade from now?
And crucially, what happens when we start asking it to do things we think of as distinctly human? This is Freakonomics Radio, the podcast that explores the hidden side of everything with guest host Adam Davidson.
One major lesson I learned is that the big fears and the big hopes are not really about what we have today.
They're not about OpenAI's
ChatGPT or Google's Bard. This current generation of AI, which as we'll learn probably shouldn't
even be called AI, it's not going to kill us. It's more mundane than that. In fact, all the
talk of existential threats and complete transformation is distracting us from the current reality,
which is really quite interesting and also plenty confusing in itself.
Have you played around with ChatGPT or any of the other AI tools?
I have a lot, and I'm continuously struck by two experiences.
One is that it can seem magical.
I ask it to do something.
Write a sonnet about basketball,
write an essay about the history of farming, whatever. And the AI generates words and sentences and full paragraphs. And it seems impossible that some computer software is
creating all that. But the other experience is that those words it generates are a bit off.
They're weird.
No person would write them.
That has become my obsession.
Not just mine.
It seems to have captured the world's attention.
Is AI becoming human or is it altogether something else?
I wanted to try to get at that by asking a really simple question.
Can ChatGPT be funny?
Can it tell a good joke?
Almost.
I don't think it's as good as people yet.
That's Lydia Chilton.
She's a professor of computer science at Columbia University.
All it knows how to do is, from a sequence of words, predict the next one. So if you say, tell me a
knock-knock joke, what would you as a human being predict the next word would be? It would be
knock-knock, who's there? And how did you know that? Well, because you've heard it many, many
times before. And you don't even have to know what a knock-knock joke is to do that. You just
follow the patterns.
Because the software that is behind ChatGPT, as I understand it, is not looking at words. It's just looking at numbers.
So knock would be translated into a number.
Joke could be translated into a number.
And then it's just doing a bunch of math.
And when this number is near this number, then this other number comes up a lot.
Computers, at the end of the day, really only know how to operate on zeros and ones.
They add them together.
They subtract them from each other.
That's all they do.
But even with just zero and one, you have to figure out how to represent the number two.
And with those numbers, I can also represent words.
I can line all these up and actually say, if someone has typed A, what's the most likely letter they're going to type next?
It's like the dumbest thing you could possibly do, which is great.
It's one of my loves of computer science.
You take something really complex and make it so simple that a computer could do it.
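Here is a minimal sketch of that idea in code: a predictor that just counts which word tends to follow which, trained on a tiny, invented corpus (real systems use billions of words, but the mechanic is the same).

```python
from collections import Counter, defaultdict

# A tiny, made-up training corpus, invented for illustration.
corpus = "knock knock who's there knock knock who's there banana"
words = corpus.split()

# Count, for each word, which words tend to follow it.
follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most common next word seen in training, if any."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("knock"))   # -> "knock"
print(predict_next("who's"))   # -> "there"
```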
Imagine setting out to write the rules of being funny.
You could probably just about do it with knock-knock jokes.
Rule one, you say knock-knock.
Rule two, the other person says who's there.
Rule three, you say a word that kind of sounds like another word.
You see where this is going.
But when's the last time you actually laughed at a knock-knock joke?
If you can write clear rules for how to generate a joke,
it's probably not a very good joke.
That is essentially why Lydia Chilton gave up on AI the first time she looked at it,
which was about 15 years ago. When she was in graduate school, she tried out the cutting edge
of AI at the time. It now goes by a phrase I love, GOFAI. That's G-O-F-A-I, good old-fashioned AI. You can think of it as rules-based
AI, coming up with super complicated rules to achieve some outcome. So GOFAI, good old-fashioned
AI, is just all the things that we did pretty much before the internet. GOFAI had this vision
of allowing computers to see, and sort of the method was, let's, you know, take a picture of a human face and break it up into features.
Here's the one eyeball, another eyeball, nose, a mouth.
And then test that against all the other eyeballs that are in the database to identify, this is Adam, this is Lydia, this is Barack Obama.
This is the classic model of a computer program.
You give the computer a series of rules, and it follows the rules in sequence and spits out a
result at the end. To GOFAI, recognizing a face, telling a joke, diagnosing a disease,
coming up with the fastest route to your mom's house, whatever task you have in mind,
it's all just a series of really complicated rules.
So the AI researchers would try to write more and more complicated lists of rules.
And that didn't work, at least not very well.
There's two reasons.
The computers just weren't powerful enough.
Turns out this does kind of work, but you just need a lot of examples of what eyeballs look like
and what everyone's eyeballs look like to make that work.
And unless someone's going to sit there and type in everybody's eyeballs, it's just not
going to happen.
So it's really like it was a good idea, but the scale wasn't there.
Of course it didn't work.
The human brain evolved to work in a way quite different from GOFAI's long list of sequential rules.
Our brains don't start with a bunch of rules.
They start by taking in sights and sounds and smells and the rest,
and then they build connections among neurons which prepare that brain to interact with the world it finds itself in.
It's sort of this illusion that computer scientists were under
that if I write down enough rules, I can describe a cat or a table or anything.
But it turns out it's really hard to write down those rules.
You try table, you know, it's got a flat bit and then some legs, four legs.
Oh, but some of them have two legs.
Oh, but some tables fold.
And so then they have no legs.
And the world just doesn't break down in this categorical sense. And guess what? That's not
how people learn either. We just fumble around as newborns and toddlers and see a bunch of stuff and
kind of figure it out. And those toddlers have a lot of data. And so if computers could have
that data or even much, much, much more,
maybe they'll just figure it out on their own with the right information architecture,
which is neural networks. It's just taking in all this data and trying to predict,
is that a table? Is that a table? And it doesn't have to conform to hard rules.
The reason you're hearing all about AI now, the reason it is getting so much attention,
is that AI researchers shifted from good old-fashioned AI, the long list of rules,
to what Chilton just mentioned, neural networks designed to be more like the human brain.
The AI software is made up of a huge network of nodes designed to simulate the brain's neurons.
You feed this AI tons of data and let
it form the connections. Interestingly, this neural network approach has been around for a
long time. It was first proposed in 1943 by two researchers at universities in Chicago,
a neurologist and a logician, but it wasn't until pretty recently that computers were fast enough with
enough memory that those neural networks fully took off as a powerful tool.
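Each node in such a network does something very simple. Here's a minimal sketch of one artificial neuron, with hand-picked weights standing in for the ones a real network would learn from data:

```python
import math

# One artificial "neuron": a weighted sum of inputs squashed into
# a 0-to-1 output. The weights and inputs here are made up for the
# demo; in a real network they are learned, not written by hand.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# Two made-up input signals feeding one node.
print(neuron([0.5, 0.8], weights=[1.2, -0.7], bias=0.1))  # ~0.53
```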
Also, for decades, researchers had a problem.
A neural network needs a ton of data.
If you want it to be able to identify a table, you need to show it a lot of tables.
If you want it to predict how human beings communicate,
you need a lot of examples of human beings communicating. And you need those examples
to be in a form that computers can read. And for most of the 20th century, there was just
not that much stuff. Then the internet happened and people just started dumping information. There's, you know, like probably 100,000 photos of me even on the internet and of everyone.
We all just gave away all our personal information.
And so this amassing of data, not just of facts, but of people and personal experiences
and thoughts really created the trove of information that we needed to train these algorithms rather
than trying to engineer rules and figure it out because there's just too many rules.
In all that information we dumped on the internet, all those blog posts and Instagram stories and
angry comments, as well as movie scripts and just about every book ever, we gave neural networks a ton of examples of our faces and our
experiences and our thoughts. Back to humor, if you're a computer connected to the internet,
it's very easy to find examples of people being funny and of people trying to be funny and then
being told whether or not they actually are all that funny. So after giving up on rules-based AI, Lydia Chilton decided to give neural network
based AI a chance because she had this obsession. What makes something funny?
And can I make a computer be funny?
Well, I will be honest. One thing you get to do in computer science is overanalyze things that you find fascinating but are not good at.
And that was me.
It's a power to be able to tell jokes, good jokes.
And did you not feel like you were good at it?
No.
I would say most of my humor is a little bit unintentional.
I would say certainly for myself and maybe other computer scientists feel like understanding people is a real challenge.
For me, it does not come naturally.
And so I like studying it so I can understand these things so I can feel like a normal human that understands other people.
And humor is a big part of that.
And I always just felt like this nut that I could crack.
We know computers can do math well.
We know they can store a ton of data. But humor? Making another person sincerely laugh out loud? Feels so human. And yet, Chilton says, there are actually ways that AI can get around that. The main way it gets around that is by
simulating those emotions. But we all simulate emotions as well. You can do it without feeling
it. So can a machine. And it learns it from patterns, just like you did.
But the best humor is really surprising. That's the fun of the humor. Like,
you never would have thought that person would
have said that. So is that also just following rules? There's this sort of myth out there that
creativity is somehow magic and jokes are one of the most creative things. They just come out of
nowhere and they don't follow patterns. And the myth says that unless you're
like Mozart, Picasso, Shakespeare, Einstein, someone like that, you're not going to come
up with something super creative. But it turns out that creativity is not that hard. It's just a lot
of hard work. And you always lean on patterns. The trick is that humor has that, this structure
beneath the surface, like a plot, like a chord progression.
But what it really is, is it's violating expectations in a very particular way.
Chilton and her collaborators did eventually get a computer to make up a joke.
Not a great joke, but a joke. Here's how they did it.
They focused on the American Voices section of The Onion, the humor website.
In American Voices, they take some topic from the news and then have a few fake person-on-the-street reactions.
It's a classic setup punchline.
But here, there's one setup and three punchlines.
This is great for a computer science researcher.
American Voices, which was originally called What Do You Think?,
has been around since at least the mid-90s.
So, 30 years, 50 setups a year, 3 to 6 punchlines,
that's thousands and thousands of jokes with exactly the same structure.
All that data allowed Lydia Chilton to come up with a series of 20 steps,
she calls them micro-tasks, that a writer
goes through to make a joke. For instance, if you're given a headline, first identify all the
elements. In her paper, she looks at a headline that says, Justin Bieber baptized in New York City
bathtub. Task one would be to identify four elements. There's Justin Bieber, there's baptism, there's New York City, and there's a bathtub.
Then task two would be to figure out what people would normally expect from such a headline.
And then, this is where the humor comes in, you subvert that expectation.
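As a sketch of how that decomposition might look in code (hypothetical helper functions, not Chilton's actual implementation), here are stand-ins for a few of her twenty micro-tasks:

```python
# A sketch of the micro-task idea, invented for illustration.
headline = "Justin Bieber baptized in New York City bathtub"

def identify_elements(headline):
    # Micro-task 1: list the entities and concepts in the headline.
    return ["Justin Bieber", "baptism", "New York City", "bathtub"]

def normal_expectation(element):
    # Micro-task 2: what would a reader normally associate with it?
    expectations = {"baptism": "a solemn ritual in a church"}
    return expectations.get(element)

def subvert(expectation):
    # Micro-task 3: violate that expectation in a particular way.
    return "Write a punchline that plays against: " + expectation

for element in identify_elements(headline):
    expectation = normal_expectation(element)
    if expectation:
        print(subvert(expectation))
```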
So with all of her structure and all of that data, how does AI do?
Chilton tried one for us.
Okay. I like these because I have toddlers.
So the real headline is 10-year-olds found working at McDonald's until 2 a.m.
AI says, talk about commitment. I can't even get my 10-year-old to finish their vegetables.
Another one. Well, that explains the finger painting in my Big Mac box last night.
Another one? Finally, a solution to the never-ending debate of homework versus real-world experience.
Eh, the punchlines are not great.
AI seems to understand the overall structure of a setup punchline kind of joke,
but it's struggling to make those punchlines actually funny, actually work as jokes, or even always make sense.
Though, to be fair, a lot of us human beings struggle with that, which made me wonder how
this looks from the perspective of an actual funny person.
My name is Michael Schur.
I am a television writer and producer based in Los Angeles.
Michael Schur is one of our era's most prolific and successful creators of TV comedies.
On his own or with others, he created Parks and Recreation and The Good Place.
He was a major force behind Brooklyn Nine-Nine and the American version of The Office and is an executive producer of the show Hacks.
I love all these shows. I think I've seen
every episode of television Michael Schur has had anything to do with. At the moment,
he's on the negotiating committee for the Writers Guild of America in their ongoing strike. He told
me he has dabbled in ChatGPT. I will say that I am generally averse to it because I know that by playing around with it,
you're helping it learn stuff to some extent. And as a longtime fan of science fiction writing,
I don't want to contribute in any way to the advancement, the rapid advancement of these tools.
So I have tended to shy away. I mean, that's actually an interesting moral question.
Like, as someone with some influence in your industry, you know, I certainly understand the
position of let me not participate, let me not encourage it, which makes a lot of sense. But
on the other hand, maybe let me understand what this thing can do so I can better represent my
community or something like that. Do you see a tension there?
Yeah, I do. And I do understand that at some level, both as a writer and as a member of the negotiating committee for the WGA, it is probably part of my job description to understand these
things, know how they work, play around with them, that sort of thing. But I've also, I think I get
it, you know, and I kind of don't want to encourage it.
Schur's approach to AI, to not use it, to hope it goes away, made me think of the Luddites,
the movement of British textile workers who smashed factory machinery in the early 1800s.
Because, of course, they've become the go-to historical analogy for anyone who resists a new technology.
But the real story of the Luddites
is a bit more complex. The Luddite movement was made up of highly skilled textile workers,
the very people who were most familiar with the new industrial technology. They weren't
against the machines, they were against the way the factory owners were using those machines.
Factories were using the technology to make inferior products,
and in the process, destroying the pipeline of skilled textile workers. And in that sense,
Michael Schur is almost exactly a Luddite. He is not so much worried about the technology itself,
he is worried about how industry will use that technology to weaken the power of writers.
He is also worried that those
studio execs don't even realize that they are being self-defeating. If they damage the current
comedy writing ecosystem, they might find themselves without anyone who knows how to be
funny professionally and reliably. Let's take Schur, for example. He got a job at Saturday
Night Live when he was fresh out of college.
I was extremely bad at the job for a good long time, and by all rights should have been fired,
but eventually figured it out through observation. The head writers at that time were Tina Fey and Adam McKay. And I had good friends who worked on the show, Dennis McNicholas and Robert Carlock.
And I just decided to be a sponge. I just decided to say like, okay,
I'm going to watch these folks. I became a forensic scientist. I would look at their
sketches and I would break them down and I would try to understand what made them good and what
made them successful. And eventually, through a combination of observation and genuine mentorship,
I kind of got to the point where I could do the job. Schur and his fellow WGA members are striking right now for a bunch of reasons,
but one big one is AI. Specifically, the writers don't want studio executives to be able to use AI
to supplant writers as the creator of a new idea for a movie or TV show.
The way Hollywood works is that writers have the most power and make the most money when
they generate original ideas, like Schur did with The Good Place.
What Schur and the WGA fear is that executives will ask AI to generate a bunch of ideas for
TV shows and movies and then hire writers to flesh those ideas
out into scripts. There is no AI program that can actually write a ready-to-shoot full script,
at least not yet, but AI can generate a ton of ideas, and at least some of them might be usable.
If AI creates the original idea, then the writer is just a hired gun,
which means that more of
the rewards of the show's success accrue to the studio. The thing that we're fighting for here,
very simply, is the concept of writing being a viable career. It's never been remotely this
hard for young writers to move to LA or New York and begin a career and then sustain that career.
And I have watched as what was already a difficult path has become nearly impossible.
And that is essentially why we are fighting this fight. Because if it doesn't change,
if we can't make it more sustainable, it's going to stop. People will just decide that writing falls into the
same category as being a professional basketball player. Like, I love basketball, but I'm not
making the pros, so there's no point. And that would be a real shame. We would lose out on a
lot of great stories and a lot of great brains and hearts and souls of people who have something to
say. For Michael Schur and the WGA, this is existential.
It would mean a near total collapse of the career of writing for movies and TV shows.
And that fear is about the current generation of AI, the one that cannot yet write a full
script.
AI, of course, is getting better all the time. My fear is that even if these machines and programs only ever get really,
really good at doing the thing that they do, which is predictive text, that they will still,
at some point, with enough data and with enough computing power, get to the point where they could
accidentally stumble into something that might look enough like a
genuine human idea that people wouldn't really care one way or the other. And that's what
honestly worries me is the idea that it will be so good at imitating or predicting based on its
vast reservoir of existing knowledge that people won't really be able to tell the difference when it generates
whatever it generates.
Obviously, if you write for a living, the idea of AI writing as well as you is pretty
worrisome.
But what about the rest of us who don't write TV shows, but we consume them?
What would it mean if, as Schur says, AI just keeps getting better at predicting things?
That's the thing that keeps me up at night and haunts me and makes me feel like there's
something very, very dangerous that is right around the corner.
That's coming up after the break.
I'm Adam Davidson, and this is Freakonomics Radio.
Welcome back to Freakonomics Radio.
I'm Adam Davidson.
I am on a journey to figure out how I should think and feel about AI and its place in our society.
A way that doesn't have the panic or the excitement cranked up to 11.
So I knew who to call.
I'm Joshua Gans.
I'm a professor of strategic management at the University of Toronto.
And I guess I'm an economist for a living.
I've been turning to Joshua Gans for years
for exactly this sort of thing.
There is some exciting new trend
and everyone is freaking out.
What's a calm, grounded way to understand it?
Joshua Gans will know.
I call that process de-sexification.
Meaning, like, we're taking something really exciting and brand new, and how can we make
it boring and predictable and like a lot of other things?
Exactly.
Exactly.
That's my mission in life.
Gans has co-written two books on AI, Prediction Machines in 2018 and Power and Prediction in 2022.
He also runs a program at the National Bureau of Economic Research on AI, through which he's written and edited a ton of smart papers on the subject.
Nearly everything he writes includes that word, prediction. He says the best way to understand the economics of AI is to think of it as a process that
reduces the cost of prediction.
And what is prediction?
Prediction is taking information that you have and turning it into information that
you need.
For instance, when we predict the weather, we're taking information of historical weather
trends and other things going along at the
moment, and we use it to turn it into information we need, which is a forecast.
Not to say that these predictions are perfect.
They're just better than what we'd otherwise have to make decisions with.
But the big leap was turning things that we didn't normally think of as a prediction problem,
realizing they were a prediction problem, and then applying these new methods of statistics to solve it.
Let's step back a moment.
Earlier, I mentioned that artificial intelligence is not the right term for the current generation
of what we have all come to call AI.
The word intelligence suggests that there is some active process of thought, but that is not what ChatGPT or any program is doing.
All it is doing is taking in information and using a lot of mathematics to predict what
information comes next.
The pros call it machine learning.
There are no words or pictures or sounds.
There are only numbers.
Words are turned into numbers, pictures into very long numbers, sounds into numbers.
And then the AI does math.
It's not even very complicated math.
Each step is fairly straightforward.
It's just that the software does a lot of math, a lot of linear algebra equations over
and over again.
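Here's a minimal sketch of what one of those steps looks like, with an invented four-word vocabulary and random numbers standing in for learned parameters:

```python
import numpy as np

# Everything below (vocabulary, vectors, weights) is invented for
# illustration; real models learn these numbers during training.
vocab = ["knock", "who's", "there", "banana"]
rng = np.random.default_rng(0)
embedding = rng.random((4, 3))  # each word becomes three numbers
weights = rng.random((3, 4))    # stand-in for learned parameters

def next_word_probabilities(word):
    vec = embedding[vocab.index(word)]   # word -> numbers
    scores = vec @ weights               # one linear-algebra step
    exp = np.exp(scores - scores.max())  # softmax turns scores
    return exp / exp.sum()               # into probabilities

probs = next_word_probabilities("knock")
print(dict(zip(vocab, probs.round(2))))
```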
So, after being trained on a ton of joke setups from The Onion, say, the AI can use math to more accurately predict what is likely to come next
in the punchline. So, let's get back to writing. In December of last year, the Harvard Business
Review asked Gans and his book co-authors to write an essay about ChatGPT. The team got together and hashed
out some big ideas and some key insights they wanted to put in the essay. And then Gans was
given the task of turning those rough notes into an actual finished work. What I did instead is I
looked at that and said, ah, I wonder what happens if I just put in the notes that we have into ChatGPT and say, write a 700-word piece describing these things at the level of an MBA student in terms of reading and terminology.
So pretty low.
Pretty low.
Exactly.
And so I did that.
Pressed enter and out popped exactly 700 words.
We did some light editing.
So I'd say about 10% of it was altered.
And off it was in the Harvard Business Review
and people read it and found it interesting.
We put a note at the bottom saying
we'd use ChatGPT for this purpose
because it was so new that seemed appropriate to do so.
And so you look at that and you say,
oh, well, why was I even necessary? And it's true. I saved myself an hour worth of time doing something that we normally
call writing. But let's think about that whole task. What really happened? The task of writing
was now decomposed into three things. The prompt, the actual physical churning out of the words and then the sign off at the end.
And then when you step back from that and say
what was the important part of this that makes it worthwhile to read
it's not the writing in the middle
it's the prompt and it's the sign off at the end.
It is not that all of a sudden you can't write
or what you've done is not valuable.
What that means is that anybody, even if they can't string a few words together,
can prompt ChatGPT to churn out their thoughts and then read it and sign off on it.
There's this potential for a great explosion in the number of people
who can participate in written activity. And that's the change that's going to come from this.
Because, to be fair, a lot of human-generated everyday communication is not great. Think of
PowerPoint presentations you've sat through, or memos from your colleagues,
or the instructions to some new gizmo you bought. We are inundated with communication that doesn't
meet the basic hurdle of being clear, comprehensible. To a professional writer,
AI that is good at writing sounds like a threat. But to a lot of other folks,
people who have to communicate but aren't great at it,
AI might be a solution. Joshua Gans made me feel a bit calmer about AI, a bit more settled.
I can see why some people are afraid of it and others like it,
but then I remembered a part of my conversation with Mike Schur.
That's what honestly worries me is the idea that it won't actually be creating a new idea, but it will be so good at imitating or predicting based on its vast reservoir of existing knowledge that people won't really be able to tell the difference.
By its nature, AI is backwards-looking.
It looks at whatever it is fed, and then it uses that stuff to make
predictions. So what happens if most of the writing we have was produced by AI, and then
if that AI is being trained on all that AI-written stuff to write more stuff? If our TV shows and
movies and essays and articles are all created by AI, and then are used to train AI to write more of the
same? Think of the funniest thing you've ever seen, your favorite book or movie or TV show,
that thing that surprised you, that came out of left field and just blew you away.
For me, I instantly think of Monty Python, or watching Spinal Tap, or seeing Ali G, or the UK
version of The Office for the first time, or the movie Step Brothers. Your list may be different, but you have one, right?
When you're talking about the relationship that audiences have to the art form, what you're
really talking about is, can you reach through the screen and grab someone by the lapels of
their jacket and shake them a little bit
and make them see the world differently or make them understand themselves differently.
And the AI piece of this to me is giving up on that concept. It's saying that's not the goal
anymore. If we go down that road, I don't think we can ever come back. I don't think that there
will ever be space for the better version of the art form to break through because the world will
be so cluttered with garbage and dreck and the slurry of other shows and movies that has just
run off into a processing machine and been spit back out in a new shape and form,
that there won't be any room for the good stuff. That's the thing that keeps me up at night and
haunts me and makes me feel like there's something very, very dangerous that is right around the
corner from where we're standing right now. You're supposed to be the funny guy.
Well, there's nothing funny about this. That's the problem, man. You know,
you think I want to be walking in circles for four hours a day and
talking about the death of the art form?
Much like the Luddites who saw a flood of inferior machine-made textiles replace the
higher-quality, more expensive stuff made by hand, Schur pictures a world of AI-driven dreck.
Middle-of-the-road stuff produced by a prediction machine.
A machine that predicts the most likely-to-satisfy answer.
Not the single, very best, most amazing thing.
No, the average.
The middle of the road.
So yes, Michael Schur is right.
If all of our comedy
was written by AI, we would probably only have what I think young people call mid,
middle-of-the-road, derivative comedy. And let's be honest, a lot of human-written comedy is pretty
derivative, pretty middle of the road. But people, at least some people, do want that grab-you-by-the-
lapels experience. That new thing that is fundamentally
unlike anything that came before. For now, that requires human beings.
Okay, so if creativity is what human beings can offer that AI can't fully replace,
it's pretty important to our economic future. In which case we should probably know what creativity is,
which is easier said than done.
There has been, in my view, in the economics literature,
kind of an abstraction away from the individual and that individual act of creativity.
That's right after the break on Freakonomics Radio.
Economists sometimes have a hard time talking about creativity.
Although one exception is Dan Gross from Duke University's Fuqua School of Business.
It's this ephemeral thing.
There isn't a broad consensus on what
this even is, let alone what a good way to measure it would be. You want to get an economist excited,
tell them there is some vague thing that can't be measured. They'll obsess over how to measure it.
Creativity is a deep issue for economics. As you've heard on this show many times,
economic growth, where more people
have more of their needs met, most often comes from innovation, from the output of creativity.
That could mean a new technology or a new TV show. They're both bringing something into the world
that wasn't there before. Some societies and some moments in history produce a lot more creativity than others. Economists want
to understand that. So they look at the kinds of things economists pay attention to. Property
rights, population density, interest rates. They don't usually look much at individual people.
Partly this is because of the tools that are available and the data that are available.
There has been, in my view, in the economics literature, kind of an abstraction away from the individual
and that individual act of creativity. And that's what I decided I wanted to try to get
a little bit more insight into. Gross happened upon something economists love,
a natural experiment, a real thing happening in the world that would generate the data he needs.
Not something I would have thought of, online logo design competitions.
This work that I did in graduate school, it was studying how competition affects
creative production. And in particular, it was examining design competitions where you have
individual designers who are competing for a fixed prize that has been posted by a sponsor, typically a small business that's in need of a logo.
So I've done these, by the way. It's kind of awesome. I had a small podcast production company
and we just went on this site and explained what we wanted. And suddenly we had hundreds and
hundreds of options. And so let me tell you how this really worked in the setting that I studied.
The principal mode of feedback was one to five star ratings. So this design got
three stars, this one got one star. The designers can see the ratings of their own work. They can't
see what ratings have been given to specific designs by other people, but they can see the
overall distribution of ratings. They can see, okay, you know, somebody out there seems to have a winning idea
because there's a five-star floating out there somewhere. And then they can think about what
that means for them. What do you do when you get a three-star rating for your design
and you know someone else has five stars? You know you're not getting the gig. You're not winning the
award unless you do something different. Do you go for broke? Try something wild?
Do you get even more conservative and do something classical but boring?
Or do you just quit?
Gross was able to peer into this carefully controlled space
to get a real sense of how people respond.
What I've found here is that
when a designer gets their first five-star rating,
they'll really transition from trying out
different ideas to just iterating on the one that was rated highly. And that's especially the
case if they don't have any high-rated competition that they're aware of. On the other hand, if
they're aware that there is other high-rated competition, they'll then be induced to actually
revert back to experimenting a little bit more.
So with competition, creativity goes up.
But, you know, spoiler alert, I did read the paper.
So I know there's another part of this story.
Let me tell you about the twist here.
As a contest gets more crowded, so if there are a lot of high-performing competitors,
these individual designers, their incentive to keep investing more effort, to keep
putting more in to trying to make their designs better, that starts to go down. Because essentially
in a crowded field, it becomes a bit more of a lottery. The chances that that incremental effort
is going to really yield some return for them shrinks toward zero, because you have a lot
of other good contenders out there. The odds that you're going to slip by them start to become smaller and smaller, even if you have a good idea. And so crowded competitions actually discourage
effort. They might actually drive these designers to just stop participating.
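To see why, here's a back-of-envelope simulation, a toy model of mine rather than Gross's: each design's quality is a random draw, effort nudges your draw up a bit, and the best draw wins the prize.

```python
import random

# Toy model of the crowding effect, invented for illustration.
def win_probability(competitors, effort_boost=0.1, trials=20_000):
    wins = 0
    for _ in range(trials):
        mine = random.random() + effort_boost
        best_rival = max(random.random() for _ in range(competitors))
        wins += mine > best_rival
    return wins / trials

for n in [1, 5, 25, 100]:
    print(n, round(win_probability(n), 2))
# The same effort boost buys a big edge over one rival and almost
# nothing over a hundred, so the incentive to invest collapses.
```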
What Gross is saying is that there's this magical Goldilocks zone where there's just
enough competition to get the largest number of people to step up and do more creative
work. The story of the paper in a nutshell is that too little competition, you don't get a lot of
variation. Too much competition, you don't get a lot of effort. And it's somewhere in the middle
where at least incentives to be creative, to produce novel work, to just come up with new stuff
seem to be the highest. AI means that there will be essentially infinite competition for that creative middle of the road.
AI can produce so much work in that space that it probably makes sense for people who can only
create middle-of-the-road work to bow out, let the AI do it. But Gross's study contains a warning.
When the contest gets crowded, it's not just the middle-of-the-road folks who stop competing.
It's everyone.
One big question.
Will AI always be stuck in the middle of the road?
Or can it generate new ideas, new forms of writing, new ways of creating art or telling jokes?
Is AI fundamentally different from us?
Or is it just early on its journey?
There are some similarities between what people do and what computers do.
That's computer scientist Lydia Chilton again.
Certainly, both rely heavily on examples. The more examples you look at and analyze,
the better you usually get at your craft. No one is born knowing how to do these
things. We're all learning from examples. The computer really is trying to simulate aspects
of human experience, but there are some things like if you can't actually feel it, you don't
have what we call ground truth data. You don't know what's real. You're only seeing part of it.
You're only sort of guessing. And I think we've all been in experiences where like,
I don't really know what's going on, but I can kind of guess. And so that's what the computer
is. It's just guessing, but it's seen enough data that it can guess correctly often enough.
I feel like I want humans to win. And I would love it if you said
there's something fundamentally human that computers
will never be able to do. It's hard for me to separate what I think will happen from what I
want to happen. And nobody knows. Here's what I want. What I really want is to show people
that these things like creativity that we think are mysteries, it's not a mystery.
You can do it.
Now, in this process, if I have accidentally enabled the machine or helped the machine in any way do better than people, I'd be like, oh, maybe I shouldn't have done that.
This is a classic computer science thing where we're so excited about just showing the computer can do it.
Maybe we should have thought whether it should do it.
Your job is not really fundamentally to figure out what are the implications.
Your job is to advance science and to teach science, right?
That's what I'm good at. I'd say I'm not that great at the other thing.
At thinking through the implications.
I try, but I have to admit, I get a little bit stuck.
I'm so caught up in the idea of understanding this process.
And I do really think, it's hard for me to think of,
okay, computers can make jokes, like what comes next?
The question about humor is really a question about humanity.
Are there things, valuable, important things,
that only humans are able to do?
If there are, then the answer is clear.
People can thrive so long as they focus on the human stuff.
Let AI do whatever it is that AI can do.
But if we learn that there are no things, or very few things, that humans can do better than AI,
then our position is a lot more confusing.
What is our role in a world where we're not needed?
We're not there now.
That's not today's issue.
But it could come soon.
GPT-2 was roughly the size of a honeybee's brain.
And it was already able to do some interesting stuff.
Now I think GPT-4 is roughly the size of a squirrel's brain,
last I checked. So we've moved, you know, from honeybee to squirrel. And I was trying to forecast
when would it become affordable to train the human brain? How long will that take? And what
will it mean for humans like you and me? Next week on part two of our series, How to Think About AI,
we'll answer those big questions and a few
others, including, is AI coming for your job? And if so, what can you do about it? That's coming up
next week on Freakonomics Radio. I'm Adam Davidson. Thanks for listening.
Hey there, Stephen Dubner again, and that was our guest host, Adam Davidson.
He will be back next time with part two of How to Think About AI.
Until then, take care of yourself, and if you can, someone else too.
Freakonomics Radio is produced by Stitcher and Renbud Radio. You can find our entire archive on any podcast app or at
Freakonomics.com, where we also publish transcripts and show notes. This series was produced by Julie
Kanfer and mixed by Eleanor Osborne, Greg Rippin, Jasmin Klinger, and Jeremy Johnston. We also had
help this week from Daniel Moritz-Rabson. Our staff also includes Alina Kulman, Daria Klenert,
Elsa Hernandez, Gabriel Roth, Lyric Bowditch, Morgan Levey, Neal Carruth, Rebecca Lee Douglas, Ryan Kelley, Sarah Lilley, and
Zack Lapinsky.
Our theme song is Mr. Fortune by the Hitchhikers.
The rest of our music is composed by Luis Guerra.
As always, thank you for listening.
Can I call you Che, man?
If you really want to, sure.
Che Gans.
You know, it's your podcast.