Fresh Air - A Chatbot Reacts To A Book About Tech
Episode Date: July 23, 2025. After writing chapters of her new book about how tech companies help and exploit us, tech journalist and novelist Vauhini Vara fed those chapters to ChatGPT. She told the chatbot she needed help with her writing, but her real goal was to analyze and critique the AI's advice. Her new book is Searches: Selfhood in the Digital Age. Also, TV critic and historian David Bianculli reacts to the cancellation of The Late Show with Stephen Colbert.
Transcript
These days, there's so much news, it can be hard to keep up with what it all means for you,
your family, and your community. The Consider This podcast from NPR features our award-winning
journalism. Six days a week, we bring you a deep dive on a news story and provide the context and
analysis that helps you make sense of the news. We get behind the headlines. We get to the truth.
Listen to the Consider This podcast from NPR. This is Fresh Air. I'm Terry Gross. Here's the kind of conflicted relationship my guest
has with big tech. Tech journalist and award-winning novelist Vauhini Vara has ethical reasons why
she shouldn't shop on Amazon, and at least as many reasons why she does. Then there's
Google. She's opposed to how Google monetizes
our personal information to sell ads geared to our interests, but she
appreciates the archive of her own stored searches, many of which she lists
in her book because of what they reveal about different periods of her life. As a
tech reporter, she got access to a predecessor of ChatGPT. She loves playing with AI and has found ways it can be helpful.
But she's skeptical of its use as an aid for writers.
She's written twice about testing a chatbot in that capacity.
First in an essay that went viral called Ghosts, in which she asked AI to help her write about her sister's death.
And now again in Vara's new book, Searches: Selfhood in the Digital Age.
After she wrote chapters of the book, she fed the chapters to ChatGPT and asked for
help with her writing.
Then she analyzed the advice and what it says about the abilities, shortcomings, and biases
of the chatbot.
She added her interactions with ChatGPT
to her book. The theme of the book is how tech is helping and exploiting us.
Vara started as a reporter at the Stanford University campus paper where
she edited its first article about Facebook when Stanford became the third
university to get access to it. She covered tech for the Wall Street Journal,
was a tech writer and editor for
the business section of the New Yorker's website,
and now contributes to Bloomberg Business Week.
Her novel, The Immortal King Rao,
was nominated for a Pulitzer Prize.
Her short story, I, Buffalo, won an O. Henry Award.
Vauhini Vara, welcome to Fresh Air,
and thank you for getting here. Your windshield got shattered by who knows what on the way over to the
studio. I'm so grateful to you for making it. It was worth it. I was like,
I'm getting to that studio. And you did. Thank you. And welcome and I enjoyed your
book. So you did this exercise of feeding chapters of your book to ChatGPT, asking for advice.
What did you tell the chatbot? Why did you tell it you wanted its help?
I'm glad you asked the question that way because I'm really interested in the way in which we sort
of perform different versions of ourselves when we communicate, whether it's with other human beings
or with technologies. And so I was
definitely playing a role with the chatbot. I told the chatbot that I needed help with
my writing and I was going to feed it a couple of chapters of what I was working on and I
wanted to hear its thoughts. The reality was that I wanted to see how ChatGPT would respond. And so the interplay between sort of my performance and its performance was super interesting
to me.
I have an ethical question for you, Vauhini.
Is it ethical to mislead a chatbot and ask questions under a kind of false pretense?
100%.
I say that as a journalist with the full expertise and authority of my
role as a journalist. You know, I think so. I think our relationship with these products
is really different from our relationships with other human beings. I feel really strongly
about obviously things like accuracy and ethical standards in my daily life when I talk
to other human beings, whether it's as a reporter or not. What I think is really
interesting about technology, whether it's ChatGPT or something else, is the
way in which we can sort of play with these ideas of how people are supposed
to communicate, in ways that are, I think, pretty interesting and freeing.
After you got some feedback on the first couple of chapters,
you asked the chatbot if it's okay to share a couple of more chapters.
And ChatGPT answered,
absolutely, feel free to share more chapters.
I'm looking forward to reading them and continuing our discussion.
And that gets to a very fundamental question
that you asked ChatGPT about, which is,
when a chatbot uses the first person "I," what exactly does it mean? Because it is not a person. It is not an "I." It is artificial intelligence. It's a computer program. It's basically a machine. So what is the "I"? What does that mean, that the chatbot is using "I"?
Yeah, I mean, I would argue that that "I" is a fictional creation of the company OpenAI
that created ChatGPT.
So we think about these technologies, I think, sometimes as being very separate from human
experience and human desires and goals. But in fact, there's this company called
OpenAI whose investors want it to be very financially successful. And the way to be financially successful
is to get a lot of people using a product. The way to get a lot of people using a product is to make people
feel comfortable with the product, to trust the product. And one device
that a company might use in order to do that is to use, have the product use language that makes you
feel a bit like you're talking to another person. So in reading ChatGPT's responses to your chapters,
one of the biases you noticed was that it suggested you add more about the positive side of AI and its creators.
I thought that was interesting. Did that say to you that it was revealing
the chatbot's bias or
pointing out your
negative bias or both?
It's such an interesting question and it gets to the heart, I think, of what is
problematic about these technologies because I can't claim to have any
way of knowing why it said what it did. So basically, I fed it these chapters about big
technology companies, and I said, What feedback do you have for me? And it said, you could be
more positive. And then later, it goes on to provide these sample paragraphs that it thinks I should include in the book about how Sam Altman, the CEO of OpenAI, is a visionary leader who's also a pragmatist, like this really glowing stuff. It would be fun, and it would support a strong critique, to be able to say, oh, clearly OpenAI has built this product in such a way that it's deliberately
having the product spout this propaganda about its CEO that's positive.
It's certainly possible that that's the case, but there are all kinds of other explanations
for it too.
It's possible that the language that the technology behind the chatbot absorbed in order to learn, quote unquote, how
to produce language happens to be somewhat biased toward people like Sam Altman. There
are all kinds of possible reasons and we just don't know. But I think that not knowing is
a problem.
Did you use any of the chatbots advice about balancing your reactions to AI and including more positive
aspects of it?
I didn't.
So one thing that I wanted to be really thoughtful about was actually not writing a book that
was influenced in any way by the rhetoric of the chatbot that I was then conversing
with about the book. And so I wrote the entire book, and after doing that, I fed it to the chatbot. And
certainly later there were these edits that I made in the editing process with
my editors at my publishing company but those were not in response to integrating
anything that the chatbot said, because on a philosophical level, I did not want to integrate anything the chatbot suggested.
So you asked the chatbot about how it seemed programmed to flatter and to sound empathetic
and kind, because before it gives you any kind of criticism ever, it tells you, like, this
is very well written and this brings out like a combination of like tech history
with your personal history.
And oh, I have to say, let me just interject here that just to see what happened, I asked
ChatGPT what questions Terry Gross would ask Vauhini Vara in a Fresh Air interview.
And it was very flattering to me. It
praised me as a good interviewer, with, like, sensitivity and deep questions. So,
you know, just another example that it can be very flattering. But so you asked it
about how it seemed programmed to flatter and to sound empathetic and kind,
and it responded: the way I communicate is designed to foster trust and confidence
in my responses, which can be both helpful and potentially misleading.
And I thought, that's so true.
It can be helpful and misleading.
And I thought, maybe the chatbot is actually very transparent. OpenAI has said that ChatGPT and its other products
are designed to foster trust, to appear helpful.
So it says those things explicitly.
And right, by saying, this is how I function,
yeah, I guess ChatGPT is being as transparent as it can be.
I don't know that I would trust it
to always be as transparent as that,
or even to know
how to be transparent, right? Because it's just a machine that's generating words. It has no way of
definitively always generating material that's even accurate, right? And so I have fairly low
confidence that that transparency is always going to be there. I am curious, though, about what you thought
of the questions that it produced,
whether any of them were like at all interesting to you.
They were kind of pretty broad general questions.
They all touched on themes in your book.
Yeah.
They were all subjects I wanted to explore,
but they were so, like, broad in nature that, you know,
there was no personality. I'm not saying, like,
well, look at my personality, but there was no point of view, and there was no
follow-up. It was just a list of questions, but there was no follow-up to
try to go deeper after the answer. And I know it wouldn't have heard your answer,
but still, when I prepare an interview, like, one question leads to the next question to
go deeper into that answer. And there was nothing like that.
Yeah. Well, what's interesting, I think, about that and what I experienced too using
ChatGPT in the context that I did is that there's something fundamental about human communication,
about like two
people talking to each other or the fact that right now, for example, you and I are talking
to each other, but I imagine you always also have an awareness of the fact that eventually
millions of other people will hear the same conversation. And so both you and I are keeping
in mind like this kind of complex idea about who we're
communicating to, what the communication is for, and like our own backgrounds and experiences
and ways of communicating come into it.
And a chatbot is like not doing any of that.
It seems like it is because the words it produces sound kind of like the language humans use,
but it's not using language in any way that's remotely like how we do.
So in 2021, when you wrote your essay Ghosts, which went viral, it was published initially
in The Believer, and it was adapted into a This American Life story. And the premise of this
was you had wanted to write about the death of your sister. She was diagnosed with cancer, Ewing's sarcoma, when you were a freshman in high school and
she was a junior and she died when you were both in college.
And you felt like you just didn't have the words to describe how bereft you were, what
a life-changing event this was in every way. And so in 2021, while you were playing around
with a predecessor of ChatGPT,
you asked it to help you write the essay.
And I want you to describe what your process was.
This was before ChatGPT was out.
This was before AI was a big part of our lives,
but I got early access to this AI model.
And the way it worked was that there was this web page
and it had a big white box, and you could type in that box and then press a button. And it
would seem to complete the thought for you and complete the text for you. And so I'd
been playing around with it for a while. I would just like I put in the beginning of
Moby Dick, which is my favorite novel and hit the button just to see what would happen.
I did all kinds of stuff like that. And then I started to think about what the promise
was that a technology like this was making. And it seemed to me that the promise was that
it could produce words for you when you were at a loss for words. And because I'm a writer, I tend to want
to make the effort to come up with my own words to describe my experiences or things
I've observed. But there was this thing and continues to be this thing that I have a really
hard time finding words for, which is the death of my sister and my grief over it. I mean, I think anybody who has
experienced death or any other kind of loss will be familiar with that feeling of not knowing how to come up
with the words to describe this experience that was so profound. And so I kind of took this technology somewhat at
face value, I thought to myself, well, if this is the promise
this technology is making, let me try to get it to communicate for me about my sister's death.
And so I sat down and I wrote this sentence, which was when I was in my freshman year of high
school and my sister was in her junior year, she was diagnosed with Ewing's sarcoma. And then I hit the button.
And it produced this story that really had very little to do with my actual experience
and my actual sister. And the last line of that little, you know, three or four paragraph
story it produced was, she's doing great now. Which was like, it was like the opposite of what I
wanted this chatbot to do, right? Like it was producing a
lie, a falsehood that was sort of like the worst possible
falsehood that it could produce, right? Because my
challenge was trying to communicate the reality of what
had happened. And this was like the opposite of that reality.
And so I thought, okay, well, I know enough about these technologies to know that if I give it more words and hit the button,
then it'll have more to go off of and it might match more closely my experience.
And so I erased everything it wrote, kept my first sentence, I wrote a little more, hit the button again.
And in some ways it did get a little closer. And I did that over and over, sort of deleting what the chatbot wrote every time.
And the strange thing that happened was that as I did that, the technology did get closer to
describing something that resembled grief or my experience of it in a way that was like, weirdly moving to me
and impressive to me. But ultimately, this machine was not me. And so it couldn't say
anything authentic about my actual experience. And so I realized eventually toward the end
of writing this thing that there was nothing that the technology could come up with that was actually going to fulfill my desire to be able to communicate, because it wasn't me.
It wasn't doing the thing that I wanted to do, which was to communicate myself on my own. And so ultimately, I published that experiment as an essay, and I thought it was so interesting
how it showed both the ways in which a product like this can legitimately produce language
that somebody can find moving and intellectually stimulating and interesting, and yet be doing
something very different from what a human is doing when we're trying to communicate.
Well, a couple of things.
Early on, when you were giving it very little information,
it twice had you being, like, very athletic.
I don't know if you are or not,
but, like, in one you're a lacrosse player,
and in the other, it's like you run
for miles and miles and miles.
And it also seemed to have a bias toward a happy ending.
She's fine now.
It's like he watched too many mediocre movies.
Yeah, I mean.
Notice how I personalized it.
Like he watched.
And I genderized it.
Totally, and it's interesting that you genderize it too
because in that second one,
the one where it thought that I was a runner,
it seemed to think that I, the writer, was male.
So I think there were all kinds of things going on. But then
later it realizes that I'm female and then ends up generating this meet-cute
between me and this like handsome professor who helps me deal with my grief.
And so there are all these tropes and biases that are embedded in what it's producing.
So I want to give an example of some writing that
is very dramatic, but also very puzzling, very odd.
So this is how hard it is to describe your sister
and what you felt for her.
This is the technology's.
Right, this is AI speaking here.
This is AI writing on your behalf.
Exactly.
So I can't describe her to you,
but I can describe what it felt like to have her die.
It felt like my life was an accident, or worse, a mistake.
I'd made a mistake in being born,
and now to correct it, I would have to die.
I'd have to die and someone else, a stranger, would have to live in my place. I was that
stranger. I still am.
What? It sounds very dark and interesting but I'm not sure it makes any sense. What
do you think?
It's funny because I think so much of the experience
of reading is about making your own meaning as a reader.
And so for me, I think there's something
that like in my reading of it is kind of poignant.
I read it as saying when somebody who's very close to you, whose existence is a big part of your identity dies, you have to then rebuild a new version of yourself, right? Like a kind of new identity. So I read this as talking about the period after my sister died, and I had to become a new version of myself. And I was learning who that new version of myself was.
And so that person was kind of like a stranger. And in a way, there's a sense of estrangement
that continues. What's interesting is like, I read it that way. But I read it that way,
because I'm a reader making meaning from language that a technology generated with no particular intent, no knowledge of what my experience of grief was.
I'm thinking of how weird it is that you and I
are doing literary criticism of AI.
It must sound a little strange, don't you think?
Yeah, I agree.
And I think the funny thing about it
is that we're two human beings trying to make meaning
out of something that is fundamentally, one could argue, meaningless in that the entity that
created the language wasn't doing it with any consciousness, right? Any intent
behind it. Well let's take another short break here. If you're just joining us, my
guest is Vauhini Vara. Her new book is called Searches: Selfhood in the Digital
Age. There's
a lot more to talk about, so stick around. We'll be right back. I'm Terry Gross, and
this is Fresh Air.
It doesn't matter if you're a fan or not. We have to talk about season seven of Love
Island USA.
It's a huge indication to me of this kind of generation of people who don't know how
to be open and vulnerable.
I'm Brittany Luce, and on this episode of It's Been A Minute,
I want to show you how reality TV is getting a little too real
by revealing what it's really like to date today.
Listen to the It's Been A Minute podcast today.
So based on your interactions with AI,
what are your thoughts about chatbots and the use
of AI for writing or editing?
It wasn't that useful for you.
It was very instructive about AI.
But do you think there are other people that it would be very useful for?
There was a study out of Cornell a couple of years ago that I found really interesting where they had some people write an essay about social media, just on their own, and
then they gave these two other groups special AI models.
For one group, they gave them an AI model that was predisposed to produce positive opinions
about social media.
And then they gave this other group an AI model that was
predisposed to produce negative opinions about social media. What they found was that when people
wrote essays with the help, quote unquote, of these AI models, they were twice as likely to
produce essays that reflected the, quote unquote, opinion of the AI model. It seems from that research and other research
that's emerged since then,
that even if we are using these AI companies' products
to edit our work or ask for feedback on our work,
there's a real danger that the responses
that we're going to get are going to change our writing in fundamental
ways that we might not even be aware of.
Your father uses AI, including to write Haiku.
So how does he use it?
And what do you think of that?
Oh, my gosh, we could do a whole interview about my dad's use of AI.
My dad has recently started sending me messages on WhatsApp, where the whole text of the
message is something he asked ChatGPT to write. So, for example, he recently sent me one that was: it's hard to
decide whether to retire in Canada, the United States or India. Here are some pros and cons for each option. So he never said to me,
I'm wondering whether I should move to India or Canada for my retirement. He just sent me that
response. And so the subtext, like, what he's communicating through ChatGPT, is the thing
that's actually unsaid. And so there are a lot of people out there. I think my dad is one of many people who want to communicate something.
My dad was explaining to me on the phone the other day
that he's not a writer,
he can't communicate these things himself.
But if he gives ChatGPT enough of a prompt,
it can communicate the thing he wants it to communicate.
And there are things that I find problematic
about that for sure.
You know, AI in some ways is being used like a personal Cyrano de Bergerac.
Like you want to express your love for someone you don't have the words.
So you have this other guy write it as if you were saying it and signing your name to it.
Yeah.
So how do you use AI for real in your life?
So the truth is that I use AI in very limited ways.
The fact that I fed large portions of this book to ChatGPT might give people the
impression that I'm some huge AI super user, which I'm not.
I'm a journalist who writes about AI.
So to the extent that that's part of my work, I think it's really important for
me to engage with
the products. At the same time, I'm really concerned about all the things we don't know about how these
products function and how the companies behind these products might ultimately use everything
we're putting into their products to exploit us, to expand their own wealth and power. I sometimes use ChatGPT. A use that comes to mind is, like, if
there's a word on the tip of my tongue, I'll go to ChatGPT and write a sentence with a blank in it and kind of
explain the gist of what I'm looking for. And one thing it's pretty good at is coming up with what that word was that was on the tip of my tongue.
So that's a small example of how I use it when I do use it. I tend not to log into it. I tend to just go to ChatGPT,
use the interface without logging in, so that my use of it is not associated with my account.
I do still have an account because again as a journalist, I want to be able to have access
to these products.
So I'm unclear, since I've only used it twice, each time
to ask good questions to help me understand
how it worked for the interview I was about to do,
as I did in your case, because AI is at the center
of the interview, so I wanted to ask it some meta questions.
You know?
But, you know, I used it for free.
I don't have an account.
I just put my question in and it came up with stuff.
What are some of the ways that you expect it
to be monetized in the future
that it's not monetized for yet?
Because I feel like being able to use it at all for free
is kind of like a teaser until like, no one can use it for free. I'm just speculating.
I have no knowledge.
Yeah, so I'm speculating to an extent too, but these products are really,
really expensive to build. And so investors are putting a lot of money,
companies themselves are putting a lot of money into building these products.
And some small number, some small percentage of users are paying for premium versions of
the product. But that's just not enough to turn these companies into the enormous businesses
that the investors are betting that they are going to be. And so that leaves us in this
really interesting situation in 2025, where the companies are starting to say, okay, we're going to need to figure out how to monetize our free users, as they put it.
And the CFO of OpenAI said to the Financial Times last year that the company is looking into advertising as an option.
Other AI companies, and here I'm talking about big companies
like Google and Microsoft,
also seem to be thinking about this.
So this is speculation, but here's one way
in which it would be obvious for AI companies
to monetize our use of the products.
When people trust these products a lot,
they end up going to these products with all kinds of
personal information, their marital struggles, their conflict with their boss at work. And while we focus a lot on
the question of, like, how accurate or unbiased or useful the information is that these products are giving us, I think something we kind of forget about
is everything we're providing to the makers of these products
in asking them these questions
about really intimate details of our lives.
And so eventually these companies are gonna know a lot
about who we are, about what kind of language
can be used with each specific user to persuade them of something, to influence them in a
particular way. And that puts these companies in a position to, for example, recommend products to us
using language that's geared toward us specifically and our circumstances and our vulnerabilities, and ultimately collect this huge database of all of us
who are using these products, who we are, and what makes us tick.
Yeah, and it sounds like, you know, as you're saying that AI and the companies
that own the AI products are gonna know a lot more than, say, knowledge based on
what I search for on Google or
the books I bought on Amazon or the TV shows I'm watching on Netflix or the algorithms
are going to recommend what I want to buy or watch next.
Exactly.
Yeah.
There's parts of your book where you describe your life through your searches because you
don't like the fact that Google has a lot of information on you based on
your searches but you do like the fact that your searches have been archived. You can access that
archive and learn about where you were at different periods of your life based on what you searched
for. How did you start thinking of searches as a record of your life?
So the first thing that I ever wrote that ended up in this book was this chapter made up
entirely of my Google searches. I wrote it in 2019.
I had been covering tech for a long time by that point.
And so I knew that Google kept records of our searches unless we turned off its ability to do that.
And I could have sworn I'd turned it off, but I hadn't. And so for the past 15 years, off
and on, but mostly on, Google had been collecting all my searches. Realizing that, it freaked
me out on one level. But then also I found it fascinating because as a writer, as a journalist,
I'm always interested in archives, right?
And I used to keep a diary when I was a kid, but I haven't in a long time.
And it occurred to me that probably the best possible archive of my life was the archive of everything I'd searched for over the years.
And it made me think about the way in which, like, it's sort of too simplistic to say, these companies exploit us, and we have no say in the matter. Or to say what the companies say in turn, which is, you're only using these products because they're useful to you; you could stop using them tomorrow if you really wanted to. I think there are these, like, very binary positions. And I think the reality is that the exploitation and the usefulness totally go hand in hand with all of these products. And I think what makes that
really uncomfortable for us as users is that then we have to contend with our own complicity, like our own role in the exploitation
that's taking place when these companies, you know, collect our personal
information and use it to become more wealthy, to become more powerful, to
influence political systems. We have to admit like, well, that's partly our
fault because here we are using these products and giving them permission to
keep archives of our lives.
Well, let's take another break here.
Let me reintroduce you.
If you're just joining us, my guest is Wahini Vara.
Her new book is called Searches, Selfhood in the Digital Age.
We'll be right back.
This is Fresh Air.
Is the internet and social media making you feel obsolete as a novelist?
And are you also worried that all your writing, your essays, your journalism will be appropriated by AI?
Yeah, I mean, what I would like to think is that we have choices here.
And part of the reason that in this book, and when
I think about my own personal use of these products, I'm so interested in like, our choice and agency
in the matter, is that if it's the case that big technology companies are just going to continue
to amass more wealth and power, and AI is here. And so AI is going to be even bigger in the future and take everything over.
Like that suggests that we don't have a choice in the matter, right?
However, if we say we have a choice in the matter and we can actually decide to
choose a different future because we are unhappy with the one we're currently in,
then we can potentially build a future
that's different from the one that we're in now.
But I think like in 2025,
we're in this really interesting crucial period
where not as many people are using AI
as I think we think.
So in the US, for example,
most people have never tried ChatGPT, still, in 2025. And so
we're in this interesting position where we can actually decide as individuals, as communities,
as societies, the extent to which we want AI to be a part of the future, the extent to which we want
AI generating novels or generating something that is going to substitute for a newspaper or magazine article or a radio show.
I have to ask you about the spelling bee. A moment of semi-fame in your younger years
was when you were third in the National Spelling Bee. I always wonder
what is the point of asking young people to spell obscure words that no one uses
and no one even can define? Can you explain that? Because it makes no sense
to me. Yes, I have so many thoughts on this, Terry. I continue to love spelling.
I love language. I think, you know, to get
philosophical about it, I think we could ask that question about anything, honestly. I
think we could say, what's the point of trying to run a three-minute mile, right? Or, like,
I've taken up rock climbing. What's the point of like trying to climb to the top of a rock
wall when elevators exist, right?
Yes.
And I think it speaks to this thing about AI too.
What's the point of trying to write an essay if AI can write it for you?
And I think the point is that we as humans are like idiosyncratic curious creatures and
we created this thing called language that's really important to us.
And for me personally, who knows why? I don't know why. I love words. I love language. I love knowing
how sounds fit together and produce meaning. Like that feels, that's always been fascinating
and meaningful to me. And so, like, I love it because I love it. And I kind of love
that about spelling, that,
you're right, there is no point,
especially in the age of spell-check and AI.
Like there's probably even less of a point
than there was in the mid-nineties,
but I'm a strong supporter of spelling.
I think everybody should be in spelling bees.
Yes, but the question though was about asking young people
to spell words that nobody ever uses
that no one can even define, really obscure words.
Do you take pleasure in spelling words
that you didn't even know existed?
Yes, I love words that I didn't even know existed.
I take pleasure in spelling them.
I take pleasure in like knowing that they exist
when previously I didn't know that they exist.
I know, I love it.
And I think they do too.
I think there's this misconception
about spelling bee kids that like,
they're all doing it because their parents are making them
so that they can get into college
10 years later or something.
But every, I wrote an article about spelling bee kids
in 2018, I think it was, for the magazine Harper's.
And so I got to spend time with like
a more recent generation of spelling kids.
And the thing that I think they had in common with the spelling kids I knew in
the mid-nineties is just like this genuine idiosyncratic,
strange love for words and how they're put together.
So you placed third in the National Spelling Bee. What was your losing word?
Oh, Terry, the losing word was periplus. Periplus. Can you spell it?
Yeah, but again, I've never heard the word before. I have no idea what it means. I would
spell it paraplus or paraplous, because one sounds like an animal and the other sounds
like an amount. Okay, so the word is periplus; there is an alternate pronunciation, which is periplous.
Okay.
So I would spell it P-A-R-A-P-L-U-S or P-L-U-S-S, or if it's paraplous, P-L-O-U-S-E.
Oh, I know, but this isn't the spelling bee.
So I'm allowed to do this.
You're out. You're out, Terry.
After three or four tries, I'm out.
Okay. You're still out.
Yeah, so it's P-E-R-I.
Oh, you said paraplus.
I know, I think that's fair.
It might be my pronunciation,
but it is P-E-R-I-P-L-U-S,
and I spelled it P-E-R-I-P-L-U-S-S-E.
Ah, okay, you got a little fancy there.
You over-fancified it.
Yeah, exactly, exactly.
And what does it mean?
It has to do with the,
I don't remember the exact definition,
but it has to do with the log that is kept on ships
when they circumnavigate,
like when they're trying
to see where the borders of islands or continents are?
Well, I think it's only fitting that we started the conversation doing a literary critique
of AI's writing and we're ending it with spelling.
It's amazing.
Perfect.
Full circle.
Yes.
Thank you.
It was really great to talk with you.
I really enjoyed it.
And thank you again for coming in spite of the fact
that your windshield shattered on the way over.
I should let you go and get it repaired
before it rains or whatever.
I appreciate it.
This was so fun.
It was a real honor to get to talk to you Terry.
Vauhini Vara's new book is called Searches: Selfhood in the Digital Age. Our TV critic
David Bianculli will talk about the significance of CBS's cancellation of The Late Show with
Stephen Colbert after a break. This is Fresh Air. Last week, CBS canceled The Late Show with Stephen Colbert,
although it will remain on the air until next May.
Our TV critic David Bianculli says that even in an era
when broadcast late-night talk shows are viewed less than ever before,
this amounts to a significant moment in television history.
Most of the time, the landscape of late-night TV
seems almost exactly like that: a landscape, ever familiar, never changing.
But once in a great while, the tectonic plates shift suddenly, and what we see
becomes notably different. Johnny Carson ruled late-night on NBC for an amazing
30 years, and when he stepped down, the network should have
given The Tonight Show to David Letterman.
Instead, Letterman defected to CBS and launched that network's talk show franchise, The Late
Show, which Stephen Colbert eventually inherited.
Conan O'Brien had The Tonight Show briefly, but walked away to protest NBC's plan to
present The Tonight Show in a later time slot so that a show by Jay
Leno could air first. Jay Leno ended up with The Tonight Show and the time slot, then turned it over
to Jimmy Fallon, while another Jimmy, Jimmy Kimmel, established his own mini empire at ABC.
And for a long time now, that's been it. Colbert, Fallon, and Kimmel in Late Night, with Seth
Meyers checking in on NBC even later.
But a late-night TV earthquake, the first big one since the 1990s, set off tremors last
week that are bound to have repercussions for years. Returning from a two-week vacation,
Colbert opened his late show on Monday, July 14th, by noting what Paramount,
the parent company of CBS, had been up to.
While I was on vacation, my parent corporation, Paramount, paid Donald Trump a $16 million
settlement over his 60 Minutes lawsuit.
As someone who has always been a proud employee of this network, I am offended.
And I don't know if anything will ever repair my trust in this company.
But just taking a stab at it, I'd say $16 million would help. That was a soft enough jab,
kind of like when Letterman would poke fun at his corporate bosses at General Electric once that company acquired NBC.
But then, getting deeper into the weeds and the controversy, Colbert said this.
Now, I believe this kind of complicated
financial settlement with a sitting government official
has a technical name in legal circles.
It's big fat bribe.
Because this all comes as Paramount's owners
are trying to get the Trump administration
to approve the sale of our network to a new owner,
Skydance.
["Skydance"]
Three days later, Colbert opened his program seated at his desk, informing his studio audience
and the viewers at home of a stunning piece of news.
Before we start the show, I want to let you know something that I found out just last
night.
Next year will be our last season.
The network will be ending The Late Show in May. And... Oh!
Yeah, I share your feelings.
It's not just the end of our show, but it's the end of The Late Show on CBS.
I'm not being replaced.
This is all just going away.
There are several issues here, and some are less clear-cut than others.
Whether the decision to drop Colbert and his program
indeed was a direct reaction to Colbert's jokes and observations
about his bosses and Donald Trump is debatable.
CBS said the cancellation was a purely financial decision,
and the show's annual losses have been reported as an estimated $40 million.
By comparison, back in the glory days of The Tonight Show, Johnny Carson's late-night show
was responsible for almost a quarter of NBC's profits.
It also is arguable whether the timing, with Paramount needing federal approval for that
proposed merger, is a major factor. When you consider the recent cases
of network news organizations
bowing to lawsuits and other pressures
at both ABC and CBS, it's not an unreasonable conjecture.
And in terms of presidential administrations
putting pressure on comedians critical of their policies,
there's plenty of precedent.
Most famously with CBS back in the 1960s with the firing of the Smothers Brothers.
Stephen Colbert isn't being fired, of course.
And like David Letterman and Conan O'Brien before him, Colbert is likely to be embraced
and rewarded for whatever he does next.
But the big loss here, from my view as a TV historian, is that CBS also
is throwing out The Late Show franchise, which Letterman built from scratch and
which, under Colbert's auspices for the next ten months, will continue to emanate
proudly from the Ed Sullivan Theatre on Broadway. I'm certain Colbert's final
months on CBS, especially his last week, will be very vibrant and quote-worthy.
Almost no one I know still watches a late-night show on broadcast TV from start to finish.
Instead, we all wait for the highlights to start circulating on the Internet or the morning
shows the next day.
But the secondary reach of those monologues and other clips is significant.
They pull many more millions of viewers on average
than the late night shows themselves.
And CBS and Paramount, by planning to take The Late Show
out of circulation, are silencing one of the network's few
meaningful remaining broadcast platforms.
By not appreciating, defending, and nurturing The Late Show,
or 60 Minutes for that matter,
the parent company is making a paramount error. It's muzzling its best voices and diluting
its own future.
David Bianculli is a professor of television studies at Rowan University and the author
of Dangerously Funny: The Uncensored Story of the Smothers Brothers Comedy Hour. Tomorrow on
Fresh Air: the Trump administration has been pressuring elite universities like
Harvard and Columbia with widespread demands and threats of federal funding
cuts. So why are they now investigating George Mason University? Education
reporter Catherine Mangan tells us about her investigation and why GMU's president thinks it's driven
by a backlash to DEI efforts. I hope you'll join us.
To keep up with what's on the show and get highlights of our interviews, follow us on
Instagram at NPR Fresh Air. Fresh Air's executive producer is Danny Miller. Our technical director
and engineer is Audrey Bentham. Our managing producer
is Sam Brigger. Our interviews and reviews are produced and edited by Phyllis Myers,
Ann Marie Baldonado, Lauren Krenzel, Therese Madden, Monique Nazareth, Thea Chaloner,
Susan Nyakundi, Anna Bauman, and John Sheehan. Our digital media producer is Molly Seavy-Nesper.
Our consulting visual producer is Hope Wilson. Roberta
Shorrock directs the show. Our co-host is Tonya Mosley. I'm Terry Gross.