TED Talks Daily - Why are people starting to sound like ChatGPT? | Adam Aleksic
Episode Date: December 18, 2025. Algorithms and AI don't just show us reality — they warp it in ways that benefit platforms built to exploit people for profit, says etymologist Adam Aleksic. From ChatGPT influencing our word choices to Spotify turning a data cluster into a new musical genre, he reveals how new technology subconsciously shapes our language, trends and sense of identity. "These aren't neutral tools," he says, encouraging us to constantly ask ourselves: How am I being influenced? (After the talk, Aleksic sits down with Elise Hu, host of the TED Talks Daily podcast, to discuss how he became interested in language and its evolution — from writing on leaves, clay and stone to AI models like ChatGPT.) Hosted on Acast. See acast.com/privacy for more information.
Transcript
You're listening to TED Talks Daily,
where we bring you new ideas to spark your curiosity every day.
I'm your host, Elise Hu.
Is AI changing the very way we talk?
Etymologist and content creator Adam Aleksic sounds the alarm
on how AI tools are influencing our behavior,
down to our very word choices.
He encourages us to remember that these emerging tools are not neutral,
and explores how they may be rewiring
the very underlying patterns of our thoughts, and why. Afterward, I sat down with Adam to go beyond
his talk and learn more about what sounding human even means anymore, the tools we'll need to build
as we continue down this rapidly changing path, and more. Stick around after his talk for our
conversation.
How sure are you that you can tell what's real online?
You might think it's easy to spot
an obviously AI-generated image,
and you're probably aware that algorithms are biased in some way.
But all the evidence is suggesting
that we're pretty bad at understanding that on a subconscious level.
Take, for example, the growing perception gap in America.
We keep overestimating how extreme other people's political beliefs are,
and this is only getting worse with social media
because algorithms show us the most extreme picture
of reality. As an etymologist and content creator, I always see controversial messages go more
viral because they generate more engagement than a neutral perspective. But that means we all end
up seeing this more extreme version of reality, and we're clearly starting to confuse that
with actual reality. The same thing is currently happening with AI chatbots, because you probably
assume that ChatGPT is speaking English to you, except it's not speaking English in the same way
that the algorithm's not showing you reality. There are always distortions, depending
on what goes into the model and how it's trained.
Like, we know that ChatGPT says "delve" at way higher rates than usual,
possibly because OpenAI outsourced its training process to workers in Nigeria,
who do actually say delve more frequently.
Over time, though, that little linguistic over-representation
got reinforced into the model even more than in the workers' own dialects.
Now that's affecting everybody's language.
Multiple studies have found that since ChatGPT came out,
people have been saying the word "delve" more
in spontaneous spoken conversation.
Essentially, we're subconsciously
confusing the AI version of language with actual language.
But that means that the real thing is ironically getting closer to the machine version of the thing.
We're in a positive feedback loop with the AI representing reality,
us thinking that's the real reality,
and then regurgitating it so that the AI can be fed more of our data.
You can also see this happening with the algorithm through words like hyperpop,
which wasn't really part of our cultural lexicon
until Spotify noticed an emerging cluster of similar users in their algorithm.
As soon as they identified it and introduced a Hyperpop playlist, however,
the aesthetic was given a direction.
Now people began to debate what did and did not qualify as hyperpop.
The label and the playlist made the phenomenon more real
by giving people something to identify with or against.
And as more people identified with hyperpop,
more musicians also started making hyperpop music.
All the while, the cluster of similar listeners
and the algorithm grew larger and larger,
and Spotify kept pushing it more and more
because these platforms want to amplify cultural trends
to keep you on the app.
But that means we also lose the distinction
between a real trend and an artificially inflated trend.
And yet, this is how all fads now enter the mainstream.
We start with a latent cultural desire,
like maybe some people are interested in matcha or Labubu or Dubai chocolate.
The algorithm identifies this desire
and pushes it to similar users,
making the phenomenon more of a thing.
But again, just like how ChatGPT misrepresented the word "delve,"
the algorithm is probably misrepresenting reality.
Now more businesses are making Labubu content
because they think that's the desire.
More influencers are also making Labubu trends
because we have to tap into trends to go viral.
And yet, the algorithm is only showing you
the visually provocative items that work in the video format.
TikTok has a limited idea of who you are as a user,
and there's no way that matches up
with your complex desires as a human being.
So we have a biased input.
And that's assuming that social media
is trying to faithfully represent reality, which it isn't.
Instead, it's only trying to do what's going to make money for them.
It's in Spotify's interest
to have you listening to hyperpop
and it's in TikTok's interest
to have you looking at the Labubus
because that's commodifiable.
So once again, we have this difference
between reality
and the representation of reality,
where they're actually constantly influencing one another.
But it's incredibly dangerous
to ignore that distinction
because this goes beyond our language
and our consumptive behaviors.
This affects the world we see as possible.
Evidence suggests that ChatGPT
is more conservative when speaking the Farsi language,
likely because the limited training texts in Iran
reflect the more conservative political climate in the region.
Does that mean that an Iranian chat GPT user
will think more conservative thoughts?
We know that Elon Musk regularly makes changes
to his chatbot Grok when he doesn't like how it's responding,
and that he uses his platform X to artificially amplify his tweets.
Does that mean that the millions of Grok and X users
are subconsciously being trained to align with Musk's ideology?
We need to constantly remember that these aren't neutral tools.
Everything that ends up in your social media feed
or in your chatbot responses is actually filtered through many layers of what's good for the platform,
what makes money, and what conforms to the platform's incorrect idea about who you are.
When we ignore this, we view reality through a constant survivorship bias,
which affects our understanding of the world.
After all, if you're talking more like ChatGPT, you're probably thinking more like ChatGPT as well.
Or TikTok or Spotify, but you can fight this if you constantly ask yourself, why?
Why am I seeing this?
Why am I saying this?
Why am I thinking this?
And why is the platform rewarding this?
If you don't ask yourself these questions,
their version of reality is going to become your version of reality.
So stay real.
Don't go away just yet.
Stick around.
My conversation with Adam is coming up right after a short break with a word from our sponsors.
Congratulations on your talk. How are you feeling now that you're done?
Feeling good. Tell us what made you so curious about language in the first place and how you got hooked?
Yeah, wow. Etymology in particular, I always like to tell people. It comes from the Greek word etymos, meaning truth. So you look at etymology and you're actually studying truth. You're studying how humans understand the world, how we relate that to other people. In sophomore year of high school, I read this etymology book. I got super into it.
And then I just started like really studying it more for myself.
I started a little website in high school.
I studied linguistics in college.
And then I was graduating with the linguistics degree.
And I was like, well, what do I do now?
So I started making linguistics content and then actively sort of studying the language of the social media space as I was in it.
And then wrote a book, Algospeak, on how social media is changing language.
And then that ended up getting me the TED Talk.
Is there a problem with social media changing language?
Because our language has always evolved.
This is a living and dynamic thing, right?
English or any other language in the world.
So is there anything wrong with it?
No, and our language has always evolved around the constraints of a medium, right?
Before we had written history, we would rely on oral tradition.
We would tell stories through rhyme and meter.
And then we started writing things down.
The places that used leaves to write things down developed curly scripts because that was better for the leaves.
And the places that used clay and stone to write things down developed rigid scripts.
So again, the medium is literally shaping language.
We have chapter books. We have the internet. The internet allows for this written replication of
informal speech. It's, again, kind of a paradigm shift in how we speak. And I think algorithms are that
new paradigm shift. AI is a new paradigm shift. We're in this like really fast-paced moment where
our language is rerouting around these new mediums we're interacting with. Yeah, you mentioned
fast-paced. As an etymologist, how are you managing the speed at which things are changing?
Yeah, well, it's really good for me that I am kind of studying things in the open.
I make videos about phenomena I'm observing.
And then I get, like, tagged in videos where people are using new words.
It's sort of crazy in that sense.
But I also have to be in it.
I scroll TikTok for research.
But you have to be in the milieu to really know what you're studying.
Everything's contextual.
There's an aesthetic that words are evolving through.
There are communities that words are evolving from.
You have to understand at least, like, a little bit of,
about internet culture to know about these communities because there's so much depth to them.
Right. And it strikes me that everything is changing, like we're talking just within a matter
of days, a trend can emerge and then be gone and then be considered passe or old,
which is at a much faster cadence and speed than, say, academia, where language was traditionally
researched and linguistics was studied. What do you feel like are some trends this year in
language that have come up and have really hit the zeitgeist that you have to explain the most.
Yeah. Wow. Well, we're definitely on the tail end of "6-7." I feel like most people understand
by now that that's this nonsensical interjection coming from meme communities, originally parody and
clip farming. But there's been so much going on. Yesterday, I made a video about "lowkuinely,"
and I guarantee you by the time that this podcast airs, that word will already be passé.
But it's like a combination of "low-key" and "genuinely." And it's like true in a muted way.
Yeah. Okay. It's not going to stick around. It's like a meme word. But that's exactly kind of
illustrating how quickly these words come and go. I really doubt that'll be around beyond like a
month. Okay. I'm trying to go through what I'm hearing in my house that sounds like nonsense
because I have a 13-year-old and a 10-year-old and an 8-year-old. No matter how much you think
you know, they know more. So I have friends who are middle school teachers. Sometimes
they let me sit in on their middle school classrooms, and that's where you really learn the culture.
And then, like, Gen Z, older people, parody their language, and then it becomes brain rot.
But it starts with Gen Alpha.
Yeah.
I do like being called chat, though, instead of mom.
Chat is funny.
And that's definitely a phenomenon that's gotten way more popular this year.
I definitely started seeing that around 2023, sort of as like a general vocative, what do we think, chat, you know?
And, yeah, it sort of reflects the rise of streaming culture.
And I've seen a lot of words come out of Twitch spaces. Like, back when rizz was popular,
that came from Twitch.
Oh, okay.
So when I'm referred to as chat, it's from like a live streamer typically saying like, hey,
don't forget to subscribe.
Yeah.
It's addressing an unknown audience.
You know that there is an audience.
You don't know who's in the audience.
Chat is a catchall.
There's a sort of collective unity to it.
And then there's also a sort of, yeah, the strange dynamic of digital surveillance where
we really don't know who's going to see a message, where it's going
to be distributed, even on the surface level. So right now, this will go somewhere. But then what
if it goes viral? Then it will go in directions you don't know. That's also what 6-7 was
parodying at the core of it, because it comes from this joke that you could go viral by saying
6-7. So people said 6-7 to go viral. It was a little self-referential, a nod to itself.
And then, yeah, it drifts from NBA players who are saying this to go viral to Gen Alpha kids who are
saying this to go viral. And then it goes off-camera. And now the implied joke is still this
possibility that a camera is watching you. And I think that's maybe a defining trend that I keep
seeing that we're kind of aware of this constant surveillance or panopticon, and we're ironically
performing for the algorithm when we say 6-7. And the early iteration of the joke now,
of course, is just layered into abstraction. Yeah, I mean, there's the panopticon element
of it, but it also strikes me as fascinating that so many young people today, when you ask them
what they want to be when they grow up, say a YouTuber, right? Or to go viral or to be an influencer.
So now our life aspirations aren't a particular virtue, but instead to be seen.
I think it's like 50% or something. It's not an astronaut anymore. Right. We want to be seen. And there's more seeing going on. I kind of worry about the amount of seeing that we're doing. I was in Washington Square Park a few months back and I saw somebody with like the Meta glasses trying to make rizz content. And he was talking up girls. But it's not a real flirtation. He's performing. He's, uh, he's clip
farming, you know? I see like politicians saying stuff that they know will algorithmically go viral
later, but in the present moment, they sacrifice like a moment of decorum. I, you know, I do worry
about what the notion that we could all be perceived is doing to us. So on one end, you could act
out more because you want to be perceived. On the other end, it makes you more docile because
you're worried about being perceived. There are bad effects for society in both ways. And so we can
use words like six, seven, or chat to point to what's happening in culture.
And then from there, we get into subjective territory, right?
But I do want to say that the words that people are using are merely a way to categorize reality.
In that sense, they are just a tool.
A tool can be used for good or bad.
You can draw your own conclusions about culture.
In your talk, you focus on how LLMs, large language models, and chatbot AIs are affecting speech, affecting the way we talk.
How do you think it's going to change our language practices in the future?
Yeah, mostly stuff is happening ambiently right now.
So first we observed an increase in the word delve because ChatGPT overrepresents
the word delve.
And, you know, maybe you heard of that.
Maybe you heard that ChatGPT says the word delve.
Maybe you heard about like sentence structures.
Like it's not just X, it's Y.
And you're trying to avoid that.
It's still going to affect you.
There's so many other words that it uses at a slightly higher rate like surpass or boast or
garner.
I don't know.
I don't know why I'm saying.
But you see a word being used around you and you use it more.
That's how we adopt language.
And ChatGPT and these other LLMs represent language as like a series of numerical kind of coordinates.
And these representations are close, perhaps, to how we actually feel about words.
But they homogenize it and they get a little bit wrong because representation can never be reality.
So they mess things up.
And now we have new studies coming out showing that people, yes, in spontaneous spoken conversations,
we're using the words surpass and boast more, simply because we see them more.
We absorb it. That's how I think AI bots are going to be affecting our language. I'm more,
I think, conscious of algorithms. Ideas really travel. You can visualize it like a virus,
infecting a population. It starts with a host. It goes to some early nodes in the network of social
contagion, and then it diffuses further. Algorithms represent those social networks. They do literally
accelerate ideas. So if we want to think about how ChatGPT is influencing language, you don't even
have to be using AI to be affected by these words because they're showing up all around
us. I'm thinking about, oh, now we might be looking at more content on social media using the
word delve or something. 14% of all research papers are now written with AI. We have like
parliamentary speeches being written with AI. You're going to see it more, no matter how immune
you think you are, and then you're going to start saying it more. Yeah, it's this loop, right? It's
this unending fun house mirror or feedback loop of our language. We feed it. It feeds us back to us
from like an aggregated data set.
Right.
So what are we losing?
Yeah, if our language practices are sort of undifferentiated,
we're losing the individual quirks or flare that can be in language or slang or intricacies of dialects.
Are we losing connection to each other?
Are we losing connection to a certain culture?
What's the cost of this?
Yeah, with language, again, this is a way to reflect our reality.
And our reality is not purely this algorithmic AI reality.
You will always have a different dialect with your culture, your
close friends, with your family. You will speak, you will code-switch between your regional
dialect and then this homogenized AI dialect, whatever, that we're all talking in. But yeah,
you'll, you'll always find different ways to communicate given the context. It's not a categorical
homogenization of language. But in the domain of public speech, I think we are kind of traveling
towards a norm. There's also a really good book by Kyle Chaker called Filterworld about how
algorithms sort of dilute culture down. I think that's very much happening. And so that will be
happening with language. We have a language dying out every two weeks. There's only 7,000 in the
world. Every two weeks, one goes. And this was happening before, I think. The internet perhaps
accelerated it, but globalization already kickstarted it by nationalizing and centralizing our
languages. We were on this path since like the 1850s. But it's definitely happening even more
with algorithms, which are more of this force for homogeneity because there's an expectation.
that users have of like, oh, I want you to be speaking in American English or in British English
or whatever. So that's like one effect that's certainly happening. I do not think it's going to
be happening in every sphere of your life, but it's going to be influencing you. And that's
really sad because with some of these dying languages are such incredible perspectives for looking
at the world. There are like these different frames. I was just reading this book Braiding Sweetgrass.
I highly recommend it. There's this Potawatomi word for "to be a Saturday." So like we don't have the
verb idea of being a Saturday, but I really like that that is a frame you can look at Saturdays
through. Different languages have different understandings of time and direction. And the more you condense
down into this like sort of Western centric view we got going on, you lose the color and the
beauty of all these different ways we could look at the world. Yeah, and sort of the richness
and the diversity and the dynamism of being human, right? So I guess my next question is,
what do we do now? You know, in your talk, you say that we are now subconsciously confusing
the AI version of language with actual language. And then that means, as we've been talking
about, the real thing is getting closer to the machine version of the thing. What do we do
about this? It seems like we have a collective action problem. You can't avoid it. You simply
cannot avoid it. Now, I do think by being conscious about this, the more aware we are of what these
platforms are doing to us, the more resistance we have. This is a virus. You are able to form your
own kind of antibodies through media literacy. And it goes so much beyond language. I think language is
the canary in the coal mine, that sort of proxy for greater cultural shifts that we can pay attention
to because it tells us what's going on in society in that telling truth kind of way. But I'm worried
about political shifts. I'm worried about social shifts. ChatGPT has different political leanings in each
language because it represents the values of those countries differently. That's really concerning
to me that there's a direction that we are being trained to think. When you interact with a platform
like X, you've got to know that Elon Musk is artificially amplifying his tweets so he can go
more viral. When you interact with a chatbot like Grok, we also know that Elon Musk changes
what the chatbot says so that we align with his picture of reality. We need to know that
so that we can maintain our reality.
Who do you think is responsible for not only continuing to have these conversations,
but helping make sure that the next generations coming up are media literate and hip to what's
happening here?
Yeah.
And I also want to caveat.
I do not think the responsibility should merely be on the consumer of media.
I'm simply pushing that right now because I think that's the best thing we can do in this
current cultural moment, where it doesn't seem like we have any power against the platforms.
The moment when we can start regulating these platforms seriously, we should be doing that.
But in the meantime, in our personal lives, the way that I have been navigating social media
is trying to build that radical media literacy for myself.
It's my dream that one day in 10th grade ELA class, along with poetry, scansion,
you have a unit for how to look at TikTok.
And I know that sounds ridiculous, but it's not a joke.
In the same way you should think about the news,
you should think about how The New York Times is not printing some stories because they're filtered
through layers of manufactured consent. We should teach our kids about engagement-optimization
algorithms and how these are working to trigger your reptile brain impulses, and they're
not actually aligned with what you want. So all of these things, I think, should be taught.
This is actually why I haven't joined the whole Wait Until 8th campaign on phones. You know how
there's this big campaign to keep kids off of screens. I actually worry that it's making them
completely illiterate until they just jump in and, you know, jump into the deep end.
I would say it's pretty bad to go from zero to 100 like that. But that's such a delicate question.
Like, clearly it is bad for children to be looking at iPads at age two. I would also really have to navigate that when I become a parent. Sure. Yeah. It's a scary question to be grappling with. But I think like slowly integrating while teaching them lessons about this thing that's being shown, oh, that's an AI-generated video. That is not what things really look like. Or a lesson that I would always want to teach my kid is like, why did this show
up on your For You page? When you get a video, ask yourself that question, right? Think about
what videos are not showing up because there's always a survivorship bias to what's filtered.
The thing that's showing up is generating engagement. It's passed content guidelines for the
platform. It's targeted to the platform's idea of who you are, which is an incorrect idea in the
same way that the word delve is incorrectly represented. And then along with that strategies for how
to remain present and mindful and literate. So I think the thing we least like about scrolling is
that it makes us feel like we just wasted a bunch of time. I've personally sort of found ways to
extract meaning and presence from being on social media, but it takes like turning off the
cognitive frames that they're trying to trick you into, and that's part of it as well.
Okay. Adam Aleksic, we could talk about all this stuff with you for so much longer, but this is
an excellent extrapolation and building upon your talk. Okay, stick with us. I'm going to hit you
with a few rapid-fire general questions unrelated to your talk specifically, but you can tie it back
if you want. All right, let's go. What does a good idea look like to you? Something that is
weird and different from what other people have done, and you have to draw on something that
exists already, I suppose, but remix it in a new way. Yeah, collisions of ideas that have been
previously out there, right? Drawing the connection between the data points that doesn't exist yet, yeah.
Yeah, I like kind of mixing and remixing for creativity.
All right, what is a New Year's resolution or intention or a ritual of yours if you have one?
Okay, ritual. I do have a ritual. So it's become a yearly tradition that my birthday is on January 3rd and the Moby Dick Marathon in New Bedford, Massachusetts is also on January 3rd.
So I'm going with a bunch of friends to read Moby Dick for 25 hours.
What? Everybody just gets together and sits and reads?
It's terrible. It's wonderful. Yeah, it's a really painful experience, but I think it's the only way you can read Moby Dick. And you kind of go mad along with Captain Ahab. So are you quietly reading in a crowd, or are you reading aloud popcorn-style? No, no. There's one person chosen to read and it goes for 25 hours. So I have like a 1:30 a.m. reading slot. My friend John has a 5:30 a.m. reading slot. Yeah, this is the second year. We're definitely making this a yearly thing. Do you know what you're reading, like which part of the book everybody's going to
be at at your 1:30 a.m. reading slot? I'm really hoping for the chapter "Stubb Kills a Whale."
Yes. That's such a good one. But I, uh, yeah. So I don't know. We'll see. We'll see.
Okay. That's so funny. All right. What is a hobby or interest of yours unrelated to your work that you
love so much that you might be able to give a TED talk about it? I don't believe in hobbies. I think
hobbies are strange. I think like hobby is defined as this thing, which is like slightly less
serious than work, but more serious than your other leisure activities. And I want to treat everything in my
life with the equal importance of work and leisure. I like sitting. I like eating. I like listening to
music. But that sounds boring now. Now I sound boring. I do have like activities I do for fun. But I'm
anti-calling things hobby. I like that. All right. That might be a hill you're willing to die on,
because my next question is, what is a hill you'd be willing to die on? For example, I would die on the
hill that, you know, one location's pizza is better than another location's pizza. Do you have
anything that you feel like? I hate the word content. And I know as a linguist I'm not supposed
to hate words. This is from my cultural critic perspective. I think it's so strange that we talk
about making content because the word implies something that is contained, like the contents of a
box or drawer. And now ask yourself, like, where is it being contained? It's being contained in
the medium of social media. So TikTok is like the box and then your content is this thing held in the
box. And that implies, first of all, that it's interchangeable with other pieces of content and
that, you know, the content doesn't have anything special within it. Another reframe potentially
is that your video is the container and your idea or message is the content. But we don't talk
about it on that level. We abstract the level up. And then it becomes this like commodifiable thing where
you can talk about, oh, this is how you make better content. This is how you make content every day. And
then you're like, you lose the plot of what you're trying to do, which is spread good ideas. And
spreading ideas critically also means your idea leaves the platform. And with content,
it's contained. So it's very strange. I try to avoid calling myself a content creator and perhaps
trying to reclaim the word influencer, because I know that's a little negatively coded, but it's what I'm
trying to do. I want to influence people. It's like it'd be disingenuous to not say that. Okay. All right. Linguist,
influencer, author, etymologist, but certainly not merely a creator of content. Adam Aleksic,
thank you so much for sitting down with us. Thank you.
That was Adam Aleksic speaking at TED Next 2025 and in conversation with yours truly, Elise Hu.
If you're curious about TED's curation, find out more at TED.com slash curation guidelines.
And that's it for today.
TED Talks Daily is part of the TED Audio Collective.
This episode was produced by Lucy Little and edited by Alejandra Salazar.
The TED Talks Daily team includes Martha Estefanos, Oliver Friedman,
Brian Green and Tansica Sunkamaneevongse.
Additional support from Emma Taubner and Daniela Balarezo.
I'm Elise Hu.
I'll be back tomorrow with a fresh idea for your feed.
Thanks for listening.
