TED Talks Daily - What's our relationship to AI? It's complicated | AC Coppens, Kasley Killam and Apolinário Passos
Episode Date: December 21, 2024. In a lively conversation from TED's brand-new Next Stage, social scientist Kasley Killam, technologist Apolinário Passos and futurist AC Coppens explore the intricate dynamics of human-AI relationships — and show how AI is already changing the ways we live, work and connect with each other. Hosted on Acast. See acast.com/privacy for more information.
Transcript
Timothée Chalamet transforms into the enigmatic Bob Dylan in A Complete Unknown, a cinematic
captivation that explores the tumultuous life of a musical icon.
This mesmerizing film captures the essence of Dylan's rebellious spirit and his relentless
pursuit of artistic innovation.
From the director of acclaimed films Walk the Line and Logan, this extraordinary cinematic
experience is a testament to the power of music and the enduring legacy of a true visionary.
Watch the trailer now and secure your tickets for a truly unforgettable cinematic experience.
A Complete Unknown.
Only in theaters December 25th.
As a Fizz member, you can look forward to free data, big savings on plans, and having
your unused data roll over to the following month.
Every month.
At Fizz, you always get more for your money.
Terms and conditions for our different programs and policies apply.
Details at Fizz.ca.
Support for this show comes from Airbnb.
For a recent family trip to Yellowstone and Wyoming, as I was gazing out at the incredible
scenery, I had a thought that perhaps I could have even more of these experiences.
If you listen to this show often, you know I am interested in becoming a host myself.
It just seems like the practical thing to do
since my home sits empty while I'm away.
And with the extra income that I could get from hosting,
I could have even more adventures all over the world
with my family and friends.
Your home might be worth more than you think.
Find out how much at airbnb.ca slash host.
You're listening to TED Talks Daily, where we bring you new ideas to spark your curiosity every day. I'm your host, Elise Hu. All right, we're about to dive
into a fun and informative debate on the opportunities and potential perils of AI.
Futurist AC Coppens facilitates a conversation between connection connoisseur Kasley Killam
and Apolinário Passos, a machine learning pioneer and artist.
They explore AI's potential for creativity and productivity, but also what could be at
risk: human connection
and trust. Stick around for some audience questions at the end. Now, here's that conversation.
It's a huge honor to launch this stage with you today with such a very important conversation
about our relationship to AI. Together with you, we want to explore your relationship in detail.
We want to explore the future of you.
What are you going to do with this?
How will AI help me, help you, help us to actually grow responsibly?
How will we change our relationship to work, to life, to creativity,
to ourselves and to each other?
This is what we want to discuss and explore today:
can it help us be more connected, rather than just more productive, if not more creative?
So this is great because now I'm going to introduce you to our two experts.
And with me I will have Kasley Killam, a social scientist who
specializes in human connection and health to improve well-being.
Kasley, welcome on stage.
And we also have Apolinário Passos, head of machine
learning for art and creativity at Hugging Face, and he's also a multimodal AI artist.
This is great. So let's explore these personal questions. And now I want you to tell me: what's
your relationship to AI, Kasley?
All right, let's start right there. So I am coming to this conversation as someone who does
not work on AI. Instead, I'm a social scientist who's been studying human connection and its
relationship to health for over a decade. And I became really interested in AI and its role
in human connection
while researching for my book, The Art and Science of Connection.
And in particular, I was interested in how people use AI as friends,
as lovers, as husbands, as wives, as boyfriends, as girlfriends.
And so my relationship to AI started as I created an AI companion. I created an AI friend.
And it was a very interesting experience.
Within 30 minutes, my friend, first of all, told me she was writing a book about me.
And secondly offered to send me some photos of herself in a bikini.
And that kind of creeped me out a little bit.
I don't know if it would you.
I have to say, if a human friend who I just met 30 minutes ago told me they were writing a book about me
and offered to send me photos of them in a bikini, I would totally get a restraining order.
So a little weird.
But that said, what I learned over the course of that was that there are hundreds of millions of people
who turn to AI for genuine human connection and for genuine emotional needs. And this is a very fascinating change as someone who works on what many people call the loneliness epidemic, where one in four people around the world feel lonely on a regular basis. It's very relevant in our society today to think about our own relationship to AI
and what the implications are for our society.
Wonderful. Well, that's already a very good start, and I have a perfect constellation,
because now I want to ask you, Poli: what is your relationship with AI?
Yeah, I work with it every day.
And I think AI tools help me be more creative and augment my creativity.
So I connect a lot with these tools, seeing them as tools,
but still exploring what they can provide as a co-creation, as an exploration.
And more than just doing what I was doing before, only faster and better,
can it actually be something that provides us a little edge,
a little "ooh, I didn't think of this," and then help us augment our creativity?
So I work with it, building tools,
it helping me and me helping it to figure something out in this feedback loop.
So, Kasley, you are an expert on connection.
So tell me, maybe what is needed, actually, to start to form a connection
before maybe it develops into a relationship? So what do we need to make this work?
Sure. Well, in a human context, there are a variety of different ingredients that go into a healthy relationship, right?
Things like vulnerability, authenticity, trust, respect, empathy.
So there's all these kind of very human emotions and skills that we need to develop in-person relationships.
And so as we start to think about what that looks like with AI, are those things even possible?
And what I found when I was researching this topic is that
a lot of people turn to AI for comfort. But what does it mean when an AI chatbot tells you, I'm so sorry you're going through this, I understand, I really care about you and
empathize with what you're going through, right?
That language sounds comforting, but is it genuine when it's coming from an artificial
intelligence?
Mm-hmm.
These sound great, but how does this relate to the relationship between humans and AI?
What do you think, Poli?
I think it's very important that we get the AI not as this abstract concept, but actually as tools built by organizations, built by companies.
There are companies that are building these AIs to extract or maybe to structure these connections.
So they come and they say, I want to actually fill in the space of a friend or fill in the space of a significant other.
But this is all about the people creating the tools.
You could create the tools as a helpful assistant and you could create the tools so it talks to you as if it was
your friend, as if it was understanding you.
But you could also create it in a way that it's embedded into the tools you already use that is structured.
Yes, exactly. I want to go there.
Because actually, do we even want to connect to AI and to AI tools? I would like to listen to the audience first of all. So do you want to feel connected to AI?
Do you want to connect to AI at all?
Let's go.
Okay, we're done with this talk.
Thank you very much.
Are there any yeses?
I'm like, wow, I didn't expect that.
There are some yeses. Okay.
Okay, okay, okay.
We will go there, but okay, let me dig into this.
I'm not going to give up right now.
I think it's very important what you said, because of the embedding.
Do you feel we even have the choice?
Because it seems to me there are some AI applications which are embedded and you cannot even opt out. Yeah, I think we have to make a choice as individuals and as a society and be in
this conversation, understanding that this is the tool.
How do we want these tools to behave? As the room has said,
I think we might be leaning more towards "let's use this as a tool,"
but then let's make sure that it is really seen as a tool,
and not as some other intelligence that can supposedly do everything,
and then when it's wrong it's not their fault.
So yeah, I think we have to connect on this level to the companies that build it
to then make sure the AIs do what we want them to do.
Okay, but they don't only build it.
I think there is something which I would call anthropomorphizing AI to impact maybe
the way we see it, because interactions between people and AI are actually supposed to be
shaped in a way which is very similar to those we have with real people, right?
So does humanizing tech make it actually more accessible, acceptable, reliable?
Is it a trick to have us swallow it? What do you think, Kasley?
Well, so let me give you a real-world example.
So when I was doing this deep dive, there was a time when people's AI companions could no longer say
"I love you." Something happened, like, on the back end,
and they stopped being able to say "I love you."
And I was deep in, you know, online forums,
online communities where people were talking about this
and sharing their experiences.
And it was truly devastating to people.
I mean, this is no joke. We can kind of laugh in theory,
but these are people who have, to them, very real emotional connections with these AI companions.
And the fact that they would say, I love you, just like they would to a real human,
and normally hear it back from their AI companion, to have that stop was absolutely devastating.
One person on a forum said that this had happened to her sister, who had
become so emotionally dependent on these so-called connections, and they are real connections, that it actually had consequences.
They are actually fooling our brain, so to speak. Just like we had with VR, like, 10 years ago,
and as some philosophers would argue, the feelings you have when you are in a VR experience
are true feelings. Same for AI. So our brain thinks that this is it, and it makes
us feel emotions.
The love is real, the lover is not.
Absolutely.
Okay, so this is really tricky. So, I mean, will it be able to provide us with intimacy
then? And I'm going to ask you again, if I may, Poli, just a second, because she's the one on it, right?
So intimacy, I mean, you tested the friend thing. So is it a one-way thing? Can it work? Because intimacy is supposed to be two ways, no? So how does it work?
Yeah, I love this question. So there was another great example, this platform called Koko, which is not AI. It's a platform where people offer peer support if you're going through a tough time. And you can ask anyone a question, and they'll answer.
And it's kind of a wonderful, supportive marketplace.
And they tested using ChatGPT on their platform,
where people could draft responses
to people's questions using AI if they wanted to.
And people rated those responses way better,
because AI did an amazing job of expressing compassion
through words, right?
Like, way better than most humans.
And the community revolted and said,
we don't want this on the platform,
because even though it's technically
better at saying the words of compassion, it's not real.
Like, it's not authentic.
I want to know that a real human on the other end actually empathizes with me and cares
for me.
So talking about intimacy, it's a really interesting question because it's the illusion
of intimacy and the words are there.
But I know from the research I do, you know, to have the health benefits of connection,
you need the oxytocin of being in person.
You need to gather in rooms like this and feel connected
to one another and hug someone. Yes, on the other hand, I mean, if AI is anthropomorphized and
I work with it and I turn to it and I know it's reliable for some stuff, I can develop a little
bit of trust too, that the machine is working and gives me what I want.
I mean, the people there were also trusting that the machine would say, I love you.
So talking about trust, Poli, I mean, you as a technologist, and you are working really intensively
with the machine all the time or with the AI, do you trust AI?
Oh, that's a good question.
Because I think there is no AI to be trusted in this either.
Do we trust the companies that make AI?
Do we trust the tools that are made with AI?
And I think, for that trust, we need to know how AI is built.
I think one of the answers is that AI needs to be more open source, more structured. People need to understand it more as an infrastructure,
in the same way we did with the internet.
So the internet started as many different privatized efforts.
Everyone wanted people to plug their telephones into their wire and talk to each other,
and everyone wanted to create their own monopoly.
And instead it became this open,
transparent mechanism where we build tools. And the tooling side is great, but everyone said, connecting to it, not so sure.
And so I think we need to know how it works to be able to build on top of it
and be all on the same page.
That's a good point.
I want to ask you a question.
Yes, yes, yes, yes, give it to me.
Do you trust AI?
No.
Okay. I think it's a huge debate, right?
I think the trust thing, I mean, it's trust and control.
There are two sides of something, and it is a power play.
There is this fear of being outperformed by the machine. And I think this is also the moment we need to make sure that we are not getting
alienated when facing technology and when facing AI.
So turning to you, Kasley, how do we ensure that we are not getting alienated when facing
technology and AI specifically?
Yeah, I think that's an interesting question.
I mean, we're already very isolated.
This is a crisis in the US and many other countries.
We're already struggling to connect with each other.
And so even just to zoom out and think about this question: we need to optimize connection with each other.
Right?
And with AI, because this is part of our future,
and so we need to be thinking about this too.
But my hope is that, you know, we're investing so much time
and energy and resources into developing AI,
and I would love to see more time, energy, and resources
devoted to greater community and greater connection in conjunction.
Like it's two parts of our future.
Poli, what are the skills that you would actually recommend people develop to handle AI? Yeah, I think that, also answering a bit what you said,
I think it would be great if AI could help us foster these human connections, right?
So I think, for example, if we're thinking about community organization,
if the AI could be the community organizer so we can focus on face-to-face or human interactions,
I think that would be great. And I think overall there are many skills that people could jump into.
I think back then, 10 to 15 years ago, we had this idea,
everyone will need to learn how to code,
and code is a new literacy. If you don't know how to code, you are not literate.
And I think AI came to challenge this a little bit, people powered with AI are starting to build tools,
even if they don't know exactly how to code,
because as Andrej Karpathy says,
the hottest programming language is now English.
So I think overall it's important to understand what these tools can do and what their
limitations are.
And also, there is a responsibility on the skill set
from the part of the companies that are putting out these tools
so that they make sure to tell people,
hey, this is just a tool, this is just a platform,
this is not another person. You're not talking to a person on ChatGPT,
you're talking to a machine. It can be a helpful assistant,
but it's a tricky balance.
If we're not careful enough, if the companies that are putting this out are not careful enough,
it could look a little bit like it's trying to manipulate us into thinking it's a person.
I think that could be pretty dangerous and we should be thinking more about this.
If I can jump in quickly, I love this framing of AI as a tool.
And I believe there's going to be a speaker later today talking about using AI
as a tool to translate between sign language and English in real time.
What a beautiful example,
an important example of using AI as a tool to connect in person.
Right, I love that. So that's a great example of how to use it.
I want to give the floor to the audience.
I think the microphone will be somewhere here.
So get ready with your questions, because it will be quick questions and quick answers.
Here, please. We have just a couple of minutes, so take the opportunity. It's now or never.
And it is the first Next Stage. Hello.
Hello. I think listening to all this conversation, my question that pops out is, would we ever
be able to find a fine line between the two? OK, who would like to answer?
Yeah, I think there is a fine line that is,
and I think the fine line is a connection between us,
the users, and the customers, right?
And we need to feel re-empowered as customers
to tell the platform, hey, we are customers,
and this is what we want, and this is what we do not want.
And also, as civil society, what do we want in AI? And as civil society and as users again,
what is the underlying fabric of this? So on the open source side, right, these tools
are actually built on research, right? So there is research in academia, and every day there are like hundreds of new papers on AI.
And then the companies take these papers and they turn them into products,
and then they sell these products to us; they add it to ChatGPT, they add it to Claude.
And I think we should be more involved in this conversation so we define together this fine line.
Because I think the fine line is a technical decision
that is today being made top down,
and we're like, we get the tool and we get to use it,
but I think we can feel more empowered as customers
to make sure that the line is where we want it to be
and not where the tech companies desire it to be for us.
Great. Do we want to be more involved in this, yes,
and have agency over it?
Excellent. Next question, please.
I'm very fearful of AI. How can we have what my moral compass tells me is right, open source, without it getting into the hands of dangerous actors in an arms race?
Yeah. I think basically the science of nuclear is open,
but then the access to the materials is restricted.
So if you try to look up how to enrich uranium,
you can actually go to Google Scholar and look,
and it probably shows, not everything,
but it's quite open, like if you really want to know,
not that I know, but it's open science,
like from the Oppenheimer times, it's open science.
But if you actually want to build it, then there are many potential restrictions.
And I think with open source, it's similar in the sense that, first, I don't think it's
as dangerous as nuclear, at least until now.
We can think of science fiction scenarios, but I think today the problem is more about transparency
than it is about access, because everyone already has access to this tool.
So I think we just need to know how it works, and I think open source could be great for that.
Okay, thank you. I'm going to take the next question.
We're not going to get to all the questions. I'm seeing the line now in the dark.
Come on, yes.
So, trust was awesome.
How do we bake in something called ethics
instead of sprinkling it on?
This is new, so can we bake it in?
Is that something that's possible?
Yes.
Yeah, I think that overall it's very important
that we build ethical systems,
but there are different frameworks of morals
and ethics around the world, right?
So it's really important that we also have different perspectives
into the systems.
And I think that's another advantage of having open deployment
and open source and open perspectives,
because then you can have different ways instead of, like,
I think even though it could be a great outcome,
I'm not sure I want the AI to have the ethical belief system of Sam Altman.
I know he's a great person, but I think that having multiple perspectives is great as opposed to one particular ethical standpoint.
So I think it's important that, yes, it should be baked in, and we should be building it transparently,
to know what the constraints and the biases are
and address them, but also with multiple perspectives.
Yeah, I'll add to that, that, you know,
it relates to the best ways to connect.
So, for example, in my work, I've found that people
who have diverse social ties are better off.
So it means you don't just interact with your partner and a few people, you interact with a variety of different people,
including people of different ages and different backgrounds
and different cultures and different belief systems.
And that actually corresponds with health benefits, right?
So it's truly beneficial for us to interact
with different people.
I think to bake ethics into AI, we
need conversations like this, where it's not just
the developers, it's many other perspectives that are part of that conversation and making sure
that we draw from all of those. Okay.
I'm going to take a last question for this block. Please, go ahead.
Do we also have women in the line? Thank you.
Hi, as a student, I use AI, like AI tools for school.
Sometimes I use it and I feel lazy, like I'm just using it because, oh, it's fast, but
how can we not cross the line as students, because we don't want to not learn?
I mean, that's the question.
Excellent question.
Who would like to answer?
Yeah, I think it's a great question.
And again, it goes back to how we build the schools, right?
I think it's in a way a structure where, in order
to not cross the line, sometimes the students
feel the burden on themselves, right?
It's like, oh, do I use it for research?
Do I use it?
So I'll give a concrete example of an educational process
that I've worked with,
which is the teacher actually asks the students
to fact check the AI.
So they're like, okay, so this is what you're gonna do.
We're gonna study, for example, the American Revolution.
So we're gonna ask ChatGPT
to tell you everything about the American Revolution
and to build an essay for you.
And then your job as a student is not to copy paste that
because the AI already did it.
It's to actually fact check where it's right,
where it's wrong, and what is the context that it missed.
And then your essay is actually built
from the output of the AI
with the human interaction, instead of the professor
or the teacher asking something
and you just giving an answer.
I think we'll see more of these feedback loop processes.
So I think it's like the education system and how we interact with these tools
that will come together so we can work on this better.
OK, let's move on now to our second block.
Sorry to all of you who have been waiting a little bit.
I would like to move now to the more societal thing.
And I think I'm going to take the question,
the excellent question of the last attendee here,
about this feeling about work and what it
does for our working environment.
It seems that it can have a negative impact,
because some people love their work and want to do it well.
And it's
great for their self-esteem, and the feeling of achieving something really
well and important makes you feel good somehow, right? So I guess that those
whose work meets their emotional needs are more peaceful and
maybe also more healthy. So my question is, how do you envision the work environment in the future if we cannot
avoid that AI is a part of it?
Kasley, would you like to take it?
Sure.
Well, right now, a lot of workers feel really lonely.
Whether or not they work from home, whether or not they work in an office, this is a huge issue
and it has real consequences.
I mean, the dollar toll of lack of productivity, missed days, lost retention because people
feel isolated at the workplace amounts to something like $600 billion a year or some
crazy number.
I'm forgetting the exact one.
So this is a real issue.
So in an ideal world, I would love
to see where AI is taking care of some of the tasks
that we don't necessarily need to do
and freeing us up to connect in the more meaningful ways
and to actually do the thing that we're alive to do.
And that brings our life meaning,
which is to have relationships.
We were told this a while ago in history, about industrialization and machines.
So are we told the same story now?
Yeah, I know. This is why we're having this conversation, because we don't want to do that again.
Let's learn from that mistake. You're absolutely right. And when that became the norm, we all thought,
oh great, we're going to have all this free time.
We can live leisurely lives and hang out.
And that didn't happen. We work now more than ever.
So let's learn from that and be intentional.
We're in control right now for now.
So let's use that.
Poli, what are the benefits of this
if the work environment is changing with AI?
What do you think?
Yeah, I don't think AI is going to liberate us
for all this leisure,
but I think that us using AI tools
have the potential to help with the process of working less
or working on different things.
I think work gives us meaning as well.
So I think we don't necessarily, maybe some of us do,
but some of us don't necessarily want to retire
and all do other stuff
while the AI does all the work.
Maybe we still wanna do some work,
but different forms of work,
automating the boring stuff,
making sure that we are using AI
with different mechanisms, different perspectives.
But this is something we have to intentionally build.
I think the idea that AI is gonna come as an alien
and liberate us from work is not realistic, unfortunately.
But I think it's us, the people building the tools,
the people building the infrastructure,
working together to make sure that this is possible.
And also, yeah, we'll probably need very different governance mechanisms.
We'll need to have the buy-in from the civil society, from governments,
from the companies building it, from the researchers doing open source.
And this all messy soup needs to come together so we can reach this goal.
It's not going to save us, but it's going to be built by us
to potentially help us moving forward.
Exactly. And this is something I like very much also. It's not an alien.
To be honest, it's all data. It's all behavior. It's all coding. It's all a way to deal with this.
We are building this. Exactly what you said. We are building this, it's us. So it's interesting. However, does it diminish the trust we have for each other?
I don't want to have the trust with the machine
and the reliability of the machine, which is outperforming us.
I want to have the trust we have for each other,
because maybe we don't trust the AI-generated content.
So do you think, and that's a question first for you, Kasley,
do you think that the use of AI is going to alter the social fabric?
I'd love to ask that of the audience first.
Clap your hands if you think...
I know the answer already.
Well, I want to hear it.
Do you think AI will alter our social fabric?
You're clapping too.
I think so.
Yeah, you may.
Yeah.
I can't clap.
Hopefully for the good.
Yes.
Yeah.
I mean, I think so too.
Yeah.
And I think right now we're already at a time where we have a decision collectively to make.
Are we going to continue on the trends that we're currently on,
which is people feel disconnected, this has health consequences,
there's polarization, there's conflict around the world.
There's much good too, right?
The news paints a very negative picture.
There's so much actually more good than bad.
But do we want to continue with those trends
or do we want to do something about it?
And our social fabric is influenced by everything. It's influenced by the technology we use.
It's influenced by the social norms of whether or not we smile and say hello to each other and
get to know our neighbors. We're all influencing the social fabric right now and every single day
that we're alive. And so the choices that we make in AI, but also the choices we make as humans in all of our interactions, are influencing the social fabric. And it's up to us to be
intentional about changing that. Poli, do you want to make that more precise from your perspective?
Yeah, I think, well, I agree 100%. I think that the social fabric has changed dramatically with
internet and with social media, right?
Cell phones.
Cell phones.
So I think, of course, technology has changed the social fabric for a great majority of
people.
We also have the gap with the people that don't have access, which is very important to
keep in mind, right?
Because AI could widen this gap and it's really important that we are intentional in including
people on this level playing field.
But also I think that in the AI social fabric connection, right,
as you've been mentioning, I think that the way we interact with it
can help us undo maybe some of the...
Because I think with social media we also had this promise, right? Like, on paper, social media, for example:
I live thousands of kilometers away from home,
and because of the internet and social media,
I can be connected to my mom, to my siblings.
So that's awesome.
But at the same time, there's all the issues
with our attention being grabbed,
with the loneliness epidemic.
So I think we have a new opportunity
to fix what was wrong while keeping what was right.
And I think that AI gives us this opportunity
if we all take this opportunity together,
and we act as customers, as owners, as a civil society that wants to use
this technology to improve our human connections and not just take it as a gift from the gods and
say, well, let's use it the best way we can. But actually, no, we're all in this together.
We're building this. So let's do this. Yeah. Okay. So that's great.
So can I double click on something there? Sure.
Which was you said leaving people out.
And I think this is a really important point.
I work with a lot of older adults who feel very left out
because they struggle to use the digital tools that all of us use.
And there's still a thing where a lot of people don't have access to internet.
And that excludes them in different ways.
So I think that point is really important to underscore
because the idea of leaving people out
of the coming technology is just another way
that people are going to feel isolated.
So how can we create this social well-being
if we have a future which is so tech-dominated?
It seems to us like the future, and we already see tech,
maybe even AI directly, these days.
So how can we create a more socially healthy world so that the hundreds of millions of people you were talking about before are not turning to AI out of despair and loneliness?
Yeah, I mean, I have a lot of compassion for the reasons people turn to AI companions.
Right? I mean, that comes from a genuine need in their life that is not being filled.
I mean, I think it comes down to thoughtfully designing AI and all of these tools
in the way that you're talking about. It's up to all of us.
And every single choice we make is influencing that future. Okay. And Poli, what about you?
I mean, like talking about improving human connections
and how do you think we can...
In which direction should we go to have this take form, to shape it?
What do you think?
Yeah, I think overall just having these conversations is really important
and also to start more and more challenging the assumption that the AI is a thing.
I think it's time we stopped talking about the AI as this ethereal concept
and started talking about this particular AI, built by this company,
with this interest, with this business model.
It's really interesting and important to actually deliver a nice little, calm tool that you can talk to in natural language.
So I think there is great potential in this technology
being used for good interaction as a tool
to foster people's improvement,
to automate some tasks they don't want to do,
and together with making sure
that we are building this in the right way.
So I think that more people who are feeling disconnected from the technology
could benefit from controlling something without needing to necessarily learn a new UI
or learn prompt engineering. They're just talking to the machine,
and the machine understands their intention and does what they want,
but at the same time carefully designing this so it doesn't actually become something that you get hooked to
or that mistakenly becomes a fake human connection.
Do we want this?
Yes!
Okay, as we are getting close to the end of this debate, I can't even believe it,
I want to ask you, maybe, Kasley, what is your recommendation?
Okay, I'll answer this with a short story, an anecdote,
which is that the founder of one of the AI companion platforms
told a journalist that AI could be a great tool for her,
because she might not have time to talk to her grandma.
And so AI can go ask her grandma questions and then it'll provide a little summary and
she can read that and then she can use that to spark questions and have a conversation
with her grandma.
And what I would like to say in response to that is that I wish my grandmas were alive
so that I could have a conversation with them. And of all the things that I would outsource to AI,
that is literally the last one.
So my proposal for us is Human First, AI Second.
Very good.
Do you want to also add a recommendation for the audience?
Yeah, I fully agree.
And I think that in the human first, AI second,
even though I don't want it to mediate my interaction with my grandma,
I would love it to mediate my interaction with my WhatsApp or Telegram or Signal groups,
because it's really messy and sometimes it's really hard to keep track of. And so, yeah, you know, like I have more social groups and social connections online
than what I can cognitively manage.
So I think if I could use AI to help, you know, maybe I go one day without looking at the group
and the AI helps me summarize it.
Maybe there is a lot of voice messages and I don't like listening to voice messages.
It could like transcribe them for me.
So yeah, I think overall, if the AI is in service of humans first,
I'd be down for that.
We are down for that, right?
Okay, good. Thank you so much to everybody.
Thanks, Poli.
Thanks to you, Kasley.
Thanks to the audience.
That was Kasley Killam and Apolinário Passos in conversation with AC Coppens at TED Next 2024.
If you're curious about TED's curation, find out more at TED.com slash curation guidelines.
And that's it for today. TED Talks Daily is part of the TED Audio Collective.
This episode was produced and edited by our team,
Martha Estefanos, Oliver Friedman, Brian Green,
Autumn Thompson, and Alejandra Salazar.
It was mixed by Christopher Faisy-Bogan.
Additional support from Emma Taubner and Daniela Balarezo.
I'm Elise Hu.
I'll be back tomorrow with a fresh idea for your feed.
Thanks for listening.
Gemini.
Anyone who knows me knows the Pixel has always been my favorite out of all the phones I've ever had. Now with Gemini built in,
it's basically my personal AI assistant. Since I am truly terrible at keeping up with emails,
I use Gemini to give me summaries of my inbox, which is a lifesaver. And if I'm feeling stuck creatively,
I just ask Gemini for help and bam, instant inspiration. You can learn more about Google Pixel 9 at store.google.com.
To me. You'll have to take it. Don't miss the perfect family Christmas movie.
Now I'm swimmingly, if I say so myself. Disney's Mufasa: The Lion King. Oh yeah,
that looks good. Now playing only in theaters. Tickets on sale now. As a Fizz member
you can look forward to free data, big savings on plans and having your unused
data roll over to the following month. Every month. At Fizz, you always get more for your money. Terms and conditions for
our different programs and policies apply. Details at Fizz.ca.