TED Talks Daily - The mental health AI chatbot made for real life | Alison Darcy (Kelly Corrigan takeover)
Episode Date: May 5, 2025
Who do you turn to when panic strikes in the middle of the night, and can AI help? Psychologist Alison Darcy shares the vision behind Woebot, a mental health chatbot designed to support people in tough moments, especially when no one else is around. In conversation with author and podcaster Kelly Corrigan, Darcy explores what we should expect and demand from ethically designed, psychological AIs.
This is episode two of a seven-part series airing this week on TED Talks Daily, where author, podcaster and past TED speaker Kelly Corrigan and her six TED2025 speakers explore the question: In the world of artificial intelligence, what is a parent for?
To hear more from Kelly Corrigan, listen to Kelly Corrigan Wonders wherever you get your podcasts, or at kellycorrigan.com/podcast.
Transcript
You're listening to TED Talks Daily, where we bring you new ideas and conversations to
spark your curiosity every day.
I'm Kelly Corrigan, I'm a writer, I'm a podcaster,
I'm a TED Talker, and I am taking over for Elise Hu
this week for a special series on AI and family life.
I guest-curated a session about this topic at TED2025.
And I'm here now to share these very special talks with you,
along with behind-the-scenes
recordings and personal insights that shed light on the process of bringing them to life.
So I was listening to another podcast and I heard this fabulous woman named Dr. Alison Darcy, and I admit I was swayed by her charming Irish accent, but also she has a very light touch in a very heavy world, which is creating AI therapy for people who probably wouldn't do therapy at all, anywhere, anytime, but nonetheless need support. And so I wanted to have Alison share everything that her company, which is called Woebot, W-O-E-B-O-T, has learned by trying to be in this very soft, intimate place with people when they're struggling.
You know, when I was starting to learn about Woebot
and Alison and the work that they'd been doing,
my initial reaction was negative.
I was like, oh my God, this is the last thing people need.
People need people.
This is like, I'm on team human.
I want everybody to be connecting with each other way more,
eyeball to eyeball,
heart to heart.
I wanted Alison to go first because my initial reaction to the whole idea of AI and parenting was kind of like, sure, they can sort through your schedule stuff, they can do logistics, but they're not going to get involved in the deeper interactions that help kids come into themselves as adults. Like, that's just territory that will never be touched.
And then all of a sudden it was like,
oh no, of course that will be touched.
Of course there are cases where AI will be better
than some parents at some conversations.
And so I turned to Alison to say, tell me everything about the 1.5 million people who have used Woebot, and help me understand how that might end up looking, specifically in the context of family life, to disabuse the audience of this separation that they might have in their minds, as I did: that there's sort of a logistics level that AI could play at, and then there's this very intimate, emotional level that AI will never touch. I had to show them that AI was already in the intimate space in very meaningful ways.
And so to start us off this week, here is my conversation with Dr. Alison Darcy about
the ways that AI might participate in the most intimate, consequential conversations
we ever have.
Welcome.
Hi.
Thank you.
So will you describe for us the average Woebot interaction?
Sure.
So, well, first of all, I suppose it's important to say
that we built Woebot to meet an unmet need.
In 2017, depression was already the leading cause of disability worldwide.
And I'm team human, too.
And I also really believe in what human therapists do.
But, you know, it doesn't matter how good a therapist is.
You could be the best therapist in the world,
but unless you're with your patient at 2 a.m. when they are having a panic attack,
you can't help them in that moment.
And, you know, therapy doesn't happen in a vacuum. We all have real lives. And I was a clinical research psychologist making some of the world's, you know, most sophisticated psychotherapeutic treatments. But I was always haunted by this idea that it doesn't really matter how sophisticated the treatments we make are if people can't access them. And so access has to be part of the design. And approachability has to be part of the design. Because what do you do in that 2 a.m. moment when you can't think straight, you know, and you can't remember the thing that your therapist told you you should do in this moment?
And so that, for me, is the why: we built Woebot to meet people where they're at, in those moments when it's actually hardest to reach out to another person.
And how long do they stay on with you?
So they're brief, very brief encounters.
Six and a half minutes is the average length of time,
and about 75 to 80 percent of all of those conversations
are happening outside of clinic hours.
The longest conversations people have are between two and five a.m.
Yeah.
And is Woebot good for role play?
Actually, we have found that generative AI...
So, Woebot was built to be rules-based.
Everything Woebot says has been scripted by our writing team
under the supervision of clinical psychologists.
And so Woebot will never... it's very safe, it's on the rails.
Woebot will never make up something new.
Fantastic.
But we have been exploring the generative models
and it turns out generative AI is really good for role plays.
Yeah.
And it kind of speaks to some of the advantages that AIs have
and that they're really good at doing the stuff
that humans aren't so great at.
And I think role plays are one of those things, for sure.
Do people disclose more quickly with an AI than they would with a person?
Yeah, so that was shown in an early study, in about 2015 I believe: people would rather disclose to an AI than to what they believe is a human, and that's particularly pronounced for things that are perceived as very stigmatized.
And so, yeah, there's a sort of an advantage to being an AI in that it's never judging you.
You don't have to think about how you appear to the AI. So, yeah.
When I think about it, there are at least four concerns that come to mind.
One is price always, one is privacy always, like who gets these transcripts?
One is control, like who defines what an AI responds and what theories and theses of change are they working from?
But the one that scares the hell out of me is the perfection problem.
And sometimes I wonder if we might inadvertently
be creating the conditions for a total rejection
of humanity, of like dumb, boring,
incomplete, half asleep humans,
when you could have this thing that is so hyper responsive.
Do you feel like people, once they find WoBot,
they never want to leave it?
Definitely not.
No, well, because that's how it was designed, right?
So how Woebot was designed, it all depends on what you are building the thing for.
And if you're building it for human wellbeing, human advancement, the objective is similar to a parent's: success looks like individuation and independence and growth.
And that's partly, you know, challenging the idea of perfection, just like you did.
A great AI should be helping you see that perfection is just an illusion,
particularly when it comes to humans.
That's what makes us human,
and that's something to be celebrated.
But of course, to your point, it really depends on who the designer is and what this AI is being built for. That is going to be, and is, such a crucial question.
Which goes to business model.
Sure.
So who pays for Woebot?
Well, currently Woebot is distributed in partnership with health systems.
But that's right. We build for, again, these short encounters: let people talk and be invited to use a skill that's inspired by one of these great therapeutic approaches, like cognitive behavioral therapy, and then get them back to their life as soon as possible.
Yeah.
We never build for engagement,
keeping people in the conversation as long as they can,
which we just think is sort of a road to addiction, right?
And that's all about the incentive.
How are you being paid?
And as entrepreneurs, we all have a responsibility
to ensure that the AIs are in service of humans,
not the other way around.
Yeah.
And I wonder if we create this dependence on AI therapy companions that you'll never
be able to say, I did it myself.
None of us will.
Well, I still think the humans are doing it themselves, right? Because that's the beauty of an AI. It's not unlike a great therapeutic process, if you like. While this isn't really therapy, structurally it is so different, a great process is just asking the person the right questions. They are the ones that have to do all of the work. They're the ones that have to shift their mindset or acknowledge their role in a conflict with somebody or tune in to their deepest, darkest negative thinking, and that stuff is hard, and that is all on the person. The AI is just going to ask you the right questions to get there. So this isn't giving advice or giving a diagnosis. It's very much, and should be, about helping people develop their own resources.
I use the analogy of those mechanical machines that shoot tennis balls at people so they can practice their swing and get better at the game with a human.
These are fundamentally tools, I believe that, and I think they should be built like that, making sure that the objective function, if you like, is human betterment.
Is there anything you do explicitly
to push people back into IRL interactions or?
Oh, right.
Well, yeah, exactly.
That would be part of, you know, Woebot's kind of value set. We, you know, we constantly would talk somebody through: hey, you know, what is the point of avoidance? And if it is discomfort with other people, then Woebot will sort of encourage that person to follow through with speaking to another human, and then we'll come back a few days later and say, hey, you said you were going to talk to Lucy, have you done it? And we find actually in our data that, aside from the daily sort of check-ins, which facilitate a sort of emotional self-awareness, that accountability is the thing that people find to be the most favored feature of this technology.
So they want that kind of accountability.
Do you have red lines?
Has Woebot sat around and said there's a whole set of things
that people might do in this space that we are not going to do?
Yeah, absolutely.
Like, give advice, diagnose, give away data, sell data,
especially to advertisers.
Flirt, right?
Because that muddies the dynamic of what is happening here.
Again, it has to be so clear what
is the purpose of this conversation
and what are we trying to achieve.
And staying within that boundary is really important.
Is the effectiveness of therapy getting better over time,
or is this sort of element in the mix
maybe going to increase the efficacy across the board?
You see, this is the question.
I think we haven't done a great job of innovating,
I think, in psychotherapy.
Forgive me.
Some of my best friends are clinical psychologists,
but we're not doing a great job.
Since founding the company, things are much, much worse now.
And it's interesting seeing all of this incredible innovation
and technological advancements.
And you know where we haven't moved the needle at all?
We are still as anxious and depressed as ever.
And in fact, a recent World Health Organization survey
or study found that 20% of high schoolers
have seriously considered suicide.
This is getting so much worse. So something needs to change. And I think we need to expand the aperture and bring in tools, additional tools.
It's never about replacing the great human therapists that we have,
but most people aren't getting in front of a therapist.
And even if they are, they're not there beside you as you live your life.
Yeah.
Could you imagine a point where you could put an AI on the kitchen table
and then the family could have one of its sort of little fights, shall we say,
and then it would take the transcript and say,
well, Edward, you shouldn't have said this,
and Kelly, you interrupted, and dot, dot, dot. Like, could you imagine that kind of feedback
on the dynamics that are keeping a family cycling
on the same dumb patterns over and over?
Not being personal at all here, Edward.
Anyway.
As you were saying that, I was imagining my own family (I'm the youngest of six) and just thinking, I had that laptop flying through the window so fast.
Yeah, but again, this is a tool set, I believe, and we can build tools, we can use the tools
in certain ways, but I think you're bringing up something else that's interesting in that, you know, it's
not about replicating the models of therapeutic approaches that were built for human delivery.
I think it's about leaning into now, what can the AIs bring to the table that's new
and that's novel and that's specific to that technology, that tool set.
And that's really, I think, the opportunity moving forward
with these more advanced tools.
I'm thinking about your comment about flirting.
My best friend is pretty sure that her therapist falls asleep on her.
But the therapist has bangs,
and my friend can't tell if she's nodding off or just really thinking.
And, you know, obviously, like, therapists vary, parents vary.
Do you have a thought about which has more potential for damage,
an AI or a human?
That is a big question.
I think that AIs have plenty of potential for damage, as do humans, and it's very early
days with the technology.
The thing is that we have the opportunity to develop AIs with intentionality. And of course there are unintended consequences, and we need to build in the structures, in addition, to be able to monitor and watch those and take advantage of positive directions. So we'll see. Fundamentally, these are just tools. And also, humanity is humanity for a reason.
You know, there are things that are common.
Pain is something that is common to all of us,
and we will all go through difficult moments.
We will all experience grief.
We will all lose a loved one.
And it's about understanding how we're going to work together.
But yeah, just to reiterate that point, we have to make sure that the tech is in service of humans, not the other way around.
Thank you so much for coming to Ted.
We have much more after a short break.
So 12 minutes is not a very long conversation, I have to say that from the start.
And so we were so constrained.
I don't think I've ever done a 12 minute interview in my life and I've interviewed probably six
or seven hundred people over the years.
So it was my job up front to reduce the number of questions from the 18 that I really wanted to ask in a perfect world down to something like four or five, so that she could really open up the topic. There's just one thing that I'm missing
from having heard this. That's Lucy Little, my producer. Lucy was one of several TED staffers
who gave feedback on earlier stages of the conversation. I do think the question of where is this AI learning its information from, like, who?
Oh yeah. Oh yeah.
Because I think that's a huge...
Oh yeah, we should say that from the outset, right?
Should we, so that people realize this is not a generative AI?
So this is, yeah, we're... whatever. Like, Woebot's so old school in lots of ways.
It's like the retro AI therapist.
Everything Woebot says has been scripted.
So we're using sort of machine learning and natural language processing
to understand what people are saying in key areas.
But it's a simulation of a conversation.
So it's very on the rails.
Now, where we've used large language models, it has been to, you know,
tap into their power to better understand
what the person is saying.
But fundamentally, it's just not trained on the Internet.
Right. So I think we want to get to this really good note.
There are two things to say at the top, which is: it's based on CBT,
and it's scripted, it's not generative.
Yeah.
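To make that distinction concrete, here is a minimal sketch, in Python, of what a rules-based, scripted chatbot loop can look like. Every intent, keyword, and script below is invented for illustration; this is not Woebot's actual code, which, as Darcy describes, uses machine learning and natural language processing rather than keyword matching for the classification step.

```python
# A minimal sketch of a rules-based, scripted chatbot.
# Hypothetical illustration only: the intents, keywords, and scripts
# below are invented for this example.

SCRIPTS = {
    # Every possible bot utterance is written in advance ("scripted"),
    # so the bot can never generate novel text ("on the rails").
    "anxiety": "That sounds hard. Want to try a slow-breathing exercise together?",
    "low_mood": "Thanks for telling me. Can you name one thought going through your mind right now?",
    "greeting": "Hi! How are you feeling at the moment?",
    "fallback": "I want to make sure I understand. Could you say a bit more?",
}

KEYWORDS = {
    # A production system would use an ML/NLP classifier here; simple
    # keyword matching keeps the sketch self-contained.
    "anxiety": ["panic", "anxious", "worried"],
    "low_mood": ["sad", "down", "hopeless"],
    "greeting": ["hi", "hello", "hey"],
}

def classify(message: str) -> str:
    """Map the user's message to a known intent, else fall back."""
    text = message.lower()
    for intent, words in KEYWORDS.items():
        if any(word in text for word in words):
            return intent
    return "fallback"

def reply(message: str) -> str:
    """Look up the pre-written script for the detected intent."""
    return SCRIPTS[classify(message)]

if __name__ == "__main__":
    print(reply("I'm feeling panicky and can't sleep"))  # -> anxiety script
```

The design choice the conversation describes is visible in the structure: generation is replaced by lookup, so the bot can only ever say something a human wrote and a clinician reviewed in advance.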
And that's quite a lot of heady thinking.
If I can only touch on four or five ideas here, what are the most salient, essential ideas, such that as you go through the rest of the session, where you're going to meet these other five speakers, you are positioned in the best possible way to process what people are putting in front of you?
So the first thing that Alison clarified for me, which I thought was essential and doesn't get talked about enough, honestly, is who pays for the product, and how will the product be evaluated in terms of its efficacy? Because if the profit motive involves keeping people on the app longer, so that you can, say, collect more data or place more ads, the app will be designed in one way, which could be enfeebling. It could be creating this terrible dependency where a person becomes less strong and less independent and less confident in their own instincts, and more addicted to this little assistant who's going to tell them what to say or do every time they have a strange interaction with somebody in their life.
It sort of forces people into a direct consumer construct, which then has the danger of forcing
innovators into this place where they're now just trying to hijack attention and build for
addiction and not actual well-being. So I think that the big frustration is not the limitation
of the tech per se.
It's trying to find the construct where it can live and be ethical still and be built around an objective function of human well-being.
But when it comes to Woebot, I was relieved to know that they are paid by insurers based on well-being metrics. The biggest shift in my feelings, thanks to talking to Alison, is this possible partnership between people and AI.
Like, one thing that she said during one of the pre-calls was that for some people, the way that Woebot is talking to them is a model for the ways they could be talking to the people in their own lives.
What we're talking about, I think, is making it easier to disclose, to share something
and have practice with sharing that thing.
That's what we've noticed from Woebot is, like, you're right, people are going to Woebot with
things they may never have shared with another person, but it makes them more likely to then go on
and share it with somebody else.
Like it's a practice of externalization.
It's very helpful to be in a healthy interaction and see how that flows, which is basically like asking follow-up questions, making sure that you understood what the person said and meant. All of that gets modeled in these AI conversations so regularly that it does seem reasonable that a person might start using those same techniques, follow-up questions, confirming that you understand what they really meant, in their live interactions with other people.
And that would be a tremendous step forward.
I mean, if people talk to each other that way
with more intention to understand,
less determined to be understood,
people might get somewhere.
Relationships would change.
Another huge takeaway from being with Alison throughout the prep period, and then also sitting across from her in front of all those people, was that we should know who's behind AI. Because when you meet somebody that lovely, charming, and conscientious, you feel very differently about AI, which is so anonymous in its nature. But the way that Alison was talking about it, it's not at all that. It's something that Alison and the people she has recruited are putting in front of us, based on their knowledge of cognitive behavioral therapy, what those techniques are, and what makes them effective. So it's nice to put a face behind these big, huge letters that seem to be towering over everything: AI.
One of the things I've been thinking about since TED, in terms of everything I learned by getting to know Alison and Woebot, is: could there be a really smart, effective way to use an ingestion AI to observe, if you will, a family interaction, or a couple's interaction, or an interaction between a parent and a child, and give notes back to all involved?
Loads of people say to me, but when I'm a therapist, you know, I'm reading the nonverbal communication in the room. And I say, well, you're doing that because you're human, and, as we know, humans aren't as able to disclose to another human as they are to an AI. And so, you know, it's actually a very different dynamic. And the AI dynamic doesn't necessarily need to read nonverbal communication. I would say that with a massive asterisk, and the asterisk is: I don't know how an AI would fare when it's not looking at the nonverbal communication between siblings.
Exactly.
It's louder than all the words.
Yeah.
So theoretically it might be useful, but I would say behavioral family therapy, as I was trained to do it, is a bit better of a fit, because the therapist's role there is as expert.
What about if a robot or another AI is treating all the members of a family? If an impartial, unbiased robot was talking to me and my brothers and my parents, maybe it could help me by saying, here's what your brother's really mad about from that vacation in 1978.
But is the important thing for you to have that insight, or is it for your brother to have the insight and be able to share that with you?
The only tension I felt was like, was it to the greater good that there was an AI
that could walk you through some of your cognitive distortions at two in the morning?
Or is it important in some way that we learn how to do that moment alone?
I think at the end of the day, maybe the most important takeaway from talking to Allison
is a process point.
It's like, how are you evaluating AI options that are going to come across your desk?
Like, could we be smarter consumers and advisors to one another as options become available?
I wonder if AI is going to be a great new receptacle for very scary thoughts,
like, I tried cocaine or my boyfriend wants to try choking.
Yeah.
There's no way my children are gonna come to me with that.
And they never would have.
Yes.
But will they come to you?
And then what is the responsibility of a company
who's in that conversation?
Yeah, this is a real topical issue
because while some of these AIs right now used for
this purpose wouldn't necessarily be covered by the same law as a confidential therapist-patient
relationship, we feel that people using it may feel it is.
And so we actually try and hit those things regardless.
We treat it as if it is a confidential relationship. So, for example, where we are working with a health setting and they have asked for the full transcript data, we've absolutely said no way, and we've walked away from deals. Those things are sacred. And while they're not protected by law in the same way as a patient's therapy notes are, we hold them to that same bar. But of course, there is no law. So we're one company among many that are doing this right now. So that's just something to note.
And I think we're getting smarter, through Alison's talk and the talks that are coming up, about how we evaluate: what questions would we ask of the developer and the designer and the company that's offering us these products? How would we know who to work with and who to let in?
Because we begin the interaction.
We say yes to something and it enters our home. And I think it's already been clear
that we were asleep when we let social media enter our lives.
And I think many people were asleep
when they let something like Alexa into their lives.
Like, I think you have to know
exactly what is in those terms and conditions
that we're all clicking, okay, okay, okay,
before you open the door to your most private spaces.
Well, I think it is interesting.
I think that a lot of people aren't comfortable.
That's Cloe Shasha Brooks, the TED curator,
who was my guiding light through this entire process
during my first exploratory call with Allison.
At the same time, I think that the way
that you're talking about it, Allison,
is extremely reasonable.
Like, I think you really have a clear sense
of the boundaries and the lines that we need to draw
between what is human and what isn't.
I think, to me, the bigger question for people listening to this would be: what if the boundaries that we set around this, or the intentions we set up for this, get violated just by nature of the thing spinning out of control? Yeah. And so how do we prevent that? Can we prevent that?
Yeah, like, I know, and that's right. And I think one of the challenges we have in the field of AI is that most people don't understand the tech.
And so it's so easy to scare and scaremonger.
But you have to imagine, you know, when we say they can self-improve, you still have to tell it how it is improving. That's the objective function. What is the objective function? And what are you improving towards? We still get to say that's human empowerment, that is human well-being, right?
But they are still just tools.
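To ground that idea, here is a toy sketch of what choosing an objective function means in practice: the same candidate sessions rank differently depending on which number you tell the system to improve. The metrics, weights, and numbers are invented for illustration; none of this is Woebot's actual code or data.

```python
# Hypothetical sketch: the "objective function" is just the quantity a
# system is told to improve. Swapping it changes what gets optimized.

from dataclasses import dataclass

@dataclass
class SessionOutcome:
    minutes_in_app: float   # engagement metric
    wellbeing_delta: float  # e.g., change on a mood or symptom scale

def engagement_objective(s: SessionOutcome) -> float:
    # Optimizing this rewards keeping people in the conversation longer.
    return s.minutes_in_app

def wellbeing_objective(s: SessionOutcome) -> float:
    # Optimizing this rewards sessions that leave people better off,
    # with a small (invented) penalty for time spent in the app.
    return s.wellbeing_delta - 0.1 * s.minutes_in_app

sessions = [
    SessionOutcome(minutes_in_app=45.0, wellbeing_delta=0.2),
    SessionOutcome(minutes_in_app=6.5, wellbeing_delta=1.0),
]
print(max(sessions, key=engagement_objective))  # the long session wins
print(max(sessions, key=wellbeing_objective))   # the brief, helpful session wins
```

This is the point both speakers keep returning to: the incentive you encode, engagement versus well-being, quietly determines which design "wins."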
And that's it for today. Come back tomorrow for the legendary anthropologist Sarah Blaffer Hrdy.
TED Talks Daily is part of the TED Audio Collective.
This episode was produced and mixed by Lucy Little,
edited by Alejandra Salazar, and fact-checked by the TED research team.
The TED Talks Daily team includes
Martha Estefanos, Oliver Friedman, Brian Green, and Tansica Sunkersing.
Additional support from Emma Taubner and Daniela Balarezo.
I'm Kelly Corrigan, guest host of TED Talks Daily, here for a special week of content
around the topic of AI and family life.
And please join me at my podcast, Kelly Corrigan Wonders, wherever you listen to podcasts.
I'll be back tomorrow. Thanks for listening.
I mean, this is a total overstatement, but do you think everybody, every family should have like a
robot as part of their family unit?
I don't. I think everyone can use a personal ally, and I think AIs should be constructed so that they, you know, actually help the human condition, and that that's their prime objective. But an AI in the family unit is something, I have to say, I must confess, even being a trained family therapist, I've never really thought about before.
Isn't that interesting? Why is that?
I don't know. Thank God we met.
Opening a whole new thing for you.
Just kicked open the door.