Big Technology Podcast - How Tech Can Actually Help Mental Health — With Ex-National Institute of Mental Health director Dr. Tom Insel
Episode Date: January 26, 2022
Dr. Tom Insel led the National Institute of Mental Health for 13 years, departing toward the end of the Obama administration for a career in tech. Insel joins Big Technology Podcast to discuss how the data on our devices may unlock the key to treatment. Our conversation covers how tech can help manage and treat illness, the privacy ramifications of collecting this data, and whether social media is actually harmful for our mental health. You can pre-order Dr. Insel's forthcoming book, Healing: Our Path From Mental Illness To Mental Health, here: https://www.penguinrandomhouse.com/books/670329/healing-by-thomas-insel-md/ Subscribe to Big Technology to read Dr. Insel's opinion piece: https://bigtechnology.substack.com/subscribe
Transcript
Discussion (0)
Hello and welcome to the Big Technology Podcast, a show for cool-headed, nuanced conversations of the tech world and beyond.
Well, we often hear about how tech is starting to harm our mental health, but can it help our mental health?
It's a question that doesn't get asked too often.
But someone that is asking it is our guest today, Dr. Tom Insel.
He was the director of the National Institute of Mental Health from 2002 to 2015.
He also led the mental health team at Alphabet's Verily.
You may also know Alphabet by its other name.
Google.
And Verily used to be part of Google X.
And he's got a book coming out.
It's called Healing: Our Path from Mental Illness to Mental Health.
It's coming out on February 15th.
I encourage you to pre-order it.
And for the very first time, we're having an op-ed in Big Technology, the newsletter, and Tom is going to write it.
It will come out tomorrow.
Tom, welcome to the show.
Thanks, Alex.
Delighted to be here.
Yeah, it's been really great spending the past few weeks workshopping some ideas with you.
And I'm really excited about the op-ed that you have coming out in big technology tomorrow.
And I think it's going to be great for us to sort of riff on some of the ideas that
you bring up. First, I want to talk a little bit about your background. You know, you're the second
longest tenured director of the National Institute of Mental Health. You left towards the end of
the Obama years and then went to Verily. So I'd love to hear a little bit from your perspective
some of the problems that you think institutional mental health organizations, the government, pharmaceutical companies, you know, the psychology practice at large, weren't addressing, and how you think tech might have actually been able to help with that.
Oh, you bet.
So let's start with, because a lot of people listening probably don't know what the National
Institute of Mental Health is, NIMH.
It's part of the NIH, the National Institutes of Health.
And that's gotten a lot of play recently because of people like Tony Fauci, who runs the
Allergy and Infectious Disease Institute, or Francis Collins, who's the NIH director and better known. But there are 27 of those institutes. And I ran the one called the National Institute of Mental Health, which is, I think, one of the oldest and one of the
larger institutes within the 27. And the remit for that institute, as a government agency, taxpayer funded, was to provide the support for science on mental illness broadly. That included
brain science, neuroscience, genetics of the mental disorders, development of new treatment,
a whole range of research that was geared to trying to reduce the morbidity and mortality,
or what people usually would call death and disability from mental disorders.
You know, part of that job is you, as Tony has been doing a lot lately,
you talk to the public and you're in the public eye a lot.
These are very interesting leadership roles at a national level.
You deal a lot with Congress.
You deal a lot with the White House.
But most of all, you're using taxpayer dollars.
So you talk to the taxpayers and you try to explain what you're doing with their investment.
And I was doing that about five or six years ago.
I was out here on the West Coast, and somebody got up after I explained the brilliant science that we were supporting,
work on stem cells, very cool, new kinds of technology for understanding how the brain works.
And then somebody got up and said, man, you just don't get it.
It's like, I have a son who's 23.
He has schizophrenia.
He's been hospitalized four times.
He's been in jail three times.
He's tried to kill himself twice, and now he's homeless.
So, man, our house is on fire, and you're telling us about the chemistry of the paint.
And that was a really important moment for me.
I mean, that's kind of why you talk to the public because you want to hear people tell you what you're not hearing inside the Beltway.
And that was that moment when I realized he's right.
You know, at first, I was very defensive.
But the science that we were doing, which was spectacular, just wasn't really answering the call.
It wasn't putting out the fire.
And I began thinking, maybe there's a better way to do this. Maybe supporting research at academic medical centers, as important as that was
in this moment, wasn't dealing with some of the issues that people with schizophrenia, bipolar
disorder, anorexia, some of the really deadly illnesses were facing. And so I began wondering,
like, how do we do that? How could we do better? And it was about that time that I ran into Andy Conrad, who was
just starting Verily. It wasn't called Verily at that point. It had another name, Google Life
Sciences. And we started talking about this, and he had a similar passion. His mother was a
psychiatrist who worked at the L.A. County Jail. And so he knew quite a bit about it, just from
growing up in a family that talked about this stuff. And so we started scheming together about
what could we do? How could a tech company with access to better data, with scale, with people who understood data science in a deep way, how could we do better? And that began
that transition from working in the government to working eventually at Verily and then a series
of smaller companies to try to figure this out. So let's talk a little bit about the work that
you did at Verily, which was under the Google banner. One of the things I found really interesting was that it wasn't like, how can we shift the products to make it better for
people's health, but it was something much more fundamental, the fact that you thought that
you could use, you know, streams of data. I'm looking at, you know, a quote that you gave
to the Atlantic a few years back. You know, you said you could use streams of data that a smartphone
can provide to use that as a diagnostic tool. And you write about that this week in Big Technology
about how the sensors can help you
and how natural language processing, for instance,
might be able to help you diagnose psychosis earlier.
So I'd love to hear more about your perspective
on how the technical inputs that we can get from a smartphone
because the iPhone was introduced while you were running the NIMH.
I'm curious like, well, actually, yes,
I'm curious how you reacted when you saw the iPhone come out,
if you remember that at all.
And then sort of, yeah, talk to us a little bit about these sensors that can be used in diagnosing and treating mental health.
Well, I'm not sure I can remember exactly how I reacted when the iPhone came out because it was, you know, it would not have occurred to me at that time that this was in some way an important advance for mental health.
But what I probably didn't understand then, but grew to understand as time went on, is that a large impediment to progress in the mental health space has been the lack of measurement.
In diabetes, we look at blood sugar or hemoglobin A1C, and in hypertension, we measure blood pressure.
And, you know, in most of medicine, the way we manage disease and the way we improve outcomes is through measurement.
We have really good, objective, reproducible measurement.
That's just not true in mental health.
The mental health space has been entirely dependent on subjective report,
and often through a self-report using screening tools that are not really robust or rigorous.
So if one asks the question, how do we measure the way somebody feels, the way they think,
the way they behave, it's pretty obvious that technology can capture a lot of that, and it can do
it passively.
I mean, even something as simple as getting a vague sense of how we sleep can be done better through technology.
You know, people often look at their phones as the last thing they do before they go to bed, and they look at them first thing when they get up. Even that simple measure of time off, time on for a phone could be a pretty decent objective estimate that could be better
than self-report. And you can certainly validate it with even more objective measures.
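That phone-off, phone-on idea is simple enough to sketch. A minimal illustration, assuming only a log of screen-use timestamps (no content at all): the longest overnight gap between uses serves as a crude sleep estimate. The function name and sample data here are hypothetical, not any product's actual method.

```python
from datetime import datetime, timedelta

def estimate_sleep_window(screen_events):
    """Estimate a sleep window as the longest gap between consecutive
    screen-use timestamps, a crude proxy for time asleep."""
    times = sorted(screen_events)
    best_gap = timedelta(0)
    window = None
    for earlier, later in zip(times, times[1:]):
        if later - earlier > best_gap:
            best_gap = later - earlier
            window = (earlier, later)
    return window, best_gap

# One day of hypothetical screen-use timestamps
events = [
    datetime(2022, 1, 25, 23, 40),  # last look before bed
    datetime(2022, 1, 26, 7, 5),    # first look on waking
    datetime(2022, 1, 26, 9, 30),
    datetime(2022, 1, 26, 13, 0),
]
window, gap = estimate_sleep_window(events)
print(gap)  # 7:25:00
```

As Insel notes, a derived measure like this would still need validating against more objective ground truth before anyone relied on it.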
So beginning to think about what are the things you'd want to track. Something as simple as,
you know, in mental health, we care a lot about your social interaction. Are you becoming socially
isolated? Are you connected? Well, we can get a pretty good sense of that from text going out
versus text coming in, calls going out, calls coming in. I mean, it's not, this is not complicated
stuff. It's pretty obvious. And yet, almost none of those measures are part of a typical
mental health assessment. And so my sense was that we could do better. There's the privacy
concern part of it, where people will be skeptical of a company. Maybe it's a startup. Maybe it's
Google that is monitoring text in, text out, phone calls in, phone calls out. So how do you get past that? And in your work at Google, and then elsewhere after you left Google, were you able to create anything that was able to use some of those inputs to actually treat or diagnose mental illness? Yeah. So I'll answer those questions in reverse.
I think the first question is really does this work.
And it's actually not as simple a question to answer as one might think.
Because unlike the diabetes analogy, we don't have a good ground truth here.
You know, when you say, well, are we picking up a change in depression or a change in psychosis?
How do you know, right?
You have to go back, and the only way to get a pretty good sense of where the person is, is by getting their subjective reports.
So at the end of the day, your ground truth ain't great.
And understanding that problem is what has taken a lot of time for many of the companies in this digital phenotyping space: trying to figure out what are the signals that actually matter, and how do they matter?
What signals actually tell us what we need to know when we need to know it?
Do you have any sense on what those signals are, or are we still trying to figure it out?
My guess is, well, it's still early days, and I think we'll get better at this.
I think some of what we are going to find most useful are the most obvious things.
Like the ones you mentioned, calls in, calls out.
Those kinds of things are just activity, literally even does somebody ever leave their home?
In the case of, you know, when people become manic, we used to say when I was at NIMH in the old days that the best biomarker
for mania was your credit card report because people go out and they start spending money.
But, you know, a far more effective way is simply to look at somebody's activity.
They stop sleeping.
They are going all day all night.
And you know that.
You can know that from a phone.
Now, Alex, you asked about the privacy issue.
And of course, one of the things you want to think about here is how do you do all this within the phone?
You don't have to collect the actual data.
You can collect some derived measure, some metric that says your social interaction score is 0.71.
Now, that's a piece of information that can go to the patient and they decide who they want to share it with and how they want to share it.
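What such an on-device derived metric might look like can be sketched in a few lines. The weighting and baseline below are invented for illustration, not any company's validated scale; the point is that only the single score, never the raw texts or call logs, would leave the phone.

```python
def social_interaction_score(texts_out, texts_in, calls_out, calls_in,
                             baseline=40.0):
    """Collapse raw communication counts into one derived metric,
    capped at 1.0. Weights and baseline are illustrative assumptions,
    not a validated clinical scale; only this score, never the
    underlying messages or call logs, would leave the device."""
    # Outgoing activity weighted higher: reaching out is arguably a
    # stronger signal of social engagement than being contacted.
    weighted = 1.5 * (texts_out + calls_out) + (texts_in + calls_in)
    return round(min(weighted / (1.5 * baseline), 1.0), 2)

# A hypothetical week of communication counts
print(social_interaction_score(texts_out=12, texts_in=20,
                               calls_out=2, calls_in=3))  # → 0.73
```

The design choice mirrors what Insel describes: the reduction happens within the phone, and the patient decides who sees the resulting number.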
So a lot of this is, I know people think about this as surveillance, and I get that.
I understand where that comes from.
But there's another lens in which you look at this ability to measure, and you can think of this as a way to empower patients to be able to manage their own illness.
And I talk a lot about this in the book, that what, you know, for somebody with bipolar illness who's ended up in the hospital two or three times or maybe ended up in jail, they really want to know, when is mania starting, when is depression starting?
Is this just a good day when I'm feeling good?
or am I becoming manic?
And if you have a better way to monitor that so that they themselves can look at their
behavior and track it in a way that allows them to manage their illness better, just as if we
were giving them back their blood glucose measurement if they had diabetes, they'll know whether
to increase or decrease their insulin.
That's the vision of how this could and should work.
But to be clear, I don't think we're there yet.
I don't think we're that far away.
I think we could get there, but it's not been that easy to develop the signals that people can use to manage in the way we'd like.
I'm kind of astonished that you think that it is really possible for these signals that we get from a smartphone to let people know that they have the onset of mania hitting.
Yeah.
It's really going to be possible?
Oh, I think it's entirely feasible.
And I think it will.
I tell a story in the book of somebody with bipolar illness
who does this in a way that's just elegant
using a range of signals that he himself has figured out to collect
and then uses this as a sort of, as he says,
a dashboard of his mind.
You know, there are lots of names for this; some people call it, sort of, screenomics. There's just a whole series of ways of classifying this kind of personal monitoring of
your own behavior. But it is a great way for people who have struggled with destructive psychotic
illnesses that have landed them in places they don't want to be. It's a great way for them to
get those early signals. And they learn over time what these things mean.
What sort of lift, you mentioned you communicate with the public, what sort of lift do you think it would take to get the public on board with having this sort of monitoring, even if they control it on their own devices? Because there seems to be, in reporting on social media in particular, I know there is a wild discomfort with the fact that, you know, Facebook might be listening to you, which it doesn't, but it does track what people do, and people are sort of freaked out by the fact that it might suggest that they purchase, you know, a pair of boots that their friends have. And the ad targeting is not really good. That is one side of the scale. This is a radically deeper look into the psychology of a person.
And so, just from a public standpoint, what sort of lift do you think it's going to take to make people feel comfortable with this stuff? I'd be curious. I don't know if I'd be comfortable with it. But yeah, what do you think?
You know, it's a little bit like the early days of step counting and all the things that you can pull off of the health features on the iPhone.
I think there are going to be some early adopters who care, you know, who are in the business of training for a marathon, who will look at that data and really need it and use it, and it's important to them. And there are going to be, you know, 90% of the public who will have
no real interest or care about it. I think here what we're talking about needs to be put into the
frame that this isn't necessarily for everybody. I mean, the way it's being developed, it's for
people who have a deadly illness, so that they can stay alive. And if we can create the sorts of data that they have full control over, full agency with, where they manage when it's collected, how it's collected, who gets to see it, then this is really data to empower them to manage their chronic illness. I think that's a better way for the public to understand this.
In fact, we're not there. I mean, at this point, I don't think we have a system that allows all of this reduction, all the analytics, to go on within the phone.
But that's not a ridiculously difficult problem to solve.
And I think we have to show that this actually does help people to manage, which various
companies are in the process of doing that.
So the way I often talk about this, Alex, is that we're kind of in the first act of a
five-act play. I think we now have the tools that can allow us to say, okay, this could be done.
And let's try doing it A, B, and C ways. But we've got a long ways to go before this becomes
as useful as the continuous glucose monitor that people use to manage their diabetes. We're not there
yet. And of course, privacy becomes paramount because if this stuff got hacked, for instance,
even if it's on device, it's a catastrophe.
It might be, although, you know, think about what we're talking about.
I mean, I'll give you an example that one of the companies that I co-founded called Mindstrong
was collecting this kind of data, but not content.
They were collecting the haptics.
They were collecting how you type, how you tap, how you scroll, the way in which you
navigated the phone. And they were able to create algorithms from that that were pretty good
substitutes for the ratings of depression. So they really were tracking pretty well with mood.
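The flavor of such content-free haptic features can be sketched as follows. These particular features, and the event format, are invented for illustration and are not Mindstrong's actual algorithm; note that what was typed is never stored, only how the typing happened.

```python
def typing_features(keystrokes):
    """Derive simple haptic features from a keystroke log. Only the
    pattern of interaction is kept, never the text itself. Any link
    between these features and mood is illustrative, not a claim
    about Mindstrong's real model."""
    n = len(keystrokes)
    deletes = sum(1 for k in keystrokes if k == "BACKSPACE")
    # Longest run of consecutive deletions: a rough "correction burst"
    longest_burst, run = 0, 0
    for k in keystrokes:
        run = run + 1 if k == "BACKSPACE" else 0
        longest_burst = max(longest_burst, run)
    return {
        "delete_rate": round(deletes / n, 2) if n else 0.0,
        "longest_delete_burst": longest_burst,
    }

log = ["h", "e", "l", "o", "BACKSPACE", "BACKSPACE", "l", "l", "o"]
print(typing_features(log))  # → {'delete_rate': 0.22, 'longest_delete_burst': 2}
```

As Insel says of the real data, a log like this would be close to meaningless to anyone who intercepted it.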
No way. Can you give an example of, like, what sort of scroll pattern or type pattern might correlate with mood? Yeah. So, you know, if you were looking at somebody who was hitting delete, delete
frequently or somebody who was making lots of misspellings. And then there were some that were
really far more subtle. But it wasn't about the content. It was really about the way that somebody
was actually interacting with the device itself. And of course, if somebody were to hack into that
information, it would be completely meaningless. Even for us at Mindstrong, it was often really
hard to decipher. So, you know, we should get smart about this. I know, and this gets back to your introduction, that there's a sense from everybody that they approach tech with great suspicion, and they are intensely worried about privacy and hacking and surveillance capitalism. I get that. I come at this from a somewhat different
perspective, which is I'm concerned about the 47,000 people who die by suicide every year. And now,
the over 100,000 people who die from drug overdoses. And I see the damage that mental illness does
to families and, you know, in fact, right now to the social fabric. And I know, I know that all of that
is avoidable. It's avoidable with better care. Right. And yet the care we give is not very
great. And what I'm telling you is that one of the reasons that care isn't good, there are many,
but one of them is that we don't measure anything.
So if tech can help us to fix some of that,
and that's an if, not saying it does,
but if we can figure out how to use the data and the data science to save lives,
I think that's worth taking a real run at and trying to understand better.
And honestly, maybe that is one of the ways in which big tech redeems itself by showing that it's actually able to do something that helps people in the moment of greatest need.
Yeah.
And I'm hearing two things from you here.
One is that we're in the early innings.
And two is that you have actually worked on products that address this problem, such as the stuff that you were doing at Mindstrong Health, and I'd imagine at Verily, too.
So what does it look like, you know, in terms of what we have right now?
And have you seen any success with the actual products that exist today?
Yeah, it's a great question in the sense of when we look at this through the lens of technology,
we're really asking, is it software, is it hardware?
And, you know, on each of those, what's working?
And certainly for hardware, you know, it's fair to say that VR is,
really helpful for people with phobias who need an immersive experience to overcome avoidance.
And that works.
There's no question that's effective and it's convenient and there's something there.
And in terms of software, you know, the ability to either provide a kind of digital therapeutic
where people can get access to evidence-based therapies, or to provide better connection and coordination of care, super helpful.
If you look at the companies that have been most successful,
they kind of fall into two or three groups.
I mean, certainly one of them are the companies that use tech to improve access.
So the sort of the teletherapy effort, which says, look,
psychological therapies are largely about two people sitting in a room.
That doesn't have to be in the same room.
They can connect by Zoom.
They can connect over our platform.
And we can help patients who are looking for a therapist to find a therapist they like.
And we can help therapists who are trying to fill their caseloads to find patients in need.
And a lot of companies have done that.
Often they've done it within the employer insured market.
So they've helped people who have employer insurance to get psychotherapy, something, by the way, that was not happening before.
So this has solved a really important problem.
I call that the access problem.
And that's a good thing.
The other place where I think we're beginning to see some real positive signs with technology
is using technology to measure outcomes in a way that hasn't been done well.
So it's creating either better software or better ways of visualizing the data so that the providers actually know when somebody's getting better and when they're not.
And sometimes that it could be digital phenotyping.
It could be kind of those passive measures.
But it could be something as simple as having people fill out rating forms, self-rating forms,
and making sure those get loaded up and get visualized in a way that allows the provider to know,
is somebody getting better or not.
So I think we're seeing the beginning of success there.
What I'm excited about, and this was my most recent company called Humanest,
is using technology in a very different way,
which is to redefine mental health care.
What I think has become clear and became really clear to me in writing the book
is that there's a difference between health and health care. Those of us in the healthcare industry tend to think that health is what we do. But in fact, for the public, health is how they feel. It's how they function. And only a small part of that is how many
pills they take or how many visits they have to the dock. A lot of it is their lifestyle.
And a lot of it is the world that they live in in terms of social support. I call it the
three P's, the people, the place, and the purpose. They need the three P's. And this most recent
company, Humanest, is pretty interesting that way because it's connecting people with people in
psychological distress. It's allowing them to connect with other people who are struggling with the
same problem and giving them the tools to help each other. So again, it gets back to this
idea of technology to empower people. And so Humanest does a lot of that. I think it's super interesting to see the efficacy of simply giving people a chance to
help each other. One of the things we've discovered is that it's one of the most therapeutic
interventions you can imagine. Rather than sort of maintaining the hierarchical healthcare system,
if you simply say to somebody, your experience matters to somebody else and you give them the
chance to share that and to use that experience to help somebody else get through a similar
problem, that's profoundly therapeutic. And it's also self-affirming in all sorts of great ways. Humanest likes to say it's not about what's wrong, it's about what's strong. And so it's giving people those opportunities. So I think there's a lot we can do.
And again, it's early days. We still have a lot to learn. But it's not just about apps. It's also about connecting people in important ways. Yeah, and I want to talk
about that more in just a bit. Dr. Tom Insel is with us. He was the director of the National
Institute of Mental Health from 2002 to 2015, and the author of Healing: Our Path from Mental Illness to Mental Health. It comes out on February 15th. I encourage you to pre-order it.
In the first segment, we talked a little bit about how tech can help our mental health.
I want to talk about the arguments against in the second segment here. So stick around.
be back right after this. Hey everyone, let me tell you about The Hustle Daily Show, a podcast filled
with business, tech news, and original stories to keep you in the loop on what's trending. More
than 2 million professionals read The Hustle's daily email for its irreverent and informative
takes on business and tech news. Now, they have a daily podcast called The Hustle Daily Show,
where their team of writers break down the biggest business headlines in 15 minutes or less
and explain why you should care about them. So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
And we're back here with Dr. Tom Insel.
He's the author of Healing: Our Path from Mental Illness to Mental Health.
In the first segment, we talked a little bit about how tech can help mental health.
In the second segment, I want to talk about the arguments against mental health
startups.
And Tom, maybe you can help us go through them and give your perspective on each.
Before we jump into that, though, I just kind of have a broad question for you.
We talk a lot about social media here.
You know, you've worked in psychology for a long time.
You've worked in tech.
Do you think that overall, and I know this is kind of an unfair question, but do you think
overall Facebook and Twitter, TikTok, social media platforms like that, Instagram, are those
good or bad overall for our mental health?
Yeah, it is an unfair question.
I mean, I say it's unfair because I don't think we have as much data as we would like to think we have.
You know, there's evidence that goes both ways.
There's a lot of correlational evidence that for kids
who are spending a huge amount of time on screens,
and that could be not just social media,
but even video games,
that beyond a certain number of hours, it takes a toll.
And I've been looking at some of this literature recently.
It's, you know, we're doing a massive social experiment, and I don't think we have a good control group for this.
So it's a little hard to quantify how good or how bad any of this is.
And a lot of it depends on the individual and exactly what they're doing online.
I mean, I come at this as somebody who came from a generation where my parents tore their hair out about us watching television, which was a new thing in the era when I was growing up, and maybe it was incredibly destructive, but we somehow survived it.
I think, whatever, it's a massive, transformational moment in the way people get information, in the way they connect.
and I think there could be no question that this era of disinformation is incredibly destructive
and it's led to the sort of tribalism that has overtaken this country.
On the other hand, when I travel internationally, I'm kind of amazed by how much, in Tanzania or in parts of South Asia, access to social media has really changed what was a very parochial view of the world.
They now have a sense of the world that wasn't there before.
Again, is that good or bad? Time will tell.
Did you see Frances Haugen, the Facebook whistleblower's, revelations that the internal research was showing that Instagram, for instance, was bad for teens' mental health?
And, you know, we don't get, you know, people who've led National Institutes and mental health on the show too often.
And so I want to run that by you because it's been a hot topic in the newsletter and in the podcast.
And, you know, you said we don't have enough studies. So in what regard do you hold the internal studies at these tech companies?
Yeah.
So I haven't seen the data.
So I don't know exactly how they're done or what, you know, what they're representing.
But the real indictment there is, if they themselves felt that this was a problem and they did nothing about it,
That, of course, is egregious.
But what it was they saw and what it was they thought about that, I can't tell you because I haven't been part of that.
It's, you know, we're at this really interesting moment where we're going to have to figure out how to live with this thing that has been created.
And as much as we may want to push back against all the evils that these platforms can represent,
I'm still of the mind that we ought to figure out a way to try to bend them towards something that could be useful.
And I do feel deeply, and this gets to the bigger question of what can tech do for mental health,
that a profound problem for mental health care, especially in this country, is what I call not the access problem, but the engagement problem.
So while all those startups are really focused on access, what strikes me is that many of the people who most would benefit from mental health care, whether that's social support or therapy or whatever,
that might look like, don't get it. And maybe they don't even want to get it in the way that we provide it. So I call that the engagement problem. But they are engaged someplace. And where do you find them? Yeah. If they're young, you find them on Instagram or TikTok. And if they're a little bit
older, you might find them on Facebook or Twitter. And so the question for me isn't like,
are these good or bad, but how do we get those platforms, which have engaged the people in need
to be able to provide something that actually helps at a population level?
Facebook says that it has used artificial intelligence to find signals for where people might be
considering suicide. And it says that its interventions that have been signaled by AI are more effective than human moderators. A, do you buy that argument? B, is this the type of
thing that we need? You know, is this sort of like the potential to help that you're looking
for? No. That's a Facebook answer. I mean, you know, so think about that for a moment, Alex.
You know, I'm not interested in somebody telling me there's a bigger problem. I'm interested in
somebody coming up with solutions.
So what I'm talking about is what are the solutions that they can provide?
You know, it is true.
They have gotten, I think, very good.
Actually, all of the platforms are pretty good at this point in knowing when somebody's
in trouble.
It's not that hard.
It really, I mean, people will largely tell you; they confess to struggling.
And they do a lot on these platforms.
The question for me is, what do you do about that?
Do you pop up a 1-800 number, the suicide prevention lifeline, and tell people to call this number?
Do you have the crisis text line link?
Okay, those are good, but why not think about what the platforms themselves can do, what they can provide?
I've already told you that you have a whole industry out there of startups that are working on interventions that help people to recover. Why not, since you've got the engagement problem solved, why not use
that platform to actually do something that's helpful? And what could that be? Well, it could be
something as simple as what I mentioned before for Humanest, bringing people together who have
similar issues and giving them access to a trained therapist and in a moderated platform,
potentially time-limited, help them to get past this.
Often, you know, these platforms are getting to people in a hot moment when they're thinking
about suicide or when they're thinking about doing something desperate.
That's the moment in which an intervention could really matter.
And so that intervention isn't just popping up a link to somebody else, but creating
something within the platform that can actually make a difference.
There's a nonprofit called Koko
that's been doing this.
Mostly, as I understand it, they've been working with Tumblr as a kind of pilot
to go after things like body image, suicidal ideation, and concerns about very low self-esteem
and anxiety.
And what they do is they have a whole set of interventions that are within the platform.
And it's really quite remarkable.
They get, within a very short time, a 67% reduction
in abnormal body image issues simply by, sort of, you know, taking very good content
and putting it into a place where people can see it and use it in a way that makes sense.
It's interactive.
It's really smart.
There's a lot we can do.
We just aren't thinking that way.
And so it does pain me a little bit when companies say, you know, we are really good at detecting
that somebody is suicidal.
Okay.
So tell me what you're going to do about that.
Because we really, you know, we don't need more people being identified as much as we need someone to put out the fire, to go back to the original metaphor.
Yeah, it's interesting.
I think Facebook has sent people out to these folks' houses and, at least at the last moment, prevented some of these situations.
But I agree.
More needs to be done.
Yeah, so there's a blog post that Mark Zuckerberg put
out in 2018, 2019.
This was a company-wide effort.
And nobody talks about this, but they deserve a lot of credit for actually taking
this seriously.
And they were trying to figure out a way to be able to intervene.
He claims, I think, that there were something like 1,300 suicides that were prevented
by their responding.
And what they did in that case, though, was to alert first responders in the area.
And that's okay.
I mean, that's great.
I'm interested in figuring out whether there's something much, much deeper and more ongoing that can be done, again, within the platform.
And if that were to be done, I think it would need to be done with the ability also to link to crisis services outside the platform.
That's going to be key.
There's the very beginning of a conversation going on across all of those companies that you mentioned about this,
with a group that I'm part of called the Mental Health Coalition, being led by the designer
Kenneth Cole, who did something similar around AIDS and is now doing this with mental health.
And so we've literally just had a single meeting with the mental health teams at each of these
companies, and each of these companies does have a group that cares about these issues.
And we've begun to think about, so what could be done?
What could that look like?
And we're just, you know, I would say this is at the very earliest stage.
But it gives me hope that we may be able to actually have a sort of population public health impact if we can do this in the right way and define a problem that's very distinct, like something like reducing suicide in young people and figuring out what are the ways of intervening once we identify those hot moments where the intervention is most necessary.
Yeah, and I promise listeners I'm going to get to some of the criticisms of mental
health startups that I teased at the open of the segment. But now that we're here, you know,
you talk a little bit, Tom, about the engagement where people are. I'm curious what you think
the broader effect of the, you know, year and a half, two years, depends on who you ask, a few
months of lockdowns and, you know, staying inside that we've had due to the coronavirus and sort of
the fraying of social bonds and the fact that people have gone from meeting in person to
meeting through screens. What's your perspective on this? Because the folks that I speak
to, you know, seem to believe that this is going to be a long-term trauma that we're going
to have to deal with, especially among kids. And that therapists, for instance, are also just
booked to the gills at this point and have no more room for intake. So what's your broad
perspective on what we're about to see next.
Well, I think we have some pretty good data for this.
The CDC has been monitoring psychological aspects of the pandemic.
There have been several large-scale, some of them global studies that have looked at this as well.
It's pretty clear that this has just been a psychological shitstorm for the population, most
of all for people under the age of 25.
So whereas the virus was most destructive in terms of its infectious disease potential on the respiratory system for people over 55, the psychological fallout is for people much younger.
And it's for all the reasons you mentioned, as far as we can tell: the changes, just the lack of routine, the lack of predictability, the fact that you're not able to see your friends.
if you're in school and you're hoping to progress, it's just been, for many, many kids, a lost era.
And if you're a kid, a year or two years is everything.
It's your whole life.
And so the numbers are stunning.
It's just really stunning when you look at it, whether it's done through ER visits, whether it's done through the ratings of
depression and anxiety, the numbers are unprecedented. We're in the middle of a youth mental
health crisis. The American Academy of Pediatrics has called this a national emergency. The
Surgeon General last month released a public health advisory. I mean, everybody is waking up to
this fact that as we go into the third year of the pandemic for kids, this has just been
incredibly destructive, traumatic. And it will outlast COVID. I call this the COVID generation.
They are going to be defined by this two-, potentially almost three-year period of having life interrupted.
So it is going to be an important task for us all to figure out how to support them.
My own sense about this is we don't want to medicalize this.
This isn't the time to start giving everybody a psychiatric label.
but we need to think about schools and families
and also the kinds of social connection
that I mentioned before, and providing those,
because those are the things that count.
I mean, the kids are mostly in school.
That's where they can get supports.
I talk a lot about future-proofing,
giving kids the skills, the life skills,
to be able to deal with this kind of stuff.
And we know how to do it.
There's no mystery to that, to what the tools and the skills are that they
need. But we're not delivering that in most places. Thankfully, in California, we've made a phenomenal
historic commitment to this $4.4 billion effort in children and youth behavioral health. And that
involves creating a whole new workforce to work in schools across the state. It involves creating
a capacity for those kids who develop actual mental illness, crisis
services, just a whole range of new approaches and new supports.
I'm not sure that's being met in the same way in other states, but this is the moment.
We're going to have to get ahead of this because we don't want these kids to be defined by
these two years.
They need to get past it.
They need to get on with their lives and catch up academically, catch up socially,
and catch up in terms of their life skills.
I just wanted to quickly go through a few things.
We talked a lot about how mental health startups can help solve some of the problems
that we're seeing.
There are some arguments that they end up hurting more.
I want to start with one, well, I don't think this is expressly
what you say, but a potential problem that you articulate in your op-ed coming out
in Big Technology tomorrow, which is that sometimes people focus so much on the analytics
and the data collection that they forget about the human aspect of care.
So can you address the situation there and how that is something that, you know, might be plaguing mental health startups and how to get past that?
Yeah, I mean, I think when I got into this, into this tech space around mental health,
I was really so focused on better measurement and better engagement that what I was missing was that tech can only do so much.
You know, if you talk to people, particularly people with serious mental illness who have recovered, they'll almost always begin telling you that the essence of that recovery was a relationship, that it was a person.
Now, sometimes tech can help to make that connection, and that's great to the extent that we're able to use the technology simply to allow people to get social support and to create.
that kind of therapeutic relationship, that's awesome.
But we really need to be focused on that.
I mean, I think everything we know about how people recover from depression,
from severe anxiety, from psychotic illnesses tells us that relationship matters.
And we can't lose sight of that in our zeal to get better measurement,
or in our fascination with shiny objects like VR,
all of that is a piece of it,
but we're also going to need not just the high tech, but the high touch.
Right.
Lastly, I spoke with a friend.
I think you and I have spoken about this in the past when we were talking about the op-ed,
but I spoke with a friend, you know,
when I texted him a little bit about the work that you're doing,
and he said the last thing the world needs is more mental health startups.
Now, over the course of this conversation,
okay, I'm convinced that the technology can help in some ways.
But he does make a good argument.
His name is Bill Samlin.
He's a mental health professional working out West.
And he makes a good argument that I wanted to run by you, where he says that new
mental health companies complicate our mental health system, making it more splintered and ineffective.
So I think the gist of his argument is that, you know, you could have a therapist through a health system,
you could have a therapist on Talkspace, you could have all this data, you know,
on your phone and instead of actually
everything working together to help somebody,
it becomes splintered, makes it more complex.
And it's already a frustrating experience
for a lot of people dealing with the mental health system.
How do you respond to that?
Well, that's the problem.
Fragmentation is a massive issue in this space,
more than in the rest of health care.
Fragmentation here really is a problem to solve.
So I agree with him on that score.
My question is whether technology can help us to solve it.
And it is possible if fragmentation is largely the failure to share information,
the failure to coordinate care, and the siloization of interventions.
Yes, all of that is a solvable problem.
You know, the way I think about it, and I've thought about this
a lot. And it's what I really wrote the book about. It was like, why haven't we done better?
Why is suicide not going down? Why are all the public health measures for mental health
getting worse and not better? Some of it is COVID. But even before COVID, we were not doing
well. And at the same time, they were doing so much better in many other parts of medicine.
And there were three things for me. It was the lack of engagement, the lack of quality control
and the lack of accountability. And each one of those, technology can help
with. Technology knows how to do engagement, especially the big tech companies. Quality we can
fix with better training and better access, but also with access to what works. And accountability
comes out of measurement, and those are things that we should be able to do. So I think,
it's true, we have a ton of mental health startups, so we probably don't need more,
but we do need solutions for those three problems. And that
hasn't happened yet. We haven't really solved those three. And when we do, I think we will begin
to bend the curve. I think we'll begin to see better outcomes. For me, this is all about tracking
outcomes and knowing whether people are getting better, whether they're recovering or not. So
far, we're not there. But I think tech can be part of the way we get there. Yeah. And if I could
bring up one last one that he brings up and you bring up in your op-ed in Big Technology
tomorrow: that mental health startups, you know, improve access, right, which we can all agree
makes it easier to get on the phone with a therapist or do a video call. But they tend to
overlook serious mental illness, where that type of care doesn't scale. It's not quickly
profitable, making it less interesting to venture capital. And ultimately, we have a very big
problem with serious mental illness, you know, in this country as well as the world. So I'm
curious how you think that tech can't address that or is that just something that it's not going
to get to? Yeah, it's the question I've been thinking the most about for the last six months and
been writing a lot about it too. I think the serious mental illness, that is the psychotic illnesses
that are most disabling and lead to some of the most expensive and most horrific outcomes
has been largely neglected by innovators. It's just not on anybody's map. Again, not true
in cancer, where serious cancer has been very much the focus of innovation and entrepreneurs,
not true in a lot of other areas of medicine, but here people have gone for the easy targets,
mild and moderate depression, anxiety, all of that. And it's pathetic because really this is a group
that once again is kind of left out on the street and isn't getting the attention they
deserve, and they would benefit as much from innovation as anybody else.
Thankfully, in the last three months, I've had meetings with five companies and one venture
capital group that are absolutely focused on this problem and committed to trying to solve
particular aspects of what people with serious mental illness face.
So I'm for the first time getting a little more hopeful about this, but it's been,
for me, a real thorn in my side that with all of this investment and all of this innovation,
it really wasn't having an impact on the people who need it the most.
Right. Well, Tom, look, I feel like I could speak to you all day about this stuff. Your
perspective is so fascinating. The work is super important. And I appreciate you speaking
frankly about the challenges that lie ahead, you know, where tech and mental
health collide. But at least we're doing something. So I appreciate you coming on.
Thanks, Alex. Real pleasure to be with you and love the conversation.
Awesome. The book is Healing: Our Path from Mental Illness to Mental Health. It comes out February
15th, but you can pre-order it now. I also hope you check out Dr. Insel's
op-ed in Big Technology tomorrow. You can subscribe at bigtechnology.com. I hope you're already
on the list, though. It's a good one, and I think you're going to appreciate it. It'll highlight some of
the themes we spoke about today and some new stuff.
Thank you, Nick Guatney for editing and mastering the audio.
Appreciate it as always.
Thank you, Red Circle, for selling the ads and hosting the podcast.
And thanks to all of you, the listeners.
We've got a handful of really great shows coming up.
So if this is your first time listening, please subscribe.
If you're subscribed already, or even if you're not subscribed and want to rate the podcast,
a rating would go a long way.
Other than that, nothing but good wishes for you.
Until we meet again, we'll be back next Wednesday.
So have a great week and we will see you then.