The Dr. Hyman Show - How Social Media And AI Impacts Our Mental Health: Reclaiming Our Minds And Hearts And Healing A Divided World with Tobias Rose-Stockwell
Episode Date: November 1, 2023. This episode is brought to you by Rupa Health, BiOptimizers, Zero Acre, and Pendulum. The rise of social media has revolutionized the way we connect, share information, and interact with one another. While it has undoubtedly brought numerous benefits, there is growing concern about its impact on our mental health. Today on The Doctor’s Farmacy, I’m excited to talk to Tobias Rose-Stockwell about how the internet has broken our brains, what we can do to fix it, and how to navigate this complex digital landscape. Tobias Rose-Stockwell is a writer, designer, and media researcher whose work has been featured in major outlets such as The Atlantic, WIRED, NPR, the BBC, CNN, and many others. His research has been cited in the adoption of key interventions to reduce toxicity and polarization within leading tech platforms. He previously led humanitarian projects in Southeast Asia focused on civil war reconstruction efforts, work for which he was honored with an award from the 14th Dalai Lama. He lives in New York with his cat Waffles. Access more than 3,000 specialty lab tests with Rupa Health. You can check out a free, live demo with a Q&A or create an account at RupaHealth.com today. During the entire month of November, BiOptimizers is offering their biggest discount you can get AND amazing gifts with purchase. Just go to bioptimizers.com/hyman with code hyman10. Zero Acre Oil is an all-purpose cooking oil. Go to zeroacre.com/MARK or use code MARK to redeem an exclusive offer. Pendulum is offering my listeners 20% off their first month of an Akkermansia subscription with code HYMAN. Head to Pendulumlife.com to check it out.
Here are more details from our interview (audio version / Apple Subscriber version): The superpower that social media has provided to us (5:55 / 4:21) How our traditional knowledge systems have been deconstructed (7:39 / 5:15) The challenges of uncovering what is true (12:43 / 10:18) How Tobias’s time in Cambodia led him to this work (15:05 / 12:42) The harms of social media (26:57 / 22:36) Historical media disruptions (32:57 / 28:37) The dangers of misinformation (35:27 / 31:06) Challenges and opportunities around AI (42:09 / 37:58) How governments and platforms can reduce the harms of social media (55:10 / 50:59) Individual actions to improve the impact of social media (1:02:30 / 58:09) Get a copy of Outrage Machine: How Tech Amplifies Discontent, Disrupts Democracy―And What We Can Do About It.
Transcript
Coming up on this episode of The Doctor's Pharmacy.
We need to renegotiate our time with these tools.
We need to renegotiate our attention with these tools.
And there are increasingly better ways of doing that.
Hey everyone, it's Dr. Mark.
I have a question for all you healthcare practitioners
listening to this show.
How much time do you spend
ordering functional medicine lab tests?
Well, lab work is a critical tool for functional medicine. It helps practitioners find the root cause of illness, but the process of
ordering, managing, and tracking lab results can take hours of time away from actually caring for
your patients. And that is why I want to talk to you about Rupa Health, the absolute best way to
order testing from over 30 different lab companies like Genova, Dutch, DSL, and lots more. Rupa makes
ordering labs ridiculously simple, which means not only are
you able to spend less time on tedious processes, but it also allows you to provide a higher
standard of care for your patients. If you're not already one of the tens of thousands of
healthcare practitioners using Rupa Health, make sure to go sign up for a completely free
account at rupahealth.com. That's R-U-P-A health.com. Supplements are one of those things
that I'm always being asked about. Is it worth
spending the money on them? Do we need them if we really eat well? And can your body even absorb
them? The answer to most of these questions is, it depends. One important one is magnesium. Most
soils have become depleted of magnesium, so it's a tough mineral to get enough of through diet
alone. 80% of Americans are actually deficient or insufficient in magnesium. And since it's crucial
for hundreds of reactions in the body and impacts everything from metabolism to sleep,
neurological health, energy, pain, muscle function, and more, it's really important that we strive to get enough of it.
Magnesium also plays a role in our stress response.
And everyone I know could use a hand in better managing stress to promote better overall health.
I like to call it the relaxation mineral.
My favorite magnesium is from a company called Bioptimizers. Their magnesium breakthrough formula contains seven different forms, which all
have different functions in the body. There's truly nothing like it on the market. I really
noticed a difference when I started taking it. I've tried a lot of different magnesium products
out there. I also love all their products because they're soy-free, gluten-free, lactose-free,
non-GMO, free of chemicals and fillers, and made with all natural ingredients. Plus, they give back to their community. For every 10 bottles sold, they donate one to someone in
need. And let me tell you, a lot of people need magnesium. Right now, Bioptimizers is offering
their Black Friday and Cyber Monday deal to my listeners during the entire month of November.
They're offering the biggest discount you can get and amazing gifts with the purchase. Just go to bioptimizers.com forward slash Hyman and use the
code Hyman10. That's B-I-O-P-T-I-M-I-Z-E-R-S.com forward slash Hyman. And now let's get back to
this week's episode of The Doctor's Pharmacy. Welcome to The Doctor's Pharmacy. I'm Dr. Mark
Hyman. That's pharmacy with an F, a place for conversations that matter.
And if you've been worried about the impact of social media and technology on our minds, on our mental health, on our well-being, you're going to love this conversation.
Because it's with a good friend of mine, Tobias Rose-Stockwell, who wrote a book called Outrage Machine: How Tech Amplifies Discontent, Disrupts Democracy, and What We Can Do About It. Thank God for the what we can do about it, because otherwise it's a very depressing story. It would be a horrible broadcast to listen to. So Tobias is a writer. He's a designer, a media researcher. He's worked
in media for a long time. His work has been featured in The Atlantic, Wired, NPR, the BBC,
CNN, and lots more. His research has been cited in the adoption of key interventions to reduce toxicity and
polarization within leading tech platforms.
And I don't remember a time when we had more polarization in our society in my lifetime.
He has previously led humanitarian projects in Southeast Asia.
He's focused on civil war reconstruction efforts.
And for that work, he was honored with an award by the 14th
Dalai Lama. He lives in New York with his cat, Waffles, and he's an awesome dude. Welcome,
Tobias. Thanks so much for having me, Mark. Really happy to be here. Okay, well, you know,
this is a topic we've covered a little bit on the podcast, but I want to dive deep into it because I
think, you know, we are not really aware that we're living in the matrix. And we may not be plugged into a pod like Keanu Reeves
in the Matrix movie, but it's not much different, where our worlds are being curated, controlled,
and manipulated by forces that are often invisible and that change our behavior,
change our beliefs, change our actions, and ultimately lead to
serious problems, including mental health issues, anxiety, increased rates of suicide.
And it's a big problem.
Many of you might have seen the movie The Social Dilemma, which was a very sobering
movie about the impact of social media.
There's a new online video, basically a lecture by Tristan Harris and his colleague Aza Raskin, called The AI Dilemma. And we're going to go both into issues around AI and social
media because we're at this next inflection point where if we don't deal with this, it's going to
get out of control. And I think there's increasing awareness from even the leaders in the space
that we need to start being more judicious about how this gets rolled out. With social media, it just kind of was like, you know, shoot the gun and then see what happens. Like, fire, ready, aim. So, you know, the rise of social media has really
revolutionized the way we connect to share information, interact with each other. It's
wonderful. I use it for my business. You get to watch me on Instagram. It's awesome. But it also has significant effects
on our health and our well-being. So tell us from your perspective, as you begin to research this
and your book, Outrage Machine, how has the internet broken our brains? What do we got to do to fix it?
I mean, give us a high level, the real...
Yeah, let's go. A simple question there, yeah.
And then I want to get into why you got into this, what you're doing, and more.
Totally. Yeah. So, you know, social media specifically has given us these very strange new superpowers. And the primary superpower, I think, that it has given us, that we didn't really recognize we were getting when it happened, was the superpower of hyper-virality, right?
The ability to instantly say a thing, feel a thing,
post a thing and have everyone you know
instantly feel it themselves, right?
So that hasn't really happened in our species' history. We haven't had this kind of magical power to just beam our thoughts anywhere on earth. And we've been struggling with it ever since this particular set of features was given to us on social media.
And we don't really think about it.
We like the power.
But it actually is just kind of like Spider-Man, right?
Great power, great responsibility.
You end up doing a great deal of damage when you don't understand how to use these kind of magical new abilities we've been given.
So, yeah, I think a really big, important piece of it is just recognizing this is something wholly new in our species' history. And also, it kind of follows an interesting historical path of disruption and renegotiation
and, you know, learning to manage our new powers that we've had in the past. So I'm hopeful that
we can figure it out.
But the chaos of the last decade specifically has been a result of this new superpower.
Yeah. So it's fun if you're watching a gorilla video and I want to share that with my friends. It's awesome. But it gets very dangerous with misinformation and, increasingly, things that are fabricated. I mean, that's the scary part about artificial intelligence. You have somebody looking like they look, sounding like they sound, and being totally fabricated. So you could have Trump or Biden completely AI-manufactured, saying stuff that they didn't even say.
Right, right.
And so we don't know what's true, what's not true. I mean, I'm working on a project called Function with my partner, Jonathan Swerdlin. And, you know, one day he sent me a video of this doctor talking about this lab test.
word line. And, you know, one day he sent me a video of this doctor talking about this lab test.
And he says, what do you think of using this for, you know, our platform, which is, you know,
Function Health is a lab testing platform. And I'm like, wow, that's great. Who is this doctor?
And they go, well, she's an AI-generated doctor.
Not even a real person.
Oh, boy.
And I'm like, whoa, this is scary.
So, but, you know, the virality piece, how is it actually shaping our behavior in a negative way? We know the positive benefits, but how is it actually
hurting us? Because I think this is where I kind of worry as a doctor, what's happening to the
mental health of children, you know, and increasingly of adults, who are hearing, you know, one perspective.
I mean, you click on one conspiracy theory in your social media and you're going to get every
single one. I mean, I was in Hawaii once, and they had these hippies that lived up in the country, and they're like, well, there's no such thing as germs, and Bill Gates implanted a chip in our neck, and, you know, the earth is flat. And I'm like, what?
Totally. Yeah, totally.
Like, how do you get all these things in one?
Yeah, you know, it's funny. When we think about conspiracies and stuff, you probably remember, back in the day, conspiracies used to be kind of fun, right? Who killed JFK, you know, the alien autopsy, that kind of thing. They used to be actually kind of fun entertainment for us, right?
But there was the National Enquirer, right?
Exactly, exactly. Yeah, you know, Hillary Clinton and the bat person had a baby, that kind of thing.
And there is something that's kind of inherently just joyous and irreverent about engaging with conspiracies, right? You're like, but what if? And it's like a fun sharing activity with our friends. It feels really positive a lot of the time, especially when we're feeling a little disempowered by the man and society and our jobs. It's like, yeah, but what if? Maybe.
But we don't really have a mental model for understanding reality without what is, basically, historically, the media, right? Like, we all use proxies to understand the world writ large, and we need to rely on other people to tell us what is true.
And that involves a whole system of verification, of checks and balances, of citation that comes from 100-plus years of institutional knowledge that has gone into the world of academia and journalism.
And it's not without faults.
Don't get me wrong.
Like, there are problems with that system and ways it can be improved, but it's much, much better than just the instant viral shares and the instant confirmation bias that comes from clicking on something and getting more of it. And you're like, oh, see, I told you so. It was that person that did that thing, right? We can kind of find any thread that confirms our inherent beliefs, and confirmation bias is a real fundamental problem when it comes to understanding the world writ large. So we require these institutions. We require better knowledge systems to actually understand the world. And social media has exploded our traditional knowledge management systems, which historically used to be journalism. Like, journalism has really deeply suffered in the age of social media. And I think you can see that across the board with the vast new conspiracy theories that are becoming...
Yeah, I mean, you had the news. It was like three channels. It was like Walter Cronkite and Brinkley and I forget the other guy. But they all basically said the same thing, reporting on stuff that was factually checked and verified.
And now investigative journalism is, I mean, I was reading an article about some scientific
report from the WHO yesterday about aspartame and the challenges of potentially causing
cancer.
And I was like, wow, this journalist really talked to a lot of people, but they didn't
actually do the homework of looking at the studies and actually dissecting them.
And as a scientific reporter, they should be able to look at the data and talk about
the validity of the data, the challenge of the data, the nuances of interpreting it,
help people make educated decisions, to understand the context.
Not just say, oh, well, the American Beverage Association guy said it's fine, but the
WHO showed it's not.
And the FDA supports the Soda Pop Association, or we call it the Soda Pop, but it actually was called the Soda Pop Association.
It's now called the American Beverage Association.
And, you know, it's like, wait a minute.
Like, why don't you actually tell us what the science showed so I can make an informed decision rather than just tell me what all these experts said?
And then I'll have vested interests and I'll have conflicts of interest, right?
Totally.
And that's hard, right?
I want to acknowledge that it is actually very difficult to figure out what is true. Truth has been something that our species has struggled
with forever, right? Finding the actual truth of the matter: there are very few points of reality about the wider world that we understand on our own, that we learn about on our own, right? We actually rely upon these proxies, these trust proxies, in our extended networks of humans, whose job it is, ostensibly, to figure out what is real and what is not real. We only see our day-to-day lives, and we have to hear these stories from other people to really figure out what is true. And yeah, so historically, journalists would do
a decent job of that. I think that one of the biggest difficulties now is recognizing that
the media is very imperfect. And we're exposed to so many anecdotes about journalistic failures
that it becomes hard to trust the entire enterprise, right? And we all have at least one of these in our pocket, where we're like, you can't trust that newspaper outlet, you can't trust that particular journalist, or, you know, I don't really believe them. And that's hard, right? That's hard. And it is
important, I think, to put this on a spectrum of kind of sense-making, right? In that they probably know better than your neighbor, or your uncle Joe, about this particular topic. They don't have the best possible information; they're not in the academy studying this stuff every day. But if you put them on a spectrum, they're probably at a higher point of accuracy than your average person that you're going to see on Facebook, or your average, you know, relation that you follow on Twitter. And so that's been one of the biggest things, is kind of reckoning with how
truth is hard. Truth is just a very difficult thing. But I want to be a little bit hopeful here: we actually know how to figure it out. We've done this before a few times in the past, when we've gone through periods of real chaos and misinformation. And there are tools that we have to actually help sift through what is real and what is not real.
So, Tobias, how did you get into all this work? I mean, you know, you've done work in
Cambodia, you've, you know, helped deal with some of the aftermath of the civil war and the traumas there. You won an award from the Dalai Lama for it. Like, how did you get into this
whole work around technology and its influence on our health and mental health and wellbeing?
Yeah, yeah. So I had an experience. I have kind of a weird backstory here.
So I just want to acknowledge that this is kind of a curious entry into the world of
technology.
But when I was 23, I was traveling through Asia as like a wee backpacker after college.
And I met this monk, this Cambodian monk, who basically invited me out to the middle of the countryside, where he lived with his extended family. And I was like, yeah, sure, why not, I'll go out with you. That sounds great. I'd been doing some volunteer work here and there, and I was very curious about the culture. And he brought me out there, brought me to this squat little pagoda in this dusty square, and sat me down on a bamboo mat. And I didn't meet his family. I met hundreds of villagers coming, a council of village elders. And they got up one by one and said, thank you for coming and for agreeing to help us rebuild our reservoir. Thank you for helping us with this project. We are so grateful that you've agreed to do this. I'm like, excuse me, I'm sorry, I agreed to do what? You're thinking of maybe the wrong person. This was not discussed before. But basically, this monk ran a local NGO with a bunch of other monks in this community, in Siem Reap, this tiny little province in Cambodia. And he was looking to rebuild this massive irrigation system that had been destroyed during the civil war there. And the civil war was really bad in Cambodia. If you've ever heard of The Killing Fields, that movie was entirely based on real-life events in Cambodia. Really, like, a terrible, horrific time. The country saw the highest per capita loss of life in modern times.
It was like 3 million people in a very small population, right?
Yeah, it was over a million people in a very small population.
And it was a weird sort of civil war.
It wasn't like one tribal group against another tribal group, which is usually what happens in civil wars.
It was actually kind of an auto-genocide, in which there was this hyper-partisan political extremism that just swept through the whole country and turned the nation against itself. It was this really scary, weird thing that happened in the '70s. Anyway, basically, I was motivated to help these monks. I said, look, you have the wrong guy, but I'll at least be an advocate on your behalf. And so I wrote an email
to some friends and family back home. It was kind of an impassioned email. It had a lot of emotional resonance in it, talking about this experience of meeting these monks. I had an email list of friends and family, but that email specifically went into a listserv
that a friend of mine had developed, that you would recognize immediately today.
This is 2003, so this is before Facebook,
before Twitter, before anything.
Yeah, yeah.
But you'd recognize it as social media.
It was a platform to connect friends of friends.
It was for my extended community in California.
We were all interested in a specific kind of music, and we would hang out there.
But if you looked at it today, you'd be like,
oh, this is a social media platform.
Anyway, my friend was very ahead of the curve.
He's a programmer ahead of the curve.
And he, along with another person, built this platform.
That person went on to be one of the first engineers
at Twitter, actually.
But the email, rather than just getting an, oh cool, that's a weird story, actually went hyper-viral in my existing community. And I got this huge outpouring of interest in this project. You know, friends of friends of friends, they would pass it on, and all of a sudden I, being this poor backpacker traveling through Asia, had people, random strangers, emailing me being like, how can I help? What can I do to support this project? This is amazing.
So in that way, suddenly I had support for this project. I was pulled by this new viral superpower a little bit before other people were, and was able to suddenly find this huge reservoir of interest in rebuilding a reservoir in Cambodia. I thought it would take two months and fifteen thousand dollars to rebuild this thing. I ended up living there for almost seven years rebuilding the irrigation system, and raised a quarter million dollars for this community to rebuild it. It was a huge project. We found landmines. We had to demine it. It was a whole ordeal. But the entirety of its inception came from touching this new superpower, virality, a little bit earlier than other people. So I've been tangling with this concept of the influence of virality in our lives for a little bit longer than most people.
I went back to Silicon Valley and worked with a lot of designers and activists and developers who were working to make social media a really good thing. We were very interested in trying to make social media live up to its great promise. You remember the Arab Spring, you remember this era of, like, 2011, 2012, when everyone was like, social media is great for democracy.
I mean, think about it. For the world, the Arab Spring allowed people to coordinate and communicate to help liberate themselves from oppressive governments.
Absolutely.
But in the same way, it was used on January 6th to coordinate and help people attack the Capitol and create an insurrection in America.
So it can be used for good or bad.
Yeah, exactly.
It's like you can use a gun to sort of feed yourself by hunting deer or you can kill somebody with it, right?
Right, exactly.
Exactly.
And I think I'm a little skeptical of some of the more pernicious narratives about social media. I know it was not designed to be... Mark Zuckerberg is not like a Machiavellian genius trying to ruin the democracies of the world. I think he really does care about solving these problems. He makes
a good scapegoat and so do a lot of the owners of these companies and stuff. But there's these
inherent harms, what I call a dark valley of hidden harm that comes when we start using tools, uh,
when there is this huge mass adoption and we just don't know how it's going to be used for ill.
And, uh, and understanding that in the context of AI, which you, you mentioned appropriately,
uh, similar thing, I think we're going to start to see huge benefits. And we're also
going to see this very distinct set of harms that are going to be hidden from us until they're right
in front of us. And I think we need to learn to manage that and approach that with a real sense
of caution because these tools are so, so powerful and getting more powerful by the day.
Yeah. It's almost like Charles Dickens, right? It's the best of times,
the worst of times. You know, these tools can be used for incredible good, but also incredible
harm. We saw that with the movie, The Great Hack. We saw that with The Social Dilemma.
And now we're going to see it with the, you know, increasing awareness around AI.
And I think, you know, when you look at the political process,
when you look at the polarization of society, when you look at the enmity... You know, I was just at a Dead show at the Gorge in Washington State.
And it was so beautiful.
There were 30,000 people there.
And I just looked around and there were people from, you know,
all sizes and shapes, colors, well, many tie-dye colors for sure. And everybody was sort of in this culture, and the Dead culture is sort of like this, of collaboration, of friendship, of helping each other, of celebrating together, dancing together, and enjoying the music together. And, you know, I imagine if you had political conversations with all of them, they'd all be fighting, pulling, you know, shooting each other, tearing each other's hair out. But there was this moment of, like, wow, we're all people first. You know, we're all Americans second, or whatever we are, wherever we're from. And then we're whatever else we are, you know, Republican, Democrat, communist, anarchist, libertarian, whatever.
None of that really matters, you know, when it comes to our humanity.
And it seems like we've lost this common thread of our humanity, the ability to sort of look each other in the eye and understand that, you know, maybe we have differences of opinion or different perspectives.
And it doesn't mean that we have to hate each other. And it seems like, you know, the algorithms are so driven by things that are really fostering the worst in humanity. They're designed to keep us addicted, to keep us engaged. And those algorithms are based on this psychographic profile that actually engages with our limbic system, our ancient reptilian brain, and keeps us, you know, in this state of fight or flight and stress response. But it also has another possibility, right?
It can really help in a very different way
to elevate people, but the problem is that we haven't,
you know, we haven't built a system of social media
that actually prevents this.
In other words, the algorithms are set up to increase eyeballs. But what if the algorithms were set up to elevate our consciousness, and the systems of
engaging that were different? What if you had to pay to use Instagram or pay to use Facebook or
pay to use Twitter, and their revenue came not from advertising, which encourages the, you know, sort of stimulation of our dopamine receptors and our worst instincts, but instead did the opposite. So there could be different business models that allowed for this.
Now it's harder to get 3 billion users if you have to pay for it. Right. But, but I think we
might want to rethink how we're doing it. Hey everyone. It's Dr. Mark. For me, sustainability
is the name of the game. So when I hear about a product that's both healthy for you
and good for the environment, I get excited.
And that's why I want to talk to you about Zero Acre Oil.
Traditional seed oils are a huge problem.
They've been linked to a widespread array of health issues
and environmental issues, and yet they're in everything we eat.
Zero Acre Oil is an all-purpose cooking oil
with more healthy monounsaturated fat
and significantly less inflammatory omega-6 fat than even avocado and olive oil. And since it's made by fermentation,
it has an environmental footprint that's 10 times smaller than vegetable oils. I love the fact that
I can swap out my regular cooking oils with Zero Acre. Its clean, neutral taste lets the flavor of
my food shine, and its high smoke point makes using it a breeze in any recipe. Head over to ZeroAcre.com for an exclusive offer.
Go to ZeroAcre.com forward slash mark and use the code mark at checkout to claim this deal.
That's Z-E-R-O-A-C-R-E dot com slash mark.
One of the most important keys to unlocking metabolic and gut health is Akkermansia muciniphila.
That's a mouthful, but it's really important.
Akkermansia is a bacterial strain known for strengthening and regulating the gut lining. And it's also one of the most
important determinants of maintaining your long-term health and metabolism. In fact,
some of the oldest living people in the world have some of the highest levels of Akkermansia in their guts. And increasing the Akkermansia in my own gut had such a huge payoff for my health
that I've become a super fan. And now it's easier than ever to up
your Akkermansia thanks to Pendulum Therapeutics. They've done something that nobody else could do.
They're the first and the only company to offer products that directly contain Akkermansia muciniphila. If you want to try it for yourself, Pendulum is offering my listeners 20% off their
first month of an Akkermansia subscription with the code HYMAN, H-Y-M-A-N. Just head over to PendulumLife.com
to check it out. That's P-E-N-D-U-L-U-M Life.com with the code Hyman for 20% off.
And now let's get back to this week's episode of The Doctor's Pharmacy.
We've been studying, you know, mental health and social media for years. What do we know about the pitfalls and the harms, as well as how it impacts our overall levels of stress and mental health issues? And how can we understand how that technology is used in a little more granular way? And then eventually I want you to take us through how we can maybe reimagine this, to get to the promise at the end of your book's subtitle, How Tech Amplifies Discontent, Disrupts Democracy, and What We Can Do About It. I'm very interested in what we can do about it. But I want to outline the problem first, and then we can get into what the hell are we going to do, because we're screwed.
Yeah, absolutely, absolutely. So I think it's important to think about the harms of social media not in terms of just one specific thing. You know, these are very complex systems. Humans are very complex, right? The human body, the human mind is very complex. So if we approach it as if there's just one silver bullet, we're not going to get anywhere, right? If it's just fixing advertising, I don't think that's actually going to solve the problem necessarily. So I think it's more important to think about it like you would maybe a human body and an illness, right? You're not going to solve your athlete's foot with the same thing that will solve your broken arm, right? So I think there are some good analogies from the world of medicine in terms of thinking about how social media influences us and how we can intervene and make it better.
So let's just talk about the problem for a second, which I think is important to be clear about. One of the primary problems of social media, in terms of mental health and anxiety, is that it's becoming our primary source of news and information. For a lot of people, even if you still go to The New York Times or Fox News or wherever you get your news, social media acts as a filter on that news. More than half of American adults get news from social media today, and social media posts tend to be negatively valenced when it comes to news. We tend to respond most immediately to news that is emotionally negative: stuff that is outrageous, missing context, anger-inducing, disgusting. That's a really good signal for algorithms to track. If you build a basic engagement algorithm that tries to track what people are interested in (this actually goes back to traditional news: if it bleeds, it leads), the algorithms have figured that out. If it gets you stuck on an item, you are going to keep responding to it.
That's right. Is that why I get so many gorilla videos?
Maybe. Perhaps.
I love it, actually.
Totally. And I want to calibrate that: we've been with these tools long enough now, I think, to start to see new versions of slightly healthier algorithmic curation, ones that try to keep you happy rather than depressed, sad, and doomscrolling.
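The "basic engagement algorithm" described above can be sketched in a few lines. This is an illustrative toy, not any platform's actual code: the posts, click counts, and valence labels are invented for the example. It just shows how ranking purely on engagement lets outrage float to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int       # users who engaged with the post
    impressions: int  # users who saw the post
    valence: float    # -1.0 (outrage) .. +1.0 (positive); hypothetical label

def engagement_rate(post: Post) -> float:
    return post.clicks / max(post.impressions, 1)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Rank purely by engagement; the algorithm has no notion of accuracy
    # or well-being, so outrage that drives clicks wins the feed.
    return sorted(posts, key=engagement_rate, reverse=True)

feed = [
    Post("Calm explainer on zoning policy", clicks=40, impressions=1000, valence=0.2),
    Post("OUTRAGE: you won't believe what they did", clicks=300, impressions=1000, valence=-0.9),
    Post("Cute gorilla video", clicks=200, impressions=1000, valence=0.8),
]

for post in rank_feed(feed):
    print(f"{engagement_rate(post):.2f}  valence={post.valence:+.1f}  {post.text}")
```

In this toy feed, the negatively valenced post ranks first simply because it drew the most clicks, which is the feedback loop being described.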
Right.
But we have a word for doomscrolling, right? Everyone knows what that is; everyone has done it at this point. It's a strange new pathology that has no clinical definition yet, but it's something we absolutely feel. And when we're exposed to too many of these negative stories, too many of these negatively valenced posts, it creates a sense of learned helplessness: when we experience a stressful situation consistently and repeatedly, we basically start to believe that we can't control the situation, that we can't respond, that we can't do anything to solve the world's problems. And that's a problem in itself. We need to feel like we can actually tackle the problems we're facing. If social media has given us this huge new body of available problems and issues, and we feel helpless to solve them, that's a recipe for extreme depression. It's a recipe for disaster in terms of our mental health.
And that's really important.
When we think about it, the genie's out of the bottle, right? Our identities, our beliefs... I can't think of a better analogy than The Matrix. I feel like we're all plugged into the Matrix and we need to freaking unplug so we can actually have a sense of what's true and real. And we don't even know where to go to find out what the truth is anymore. It's disorienting for people. It's disorienting for me, and I feel like I'm fairly well educated, fairly well read.
Yeah.
I pay attention. I travel all over the world, I meet all sorts of people, I listen to different perspectives. And still: what actually is true? What is a person to do as they try to navigate their life and make sense of the world? In a way, social media has helped us make nonsense of the world. How do we start with sensemaking again? How do we deal with these digital technologies we have now? Is there a way to put the genie back in the bottle? And I eventually want to get to artificial intelligence, where the genie is not quite fully out of the bottle yet. What have we got to do?
It's a scary genie. It's got a little pinky out of the bottle right now.
Yeah.
So I think focusing on the issue of sensemaking is really important.
So in researching this book... this book is actually half a history book. Starting about halfway through, I go back through every previous media disruption in history, as far back as the printing press, trying to understand what happens when you increase people's ability to see knowledge, share knowledge, and be emotionally excited by knowledge. Every major media technology has had a tremendous influence on our species. So about halfway through, the book goes back to Martin Luther and the printing press, and what happened when we were introduced to it. It turns out the printing press was arguably the most violent invention introduced to continental Europe up until that point in history. It caused huge schisms within the existing power hierarchies, it totally upset society, and it caused about a hundred years of civil wars.
The printing press. Literally books.
Right. And you wouldn't, at the end of that period, have said, no, we don't want the printing press, we don't want all these books, we want people to go back to the old ways. But there was this deep, dark period in which people were deeply confused. In that era, tolerance, the very idea of tolerance, was actually a sin. If someone was of a different political or religious persuasion, it was kind of your job to confront them about it, or be violent with them about it. So think about how disruptive that was, going from one way of being in the world to another. We don't tend to think of information technology as such a disruptive and violent thing, but it absolutely can be. And the reason is that it confuses us. It gives us access to huge new models of moral reasoning about the world, and it exposes us to a tremendous number of possible outrages. Many of those outrages are real. Many of them are not. Figuring out what is worthy of our attention is really part of the problem we're facing right now.
So, to lean toward some optimism and solutions here, coming back to the beginning of our conversation: I can't emphasize enough how problematic misinformation is. As Americans, I think we have a healthy skepticism of authorities. We have this anti-authoritarian disposition: don't trust the government, don't trust the experts, we can figure it out on our own. That's a very American disposition. But there is a real difference between authoritarian speech and mediated speech, meaning speech and information that comes from media entities built to help parse truth from falsehood. They don't always get it right; they're not always going to give you the exact right, truthful item. But it's often much better than your average person trying to figure it out by doing their own research online. So we do need these proxies, these middle layers. And we're actually lucky, insofar as good information has a fingerprint. By good information I don't just mean information I like; I mean accurate information has a fingerprint. It tends to be well-cited. It tends to have gone through a few layers of refutation and peer review, with people trying to figure out whether or not it's accurate. And we can see that in how the information travels. If it's one person's idea that comes to you directly, it's less likely to be true than an idea that has gone through three or four cycles of other people calling bullshit on it, trying to figure out if it's true or not. Because we're much better at identifying the failures of other people's logic and other people's assertions than we are our own. And that's really the point of free speech in the first place: so that we can share, criticize each other openly, and improve the available knowledge for everyone.
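The "fingerprint" idea, citation, cycles of refutation, and independent corroboration, can be expressed as a crude score. The signals and weights below are my own invention for illustration, not a real credibility model from the book or from any platform.

```python
def credibility_score(citations: int, review_rounds: int, independent_sources: int) -> float:
    """Crude proxy for the 'fingerprint' of accurate information:
    citations, cycles of refutation/peer review, and independent
    corroboration all push the score toward 1.0. Weights are arbitrary."""
    raw = (0.2 * min(citations, 5) / 5
           + 0.5 * min(review_rounds, 4) / 4
           + 0.3 * min(independent_sources, 3) / 3)
    return round(raw, 2)

# One person's unvetted idea vs. a claim that survived several cycles
# of other people trying to knock it down:
print(credibility_score(citations=0, review_rounds=0, independent_sources=1))  # low
print(credibility_score(citations=5, review_rounds=4, independent_sources=3))  # high
```

The point is not the particular weights but the shape: a claim that has passed through more independent checking layers earns more of our attention.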
Yeah, it's increasingly disturbing to hear about censorship happening, a free range of opinion not being heard, debate being shunned. And I think, wow, what kind of society are we in, where we're burning books? I just watched the Ken Burns documentary, The U.S. and the Holocaust, and I thought, gee, we have this relatively new rise of division in society, but it always existed. It's always been a threat in America, right? The North and the South, the slave owners and those who weren't, the isolationists and the globalists. I think we're always prone to this. But how do we find, as Abraham Lincoln talked about, our better angels? How do we get to the better angels who will inspire us, instead of taking us down the path of the worst aspects of humanity? There's been war and violence and rape and destruction for millennia, but human consciousness seems like it's slowly getting better. And yet these forces now at play in technology seem to be taking us out of whatever age of enlightenment we were in. How do we find our way back from that?
Yeah. I think it's really important to recognize that, and this is why I keep coming back to the misinformation point: the feeling right now is one of threats everywhere. We feel deeply threatened by the world, by our political enemies, our ideological enemies, the enemies of the identities and the people we hold dear. And that is something social media does very well: serve us threats, more than traditional media would. Traditional media would focus on a single threat at a time. Social media exposes us to basically infinite threats, always. If you're worried about something, you can find an anecdote that represents that deepest fear. And social media is very, very good at serving those up; the algorithms that prioritize certain content over other content, and our own biases, play together to make these threats more apparent to us than they would otherwise be.
And something strange happens when we're exposed to a lot of threats. There's basically a tribal switch that flips in our brains, and we start to affiliate ourselves more strongly along in-group and out-group lines. We look for safety in identities that feel more like us, and we look to denigrate the out-groups that are threatening. If you're curious why there's so much hashtag-identity-this, hashtag-identity-that on social media, that's actually one of the reasons: everyone feels a little bit threatened on social media, and they feel, I need to declare my allegiance, I need to protect the things I hold dear in this space. I think it really does come down to this fundamental idea of threat: if we're exposed to too many threats, these basic tribal emotions increase dramatically. There's research showing this. Jay Van Bavel at NYU has a great book on this topic called The Power of Us, which I'd recommend; it shows how much identity shapes our behavior, particularly online. It influences how we see others. It's like a lens we suddenly put over our eyes: I see you, and you're not just a human, you are now a Republican, you are now a Democrat. These identities become far more instantaneously salient to us because we're looking at everything through the lens of social media. And unfortunately, it doesn't just stay on social media; that's one of the biggest problems here. These narratives stick with us. They follow us to our dinner tables, to our congregations, everywhere we go. And if the threat is pernicious enough, if it's scary enough, it will keep running a little process in the back of your head and infect most of your interactions.
So how do we start to roll this back? That's the last part of your book; let's go into that. I think people understand that we're in this crisis. But before we get into how to fix it, let's touch a little on artificial intelligence. I hope to have Tristan Harris on the podcast soon; he's fighting this fight in a very vigorous way, trying to ring the alarm bell before it's too late, and it's accelerating so fast. I wonder if you could talk about how maybe we're now, with artificial intelligence, where Facebook was around 2005, when nobody was really paying attention. Everybody thought this was great. There were no downsides, no regulation, no oversight. It was just striking. Even regulators don't have an idea of what the heck it is. Even recently, a year or two ago, I think one of the senators questioning Mark Zuckerberg at a hearing asked, well, how does Facebook make money again?
Like, advertisement, right? Which was just amazing.
So, not our best moment in our political understanding of the world.
Yeah.
So tell us: why should we be concerned? There are a lot of problems with social... I mean, with artificial intelligence. I'm building a platform that uses artificial intelligence to help people with their own health information, with their lab tests, and to bring functional medicine into wider acceptance, using the power of machine learning with your own health data: from wearables, from your lab data, from your medical history. And it's a good thing, but there's a dark side. So how do we think about this? Can you explain where the challenges are, and where the opportunities are for us to shift things before it's too late?
Yeah, absolutely. So first, I think it's really important to note that AI is going to be, and in many ways already is, a true miracle of an economic engine, and an incredible tool for medicine, for understanding the world better, for creating incredible new opportunities for everyone. I want to note that upfront: the opportunities here are truly magical. Anyone who's used DALL·E or Midjourney or ChatGPT can instantly see that these are powerful, powerful tools with tremendous potential.
The greatest danger I can see, as it relates to our cohesion as a society, to our ability to coordinate together and cohere, is to our perception of reality itself. Reality is becoming a little tenuous even without AI, right? Capital-R Reality is just very difficult to pin down. And when there are enough countering narratives, when you see enough AI-generated images of a political event, we stop believing in base reality. We start just going with our gut, with confirmation bias, with our initial reaction: oh, that thing didn't really happen, I saw another image online. If you can't know what's true, the important thing becomes just going with the feeling that felt true, with the claim that resonates with my political message. And in a world like that, politicians basically operate with impunity. They can say, oh, no, that didn't happen. That thing I said? I didn't say that; it's just a deepfake. And that's a huge problem for democratic norms and discourse, because we're not able to check our biases, not able to check them against reality. So the coupling of AI plus social media, I think, poses the greatest systemic risk to our ability to understand what is actually happening.
I think, again, fortunately, we do have systems in place to help us figure out what is true and what is not. A good heuristic is: don't believe everything you see on social media, especially in this upcoming election. I think this is going to be a very, very hard time for our country, because AI is getting extremely good. It's becoming basically zero-cost to generate deepfakes, to generate any images that are politically triggering, at the same time that social media companies are pulling back from the moderation systems that might provide some mitigation. There are big problems with bias in moderation tools, and big problems with censorship on these platforms, for sure. But there are ways to manage it that help us reduce the spread of the most viral and most fake content. And it's important to note that information has patterns. If you look on social media, the stuff that tends to travel fastest tends to lack context and tends to be emotional, and it can often be false as well. I think a good model for this is Daniel Kahneman's work; he's the behavioral psychologist who studied our brains and how we process information. He found that we have two fundamental systems for processing information. System 1 runs on heuristics and snap judgments; it tends to be emotional, reactive, reflexive. System 2 tends to be more thoughtful and reflective. It's a little more expensive for our brains to run, but it's the part of the brain where we do hard math problems and try to make hard decisions.
The social media-
I'm trying not to do any more hard math problems
like I did in high school.
I'm done with that.
Social media is built for system one and not system two.
There are elements of it that could be oriented toward system two, I think. And when it comes to parsing good information versus bad information, the social web could be designed to do that parsing better. I just want to note, you can look at a couple of platforms we use regularly right now: they have their own problems, but they're far better than social media at parsing the truth.
There's Wikipedia, which is a great reference point for a lot of information, a free resource for the world. And there's Google, which actually has citation embedded in its design. That's how Google became a search powerhouse: it rank-ordered results based on references and citations.
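The citation-based ranking alluded to here is the idea behind Google's original PageRank algorithm. Below is a simplified sketch; the three-page link graph and the 0.85 damping factor are illustrative, and real search ranking involves far more than this.

```python
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iters: int = 50) -> dict[str, float]:
    """Toy power-iteration PageRank: a page is important if important
    pages link (i.e. 'cite') it."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}  # teleport share
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[page] / n
            else:
                for q in outlinks:
                    new[q] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

# Page C is cited by both A and B, so it ends up ranked highest.
ranks = pagerank({"A": ["C"], "B": ["C"], "C": ["A"]})
print(max(ranks, key=ranks.get))
```

The design choice worth noticing: rank comes from the structure of who cites whom, not from the content of the page itself, which is exactly the "fingerprint in how information travels" point made earlier.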
So I just want to...
Although Wikipedia is challenging, because if you read Wikipedia, I'm a quack and functional medicine is bullshit. So there's somebody controlling that, and we tried to fix it. There are people who have agendas who are actually manipulating the content toward their particular perspective. Pharma has billions of dollars to spend on its agenda; it's going to undercut things that challenge the existing and prevailing paradigm. So I've been shocked to find this, because I used to think, okay, Wikipedia, I can rely on this information. But it's actually not accurate, and it's often wrong. When I'm the one in the Wikipedia article, or it's something I know very well, I'm like, geez, it's kind of not a hundred percent.
Yeah, definitely. People make mistakes in these.
Now, maybe I am a quack. But so far people are getting better. Or maybe it's just because I'm a nice guy. I had one doctor at Cleveland Clinic telling me, oh, your patients get better because you're just a nice guy. I'm like, I don't know: you're a nice guy, and they don't get better from the same problem.
No, you're absolutely right that each one of these tools has problems associated with it, for sure. Again, I think it's really important to put them on a spectrum: is it hearsay, or is it someone whose job it is to actually try to figure out what is right and what is wrong? And it's important to try to improve the tools that we have, to really fight to make things better and not throw out the entire baby with the bathwater, if that makes sense. Because we need better tools for understanding the world.
I mean, I'm looking at it right now. It says I was born in New York City. I was born in Barcelona. So, no way.
Yeah.
I'm just kind of like, okay.
Yeah.
I mean, it's gotten better, actually. It's gotten longer and better. Instead of saying it's unproven quackery, it now says it's a controversial form of alternative medicine. Which it actually isn't; it's not even alternative medicine, it's just an operating system based on systems biology.
I'm actually really curious if you go through... and you've probably done this before, tried to fight with the moderators on this. I'm curious.
Yeah, we have. We spent hundreds of thousands of dollars through the Institute for Functional Medicine, and it's brutal. There was just no way to fix it.
Yeah, that's interesting. Okay. Well, I would say: don't distrust Wikipedia entirely, but you can be skeptical of Wikipedia, for sure. I think it's important to be skeptical of things, especially emotionally valenced content; that's problematic. And I'm sorry you've had that interaction.
Ah, don't worry about it.
It probably doesn't matter to you that much. But I do think it's important to look for tools in areas where more people are involved in the process of verifying things. And I would hope they're able to update that in process. It sounds like there's probably one moderator there with a bee in his bonnet when it comes to your work. So that's not great.
Yeah. No, I'm just saying it's tough to know where the truth is, even about things you know well.
I agree. And just to be clear: back in the day, when there was an encyclopedia person whose job it was to go out there and find the particular fact about the particular notable individual, they probably got it wrong plenty of the time too, and that went to print where people could see it. So putting information on a spectrum, I think, is really important, and trying to check our biases wherever we can is super critical as well.
So the AI genie: how do we keep that in the bottle? We need regulation, legislation. What do we do? How do we deal with that? And then I want to get into social media, because this next part of your book, what we can do about it, is important. So take us through both social media and AI: how do we address this crisis? Because it's not just creating political division. It's creating serious mental health issues for children, increasing suicides, and wars and destruction and violence, really serious things. Literally insurrection in our Capitol. It's not trivial. So how do we think about fixing this?
Yeah.
Yeah.
So I think it's really-
Now the depressing part is over, everybody.
We're going to get into this.
Optimism.
Joy.
Yeah.
So look, the harms of social media are very clear right now. My colleague Jonathan Haidt at NYU has focused really closely on the harms to a very specific subset of the population: teen girls seem to have very, very negative reactions to social media, at a very special and tender part of their lives. There's a lot of evidence showing increased risk of depression and suicide for young girls as a result of increased social media usage. That's partially a result of social comparison, and partially a result of the social hierarchies that young girls are very prone to at a young age. I cannot imagine growing up with social media, in middle school or elementary school, or even high school, which would probably be worse. I cannot imagine what that would be like these days: the likes of your friends, your enemies plotting against you, and the hierarchy of middle school and high school. I think it's terrible. So I think there is a role for governments to step in and actually enforce age limitations, because it's a coordination problem for girls: if all of your friends are on it and you're not, then you're ostracized from the group, which is a huge problem. So there's a role for authorities to step in and say, look, no social media use before a certain age.
Now, for what we can do about these broader problems, to come back to the more specific items I think are really important to focus on here: it comes down to three buckets, and it's actually somewhat applicable to AI too. The three buckets are what individuals can do, what governments can do, and what platforms can do. For governments, I think it means focusing on Section 230 and making platforms more liable.
What is Section 230?
Sorry, yes, CDA Section 230. This is a law that was passed, and thank you for asking.
I mean, I'm sure everybody's familiar with all the bills in Congress and what they mean.
Right, 100%, totally. All the sections of all the bills. But I personally haven't kept up.
This is the Communications Decency Act, passed in 1996; Section 230, it's called. If you hear people talking about social media and regulation, it usually involves Section 230. Basically, it makes platforms not liable for the harms that come from people using their tools. It was a super instrumental and important law for regulating the internet in a way that allowed for the free and open exchange of information: you can post something, and the company responsible for serving that content is not liable for what you post. Which in general, I think, was a fantastic idea. But when it comes to the algorithmic amplification of content, there are opportunities for great harm that make this not a neutral telephone system. It's not just a neutral, oh, I'm sharing this with my friends; the platform is actively serving content that might cause real harm in the world. So focusing on Section 230, updating it to make sure platforms are more liable for some of the things that happen on them, I think is really important.
On the platform side, I think that there's many, many, many interventions that can be done
to help improve our relationship with these tools.
There's a bunch of interventions that use AI to help identify content, not to demote content based on that identification, but to give users a chance to pause. For instance, when a kid is about to post a comment that is extremely toxic and bullying of someone else, the platform can identify that content, cause you to pause, and say, actually, maybe you should think about not sharing this, or maybe you should just take a minute.
You can still share it.
But here's a little bit of friction in place that keeps you from doing the worst thing.
And I think there's a tremendous number of possible frictions that can be employed that reduce a little bit of the kind of outrageous engagement and addiction problems with this.
Which is, again, part of the bottom line of these companies.
But they make it a much less toxic place for us all. So if you give users more choice and more frictions at key intervals, it can really dramatically improve the kind of content that's shared on these tools. And that's been shown in studies: frictions really do help. They help us make sense of misinformation, they keep us from sharing the worst types of information with all of our friends, and they keep us a little bit saner, because we're seeing fewer threats stuffed into our feeds.
Hmm. So that's one way to create these regulated structures within social media, to give pause when there's content that might be damaging.
Right, yeah. And that goes a long way. I just want to note that it actually goes a long way if you think about these vectors of information sharing. If you stop a single share of a single piece of outrageous, inflammatory content, like in Myanmar, right, during these ethnic pogroms that happened, if you had put small frictions in place at that moment in time, it probably could have saved thousands of lives. And that's the kind of thing that's really important to recognize. These are really influential things.
Yeah. I mean,
I think that's powerful, but do you think it's enough? Do we need to go further and figure out how to change the economics of social media, so it doesn't create this perverse incentive where the worse the information is, the more inflammatory, the more disturbing it is, the more likely it is to spread? Is there a way to say, wait, wait, guys, the business model has to change? How do we fix that? It just seems like such a pernicious force, where all the incentives point the wrong way. Even if you put in a little pause, it's like being asked a question before you drink a bottle of wine: maybe you think you want to drink that bottle of wine, maybe it's not so good for you. Ah, I'll just drink that bottle of wine anyway. I just don't know, I guess, whether these frictions are really going to work.
Yeah. I think that, you know, we have small frictions like this throughout society writ large, to try to keep people a little bit more on track in their lives, to keep them from getting totally sucked into a problematic behavior. Certain extremely addicting drugs are illegal for a reason. You know, I'm not for legalization of every single drug that's available. I think the world would maybe be worse off if we had heroin that you could buy at the corner store.
We do. It's called sugar.
That's actually true.
That's a good point.
That's a good point.
And a huge, huge, huge other problem there for sure.
And thank you for fighting a good fight on that.
But yeah, just to qualify and put some historical context here around advertising as a business model: before we had advertising as a business model, we actually didn't have newspapers. Newspapers rely upon advertising as a real foundational subsidizer of our sensemaking capacity, right? NPR just laid off 10% of its workforce earlier this year because they had an advertising downtick. So there is a huge subsidy on sensemaking that comes from advertising. And I think that's important to note, that advertising itself is not the most demonic thing in the world. It is the internal structures at these platforms that are pushing a level of engagement with the extreme.
And it's really important to note this, right?
It's like they are going to fix it
if enough of us are pissed off about the extreme stuff
that we're getting served on a regular basis.
It's just we can't really coordinate well right now
because there's so many other issues
that seem pressing and urgent.
Like we've been kind of flooded
with threatening information and problems.
And it's very difficult to coordinate collectively
when everyone has a different problem.
And I think this is one of the core problems
that we're facing is that we can't cohere
if there is more misinformation than real information.
So that's a big part of it.
Yeah.
So that was one strategy, this sort of regulation and enforcing a pause. What other things can we do to solve this problem?
Yeah.
So as individuals, I think it's really important to recognize something. I have this concept of healthy influence online, which is recognizing that we are not just influencers.
We are being influenced by these platforms as well.
So it's important to recognize
this kind of multidimensional influence.
It's omnidirectional, right? It goes both ways. We are influenced by these tools, and we are influenced by our communities, just as we are influencing our communities.
And taking some responsibility for the stuff we post
is really, really critical. It's a really important piece of that, right?
Don't just do it for the metrics, do it because you think this is actually a really important,
healthy thing for the rest of the world. I think that's an important piece of it, right? Recognizing
that you are a steward of an audience, you are actually creating content that other people will see. And, you know, I feel this, I'm sure we've all felt this to some degree: the FOMO that comes from someone posting about something that's happening, and you're like, oh, I wasn't invited to that. That sucks, right?
Oh, bummer. Yeah.
So one practice is holding on to the post for a couple of weeks and then posting it well after the fact, post-posting essentially, so that your existing audience doesn't feel as left out by that particular thing.
And, you know, I think it's important to think about social media as like a community.
Like we are in these communities together and you need to approach your communities online as
if they are real life communities, because these are real humans that you're impacting with your
content. It's not just for you. This is for other people too. So I think that's a really important
piece of it.
Interesting. So individuals can be more conscious about their use. For me, I mean, I try to go on social media holidays. I don't really
look at Facebook. I don't really pay attention to Twitter. I use Instagram to post stuff for myself, uh, to educate people about
health issues. Sometimes I'll just get, you know, scroll while I'm standing online or something and
look around, but I don't spend that much time doing it. And I feel like if I do, it's just a
big suck. And yeah, you know, my life energy is really important to me. And I think, you know,
the question is, what are we missing by not picking our heads up?
You know, what are we not getting?
And I think you talk a lot about in the book about how we need to kind of reconnect.
We call it social media, but it's almost anti-social media.
Right.
And how do we get back to really being in connection with each other in person?
Yeah.
And real-world relationships and why those are really important.
And how, you know, when you think about what's going on with our children today, I mean, the rising rates of mental illness, depression, anxiety, suicidality, ADD, obesity, it's all connected.
And so how do we kind of help our generation of kids who are growing up in this social
media world to actually shift what they're doing? Because it seems like taking a, you know, heroin needle out of an addict's hand. It's like, good luck, you know?
Right. Yeah. No, the addiction is very real, and I think it's important to recognize that it is real, and that these are pretty powerful tools for addicting us and keeping us there.
I can offer a couple of really just pragmatic, easy things that people can do that are helpful.
There's an app called OneSec, which is great.
It takes about five minutes to set up, five to ten minutes to set up on your phone.
But it actually will probably save you dozens of hours of your life over the course of the next few months. Basically, it's a content blocker, or really just a little piece of friction that you can employ. When you open your phone, we impulsively, reflexively go to the same apps; we black out for a second and wake up 20 minutes later in someone else's stream, or watching a crazy video, or in TikTok, whatever it is. This app forces you to take a breath before you enter the app. And something as simple as taking a breath snaps you out of that impulsive click. It makes you think about your intention, why you are doing this. I found it to be amazingly helpful. Things like that, content blockers, keep you from just defaulting to the instant habitual behavior. There's another app called SelfControl for the desktop that's really fantastic as a content blocker. You can set time limits for yourself, which is extremely helpful for managing your time if you find yourself automatically going to a news site or to Instagram or to Facebook or to TikTok.
But a lot of these tools, I think they will become much more prevalent because we need to
renegotiate our time with these tools. We need to renegotiate our attention with these tools.
And there are increasingly better ways of doing that, ways that snap us right back into that better part of ourselves. We're like, ah, this is why I'm doing this. I'm not just an impulsive, emotive human doing things with no control. I actually have agency.
And what can the platforms do? I mean, are there ways that they're self-policing or regulating?
Yeah, there are a lot of great people at these companies working on trust
and safety teams that are trying their hardest to figure out what to do to help people.
I think that, you know, these platforms could invest more in those teams.
I think they've been, you know, some of the first to get cut in this recent round of layoffs.
But no, the trust and safety teams are incredibly important for helping us understand what is best for us.
So all the research that Francis Haugen pulled from Facebook, that was the result of trust and safety teams internally doing good research on what Facebook's harms are to the world.
And so there are good people
inside of these companies really trying to make these tools better. There's all this work behind the scenes that you don't see, keeping the really toxic stuff off social media. If social media had zero moderation, you would be horrified by the kinds of things that would end up there on a regular basis.
And you kind of see this with different WhatsApp communities.
This is actually the moderated version we're getting,
is what you're saying.
Right. Oh, absolutely.
Yeah, yeah, definitely.
Which is creepy and strange to think about,
but like social media could be so much worse.
And in fact, in a lot of other countries, trust and safety teams don't have as significant a say or presence. So a lot of the social media that goes to other countries is actually some of the worst stuff, which is bad. We're kind of exporting the worst version of social media. We think what we have is bad; it's actually much, much worse in a lot of other
countries. And then I think it's really important to recognize that, you know, the moderation decisions at these companies have a huge influence and a say in the information that we see or don't see.
So TikTok, just to touch on this for a moment, is a huge problem, right, in terms of actually getting accurate news. Because we are facing one of the most sophisticated algorithms at tracking our attention and giving us more of what we want.
It's used by so many people in the country right now.
And it could throw the 2024 election, right?
They could make some algorithmic decisions at TikTok that would throw the election in 2024.
That's how powerful it is.
And you can think about this in terms of, well, this is a Tristan Harris point. Facebook did it inadvertently in 2016, right? Facebook accidentally did it. And I'm not saying that people at TikTok are thinking, we're going to throw the election, necessarily. But if you think about the equivalent, and this is a Tristan Harris-ism: if, during the Cold War, PBS Kids was owned by the Soviet Union, would that be okay? And that's what we have with TikTok right now. We have such a huge number of our children who are addicted to this tool and being served information based on the whims and the algorithmic decisions of a company overseas that is embedded with the CCP. So I think that's really important to note, because these are hugely influential tools. And the version of TikTok that they have in China is like, eat your vegetables. It's highly regulated, and they're serving kids how-to-be-an-astronaut videos. So I think there's a midpoint between this hyper-authoritarian approach that influences what we can and cannot see in a draconian style, and something that is moderate and reasonable, that is not just the race to the bottom of the brainstem, which is the majority of the tools as they stand.
Yeah. I think that's a great line, the race to the bottom of the brainstem, meaning our reptilian lizard self that is impulsive, reactive,
instinctive, but not rational and not controlled by our frontal lobe, which is the executive
function or the adult in the room.
And I think that's what I'm seeing increasingly, you know, across the world is this sort of
activation of the limbic brain and the loss of our ability to have, you know, deep conversations
about challenging topics, to have, you know, disagreements with people, but not vilify them,
to be curious and understand what's happening in the world rather than being stuck in ideological
battles. And I think, you know, it's having such a detrimental effect on our well-being. And,
you know, as a physician, it really worries me. And I
think we all use it. We're all in it. We're all doing it. And we're all going to promote your
book on it. It's like this podcast is going to be on it. But it feels like we need some really deep
thinking about this. And we need some really deep solutions. And it just seems like it has to come
from the government.
There has to be some regulatory or legislative way.
This is not something where the industry
is gonna self-police, just like with food.
I mean, they're gonna keep selling addictive,
deadly substances unless they're forced not to.
I think that's a fantastic note
and kind of metaphor for what we're dealing with, right?
It's like fast food. We have dietary restrictions, we've been exposed to so many caloric problems as a result of fast food and this industrialization of our food creation and dissemination, right? We used to think that fast food was just food that's fast. Actually, it turns out it's really bad for us, and there's a lot we should do to mitigate and manage our diets. And our information diets are very real. Information diets are a real thing, and we need to manage them in much the same way.
Yeah. And they are currently like junk food for your brain.
A hundred percent. Absolutely.
So any final thoughts on a hopeful note?
Yeah, absolutely. So like I said, you know, this book has really been focused on the history of these media disruptions, and I'm very hopeful. If you buy the book, you can read through these different periods of tremendous disruption that came from different media technologies. And they all follow a pretty well-worn pattern: increased virality, increased confusion, oftentimes violence, and then, eventually, repair, by governments and people alike, by a bunch of concerned citizens trying to fix things.
And the book is full of those anecdotes.
And I think I'm very hopeful looking to history
that we can find our way through this dark valley
into a better place.
Well, thank you so much, Tobias, for your work,
for what you've done, for digging into this
and having a hopeful view
where we can actually find our way to a better place, maybe search for our better angels, and create a better world. I think we're in a very precarious moment, particularly with
the advent of AI. So thanks for what you're doing, for the awareness you're bringing. Everybody's got
to check out the book. It's called Outrage Machine, How Tech Amplifies Discontent, Disrupts Democracy, and What We
Can Do About It.
Check it out.
You can get it everywhere you get books now.
It's available for sale.
And do the right thing.
And for those of you who love this podcast, please share it with your friends on, dare
I say it, social media.
It can be used for good.
And leave a comment.
How have you felt your life has been
compromised or enhanced by social media? We'd love to hear. And what are your concerns about AI? How
have you seen it affecting your life? And subscribe wherever you're podcasting. We'll see you next week
on The Doctor's Farmacy.
Hey everybody, it's Dr. Hyman.
Thanks for tuning into The Doctor's Farmacy.
I hope you're loving this podcast.
It's one of my favorite things to do
and introducing you to all the experts that I know
and I love and that I've learned so much from.
And I want to tell you about something else I'm doing,
which is called Mark's Picks.
It's my weekly newsletter.
And in it, I share my favorite stuff
from foods to supplements to gadgets
to tools to enhance your health.
It's all the cool stuff that I use and that my team uses to optimize and enhance our health.
And I'd love you to sign up for the weekly newsletter.
I'll only send it to you once a week on Fridays.
Nothing else, I promise.
And all you do is go to drhyman.com forward slash picks to sign up.
That's drhyman.com forward slash picks, P-I-C-K-S,
and sign up for the newsletter. And I'll share with you my favorite stuff that I use to enhance
my health and get healthier and better and live younger, longer. Hi, everyone. I hope you enjoyed
this week's episode. Just a reminder that this podcast is for educational purposes only. This
podcast is not a substitute for professional care by a doctor or other qualified medical
professional.
This podcast is provided on the understanding that it does not constitute medical or other
professional advice or services.
If you're looking for help in your journey, seek out a qualified medical practitioner.
If you're looking for a functional medicine practitioner, you can visit ifm.org and search
their find a practitioner database.
It's important
that you have someone in your corner who's trained, who's a licensed healthcare practitioner,
and can help you make changes, especially when it comes to your health.