Good Life Project - How Wearable Tech & AI Read Your Mind (and What to Do About It) | Nita A. Farahany
Episode Date: March 14, 2025
Brace Yourself for the Decoding of Private Thoughts by Consumer Gadgets - Everyday devices like headphones and watches could soon interpret your brain activity and inner experiences. Nita A. Farahany, author of "The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology," unveils the remarkable potential and alarming risks of this emerging neurotechnology. Get ready to rethink assumptions!
You can find Nita at: Website | LinkedIn | Episode Transcript
If you LOVED this episode you'll also love the conversations we had with Adam Grant about rethinking.
Check out our offerings & partners: Join My New Writing Project: Awake at the Wheel
Visit Our Sponsor Page For Great Resources & Discount Codes
Hosted on Acast. See acast.com/privacy for more information.
Transcript
This is like all of the sci-fi kind of scenarios that I've played out are about to come true.
Today we're joined by Nita Farahany, a world-renowned scholar at the forefront
of exploring how cutting-edge neurotechnology is reshaping our brains, our minds, our laws,
and our lives. From her work as a distinguished professor at Duke University
to her groundbreaking book, The Battle for Your Brain, she's tackling
the big questions about how emerging brain technologies are influencing not just our
mental health, but our fundamental freedoms. This is a conversation you don't want to miss.
We haven't hit a like button, but our brain lights up like a like button when we see something.
And for companies and employers and governments
to have access to that information
and the closed loop environment that has been created
where not only do they have access to that information,
but they can use it to shape the environment
that we're interacting with.
It's exciting and terrifying simultaneously, you know?
It's not hard to start to see how this becomes dystopian
pretty quickly if there aren't any
limitations on the kinds of data each of these different entities can have access to.
With the Fizz loyalty program, you get rewarded just for having a mobile plan.
You know, for texting and stuff.
And if you're not getting rewards like extra data and dollars off with your mobile plan, you're not with Fizz. Switch today. Conditions apply. Details at Fizz.ca.
Good Life Project is sponsored by Self-Conscious with Chrissy Teigen and new podcasts from Audible.
So if you love our deep conversations about living well and personal growth, you'll want to listen to
what Chrissy Teigen is creating. Each week she partners with brilliant minds like Mel Robbins,
Adam Grant, Gabby Bernstein to unpack transformative ideas about living well
and understanding ourselves better. What makes this show really special is how Chrissy approaches
each conversation, not as an expert, but as someone genuinely curious about growing alongside us, whether
it's exploring the science of sleep with Dr. Matthew Walker or understanding boundary setting
with Nedra Glover Tawwab, every episode offers practical wisdom that you can apply right
away.
So if you're ready to expand your self-awareness and discover powerful new perspectives, go to audible.ca slash Chrissy Podcast or wherever you get
your podcasts and start listening today.
Good Life Project is supported by Audible.
So this year, why not let Audible expand your life by listening?
You can explore audiobooks and podcasts and exclusive Audible originals that will inspire
and motivate you.
Just open the app and tap into your wellbeing
with advice and insight from leading influencers
and experts and professionals.
Whatever your focus or interest,
there's a listen for it on Audible.
You'll find titles on better health,
including personal fitness, nutrition,
relationships and relaxation,
maybe explore new career strategies
or reimagine your financial life.
I recently listened to No Bad Parts by Richard Schwartz and just learned so much about my
different parts and how they affect me. Ultimately, it's all about starting good habits.
Making a positive change is the best resolution you can make for yourself and Audible can help.
There is so much opportunity and more to imagine when you listen.
Let Audible help you reach the goals you set for yourself.
Start listening today when you sign up
for a free 30-day trial at audible.ca.
In 2018, I was at a conference
where one of the co-founders of Control Labs stood up
and he was showcasing the technology
and he had it in the form of basically a watch
on his wrist, and he said, why are we humans such clumsy output devices? We're incredibly good at taking in information, but we're really bad at getting it out of our brains. And what if, instead of using these sledgehammer-like devices on the ends of our arms, we just think about typing or swiping instead?
And that was the aha moment, where both the form factor had been solved, by integrating brain sensors into everyday devices like a watch, and the functionality was being addressed, in that it was an interface to all of the rest of our technology rather than just a limited application.
And I thought like that's it.
That's the pivotal acquisition.
I'm going to watch that product because as soon as one of the major tech companies
like Apple integrates it into the Apple Watch, all the things I've been following
forever are going to go mainstream.
And, you know, sure enough, that was the pivotal acquisition.
It just happened to be Meta who acquired them a year later.
Yeah. I mean, it is so interesting, right?
I remember back, and unfortunately I joined you in the chronic migraineur club. It's been a part of my life for as long as I can remember.
Like so many people, I have tried so many different things, and there have been all
sorts of tech and gadgets and things you stuck on your forehead and all the different stuff.
But it's interesting. I think what you're referencing also in terms of,
like one of the really early devices around meditation
was this device.
I think it was called Muse.
I don't know if it's still around or not.
Yeah, it is.
Where you would put it on.
It was sort of like a very simplified neurofeedback
type of thing, which would try and kind of tell you
when you're in the zone or not and help you try and get back
there. But it's fascinating that you looked at the world and you said,
like, okay, we're not there yet, but I can see where this is going.
And when we hit that tipping point where technology actually is able to do what we
wanted to do in a much more sort of commoditized and public and ease-filled and
accessible way, it's going to be game on.
Right. That's exactly right. And, you know, I think because I've been watching it so long,
to your point, like, I could see what was necessary for the tipping point and then to see the
technology finally come to fruition, to be like, wow, this is like all of the sci-fi kind of
scenarios that I've played out are about to come true.
And the urgency of it then just became really clear to me.
So maybe let's do a little bit of defining here also,
because I'm sure we'll have folks joining us who are kind
of saying, what are you talking about?
Of course.
What actually is this?
And we've used the phrase neurotechnology
a couple of different times here.
If you're explaining this to somebody who's never met you before at dinner, how would you actually break that down?
How would you walk somebody through understanding what is neurotechnology the way you talk about
it?
Yeah.
I mean, so the easiest entry point for people is at this point to say like how many of you
are wearing like a smart device, like a smart watch, like an Apple watch or a smart ring or a Fitbit or any of these other devices that have a sensor
in them that are tracking some aspect of your bodily functions, right?
It could be the number of steps you take per day.
It could be your heart rate.
And you know, at least half the people at your dinner table probably have at least one
of those devices on or that they've, you know, have one in their life that they've acquired.
And neurotechnology, at least the way I'm talking about it, isn't what Elon Musk is doing, although
we can get into that.
It's really taking many of those same devices, whether it's, you know, AirPods or a watch,
and adding another sensor.
And this is a sensor that tracks your brain activity.
And you know, it's surprising that those aren't already in our everyday lives because people
are so used to quantifying so many other aspects of their bodily functions.
But all this is doing is tracking brain function.
And most people will say, well, how can you do that?
And, you know, most of these are tracking the electrical activity in your brain.
So if you're thinking, if you're listening to this podcast
right now, neurons are firing in your brain,
giving off tiny electrical discharges.
Now, that's happening with hundreds of thousands
of neurons at the same time.
And they give off characteristic patterns
that can now be decoded thanks to advances
in AI.
So you have these different sensors that can pick up that electrical activity, then you
have AI that can interpret what those signals mean, and they can interpret increasingly
more thanks to advances in AI and thanks to, you know, massive training of these models
on this electrical activity.
There are other sensors that are coming too, but the dominant form that most of these devices
are integrating are EEG or electroencephalography sensors.
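A minimal sketch of what that first decoding step can look like at its simplest, assuming a single EEG channel sampled at 256 Hz and using Welch's method to estimate alpha- and beta-band power; the synthetic signal and the band edges are illustrative, not any vendor's actual pipeline:

    import numpy as np
    from scipy.signal import welch

    FS = 256  # assumed sampling rate in Hz, typical for consumer EEG

    def band_power(signal, fs, lo, hi):
        # Estimate the power spectral density with Welch's method,
        # then sum it over the requested frequency band.
        freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
        mask = (freqs >= lo) & (freqs <= hi)
        return float(np.sum(psd[mask]))

    # Synthetic one-channel "EEG": a 10 Hz (alpha) rhythm plus noise.
    t = np.arange(0, 10, 1 / FS)
    eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)

    alpha = band_power(eeg, FS, 8, 12)    # alpha band, 8-12 Hz
    beta = band_power(eeg, FS, 13, 30)    # beta band, 13-30 Hz
    print("alpha-dominant" if alpha > beta else "beta-dominant")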
My sense is that, you know, if we were having this conversation a decade ago, this is the type of stuff where you would need to actually have a cap on your head with, you know, electrodes all over and wires coming out, or potentially be in an fMRI or any of the other sort of brain-oriented scans to get information.
And generally it was only available, you know, either through scientific research, if you were part of a study, or through medical diagnosis and treatment.
But now it's just part of our consumer products.
Yeah.
So, definitely a decade ago,
if you wanted any quality of signal,
you would need to have a cap that had, you know,
a hundred some electrodes on it,
and that was applied with gel to your scalp,
and it'd be a messy process,
and you would just track a short period of time
while you're in a doctor's office.
Or you go, to your point, into a giant functional magnetic resonance imaging scanner, and you might be in there for an hour doing an activity, but you get a snapshot in time. Those are still going to give you far better signal than any of the consumer devices, because you're talking about the ability to look more deeply into the brain or to have more electrodes covering more of the scalp than these devices.
But now, a decade later, you know, you can buy, from countless companies, headphones that have, you know, sensors, maybe eight in each of the cups, the soft cups that go around your ear, or earbuds that might just have, you know, four electrodes in each ear and can detect through the ear what's happening in the electrical activity in your brain, or pick up peripheral nervous system activity. That is, as the signals go from your brain down your arm to your wrist, they pick up motor neuron activity at the neuromuscular junctions through something like a watch that has an EMG, an electromyography sensor, in it. And these are just consumer-based devices. And the major players on the market, like Meta, who acquired that company, Control Labs, have started to market these products to, you know, do things like integrate with their Orion augmented reality glasses, or, you know, the Apple Vision Pro, which uses eye tracking to make inferences about brain and mental states.
You know, they have a patent to put sensors, brain sensors,
into their AirPods and likewise to put brain sensors into the forehead band of their virtual
reality devices to pick up that electrical activity. And so it's no longer something that's
confined to the medical arena. And then, of course, the algorithms have gotten much, much better at
being able to extract signal over the
noisiness of the signal that it's otherwise getting. So the quality has improved for
what can actually be detected from brain activity.
I mean, it's just incredible, you know. And it feels like AI over the last couple of years
probably just allows the interpretation of like whatever simplified data we're getting
from our devices in a way that, you know, probably not too long ago, we would have had really interesting data, but the AI is letting
us actually see the utility in it. Like, how do we actually use this to live better and
differently?
That's right. And much faster, I think, than anybody expected. You know, when ChatGPT
first came out, I reached out to some of the leading neuroscientists and said, like, how
are you integrating this already into language decoding?
And they're like, we're figuring it out.
We don't totally know yet.
It was only a few months later that one of the researchers
out of UT Austin published a paper applying GPT-1
to being able to decode information
from functional magnetic resonance imaging scanning sessions, being
able to do things that nobody would have expected, like continuous language, entire paragraphs
of what people were thinking being decoded from that thanks to advances in GPT-1.
And when you listen to the conversations at Meta that they've done at Meta Connect, they
talk about the power of, you know, being able to have like a large
language model on a device.
One of the limitations of shipping out these products to a mass market has been that everybody's
brain signal is a little bit different.
And when you have a consumer product, when it comes right out of the box, it needs to
work.
And so what they've been able to do thanks to generative AI and having these on-device large language models is to have basic functionality work right out of the box.
Like, you can use it to go up, down, left, right, but then it learns you, right, and
co-evolves with you and gets better and better at decoding your brain activity by having
an on-device decoder and classifier.
That's incredible, right?
That wouldn't have been possible five years ago.
And the fact that now these devices can co-evolve to get better and better at decoding brain activity
enables a mass market product that couldn't happen before these advances.
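One way to picture "works out of the box, then learns you" is a generic classifier that keeps updating on each user's own data. A minimal sketch using scikit-learn's incremental partial_fit; the pooled pretraining data, the 16-dimensional features, and the four intent classes are all invented for illustration:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    classes = np.array([0, 1, 2, 3])  # e.g., up / down / left / right intents

    # A "factory" model pre-trained on pooled data from many users (simulated).
    pooled_X = rng.normal(size=(1000, 16))
    pooled_y = rng.integers(0, 4, size=1000)
    model = SGDClassifier()
    model.partial_fit(pooled_X, pooled_y, classes=classes)

    # On-device adaptation: each session, update the same model with the
    # individual user's (features, confirmed intent) pairs.
    for session in range(5):
        user_X = rng.normal(loc=0.3, size=(50, 16))  # this user's shifted signal
        user_y = rng.integers(0, 4, size=50)
        model.partial_fit(user_X, user_y)            # incremental, stays on the device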
Break down what you mean by decoding a little bit more here.
Are we talking about literally like wearing a pair of glasses or a watch or ear buds or headphones
that can
pick up electrical signals in your brain and then literally start to say, oh, this is what you're thinking, this is what you're saying, this is what you're... Or, like, maybe we're not there yet, but, like, walk me through what decoding actually is.
No, I mean, we're pretty close, aren't we?
I mean, so, you know, what's tricky is how much it can decode, right, and what it means,
what you mean by thinking.
So intentional thought or intentional communication is different than passive thought that you
have in your brain.
Like, if I say, I want to send a text message and here's the sentence I want to send, that's
intentional communication of speech in your brain.
And the devices are getting much better at that, being able to pick up intentional communication of speech and literally what you're thinking, right? So,
if I want to send a text message and type on a virtual keyboard from what I'm thinking,
these models already are starting to be able to do that with pretty amazing accuracy. And then,
you know, just think about what a large language model does. It predicts the next word in a sequence. And
so, you know, what am I most likely to say? Like, open the,
you know, is it open the window, open the door, open the wine
bottle? You can pick up context to figure out what it is that I'm
most likely to say next. And so these models make decoding
happen much faster and much more efficiently. You know, just a
few years ago, before the models came onto the market, it was
already possible to classify major brain states.
So you talked about meditation.
Muse has been trying to classify, like, here's roughly
what this brain wave means when you're alpha wave dominant
or beta wave dominant, different patterns of electrical activity
in the brain, most likely means you are meditating,
or you're focusing, or you're happy or sad.
So those big brain
states are already possible to decode with a decent degree of accuracy using these consumer
devices. But it's the ability to decode intentional speech that has gotten remarkably better.
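To make the "predict the next word" point concrete, here is a toy sketch of re-ranking a noisy decoder's candidate words with a language-model prior over the context "open the ...". All of the probabilities are invented; real systems use full neural language models rather than a hand-written table:

    import math

    # A noisy brain-decoder's word probabilities (invented numbers).
    decoder_probs = {"window": 0.30, "door": 0.28, "wine": 0.26, "winter": 0.16}

    # A language-model prior for the context "open the ..." (also invented).
    lm_prior = {"window": 0.35, "door": 0.45, "wine": 0.15, "winter": 0.05}

    def rescore(decoder, prior, weight=0.6):
        # Weighted log-linear combination of the two sources, then renormalize.
        scores = {w: (1 - weight) * math.log(decoder[w]) + weight * math.log(prior[w])
                  for w in decoder}
        z = sum(math.exp(s) for s in scores.values())
        return {w: math.exp(s) / z for w, s in scores.items()}

    posterior = rescore(decoder_probs, lm_prior)
    best = max(posterior, key=posterior.get)
    print(best, round(posterior[best], 3))   # context pulls the guess toward "door"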
It's even possible to decode passive thinking though. So that's, I think, where things start to get a little bit scarier for people is the
idea of like, can it decode just a story that you're imagining but that you don't intend
to communicate to another person?
That's what that first study I was telling you about that came out of UT Austin, you
know, it was just stories that people were imagining, not what they were trying to actually
communicate.
And that, like, they could decode with really high degrees
of accuracy using GPT-1.
We're already on, you know, GPT-4, 4o, whatever it is, approaching 5 very soon, which has only gotten better at being able to do so.
And so I think what we're seeing is a move
from what had been just general brain state decoding to being able to decode intentional
speech, you know, emotions at a much more granular level, and then even just what you're
passively thinking rather than actively thinking. And I'll just put in one more caveat, which
is, you know, that same researcher,
I was having an interesting conversation with him about, like, is this mind reading yet?
And, you know, he said...
Because of course, that's what I'm thinking right now.
Of course, right? I mean, I saw that, which is why, of course, I brought it up. I was
able to decode what you were thinking, right? So, you know, he said, well, it's interesting
because it doesn't, you know, like it's decoding
what I'm imagining, but not what I feel about what I'm imagining, like the story, right?
And he said, like, if you really think about what mind reading is, it's more complex than
like one stream.
It's just like the images that you're, you know, evoking in your mind at the same time
as you're thinking about a story or of the words of a story and then it's how you feel about it.
Like there's layers of thought and how you think about it.
And I don't think we're there either. Like, I don't even think that we have an adequate theory of mind to be able to decode all of the complexities of human thought.
So it's still sort of a narrow peering into the brain
rather than a complete peering into the brain.
But it's a much more accurate peering into the brain
than I think most people think or thought was possible.
Yeah, I mean, it's so wild.
You know, if I understand it correctly,
then this sort of like the active thought process
and the passive thought process.
The active, I mean, you know,
imagine if you're wearing a pair of glasses
where, you know, like on the,
by your ears, you've got some sensors
picking up what's going on in your brain,
but you've also, you know, like smart glasses,
you've got, you know, like some cameras
on the front of the glasses and microphones
to pick up the environment so that it can use that
to provide context, so it's like it's picking up
what's happening inside your brain.
Then it's got these external sensors,
like vision and auditory, to pick up
environmental cues to help probably filter into an AI to then interpret and give
context in like what is most likely happening here. Fill in the blanks.
And that's going to get more and more accurate as we go.
Well, and I think you raise a great point, right,
which is we should never really think about these in isolation.
I mean, if I'm wearing a VR headset,
it is packed with sensors.
It's got cameras that are on my face and on my eyes
and on the external environment.
You know, it has potentially heart rate sensors
that are integrated with the Apple Watch that I'm using.
It has EEG sensors.
And it's all of that's being used at the same time
with complex algorithms to interpret
what you're thinking and feeling.
And that gives a much more accurate picture, right?
It's where you move in the virtual environment.
It's the email that you brought up
and how you're responding to it.
And the kind of contextual ability
to take all of that biometric information
and all of these other contextual clues means that the power and the accuracy of decoding what you're
thinking and feeling goes up exponentially.
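A rough illustration of that fusion idea: features from each sensor are simply concatenated into one vector before a classifier ever sees them, which is why adding eye tracking, heart rate, and app context sharpens the inference. The feature sizes, labels, and classifier choice below are assumptions for illustration, not any headset's real design:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def fuse_features(eeg_bandpowers, gaze, heart, context):
        # Concatenate per-sensor feature vectors into one input vector.
        return np.concatenate([eeg_bandpowers, gaze, heart, context])

    rng = np.random.default_rng(1)
    # Simulated training set: 200 moments, each with EEG (8), gaze (4),
    # heart-rate-derived (3), and app-context (5) features.
    X = np.stack([fuse_features(rng.normal(size=8), rng.normal(size=4),
                                rng.normal(size=3), rng.normal(size=5))
                  for _ in range(200)])
    y = rng.integers(0, 3, size=200)  # e.g., calm / engaged / stressed (labels invented)

    clf = RandomForestClassifier(n_estimators=100).fit(X, y)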
Yeah, I mean, that's where I was going with this, right? Because if you add a watch to that, where you've got galvanic skin sensors and temperature and heart rate and pulse ox and these things, those are all, you know, like, indicators of emotional states.
Now, it may be, it's not going to be able to distinguish, are you anxious or excited?
Because a lot of times, I guess maybe, you know, like, it'll tease it out and maybe
it can actually overlay that with what you're thinking to actually then determine,
is this physiological response more likely to be anxiety or excitement? It's wild.
Yeah. But I mean, more importantly, it's not even just, are you anxious or excited? It's
what are you anxious or excited about, right? And then how, like, what are you envisioning
and what are you visualizing at the same time? And then even more provocatively,
and if whoever or whatever entity is monitoring
the fact that you're anxious or excited,
is it possible to change how you feel
rather than to just allow you
to continue to be anxious or excited?
And we'll be right back after a word from our sponsors.
Good Life Project is supported by Audible. So this year, why not let Audible expand your life
by listening? You can explore audiobooks and podcasts and exclusive Audible originals that
will inspire and motivate you. Just open the app and tap into your well-being with advice
and insight from leading influencers and experts and professionals.
Whatever your focus or interest, there's a listen for it on Audible.
You'll find titles on better health, including personal fitness, nutrition, relationships,
and relaxation.
Maybe explore new career strategies or reimagine your financial life.
I recently listened to No Bad Parts by Richard Schwartz and just learned so much about my
different parts and how they affect me.
Ultimately it's all about starting good habits.
Making a positive change is the best resolution you can make for yourself and Audible can
help.
There is so much opportunity and more to imagine when you listen.
Let Audible help you reach the goals you set for yourself.
Start listening today when you sign up for a free
30-day trial at audible.ca.
With the FIZ loyalty program, you get rewarded
just for having a mobile plan.
You know, for texting and stuff.
And if you're not getting rewards like extra data
and dollars off with your mobile plan,
you're not with FIZ.
Switch today.
Conditions apply.
Details at fizz.ca.
I also want to explore, you know, what are the big benefits of being able to do this on a personal
level, on a societal level? You know, like, what comes to mind immediately is, you know, is there,
are there benefits on a mental health level of being able to actually harness these tools?
Well, I mean, to your point, you've tried every device just like I've tried every device
with respect to migraine, right?
And if you are a chronic migraine sufferer, you understand that it is debilitating, it
is painful, and it's frustrating because, you know, every treatment is inadequate in
some way and has some side effect to it.
And if there's just some magical thing that I could do to just, you know, stimulate my brain and make the pain go away or to stop migraine in
its tracks, I absolutely would. That's true for a lot of mental health conditions and for neurological
conditions. And it's just an incredibly undertreated area and poorly understood area. So what's
happening, you know, in the space of neurological
disease and suffering up until now has been incredibly poorly characterized and undertreated.
So, you know, consider the fact that, for people with epileptic seizures, especially those who are treatment resistant, one of the big breakthroughs of monitoring brain activity has been the ability to use algorithms to predict, minutes to up to an hour in advance, when a person will suffer an epileptic seizure.
And for somebody who's treatment resistant, they could take just-in-time medication or
get a just-in-time alert to be in a place of safety.
That's, you know, a game changer.
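The pattern behind a minutes-ahead warning is, at its core, a risk score computed continuously over recent brain activity plus an alert threshold. A minimal sketch, where the risk function is just a stand-in for a trained forecasting model and the threshold is invented:

    import numpy as np

    ALERT_THRESHOLD = 0.8  # assumed; in practice tuned per patient

    def risk_score(eeg_window):
        # Stand-in for a trained forecasting model: a normalized measure
        # of signal energy in the most recent window.
        energy = float(np.mean(eeg_window ** 2))
        return 1.0 - np.exp(-energy)

    def monitor(windows, notify):
        for window in windows:
            if risk_score(window) > ALERT_THRESHOLD:
                notify("Elevated seizure risk predicted: get to a safe place / take medication.")

    # Example: three 30-second windows of simulated EEG, the last one more agitated.
    rng = np.random.default_rng(2)
    monitor((rng.normal(scale=s, size=256 * 30) for s in (0.5, 0.7, 2.5)), notify=print)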
I would like to have that same kind of notification of when I'm going to have a migraine and be
able to, you know, kind of address my life accordingly.
I don't know if you get aura, those visual disturbances, but I do.
And it can be, you know, frightening while you're driving suddenly to have, you know,
stars running across your visual space.
Likewise, you know, in areas like Alzheimer's disease and Parkinson's disease, it's possible to diagnose a lot earlier using neurotechnology.
One of the really exciting studies that I saw looked at glioblastoma, which is, you
know, one of the most frightening brain cancers for people when they are diagnosed with it
because it's such a pervasive kind of brain cancer.
By the time it's been diagnosed, it's almost always, you know, spread throughout the brain
and the tangles of it make it incredibly difficult
to fully resect.
And so it's a really lethal diagnosis for many people.
There are early, early electrical changes that happen in the brain that are possible
to detect with continuous use.
And so, just like people are tracking heart rate and seeing if they, you know, are suffering from abnormal rhythms, like arrhythmias or, you know, atrial fibrillation or something, and then using that information, you could track your brain health in the same way, whether that's, you know, the earliest electrical changes, be it, you know, something suggesting glioblastoma, or disturbances in sleep that need to be addressed, and sleep is so important to mental health and wellbeing and, you know, staving off all kinds of diseases. To even things like, we live in such a distracted world now, right? Like, our technology is designed to make it so that we can't have sustained attention. Being able to use technology to see that, to visualize it, and then to be able to use it to try to improve your focus and attention, you know, there's a lot of promise here.
So I'd say it goes from the most serious conditions of neurological disease and suffering, or, you know, with implanted neurotechnology, the reports of, you know, one patient who had their, like, brain-to-spine connection restored through a device, or, you know, patients who have lost the ability to communicate verbally being able to do so with implanted neurotechnology, and in the future with wearable neurotechnology.
I'm really bullish. You can tell there are so many promising health cases and so many promising just well-being cases for it that, for me, I'm, you know, net optimistic if we can get it right.
What about on the mental health side?
Yeah, because what I'm wondering is, you know, we have such an epidemic of depression and
anxiety and it's still, you know, just so poorly treated and understood.
I mean, there are great advances.
There's amazing things happening,
but for sure, the prevalence of this
and the misery that it causes,
the level of suffering that it causes
is so pervasive and so deep now.
And I'm wondering whether you see
any sort of like form of neurotech out there
that might help in that context.
Yeah, so I know less on the anxiety front for really good
neurotech devices. I know more on the anxiety side on basic biofeedback.
Like there's some really cool platforms like Mightier which are designed for kids to
follow heart rate and to be able to see when their heart rate is getting elevated
and then learn ways in playing the game to be able to bring their heart rate down and
techniques for being able to decrease the anxiety.
I suspect that those platforms will get better and better with neurofeedback.
But on the depression side, I know more advances that have been really promising in this regard.
So for both and for any mental health issue, I'd say, again, they're poorly characterized.
We group a bunch of things together symptomatically,
even though the underlying neural mechanisms
might be quite different.
And the more data that we have of people using
everyday devices and everyday settings,
rather than a snapshot in a doctor's office,
will give us the ability to learn a lot more.
But there's, you know, two examples already in depression where treatment has improved
thanks to neurotechnology.
One of them is there was a company that was using neural stimulation and they'd been using
it for performance enhancement for a while and didn't have a big market in it.
They ended up partnering with a company that has since run it through clinical trials and
is marketing it primarily in Europe called Flow Neuroscience, where it's been shown to
substantially improve depression symptoms by this neuromodulation, which is, I think,
pretty cool.
And then the second is there's a company that already has an FDA-approved
device for Parkinson's and for essential tremor. What they do is they track neural signals
as they go from the brain down the arm to the wrist. And if you think about tremor,
where like oftentimes it's one side, they don't have a lot of other symptoms yet in
early, you know, Parkinson's or an essential tremor, it's just a one-sided tremor.
It picks up that neural signature and then it sends back an inhibitory response to the
brain and stops the tremor.
And that same company has been investigating for depression and other mental health conditions,
whether they can decode, again, the precise neural patterns and then send feedback that
would interfere with that.
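Conceptually, that wrist device is a closed loop: classify the incoming signal, and when it matches the target signature, trigger a counter-stimulus. A highly simplified sketch, where the tremor detector and the stimulation call are placeholders rather than the company's actual interface:

    import numpy as np

    def looks_like_tremor(emg_window, fs=1000):
        # Placeholder detector: is most of the signal power in the 4-8 Hz
        # band where essential tremor typically lives?
        spectrum = np.abs(np.fft.rfft(emg_window)) ** 2
        freqs = np.fft.rfftfreq(emg_window.size, d=1 / fs)
        band = spectrum[(freqs >= 4) & (freqs <= 8)].sum()
        return band / spectrum.sum() > 0.5

    def send_inhibitory_pulse():
        # Placeholder for the device's stimulation call.
        print("counter-stimulation delivered")

    def closed_loop(windows):
        for w in windows:
            if looks_like_tremor(w):
                send_inhibitory_pulse()

    # Example: a 6 Hz tremor-like signal triggers the loop once.
    rng = np.random.default_rng(3)
    t = np.arange(0, 2, 1 / 1000)
    closed_loop([np.sin(2 * np.pi * 6 * t) + 0.2 * rng.normal(size=t.size)])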
We've seen, in implanted neurotechnology, that being truly transformative for patients who suffer from depression, especially intractable depression. And to be able to do it with a wearable device, I think, is incredibly exciting.
So they have some of those in clinical trial right now for mental health conditions, including
depression. And I'm really excited about that because with essential
tremor, part of the reason that drugs don't work is because
they're incredibly nonspecific.
It's like bathing the entire brain to try to get at one
problem, right?
Whereas you can pick up a precise neural pattern and send
a response that interferes with it.
And with depression and with other
mental health conditions, the more you can precisely track what's happening and then
more precisely target it, rather than bathing the entire brain with a whole bunch of side
effects, you know, these are really promising and exciting advances for those areas.
Yeah. I mean, this sounds incredible. I'm curious, would you consider something like
TMS to fall under this umbrella?
I know there was, like, the first wave of it, and then Stanford, I guess a couple of years ago, came out with a newer one, the SAINT protocol, a different approach to this, where you can literally use electromagnetic stimulation to have some pretty profound effects on depression.
Yeah, absolutely.
I don't know if you ever tried the TMS for migraine.
I did. So they
had this like big helmet you could get at home and you'd get like a certain number of
pulses that you could use and it was really cumbersome. I was unable to do it. So yes
to TMS. Any, you know, I classify under neurotechnology really any device that like interfaces directly
with the central or peripheral nervous system. It's a very broad set of technologies, but TMS, transcranial direct current stimulation,
ECT, the electric shock kind of treatments, those are all, in many ways, first-generation
technologies in this space that are getting better and better as we study and understand.
I shouldn't say we. I'm not one of the neuroscientists, right?
I'm studying the neuroscientists, but as neuroscientists really study and understand what the effects
of those are and then make them increasingly more precise.
Because one of the things that was exciting about transcranial magnetic stimulation is
you could look into the brain, figure out a specific spot, and then direct the pulse to it.
But for the most part, that had to happen in like a clinic, and they're not like sustained
treatments that every time you're experiencing it, you go in and get a pulse that happens.
And so what the promise of a lot of these neurotechnologies are is being able to move from the clinic,
from the hospital, from the doctor's office, into everyday
settings to have portable, wearable technology that offers
much more precise and targeted treatment.
Yeah.
I mean, it's so exciting.
One of the things that's popping into my head around this also though is, is there a,
so here's the analogy.
Over the last couple of years,
whole body MRIs have become sort of like
this topic of conversation.
Anyone can show up, you don't need a script from your doctor.
You pay a couple thousand dollars
and you get to spend an hour in an MRI
and they'll give you this report.
And what a lot of people are doing with them is they're trying to actually see if they
can pick up really early stage cancer in their bodies.
And there is a contingent in the medical world that is pushing back aggressively against
this saying you're going to get A, a whole bunch of false positives, and B, a lot of what may get picked
up is never going to go anywhere.
Nothing's going to happen to it.
So you're going to start to flood the medical system, which is already overwhelmed, with people who are using technology in incredible settings.
Some of them will detect things which are incredible, and that can stop something before
it actually becomes really harmful.
But the concern is, are you going to, if this becomes something that happens at scale, are
you going to start to flood the medical provider system with all these people inquiring into
and running, you know, like a metric ton of additional tests and stuff like this that
end up actually not being necessary or useful?
So I'm curious whether you see that potential
with what we're talking about in neurotech.
Yeah, you know, it's funny,
doctors seem to hate every step toward personalized medicine.
Like, you know, you have more autonomous patients.
Yeah, by the way, that wasn't me making that argument.
No, I know.
I don't necessarily agree with that,
but it's an argument that I've heard a number of times.
Yeah, I hear it all the time, right?
And I've heard it in kind of every different context.
So 23andMe, when they launched the direct-to-consumer genetic testing, doctors were like, now everybody's
going to come in, we don't have any idea how to interpret any of this, and they're all
going to be convinced that they have a predisposition for X, Y, and Z, and it's going to flood the
medical system.
Well, I mean, it is true that patients would go in with their 23andMe reports, and the
doctor would say, I have no idea how to interpret that, and I'm not going to order a bunch of
tests for you.
But one of the early arguments in that space was that women were going to come in asking for preemptive double mastectomies based on what their 23andMe report showed.
That didn't happen.
But if it was a well-informed patient
who went in and said, hey, this showed that I have the breast cancer genes, my mom and
my grandmother died of breast cancer, I'd like to run the test because we haven't tested,
you know, with medical grade testing. And if it's positive, I want to have a double
mastectomy, that's okay. I mean, I actually think that's a good informed patient that's,
yes, it's an increased number of patients, but essentially an increased number of patients who are saved
as opposed to just an increased number of patients. And, you know, that flood hasn't happened. In each of these other areas, the same arguments were being made. About the Apple Watch, doctors were opposed to, you know, giving patients the capability of having the ECG, the EKG, on the watches, and, you know, there were going to be abnormal heart rhythms and all kinds of patients flooding the system.
No, but, you know, there is earlier detection and more detection of heart disease, and there is, you know, a very low false positive rate, but there is some false positive, and yet there are also positive cases that it's catching, and it's saving lives. And so, you know, I think the medical system has to be open to a changing world of technology
where patients are more empowered and getting more and more information where they can make
decisions about their own lives and that they might be more in the driver's seat of those
decisions.
The MRI case is a tricky one. And the reason it's a tricky one is we
don't have routine scanning data for patients to know what those scans mean, right? And
so it's the validation that's the problem, which is, you know, being able to interpret
a scan when there is no baseline set of healthy individuals who have MRI scans
to know what, like, what's the rate at which these things
progress? If they progress at all, does that finding
actually mean anything clinically significant? And so
it, you know, it's harder when you have unvalidated
studies like those and we don't have a basis to be able
to study. What I hope is that a lot of those early MRI scans will become part of a data set so that
we can track them over time and that we more systematically start, like, you know, you've
got to compare the different machines that they're on.
They've got to be capturing the data in a way that could actually allow for longitudinal
studies of them so that in time they'll become useful.
But you know, like, I haven't rushed out to get one of those MRIs,
largely because we just don't know what the early scans mean in a lot of these instances.
And, you know, my blood tests are fine.
All the other indicators of kind of early, you know, the other early indicators and scans
that you might do are fine.
But my sister did.
She went out and got the full body MRI
and she and her partner both did it
and are making lifestyle choices based on what they found.
Yeah, I mean, it's such a fascinating moment.
I actually know somebody who did it and detected,
thought they were completely healthy
and nothing was going on.
They detected stage one
or even less than stage one pancreatic cancer
and had a very fast and easy treatment.
And they basically said, you're good.
And you realize that, you know, especially with cancer, so often it's not the fact that it's there. It's the stage at which it's caught.
No that's right.
That's right and that's why my sister did it.
She was like, okay, maybe there will be a bunch of, you know, kind of findings of unknown significance.
But if there is a significant finding that we know of and we see it, I'd
rather have the scan and be able to address it. I think that's a
fair point. I think, you know, doctors are worried more about
the, like the findings of unknown significance rather than
the findings of known significance on those scans.
Something clear. Yeah.
And we'll be right back after a word from our sponsors.
So if we think about neurotech then, like, in all these different use cases and technologies we're talking about, it feels like we can also split this up: okay, so on the one side there's the sensing element of it. And then for some, it seems like there's starting to be also sort of an intervention side of it too. It's like, first we pick up something that's like, ooh, this is a signal that something's a little bit off here, and then we have some technology, maybe it's the same, maybe it's a different tech, that can then influence the connection between the brain and the body to actually in some way intervene or help with whatever it is that, you know, you're moving through.
As we start to zoom the lens out a bit, you know, all of
these technologies, and this is something you speak and write about
regularly, they tend to not be the type of technologies where you buy it and then
everything that gets sensed and intervened just stays with you.
Right, yeah.
And this is where we start to get into really murky water.
So take me into this.
Yeah.
So, I mean, you know, as we've talked through, there are huge benefits to individual access
to this technology or even sharing the data with your doctor.
You know, the upside potential is really quite enormous.
But you know, in order to capture data about what's happening in my brain, I have to have that
data go from my brain somewhere else, right?
That is, something has to detect it and then it has to communicate with something like
my iPhone and an app that's on my iPhone.
And then from there, the big question is what happens next, right?
Does a company suddenly have access to that data?
Does my employer, if it's a work-issued, you know, phone, have access to that brain data? Does the government, in the same way that they have subpoenaed
all kinds of personal data from phones and from Apple Watches and other devices, suddenly
have the right in a criminal case or in a civil case to be able to get access to that
data and to be able to use it against me in different settings?
So, you know, what we've seen in the digital era is that all of these different
technologies which we, you know, purportedly receive for free or for really
subsidized costs aren't free to us, that the product is us and that the collection
of the data and the use of that data and the
reselling of that data or the use of that data to steer our
behavior or to keep us addicted to devices is primarily how the
companies have monetized, you know, the products that they're
selling to us. And brain data, I believe, and many others do too, is uniquely sensitive. It's the kind of pre-behavioral information.
It's the information we haven't shared.
We haven't hit a like button, but our brain lights up
like a like button when we see something.
And for companies and employers and governments
to have access to that information
and the closed loop environment that has been created
where not only do they have access to that information,
but they can use it to shape the environment
that we're interacting with, it can go dystopian incredibly
quickly.
And so, you know, what I really have been writing about and speaking about is this is
technology that, you know, we shouldn't be trying to ban.
That's not the answer.
But we have to steer it in a way where we're not afraid of the misuse of this, where it doesn't become the most Orwellian technology we've ever introduced to society, where we get the upside potential and we mitigate against the downside risk by putting into place safeguards now, before it's in every pair of headphones you wear, in every AirPod, in every watch, and every VR and AR device that you don.
Yeah, I mean it's interesting also, right, because I would imagine we'll get to a point
fairly soon where a lot of the devices that we buy just for our own convenience have this
technology in it.
We didn't buy it because of that.
We may not be aware of the fact that it actually has this capability in it because we're not
really quote using that for our own benefit. Right. But the sensors are there.
You know, it's detecting and decoding potentially in the background, and the question is, like, are we okay with that? And even if we have no interest in that information, if we're not using it in a meaningful way, if that's happening and then that's being passed outside of our immediate ecosystem or device, you know, like, are we aware of that? Are we okay with it?
And I'm not aware of a lot of conversation
happening around this.
Yeah, no, I mean, I think that's right.
Now, I think, you know, many of the companies
will start by marketing much more directly to say,
like, it has these capabilities and that's why you're buying it.
But if the next generation of AirPods is packed with sensors and some of those sensors are
brain sensing sensors, and you can either choose to interact with that on your health
app or not, but it's still collecting the data all the same, or you buy a virtual reality
headset and the way that you navigate through the game is by thinking about it, but you're
not thinking about what that actually means, about how leaky your brain is now with respect
to the data that can be gathered about it.
Should we have much more explicit consent?
Should we have much more explicit notice to people?
And should there be different ways that different categories
of data are treated? Right? You might be fine, and a lot of my
students, they, you know, when I ask them how they feel about
personalized algorithms that, you know, figure out what they
like, for the most part, they seem to be fine with it.
They're like, okay, you have to collect a huge amount of data from me, but you're giving me products that I actually like, and I'm only seeing fewer that I don't? Great. My feed is more specialized to me. I'm
okay with that. It's what that data enables in ways that are frightening or
problematic. And some of the examples I go into in the book, like in China, there's
already educational classrooms where the students are required to wear brain-sensing
headsets to track their focus and attention.
That data is given to the teachers, it's given to the parents, it's given to the state, and students have been punished for, you know, what their brain metrics reveal.
And just, you know, imagine being in an autocratic regime where your child and their brain activity are being monitored, you know, the entire day that they're in the classroom. What does that do to their ability to think freely, to, you know, develop and grow in the way that a child should be able to develop and grow?
Or in the workplace where, you know, people are increasingly used to surveillance of,
you know, their productivity tools on their laptops, but suddenly your brain sensing earbuds
are also something that your employer has access to
and the kind of informational asymmetry that creates
or the increased pressure that creates to stay, you know,
focused and paying attention the entire day,
even if that's not the best thing for you
or for the bottom line,
or they start to see decline in mental health,
then what do they do with that information?
Or if health insurance companies have access to that information, your car insurance company
has access to the information, right?
Like it's not hard to start to see how this becomes dystopian pretty quickly if there
aren't any limitations on the kinds of data each of these different entities can have
access to.
And nothing really prevents these companies
from selling your data to all of those third parties.
You know, there's a couple of states in the U.S.
that have started to move in the direction of adopting laws
that protect neural data,
and there's some international treaties and accords
that are underway.
But, you know, it's largely the Wild West
when it comes to brain data.
Again, it's exciting and terrifying simultaneously, you know.
It will enable so much.
And at the same time, you know, it just opens so many questions, you know.
The notion of, you know, the typical person.
I remember hearing some data about, you know, like the types of thoughts that a typical
person, just a normal, everyday person, well-adjusted, great life,
and yet that typical person is also going to have some
pretty warped and pretty dark thoughts,
like here and there, and sometimes more here and there.
They'll never do anything about it.
They'll never act on it.
And they may actually feel like,
well, that's kind of weird and dark,
not realizing that actually the vast majority of people
have those same types of thoughts.
They're passing in and out, and we'll never do anything about them.
It's just sort of like part of the human condition.
But if those thoughts register, like if you're hanging out wearing your smart glasses and
just walking around, those thoughts register and then that information can get passed on
to other people.
Does that then raise red flags with potential partners, potential employers, or, like, somebody who, you know, is considering you as a student at a university, and they look at your application and then they have your neural data that gets passed to them also, to try and make sure that they have a safe class? On the one hand, yes, we all want safety and we want ease, we want the
best. But at the same time, you know, if you start to just judge people on what are, quote,
aberrant but everyday and very common thoughts that happen in a brain, it just, it gets really
spooky.
Yeah. I mean, you know, you raise so many interesting points within that, right, from the misclassification of neuroatypicality, right?
I mean, all of us have thoughts that, you know, like a good example, like every now
and then, you know, I just think like, okay, I'm going to strangle my husband, right?
I don't really think that, right?
I'm never going to actually act on that, but I don't always have, you know, just kind and lovely thoughts about my husband
every single day and every single second of every day. And you can just imagine for yourself
the thoughts that you have. You know, somebody walks by, you have an unkind thought. Like,
whatever it is, we're not always proud of every thought that pops into our head, but
it's our actions that we want to be judged on, not every thought that pops into our heads.
And yet we could quickly get to this world where we're judging people based on their
thoughts.
And that's not so unbelievable, right?
If you already look, like I talk about this in my chapter on your brain at work, you know,
there's already companies that are doing personality and cognitive and neural testing based on neuroscience
for screening and
for hiring candidates in the workplace.
And the theories that they're built on are all based on
trying to typify how your brain works.
And that together with a device that's put onto your head or
eye tracking data that's making inferences about what's
happening in your brain as you're answering those questions,
this is all within the realm of what's already happening. And so, you know,
given how afraid we all are, I'm a parent, you know, I worry every day when my kids go to school
with all the school shootings. Is it really so far-fetched for us to think that there's going to start to be increased screening of people for safety, and that we start to label people in particular ways?
There's a researcher by the name of Kent Kiehl who for a long time has been studying
psychopaths and has been characterizing their brain activity and came out with some really
controversial findings a few years back where he showed that the differences he sees in
brain scans in psychopaths who are in jail, he can start to see those differences as early as five years old. Now, he hasn't followed those five-year-olds
to adulthood to see if they end up in prison, but, you know, it's not hard to imagine people
taking that kind of research and applying it to say, okay, we have an incredibly competitive
and selective process for this private school. You know, you have to submit this kind of
data for us to see: are you a danger to our classroom community? Or are you the, quote, type of student that would thrive in this environment? Which brings in all sorts of opportunities for bias and all sorts of other stuff.
Well that's the thing, right, is that it's the coded bias that can happen in
this space, right? It's like nobody's gonna tell you that the reason
that they didn't bring you in or hire you
or that they fired you was because of what
your brain metrics showed.
Instead, they're gonna say you weren't the right fit,
you weren't being as productive as you should be,
we're just having structured layoffs, like whatever it is,
even though they're using this data
to make those kinds of decisions.
And then on the other side, you know,
there may be some real value for somebody
to actually be using data like this.
It could be.
For both people.
Maybe you actually figure out, we actually legitimately are not the right fit for the type of opportunity that this is, just given the way that your brain functions.
Or optimistically, there's all kinds of brain wellness
programs that companies have implemented.
So instead of using it to penalize people
or to invade on
their mental privacy, you actually use it to provide more
services that help people who are struggling with stress or
struggling with mental health to be able to have the resources
they need to get treatment and to be able to regain the kind of
self-determination of their lives that they might want.
Yeah.
I mean, maybe you're able to actually pick up, you know, that you have
some young employee there who's just pushing and working nonstop and they're being told,
like, don't do this.
We don't expect this of you, but the script in their brain is like, this is how I get
ahead.
And you can start to actually detect maybe, like, this person is tipping towards mental
illness or burnout or something like this and we need to intervene because we need to
help them because they're not helping themselves.
Well, I mean, in fact, I get into some of those use cases in
the book where I talk about cognitive load and overload,
that already, you know, the future of a more positive
workplace could be one in which you can actually detect
cognitive overload, which has been, you know, attributed to
not just stress and mental health, you know, concerns for the individual, but also safety, right, their own safety and the
safety of the others that they're working around.
And to be able to say like, oop, you're reaching cognitive overload.
There's a new company that's just launched headphones that have EEG sensors. And one of the features that they've enabled in the app is to give you a signal when you're reaching, like, a level of overload, when it's time for you to take a brain break, both to be able to recover and to get to optimal levels of productivity, to, you know, the right balance of stress versus, you know, kind of drive versus burnout. And, you know, I think those are really positive possible use cases of being able to give you the feedback that you need, that you're not getting internally from yourself, about, hey, it's time for a break.
This is the best thing for you to actually achieve the goals that you want to achieve.
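A rough sketch of what that kind of brain-break nudge can look like in code: keep a rolling load index from the headset's readings and fire a reminder when it stays high. The window size, threshold, and readings below are invented for illustration:

    from collections import deque

    WINDOW = 10           # number of recent readings to average (assumed)
    LOAD_THRESHOLD = 0.7  # assumed overload level on a 0-1 scale

    def brain_break_monitor(load_readings, remind):
        recent = deque(maxlen=WINDOW)
        for load in load_readings:  # e.g., one value per minute from the headset app
            recent.append(load)
            if len(recent) == WINDOW and sum(recent) / WINDOW > LOAD_THRESHOLD:
                remind("Sustained high cognitive load: time for a short break.")
                recent.clear()

    # Example usage with made-up readings.
    brain_break_monitor([0.5, 0.6, 0.8, 0.9, 0.85, 0.9, 0.8, 0.75, 0.9, 0.95, 0.9], print)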
So where do we go with all this?
You know, where there's a really clear acceleration in the technology and what it's able to do.
There are a lot of really interesting and fascinating benefits, maybe even life-saving
or life-changing benefits.
There are some real big concerns about what happens
with all of the data that comes out of these.
What's the way forward right now?
Like if you look at the next three, five years,
what would you want to see happen?
So first and foremost, I think we have to put into place
a recognition of basic rights for individuals to flip the narrative, to empower people.
And those are rights around what I call cognitive liberty, the right to self-determination over
your brain and mental experiences.
And from a human rights perspective, that looks like trying to secure to people a right
to privacy, which would include a right to mental privacy, a right to self-determination,
which gives you both a right to access and change your brain if you choose to do so, and a right
to freedom of thought to protect you against interception, manipulation, and punishment
of your thoughts.
On the other side, we have to shore up the capacities for cognitive liberty, you know,
being able to help people navigate an increasingly noisy world.
So these are things like really starting to develop our
interoceptive capabilities, our mental agility, and our
relational intelligence so that we can navigate this world and
be able to use the freedoms that are being protected by
cognitive liberty by having the capacities that we need to be
able to navigate the world, to be able to think critically, to
be able to think freely, and to engage with technologies in ways that are intentional and productive for us.
So that looks like educational changes that we need to put into place.
It looks like practices in our everyday lives to be able to create this better mind-body
connection that we're losing because of the way technology is being developed and designed.
And so I think there's a lot that needs to happen at the individual level and there's
a lot that needs to happen at the societal level to bridge the gap of
where we are right now versus where we need to go.
Yeah. Are you bullish on the policy level?
Bullish? Not at this moment in time. I think that the US is going into a period where what we're going to see is an experiment with much
more laissez-faire engagement with technology.
And what we see as early indicators of how technology companies are reacting to that
is really to push all of the obligations to individuals rather than providing any protections
to them.
And companies up until now have been monetizing all of our data and have done so without any
oversight.
The EU is moving in a very different direction of trying to put into place much stricter
safeguards and that could end up serving as a floor that companies have to adopt as certain
protections to be able to not have to navigate different
markets incredibly differently.
But what I see is kind of an unrestricted race by technology companies without a lot
of intervention by governments to try to put into place the right sticks or the right incentives
to realign that technology with human flourishing. So I'm not that bullish in the exact moment that we're talking, but, you know, I also
feel like if I don't maintain optimism, it's hard for me to do the work that I do, which
is to continue to advocate for the changes we need to put into place.
The one thing that gives me a sliver of optimism is that the few states and countries that
have adopted specific protections around brain
and mental experiences have done so in a bipartisan fashion. They seem to recognize the exceptional
nature of being able to peer into and to change our brains. And so it gives me some optimism
that at least in this one space there is a concern.
Like, imagine being a politician and having all of your
thoughts read. That would be really bad for them, right? So
they sort of get that there needs to be some protections in
the space. And so that gives me a good sliver of hope to keep
working on.
I mean, that makes so much sense. You know, it's like if
you see this technology and you're like, this would help me
do X, Y, or perform at this level, or, like, just live so much better, more comfortably.
But it also potentially exposes me,
but I really want the benefit of that.
Then there will be an incentive to say,
like I wanna be able to access this,
but I also, we need to be able to protect
against the dark side, the downside here.
So it's such a fascinating moment.
Yeah.
It feels like a place for us to come full circle
in our conversation as well.
So in this container of Good Life Project,
if I offer up the phrase, to live a good life, what comes up?
It's to live a life of purpose and meaning.
And I think increasingly as I write in this
space, I find that it's not a destination, right? It's about the journey itself. And
the more we can do to enable people on that journey to do so freely, to be able to do
so with intentionality, and to be able to do so with their full capacities, I think the
more likely that we can live a good life individually and collectively.
Thank you.
Before you leave, if you love this episode, safe bet you'll also love the conversation
we had with Adam Grant about rethinking things.
You'll find a link to that episode in the show notes.
This episode of Good Life Project was produced by executive producers Lindsay Fox and me,
Jonathan Fields.
Editing help by Troy Young. Christopher Carter crafted our theme music, and special thanks
to Shelly Del Bliss for her research on this episode.
And of course, if you haven't already done so, please go ahead and follow Good Life Project
in your favorite listening app or on YouTube too.
If you found this conversation interesting or valuable and inspiring, chances are you
did because you're still listening here. Do me a personal favor, a seven-second favor: share it with just one person. I mean, if you want to share it with more,
that's awesome too, but just one person even, then invite them to talk with you
about what you've both discovered, to reconnect and explore ideas that really
matter, because that's how we all come alive together.
Until next time, I'm Jonathan Fields, signing off for Good Life Project.