Ologies with Alie Ward - Neurotechnology (AI + BRAIN TECH) with Nita Farahany
Episode Date: August 2, 2023
Machine poets. ChatGPT fails. Neurological surveillance. Brain implants that treat depression. Is it scary? Cool? Let's firehose some questions at Duke Law professor, neuro and bioethicist, author and TED speaker Dr. Nita Farahany. She explains the history of AI, the dawn of chatbots, what's changed recently, the potential for good, the possible perils, how different lawmakers are stepping in, and whether or not this is scary dinner party conversation. Do you have feelings about AI and brain implants? Hopefully, and we talk about why.
Buy Dr. Nita Farahany's books: The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology (2023) and The Impact of Behavioral Sciences on Criminal Law (2009)
Dr. Farahany's 2023 TED Talk: Your right to mental privacy in the age of brain-sensing tech
Follow Dr. Farahany on Instagram, TikTok and Twitter
A donation was made to Human Rights Watch
More episode sources and links
Smologies (short, classroom-safe) episodes
Other episodes you may enjoy: Field Trip: A Hollywood Visit to the Writers Guild Strike Line, Neuropathology (CONCUSSIONS), Attention-Deficit Neuropsychology (ADHD), Molecular Neurobiology (BRAIN CHEMICALS), Radiology (X-RAY VISION), Futurology (THE FUTURE), Gizmology (ROBOTS), Diabetology (DIABETES)
Sponsors of Ologies
Transcripts and bleeped episodes
Become a patron of Ologies for as little as a buck a month
OlogiesMerch.com has hats, shirts, masks, totes!
Follow @Ologies on Twitter and Instagram
Follow @AlieWard on Twitter and Instagram
Editing by Mercedes Maitland of Maitland Audio Productions and Jarrett Sleeper of MindJam Media and Mark David Christenson
Transcripts by Emily White of The Wordary
Website by Kelly R. Dwyer
Theme song by Nick Thorburn
Transcript
Oh, hey, it's the bunny that you swear you saw on the lawn, even if no one else believes you, Alie Ward, and here's Ologies.
Hey, am I a real person?
Unfortunately, I am.
Am I intelligent?
That's up for debate.
But this week, we are taking a dive into artificial intelligence and brain data with
a scholar in the matter.
So listen, the past few months have been a little surreal.
Photoshop's out there generating backgrounds to cut your cousin's ex-girlfriend
out of your wedding photos.
ChatGPT is writing obituaries and frankly, a lot of horse pucky.
There's also groundbreaking labor strikes in the arts, which we covered in
the field trip episode from the WGA strike lines.
If you haven't heard it, I'll link it in the show notes.
But I heard about this guest's work, and I said,
please, please, please, talk to me about how to feel about AI.
Are we farting around the portal to a new and potentially
shittier way of living?
Or will AI say, hey, dipshits, I ran some simulations.
And here's what we have to do to unextinct you
in the next century.
We're gonna find out.
So this guest has studied law at Dartmouth, Harvard,
and Duke, and been a professor at Vanderbilt University.
And is now at Duke's Institute for Genome Sciences
and Policy.
She recently delivered a TED Talk called
Your Right to Mental Privacy in the Age of Brain Sensing Tech and just
authored a new book called The Battle for Your Brain, defending the right to think freely
in the age of neuro-technology.
But before we chat with her, a quick thank you to patrons of the show who support at patreon.com/ologies for a buck or more a month and submit their questions for the second half.
And thank you to everyone in OlogiesMerch.com shirts and hats and such. Of course, you can also support the show just
by leaving a review, and I may delight you by reading it, such as this one left this week by environmental lawyer Harrison, who wrote a review calling Ologies a gooey, rip-roaring good time. So yeah, I read them all. Thank you, Harrison, for that. Okay, neurotechnology. Let's get into this. How the brain interacts
with technology and also techno neurology, how tech is striving to replicate and surpass
human intelligence and what that means for us all. So let's boot up our way into a talk about texting, scrolling, cheating, brain implants,
mental health, doomsday scenarios,
congressional hearings, apocalypse potential,
medical advances, biometric mining,
and why suddenly artificial intelligence is on our minds
with law professor and neuro technologist,
Dr. Nita Farahany.
Nita Farahany, she/her. So if anything, I think I'm a great dinner guest, right? Because they're fascinated.
I definitely should clarify that there's nothing scary about you. The information that you hold is like, oh no, do I want to look, do I not want to look, do I want to look?
It's thrilling like a horror film. Yeah, it's like people can't look away
Yeah, right and that's good because I don't want them to look away
I want them to know but at the same time what I usually get is like wait
This is real like what you're talking about is it actually exists and people are really using it and
employers are really using it and governments are really using it and wait, what?
Yeah.
Do you spend a lot of your time chatting with people trying to warn them or calm them down?
Yes.
So, on the one hand, I am trying to raise the alarm and to help people understand that this
whole area of being able to decode and really hack and track the brain is a new frontier
and the final frontier of what it means to be human and privacy and freedom.
And at the same time, I don't want to make people have the reactionary approach to technology,
which is like, okay, then let's ban this because the promise is also extraordinary.
And so I am very much equal parts.
Like, let me help you understand not only what the promise is and why you're likely to adopt it,
but why before you do so and before we as a society at scale adopt this technology
that we make some really important choices
that will actually make it good for us
and not the most Orwellian, frightening, scary thing possible.
I feel like there's a few topics
that have this much true ambivalence of so much good
and so much potential for misuse.
Did your brain become a lawyer brain because of those sort of philosophical conundrums?
What drew you to this kind of deep, deep thought?
Yeah, I've always been driven to the questions
that are at the intersection of philosophy and science.
Like in high school, I was really interested in the science,
but I was a policy debater in college.
I was a government minor and science major. And I did do lab stuff, but largely things that were policy.
So Nita got several graduate degrees studying law
and science, behavioral genetics and neuroscience,
the philosophy of mind, neuroethics, bioethics,
and even reproductive rights and policy in Kenya.
And she said, all her work seems to gravitate
toward this intersection of philosophy and law and science
because she had fundamental questions like,
do we have free will and do we have like fundamental autonomy
and freedom and how do we put into place the protections?
But I've always been fascinated and really interested
in the science and the technology itself. I've never been a Luddite.
I've always been somebody who's an early tech adopter, but clearly see what the downsides
are at the same time.
Where was tech at when you were getting that roster of graduate degrees?
Where were we at?
Were we at emails?
Were we at video calls?
Yeah, so we were not at video calls.
We were at emails.
The internet existed.
We used it.
We all had computers, but we didn't have cell phones.
I got my first cell phone after I graduated from college,
like the year after and I had a flip phone.
And I thought that was super cool.
You know, I could type out a text message
one character at a time.
Oh, T9.
Yeah.
I had a gold medal in T9.
Nice.
I could do it without even looking at the phone, where I found it harder when we had a keyboard.
Yeah.
And then I had a palm pilot, like as the precursor to the iPhone.
And then I stood in line the first day that the iPhone was being sold and, you know, got
one of the first iPhones in my hand.
So I've seen the evolution of tech, I guess, as I was getting all of those degrees.
And what about in terms of neuro-technology, have you seen kind of an exponential growth pattern
in terms of technology? Is that growth pattern still valid or have we surpassed it?
Slowly over the past decade or two, neurotechnology has been getting better. And the ways in which neurotechnology has been getting better have largely been kind of hardware-based,
which is the sensors are getting better.
Sometimes the software has been getting better
to be able to filter out noise, the algorithms
to be able to pick up brain activity
without having muscle twitches or eye blinks
or interference from the environment
to pick up different information.
All of that's been getting better.
But suddenly, we've gone from what were steady improvements to, just in the past five years, much more rapid advances.
Generative AI is making things move in these seismic shifts, like where you
suddenly have just a massive leap in capabilities.
Just real quick, before we descend into the depths of ethics
and possible scenarios, what is generative AI?
What is AI?
And what's just a computer computing?
OK, I looked this up for us, and then I took a nap
because it was confusing.
And then I tried again.
And here's what I sussed out.
So artificial means it's coming from a machine or software
and intelligence, fuck, I mean, that depends on who you ask.
But broadly, it means a capacity for logic,
understanding, learning, reasoning, problem solving,
and retaining facts.
So some examples of AI are Googling or search engines,
the software that recommends other things
you might like to purchase, navigating via self-driving cars,
your Alexa understanding when you scream, Alexa stop because she tried to get you to subscribe
to Amazon Prime Music again.
It also includes computers being chess nerds, that's AI, and generating artwork.
And according to some experts, AI can be separated into a few categories, including
on the base level, reactive machines,
and those use existing information, but they don't store or learn anything.
Then there's limited memory AI that can use precedent to learn what choices to make.
There's something called theory of mind AI, and that can try to figure out the intentions
of a user or even acknowledge their feelings, like if you've ever told Alexa to get
bent in a lot of other words, and then she sasses you back.
If you'd like to tell me how I can improve, try saying, I have feedback.
There's also a type called self-aware AI that reflects on its own actions.
And then fully autonomous is kind of the deluxe model of AI.
And that just, that does its own thing.
That sets its own goals, set it and forget it if you can.
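If it helps to see those first two buckets side by side, here is a tiny toy sketch in Python. The thermostat scenario and every number in it are invented for illustration, not from any real system:

```python
# Toy illustration of two AI buckets (the example is invented for clarity).

def reactive(temp):
    # Reactive machine: responds to the current reading only;
    # nothing is stored, nothing is learned.
    return "cool" if temp > 72 else "idle"

history = []  # past (temperature, user_override) pairs

def limited_memory(temp):
    # Limited-memory AI: uses precedent to adjust its choice.
    overrides = [t for t, changed in history if changed]
    threshold = sum(overrides) / len(overrides) if overrides else 72
    return "cool" if temp > threshold else "idle"

history.append((70, True))   # user overrode the default once...
history.append((69, True))   # ...and again, so the threshold adapts
print(reactive(71), limited_memory(71))  # -> idle cool
```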
So when did things start speeding up?
When did they start careening toward the future like this?
When computers got faster and smaller and better
in the last 10, but really kind of two or three years.
So better hardware means more processing power.
There's also cloud storage, and that adds
up to something called deep learning, which kind of sounds creepy, like a hyper-vigilant
mannequin.
But deep refers to many layers of networks that use what look like these complicated flow
charts to decide what actions to take based on previous learning.
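To make "layers" a little less hyper-vigilant-mannequin and more concrete, here's a toy sketch in Python. The weights are random made-up numbers, not a trained model; it just shows data flowing through stacked layers, which is all the "deep" means:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    # One layer: a weighted sum of the inputs, then a simple cutoff rule
    # (the "nonlinearity"). Real networks learn these weights from data.
    w = rng.normal(size=(x.shape[0], n_out))
    return np.maximum(0, w.T @ x)

x = rng.normal(size=4)   # a toy input: four numbers
h1 = layer(x, 8)         # layer 1 transforms it...
h2 = layer(h1, 8)        # ...layer 2 transforms that...
scores = layer(h2, 2)    # ...and a last layer scores two possible actions
print("chosen action:", scores.argmax())
```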
So, that's kind of what led up to these startlingly human-like generative AI outputs
and deep fakes, where they can just straight up
put Keanu Reeves' face on your mom
and then confuse the big Jesus out of me on TikTok,
or ChatGPT, which is one language model chatbot.
Computers are starting to pass bar exams.
Maybe they're writing the
quippy flirtations on your dating app. Who knows? Meanwhile, less than a hundred
years ago, a lot of the US didn't have flush toilets in case you feel weird
about how weird this feels because it is weird. Evolutionarily, our flabby
beautiful little brains can barely handle the shock of a clean river coming out
of a garden hose, let alone some metal and rocks that are computers that we're training
to potentially kill us.
We don't know how to deal with that.
So pattern recognition using machine learning algorithms has really pushed things forward
rapidly.
Like, a lot of brain data happens in characteristic patterns, and those associations between, like, what is a person seeing or hearing or thinking, how are they feeling, are they tired, are they happy, are they sad, are they stressed.
Those things have been correlated with huge data sets and processed using machine learning algorithms in ways that were impossible before.
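As a toy illustration of that correlate-patterns-with-labels idea, here's a sketch in Python. Everything in it is fabricated stand-in data, not real brain recordings; the point is just that labeled examples define an average pattern per mental state, and a new reading gets matched to the nearest one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend feature vectors (e.g., signal power in two frequency bands),
# with made-up average patterns for each labeled mental state.
relaxed  = rng.normal(loc=[1.0, 0.2], scale=0.1, size=(50, 2))
stressed = rng.normal(loc=[0.3, 1.1], scale=0.1, size=(50, 2))

centroids = {"relaxed": relaxed.mean(axis=0),
             "stressed": stressed.mean(axis=0)}

def classify(reading):
    # Pick the label whose average pattern is nearest to this reading.
    return min(centroids, key=lambda k: np.linalg.norm(reading - centroids[k]))

print(classify(np.array([0.35, 1.0])))  # -> stressed
```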
I can read your mind.
Then you have generative AI and chat GPT that enters the scene in November.
And all of a sudden, the papers that are coming out are jaw dropping. Data that's being processed
by generative AI to reconstruct what a person is thinking or hearing or imagining or seeing
is next level. Right? I mean, my book came out March 14th, 2023. All of a sudden, what was happening was
continuous language decoding from the brain in like really, really high resolution using GPT-1,
not even the most advanced GPT-4. Visual reconstruction of images that a person is seeing in ways that were
much more precise than anything that we had seen previously. And that's happening
at this clip that is just, I think, extraordinary. It's just so much faster than even I would have imagined, than even I could have anticipated, having written a book about the topic.
That was literally going to be my next question, because when a person writes a book, that doesn't
happen overnight. You've been working on this book probably for a couple of years. Did you have any idea that your book would be so closely timed to such a
giant leap in terms of public perception and awareness of AI? I mean, it couldn't have
timed it better. Well, I mean, of course, I'm a futurist. I was predicting it perfectly, right? No, I mean, I wish, right?
In truth, my book is like a year and a half late
from when I was supposed to turn it into the editor
into the publisher, but you know,
there was a global pandemic that got in the way
and a bunch of other things,
but I'm grateful that it didn't happen sooner
because I was both able to be part of what is a growing
conversation about the capabilities of AI and to see.
When you say to a person,
like, oh, yeah, also AI can decode your brain, you know, that really puts a fine point on it for
people to understand how quickly these advances are coming and to see how it's changing everything in
society, not just how people are writing essays or writing emails, but fundamentally unlocking the
mysteries of the mind that people never thought before possible.
And the risks that that opens up and the possibilities of mental manipulation and hacking and tracking.
Those are dangers that I think a year ago before people really woke up to the risks of AI,
they would not have been having the conversation in the same way that they are around the book.
And now they are having that conversation
seeing the broader context and seeing the alarm bells
everywhere, right?
Like, oh, wait, we really do need to regulate
or recognize some rights or do something.
So futurists are urging some foresight,
congressional panels have aired on C-SPAN
and there seems to be this kind of collective side eye
and like a hope that someone's on top of this, right?
So I mean, I think people are looking for some guidance.
And to have somebody come at it
from a balanced perspective, like, wait a minute,
there's a lot of good here, and there's some serious risks.
And here's a potential pathway forward.
I think instead of like pause, which everybody says,
like, of course, we can't just pause,
or a doomsday scenario without any positive,
like, oh, let's regulate AI.
I think we need voices at the table
who are thinking about it both in a balanced way,
but also are coming forward with like,
here are some concrete things we could do right now
that would actually help the problem.
So we know a few types of AI from Googling a source
for a research paper or digitally removing your cousin's
ex from your wedding photos.
But what about technology that's gathering data from our brains?
Let me give you the spectrum.
There's medical grade neuro technology.
This is technology that people might imagine in a doctor's office where somebody puts on an EEG
electroencephalography cap that has a bunch of different wires coming out of it and a bunch of gel
that's applied to their head and a bunch of sensors. That's picking up electrical activity,
which we'll get back to in a minute. Then there's the clunky giant machine, a functional magnetic
resonance imaging machine, which can peer deeply into the brain.
And somebody might have already
undergone an FMRI test for something
like a brain tumor to kind of look more deeply
into the brain.
And what that's picking up is changes in blood flow
across the brain, which tells us something
about different areas that are activated
at any particular time and what those patterns might mean.
So if you've never had an MRI,
I guess congratulations, that's probably good.
But this is magnetic resonance imaging.
And it's pretty exciting how these strong-ass magnets all line up the hydrogen atoms in
your body to go one direction.
And then they release them, and from that, they can see inside of your bow day.
Now, an FMRI is a functional MRI, and to put it in super simple terms,
it's kind of like animation instead of a still picture,
but it's of your brain.
So when you see imaging examples
of how someone's melon illuminates
like a Christmas tree to certain stimuli,
that's FMRI technology tracking blood flow
to different regions of the brain.
And this FMRI technology is used in a lot of neuro and psychology research. And then
there's something like functional near infrared spectroscopy, which is more
portable, and it's also measuring changes in the brain, but it's using optical
and infrared lights in order to do so. And that functional near infrared
spectroscopy looks for changes in oxyhemoglobin and deoxyhemoglobin
in the brain.
These words might not matter to you right now as you're cleaning your shower grout or
you're carpooling.
But in clinical settings, it comes in handy for patients with strokes or learning about
Alzheimer's or Parkinson's or even anxiety or a traumatic brain injury, which my brain
would like you to know
I've had. And I will link the traumatic brain injury, or the Neuropathology, episode about my hella gnarly concussion I got last year in the show notes. But yes, there are a lot of ways to
get data from a brain, including CT scans and PET scans with radioactive tracers. But what about
non-medical uses? Do they exist?
Oh boy howdy, do they.
If you then look at what's happening in the consumer space, in the consumer space, you
take the 64 or 120 electrodes that are in a big cap, and then you have a couple of them
that are put into a forehead band or a baseball cap, or increasingly what's coming is brain
sensors that are embedded in everyday technology.
So you and I are both wearing headphones and the soft cups that go around our ears
are being packed with sensors that can pick up brain activity by reading the electrical activity through our scalp.
You want my tinfoil hat?
Or if we were wearing ear buds inside of our ears instead,
embedding brain sensors inside of those that can pick up the electrical activity in our brains as we're thinking or doing anything.
And those become much more familiar and much more commonplace very quickly.
So there's just a few of those products that are on the market, but that's where most
of the big tech companies are going is to embed brain sensors into everyday devices like
earbuds and headphones and even watches that pick up brain activity from
your brain down your arm to your wrist and pick up your intention to move or to type or to swipe or something like that. So to use a medical analogy, you know continuous glucose monitors.
These are a powerful tool for diabetics to monitor their blood sugar levels and their insulin needs
and we covered those in the two-part Diabetology episode with Dr. Mike Natter.
But now continuous glucose monitors are starting
to become available to people without diabetes,
just to better understand their metabolisms
and their dietary responses, their mood and energy.
So all of this neuroimaging and all this data was just used in clinical and research settings by people in crisp lab coats carrying metal clipboards,
but it's starting to pop up in the market now.
This is great news, right?
The understanding of your brain?
Yeah, yeah, but not all the research
in consumer applications is solid.
And some make some wild claims of efficacy.
Others argue that if a device can enhance our moods and sharpness cognitively, but costs some serious cash, doesn't that just widen a privilege gap even further?
But I guess, so does college.
I don't know.
In the US, you need a GoFundMe to pay for chemo.
So we've got a lot of pretty large systemic fish to fry.
But if you've got money, you can buy EEG headsets
that track your mood and emotions and stress
for a few grand.
There's others that track your heart rate
and brain waves for sleep and meditation.
There are VR gaming sets that track brain waves
and even a Mattel game called Mind Flex.
You can buy for like 120 bucks, but Nita says,
All of those consumer-based technologies pick up a little bit of like kind of low-resolution information
right now. They pick up if you're stressed, if you're happy, or if you're sad, if you're tired,
like it maybe picks up that your mind is wandering, and you're kind of like dozing off. And then things like FMRI pick up much more precise information.
Now that could just be a matter of time.
It could be that as machine learning algorithms
and generative AI get applied to the electrical activity
in the brain, that it'll get better and better and better.
And it's interesting,
because in a way you could think about AI
as being the convergence between computer science and neuroscience.
So computer scientists have been designing algorithms that can process information in very
narrow ways, and they're very good at doing specific tasks.
So for example, a doctor or a pathologist who's looking at many different samples of tissue
to figure out if it looks cancerous or not,
can only see so many samples in a lifetime.
And so they've marked them and labeled the data.
And a machine learning algorithm can be trained on that data, which is like, here's thousands
of images that are cancer and not cancer.
Now here are new images, predict whether or not they have cancer.
And they become very, very good because they can process
millions and millions of images and see far more images
and get much better at being able to do that specific task
of identify if something is cancerous.
So those tasks are relatively simple for machines
to learn and execute.
Computers are like, child's play.
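If you want to see that train-then-predict loop in code, here's a rough sketch using scikit-learn. The numbers are made-up stand-ins for labeled tissue images, and the "label rule" is invented so the example runs; it's an illustration of the idea, not any real diagnostic pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Each "image" is reduced to five made-up measurements; 1 = cancer, 0 = not.
X_train = rng.normal(size=(1000, 5))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # fake labeling rule

# "Training": the model learns which patterns go with which label.
model = LogisticRegression().fit(X_train, y_train)

X_new = rng.normal(size=(3, 5))   # new, unlabeled samples
print(model.predict(X_new))       # predicted cancer / not-cancer for each
```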
But the human brain isn't so narrow
and task-specific. And neuroscience has long understood that the connections that the
brain makes are much more multifaceted. They're much more complex. And so the modern types
of AI are built on how the brain works. They're built on what are called
neural networks. So this is a deep learning model which is instead of that very
specific task of like do this, do that. It's meant to take a huge amount of
information and to learn from that information and then do what we do which is
to predict the next thing or to kind of understand
where that's going or to make inferences from more of a deep learning perspective.
So, it's more than machine learning, like the pathology example she gave.
So remember deep learning.
So neural networks are modeled after biological brains, and they have nodes, like neurons,
that consume input that they learn from, and then it's processed in several layers or tiers, AKA it's "deep," to come up with a result or an action.
And things like chatbots or facial recognition or typing dog into your phone's photo album to see what goodness comes up.
Or speech to text, those are all done by neural networks and AI that we're already using,
and they seem commonplace after having them for just a few years.
But since late last year, we're seeing them create more like how the human brain might.
And those insights about the brain and neural networks have informed this new class of AI,
which is generative AI.
Generative AI is different in that it is both built
on a different model and it has much greater flexibility
in what it can do.
And it's trying to not say like this is cancer,
that isn't cancer, but to take a bunch of information
and then be asked a question and to respond
or to generate much more like the human brain reasons
or thinks or comes up with the next solution.
And that's exciting and terrifying.
I'll say.
What about the information that say,
artistic AI is getting?
Are they scrubbing that from existing art?
And in the case of say, the writer strike,
where you see writers saying, you cannot take my scripts
and write a sequel on something without
me.
If you're curious about what is up with these strikes, what is going on in the entertainment
industry, including the WGA, or the Writers Guild of America, strike, which started on May Day of this year, and it was joined in recent weeks on the picket lines by SAG-AFTRA, which is a screen actors' guild.
Again, we did a whole episode explaining what is going on.
It's called Field Trip WGA Strike.
That'll be linked in the show notes.
So if you watch TV or movies or you ever have, listen to that episode because it affects
us all.
And these entertainment labor unions are known as the tip of the spear for other labor
sectors.
Your industry may be affected or might be next.
I'm really interested in what happens in this space,
not just because of the writers themselves
and hoping that they manage to succeed in actually getting fair, appropriate treatment,
but also because it's gonna be incredibly telling for every industry
as what happens when workers demand better conditions and better terms, and the result is greater
experimentation with generative AI to replace them. But why is this such a sudden concern? Why does it
feel like AI has just darkened the horizon and thundered into view and we're all cowering at its advance? Is this the first
act of a horror film?
So where does it come from? They're not totally transparent. We don't know
all of the answers to that, right? But we do know that these models have been
trained, meaning there's been billions, trillions, we don't know, right, the exact number of parameters.
That is prior data, which has been used, meaning the material that the
machines learn from.
And that could be prior scripts.
It could be prior books.
It includes a bunch of self-published books, apparently, that are part of it,
prior music, prior art, potentially a whole lot of copyrighted material that has been
used to inform the models. Once the models learn, they're not drawing from that information
anymore, right? That information was used to train them, but in the same way that like you
don't retain everything you've ever read or listened to, and your creativity may be inspired
by lots of things that you've been exposed to.
The models are similar in that they've been trained
on these prior parameters, but they're not storing
or drawing from or returning to them.
It's as if they have read and digested
all of that information.
And I was talking with an IP scholar
who I like and respect very much.
And his perspective was, how is that different
than what you do, right?
You write a book and you read
tons of information and there's tons of information you cite and there's also tons of information
that you learned from, that inspired you, that shaped how you write and think that you don't actually
cite. And is that actually unfair or violating the intellectual property or somehow, you know,
not giving a fair shake to every source that you have
ever learned from or every input that you've ever learned from. I mean, it's an interesting and
different perspective, right? I don't have the answer to it yet. I'm really interested to see how
this particular debate evolves. What do other people think who aren't me? So a recent study reported
that about 50% of AI experts think there's a 10% chance of unchecked AI causing the extinction of our species
with AI getting into a little sneaky elf on the shelf shenanigans like playing God or establishing
a political dictatorship. And the Center for AI Safety issued a statement. It was signed by dozens
of leaders in computer science and tech, including the
CEO of Google's DeepMind and Bill Gates and the guy who started ChatGPT and the director
of a center on strategic weapons and strategic risks.
And this statement said very simply, quote, mitigating the risk of extinction from AI should be a global priority alongside other societal
scale risks such as pandemics and nuclear war.
So that's a pretty big statement, and other experts draw parallels between humans and chimps: we are the chimps, and AI is us. So guess who's making who wear diapers and live with Michael Jackson?
Yeah. Although,
of course, there are computer scientists saying that we need to calm our collective boobies,
and that AI isn't advanced enough to threaten us. Yet. Yet. I love yet. Yet is so comfy. Yet is
the space between the alarm clock and the panic of racing out the door because you'll be late to a job interview.
Ah, yeah.
Mmm, just yummy.
Just fuck it.
I think from a governance perspective in society, we have near-term risk that we need
to be safeguarding against.
And this is near-term risks like bias and discrimination and inaccuracies.
I don't know if you saw the story recently about a lawyer who filed a brief in a case
before a federal judge where the pleading for the case had been entirely written by ChatGPT,
which included a whole bunch of invented cases.
And the invented cases, like he hadn't gone
and cite-checked them or read them.
In fact, he has this dialogue where he's asking
ChatGPT if the cases are real or not, rather than, like, checking them himself.
And he was not doing this to prove a point.
No, just straight up.
Just straight up dumbass just did it.
I mean, and then the other side comes back
and says, hey, judge, we can't find any of these cases.
The judge says you have to produce it and apparently he produces the full citations of
the made up cases.
Anyway, it finally goes back with the lawyer that admitted, like, I'm so sorry, this is
all apparently fabricated and it's fabricated not intentionally, but it's fabricated because
I generated it all using chat GPT.
Nita says who knows what will happen if and when more people start using bots to cut corners and no one fact-checks it.
And around June 15th, I saw a viral tweet about chat GPT not acknowledging that the Texas
and Oklahoma border was in fact influenced by Texas desiring to stay a slave state.
I told my husband, Jarrett. Your pod mom didn't believe it could get things so wrong, and then he proceeded to have like an hour-long fight and discussion with ChatGPT,
hoping to teach chat GPT that it has a responsibility to deliver accurate information.
I was like, dude, you're fighting a good fight, and I wish you luck.
Now, as for this lawyer that Nita mentioned, according to a May 2023 New York Times piece about it titled "Here's What Happens When Your Lawyer Uses ChatGPT."
The lawyer in question pleaded his own case within the case, telling a rightfully miffed
off judge that it was his first foray with a chatbot and that he was quote, therefore unaware of the possibility that its content could be false. And the
New York Times explains that ChatGPT generates realistic responses by making
guesses about which fragments of text should follow other sequences based on a
statistical model that has ingested billions of examples of text pulled from all over the internet.
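As a toy version of that guess-the-next-fragment idea, here's a sketch in Python using a dozen made-up words instead of billions of examples. Notice that it optimizes for "sounds like the training text," not for truth, which is roughly why invented court cases can read so plausibly:

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus."
text = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    followers[a][b] += 1          # tally: word b followed word a

def next_word(word):
    # Emit the most common follower seen in training.
    return followers[word].most_common(1)[0][0]

word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # fluent-looking output, with no notion of accuracy
```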
So ChatGPT is your friend at the party who knows everything, and then you find out that they're
full of shit and they're very drunk and maybe they stole your wallet and they
could kill your dog. Will they shit in the pool? It's anyone's guess but wow.
They are spicing up the vibe. This is not a boring party at all. It raises this
complex question about you know know, who is responsible?
And we've generally said the attorney is responsible, right?
The attorney is the one who is licensed to practice law.
They're responsible for making sure all of the work that they certify under their name is accurate.
Is there any liability for generative AI models?
Now, ChatGPT says, like, I'm not here to provide legal advice, and it's prone to hallucinations. Is that enough to disclaim any liability for ChatGPT?
Just a jacuzzi of hallucinating chatbots saying, whatever sentence they think you want to hear,
maybe pooping in there too. So what happened to that lawyer though? Did he get disbarred?
Did he have to grow a beard and move to Greenland? Does he make felted hats out of goat fur now?
No, no, he's fine.
He kept his job.
He was just fined five grand, which, if he had billed for the research hours that a chatbot really did, means he maybe still turned a profit on that deal.
But the lessons, those are invaluable.
Now if you appreciate nothing else today, I just want you to stare off at the horizon
for even 30 seconds and just say,
what a time we're living in. Hundreds of thousands of years of people getting boners and falling
in love made me a person standing on a planet at a time when there's plumbing, antibiotics,
electricity, there's domesticated cats, and I have a front row seat to some real madness.
What an era.
As for what we do, I don't know.
Aren't we being watched all the time anyway?
What are the watchers doing about this?
Well, forgive the patriarchal caricatures,
but where are Big Brother and Uncle Sam?
Are they working together on this?
Is there any incentive from like a governance perspective
to say, to step in and say like, we don't know how far this should go? Or does it just generate kind of more income for maybe big
corporations that can misuse it? So like hard to fight against that. So you know, it's hard to know,
right? There have been hearings that have been held recently by the government to try to look into
sort of both questions that you're asking, which is Uncle Sam and Big Brother, right?
So there were hearings looking at whether or not to regulate private
corporation use of generative AI models. And it was, you know, a very public hearing where Sam
Altman from OpenAI calls for regulation. If you're wondering why this is a big deal, so Sam Altman is the CEO of OpenAI,
which invented ChatGPT.
And he spoke at the Senate Judiciary Subcommittee
on Privacy, Technology, and the Law Hearing,
which was called Oversight of AI,
Rules for Artificial Intelligence,
that was in May of this year.
He also signed that statement
about trying to mitigate the risk of extinction.
And he told the committee that AI could, quote, cause significant harm to the world.
Papa ChatGPT himself.
My worst fears are that we cause significant, we, the field, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways. I think if this technology
goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the
government to prevent that from happening. And ultimately, Sam urged the committee to help
establish a new framework for this new technology. It was a surprisingly collaborative tone from most of the federal officials, who were questioning him very differently than in the social media contexts of the past. Meanwhile, in a different building.
That same day, a different hearing was happening, which most people weren't aware of, which
was federal use of AI.
And a lot of the discussion in that context was about how the federal government needs to
be innovating to use more AI in a lot of what they do and
to be modernizing what's happening.
Today, we'll be discussing how AI has the potential to help government serve, better
serve the American people.
Okay.
So, tonally, the Senate Homeland Security and Governmental Affairs Committee hearing,
which was called artificial intelligence and government, was a little bit more optimistic
like, hmm, gotta get me some of that.
And that would include things like Uncle Sam, right?
Improving the IRS system and, you know,
what does filing of taxes look like,
and are there ways to ease the burden,
are there ways to modernize
and have different parts of the government talking
to each other?
And hopefully those conversations will converge.
We won't be looking at, like, how do we regulate and limit the risks of generative AI, and then infuse it throughout all of the federal government at the same time, right? Like, hopefully you have the left hand talking to the right hand so that we actually come up with a sensible strategy and a road ahead.
A road ahead, but to which one? Are you feeling confused right now?
Because you should be. The inventors and the backers of a billion-dollar technology swore under oath something to the tune of,
yeah man, this shit could kill us. And everyone's freaking out because it's already taking over jobs
because it's so smart, but at the same time it's worse at googling than your 10-year-old niece
with a book report. And while this is going on, the government is holding two simultaneous hearings
on the same day, and one is Oppenheimer flavored,
and the other is Barbieland. So if you are confused by all of this, and you don't know how to feel,
the answer is yes, that's correct. But it's happening so quickly that it's not going to be law
alone that does anything to rein it in. We're going to need a lot of cooperation between governments, between tech companies.
And if you look in the US, the US has not been good at regulating tech companies.
It has had lots of talk about it, lots of very contentious Senate hearings.
I started Facebook.
I run it, and I'm responsible for what happens here.
And then they have so much money and so much power and so much lobbying influence that,
you know, the result is nothing happens. And that just can't be the case now. We can't
go into this era, leaving it up to tech companies to decide the fate of humanity.
Right. What do you do if you're mad as hell and you're not going to take it anymore?
What is the average person who does not own a $40 billion tech company say when they're
like, don't scrub my brain data through my headphones, and stop simulating art. Let some people make some art.
Have you seen that meme about how somehow we've gotten to a place where human beings
are still laboring at wages that don't increase, that are not livable,
yet computers get to write poetry and make art.
No, but that sounds right.
That's so heartbreaking. We're going to get to a point where no one can afford to be an artist.
So the exact words from Twitter user Karl Sharro read, humans doing the hard jobs on minimum
wage while the robots write poetry
and paint is not the future I wanted.
So that tweet was shared 35,000 times because it's true and it hurts my soul.
Yeah, I have been seeing that meme and now I'm reeling from thinking about it, which is
like, oh my god, that's so true.
Suddenly, we've outsourced all the things that we like, and we're now doing all of the grunt work still, and how horrible
is that? We're going to send generative AI to the beach next weekend, and see how we stay home and toil and pay for it, right? Yeah, I mean, you know, the problem is that on the
one hand, we could say, oh, it's all happening so quickly.
And so we can't do anything about it.
On the other hand, that's just the nature of emerging tech.
It happens quickly.
And so it's not as if there have not been proposals about what agile governance looks like or
what adaptive regulations look like that actually changed based on changes in milestones
in technology.
And it would not be impossible to put some of those things into place.
There have been people who've been writing about and thinking about and proposing these models for a long time.
First off, what does agile governance look like? And what does adaptive regulations mean?
I don't know, I'm not a law professor. I'm a podcast host who's jealous of a circuit board that gets to watercolor.
So I asked my robot machine Google, and agile governance means a process that brings the most value by focusing on what matters.
Okay, but adaptive regulations, I think mean like,
watch the space, keep making laws
if it seems like it's getting out of hand.
Now in June, the European Union overwhelmingly passed
the EU AI Act, which classifies different types
of AI into risk categories.
There's unacceptable, there's high risk, there's generative AI, and there's limited risk.
What is in these buckets you're wondering?
So the unacceptable bucket includes cognitive behavioral manipulation and social scoring, all a la Black Mirror, and biometric identification, like real-time public facial recognition.
High risk involves more biometric uses, but after the fact, with a few exceptions for
law enforcement, but it curbs AI snitching on employees and doing emotional spying from
what I gather.
Generative AI would have to disclose that it's generative, and the makers need to come
clean on what copyrighted material they're using to teach generative neural networks. Now, that's in the EU. As for America,
we have not gotten that far yet. I mean, that is, if everyone could even agree on what needs to
happen, then they'd have to agree on voting for that thing to be actually enacted, which is,
it's a beautiful dream that I'm
generating with my human imagination.
The problem has been, I think, the political will to do anything about it and to figure
out why should we care about the cognitive liberty of individuals, why should we care
about leisure and flourishing of humanity?
Let's just maximize productivity and minimize human enjoyment in life, that just can't be
what the answer is in the digital age anymore, right? I mean, we need an updated understanding of
what flourishing means, and it can't be that it is generative AI making art and writing poetry while
we toil away, right? That can't be the case. Like, I'm a philosopher, right? I'm going to go back to,
we have all of these philosophical conceptions, lots of perspectives on what flourishing is. None of those perspectives, if you go back
and look at them, contemplated a world in which our brains and mental experiences could so easily
be hacked and manipulated. And the idea of happiness being the primary concept of human flourishing,
like what is synthetic happiness? Is that really happiness?
If it's generated by dopamine hits from being on a predictive algorithm that's sending you little notifications at just the right time to make your brain addicted and keep you staying in place, that looks like happiness, but I don't think that's happiness.
So given that all of these presupposed a world in which we actually had cognitive freedom,
we need to realize we don't anymore, right?
And if we don't anymore, we need to create a space in which we do so that human flourishing
in the digital age is what we're actually after and trying to make happen.
That we could put some human rights in place for it, we could put some systems in place
that were actually creating incentives to maximize cognitive freedom as the precursor to all other forms of flourishing.
And hopefully that cognitive freedom
would be the right to create art
without having it appropriated,
the right to write scripts and poetry
without having it used to train models without our permission
and without us being part of it
that then make us irrelevant
so that the models can play while we work.
So in her book, The Battle for Your Brain,
Neeta writes that we must establish the right to cognitive liberty
to protect our freedom of thought and rumination,
mental privacy and self-determination over our brains and mental experiences.
This is the bundle of rights that makes up a new right to cognitive liberty, which can and should be recognized as part of the Universal Declaration of Human Rights, which creates powerful norms that guide corporations and
nations on the ethical use of neuro technology. Neurotechnology has an
unprecedented power to either empower or oppress us. The choice is ours." End quote. And one liberty I've taken
is never using chat GPT.
Kind of like my high school's football rallies.
I just don't want to participate
and I don't like what it's all about.
Even though literally no one cares
that a stinky drama student with dyed black hair and braces
is boycotting, nobody misses me. I've always been a little bit creeped out
and hesitant like I've never tried chat GPT and I have this absolutely incorrect
illusion that if I don't use chat GPT it won't get smarter and therefore I
single-handedly, by abstaining, have somehow taken down an entire industry by myself. It's not true. Well, it's not true, but there is something to this idea that
we're not helpless and that there is a demand side to technology, just as there is a supply side
to technology. And there is a sense in which consumers and individuals feel like they're helpless.
It's the same way you see with voting. Well, what's the point of voting?
Because my state always goes this way or that way.
Or, and that kind of apathy means that a lot of times
elections are decided by everybody else.
And you know, that you don't have an effect.
But this is even more so.
Like collectively, if we don't like the terms of service,
why are we all still on the platforms?
And you're right, the models are going to continue
to be trained with or without you.
Yeah, no, I'm like, it's not that radical an act from just me to abstain.
Well, but, but that idea that collectively we could act differently. If we could motivate and
actually work collectively to act differently, we could act differently. One individual person silently
protesting against ChatGPT isn't going to do it.
Right, but loudly protesting against it and saying, like, look, the models train based on human
interaction and the more human interaction there is, the more it is trained.
And so do you want to continue to feed into that model?
That's a worthwhile societal conversation to have.
You know, I was talking to my husband this morning about how many brilliant engineers end up working
for bomb companies because they're going to have the best benefits.
They're going to have the most stable employment.
How many people in the legal field do you feel like get kind of scooped up by tech companies
because it's just an easier way to live. Do tech companies just have more pull
to get the best lawyers to advocate for them instead of for say greater humanity?
I think it's not just law, right? If I look at some of the best tech ethicists,
many of them have gone in-house to a lot of companies that are not actually that invested in tech
ethics. And many of them got laid off in the major tech layoffs
that have happened from 2022 to 2023, because a lot of tech companies, I think, have paid lip service to being serious about ethics, but they haven't as seriously grappled with it. And the money and the
power that these corporations have and the influence on society they have, I think both makes it hard for some people to resist saying no, but also this idea that like if you're at a tech company
where the transformation of humanity is happening, maybe you can steer it in the ways that you
think are better for humanity.
Are there any nonprofits or organizations that you feel like are doing a good job?
There are a lot.
I mean, I couldn't even begin to name them all.
Like I would say, first,
I admire what UNESCO is doing. So UNESCO is the United Nations Educational, Scientific,
and Cultural Organization, and on their ethics of artificial intelligence, web page,
it states UNESCO has delivered global standards to maximize the benefits of scientific discoveries while minimizing the downside risks,
ensuring they contribute to a more inclusive, sustainable, and peaceful world. And it's also identified
challenges in the ethics of neuro technology. So as a result, their recommendation on the ethics
of artificial intelligence was adopted by 193 member states at UNESCO's general conference, way back in the olden times of November 2021.
They're really trying to get out ahead of a lot of issues and to thoughtfully provide a lot of
ethical guidance on a lot of different issues.
I think the OECD is trying to be a useful and balanced organization to bring important information to bear.
The OECD, I had to look this up, is the Organization for Economic Cooperation and Development,
and it's headquartered in France, but involves 38 countries. So what are they doing?
The OECD principles on artificial intelligence promote AI that's innovative and trustworthy,
and that respects human rights and democratic values. And then of course,
there's the EU. I think the EU is acting in ways that are really pushing the conversations forward
around the regulation of AI and how to do it and how to respect everything for mental privacy,
to safeguard against manipulation, and you know, they get lambasted for like going too far or
not going far enough. And those conversations are better than putting nothing on the table,
which is what's happening a lot of times in the U.S. I think the Biden administration has put out
a lot of different principles that have been helpful and that those kinds of principles or things
around like an AI Bill of Rights. I went and took a gander at this doc, and the Blueprint for an AI Bill of Rights sets forth five principles, which I will now read to you. You should be protected from unsafe or ineffective systems.
You should not face discrimination by algorithms.
You should be protected from abusive data practices and you should have agency over how data
about you is used.
You should know that an automated system is being used and understand how and why it contributes
to outcomes that impact
you.
And finally, you should be able to opt out where appropriate and have access to a person
who can quickly consider and remedy problems you encounter.
I don't know if that means a helpline, I have no idea.
But that five point framework is accompanied by a handbook called From Principles to Practice
and its guidance for anyone who wants to incorporate
those protections into policy. So that's what the White House has put out. They're like, y'all,
we should really be cool and nice about all this. And it's so sweet, and I appreciate it.
My grandma had 11 children and really just dozens of grandkids, and she still remembered all our birthdays and would send a letter with $1 in it. And that dollar meant a lot, even if it didn't get you far in the world, but I appreciated
it. In the same way, I appreciate that AI Bill of Rights. It's very sweet. Don't know
what to do with that.
There's a lot of different people coming at the problem from a lot of different perspectives.
If anything, there are so many voices at the table that it's in many ways becoming noisy
where we're not necessarily like moving ahead in a really constructive or productive way.
And there's a lot of replication of efforts, but that's better than having too little activity
at the table.
So, yeah.
I think that a lot of us on the outside of it think there's a tumbleweed blowing through
a boardroom and nobody cares.
So it's really good to hear.
No, I will tell you that I just feel like there are conversations happening in every corner
you can imagine right now.
And I'd like to see those conversations be turned into useful and practical pathways
forward, like calling for governance if you're a major tech company and saying, like,
these technologies that I'm creating create existential risk for
humanity, please regulate it.
Or if you think that they present existential risk for humanity, don't just rush ahead, and then, you know, like come forward with something positive rather than saying, my job is just to create the technology, your job is to govern it. Like, that's not the pathway forward either.
I have questions from listeners who know you're coming on.
Oh, great.
Yeah.
Please. But before we do, we'll donate to a relevant cause.
And this week, it's going to Human Rights Watch, which is a group of experts, lawyers,
and journalists who investigate and report on abuses happening in all corners of the world.
And then they direct their advocacy toward governments, armed groups, and businesses.
And you can find out more at hrw.org.
And we will link that in the show notes
and thanks to sponsors of the show who make that donation possible. Okay, on to questions
written by actual human listeners made of meat and water. Let's start with something optimistic.
A ton of people, Lena Brotsky, Nina Evesie, Chris Blackboard, Megsy, Alexandra Katoule, Adam Silk, Nina McAfee, Madison Piper, and Will Mc.
Want to know, can we use AI for good?
Rye of the Tiger wants to know, what will AI's role look like in the fight against climate change?
For example, or should we be using AI for the toils like meal planning and trip planning and things like that?
Yeah.
So I think we can absolutely use AI for good. And first I would say a friend of mine,
Orly Lobel, wrote a book recently called The Equality Machine. And it's all about
using AI to achieve better equality in society and gives kind of example after example of both
how it could be done and how it is being done in some context.
I think recognizing that there is this terrifying narrative about AI, but that actually AI is
already making our lives better in many, many ways is an important thing to look at.
And that we can put it to solving some of the biggest problems in society, right, from
climate change and trying to generate novel ideas to testing
and identifying, and this is already happening, novel compounds that could be used to
solve some of the worst diseases, to being used to identify the causes of different diseases,
to identifying better patterns that help us to address everything from neurological disease and suffering
to the existential threats to humanity like climate change.
So I absolutely think it can be used for good.
It is being used for good.
It could be used for more good.
We have to better align the tech companies
with the overall ways of human flourishing.
I mean, if you were to use AI to improve brain
health instead of to addict and diminish brains, that would be phenomenal. And it could be
used to do that. It can be used for mental health treatment and for solving neurological
disease and suffering, or it can be used to addict people and keep them stuck on technology.
We need to figure out a way to align the incentives of tech companies with these ideas of AI for good.
It'll be so interesting to see if they are getting a lot of feedback from our brains on any mental health challenges. Or, speaking as someone who has anxiety and is neurodivergent? Hello, hi.
Things like ADHD, autism, those have been so overlooked in some populations.
It would be interesting to see people getting a better understanding of their own brains that maybe
medicine has overlooked because of demographics for a long time. Yeah, I have a TED talk that just
came out that the first half of the TED talk actually focuses on all of the positive ways
that our technology can be used and all of the hope that it offers,
like us tracking our everyday brain activity
could help us better understand what stresses us out, the earliest stages of glioblastoma, the worst and most threatening form of aggressive brain cancer, or the earliest stages of Parkinson's and Alzheimer's disease, better solutions for ADHD and trauma, you know, everything from, like, understanding the impact of technology on our brains to understanding the impact of having that glass of wine or that cup of coffee on the brain and how it reacts to it. Gaining insight into our own brain activity could be the key to unlocking much better
mental health and well-being.
And I think if it's put in the hands of individuals
and used to empower them,
that will be tremendous and phenomenal.
So long as we don't overshadow or outweigh those benefits with the dystopian misuses of the technology,
which are very real and very possible, right,
of using in the same way that companies are using
all kinds of algorithms to predict or purchasing behavior or to nudge us to do things like
watch the 10th episode in a row of a show, rather than, you know, breaking free and getting
some sleep, which is important for brain health, if the companies don't use brain data to commodify it, to inform a more or
well-earned workplace, get back to work.
If governments don't use it to try to surveil brains and to intrude on freedom of thought,
but instead, it's used by individuals to have greater power over their own health and well-being
in their own brains, it will be tremendous. We just have to really worry about those misuses
and how we safeguard against them.
So the day before this interview,
a TED Talk featuring Nita went live,
and in it, she discusses the loss of her daughter
and the grief that overwhelmed her.
And she tells of how using biofeedback
to understand her own sorrow and trauma
from the experience helped her so much, but how individuals' brain data should be protected. And this wrenching personal story that she tells, plus her long background in ethics and science and philosophy, makes her uniquely suited to see this issue from a lot of angles. And a lot of patrons had questions about surveillance and brain data and
even neural hardware, including
Katie McAfee, Ryan Marlow, and Sandy Green, who asked about things like medical devices,
like brain implants being used for surveilling or for commerce.
I was curious, and so are some listeners too, Pavka 34, Dominic David, and, in Alex Ertman's words: if we were to implant chips into human brains, what would they most
likely be capable of? Would they be more in the realm of modulating real inputs? Or would
they be capable of generating new thoughts? Alex says it seems far-fetched, but also the
truth can be stranger than fiction. So, is that a really big leap, philosophically and technologically? I think it might be easier to interrupt thoughts than to create new thoughts.
However.
I guess philosophically that is creating new thoughts if you're interrupting thoughts, right? Because you're letting other thoughts happen.
But implanted neuro technology right now is very limited.
It's very difficult to get neuro technology into people's brains.
And there are 40 people who are part of clinical trials
that have implanted neuro technology right now.
It's a tiny number of people.
If Neuralink, you know, and Elon Musk have their way, there will be far more people who are part of that.
But implanted neuro technology is limited.
What it primarily is being used to do
is to get signals out of the brain.
That is to listen to intention, to move,
or to form speech and
to translate that in ways that then can be used to operate other technology.
If you're like, what is Neuralink again?
It sounds like a commuter train, but this is actually a side hustle of Twitter owner and
Tesla guy and tunnel maker Elon Musk.
And he described this cosmetically undetectable, coin-sized brain accessory as a wireless implanted chip that would enable someone who is quadriplegic or tetraplegic to control a computer or mouse or their phone or really any device just by thinking, and he likened it to a Fitbit in your skull with tiny wires that go to your brain. So a robot surgeon, also invented by Neuralink, sews 64 threads with over a thousand electrodes into the brain matter, which allows the recipient to control devices or robotic arms or screens using telepathic typing.
Which sounds pretty cool. In 2022, it came to light that roughly 1,500 animals had been killed in the testing process since 2018, some from human errors like incorrect placement on pig spines or the wrong surgical glue used in primate test subjects. And some former employees reported that the work there was often rushed and that the vibe was just high-key stressful. But nevertheless, Neuralink announced just a few months ago that they got the green light from the FDA to launch their human trials. And if you're like, hey, I am always losing the TV remote, so wire me up, Musk: please cool your jets, because they added that recruitment is not yet open for their first clinical trial. More on that as it develops.
But I guess when I said that we could become bubbles of chimp, that was really on the
optimistic side of things.
What is possible, though, and this is one of the things I talk about in my TED Talk, is to use neurostimulation in the brain.
So I described, for example, the case of Sarah, where she had intractable depression,
and through the use of implanted electrodes, was able to reset her brain activity.
This study was conducted at the University of California, San Francisco, where neuroscientists implanted what's called a BCI, or brain-computer interface, which was initially developed for epilepsy patients, into someone with treatment-resistant depression.
And one surgeon on the team said, when we turned this treatment on, our patient's depression
symptoms dissolved.
And in a remarkably small time, she went into remission.
And the patient, Sarah, reported laughing and having a joyous feeling wash over her that lasted at least a year after this implantation.
So the specific pattern of neural activity that was happening when she was the most symptomatic
was traced using the implant technology. And then like a pacemaker for the brain,
those signals were interrupted and reset each time she was experiencing them.
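For any fellow nerds who want to see the shape of that in code: here's a minimal, purely illustrative sketch of the closed-loop detect-and-interrupt logic in Python. The signal-reading and stimulation functions are hypothetical stand-ins, the threshold detector is a toy, and real systems reportedly run this kind of logic on the implanted device itself, not on a laptop.

```python
import time

# Toy sketch of the closed-loop "pacemaker for the brain" idea:
# watch for a patient-specific biomarker pattern, then interrupt it
# with a brief stimulation pulse. Illustrative only; the I/O callables
# below are hypothetical stand-ins for real device interfaces.

BIOMARKER_THRESHOLD = 0.8  # assumed value; real systems calibrate per patient

def biomarker_score(window):
    """Toy detector: mean absolute amplitude of a window of samples.
    A real detector is tuned on recordings of the patient's own
    symptomatic brain activity."""
    return sum(abs(x) for x in window) / len(window)

def closed_loop(read_window, stimulate, poll_seconds=0.1):
    """Poll the sensed signal and stimulate when the biomarker appears.

    read_window -- hypothetical callable returning recent signal samples
    stimulate   -- hypothetical callable delivering one brief pulse
    """
    while True:
        window = read_window()
        if biomarker_score(window) > BIOMARKER_THRESHOLD:
            stimulate()  # interrupt and "reset" the symptomatic pattern
        time.sleep(poll_seconds)
```

Sense, detect, interrupt: that's the whole pacemaker-for-the-brain loop.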
That doesn't create a new thought. What it does is interrupt an existing thought. But philosophically, you could say that creates a new thought.
It creates for her an experience of being able to have a more typical range of emotions.
I think specific thoughts would be very hard to encode into the brain.
I won't say never.
So brain hacking, and hacking into your brain, may radically change the way that we think and feel, if we don't blow up the planet first, which is not an intelligent thing to do. Speaking of intelligence, many patrons wanted to know
what is in a name. Alexis Wilklerk, Zombot, who proposed the term OI or organic intelligence for
human thinking and history buff Connie Brooks, they all had questions about AI and the term
AI.
Is it intelligent?
Is it artificial?
Are they ever going to do a rebrand on that?
Does it give people the wrong idea of what it is?
Yeah, so I mean, a lot of the technologists out there, or computer scientists, are just saying this isn't artificial intelligence, because that assumes that there's intelligence. These aren't intelligences. They are task-specific algorithms that are designed to do particular things. And if we ever get to the point where you start to see more generalized intelligence, then that's the point at which it makes more sense to talk about artificial intelligence.
But not everyone is so casual about that assessment.
Interestingly, Eric Horvitz, who is the chief scientific officer at Microsoft, which has partnered with OpenAI for ChatGPT, just published an essay in this AI anthology series, and he talks about how his experience with GPT-4 was to see a lot of threads of intelligence, of what we think of as intelligence.
You see increasingly a lot of examples of reasoning
more like humans.
I think one of the examples I've seen out there is giving GPT-4 a question of, like, okay, you have some eggs, a laptop, it's like five or six items, how would you stack them? Then it comes out and explains how you would stack them: like, you would put the book on the bottom, and then you would put a set of eggs that were spread out so that they could be stable, and then you would put the laptop in a particular configuration, and blah, blah, blah. And why that kind of reasoning is more like human intelligence than it is like an algorithm.
And those are really interesting to think about. Like, what is intelligence is really the fundamental question, I think, when somebody is saying, is it really artificial intelligence? It is to have a particular perspective on what intelligence is and means.
And then to say, well, that isn't intelligence
or if a generative AI model says it's happy
that it can't really be because that's not an authentic
emotion because it's never experienced the world and it doesn't have sensory input and sensory output.
Or if a generative AI model says here's what the ratings of wine are and what an excellent wine is,
it can't possibly know because it's never tasted wine. And then there's a question of, is that kind of intelligence
what you need, which is experiential knowledge and not just knowledge built on knowledge?
There are some forms of intelligence, like emotional intelligence, which you might think really
requires experiencing the world to authentically have that kind of intelligence.
I don't know shit about wine, and sometimes I'm bad at my own emotions. Oh well, we can learn. Speaking of learning, many patrons who are students had thoughts and questions, like handy dandy Mr. Mandy, Natalie Jones, Josie Chase, and Slayer, as well as educators including Nina Brotsky, Julie Valmer, Leah Anderson, Jenna Kong, Ben Theater, Viscion, Hudson, Anzli, and Nina Evese. There were several teachers who wrote in with questions.
Katie Bauer says, I'm a middle school teacher,
and I just started having students use AI tools
to write essays for them.
Help, talk me down.
How do we embrace new tech,
but also teach students how to navigate this new landscape
with solid ethics and an understanding
of the need to develop skills that don't revolve
around AI technology.
And question asker Liz Park is a teacher, and they feel that teaching, along with a lot of other jobs, just can't be handed off to AI and expected to have the same impact, because machines, no matter how advanced, won't be able to individualize education and provide warmth, et cetera. Well, you know, it's funny, because I hear almost the same question in both, right? What is the role of education, and human-to-human education, in a world of generative AI? And I think that's a great question to be asking,
and I would say first I'm so glad that they were giving their students the assignment of working
with chat GPT and trying to understand it because I think there are skills that you can't learn from generative AI, and if you don't learn them,
we will not be able to interact well with them
and use them well.
And these are critical thinking skills.
And if the same old assignments
are how we're trying to teach students,
then yeah, students are just gonna go to chat GPT
and say, here's the book, generate a thesis statement
for me and write my essay, right?
But they will have lost out on the ability to generate a thesis statement and what that critical thinking skill is,
and lost out on the ability to build an argument and how you do so,
lost out on the ability to write and understand what good writing is,
and they won't be able to interrogate the systems well because they won't have any of the skills necessary to be able to tell fact from fiction and what is good writing or anything else.
So then the question is, what do you do? And it's that teachers, in higher education and K-through-12 education, need to be thinking about, okay, what are the fundamental skills of reasoning and critical thinking and empathy and emotional intelligence and mental agility that we think are essential, that we have been teaching all along, but that we've been teaching by task that now can be outsourced? And then how do we shift our teaching to be able to teach those skills? And you know, if you go back to, like, the Socratic dialogues, there's an art to asking the question in seeking truth. And there is an art to asking the question of
generative AI models in seeking the truth or in seeking good
outputs. And we have to be teaching those skills if we want to
move ahead. I wasn't sure what the Socratic method of
questioning was. So I asked the literature via computer. And I
found that it involves a
series of focused yet open questions meant to unravel thoughts as you go. And according to one
article, instead of a wise person lecturing, the teacher acts as though ignorant of the subject.
And one quote attributed to Socrates reads, the highest form of human excellence is to question oneself
and others.
So, don't trust my wine recommendations, but do cut bangs if you want.
Text a crush, ask a smart person a not-smart question, because worms are going to eat us all one day. But yeah, the point of education isn't to get a good grade, but to develop skills that in the future are going to get you out of a jam.
So many jams.
And I think, your other person talking about how they can never replace human empathy, that's right. But don't be blind to the fact that they can make very powerful personal tutors as well.
And they may not be able to tell when a student is struggling or when they need emotional support
or when they may be experiencing abuse at home
and need the support of the school
to be able to intervene, for example.
But they can go beyond where a teacher can go.
A teacher doesn't have the capability
to sit down with every student for hours
and help them work through 10 different ways
of explaining the same issue to somebody.
And so you help them learn how to ask the questions, and then they could spend all night long saying,
okay, well, I didn't understand that explanation. Can you try explaining it to me a different way?
Can you try explaining it to me as if you were telling my grandmother? I don't understand what that word means.
There's no teacher on earth who has either the patience for that or the time or is paid well enough to do that
for every student. And so I think it can be an extraordinary equalizer. You know, right now, like, wealthier parents are able to give private tutors to their kids. Okay, now you can have a generative AI model serve as a private tutor that can be customized to every student based on how
they learn. However, that doesn't mean we don't need teachers
to be able to be empathetic and to help students learn
how to engage with the models and learn critical thinking skills
or to create a social environment
to help develop their emotional intelligence
and their digital intelligence.
But it does mean that there is this additional tool
that could actually be incredibly beneficial
and can augment how we're teaching.
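And if you're curious what that infinitely patient tutor loop looks like in code, here's a minimal sketch in Python. The ask_model function is a hypothetical stand-in for whatever chat-model API you have on hand; the point is that the whole conversation is resent each turn, so the model knows which explanations the student has already tried.

```python
# Minimal "patient tutor" loop. The student can keep asking for the same
# concept explained a different way, and the running message history is
# passed back each turn so the model can vary its explanations.

TUTOR_PROMPT = (
    "You are a patient tutor. Explain the concept the student asks about. "
    "If they say they don't understand, explain it again a different way, "
    "for example with an analogy, or as if talking to their grandmother."
)

def ask_model(messages):
    """Hypothetical stand-in for a chat-model API call: takes the running
    message history, returns the assistant's reply as a string. Wire this
    to whatever provider you actually use."""
    raise NotImplementedError("connect this to a real chat-model API")

def tutoring_session():
    messages = [{"role": "system", "content": TUTOR_PROMPT}]
    while True:
        question = input("student> ")  # e.g. "Explain it like I'm my grandmother"
        if not question:
            break
        messages.append({"role": "user", "content": question})
        reply = ask_model(messages)
        messages.append({"role": "assistant", "content": reply})
        print("tutor>", reply)

if __name__ == "__main__":
    tutoring_session()
```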
Okay, but outside the classroom and into your screens, folks had questions, including Michael
Hiker, Kevin Glover, Andrea Devlin, Jenna Kongdon, Grite State of Mine, Chris Blackthorn, RJ
Doryj and one big question a lot of listeners had is Rebecca Newport says, what's your favorite
or least favorite portrayal of AI in media? Chris Whitman wants to know, what is your favorite AI-storyline-based movie, and why is it Ex Machina? Someone else said,
Mrs. Davis, should we turn off Mrs. Davis? If we could, how do we prevent Terminator 2,
whether or not you watch Black Mirror, anything that you feel like pop culturally written by humans,
that you've loved or hated? I love Minority Report. It's an oldie but goodie.
But it really informs a lot of my work.
And I think it's great.
I'm placing you under arrest for the future murder of Sarah Marks and Donald Dubin.
The future can be seen.
I think that some of the modern shows that I like, like Severance, Altered Carbon, I thought
was a great series, Black Mirror, yes.
All of those, I think, are terrific and creepy.
I appreciate those stories in really raising consciousness about some of the existential
threats, but I would like to see stories that give us a more balanced perspective sometimes.
I guess that doesn't make for good film,
but look, the fears of like we don't fully understand consciousness, let alone how emergent
properties of the human brain happen, let alone how emergent properties could happen in an
incredibly intelligent system that we are creating, I share those fears. Like, I don't know where all of this is going,
and I worry about it.
And I also don't think anybody has an answer
about how to safeguard against those existential threats.
And we should be doing things to try to identify them
and to identify the points and identify what the solutions
would be if we actually start to see those emergent properties
and those emergent properties are threatening, like with monitoring systems. We also, in the meantime, need to be looking at the good and figuring out how to better distribute the good, how to better educate people, how to change our education systems to catch up with it, how to recognize that the right answer for the writers' strike isn't to outsource it to ChatGPT,
and there's something uniquely human about the writing of stories and
the sharing of stories and the creation of art and that that's part of the
beauty of what it means to be human. And so those conversations about the role in
our lives and how to put it to uses that are good and still preserve human
flourishing like that I feel like is what we need to be doing
in the meantime, right, before it actually torches us all.
That is great advice.
And the last question I always ask is always like, what's the worst part about your job? A lot of people say things like meetings, emails. But I will outsource that to the patrons who wanted to know, are we fucked? So what is the most fucked thing about what you do or learn?
So I mean, we're fucked if we let ourselves be.
And I fear that we will.
Right?
I mean, so I can tell people, until I turn blue in the face about the potential promise
of AI, and certainly the promise of neuro technology,
if we put it to good use,
and if we safeguard against the orwellian misuses
of it in society.
But we seem to always go there.
We seem to always go to the orwellian
and do the worst thing
and put it to the worst applications
and be driven just by profit
and not by human flourishing.
And so if we keep doing that, then yeah,
we're kind of fucked. And if we actually, like, heed the wake-up call and do something about it, like put into place not only a human right to cognitive liberty, but also the systems, the governance, the practices, the technologies that help cultivate it in society.
I mean, if we invest in that, we have a bright and happy future ahead.
If we don't, you know... yeah, we're fucked. What is it like to be such a globally recognized, trusted voice on this? Obviously, I was so pumped to interview you. Like I came straight out of the
gate being like, I'm terrified of talking to you. What is it about your work that gets you excited?
What kind of keeps you motivated?
I guess I'm also fascinated and terrified, right?
I mean, so it's almost like a horror show
where you can't look away.
And so I'm just motivated to continue to look
and to learn and to research.
And then I guess at the end of the day,
I am an eternal optimist. Like, I just, I believe in humanity. I believe we can actually find a pathway
forward. And that if I just try hard enough, right? If I just like get the message out there and
work with enough other incredibly thoughtful people who care about humanity that we will find a good pathway forward.
So I'm driven by the hope and the fascination.
I'm driven to continuously learn more and I'm just grateful that people seem to respond.
I'm encouraged that in this moment, people seem to really get it.
They really seem to be interested in working together collectively to find a better pathway
forward.
I feel like you walking into a room or a conversation is like, have you ever seen a
piece of chicken thrown into piranhas?
All of us are just like, can we have a closer look? Like, right, right. Intellectual piranhas being like, please tell me everything you know. Get in there. And give me a hug while you're at it. Thank you.
Well, the good thing is I can give hugs too, right?
And so I'm also a mom at the end of the day.
I have two wonderful little girls at home who keep me grounded
and see the world full of curiosity and kind of brilliance
of all kinds of possibility.
And I want to help them continue to see the world
as this kind of magical place.
I want it to still be that place for them
as they grow up.
So ask actual intelligent people some analog questions
because the one thing that we can agree on
is that there is some power in learning,
whether you're a person or a machine.
And now that you know some basics,
you can keep up with some of the headlines,
but honestly, take news breaks, go outside,
smell a tree, play pickleball or something,
or go read Nita's book.
It's called the Battle for Your Brain,
defending the right to think freely
in the age of neuro technology.
We'll link that and her social media handles
in the show notes, as well as so much more on our website
at alieward.com slash ologies slash neurotechnology. Also, Smologies are kid-friendly and shorter episodes. Those are up at alieward.com slash smologies, linked in the show notes. Thank you Zeke Rodriguez Thomas and Jarrett Sleeper of MindJam Media and Mercedes Maitland of Maitland Audio for working on those. We are @Ologies on Instagram and Twitter, and I'm @AlieWard on both, just one L in Alie. Thank you, patrons at patreon.com/ologies, for such great questions. You can join for a dollar a month if you like. Ologies merch is for sale at reasonable prices at OlogiesMerch.com.
Thank you, Susan Hale, for handling that among all of her many responsibilities as managing producer. Noel Dilworth schedules for us. Aaron Talbert admins the Ologies Facebook group with assists from Bonnie Dutch and Shannon Feltis. Also happy birthday to my sister Celeste, who has a great brain. Emily White of The Wordary makes our professional transcripts, and those are at alieward.com slash ologies-extras for free. Kelly R. Dwyer does our website; she can do yours too. Mark David Christenson assistant edited this, and lead editor, and alarmingly smart, Mercedes Maitland of Maitland Audio pulls it all together for us each week.
Nick Thorburn wrote and performed the theme music, and if you stick around until the end of the episode,
I tell you a secret, and I'm gonna treat this space like a confessional booth, if you don't mind.
Okay, so once I ran into this guy that I had dated, who had dumped me, and he was with his lovely new girlfriend.
And I pretended, like I didn't hear his new girlfriend's name right.
I was like, what is it?
As if I hadn't been six years deep in her Facebook,
like the day they became official.
And I still feel guilty about that.
But I'm telling you that because computers,
wow, they've changed our lives.
And also humans were so goopy and flawed.
But you know, everyone's code has bugs
and we just keep upgrading our software
until things work well enough. Okay, go enjoy the outdoors if you can. Bye bye! I am now telling the computer exactly what he can do with a lifetime supply of chocolate.