StarTalk Radio - Cosmic Queries: Minds and Machines
Episode Date: May 18, 2018
Explore the inner workings of the human mind, the mysteries of memory, The Matrix, deep learning, the ethics of driverless cars, ELIZA, and much more with Neil deGrasse Tyson, comic co-host Chuck Nice, and neuroscientist Dr. Gary Marcus.
Image Credit: metamorworks/iStock.
Transcript
Welcome to StarTalk, your place in the universe where science and pop culture collide.
StarTalk begins right now.
This is StarTalk. I'm your host, Neil deGrasse Tyson, your personal astrophysicist.
And today is a Cosmic Queries edition of StarTalk.
We've solicited your questions on an interesting subject, queries of minds and machines.
Oh yeah, something I can't do myself, had to bring in help for that. We'll get to that in just
a moment. Chuck Nice, you're helping me out here.
That's right. How are you, buddy?
Alright, good. Good. Have you been practicing how to
pronounce names? No, I have not.
Which is why they will be just as
awful as they always are.
And quite frankly, I believe that people
send in crazy names just to hear me
butcher them. But I'm totally comfortable
with that. Keep telling yourself that.
That's on purpose.
So we've got mind and machines.
I mean, this is a very intriguing topic that touches everything,
like morality and politics and culture, business, all of this.
Yeah.
We've got a guy who's like in the middle of that,
and he's sitting here in the middle of us.
Morality and business in one sentence.
So Gary Marcus.
Gary, this is not your first rodeo here on StarTalk.
It's my third time here.
Thank you.
Your third time.
Welcome.
You're a professor at NYU of psychology and neuroscience.
So you are an expert on the intersection of mind and machine,
psychology and technology.
That's right.
My training is in natural intelligence,
and my work in recent years is mostly in artificial intelligence.
And so that is kind of minds and machines
and going back and forth between the two.
Wow.
I know.
Do we ever see a day where a machine will have a mind?
Depends what you mean by a mind.
We can dig into that if you'd like.
Oh.
Well then in that case, what is a mind?
Yeah, yeah.
Clearly I do not have one.
Apparently.
You got that question wrong.
Let's start a little further back.
So what is it about a human mind
that most distinguishes it from the mind of other mammals?
Just so I can get a sense of what it is to be human.
Just start there.
I think our language is vastly more sophisticated.
I think we can talk about and think about not just what's here and now, but what might be, what could have been, what happened before, what will happen eventually.
So abstraction and not just the abstraction of democracy,
but also the abstraction of what would happen if the United States were no longer a democracy.
So things that we hope are so-called counterfactual,
but we don't know for sure, given contemporary politics.
All right, so some time ago, I interviewed Ray Kurzweil,
and you were our guest in studio, academic guest,
in response to that show.
And he had commented that the next evolution of the human brain,
if it's not biological, then it would be mechanical,
would be extending what the frontal lobe had done for us.
Because as I understand it,
the frontal lobe is responsible for this abstract thinking
that animals that don't have developed frontal lobes are incapable of attaining.
If that's the case, what thoughts are we not having by not having some other lobe in front
of the frontal lobe?
It's a fine question.
It's sort of like the Rumsfeld known knowns and unknown unknowns.
It's sort of a question about unknown unknowns.
The first thing I would say is that we're really restricted by our memories
and the capacity limits on them.
Computers have something called location-addressable memory.
That means everything goes in some sort of master map.
And that means like…
It's kind of true with the human brain.
Humans use something called context-addressable memory
where we don't know exactly where things are.
I mean, even the best brain scientist in the world is not going to be able to tell me exactly where your memory of the Pink Panther movie is.
Maybe because they're not there yet.
Sometimes the memories might not be there,
but for the memories you have, they're not very well organized.
No, no, no.
He means maybe the scientists are not there yet.
Just because they can't figure it out doesn't mean it's not true.
It's not real.
Before Isaac Newton, the planets would look pretty mysterious going forward and backwards up in the sky.
Granted.
Writes down an equation and takes away the mystery.
Granted that there are lots of mysteries and unknown unknowns and all that.
But if you look mechanically at how people's memories work, we are, for example, subject to a phenomenon you might call blurring together of memory.
So if you park every day in the same lot...
You give that an official term, blurring? You don't have a more scientific term than that? What do you call it?
One of the scientific terms is...
Your memory is blurry today? All of my memories are blurry.
Get some glasses for your memory.
One of the technical terms is interference. There's proactive interference.
That feels a little better.
Okay.
It's okay.
You want the technical terms.
So we're very subject to interference in a way that you wouldn't be if you had location
addressable memory.
So computers don't get confused between 12 similar memories.
They can, for example, use buffers.
So if you store, sorry, if you park your car in the same lot every day, and then you go
out on the 10th day, you'll be like, did I park here or there? Because you blurred together, my technical term again,
those memories. There's interference between them. Whereas a computer could have a last
entry buffer, and it will just forget the first nine. There's a process called garbage collection,
get rid of all of those. You just have the piece of information that you're looking for.
Our memories are not very reliable. This is why we can't, for example, give eyewitness testimony
that's trustworthy,
and we can't have time-date stamps the way that you can have on a video.
There are lots of ways in which our memory is really not as precise as computer memory.
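Gary's "last entry buffer" and garbage collection could be sketched in a few lines of Python. This is a toy illustration, not a claim about any real system; the parking spots are invented for the example.

```python
from collections import deque

# Toy illustration of a "last entry buffer": the computer keeps only
# the most recent parking spot, and the older entries are discarded --
# the software equivalent of garbage collection -- so ten similar days
# never blur together the way human memories do.
parking_buffer = deque(maxlen=1)

for spot in ["B12", "C4", "B12", "A7", "B12",
             "C4", "A7", "B12", "C4", "D1"]:
    parking_buffer.append(spot)  # overwrites yesterday's entry

# On the tenth day there is exactly one candidate -- no interference.
print(parking_buffer[0])  # -> D1
```

Because stale entries never coexist with the current one, the "did I park here or there?" interference Gary describes simply cannot arise.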
Can an experience bias a memory during the making of the memory itself?
That's a really hard question to answer.
I don't understand the question.
What's that?
That went above my head.
For real? Okay, say that again. Oh my god, let me ask that again.
Wow, it sounded deep and I just... you know, let me hear it.
All right, so what I'm asking is if, for instance, we're, I don't know, hanging out in the kitchen, and you know, we're having a
conversation and for me that conversation is like,
wow, I was talking to Gary and Neil
and I learned all this stuff
and it's a great conversation, right?
And I'm able because of my experience
to recall things and it's a better experience for me.
Could that same experience that we're all sharing
and you two are like, Chuck's a dumbass,
and I hated this conversation. Could that then mar your actual memory, the information,
the surroundings, how you recall it, so that we recall the same experience differently,
because we're biased by the way we felt about the experience while it was happening?
There's kind of two processes there. Okay. One we would call encoding. And the other we would call
retrieval. So one is encoding, and then retrieval. So we know there's lots of distortions
made at retrieval time. So you can show people a video of somebody going past a yield sign and
then ask them a question. How fast was the car going when it passed through the stoplight?
And they'll just be like, oh, I guess it was a stoplight.
And so they'll distort the memory by having some new information on top of the old information.
And encoding is like how you put that memory down in the first place.
And it's less clear.
We may have bias even in how we record that information at the time,
but it's a little bit harder to do the experiments.
We know that at retrieval time, there's lots of distortion. In fact, we reconstruct a lot of our memory. So computers,
like a videotape, you're just pulling out something that is stored. There's no question
about it. A lot of what we do is we try to figure out, well, what could it have been like? So if I
asked you, we did that episode with Kurzweil, and what did I say about Kurzweil? You might sit there
and try to remember, well, at the end, I said nice things about Kurzweil, but I was nicer than Gary. And so what did Gary say? And go back and try to
reconstruct it, or your viewers can go watch the podcast of it. They'll have a different experience
watching the podcast of it, as opposed to you figuring out from your memory. Your memory is
not a video recording, and some of your biases... Except I'm trained to not trust what I don't
have explicit memory of.
I mean, I have some training to edit that
away from any statement, right?
So in other words, and I agree with you,
there are people who, particularly under pressure
to have to remember something,
they'll stitch together bits and pieces
from things that didn't happen
or happened that resembled it
and come up with some other reality,
and that becomes the reality.
Right. So if I kind of don't remember something, I don't try to... I don't try to buff it up.
You think you don't. And you might be better than the average person.
I don't... I'm saying I'm trained. I train myself to avoid it.
There's a process called reconsolidation
that humans seem to use or biological creatures in general seem to use, by which when you access a memory, that memory actually becomes loose and flexible.
And then you put it back down, and you don't put it back exactly the way you found it.
And this is just a fact about how biological creatures use their memory.
Again, it's very different from what a computer does.
And to go back to the earlier question, if you said, how would I soup up a human brain? I would start with the memory system and make it more reliable.
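Reconsolidation, as described here, could be caricatured in a few lines: each time a memory is retrieved it is re-stored as a blend of the stored trace and the current context, so repeated recall drifts. Purely illustrative; the feature vectors and the 0.9/0.1 blend weights are invented for the example.

```python
# Caricature of reconsolidation: retrieval makes the memory "loose",
# and re-storage mixes in the current context, so the trace drifts
# over repeated recalls. The blend weights are invented.

def recall_and_restore(trace, context, blend=0.1):
    # Re-store the memory as a weighted blend of trace and context.
    return [(1 - blend) * t + blend * c for t, c in zip(trace, context)]

memory = [1.0, 0.0, 0.0]        # some stored feature vector
context = [0.0, 1.0, 0.0]       # what's going on each time you remember it

for _ in range(5):              # five retrievals
    memory = recall_and_restore(memory, context)

print(memory)  # the trace has drifted toward the retrieval context
```

After five retrievals the original feature has decayed to 0.9^5 of its strength, which is the "you don't put it back exactly the way you found it" point in miniature.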
So my evidence for whether I fail or succeed at this, and I think we can all test in this way.
How well do you remember a scene of a film you saw once 10 years ago, 20 years ago, 30 years ago?
If it doesn't have the words Keyser Söze,
I don't remember.
So I'm just saying that's an example of something you experienced.
No, you were not in the scene, but you observed the scene.
So think of it as part of your life experience.
And there are plenty of people who say, I don't remember who was acting.
Oh, no, I forgot the scene.
But some people are candid about what they remember, what they don't.
I have really acute memory of movie scenes,
which tells me that I should also have corresponding acute memory of events of my life.
And by the way, it's not that I remember everything.
I'm not one of those.
But if I think I remember it,
chances are I remembered it accurately.
There's plenty of stuff I have no clue.
I was not paying attention.
I was ignoring it.
Plenty of times I will tell you that.
But if I know something, it's pretty much there.
So a few years ago, I wrote a piece for Wired,
which was called Total Recall.
And it was about a woman named Jill Price
who seemed to have perfect memory.
But it turned out it was mostly for autobiographical facts.
So it was things about her own life.
Compartmentalized.
A lot of it, I think, was essentially, I don't know how to say this politely, it was narcissism.
She kind of practiced her own memories the way I practiced baseball statistics when I was a kid.
So when I was a kid, I was known as the walking encyclopedia of baseball.
And it's not because I had some phenomenal memory. It's because I kept reading the Baltimore Orioles information guide.
And so I just knew all of the stats that were in there, because I read it so many times.
And she spent a lot of time rehearsing her own life.
But when I asked her when the Magna Carta was signed, she said, what do I look like, I'm 500 years old? Which was way off.
Because it wasn't autobiographical, and so she didn't know about it.
So people can choose. Like, if you care about movies...
I heard you off stage talking about how you like to use movies as a scaffold to teach people about science.
So the movies become important to you. You spend a lot of time getting it right.
Only if it's a communal knowledge.
Right.
And my mentor, Steve Pinker, does that a lot with Woody Allen things in his books.
He'll use funny Woody Allen skits.
Pinker, professor at Harvard.
Professor at Harvard.
He was at MIT when he was my PhD advisor.
And in his books, he uses a lot of pop culture also.
I'm not as funny and can't pull it off, but Pinker pulls it off very well.
Those books are on the bestseller list.
That's why his books...
One of mine made it once for a few weeks,
but anyway, his are reliable.
Which one of these books?
I have you here.
The Future of the Brain?
Did that make it?
Or you edited that?
I edited that.
Guitar Zero was my book that was
on the bestseller list.
The New Musician and the Science of Learning.
Very nice.
And a failed video game.
And a failed video game.
It's actually a story about, I am awesome at Guitar Zero.
What?
The game is Guitar Hero.
Yes, I know that.
The title was a joke.
Gary, that was the, oh.
The title was a joke on the game.
Gotcha.
Because I started learning about music after failing and then succeeding at the game.
So your joke actually cuts to my personal history.
But that's another story for another day.
So I'd love what you're talking about,
how you store memory.
And it leads me to wonder,
maybe you have some insight into this. If we did have perfect memory storage and recall,
would that make us less creative?
People have asked that question.
We might be anchored to reality
and creativity comes out of a non-reality,
no matter what.
It's science, art, music.
It's something that did not exist before,
maybe in threads,
and you put it together into something
that no one thought of before,
and you are not recalling this.
So we can complain about how we store and retrieve memory,
but maybe that's the basic essence of what it is to be human.
I've heard that argument before.
I don't buy it, but I think it's open.
So I think a lot of what passes for creativity is simply taking two elements from different
places and combining them.
And you can do that if you have perfect memory, you can do that if you have lousy memory.
On the other hand, it is the case that we do things like free association where we just
kind of jump from topic to topic and sometimes it hits pretty well and that can count as creativity
too. So I don't know. The second creativity was more what I was describing.
That's right.
If you take two perfectly remembered things and put them together, yes, you can come up with
something new, but you're still anchored to the reality of the perfect memory. And if you have imperfect memory, so in there are, like, unicorns that you think you saw, and whatever, and out comes a whole thing that is not derived from anything real that happened to you.
Could be. I mean, there's a study in Science a few years ago, where they took... the journal Science, probably 10 or 15 years ago now.
Love that you knew that. You know, I said a study in Science the other year...
Yes, the capital S, right? The journal.
Read my mind. I'm just trying to make clear it's not just general science. It's the journal.
There's a journal entitled Science, the American counterpart to the journal Nature in the UK.
In which they compared Madison Avenue trainees
or something like that
with a computer program
for advertising.
And people just made up things like,
I don't know,
like a drink that was fast.
They would put tennis shoes
and soda together or whatever.
And the computer could do it
just as well as people.
And there, people had, you know,
the weird memory that we do.
Machines didn't.
The machines did just fine.
So it partly depends on what the task is.
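The "take two elements and combine them" kind of creativity Gary describes is easy to mechanize, which is presumably why the program in the study he recalls could keep up with the trainees. A toy version; the product and attribute lists are invented for the example.

```python
import random

# Toy mechanization of combinational creativity: pick two elements
# from different places and combine them. Word lists are invented.
products = ["soda", "tennis shoes", "umbrella", "notebook"]
attributes = ["fast", "glowing", "silent", "foldable"]

random.seed(42)  # reproducible for the demo
ideas = [f"a {random.choice(attributes)} {random.choice(products)}"
         for _ in range(3)]
print(ideas)
```

A machine doing this isn't anchored to any remembered reality at all; it just samples the combination space, which is enough for "a drink that was fast"-style ad ideas.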
That would totally explain Japanese commercials
because they are crazy.
That comes from someplace else.
Yeah, exactly.
It's just like-
See, Japanese everything on television.
That's so true, yeah.
It's like the Simpsons actually make fun of them.
They're like, Homer looks like a character
and he's like called Mr. Sparkle
and they actually see the commercial
and it just makes no sense at all because they're looking at it through American
eyes.
So cool.
So what's the future of this?
Where is this going to go?
I mean, we will invent.
Are you cyborg?
Let me just.
I am part Apple Watch and part human being.
Okay.
And I mean, mostly I rely on my external memory from my phone, right?
My phone is really a game changer.
I used to have to remember phone numbers.
I used to have to remember all kinds of facts.
And your iPhone.
My iPhone.
You can tell from the watch.
You can infer that I'm a fanboy, I guess.
The phone extends my cognitive reach greatly.
Eventually, it might be on board.
I worry about like Bluetooth hackery and stuff like that.
I mean, you put a phone outside of my body and hack it,
I can probably still hack it,
in the other sense of the word hack.
If you have something inside my head,
cyber crime is gonna happen.
I walk by you and make you think.
No, no, it'll first happen with advertising.
Absolutely.
I'll make you, yeah, you want a Shake Shack burger
right now. Exactly, right.
In this moment.
And I'm a vegan.
Where did that come from? How's that even happening?
Now, the other side of this: we're suggestible anyway. You just said Shake Shack and I want one.
You don't need a brain implant to do it.
That's some shit right there.
So why, what is the urge to merge? You like that rhyme?
Urge to merge.
What is the urge to merge?
Got a need for speed and an urge to merge.
I am insane for an implant in my brain.
That's like a little too many syllables in there.
Oh, come on.
Tough crowd here.
It's like cut me a brain.
The third one in never gets it right.
Because the second one creates the trend, and then you've got to stay with the trend, and now the pressure's on you twice, three times.
So what is the urge to merge it into your physiology, your biology, when it's perfectly fine sitting in your palm, within my arm's reach?
Why do I have to... why do I need a USB port into my neck?
I think some of it's- Like in Avatar, they had USB-
I think some of it's efficiency
and some of it's a false quest for immortality.
So efficiency is,
if I don't have to type it,
I don't have to say it, it's faster.
And if I'm paraplegic and I can't type it,
I can't say it.
Clearly, in those cases.
So there are some cases where efficiency wins hands down.
And if I don't have to sit here typing
and I can search for those facts
that I wanted to give you faster,
that's just by thinking.
That would be great.
And I think it will happen eventually.
Okay, so I have the choice between a neurosurgeon
cutting into my brain and sticking electrodes,
sticking chips in it, or...
Using the phone.
Hitting my iPhone with my thumb.
I'm thumbing.
I got the thumb thing.
I understand that you got the thumbing, but the analogy I would make is to all kinds of
things that people do in sports where they want an edge.
And people are going to want their kids to get in.
I mean, already do want to get their kids into Harvard.
And if they think, I can get my
kid into Harvard with this implant, if they think
it's safe enough, they might do it.
Just like they'll give their kids steroids so that
they can get an athletic scholarship.
So it's a way, it's a
human augmentation.
That's what it is. We're talking about human augmentation.
Whoa. All right, let's bring this first segment
to a close. And when we come back,
it will be Cosmic Queries.
Yes.
As promised. As promised, we will get to Cosmic Queries.
You watching, possibly listening to StarTalk.
We're back on StarTalk.
Professor Marcus here from NYU, New York University,
which does a lot of cool stuff lately.
NYU.
Forget the actors, they've got a whole math department.
What's it called?
The whole...
Courant.
The Courant Institute.
Because their math is not a department, it's an institute.
It's good philosophers there.
You've got a lot of good stuff going on at NYU.
So it's great to have you in our backyard.
So thanks for making time for us.
You're one of the world's experts on thinking about.
It's funny you get to say that about a professor.
They don't have to do anything to be famous.
They just have to think about it.
World's expert for thinking about this intersection of technology and mind.
And we solicited questions on this very subject from our fan base
and all the usual cast of sources, Instagram, Facebook, Twitter.
What else?
Pretty much anywhere that there is an internet.
People can send us a question.
They can send us questions.
So, Chuck, what do you have for us?
All right.
Our first question is actually from a name that I can pronounce perfectly, Chuck Nice,
sitting here on the couch, who would like to know-
Are you taking first questions?
I am taking first questions.
Are you a Patreon member?
I am indeed a Patreon member.
You are?
Okay.
Well, there you go.
All right.
I am a data patriot.
Okay, well, there you go.
So I would like to know,
since we know how we download information to computers,
how exactly are we downloading memories to our brain?
From our brain to a machine?
Well, no, period. Us, as biological organisms
that have this brain function in the hippocampus,
how does that process actually take
place? How are we downloading memories?
I guess it depends what you mean by downloading.
Wait, wait. So here's your brain, okay? People talk about putting your brain in a machine. Now, I'm not talking about that. I'm just talking about everyday, ordinary experiences, which we see and we record, right? And then they're downloaded to a place in our brain.
Or uploaded, if you want.
Okay, okay. If you want to get technical, uploaded to the place in our brain, our hippocampus. What is that process?
Because there's really two versions of the question, I think, that we're both thinking of. One is, like, the ordinary course of events, forget about modern technology: how do I make a memory at all? And then the other is, like, am I ever going to be able to have a way where I can type something in my phone and kind of, like, AirDrop it, if you know the Apple technology, directly into my brain, or somebody else's. There's the famous scene in The Matrix where she downloads the skill for flying a helicopter.
Isn't that an awesome scene?
So that's, like, the second version of the question. The first is, like, an ordinary experience.
If I want to learn to ride a helicopter, I have to practice a lot.
And every trial is changing something in my hippocampus, in my prefrontal cortex.
The honest answer is we as neuroscientists don't yet understand that process.
We have looked at some simpler organisms.
So Aplysia is the most famous one.
And you can pluck at its gill and eventually it learns, hey, someone's being annoying.
I won't pull my gill in every time.
And we know something about how the synapses in the nervous system of the Aplysia change over many, many trials.
And so that's a kind of gradual learning.
But most of the learning that's interesting to us isn't about I tried something 50 million trials.
I mean, there's some things like, you know, shooting a basketball is many, many trials.
Practice makes perfect.
Practice makes perfect.
My guitar book was about learning to play guitar
and learning those things.
But there's also like, I saw my friend Gary
and he taught me the new word of chimera.
And like, you don't need a million trials to do that.
You're like, that's a cool thing.
And it kind of rattles around your brain.
We don't know exactly how the brain does that.
We don't even know exactly where it does it. So this very quick memory, which is most of what you're talking about, there are a
few things we would like to know. We'd like to know where it is. We'd like to know what the
biological process is. We'd like to know what the representational scheme is, which is like,
is it sort of like a bitmap for a picture? Is it like a set of words in a sentence? Do we use the
ASCII code? What is the encoding scheme by which that information is stored? Unfortunately, we mostly
don't know. There are some places we know a little bit. So we know something, for example,
about motor memories. And so we can read to some extent, if somebody is paralyzed and we
stick in implants in their brain, we can guess where they want to move their hands. And we're
partly reading their memories a little bit.
The implants are for you to read what's happening in their brain.
Read what's happening in their brain.
But we don't actually have a general understanding of memory.
It's one of the most basic things.
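Gary's "representational scheme" question has a concrete computing analogue: the same piece of information looks completely different under different encoding schemes, and if you didn't know the scheme, the stored bits would tell you little. A sketch using his own example word; the bit-string rendering here is just raw ASCII, not an actual bitmap of glyphs.

```python
# The same piece of information under two storage schemes.
# Scheme 1 -- ASCII: each character maps to a fixed numeric code.
word = "chimera"
ascii_codes = [ord(ch) for ch in word]
print(ascii_codes)   # -> [99, 104, 105, 109, 101, 114, 97]

# Scheme 2 -- the same word flattened into raw bits. Without knowing
# the encoding scheme, this string is nearly uninterpretable, which
# is roughly the neuroscientist's predicament with neural recordings.
bits = "".join(f"{code:08b}" for code in ascii_codes)
print(bits[:16])     # first two characters as bits
```

The point of the analogy: reading voltages out of a brain without knowing the encoding scheme is like staring at the bit string without knowing it's ASCII.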
But also, the memory of an Aplysia is pretty different from the memory of a Chuck Nice,
right?
I hope so.
And we don't want to do the same kind of experiments.
Most people don't get too squeamish if you chop open the Aplysia,
but probably you don't want to be chopped open and you have a say in it.
And your wife might get mad at me if I did it.
And it might be litigation.
She's the only one that's a fan of it.
It might not be a lot of lawsuits, but there'd be some.
It's a lot of paperwork.
And so we, I'm being facetious, of course,
but we as scientists don't do the same kinds of experiments on people. So we do things like MRI brain scans, but they're very coarse. The pixels in an MRI image, they're called voxels because they're three-dimensional, and each one has like 70,000 neurons in it. And a memory might be a matter of like 100 neurons in those 70,000 being configured the right way. I wrote an article in the—
So you need a higher-voxel-resolution machine.
You definitely need a higher-voxel machine.
And there has been some work.
So in people that have epilepsy, sometimes you have to cut open their brain in order
to do surgery.
And there are experiments in which scientists have stuck electrodes in the brains of those
people and found some pretty interesting things.
Like, they have found neurons that only respond when you see Oprah Winfrey or hear her name.
So they're kind of multimodal.
Oprah neurons?
Oprah neurons.
I was going to say, there are about 50 million women in this country who have that experience.
The Oprah neuron.
The Oprah neuron.
Wasn't it a Jennifer Aniston neuron that was identified?
But these are kind of like outputs of a process.
So we don't know the circuitry that causes this neuron to actually activate. We just know at the end of some long
chain of events, it fires there. There are a bunch of memories that are involved in that,
that help you know what she looks like, what the name looks like, but we don't,
we haven't decoded that stuff yet.
I guess you're not in a position to say, to tell me where in the brain is your concept of self?
No, I mean, I can tell you things like your prefrontal cortex is involved. If I blow away your prefrontal cortex, you're not
going to have much of a concept of self, but there's the old joke you might know about the
frog and its legs. The scientists are trying to figure out where hearing is in the frog and
they operationalize it by clapping and the frog jumps. And so they cut away their front leg,
front left leg, they clap, the frog still jumps.
So they say hearing isn't in the front left leg and they cut away the front right leg
and the frog still jumps.
They cut away the back left leg, it still jumps.
And then when they cut away the back right leg, the frog doesn't jump anymore.
And so they conclude, ah, hearing must be in the back right leg of the frog.
This is, you know, a pretty shoddy inference.
And unfortunately, a lot of the inferences that we
might make about memory and self and so forth are kind of similar. We lesion some part of the brain, or we study someone that has a lesion. We don't actually cause lesions too often in humans, except to cure epilepsy or something like that. And then something doesn't work anymore, but that doesn't mean it's the only piece involved.
It's like, let's say, you're stopping the epilepsy. You're not curing it. I would use a different word.
Well, fine. Point well taken. I will cut your brain open, cut through some lesions, to cure you.
You know, there's a long, sordid history of that sort of thing, going back to trephining, when they cut holes in people's skulls.
And what's the one where they pick your... the thing?
Yeah, it's trephining.
Yeah, okay.
So, a quick follow-up on this.
Ahead.
It might be a naive question.
In the scene in The Matrix
where Trinity gets uploaded
the instructions for flying the helicopter,
wouldn't she have also needed muscle memory for that
rather than just knowledge
on how to fly the helicopter?
Muscle memory is in your brain. It's not in your muscles. It's a misnomer.
And some of it's in your spinal cord, if you want to get technical about it.
Fine. So if I can read a book on Kung Fu and I can know every move, but if I have not performed it,
are you implying that you can put performance memory in my brain?
Yeah, but it's a really astute and clever question you're asking.
So why is it that when you read a book,
you don't get the muscle memory for free?
So why, when I read about guitar and music theory
and all the things that you needed to do to play and strumming
and read all these books about strumming,
could I still not do it very well?
And I still had to go practicing,
and I got at least a little bit better.
I think that's a kind of question about which processes are linked in which ways into the brain.
It's not a question of whether that stuff is ultimately in the brain and we can do brain scans
and show that different parts of the brain change as you learn to strum. So it's an access question.
Not all parts of the brain are equally accessible to one another. And so even though you can read
about it, you don't have a circuit that is responsible.
You think about the environment of adaptation.
Exactly.
So you can upload the knowledge of the information,
but then separately upload the experiential.
Okay, maybe that's information as well.
In principle, you ought to be able to do that.
Okay.
And someday, I won't be here to collect
or not collect on the bet,
but someday, maybe it's 100 years from now, we will, I think, be able to do that.
In principle, there's no reason why the experiential part of it can't be encoded, can't be fired
in there using nanobots that change the circuitry of your brain.
My book, The Future of the Brain, talks about some of this stuff.
There's no reason in principle why you can't do that.
But right now,
we don't know how to read the code. It's like if a computer dropped from above and there were no other computers, and you had volt meters and stuff like that. You could sit there and try to figure it out, but it would take a long time before you could say, so that's how Microsoft Word works. There's a lot of complication there.
Next question.
All right. Next question is
from Cat Pirates from Twitter.
At Cat Pirates, since we're on this
subject, will it one day be possible
at some point
to use computers to store and
access our memory? So this is just
the exact opposite of what we were talking
about in my question.
Offload it. Offload.
Can we take what's up here
and offload it onto some storage device?
I think the answer eventually will be yes. We're stuck in the same place if we don't really know the code yet. There's also a separate question I didn't talk about, which is invasiveness. So right now we can use an fMRI, you know, basically a set of magnets, to read stuff, but not with enough resolution. To get the resolution, we have no way of doing it now short of putting stuff in the brain, and even now that doesn't really work.
And I said, I saw people, they were reconstructing a photograph of somebody out of their brain thoughts.
Yeah, so there are studies like that that are actually...
Not about that? I did not know. I'm asking because I saw it weeks ago. One of the guys... It was fuzzy, of course, but it was like,
whoa, that's a person.
That's incredible.
It's fuzzy.
There's some tricks involved.
So you need to have right now,
and it'll be solved eventually,
as a kind of crutch
to make these systems work better,
these decoding systems,
you have to kind of give them a hint.
It's almost like animal, mineral, or vegetable.
So you tell them it's an animal,
and then given this information, you kind of guess,
I'm making it a little bit cruder, but you guess what kind of animal it is.
The systems we have now can't sort of take an arbitrary picture and reconstruct it.
But if you narrow things down, then the system-
You help it out.
You help it out with what's called a prior, and the systems can get somewhere.
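The "animal, mineral, or vegetable" hint Gary describes is, loosely speaking, a Bayesian prior: the hint reweights the decoder's otherwise ambiguous evidence. A toy sketch, with all numbers invented purely for illustration:

```python
# Toy illustration: a noisy "decoder" scores candidate images, and a prior
# (the hint "it's an animal") reweights those scores. All numbers invented.

def decode(likelihoods, prior):
    """Combine decoder evidence with a prior; return a normalized posterior."""
    posterior = {k: likelihoods[k] * prior.get(k, 0.0) for k in likelihoods}
    total = sum(posterior.values())
    return {k: v / total for k, v in posterior.items()}

# Weak, ambiguous evidence from the (hypothetical) brain decoder:
likelihoods = {"dog": 0.30, "cat": 0.28, "car": 0.22, "tree": 0.20}

flat_prior = {k: 0.25 for k in likelihoods}                       # no hint
animal_prior = {"dog": 0.5, "cat": 0.5, "car": 0.0, "tree": 0.0}  # "it's an animal"

print(decode(likelihoods, flat_prior))    # still spread across all four
print(decode(likelihoods, animal_prior))  # mass concentrates on dog vs. cat
```

With the hint, impossible candidates drop to zero and the guess sharpens, which is roughly why narrowing things down makes these decoding systems "get somewhere."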
Eventually, you'll need less and less support because the resolution will get better and better, and we'll be able to do things less and less dangerously. There'll be less worry
about infections and brains and stuff like that. You will be able to do it. I want to pause, by the
way, and say, I love the Star Trek episode of Black Mirror. Probably a lot of people saw it.
There's something totally wrong with it, which is there, you get the complete set of memories from
somebody's DNA. And DNA doesn't actually
carry memories. It carries the kind of evolutionary memory, so, you know, but it does not carry...
Well, actually, there's an interesting question there, which is DNA might actually be a substrate for memory, but it would be different. Like, we might use... or strands of RNA could store memory in it.
You could. That's right. It's a digital thing.
Maybe even biology does, in ways that we don't know. But
you don't store it in what we call the germline DNA
that they sequence in that show
in order to reconstruct the memory.
So just taking somebody's hair
is not going to allow you to break into their brain
and decide were they looking at the porn or not.
Like that is not going to be recorded in their DNA.
Well, thank God for that.
Chuck, I got this hair of yours.
You know the answer for you. We don't need the hair.
Chuck, you be nice.
I remember you, polyamorous roboticist. That's right. Polyamorous roboticist.
I love it. Alright. What's next? Here we go.
Alex Lander wants to know this. How close are we
to toys that can be remotely controlled by thoughts transmitted as instructions via radio? So I did see where, um, there are some, uh, things that we can control with our eyes, but that's really just tracking movements that become the joystick, right? Is there any transmission otherwise that we might be able to do?
Funny you mention joystick, because I was going to say, if all you want is a joystick, you could probably do that now. There may even be some, like, Kickstarter to do this, where you put an EEG skull cap on people and you can train up low resolution, so you get a few bits of information.
So I was at Comic-Con. They were selling these hats that...
There you go.
...claimed to read some EEG of your brain, and there were things that would spin or something.
And if you're in love, it would spin one way,
and if you hate...
So it looked kind of gimmicky,
and it wasn't that expensive,
so it could be just a fun party, you know, trinket.
But it's sort of party technology now,
and, you know, probably not even that reliable.
So there's an open question about how much you can get
from a skull cap that you wear outside your head. So you can get some bits of information. So forward and backwards or things
like that. You're not going to get subtlety. Like I want the toy to go under the chair,
around that other chair, up the guitar, next to the wall and back. Like that's too complicated
a thought for the skull caps, maybe forever, but not too complicated in principle.
We might need...
If you get into the brain in other ways,
eventually then yes.
So you would be,
this is basically electromagnetic signals at this point
because the sensors will be reading out of your brain
and now that gets converted to,
we know how to communicate across space,
but you need some conversion
from the electromagnetic signals of your brain
to some transmitter at that point.
It all again comes down to resolution.
So right now, we can do that in a kind of low-res kind of way.
So you get a limited bit of information.
The resolution will get better, and there's a decoding problem.
What is the code by which we read this?
We don't know how much actually kind of makes it outside the skull.
That's an open question, but some of it does,
and we'll get better at it.
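To make "a few bits of information" concrete: a consumer EEG rig effectively gives you one noisy number, and you turn it into a coarse forward/stop/backward command by smoothing and thresholding. This is a hedged sketch; the thresholds and smoothing window here are invented, and a real rig would need per-user calibration and far more signal processing:

```python
# Toy sketch: turning one noisy EEG-like reading into a low-resolution
# command. Thresholds and the smoothing window are invented for illustration.

from collections import deque

class OneBitController:
    def __init__(self, low=0.4, high=0.6, window=5):
        self.low, self.high = low, high
        self.samples = deque(maxlen=window)  # simple moving-average smoothing

    def command(self, reading):
        self.samples.append(reading)
        avg = sum(self.samples) / len(self.samples)
        if avg > self.high:
            return "forward"
        if avg < self.low:
            return "backward"
        return "stop"

ctrl = OneBitController()
for r in [0.9, 0.85, 0.8]:  # a sustained high reading
    cmd = ctrl.command(r)
print(cmd)  # forward
```

Forward and backward really is about all you get this way, which is why the subtle "under the chair, around the other chair" command is out of reach for a skull cap.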
We've got to take a break break and when we come back,
we'll finish this up,
which I hate to do
because I want this to go on forever.
Yeah, man.
When we come back, Chuck,
I want to ask a first question
in that segment.
All right.
Because it's my turn.
I got Chuck Nice.
I got Gary Marcus,
Neil Tyson.
We'll be right back.
We're back on a really cool episode of StarTalk. We're talking about the intersection of mind and machine,
psychology and technology.
Chuck Nice, helping me out here as usual.
Professor Gary Marcus, thanks for coming back to StarTalk.
We last had you on,
I had last had you with Ray Kurzweil.
Great program.
Thanks for your contributions there.
A question for you.
Reading up on your profile,
you're a critic of deep learning.
And deep learning is a major sort of research angle in Google and in IBM. And so what's your problem with deep learning? This is where a machine
sort of teaches itself based on just a few parameters and gets better and better at it on
a level where it's better than anything we could have trained it to do. Well, it is for some things,
but not all. There's an old logical fallacy, the fallacy
of composition. You see something is true for X and you think it's true for everything.
We do that in astrophysics all the time. It's always a problem. Deep learning is really good
at recognizing objects, but not perfect at that. I'll tell you about that in a second.
It's very good at speech recognition. So it allows your Siri or whatever to transcribe
your sentences.
But it's not very good at what some people call artificial general intelligence.
So artificial general intelligence means machines, AGI, machines that could answer kind of any question and not just a particular narrow set of questions.
So we have seen great advance in, for example, playing Go.
But Go is something where you can get as much data as you want for free. It's a Chinese strategy board game.
That's right. And DeepMind, a division of Google, has done fantastically well on that. But it's not
clear how that translates to real world problems ranging from driverless cars, which seem like
they're okay now, but they don't seem like they're maybe getting to where they're safe enough to
actually use, to general natural language understanding.
They just have to be safer than humans?
Well, even safer than humans is pretty hard.
So the problem with deep learning
and the problem with driverless cars
is what we call outlier cases.
So deep learning is kind of like
a glorified version of memorization.
If you've seen some version close to this before,
then you can interpolate, this is like that.
But if you see something that's unusual,
the systems don't work that well.
So there've been a couple of accidents with Tesla.
One of them—
In self-drive mode.
In self-driving mode.
One of them was in self-driving mode.
Tesla ran into a semi-truck that was white on a sunny day that was crossing a highway.
Well, that's an outlier case.
It's unusual.
If your paradigm is basically to memorize what you've seen before, you get into something unusual, something bad happens.
Another case, where we suspect driverless mode was engaged, was just a month or two ago.
A Tesla at 65 miles an hour on a highway ran into a stopped fire truck.
A human probably would not make that mistake.
Now, this is the red fire truck.
Red.
I believe it was a red fire truck.
Because they pretty much only come in two colors, which is bright red and bright yellow.
I think it was a red one, but we'll have to have your research.
The red is not even dull red.
Verify that.
It's candy apple red.
And you're like, how could that happen?
Yes, how could that happen?
Well, the way I think about it is deep learning is kind of like the part of your brain that recognizes textures and patterns,
but not the part of your brain that reasons about things. So you don't have an experience, probably, of a fire
truck parked on the side of a highway. So you can't look that up in your memorized experience.
But you do have part of your brain that can be like, that's a very large object. It's not moving.
That's probably not a good thing. I think I will move out of the way or slow down. And it's hard
to build something like a driverless car system that can deal with the full variety
of human experience.
We're near my home in Greenwich Village.
I ride a unicycle around here.
I really don't want driverless cars.
I do.
And I do not want driverless cars in Manhattan because they're not going to have a big data
set on unicycles.
That's the problem with deep learning: if they don't have a big data set about a particular thing, they don't know what to do with it.
So the term deep learning is actually like a great rhetorical move, like calling something
the death tax.
Deep learning refers to a particular thing about how many layers in a neural network
and something else, but not how abstract it is.
Okay, so there's an interesting ethical question.
If deep learning for self-driving cars removes the possibility of death
in most cases where a human would end up killing themselves or someone else,
like not seeing someone cross the road because they're putting on makeup or reading or texting.
Or you're doing the cycling and juggling.
No way.
I never do that while I'm driving. If it prevents 100%
of those cases... But causes its own
problems. But the cases that
we would have avoided,
a few of those slip through.
But nonetheless, we go from 30,000
deaths a year to 1,000 deaths
a year. But every one of those 1,000 deaths
could have been avoided by a human.
If that guy wasn't juggling on a unicycle.
For me, that's not a hard ethical question. I mean, I think then we should go
with the machines. The statistical realities, we're not even close to that yet. And the
political realities, they're questions of deep importance. So there is no question in my mind,
even though I'm a skeptic about deep learning and so forth, that it is possible to build a
driverless car that's safer than a human being. But politically speaking, there are going to be
people that die in kind of objectionable ways.
Nobody was too worried about the guy who died in the Tesla
because he was a rich guy.
He was watching Harry Potter
and people thought he's spoiled.
They kind of let it go.
But at some point,
there will be a driverless car
that kills a bunch of children.
And then there'll be a congressional investigation
and so forth.
And at that point, your question is really important
because it might be that in fact,
statistically, it's just much better off,
but they can't sell it to their constituents
or think they can't sell it to their constituents
and they could cut the whole thing off.
And so I worry about that a lot.
But if what Neil is saying is the case,
your outliers notwithstanding,
then the answer would be, if I'm the company,
I'm going to create a pool of other companies
where we just take a crap load of money and dump it into this pool that becomes the insurance
policy for when the one-in-a-thousand person dies.
Well, I mean, there's an economic question
about whose liability it is.
And, you know, there are places like, well, maybe I can't say on the record, but there are big car
companies who are thinking about maybe they can self-insure themselves. So there's that side,
but there's also the political and legal side of it. So even if there's enough money to pay the,
you know, families of the victims, nobody wants to be, you know, in that category of family of
victim. And the people whose families are killed in these very peculiar ways that you're talking about are going to be very upset. They're going to say we should ban the driverless cars, even if the overall statistics say, you know, actually we would save 20,000 lives.
The drunk teenagers on prom night who didn't die
is not a news story.
That's right. Right, right.
That the self-driving car protected.
Go quick to AI on there, because we don't
have much time.
This is Nicodemus
Archelone, who says this,
or Archelone, says,
should sentient artificial intelligence
be subject to the same laws and
hold the same rights as humans?
Oh my... I mean, I can
certainly see that argument. The problem, I would say there,
is we have no idea how to tell
whether something is sentient.
So it's one thing to be able to say,
can a machine behave in all these kinds of circumstances
in ways that are reasonable or whatever?
We don't have a measure.
I mean, it's like for consciousness.
We don't have a consciousness meter.
So there's this whole scientific field
of trying to figure out consciousness.
We got an argument about philosophy.
I'll make it real simple for you, Gary.
Machine, okay, you've programmed it, blah, blah, blah.
And then you say, I'm going to unplug you.
And the machine says, please, man, don't kill me, man.
Please don't unplug me.
Please, Gary.
It's not persuasive because.
Oh, damn.
You are rough.
Because.
You are so rough.
Oh, cold-blooded.
Oh. Because... Damn. Because, because, because...
Let's hear him out, let's hear him out, let's hear him out.
Go.
It's not persuasive for the same reason the Turing test is not persuasive: you can can responses. So it's not that hard for someone to build a robot and have a sensor to see if somebody's unplugging it and say that. Just like, you know, Siri has this line about Blade Runner being a story about two intelligent assistants or whatever, and some comedian sits there and writes it. You have an assistant who's, you know, been contracted to write jokes of this sort.
You're reminding me of this comic I saw. I think I've told you about this once, uh, probably a New Yorker comic. There are two dolphins swimming together, and one says to
the other of the humans on the
side, they face each other and
make noises, but there's no evidence that they're
actually communicating.
I love it.
Says the bigger-brained mammal.
Bigger-brained mammal.
Give me another one. Okay, here we go.
Ben Sadaj says this,
do you think it would be possible for AI to be able to identify and assist with mental health, sort of like a virtual therapist? And I'll go a step further. Do you think that it might be able to identify and then help self-correct someone who maybe is going off their meds or about to go into a psychotic break?
The answer is clearly yes. I'm actually talking to a guy named Roger Gould about working on a project with him about digital therapy. There's a number of other companies that are starting to
work with this. Actually, early in the history of AI was something called ELIZA, which was not
very clever. It had a lot of canned responses. I think I'm older. I remember ELIZA when it first
came out. Then you are older than me
because it came out
a little before I was born.
Yeah, yeah.
Eliza actually uses
some of the same kinds
of programming techniques
as Siri
and it can, you know,
get a little ways
and say, you know,
you mention your wife
and I can say,
well, tell me more
about your family
or your mother or whatever.
That's what Eliza does.
Ask me a question.
I'm Eliza.
Ask me a question.
Any question.
How are you feeling today, Neil?
Why do you ask that?
It's called Rogerian therapy, where you redirect everything.
Why do you feel so positive about Rogerian therapy?
Screw you, Eliza.
No, so you would say something like, my mother, you know, I don't think my mother likes me.
And they'd say, why don't you think your mother likes you?
So it would take the sentence, analyze the sentence, the verbs and the nouns,
figure out a sentence to send back to you.
And it would be like, if you weren't really thinking that it's a computer,
you'd think it was a sensitive psychologist.
Some people actually got fooled by the original Eliza.
It won't fool you for an hour, but it can fool you for five or 10 minutes.
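The take-the-sentence, flip-it-into-a-question trick Gary describes can be sketched in a few lines. This is a toy in the spirit of Weizenbaum's original ELIZA, not its actual script; the patterns and wording are made up here:

```python
# Toy ELIZA-style responder: match a keyword pattern, swap pronouns, and
# reflect the rest of the sentence back as a question. A tiny sketch in the
# spirit of the original program, not Weizenbaum's actual script.

import re

PRONOUN_SWAP = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment):
    """Flip first-person words to second person: 'my mother' -> 'your mother'."""
    words = fragment.lower().rstrip(".!?").split()
    return " ".join(PRONOUN_SWAP.get(w, w) for w in words)

RULES = [
    (re.compile(r"i don't think (.*)", re.I), "Why don't you think {0}?"),
    (re.compile(r"i feel (.*)", re.I),        "Tell me more about feeling {0}."),
    (re.compile(r"my (.*)", re.I),            "Why do you say your {0}?"),
]

def eliza(sentence):
    for pattern, template in RULES:
        m = pattern.search(sentence)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # canned fallback, the Rogerian redirect

print(eliza("I don't think my mother likes me."))
# -> Why don't you think your mother likes you?
```

A handful of patterns plus a canned fallback is the whole trick, which is exactly why it can fool you for five or ten minutes but not for an hour.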
There's some advantages to digital therapy.
Like, for example, with a real therapist, you have to wait.
And, I mean, usually, like, you feel this acute sense of pain,
something like that, emotional pain, and you want to talk to somebody right away.
And then you have to wait.
In a month.
Two weeks or a month or whatever.
And digital therapists, in principle, could be there, like, right then, right there,
say, you know, what's your problem, and let's try to figure out how to help you.
Not only therapists, but also someone who could be a friend,
your friend, a
consultant. Well, in China, there's something called
Xiaoice. Not too many people know about it here.
It's made by Microsoft, and millions
of people talk to Xiaoice every day, and it's partly
kind of quasi-therapeutic
friendship kind of relationship.
But really, it's a government
information-gathering technique.
If it's China, let's be honest.
Theoretically, it's not.
But I'm not going to touch that part.
But Tay, which they made over here, Microsoft made over here and became very offensive,
is actually somewhat similar technology.
But it's sort of trained on a different data set.
The other problem with deep learning is it's super sensitive to the data set.
It's hard to get it to kind of step away from the immediate data.
So if you have a lot of
Donald Trump Twitter bots talking to Tay, it's going to take Tay in a particular direction and
you don't have a sort of abstract enough understanding of what's going on.
Yeah, let's see if we can get two more questions in here, but we'll be going in speed mode.
All right, speed mode. Here we go. Brandon Christopher from Facebook wants to know this.
Is there a concern that we are reaching a tipping point where people psychologically cannot handle the advancement in technology?
People are pretty good at adapting to new technologies, so no.
That is surely no one under 20 asked that question.
They have adapted.
Yeah, they have adapted.
Next one.
Lauren Puglisi says this, what ethical guidelines should be established before these new technologies are developed in order to prevent abuses? Now, you want to talk about AI.
That's a doggone good question.
What are we doing to make sure that we don't?
Who abuses who?
AI abuses us or we abuse them?
Like, yeah, well.
I think it's a really hard question.
I'll put in a plug for an organization I'm on the board of called Ada.ai,
which is partly trying to kind of...
As in Greek letter, Ada?
As in the first female computer programmer.
The first computer programmer was female,
Ada Lovelace.
Oh, Ada, Ada, yeah, yeah.
And it's Ada-AI.
And they're trying to, in part,
be a kind of consumer organization
to help represent consumers' rights in all of this.
So AI is being driven by the big companies.
One of the big problems is you have these ethics panels
where the people don't know as much about what it is
they want to make ethical laws about
as the people who are making the thing itself.
You want to make sure you have people
maybe with not so much self-interest,
but have knowledge.
The other problem is the machines are just so dumb.
So I had a New Yorker column about what would happen if-
What's in the machines?
Well, I had a New Yorker article about what would happen if a driverless car went out
of control, hit a school bus full of children.
Everybody picked it up.
Barack Obama picked it up.
It really spread pretty wild.
And it's a really interesting-
It's an article you wrote in the New Yorker.
Yeah, in November, I think, of 2012.
And a lot of people started thinking about this.
There are conferences where people talk about it now.
And the reality is, okay, but right now they're hitting fire trucks on the side of the road.
That's not an ethical problem.
That's a perceptual problem.
We have to solve those first before we can get to some of the ethical problems.
But they are important.
I think we got to wrap this.
Gary, thanks for being on, dude.
Always a pleasure.
Pleasure being back.
We got to get you back.
Let's do this all the time.
Once a month,
we need a brain machine episode.
We need a brain machine episode, yeah.
I'm down.
Chuck, always good to have you here.
Always good to be here.
You've been watching,
possibly listening,
to StarTalk,
a Cosmic Queries edition
on the brain and machines.
As always,
I bid you to keep looking out.