StarTalk Radio - Cosmic Queries – Bioethics
Episode Date: June 28, 2019
Neil deGrasse Tyson, comic co-host Paul Mecurio, and NYU bioethicist and philosopher Matthew Liao answer fan-submitted questions on artificial intelligence, the moralities of science, CRISPR, “designer babies,” the ethical limits of experimentation, vaccinations, and more.
NOTE: StarTalk All-Access subscribers can watch or listen to this entire episode commercial-free. Find out more at https://www.startalkradio.net/startalk-all-access/.
Photo Credit: StarTalk©.
Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.
Transcript
From the American Museum of Natural History in New York City,
and beaming out across all of space and time,
this is StarTalk, where science and pop culture collide.
Welcome to StarTalk.
I'm your host, Neil deGrasse Tyson, your personal astrophysicist.
We've got a Cosmic Queries edition of StarTalk.
The subject, bioethics.
My co-host, Paul Mecurio.
Paul.
Nice to see you again. Welcome back, dude.
Yeah, thanks for having me back.
Thanks for making some time for us before warming up Stephen Colbert's audience.
So you're right down the street.
Yeah, yeah, which is up the street from you from here.
Yeah.
Can you provide a limo, which is nice.
Did we?
No.
Okay.
Now, you're not a bioethicist.
No.
Neither am I.
No.
Even though we might have thoughts on the matter.
Yes.
Right, right?
Every day, I'm constantly...
I wake up and go, what is going on ethically?
Bioethically, yes.
So we went back into our Rolodex and re-invited Professor Matthew Liao.
Matthew, welcome back to StarTalk.
Thank you.
So we last had you on stage live in front of an audience at New York Comic Con.
And we had Adam Savage with us as well.
We were talking about human augmentation and whether that would be bioethical.
And you said off camera, you remember that that was my birthday?
We sung, right?
Oh, I guess.
I try to forget those things.
3,000 people sung to you.
They did.
They all sang.
They all did sing.
It was written on the program that that was required when you came in.
So, welcome back. Good to have you.
You are director of the Bioethics Program.
The Center for Bioethics at NYU.
At NYU, Center for Bioethics. And New York University, right here in town.
So, easy date for you.
So we'll be calling more on you as we think of these issues.
So we've got questions.
Paul, I haven't seen them.
I don't know if Matthew's seen them.
No, he has not.
And we'll just sort of jump right in.
Well, let me just find out, what is bioethics?
What's an example?
Just so we're on the same page.
Yeah, it's the study of biomedical issues
arising out of biomedical technologies.
Mostly medical now.
Yeah, mostly medical.
But it could also involve things like artificial intelligence
and sort of its connection to healthcare.
Yeah, but AI is not bio on purpose.
Right, right, right.
But it could be used for...
So what you want is silicon ethics.
Yeah, silicon ethics.
That's right, that's right.
Well, a lot of people are now thinking about putting in
things like brain-computer interfaces into their brains
and things like that.
So the silicon and the organic matter,
they're kind of merging now.
Oh, so this complicates your job.
That's right.
Or makes it more interesting, both.
Yes.
What's the fastest-moving area?
Is it AI?
Is it genetic manipulation?
Yeah, I think both of them are occurring concurrently.
So there's the CRISPR technology, gene editing technology,
that's sort of really advancing.
I like that because if you can mutate my genes
so I don't have to go to the gym, I'm your guy.
Is that how that works?
Yeah, that's exactly how it works.
You can do it today.
And then there's the artificial intelligence.
People are using that for things like cancers.
Pathologists are looking at these images.
The AI is getting really good at pattern recognition
and image recognition.
They can spot cancer cells almost as good as pathologists now.
Okay, but that wouldn't be an ethical thing.
That's just the machine can do it better, so let the machine do it.
Right.
Right, so ethics would be now the machine knows your condition
and it's connected to the internet.
Yeah.
And so a hacker might have access.
Yeah, or say that the insurance company knows the algorithms
and tries to hack it and sort of make it look like it's not cancer when it is
or something like that.
Or sort of issues to do with privacy.
Well, he's paid to think about this stuff.
It's incredible.
You have a very diabolical mind.
Yes, exactly.
Come up with a way we can foil this system.
When you're out for dinner
and the waitress goes,
would you like to have dessert?
You're like,
what do you mean by that?
Are you fun around people?
I mean,
you're fun,
but if they want to do
something a little
inappropriate,
like put a little
extra gas in
when nobody notices,
you go like,
no,
there's an ethical issue there.
How ethical are you
is the short question there.
Well, there are surveys that say that ethicists aren't necessarily more ethical.
Oh, really?
So they, you know, apparently they steal sort of books from libraries, and they don't call their mothers, you know, on Mother's Day, and things like that.
Yeah, I call my mom on Mother's Day.
Do as you say, not as you do.
That's right.
That's right.
That's right.
All right, so what questions do you have, Paul?
We're going to start with the Patreon question.
Patreon, let's do it.
This is Oliver.
This is up to the Patreon.
Yeah, absolutely.
We love them.
This is Oliver Gigaz.
I'm sorry if I'm mispronouncing that.
Personally, I feel that we, the general public,
aren't talking enough about subjects like bioethics and AI,
even though they are clearly going to be a huge part of the future.
Do either of you feel the same way?
And if so, how can we better educate ourselves on these subjects?
I completely agree.
And so one of the things I try to do is to talk to the public about some of these issues and work in this area.
Things like sort of gene editing and artificial intelligence.
How much of it is just fear that people don't understand the technology?
And so we fear everything we don't understand.
Doesn't it come down to that at some level?
Yeah, I think a lot of it is that just people are scared of new technologies. They're very cautious.
And there's also science fiction writers that take it to the worst.
That's right.
Future.
That's right. The you know, the robots are after us, they're gonna kill us.
Oh, yeah.
Super intelligence is coming. And so people get really scared and they think, oh, we should not do any of this stuff.
And that's also bad for science.
It's bad for progress.
Yeah, but I just bought a car where I don't have a dipstick anymore.
And I just hit a button and it tells me the oil.
The oil.
Really?
Yeah.
And I'm a little weirded out by that.
Like, I want the physical thing.
Get off my lawn.
I'm not an old man yet.
But I don't trust it.
Young whippersnapper.
What if the oil companies have adjusted the program of that so that it's falsely telling me I need oil to make extra money?
Yeah. We should hang out.
You see what I'm saying?
Yeah. So you sound like a bioethicist already.
Man.
Okay, just not to hang you out to dry.
Yeah.
When the dashboard became all screen without a mechanical speedometer,
where it just turns on, and when it turns on, it has your mileage.
And I'm thinking, this is a screen.
Come on now.
Exactly.
Okay?
There's no mechanical mileage counter.
How does... who... I got all old man on it.
Give me back my dial.
I'm with you.
I unplug my toaster every night because I think it's going to catch fire.
I don't know.
The whole thing is sort of overwhelming for people on some level.
Yeah, so I think you hit on exactly the right issue.
And the issue is trust.
Like trust in technologies, trust in algorithms,
trust in like how do we make sure that when we roll out these technologies, there's trust.
And that's the job of the scientists, but also the ethicists and everybody.
Yeah, and the educator to make sure that we can actually trust these things.
So here's a question that I remembered getting asked of the public.
And I remembered at the time what my answer was then, and it still is today.
But the public in the day answer differently.
Here's the question.
If something happens, you're on an airplane,
and something goes wrong with the airplane, okay?
And what would you trust?
A button that says, auto-fly this thing home,
or a decorated, trained Navy pilot who would bring it home?
It was the pilot, of course.
And I'm thinking, no, give me the auto-fly.
It's like, push the auto-button.
What if he just had a fight with his wife and just downed a bottle of scotch in the airport?
That's what I'm saying.
The button didn't have a bottle of scotch, guaranteed.
And today, I mean, what my thinking has borne out,
because planes are designed so that they cannot actually be flown by a human being.
There's too many surfaces that are under control of the computer.
That's why flying is so stable now.
Do you trust the technology or not?
That's right.
In order to trust the technology, you have to make sure
that it's safe, it's tested, it's reliable.
It can be
adversarially
attacked.
That's why ethicists
like myself, we ask
these questions. things like,
well, what happens?
We imagine these hypothetical examples,
like what happens if the insurance company
is trying to cheat you and do certain things?
Or if the hacker is trying to hack into the algorithm
or the imaging thing,
there's plenty of evidence
that some of these imaging machine learning technologies
can be hacked.
But the thing that's amazing to me is science,
and especially what you do, is so on track with ethics.
It's a microcosm because in society in general,
ethics seems to be the last thing.
It's like worrying about table manners
at a Game of Thrones red wedding.
You guys have this ability to really think about these things. Like there's this
conversation about like, well, AI could
destroy the planet. Well, humans are already
kind of doing that.
Maybe AI can do it better.
More efficiently.
Exactly.
Less complaining.
Yeah.
So some people think that
the super intelligence,
you know, if they were to be created,
they're going to decide that,
hey, we're destroying the,
you know, we're destroying the planet.
And one way to stop,
to help the planet is by,
like, killing all of us.
Because we're a virus.
Because we're viruses.
Yeah.
That's the word my wife uses for me.
That's a line from The Matrix.
Yes.
Yeah.
All right, so Paul.
Yes.
You got more questions.
I do.
Go.
Raymond Ouyang, startalkradio.net.
Nice.
Question about morals and science.
Are there any circumstances in science where it would be acceptable to bypass ethics in
human experimentation if the findings would lead to a greater good?
Oh, good one.
Wasn't that the entire Nazi medical
enterprise?
Yeah.
And the Tuskegee study?
That's the Tuskegee study as well.
Just tell us about one or both of those
and then tell us what... That's a great question here.
Yeah, so the Nazis were sort of
experimenting
on humans.
For example, they're taking them up into the airplanes
to see how much pressure a human being can withstand.
These are mostly Jews and other undesirables in the Germanic model of humanity.
That's right.
And apparently some people
say that they were able to find out
things that we wouldn't have otherwise
found. But still, I think
that it's very clear now
that we need to sort of
abide by these ethical norms and
we need to stick to
research ethics. And there's sort of,
since the, there's something called the Belmont
Report that came out as a result of
the Tuskegee experiments.
Describe the Tuskegee briefly.
It's the
experiment where
there are these subjects and they were given
syphilis.
They weren't told that...
I thought they already had syphilis.
They already had syphilis. But they were told they were being treated, but in fact they weren't.
That's right.
And then the observation was to see the progress of syphilis in the human body.
And all the subjects were black men.
That's right.
After that, when it was discovered, basically that was the birth of bioethics as a field.
People decided that we shouldn't be doing this.
We need to look at, there were sort of different principles
that were being proposed, things like do no harm.
You need to make sure that the research benefits
the subject, and then you need to make sure
that there's autonomy, there's informed consent.
So a lot of the bioethical principles came out
as it was-
Interesting.
Oh, go ahead.
Well, do no harm, that's in there.
That's part of the Hippocratic Oath.
Yeah, but talk to Mickey Rourke's surgeons.
I mean, they violated that thing eight ways to Sunday.
Right.
I mean, isn't that sort of part of the, like, the medical field, to me,
seems like that was, fair to say, the first sort of area where bioethics
was sort of really founded in some way.
Yeah.
And yet it seems like that profession, they're all over the place.
I mean, there's pimple popper shows and TLC.
Well, I think maybe their intent is to not do harm
even if they end up doing harm.
Right.
Right.
Like in plastic surgery.
You can go wrong
if that wasn't their intent.
Yeah.
Right.
It's like me with a bad joke.
You did harm.
It was a lot of harm.
That set did a lot of harm.
Yeah.
And I can't bring it back.
Okay,
so what you're saying is
this is an interesting
enlightened posture
which is no matter what is going on, I will do no harm to you, even if having done harm to you saves the lives of a hundred other people.
Because the individual has the priority in this exchange, in this relationship.
That's right.
So that's enlightened, and even profound, I think.
And so...
Is the converse of this whole issue
with measles now and how...
Because I'm really fascinated by that.
So someone is morally against a vaccination
because they think it causes autism,
and yet they're putting
entire communities at risk, right?
What is the conversation
in your field now about that?
Yeah, I mean, it's...
And what do you serve at a measles party?
Salmonella cake?
I'm just curious.
Like, what do you...
But that seems to me to be...
Yeah.
So my own view about vaccination
is that we have a public duty to, you know, be vaccinated.
And so that comes from sort of not harming other people.
So we have an obligation not to harm other people.
And so the issue with vaccination is that
we also have a right to bodily integrity.
So some people think that we shouldn't be forced
to be vaccinated if we don't want to.
And I think that's right.
But I also think that that doesn't mean
that we ourselves don't have a duty to be vaccinated.
So we should do it voluntarily.
So there's a greater good.
That's right.
It's a greater good argument.
That overrides the personal integrity.
Well, you can, personal integrity is something that you can waive.
It's your right, but you can waive it, right?
In these cases.
And so in this case, I think that we have a duty to serve the public by getting vaccinated.
You kind of straddled the fence there a little.
You didn't want to create a law.
You should run for president.
That was good.
You did not answer that question.
You know what?
That was an unethical answer.
Oh.
No.
It's interesting.
It's really complicated.
Yeah.
And they actually dealt with a little bit of this
in Planet of the Apes
because you have the intelligent chimps
and they're doing medical experiments on the humans
that they captured.
And we think that's an abomination
because we're human.
But of course we do that on lab animals all the time.
So who are we to say that they can't do that?
And yet the quality of our life is much better because we do it.
So it's sort of this whole
balancing act.
That's why we have you.
Yeah.
Not to do experiments on you.
No, no, no.
No, that's next week.
You come back
and there's a dungeon
and we take you there.
That's why, wait,
let me just,
I can't let this go.
Is there,
so there's not even
some numerical threshold
where you would say
harm to one person
if it saves a hundred
or a thousand
or a million
or a billion.
So there's this view, it's called threshold deontology.
And it's threshold deontology.
Deontology.
That's right.
And it's the view that there's a threshold.
And when you cross that threshold, then it might be okay to harm somebody in order.
But isn't it arbitrary who decides what the threshold is?
Yeah.
That's why we have him.
You're making all of these decisions.
He's the ethicist.
I'm leaving.
You're sitting next to an ethicist. Who makes these decisions?
He makes the decisions. He and his people.
He has people. He has a team.
Yeah, so you're
absolutely right. So where's the threshold?
It's not okay to, say, kill
one to save five people.
Is it okay to kill one to save a million people?
Right.
Or a billion people?
What's the threshold?
If one to five is okay.
It's not okay.
Yeah.
Okay.
But then you're saying, let's say, no joke here, Neil is one of the five.
But then there's a million and you're saying it's okay.
You've devalued his life
based on the number of people
in his group. Yeah. There doesn't seem
to be any logic to that. Yeah.
So some people say that
well, if we were to
think that it's okay to kill Neil
in order to save a billion people.
What? How did I think that?
Well, you're just
very smart, extremely intelligent,
so you're worth a billion people.
I'm worth like a dog.
I'm the equivalent of a dog.
Then all of us.
This is like, it's the rowboat thing.
You throw out Abe Lincoln.
Do you keep the criminal?
That's right.
And by the way, how would we kill Neil?
Just out of curiosity, would it be a slow death?
I'm an ethicist.
The most ethical way to kill me.
Painless.
Painless way of doing it.
So tell me that it's called
Threshold Deontology.
Threshold Deontology.
And so that's the view that
there's a threshold beyond which it's okay
to harm somebody in order to
save the greater number.
So towards the end of the movie,
The Secret of Santa Vittoria,
this is, I don't know
if it's fiction or if it's based on
a real story. There's a town
in Italy that had
this, or it might have been France,
this amazing wine
producing,
world famous for their wine,
and the Nazis were coming through and they didn't want the Nazis to get it.
So they hid the wine in a cave
and bricked it over
and then put moss on it
and made it look aged.
And then the Nazis came in looking for the wine
and they couldn't find it
and they scoured the countryside
and they decided that
whoever's the next person that comes out in the street,
they're going to torture them
and find out where the wine is hidden.
So the townspeople agreed to let the prisoner out of the...
They said, you're free to go.
Because the prisoner didn't know any of this.
The prisoner was just a thing.
The prisoner comes out and the Nazis torture him.
Wow.
And they couldn't figure out where it was and the Nazis leave.
Jesus.
It would have been hilarious if the guy they tortured was a sommelier
and they just got caught.
Come on, man.
I just got my degree.
Really?
What are you doing in jail, though?
All right, we've got to take a quick break,
and we'll come back more on bioethics.
Really cool, when StarTalk continues.
Bringing space and science down to Earth, you're listening to StarTalk.
We're back on StarTalk.
We're talking about bioethics.
My co-host, my guest co-host
this episode, Paul Mecurio.
Paul, you tweet, Paul?
Give me your Twitter handle.
At Paul Mecurio.
Okay, very creative.
I had my people,
we gathered around,
we had a long meeting.
By the way,
it's M-E-C-U-R-I-O
and I only say that
because there's
an Australian actor,
Paul Mercurio,
M-E-R-C-U-R-I-O.
Mercurio.
Which is actually
how I spell my name
but he got in
the actor's union
before I did.
He was in Strictly Ballroom
in Exit to Eden.
Whoa.
So I did my first guest appearance on a sitcom.
My manager calls it, you have to change your name.
I'm like, why, did I bust a law or something?
He's like, no, there's this guy.
So it's M-E-C-U-R-I-O.
And in retrospect, I should have just changed it to Smith
because it would have been a lot easier.
Mercurio is cool.
Reminds me of the planet Mercury.
Oh, there you go.
And we have Professor Matthew Liao.
Welcome.
And you're head of the
Bioethics Center
at New York University.
And we're reading questions.
We've got questions
from our fan base.
We have another question
for Matt.
Bioethics.
For Matty.
I'm going to call you Matty
the rest of the show.
All right.
This is Hay Hider
from Instagram.
Do you think CRISPR's technology
will allow us
to take the DNA
of an athlete
or maybe a bounty hunter, tweak it to be even better and stronger than the original, and then take the DNA and create a clone army?
Can we do that?
And if so, please send the instructions to my bunker.
No, I just added that.
So.
Okay, cool.
So what's up with that?
So, yes, I think that's possible.
I mean, sort of the fact that some people are stronger than others
is partly genetics, right?
And so if we can figure out the genome...
But don't say that because then Paul will say,
I'm not getting muscles because it's genetic.
So therefore there's no point in going to the gym.
The Twinkies have nothing to do with it.
I did say partly.
Partly, partly. Partly, partly.
Yeah, yeah.
And so, like LeBron James,
you know, sort of, you know,
because of his genes, right?
And so if you can sort of sequence...
Well, he's big because of his genes,
but is he athletic because of his genes?
Yeah, he needs to work out.
Yes, okay.
So there's definitely the nurture part,
and other things come directly from your genes.
That's right. That's right.
And so we can figure that part out. And then you can imagine using CRISPR technology to then put that into sort of either gametes or embryos
and then create offsprings that have those traits.
So this is in our future?
I think so.
I think this is something that can be done.
So we will breed into our own civilization
entire classes of people for our own entertainment.
Is that anything different from sumo wrestlers in Japan?
It's called the one and done rule in college basketball.
Isn't that what we're doing, basically?
Yeah.
Tell me about sumo wrestlers.
No, it's not a genetic thing,
but they're specially treated and specially fed
to be sumo wrestlers.
That's right.
And that's a cultural thing.
They don't live long and everybody knows this.
I don't think they reach 40, age 40.
Yeah.
So is that really any different from doing that genetically?
So that's what, you know,
so people talk about designer babies
and the ethics of designer babies.
So there's the question
of whether we can do it,
but then there's also
whether we should be doing this.
And I think...
It's very Jurassic Park
over there.
And I think Neil
asked a really good question.
Do you have an evil lair?
Right,
which is that
we're already doing
a lot of this,
you know,
this hyper parenting.
Look at like Serena Williams and, you know, Venus Williams.
Yeah, but that's different than manipulating through CRISPR, manipulating a...
But the result is the same.
Yeah.
It's not different.
Yeah.
So the question is, what's the difference, right?
What's the difference?
One psychologically and the other is through genetic...
Yes.
So the means are different.
That's definitely right.
But why does that make a normative difference?
Why is it sort of ethically different
when we do it at the genetic level
as opposed to after the child is born?
So maybe it's because maybe I can...
You might have genetically bred me this way,
but I can choose to not do this.
Right.
But can you?
Shouldn't you have bred him in a way
not to fight who he is and what he is?
Yeah, but maybe I'll say
I'd rather just be a poet
and then you can't stop me.
Whereas otherwise,
if you're raising me this other way,
then there's all this conflict.
You know, go to the gym,
eat your three squares, whatever.
Or stay at the piano.
And right, it's conflict at home.
Whereas you can be genetic, can have a genetic propensity
but then just decline the option.
Boring house, though.
I'd rather be like,
I don't have a mom,
and then slam the door, you know?
Yeah.
Well, the problem is,
what if you also genetically modify the motivations
so that the child wants to be a super athlete
or super pianist.
Could you make me want to be Neil deGrasse Tyson?
Maybe, you know.
I just want to be able to talk like this.
Oh, yeah.
We have another one?
So the answer is yes, it's possible, and it could happen.
And we need more of you, the ethicists, around at that time
to either say no or yes to it.
Right.
Good, okay.
Launchpad Cat, Instagram.
Is there any such committees that regulates new technology
such as genetic tech or AI
and puts regulators in place preemptively
to prevent it from being used for amoral things
like eugenics or something of that sort?
So the U.S. has a sort of...
Just remind people what eugenics is.
Eugenicists, yeah.
No, to remind people what eugenics is.
Eugenics is this idea, it means well-born.
And so basically
the Nazis were
trying to breed these people
to sort of, you know,
a certain race or certain class of people
thinking that some genes are better than others.
But even at a time when the concept of gene was not really...
They just knew that if you breed two people who are desirable,
presumably you'll get a desirable person.
And then you prevent others who are undesirable from breeding.
And then you can systematically shift the balance in the population
to be a demographic who you want and care about.
So the Aryan ideal was then what was sought.
Isn't that happening in a way with breeding dogs and breeding purebreds and sort of breeding?
And plants.
So like Irish setters are out of their minds.
Because they've been bred so much.
We have a dog that we adopted. It's like a mutt, and she's totally chill.
Totally chill.
Yeah. But you're not breeding...
But we're doing it with plants and animals.
That's right, other animals.
That's right. But is it really going to be a board that's going to oversee this preemptively? I mean, I said this before, but look at the medical profession. There are a lot of questionable things that are going on in the medical profession.
Yeah, and there's a board that oversees that preemptively. The boards have ethicists on them.
That's right. That's right. There's sort of different research committees.
They have oversight, sort of IRBs. They're institutional review boards.
Institutional review board.
Yeah. And then they have ethicists on those boards to look over research,
look over the experiments to make sure that they're ethical.
The problem is that with these IRBs, it's sort of...
There's something like that.
We're not allowed to, the scientific community,
there are rules about what animals you can do laboratory tests on.
Really?
Right, like chimpanzees, there are certain things you can't do.
That's right.
And depending on someone's judgment, some panel's judgment, as to the value of that animal to the ecosphere, or to whatever.
Yeah.
And other than PETA,
if you're doing it to a rat,
I don't think anyone cares.
I was going to say the rat,
like that poor thing
gets slammed every time.
Okay, so do you think
it can be effective
going forward?
Yeah, so the, I mean...
It's only effective if the researchers are responsible.
That's right.
Okay.
That's right.
Yeah, and also the value of the research
has to justify whatever research that you're doing.
So you can't just sort of, you know,
torture these rats for fun.
You can't?
You cannot, right?
So that's very unethical, right?
And so...
You should have told me that a couple of weeks ago.
You know, so in order to do research,
even on animals, even on rats and mice,
you have to be able to justify it
to an institutional review board.
You have to sort of say, why is this necessary?
And there's no other way.
You have to show that there's no other way.
And this is sort of a less harmful way of doing it.
The least harmful way of doing it.
And it's not the rat's fault it doesn't have hair on its tail,
but a squirrel does.
It'd look adorable.
It's not its fault
it's got a little,
it's really pointy nose.
It's not its fault.
Exactly.
It's just hanging out.
It's not its fault
it eats your garbage.
Well, now it's my fault.
I'm sorry,
I put my garbage out on the street.
Squirrel eats nuts
and rat eats your garbage and you don't like it.
Right.
Pigeon, right?
Rats with wings.
Do we do experiments on those?
Hmm.
We should.
Look into that.
Are we having another one?
Yeah, let's keep going.
All right.
Scotiashofrandon, Instagram.
If in the future our noble intentions lead to the practice of genetically editing fetuses for preventing birth defects and future diseases,
how do we avoid the pitfall of creating designer babies and the possible repercussions, genetic inequality, caste systems, etc.?
And would it even be a pitfall at all?
Yes.
So that's right.
Would it even be a pitfall at all?
Maybe this is something we should think about doing.
Maybe there are good reasons to do it.
For example, to genetically have designer babies
to engage in genetic editing.
So this is where we were talking about earlier
that people, as soon as, you know,
when they think about new technologies,
they get very scared.
But maybe there are good uses of these technologies.
So just for example,
if we want to sort of go engage in,
if we want to go to the moon or go to space,
we want to make sure that we're more radiation resistant, right?
And so maybe there's some sort of genetic basis where we can sort of be more radiation resistant.
And so that's something that we should look into if we want to sort of...
So that means you breed people for certain jobs.
Yeah.
But this idea of creating the perfect human,
I mean, I don't even know if anybody wants that.
I mean, everybody hates Tom Brady.
And that's about as perfect as you're going to get.
And I'm a Patriots fan saying that.
Here's where I would take that.
I would say, isn't so much of what we are,
what we've been through to overcome what we're not,
so that if you come out perfect,
then where does your character get developed?
Where is your sense of...
Because you're interacting in an imperfect world, right?
So your perfection is always challenged.
Well, I'm just saying,
who you are is almost always what you have overcome in life.
Absolutely.
If you're perfect, there's nothing for you to overcome.
What do you got to show for anything?
So you're saying it's unachievable
to create a perfect person.
No, you can create
a perfect person,
but they will achieve nothing.
That's what I'm saying.
The real achievers,
stuff happened to them.
Hey, Doc,
I was supposed to be perfect
and I'm not.
What's going on here?
Look at the real achievers
in life.
They've overcome something.
It's a broken family.
There's a thing.
They have a lisp.
They've got a limp.
They have a limp.
A therapist gave me a list
of people that,
things that,
people who were rejected,
you know,
like Edison.
Yes.
Bell.
No one's going to want
to talk to each other
far apart through a box.
And they were rejected
and rejected
and overcame.
Yeah.
Right.
That's what I'm saying.
So if you're perfect, you might be of no use to anyone. Right. Yeah. So I think there are two
things to say there. So one is that, you know, human goals will change like the better you get.
So, you know, like my kids, when they're five years old, they like to play go fish. Right. But
now they're 10 years old. They don't play Go Fish anymore. It's too boring, right?
Because you've kind of outgrown that, right?
And so you can imagine that
when we get smarter,
there are other things,
there are other challenges, you know?
That we don't even know of right now.
That we don't even know of right now, right?
And then the flip side of that is,
you know, if you really think that
there's really value to being imperfect,
then that can be, you know, there's an app for it.
So make it more challenging.
Okay, take it back.
Should we do another question?
Real quick, real quick.
Another question.
Okay, here we go.
Patrick Lin, Facebook.
Are there any red lines that we should not cross or maybe never cross in science and in ethics?
And a related question, are there any ethical red lines today that you think should be rolled back?
Oh, good one.
And we don't have time to answer that because we have to take a break.
When we come back, the red line.
Should you cross it or not?
On StarTalk.
We're back on StarTalk.
Bioethics is the subject of this edition of Cosmic Queries.
Matthew Liao, you're our ethicist.
You're head of a whole center for bioethics.
Everybody comes to you with their problems.
Is that how that works?
I don't know.
And always good to have you, Paul.
So when we left off,
there was a question about crossing red lines.
Yeah, this is Patrick Lin, Facebook.
Are there any red lines that we should not cross?
And a related question is,
have there been any red lines that you feel we've crossed that should be rolled back?
Yeah, yeah. Um, well, I think there are many red lines
that we shouldn't cross. So, uh, some people... I mean, just, you know, creating humans that'll be
like slave humans, for example. I mean, that's an obvious one.
Doesn't that happen anyway if you
create humans who are perfect?
Then the humans who are not created perfect are left as slaves to the perfect ones.
Well, you really hate perfection.
No.
Then you're making a slave class
without purposely making a slave class.
Yeah.
So there's this view that,
I mean, even in our society now,
sort of people have differential abilities, right?
But we think that everybody has equal,
like they all have the same moral status, right?
Yes, yes.
And so we could still have that.
People under the eyes of the law.
That's right, that's right.
And so we could still have that,
even if you have like some people who are perfect
and other people not as perfect, right?
Who would be enslaved by that.
Yeah.
And how about red lines that we have crossed
that you would roll back today?
I got one.
Yeah.
I'm old enough, I'm old and all y'all,
to remember
the announcement of the first test tube baby
that was born. That was banner
headlines. Test tube baby.
And today, that's not
even an interesting point to raise
on a first date. Whether you
were in vitro or in utero
conceived. What were you like
dating? Was that your opening line?
No, but there was a day that might have been a thing.
Yeah, I'm a test tube baby.
It was like, wow, tell me about it.
That's a really good point.
Right, right.
And back then, people say, are we playing God by fertilizing eggs in a test tube?
And now it's like, of course you're doing it.
This is the fertility aid that goes on every day for so many couples.
So I bet that that would be a line
that existed back then that we cross
and now you'd roll it back
because we're all just accustomed to it.
Would you agree?
Actually, we just ran a conference
on the ethics of donor conception
two weeks ago at NYU.
And there were all these donor conceived individuals
and they were saying that they shouldn't have been born.
Should not have been born.
Should not have been born.
Why?
Yeah, they were, because they feel that,
like, they don't know who their genetic parents are.
They feel very isolated from, you know,
there's just a lot of psychological trauma.
Well, this idea of God, I mean, if you're an atheist, right?
I was curious about this.
Where does religion creep into this, right?
Oh, good point.
So people start to go, well...
Because ethics panels typically have a pastor or somebody
that brings a religious philosophy to the argument.
And if religion is not a part of my life on any level,
why am I leaving to some ephemeral game?
That explains everything about you.
I'm soulless, everybody.
You heathen.
That's my tour, the soulless stand of poverty.
You're going to hell.
But religion, sorry.
How does religion fold into this?
Religious ethics, I guess.
Yeah, so some people look at ethics from a religious standpoint.
So there's like divine command theory.
What would God do or what would God command in certain situations?
So they would look at these issues from that angle. There's like divine command theory. What would God do or what would God command in certain situations?
So they would look at these issues from that angle.
Speaking for God on the assumption that they understand the mind of God for having read books that they presume God wrote.
Right.
Just want to clarify.
Yeah.
Well, there's a view.
There's the natural law view that what God would want
is what our best reasoning,
whatever we come up with our best reasoning.
At the time.
Yeah, at the time.
And so that's sort of a natural law type view.
And you alluded to this, bringing a perfect person into the world, right?
This idea of bioethics and whatever.
But then you look at the world we live in, right?
We're obsessed.
Okay, we're going to make genetically enhanced corn so we have better nutrition so that we're in better
shape to kill each other.
Yeah, yeah, yeah.
It's sort of like, I just feel like, do we need a gene
for rational thought? Let's work on that one. Okay, get your people to... Yeah, yeah. Let's trademark it. Yeah.
Well, there are like a lot of people talking about moral enhancement.
Like, you know,
can we enhance ourselves morally
so that we're less aggressive
and more sympathetic
and empathetic
to the plights of others,
et cetera, et cetera.
I say screw other people.
Next question.
Go, yeah, yeah, go for it.
We are going to go to
Dixon Clinton, Instagram.
Combining CRISPR and ever-advancing AI will be the downfall of humankind, right?
How many years do I have before I'm being murdered by cyborg overlords?
Okay.
Wow, you got to stop going to the movies.
Yeah, so when do we all die?
Yeah.
Oh, well, we're all going to die.
Okay, good.
But yeah, so that's the question there.
So, you know, some people like Ray Kurzweil
thinks that, you know, by 2050,
we'll have super intelligence.
Other scientists, AI scientists.
Ray Kurzweil, we have interviewed him on StarTalk
in a live taping.
Yeah, go on.
Yeah, and so, and other people say it's not so,
you know, like they're less optimistic,
but they think that maybe by 2100 we'll have super intelligence.
And so there's a real life issue.
What happens when you have these really smart AIs that are smarter than us?
We become their slaves.
We become their slaves, if we're lucky.
Maybe we'll just become their pets.
Or maybe we'll go out of existence.
I can see you sniffing your butt.
Maybe I went too far there.
I'm sorry.
All right.
This is Chris Cherry, Instagram.
Hi, Chris, from the Sunshine Coast.
Australia.
Not Austria.
Australia.
Australia.
Should we fear DNA samples being required
by health insurance companies and employers?
Potentially, you could be discriminated against
because of something you have no control over.
Yeah, Chris, it's called race and ethnicity.
It's happening every day.
Yeah, absolutely.
You alluded to this about the insurance company.
Yeah, no, absolutely.
I think that's a real worry that as more
and more of our information is
available through genetic testing,
et cetera, et cetera,
companies might use that in
inappropriate ways or unethical ways.
So an ethics board would say, no,
insurance companies will not have access
to your DNA. That's right. Or
maybe a society, maybe that's something
that's beyond the like
ethics board. So don't leave a coffee cup that you sip from in the insurance office. Yeah.
They might take a swab. Swab it and send it in. Just show up completely... Swab your spit.
All right, we gotta go to lightning round. Okay, ready? So you're gonna ask a question,
and Matthew, you have to
answer it in a soundbite. Okay. Pretend you're on the evening news and they're only going to
soundbite you. Okay. Okay, go. Okay, this is Justin Vilden from Instagram. What's your opinion
on ethics of manipulation slash creation of AI in general? Could we manipulate with it so far to come close to something resembling our own consciousness?
Not yet.
When?
It's hard to say.
So I don't think we
have figured out what consciousness is
or, you know, sort of the biological
substrates of consciousness to be able to
do that yet. None of the
machine learning technologies right now can do that.
The day we understand consciousness, how soon after that
do we program that into computers?
The next day.
Okay, next question.
This is Dejaniro
Instagram. Do you think
AI and humans will be integrated
or DNA editing
can be used to create super humans
like we see in X-Men?
I like that because if you can edit the DNA,
what do you need the computer for?
That's the question, right?
So the computers might be faster, right?
So they have more bandwidth.
So the brain is sort of very slow.
It thinks very slowly.
So you can imagine that once you can kind of augment
through some sort of brain-computer interface,
it gives you a vast amount of storage, space, capacity,
upgrade capability.
And perfect memory.
It's none of this arguing about what happened.
Exactly. I said this.
No, I didn't forget to buy the milk.
Let's go to the videotape.
This is a theme on many episodes of
Black Mirror, by the way.
You should check it out on Netflix.
Okay, next. Galaxy Star
Girl Xbox Instagram.
Do you think...
Whoa!
Excellent.
I love it.
Okay.
And there's an underscore, but I left that out.
Do you think the future of AI in society will bring about the less need for doctors?
I believe doctors will still be needed, just in fewer numbers.
Yeah, I'm not sure about the numbers, but, you know, we're going to have wearables
that are going to be able
to track our heartbeats.
Our toilets are going to be able to,
you know, kind of, you know,
analyze our stool, you know,
and sort of tell us
whether we're healthy or not.
And then that's going to be
sent to doctors.
Do you want your toilet
talking about your poop?
That's what he just said.
If my toilet could talk,
it would throw up.
But I think it's coming.
Smart toilets are coming.
So that's sort of the...
That's the next business,
you know.
Next.
This is from
Kristen Versailles
Instagram. I would like to know, what are the considerations to judge something as, quote, good or bad, in the aspects of modifying an organism genetically? Humans, for instance.
So I have this view that humans need some basic capacities, things like the ability to be able to think,
to have deep personal relationships,
and things like that.
And so I think that whatever we do with genetic modification,
we shouldn't interfere with those core human capacities.
And the flip side of that is
if an embryo, like an offspring,
doesn't have those capacities,
then we should try to make sure that they have those. In whatever genetic way.
Whatever genetic way. And beyond
that, it's just luxury items.
That's right. Off of a shopping list. That's right.
Exactly. Next.
Okay. Got time for like one
more. Here we go.
That'd be a good one, dude.
Wow, there's a lot of pressure here.
Okay, this is
Dagan Pleak, Instagram.
Will we attempt to splice
human DNA with other animal DNA
to make mutants of a sort?
With this conflict with our ethics,
what are your thoughts on creating new
humanoid species? It's called a centaur,
isn't it? Or a minotaur.
Yeah, yeah.
Yeah, that's a great question.
So it relates to what I just said earlier.
I think as long as we don't affect
those core fundamental capacities,
sometimes we might look into
these type of augmentations,
these combining different genes.
What animal would you want to splice with a human?
I can tell you.
Would you want to be?
Let me guess, a dog so you can sniff.
In my concluding remarks,
I will tell you.
Oh.
Yes.
So we've got time
for just some reflective thoughts.
So, Paul,
why don't you go first?
I just think
all of these questions
that you deal with,
it's endlessly fascinating
and on some level
open-ended, right?
You seem to have
the most subjective
sort of job in a way.
Plus you're like the calmest person I've ever met.
Which means you're up to no good.
Yeah, he's hiding something.
When you're that guy, you know something that we don't.
And we didn't get much into this,
but I know you've done a lot of work with manipulation of memory
for PTSD, rape victims, et cetera, and erasing thought.
Is that making advances?
Was that part of your TED Talk?
Is that right?
Yes, yes.
And can I have it in September?
Because I'm going to a reunion in high school
and I want to wipe out the memory of asking Renee Sherlock to the prom
and getting turned down twice.
I want to wipe out her memory and mine.
And both memories.
Oh, yeah, yeah.
Take some propranolol with you.
I knew you were a drug addict.
He's got it.
He's got the drugs.
Is that fairly far along?
That's pretty far along,
but unfortunately,
you've got to take it within 12 hours
of asking someone to a prom.
So, you know.
It erases your short-term memory.
That's right.
It stops it from consolidating
into the long-term memory.
There's another thing, something called Zip.
So there's this idea that...
I'm not consuming anything called Zip.
Zip erases everything.
Really?
Yeah. Zip erases everything.
Wow. I'll see you after the show.
So, Matthew, give us some reflective concluding remarks here.
So, I think a lot of these new technologies are on the horizon.
I think they have a lot of promise,
but we should be mindful of their ethical implications.
And I think they can help.
Further keeping you employed.
That's right.
So that it keeps me employed.
That's hilarious.
He keeps raising issues that aren't issues.
No, that's an issue.
It's not an issue yet.
It's an issue.
My kid's going to college next year.
It's an issue.
And I think ultimately our aim is to sort of create human well-being,
human flourishing.
And so we want to make sure that these technologies do that.
So here's what I think.
Not that anyone asked.
Wait, Neil, what do you think?
Thank you, Paul.
The fact that you can crossbreed the genetics of different species at all.
We do this often in the food chain.
It's a reminder that all life has some common DNA.
So we should not be surprised
that you can take a fish DNA and put it in a tomato.
It's just a reminder that we're all related genetically.
So what I think to myself is,
the human form is not some perfect
example of
life.
I like the fact that
newts can regenerate their limbs
where's the gene
sequence for that? let's put that
in humans and give it first to veterans
who have lost their legs or arms
and regrow our limbs
if a newt can do it, and we have
genetic editing, why can't we do it?
And why haven't we?
Well, maybe that's to come. But I look at
what is possible in the
experimentation of the biodiversity
that is life on Earth and say,
why can't we have some of that?
Exactly.
And that is a thought from the cosmic
perspective.
I want to thank Matthew Liao a second time on StarTalk. We will bring you back for sure.
Oh, thank you.
All right, and, um, have fun, but, you know, work hard. Make a better world for us, or
help us make a better world for ourselves. Keep creating issues that aren't really issues.
By the way, best mind eraser? Better than
vodka. Right.
It's already been invented. Takes out those
cells right there. Paul, always good to have
you. Thank you. All right. I've been
and will continue to be Neil deGrasse
Tyson, your personal astrophysicist,
coming to you from my
office at the Hayden Planetarium of the
American Museum of Natural History.
And as always, I bid you to keep looking up.