Daniel and Kelly’s Extraordinary Universe - What Does Artificial Intelligence Mean?
Episode Date: February 7, 2019. How do we define Artificial Intelligence? Should we be worried that AI may one day take over humanity? Learn more about your ad-choices at https://www.iheartpodcastnetwork.com. See omnystudio.com/listener for privacy information.
Transcript
This is an iHeart podcast.
Have you ever wished for a change but weren't sure how to make it?
Maybe you felt stuck in a job, a place, or even a relationship.
I'm Emily Tish Sussman, and on She Pivots, I dive into the inspiring pivots of women who have taken big leaps in their lives and careers.
I'm Gretchen Whitmer. Jodie Sweetin.
Monica Patton.
Elaine Welteroth.
Learn how to get comfortable pivoting because your life is going to be full of them.
Listen to these women and more on She Pivots, now on the IHeart Radio app,
Apple Podcasts or wherever you get your podcasts.
How serious is youth vaping?
Irreversible lung damage serious.
One in 10 kids vape serious, which warrants a serious conversation from a serious parental
figure like yourself.
Not the seriously know-it-all sports dad or the seriously smart podcaster.
It requires a serious conversation that is best had by you.
No, seriously.
The best person to talk to your child about vaping is you.
To start the conversation, visit TalkAboutVaping.org.
Brought to you by the American Lung Association and the Ad Council.
The U.S. Open is here and on my podcast, Good Game with Sarah Spain.
I'm breaking down the players, the predictions, the pressure,
and of course the Honey Deuces, the signature cocktail of the U.S. Open.
The U.S. Open has gotten to be a very wonderfully experiential sporting event.
To hear this and more, listen to Good Game with Sarah Spain,
an IHeart women's sports production in partnership with Deep Blue Sports and entertainment
on the IHeart Radio app, Apple Podcasts, or wherever you get your podcasts.
Brought to you by Novartis, founding partner of IHeart Women's Sports Network.
Alexa, what's the best science podcast on air?
Hey, are you trying to replace me with Alexa?
What's going on here?
Do you think you're replaceable?
There's no way an artificial intelligence could ever make jokes nearly as funny as I am.
I think there's no way
an artificial intelligence
would laugh at your jokes.
I'm pretty sure I could program
a pretty dumb computer
to laugh at my jokes.
It's called the laugh track.
But hey, that's a new challenge
for AI.
First chess,
then go,
now science comedy.
That's right.
Now program something
that can find humor
in Daniel's ramblings.
Hi, I'm Jorge.
And I'm Daniel. And this is our podcast, Daniel and Jorge Explain the Universe.
In which we try to download everything we know about the universe, episode by episode, into your brain, whether you're a real person or an artificial intelligence listening to our podcast.
While trying to sound intelligent about it.
While writing your own humor for the open mic AI night.
The topic of today's podcast is
What is artificial intelligence?
And very importantly, is it dangerous?
That's right.
Should you be looking out your window for the first signs of the robot revolution?
Should you be afraid of your Alexa?
Should you be worried about that robot vacuum cleaner getting resentful
for having to do all the dirty work, and eating your face off in the middle of the night?
Oh, geez.
That's a bit dark.
It seems kind of sinister, doesn't it?
It's like sitting there, circling, circling, circling, waiting, waiting, waiting.
I think those things are creepy.
Right.
Maybe it wants to, you know, clean your face.
It wants.
See, that's the question.
Does a robot vacuum cleaner want anything?
What does it mean for it to want?
What is it like to be a robot vacuum cleaner?
The next great paper in philosophy.
So this is kind of in the zeitgeist right now.
I mean, people are really excited about artificial intelligence.
But at the same time, there are big names like Elon Musk
kind of warning people like, hey, artificial intelligence, not such a good idea.
That's right.
It's a huge topic.
I mean, you drive around like San Francisco, you see artificial intelligence, machine learning, deep learning.
It's on billboards even, you know?
You want to get a million bucks for your new company?
You just say the words, AI, deep learning, and boom, people are throwing cash at you.
Deeper learning.
Deeper learning.
The deepest learning.
It's definitely part of the cultural moment.
And you see that reflected not just in like what deep thinkers are saying,
but also in like science fiction.
You know, a lot of the near-term dystopian fiction these days
is about how AI will take over, and the dangers of AI.
Right.
In a way, like 30 years ago it was about the dangers of radiation, right?
That was the new dangerous thing physicists had invented.
Now the new dangerous technology that we're all worried about is AI.
It's the new promise and peril.
But yeah, AI, every piece of technology is a double-edged sword, right?
You can use it for good, you can use it for evil.
But AI is special because it's not just technology.
It's not just a tool that people use.
It's a tool that has independence, that has autonomy.
And that's why it's such a vexing question.
Well, people, I'm sure everyone associates it with robots and machines and computers,
but we were kind of wondering if people actually knew what artificial intelligence was.
Like what makes it work, what makes it different than real intelligence?
I bet you that 95% of the people who say the phrase artificial intelligence don't actually know what they're talking about,
which is probably true for most topics inside technology.
It's true for me, probably, I'm sure.
But we're wondering if you guys out there knew what artificial intelligence was.
And so as usual, Daniel went out and asked people in the street.
And here's what they had to say.
It's the idea that we can create some type of material thing that could think on its own, ultimately.
And do you think it's something we should be concerned about?
Is it ever going to be a threat to humanity?
I mean, possibly, but I mean, we don't know everything.
We can't know the bounds of what could be a threat, what could not be a threat.
Yeah, it's AI, and it's this stuff that's used in various technological applications,
basically just kind of like trying to make machines replicate certain aspects of human intelligence, stuff like that.
Okay.
And do you think it could ever be a threat to humanity?
Is it something we should be worried about?
I guess since I don't have a particularly strong opinion on it, I don't think so.
So I guess I'll say no for now.
I'm assuming that's the idea that computers or electronics can have, like, sentience.
Right.
Are you worried that computers would one day take over and make us their slaves?
Not really.
I don't think it will come to that point.
All right.
Those are pretty sophisticated answers.
I like the ones that said, oh, artificial intelligence.
That's just AI, right?
Like, that's an answer.
That's right.
That's an answer to every question, you know.
What is Google Blex Zabbybrom?
Oh, that's just GBZ, right?
Yeah.
Yeah.
Acronyms.
Acronyms can make you look intelligent.
That's the real artificial intelligence.
Just speaking acronyms.
Acronym intelligence.
No, but people had some sense that it's, you know, something that can think for itself,
or something that can do something for you, or creating something that can think by itself.
There's definitely a sense.
The nugget of the idea is definitely out there.
They use it in relation to what it can do.
That's right, yeah, exactly.
What's the new capability that defines it?
Right, yeah.
And it's a fascinating way to think about it, you know,
and it's definitely a tricky question.
Right, because I guess we know it in the context of using it for things, right?
Like, people don't just create AI because we want to create artificial beings,
it's like, so it can help us.
I want to create artificial beings.
What's wrong with that?
That sounds pretty awesome.
Create a whole army of artificial physics grad students.
That sounds pretty cool.
Do your kids know?
Yeah, you mean, are they worried about competing with my digital children?
Yeah.
Do they know you'd rather have artificial children?
I didn't say I'd rather have artificial children.
I said in addition to my beautiful, wonderful natural children,
which I should not be talking about on this podcast.
I'd love to have a whole, you know, cadre of artificial children to do my bidding.
Unlike your real children, who won't do your bidding?
That's right, somebody's listening.
And that sort of goes to the heart of the question, you know.
If you created a digital being with artificial intelligence, would it listen to you or would it make its own decisions, right?
And so that's what we thought it would be interesting to dig into, like, what is artificial intelligence?
If it just did what you told it to do, maybe it wouldn't be an artificial intelligence.
You're saying nobody smart should listen to you? Is that what you're saying?
I'm saying they should decide for themselves whether I'm worth following.
Yeah.
So let's break it down for people.
Daniel, what is artificial intelligence?
Well, you should listen to this podcast, and that'll give you the answer.
Done.
Well, you know, I think to understand what artificial intelligence is,
we should think for a moment about what do we mean by intelligence, right?
Very simply, intelligence is just the ability to learn:
to find patterns, and to extrapolate from them.
Really?
That's how you...
But, like, a dog can learn, but you wouldn't say a dog is intelligent, would you?
Absolutely, I would say a dog is intelligent.
You can teach a dog, you can train a dog.
It's more intelligent than a rock.
But would you say it's, like, a lot?
By a lot.
Oh my gosh. Have you, like, never interacted with a dog?
A dog is a living, sentient being.
It feels, it experiences, it definitely learns.
It can recognize you.
Yeah, I mean, dogs can do complicated things.
A dog is a perfect example. Let's take a dog.
But you know, I wouldn't trust it to do my taxes.
You know.
Well, I don't know. Compared to our tax accountant, it might do a pretty good job.
I mean, you could say that's an intelligent dog, but you wouldn't say, like, that's the epitome of intelligence.
I wouldn't say that dogs are the most intelligent beings in the universe,
but that's not what we're talking about.
We're talking about, do they have intelligence?
And dogs are a perfect example because they can learn.
You can train them.
And the cool thing about an intelligent being is that you can train it to do something
even if you don't know how to do it.
Say, for example, you want your dog to recognize you, right,
but tear the face off anybody who tries to break into the house, right?
Guard dog, okay?
So you can train a dog.
You reward it when it does the right thing,
you punish it when it does the wrong thing.
You don't know how to, like, build a being that does that, like, recognizes your face
and recognizes strangers' faces and makes these decisions.
That's a hard task, you know?
It's not easy to do.
But you can train a dog.
A dog can learn how to solve this problem.
And all you need to do to train it is to reward it and punish it.
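The reward-and-punish training Daniel describes for a guard dog has a direct software analogue. Here is a minimal sketch in Python: an agent tries actions, and rewards make a rewarded action more likely next time. The two actions and the reward scheme are invented purely for illustration.

```python
import random

# The reward/punish training described for a guard dog, as a tiny piece
# of software: an agent tries actions, and rewards make the rewarded
# action more likely next time. Actions and rewards are made up here.

random.seed(1)
preference = {"greet": 0.0, "bark": 0.0}  # the trainee's internal "knobs"

def choose():
    # Mostly pick the currently preferred action, sometimes explore.
    if random.random() < 0.1:
        return random.choice(list(preference))
    return max(preference, key=preference.get)

def give_feedback(action, amount):
    preference[action] += amount  # praise raises a knob, scolding lowers it

# Training: the "owner" rewards barking (at a stranger) and scolds greeting.
for _ in range(100):
    action = choose()
    give_feedback(action, 1.0 if action == "bark" else -1.0)

print(max(preference, key=preference.get))  # bark: the training stuck
```

The trainer never explains how to guard the house; it only rewards and punishes, and the agent's preferences shift on their own.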
So you're saying just the ability to sort of learn from your mistakes or learn from your surroundings,
that's what you would call intelligence.
Yeah.
And dogs have less of it than we do.
And more of it than cats and mice.
But they have some of it, for sure,
which is what makes them trainable.
And you know, I wonder sometimes,
because dogs can be trained, right?
Nobody ever trains their cat.
What does that say about a cat's intelligence?
I've always thought...
I mean, I love cats, but I've always thought
dogs are probably smarter than cats,
because you can train them.
Right, or maybe cats are more intelligent,
in that they don't allow themselves
to be trained by humans.
Right, and rocks, by that metric,
are the most intelligent,
because they completely ignore you, right?
You see the fallacy of that argument right there?
No, but I mean, maybe there's sort of like a hump, right?
Like, as you get more intelligent, you get more and more trainable,
but at some point you get so intelligent that you rebel against your masters.
And so how do you tell the difference between something that's totally unintelligent
and something that's so intelligent it completely ignores you?
Yeah, I don't know.
Deep question.
Jorge believes all the rocks are probably thinking about him.
Well, sure.
I mean, if you use the ability to listen to what I say as a benchmark of intelligence,
then, yeah, something super intelligent
could look just the same as a rock.
But obviously, a cat is still making decisions and acting and, you know, doing things.
So it's intelligent, but maybe it's much more intelligent than a dog because it chooses not to listen to us.
All right.
I think we need to have a whole other podcast on who's smarter, cats or dogs.
And before we do that, we will collect some data to answer this question.
But I think the question we're focusing on is: what is artificial intelligence?
So natural intelligence, just the ability of an animal to learn.
Artificial intelligence would be if something artificial that we create has that same property.
The ability to change the way it processes things in response to what it sees about the world.
Yeah, artificial intelligence is a very broad field with lots of elements that we couldn't cover in just one episode of a podcast.
But let's just talk today about one important subfield of AI, which is machine learning.
Or more specifically, I would say, let's focus on training, right?
Can you build something artificial that can be trained, right?
And I think let's talk for a moment about, you know, how normal computers work.
And then we can talk about how computers, smart computers, computers that can learn, computers with artificial intelligence, how they work.
I think you just want to keep talking about cats and dogs.
All right.
We'll talk about cats and dogs, but first, let's take a quick break.
I always had to be so good, no one could ignore me.
Carve my path with data and drive.
But some people only see who I am on paper.
The paper ceiling.
The limitations from degree screens to stereotypes that are holding back over 70 million STARs.
Workers skilled through alternative routes rather than a bachelor's degree.
It's time for skills to speak for themselves.
Find resources for breaking through barriers at tearthepaperceiling.org.
Brought to you by Opportunity at Work and the Ad Council.
Let's talk about what computers can do.
Yeah, because computers are smart, right?
You can program a computer to do smart things,
but that doesn't necessarily mean it has intelligence.
That's right.
There's a difference between a computer that can do something
and a computer that can learn something, right?
The way I think about non-intelligent computers
is the way you sort of think about machines, right?
You can tell them what to do, and they do exactly what you tell them, regardless of whether
it's the right thing. You don't give them like a goal and say, hey, I just want the house to be
clean, figure it out. You have to tell them exactly what to do. You say, step over here, move the
broom this way, step over there. And if it's not cleaning the house because it's stuck in a corner
or it's, you know, fallen on its butt or whatever, it doesn't care. It just does exactly
what you tell it to do. It has no sort of larger sense of what's important. It just follows
instructions, just follows
the recipe you gave it. That's right. It's like
a wind-up toy, you know?
You wind it up, you give it some energy, and then it goes.
And I really do think about computer programs
the way you might think about little machines,
right? Because that's exactly what they are.
They just execute a set of instructions.
You know, it's just like a bunch of gears
clicking into place. And they can't
change the way they do that, and they do it regardless
of whether it's the right thing or whether it's effective
or whatever, it just goes.
Like your electric
toothbrush. You know, you switch it on,
and it just has a circuit that moves the bristles back and forth.
That's right.
And it doesn't know if it's brushing your teeth or just flailing around in midair, right?
It has no idea.
It doesn't care.
It doesn't think or feel, whatever.
It's just a machine, right?
Thank God it doesn't know.
It would tell you to brush your teeth more often.
You'd be like, oh, what a nightmare.
Eat less chocolate, Jorge.
I'm tired of this.
What is this gunk?
Yeah, exactly.
And so that's what a sort of a normal machine is.
That's what like a classical computer program is, right?
Think of it just the same way as you think of a physical machine.
Okay.
It's just doing what you, the programmer, told it to do.
That's right.
And it follows your instructions exactly.
Now, a computer that can learn is different, right?
A computer that has artificial intelligence is different in this really important way because you can train it, right?
And you can train it because we build these things to model the way that we work.
Right.
So, for example, an AI program starts out sort of like
a newborn baby: it can't do anything, right?
Say there's an AI program, for example, that's supposed to recognize you when you come in the
door, right?
Is this Jorge or is this not Jorge, right?
Because it should only open the door for Jorge and not open the door for not Jorge.
Okay, so when you create a new AI program, you would start out like just a newborn baby,
okay?
Like a blank slate, right?
Yeah, like a blank slate.
It would make random decisions, right?
You'd show it a face, and it would say, yes, it's Jorge.
And then you say, no, you were wrong, or yes, you were right.
And then you would reward it if it does well, if it gives the right answer,
and you would punish it if it doesn't, right?
You would tell it.
I mean, you don't actually punish it or reward it.
You just tell it, yes, you made the right call this time,
and no, you made the wrong call this other time.
But how is that different than the idea of calibrating something?
Do you know what I mean?
Like, is calibration the same as artificial intelligence?
Right.
Well, the difference is calibration is like: here, I have a
tool. I know how to solve the problem. I just have to adjust it so that it does exactly the right
thing here, right? But you have a strategy that it's executing. You know, it's like you have a drill
and you want it to drill fast or slow and, you know, you know how to solve the problem. You just,
you know, it has to spin and screw the thing in or whatever. It's just adjusting a knob.
Here, you don't know how to solve the problem. And so you've given it a very, very, very flexible
strategy on the inside. You've given it like, imagine something has like a thousand knobs.
If you twist all these knobs, you could get all sorts of crazy strategies.
So back to the example of recognizing Jorge or not, when you tell it it's done, it's given the wrong answer, then it adjusts those knobs.
It says, well, let me try to tweak my strategy for deciding, is this Jorge, and then we'll see how that goes.
I think that's a key difference.
It's the number of knobs, right?
Like a drill with a knob for velocity, I mean, that is sort of trainable, and you could set it up
to be adaptive, but it's just one knob. And so you wouldn't say it's intelligent.
It's not intelligent. The spectrum of things it can do is very, very small, right?
But whereas like something that recognizes a face, it needs to evaluate like a million pixels
in a photo, right? And so for you to tweak how it evaluates each of those pixels, it would
be really difficult. That's right. So imagine the machine here is a camera in the door
that takes a picture, and the picture's got a million pixels. And then it has to look at those pixels and
decide, is this Jorge or is this not Jorge? And so it does some calculation on that
picture, right? And that calculation has millions of knobs on it, right? How much do I weigh this
pixel? How much do I weigh adjacent pixels? Do I look for his nose? Do I look for his hair?
Do I look for the eyes, right? So it's got some very, very flexible thing inside of it that can do
almost anything. And when you first start out, it's just random. So it's making ridiculous,
terrible decisions. But the key, the thing that models the learning, right? You don't just need
artificial intelligence, you need artificial learning. The thing that models that learning is that
when it gets the wrong answer, it knows how to adjust those knobs so that next time it's more
correct. By itself, that's a key thing, is that it learns by itself. It doesn't need you
there sitting like, oh, you got this pixel wrong, you got that pixel wrong. I mean,
tweak, you know, tweak this one this way. It's really more like an autonomous, automatic learning.
That's right, because you don't know how to adjust it. If you knew how to adjust it, you would just
write that program, right? The key is
artificial intelligence is excellent when
you don't know how to solve the problem
but you can define the problem. You can say
this is a picture of Jorge and this is not.
Learn a way to tell the
difference, right? You give it a very
flexible strategy and then
you let it try out and
when it gives the wrong answer, you
let it adjust itself so that it gets closer
and closer to giving the right answer.
Eventually, these things will find
the right setting for those millions of knobs
so that it's doing the right thing.
It's saying, oh, look, this picture is a picture of Jorge,
and it gets the right answer 99% of the time.
And when you give it a picture that's not a picture of Jorge,
and you give it Daniel, it says, no, sorry, you're not getting in the house.
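The training loop the hosts describe (random knobs, guess, get told right or wrong, nudge the knobs) can be sketched in a few lines of Python. This is a perceptron-style toy, with a handful of made-up "pixels" standing in for the camera's million; a real face recognizer would use a deep network.

```python
import random

# A "million knobs" model in miniature: a linear classifier over pixel
# values whose weights start out random and get nudged whenever the
# answer is wrong. All data here is invented for illustration.

N_PIXELS = 4  # stand-in for the camera's million pixels

def predict(weights, bias, pixels):
    # One decision: "this is Jorge" if the weighted sum clears zero.
    score = sum(w * p for w, p in zip(weights, pixels)) + bias
    return score > 0

def train(examples, epochs=50, lr=0.1):
    # Start as a "newborn baby": random knob settings.
    weights = [random.uniform(-1, 1) for _ in range(N_PIXELS)]
    bias = 0.0
    for _ in range(epochs):
        for pixels, is_jorge in examples:
            guess = predict(weights, bias, pixels)
            error = (1 if is_jorge else 0) - (1 if guess else 0)
            if error:  # wrong answer: nudge every knob toward correct
                weights = [w + lr * error * p for w, p in zip(weights, pixels)]
                bias += lr * error
    return weights, bias

# Toy "photos": Jorge lights up the first pixels, strangers the last ones.
examples = [
    ([0.9, 0.8, 0.1, 0.0], True),
    ([0.8, 0.9, 0.2, 0.1], True),
    ([0.1, 0.0, 0.9, 0.8], False),
    ([0.2, 0.1, 0.8, 0.9], False),
]
random.seed(0)
weights, bias = train(examples)
print(predict(weights, bias, [0.9, 0.8, 0.1, 0.0]))  # True: it learned Jorge
```

Note that the programmer never writes down how to tell the faces apart; the final weight settings come out of the training, not the code.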
Right.
And I think a key thing is also that you, as a programmer,
could not have predicted what all those knobs are going to be at the end, right?
Like, it's such a big problem.
There's a million knobs.
There's no way that you can predict what those knobs are going to be set to when it learns my face.
That's right.
It's perfect for really hard problems where
we don't know how to solve it, right?
We know how to describe the problem, but we don't know how to solve it.
You're right.
So if I already knew how to solve it, I could write a computer program and tell it, just like, use this pixel, use that pixel, use this pixel.
But I don't know how to solve that problem.
It's really hard, right?
But I can train a computer to figure it out.
Just the same way, I can train a dog, right?
A dog can learn my face, right?
A dog recognizes its owner and happily licks their face when they come home,
and recognizes when somebody's not its owner,
and barks like crazy and chews their face off, right?
Remind me not to visit your house, Daniel.
Seems a little dangerous.
So then a big thing is programming
a structure in the
software that is kind of open-ended
and malleable.
Do you know what I mean? Like something that is
kind of unpredictable in a way
that can learn. That's right. And that's a key thing
is that some people might be thinking,
well, hold on, you said that computers just do what you tell them.
So how can a computer learn, right?
How is that possible? It's an emergent property, right?
Like the way that you write a computer program that can learn
is you build all these little calculating bits with knobs on them, right?
And each bit just does what it's told.
It takes some data, it makes a decision based on the value of the knob,
and it sends out some data.
And together, all these things make a decision, right?
Each individual piece has no idea what it's doing.
It's not smart or intelligent or making its own decisions.
It doesn't have free will, right?
But together, they're doing something.
And as you said earlier, they can change the way they behave.
They can adjust these knobs themselves to improve their performance.
That's where the learning comes from.
It's from that training.
It gets external input and changes its behavior based on that external input.
You're saying that the way to program these AIs is by connecting a bunch of little simple things together to get something complex.
Yes.
And here it's important to remember that we are using neural networks as sort of a stand-in to represent a big, broad set of strategies that are part of machine learning.
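That "simple bits wired together" idea can be made concrete with a tiny hand-wired network of threshold units. Each unit just weighs its inputs and compares to a threshold, yet together they compute XOR, something no single unit can do. The knob settings here are hand-picked for illustration, not learned.

```python
# Each "calculating bit" is dead simple: weigh its inputs, compare to a
# threshold, pass the result on. None of them is smart on its own, but
# wired together they compute XOR -- with hand-picked (not learned) knobs.

def unit(weights, threshold, inputs):
    # One simple unit: fire (1) if the weighted sum clears the threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def network(x1, x2):
    # Hidden layer: one unit fires on "either input on", one on "both on".
    either = unit([1, 1], 1, [x1, x2])
    both = unit([1, 1], 2, [x1, x2])
    # Output unit: fire on "either but not both" -- that's XOR.
    return unit([1, -1], 1, [either, both])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, network(a, b))
```

In a real machine-learning system those fixed numbers would be the trainable knobs, adjusted automatically from examples.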
Right.
And you don't know how to set them, how to put them together to get the right complex behavior.
You just put them together and then you train it, right?
You say, well, I have something that's dumb like a newborn baby and I teach it how to do the thing that I want.
But this really all sort of came about from brain research, right?
Like people were studying the brain and they figured out that our brains are made up of all these little simple units:
neurons. That's right. And each neuron is pretty simple, right? Like it just takes a couple of inputs
and then it just outputs one signal. That's the really fascinating deep part about it, right?
Is that the structures we use in computers are modeled after what's actually happening in real
brains. And as you say, inside your brain are a bunch of neurons, right? And these neurons take in some
input, and then if the input is above a certain threshold, they send out some output,
which is the input to the next neuron. Right. And your brain is basically just a big web
of these things.
Yeah, that's right.
That's the key is that these neurons, they're simple, but they're all sort of connected
to each other.
So it's a huge complex web going on inside your head.
And when you're learning, what you're doing is you're kind of like shaping that web.
You're saying some connections, these connections are important for recognizing Jorge,
these connections are important when it's not Jorge, that kind of thing.
That's right.
Your neurons can change.
They have like basically knobs on them.
I mean, not physical, literal knobs, but they can
adjust. And so if you feel pain, you know, or you have an experience, then that
changes the way your neurons work. And it changes a little bit who you are and how you react
to things. And that's why, you know, newborn babies, when they're born, they're not very
responsive to stimuli because they're just still figuring it out. You know, a newborn baby
doesn't even know, like, this is my arm and how to control it. It has to learn all of
these things by being trained, by having experiences, you know? It has the neurons.
and the neurons are connected to each other,
but it has to figure out how to use those connections.
That's right.
It has to be trained to be useful
and to interact with the world
in any sort of meaningful way, right?
And so that's exactly the same sense.
And it's fascinating that if you build a mathematical system
(that's what a computer program is, basically: a mathematical model
of the processes that are happening in your brain),
it performs in a very similar way,
and it does this amazing thing,
which is it adjusts itself to improve its performance
on the task you've given it.
Right. So it really is like a model of learning. And when people saw this, they said, wow. I mean, you
look inside the brain, you're wondering, like, how does thinking work? Where's the soul, right? Where am I?
You look inside the brain, all you see is all these weird neurons connected to each other, and you think, how could
that possibly describe me? But then you build a model of it in a computer, and it can do the things that
you can do, which is learn and develop and react and be trained. Yeah, and make bad jokes.
Not yet. We have not yet solved the bad joke problem, right? Humans are still world champions in terms of bad jokes.
We can still beat them at something. That's right. And you know, this is very useful, because you
want the systems around you to learn and to react, you know? Like if your phone, for example,
knows, hey, every time you open your phone, you start with Twitter, right? And so Twitter goes up
there on the most-used app list, right? And that's not a very complex artificial intelligence,
but it is one, and these sorts of things are very helpful.
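The most-used-app list is the same loop in miniature: observe behavior, update internal state, change what you show next. A minimal sketch in Python, with app names made up for illustration:

```python
from collections import Counter

# The "most used apps" list in miniature: count every app open, then
# rank apps by how often they were opened. App names are invented.

opens = ["twitter", "mail", "twitter", "maps", "twitter", "mail"]
usage = Counter(opens)
most_used = [app for app, _ in usage.most_common()]
print(most_used)  # ['twitter', 'mail', 'maps']: twitter is opened most
```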
Let's take a quick break.
Smokey the Bear.
And you know why Smokey tells you when he sees you passing through.
Remember, please be careful.
It's the least that you can do.
After 80 years of learning his wildfire prevention tips,
Smokey Bear lives within us all. Learn more at smokeybear.com. And remember, only you can prevent wildfires.
Brought to you by the USDA Forest Service, your state forester, and the Ad Council.
So that's kind of what makes AI: A, it can tackle complex problems that we don't even know how to program something to do, and B, it changes and adapts; it doesn't just get better, it also kind of adapts to the person using it.
That's exactly right, exactly right. And so, for example, sometimes, you know, Netflix uses AI. It says,
What program will you want to watch next?
Well, you know, that's an AI.
It's been trained.
They feed it a bunch of examples.
They say, Bob watched these five shows, and then he watched this sixth show.
But they gave the AI just the first five, and they asked it, predict what show he will watch next.
And then they see if it does a good job.
And if it does a good job, you know, it gets a reward.
If it does a bad job, it tunes its knobs to do better.
Then, when you're sitting there watching five hours of Netflix, it can do a pretty good job
predicting what you're going to watch next because it's been trained on a lot of data.
This is why people are always talking about big data, big data.
These companies are gathering data about you so they can train their AIs to learn your
behavior and predict it.
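The training loop Daniel describes — show the model a user's history, ask it to predict the next item, reward it when it's right and turn its knobs when it's wrong — can be sketched in a few lines of Python. Every show name, weight, and update rule below is invented purely for illustration:

```python
# Toy sketch of the training loop: show the model each user's earlier
# shows, ask it to predict the next one, and nudge its internal
# weights ("knobs") based on whether it did a good or bad job.
from collections import defaultdict

def train(histories, epochs=5, lr=1.0):
    """Learn weights for 'after watching A, people watch B' transitions."""
    weights = defaultdict(float)
    for _ in range(epochs):
        for history in histories:
            for prev, actual_next in zip(history, history[1:]):
                guess = predict(weights, prev)
                if guess != actual_next:
                    # bad job: turn the knobs toward the right answer
                    weights[(prev, actual_next)] += lr
                else:
                    # good job: small reward reinforcing the current setting
                    weights[(prev, actual_next)] += lr * 0.1
    return weights

def predict(weights, last_show):
    """Best current guess for what gets watched after last_show."""
    candidates = {b: w for (a, b), w in weights.items() if a == last_show}
    if not candidates:
        return None
    return max(sorted(candidates), key=candidates.get)

# Made-up viewing histories standing in for "big data" about users
histories = [
    ["nova", "cosmos", "mythbusters"],
    ["nova", "cosmos", "planet_earth"],
    ["nova", "cosmos", "mythbusters"],
]
model = train(histories)
print(predict(model, "cosmos"))  # most users watched mythbusters next
```

The more histories you feed it, the better its guesses get, which is exactly why companies gather so much data.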
Except the problem is me and my spouse, we share the same account and the same login.
That's right.
So it's learning some weird combination of your brain and your wife's brain.
I have a very confused Netflix account.
Or maybe it understands your marriage better than you.
Maybe it's trying to tell us something.
It's like, you guys have to watch something together.
Every time his wife is out of town, he watches these shows, and when she's in town, he has to watch these other shows.
Uh oh.
All right, I know, break it down for us: how long before the AIs take over the world?
Not very long, actually.
But you were asking a different question earlier,
which is, is AI dangerous?
And I think that has, there's two different questions there, right?
I mean, people are concerned.
Some people are concerned.
Yeah, I think people are concerned, and there's good reason to be concerned.
Right.
One question is, will AI develop its own autonomy and, you know, take over?
That's a different question from, are they dangerous?
Because, you know, they could take over and then take better care of the planet than we have,
in which case, you know, they're not dangerous.
They're benevolent dictators.
Oh, I see.
I think the real question is, will they take over?
Will they become autonomous?
It's when we lose control of them somehow
and could they become smarter than us?
Oh, I see. It's two issues.
One, could it develop a consciousness on its own?
And two, is that consciousness good or bad for us?
And it's an important question
because as soon as you identify learning with consciousness,
then you wonder about that.
And this connection between the structure of AI
and the structure of our brain raises that question.
If you created, for example,
an artificial Jorge in the computer,
if you built a set of neurons that mimicked your brain,
would that simulation be alive?
Would it be aware?
Would it think and would it have a first person experience, you know?
Right.
Or would it just burn all the circuits?
That's a deep philosophical question.
We'll never answer it, right?
And it's not really the important question.
The important question is, would we lose control of AI?
Well, because AI is something that can change and evolve, and it can handle complex tasks, the question is: can we lose control of it?
And I think the answer to that one,
definitely yes.
We can lose control, meaning we'll give it control and then not be able to take it back.
Yes, exactly.
Because the way AI is moving is that it can handle more and more complex tasks so that you
don't have to be super specific about what it's doing.
We have amazing natural language processing now.
You can say sort of vague things to your phone like, hey, set me up an appointment for
tomorrow afternoon, right?
And it'll understand because it understands what your intent was.
It has to judge your intent and then execute it.
It used to be you had to go into your computer
and you have to press the keys
to create that in your calendar.
Now you can sort of talk to your phone
and it'll interpret what you want and it'll do that.
And that's awesome.
That's wonderful for human computer interactions
that we can use our language to talk to them.
We don't have to write computer code.
It's a huge step forward, right?
People can instruct machines using English
rather than Python or C++, right?
It's a big step forward.
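The "judge your intent, then execute it" step can be illustrated with a deliberately crude, rule-based stand-in. Real assistants use trained language models, and every intent name and keyword here is made up:

```python
# Crude intent-judging sketch: map a free-form request onto one of a
# few known intents by counting keyword overlaps. Purely illustrative.
import re

INTENTS = {
    "create_appointment": ["appointment", "meeting", "schedule"],
    "set_alarm": ["alarm", "wake"],
    "play_music": ["play", "music", "song"],
}

def judge_intent(utterance):
    # Split the utterance into lowercase words
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    # Score each intent by how many of its keywords appear
    scores = {intent: len(words & set(kws)) for intent, kws in INTENTS.items()}
    best = max(sorted(scores), key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(judge_intent("hey, set me up an appointment for tomorrow afternoon"))
# → create_appointment
```

A trained system replaces the keyword table with a model learned from examples, but the shape of the task — utterance in, intent out, action executed — is the same.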
I think that's kind of what people find scary
about AI is that you can't really predict what it's going to do. I mean, it's sort of comedy
gold when your kids are trying to talk to Alexa and ask it funny questions, but that's kind of
what's fascinating about it, right? Like you ask it questions, you give a task, and you really
sort of don't know what it's going to do. That's exactly right, because it's making higher and
higher level decisions, which makes it much more useful and much more intelligent. The same way
when your kid grows up, right? When your kid is four, you have to be very specific. You have to say
things out loud, which are ridiculous, right?
Like, don't put that finger in your nose.
You know, we're like, oh, that's been
on the floor, don't eat it, right? You have to be really specific.
When they're 10, you can say more
general things, and they'll understand.
They've learned. They're intelligent.
Like, don't put both fingers in your nose.
That's right. Only put one finger
in your nose at a time, please. Don't put your finger
in your sister's nose, right?
The same way, as machines
or artificial intelligence gets more intelligent,
you can give it vaguer instructions
and then it makes decisions based
on its training. Right. And you don't really know. Just like you can't really read what's in every neuron in another person, with an AI you don't know what's going to happen, what's going to come out. Exactly. So they're going to start making decisions based on,
you know, still what we tell them to do? But, you know, what if you told your AI, you're like,
hey, keep my kids safe, right? I mean, imagine some future where you have an AI robot, it's really
smart. And you say, hey, keep my kids safe. And you come home and it's like, lock them in the
basement, right? And like, well, okay, they're safe. But it's sort of a monkey's-paw situation, right?
Like, you got exactly what you asked for, but you didn't really elaborate in the right way, and it made different decisions, right?
So we sort of skipped the question of whether AIs can be, you know, achieve consciousness
and become its own kind of soul, have a soul.
It doesn't seem like you think that's a relevant question.
I think it's important because when AI gets to be super intelligent, it's going to seem
like it has a soul.
They're going to seem like people.
And people are going to wonder, like, do they have rights?
Can you kill an AI?
What, can you just delete it, you know?
That's going to be a really interesting question. But again, that's a whole question
of philosophy that we could easily spend an hour on. I think it's a much more practical
question, which is will we lose control of them? Whether or not they have first-person experiences
or they just seem to. It's important to think about whether we're going to lose control. And
there's two reasons why I think that we will. One is computers are getting faster really,
really quickly. Every year, computers get faster and faster and smarter and smarter. And the scale
is growing, right? So this thing is happening very quickly. But
We're not, right?
We're not getting smarter.
Right.
Human brain is not changing and evolving at a very rapid rate.
Computers are.
So they're catching up and the slope is steep, right?
You can just get bigger and bigger computers and team them together and parallelize them and you can just keep going, right?
So eventually they'll definitely have enormous computing power with capabilities to do things we can't even imagine.
And also being faster doesn't necessarily mean being smarter.
You also need, like, more data to train on.
And also being twice as fast doesn't mean being twice as smart.
It's not linear.
So you think that they will get more capable than us,
but do you think we will ever cede control of really important things to AIs?
Like, hey, here's the nuclear button.
Only fire it if it's necessary.
Exactly.
Let's talk about weapons.
Weapons is going to be what ends it because, you know, for example,
we already have drones, right?
And we have drones with missiles on them.
And these drones can kill people.
I mean, some pilot somewhere is flying it.
He's making a decision and he's going to shoot this missile to kill a person, right?
Right.
But, you know, the enemy has drones.
And pretty soon it's going to be drone on drone warfare, right?
Oh my goodness.
And the drones are going to shoot each other.
And at some point, somebody's going to put an AI in their drone.
Why?
Because an AI can make the decision about shooting much faster than a human can.
So which drone is going to win?
An AI will be a better fighter than a human fighter.
Yes.
And so eventually these AI will be making kill decisions, right?
Because the one that can make the decision faster is going to be the one that wins.
And so I don't think it's going to be very long before we have AI powered drones that are authorized to kill people, right?
This is a clear next step for the military, you know?
Like here, here's a picture of somebody we think is a terrorist.
If you spot them, just fire the missile.
Don't bother checking.
Yeah, don't bother checking with us, right?
Oh, boy.
That's a clear next step.
So now you have AI that have the authority to kill people.
And why?
Because they've been tasked to, you know, take care of us or protect us or whatever.
But only if you give it that permission, though, right?
I mean, that's a big ethical step to say, like, if you see them, shoot them.
Yeah, but I don't think that's a big ethical step for the military.
You know, the protocols for shooting somebody in the military.
I mean, I'm not an expert on military protocols, but, you know, our military kills a lot of people. A lot of civilians get killed, right?
And we decide it's okay.
A lot of innocent people
get killed for military purposes.
And so I don't think it's too far before AI is making that decision.
And then it's AI, it's weaponized AI,
our weaponized AI, versus their weaponized AI.
And then it's an arms race.
And then the most powerful army is going to be the one
that lets the AI make all of its decisions.
And the generals just say, defend us, right?
Or respond if we're attacked, right?
And then you basically hand over control of the weapons to the AI,
because the enemy has weaponized AI.
But that doesn't mean that they're controlling us.
I mean, we use them to protect us or to take away some decision-making for us.
But that doesn't mean that they're necessarily in control of us.
And let's make sure not to be too alarmist here, of course,
because people are working really hard to make sure that there are always ways for humans to override these systems.
Well, it'd be different.
That'd be like if a robot then turns the weapons inwards, that's another deal, I guess.
Yeah, and of course, AI researchers do their best to make sure.
that the AI systems are very well trained
so that they do exactly what we want them to do.
But they are complex and unpredictable,
just like people are, right?
So this is a very interesting topic,
whether AI is dangerous or not.
And I know, Daniel, that you are sort of an expert
in artificial intelligence because you use it
in your particle physics research, right?
You use machine learning.
That's right.
I wouldn't say I'm an expert.
I mean, I know something about it.
I've done some reading and I've used it,
but I'm certainly not a deep expert in artificial intelligence itself.
Right.
But you know some experts in your department, right?
Or in your campus?
That's right.
UCI has an amazing computer science department and experts in machine learning.
Some of the folks I actually collaborate with
when we're understanding the huge amounts of data from the Large Hadron Collider.
We train machines to sift through that data and look for the Higgs boson
and learn to recognize new kinds of particles.
It's really fun.
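Sifting collider events into "signal" and "background" is, at heart, a classification task. Here's a toy perceptron on two made-up features, standing in for the far richer features and models real LHC analyses use:

```python
# Toy "sift the data" sketch: a perceptron learns a boundary between
# made-up signal-like and background-like events described by two
# fake features. Illustrative only; nothing here reflects real physics.

def train_perceptron(events, labels, epochs=20, lr=0.1):
    """Classic perceptron update: nudge weights toward misclassified points."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(events, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def classify(model, event):
    (w, b), (x1, x2) = model, event
    return "signal" if w[0] * x1 + w[1] * x2 + b > 0 else "background"

# Fabricated training events: high-feature = signal, low-feature = background
events = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
labels = [1, 1, 0, 0]
model = train_perceptron(events, labels)
print(classify(model, (0.85, 0.85)))  # → signal
```

The machine ends up with a learned rule nobody typed in by hand, which is the whole point: it found the boundary from examples.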
And these guys know a lot about artificial intelligence more than I do.
so I went over there and I asked them
if they were worried about whether robots would take over the world.
And what did the robots say?
The robots had taken over the professors and they answered for me.
No, first here's Professor Pierre Baldi.
He's a distinguished professor on campus.
And here's what he had to say.
Potentially, yes.
All very powerful technologies, I think,
can pose such a threat
and it all depends on how they are deployed, how they are used, etc.
right? You can say that nuclear technology poses such a threat and continues to pose such a threat.
And I think AI, if used in the wrong way, could pose a threat to mankind. Yes. The potential is there,
and so we should be careful. Right. So that was Professor Baldi. And then I also went down the hall and asked another colleague; I thought I'd get more than one opinion. This is Professor Padhraic Smyth, also a professor of computer science
at UC Irvine.
I think the main threat with artificial intelligence going forward is not understanding how
the black boxes work.
And so I think not the typical sort of we're going to have robots taking over the
world, but more the use of AI in situations where we're extrapolating beyond what it can
do.
And so I think we need to understand the limits of AI.
I think that's a threat.
All right.
So the answer is yes.
Well, I think they're cautious, right?
Both of them think it's unpredictable.
We don't know what's going to happen.
We're creating a whole new kind of system, and we may lose control of parts of it.
On the other hand, is it likely for that to happen?
You know, a lot of people are working really hard to make sure that AI will be contained, and that in the end you could just pull the plug if the robot revolution starts.
And so it is unpredictable, but also, you know, the future is always going to be unpredictable.
Yeah, I thought it was interesting that he said it is dangerous, but not more so than any other powerful technology.
Yeah, that's a really interesting comment.
It's true that any technology you can create could be used for good or for evil, if it's powerful, right?
I mean, a wind-up toy maybe isn't as dangerous.
But I think that speaks to the power of AI: it really is maybe more powerful than we can handle.
Yeah, and it's powerful in a special way, right? Like, nuclear weapons
are powerful, right? But in the end, a human is making that decision. And so you're giving
humans a new kind of power, which is unpredictable. But here you're unleashing something,
right? You're creating AI, and it's making its own decisions. Of course, it's making decisions
based on what has been told to do, right? You have to give it instructions still. You have to
teach it. But you can't predict what these complex systems are going to do in new circumstances
and how they're going to interpret your instructions. Right. And of course, there are a lot of
AI-smart people out there working hard to make sure that there are boundaries and
safeties installed in all AI systems.
But, you know, I've seen Jurassic Park, you know, the lesson there.
They had fences.
We have fences.
But then Jeff Goldblum, you know, has a theory about chaos.
Yeah, exactly.
You know, these systems are hard to predict.
And so I think we should be worried, but then we should respond to that worry with appropriate
safeguards.
You know, we should take this seriously, but not be overly alarmed.
Right.
Well, the other point that the other professor made is also interesting: he's saying some of the danger is in the fact that it's kind of like a black box.
Like, we're trusting these things, but we don't really know what's going on inside.
Like, it's so complex that we can't predict what it's going to do.
We maybe can't even deconstruct how it makes decisions.
That's right.
And, you know, you train these systems.
They're very complicated, and you don't know how they're going to respond to new circumstances, right?
It's the same as, like, training your dog.
Like, do you know how your dog makes a decision about who to bark at and who not to?
You try to train it.
You try to give it instructions.
You try to make sure it knows how to handle its stuff in a new circumstance.
But you can't honestly know what it's going to do at any given moment.
Yeah, I'm definitely not visiting your house if you have dogs.
I think about AI.
And again, not an expert.
So maybe these are uninformed speculations.
But I think about AI sort of like digital children.
You know, like you raise your children, you know they're going to take over one day because, you know, you and I are going to get old and our kids are younger than we are.
So eventually they will take over and you don't know what they're going to do.
And you raise them, you try to raise them in a way that they have values, they make reasonable decisions.
And you can sort of think about AI the same way.
Like you try to create this new generation of technology that's going to make its own decisions, but you try to teach it to make good decisions so that when you're in a home, right, it's making good choices for you.
Right.
And I know that some folks out there think, well, you know, AI is never really going to be separate from humanity.
There's not this, like, cognitive separation.
Like, it can just be part of who you are, the way your iPhone feels like part of who you are.
But we don't necessarily know if that separation is going to be serious, you know, if these things really would be separate from us or if they'll always just feel like an extension of ourselves.
Well, until then, I think we should stick to regular dogs.
Regular dog dogs.
Yeah, but, you know, I think about it sometimes the way I think about children, right?
In the same way that you raise your children
and they're going to take over, right?
It's going to be some point when your children are in charge.
You raise them to have values and to make good decisions
and you hope that when they take over,
they're looking out for you.
In the same way, we've got to create these digital tools
and we've got to teach them to behave.
We've got to teach them what's important
and we've got to teach them how to be responsible
so that if they take over, you know,
we hope they treat us well.
Yeah, be good to Daddy.
Be good to your parents: don't put Daddy in a home, please don't bury me underground.
Well I personally am looking forward to a time
When I don't have to think as much
Life is a little bit easier
Because we have these things
Making things easier for us
It could handle a lot of the drudgery
And a lot of the logistics
Eventually you could have a car
that drives itself and obeys your instructions.
You could say, like, hey, go pick up my kids from school.
And it would know how to navigate and how to drive and recognize your children and how to get back home.
And that's totally within the realm of possibility in a few years, right?
And that's pretty awesome.
It'll offload a lot of work and logistics from beleaguered parents.
I think you and I are in a pretty good position career-wise.
You know, like I'm a cartoonist and you're a physicist.
These are not jobs that are going to be taken away by AI anytime soon, hopefully.
Have you not seen the AI cartoons?
They're pretty good, man.
Are they?
You should like start a podcast instead of relying on your cartooning.
Well, there is definitely that as a genre of humor, like, hey, I put so-and-so through an AI machine and look at the crazy thing it came out with.
I know, except those are all manufactured.
None of those are real.
Oh, none of those are real?
None of those are real, man.
Those are all made up.
Well, that's good for humorists.
So artificial intelligence is certainly a revolution in thinking.
and in computing, and it'll definitely change the world.
And so check back in in 10 years to see if we've been replaced by Robot Daniel and Robot Jorge.
Maybe we already are.
Bump, bum, bum.
So thanks everyone for listening to this episode of Daniel and Jorge Explain the Universe.
Yeah.
And to listen to more, just say, Alexa, what's the best science podcast in the world?
What's the third best science podcast in the world?
If you still have a question after listening to all these explanations,
please drop us a line; we'd love to hear from you.
You can find us on Facebook, Twitter, and Instagram at Daniel and Jorge, that's one word,
or email us at feedback at danielandjorge.com.