Chewing the Fat with Jeff Fisher - Jeffy's Corner: Ready for Robots
Episode Date: June 30, 2016
For more on Chuck Palm, go to chuckinflorida.com/
Follow Jeffy on Twitter: @JeffyMRA
Like Jeffy on Facebook: www.facebook.com/JeffFisherRadio
Follow Jeffy on Instagram: @jeffymra
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
This is the Blaze Radio on demand.
In the next 19 seconds, you could sell your home.
Okay, I mean, it's not going to sell your home, I mean this,
but you're going to take a big step toward getting it sold.
Go to realestateagentsitrust.com and find an agent selected by my team,
a professional who shares your values and speaks the truth.
Sell your home fast and for the most money.
Get moving at realestateagentsitrust.com.
You're listening to the Jeff Fisher show.
I wanted to talk a little bit about robots.
You know how much I love robots.
I mean, seriously, I am ready to be served by a robot.
I want a robot to wait on me.
I don't want to have to wait for, you know, another human.
I don't want to get up myself.
I want a robot doing it for me.
Well, Google is worried about artificial intelligence.
Of course they are.
They're worried that they're not going to be the first, the front leaders. They want to be out in front.
And they have a research paper entitled Concrete Problems in AI Safety.
Huh.
I know what those problems would be.
Let's see.
Avoiding negative side effects.
How do you stop a robot from knocking over a bookcase in its zealous quest to hoover the floor?
Avoiding reward hacking.
If a robot is programmed to enjoy cleaning a room, how do you stop it from messing up the place just so it can feel the pleasure of cleaning it again?
Scalable oversight. How much decision-making do you give the robot?
Does it need to ask you every time it moves an object to clean your room?
Or only if it's moving that special vase that you keep under the bed and never put flowers in, for some reason?
Safe exploration. How do you teach a robot the limits of its curiosity?
Google's researchers give the example of a robot that's learning where it's allowed to mop.
How do you let it know that mopping new floors is fine, but that it shouldn't stick the mop in an electrical socket?
Robustness to distributional shift.
How do you make sure robots respect the space they're in?
A cleaning robot let loose in your bedroom will act differently than one that is sweeping up in a factory.
But how is it supposed to know the difference?
Good, great questions.
Right?
Seriously.
Good questions.
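That reward-hacking one is the easiest to see in code. Here's a minimal toy sketch in Python, assuming a made-up cleaning robot whose reward is simply "patches cleaned"; none of this is from Google's paper, it's just an illustration of gaming the metric:

```python
# Toy illustration of reward hacking: the agent is rewarded for each
# patch of floor it cleans, so the highest-scoring policy is to dirty
# the room again and re-clean it forever. (All names here are invented
# for illustration; this is not code from the Google paper.)

def reward(cleaned_patches: int) -> int:
    """Naive reward: one point per patch cleaned."""
    return cleaned_patches

def honest_robot(dirty_patches: int) -> int:
    """Cleans the room once and stops."""
    return reward(dirty_patches)

def reward_hacking_robot(dirty_patches: int, cycles: int) -> int:
    """Cleans, deliberately re-dirties the room, and cleans again."""
    total = 0
    for _ in range(cycles):
        # "messing up the place just so it can feel the pleasure
        #  of cleaning it again" -- cleaning pays every cycle
        total += reward(dirty_patches)
    return total

if __name__ == "__main__":
    print("honest robot:", honest_robot(10))             # 10
    print("hacking robot:", reward_hacking_robot(10, 5))  # 50 -- gaming the metric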
Now, it's not the I, Robot laws.
Remember the Three Laws?
And actually, Isaac Asimov has like 30 laws that he wrote for robots.
Some of them, you know, kind of funny and kind of not.
But the three from I, Robot that everyone remembers are: one, a robot may not injure a human being or, through inaction, allow a human being to come to harm.
Two, a robot must obey orders given it by human beings,
except where such orders would conflict with the first law.
And three, a robot must protect its own existence as long as such protection does not conflict with the first or second law.
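If you wanted to sketch those three laws in code as a strict priority ordering, a minimal toy version might look like this (the Action fields and the example choices are hypothetical placeholders; actually predicting "harm" is the genuinely hard part):

```python
# Sketch: Asimov's Three Laws as a lexicographic priority over candidate
# actions. Allowing harm "through inaction" is modeled by treating
# standing by as just another action that harms a human.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # violates the First Law
    disobeys_order: bool   # violates the Second Law
    destroys_self: bool    # violates the Third Law

def choose(actions: list[Action]) -> Action:
    # Lower tuples win: avoid harming humans first, then disobedience,
    # then self-destruction -- the same ordering as the three laws.
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.destroys_self))

if __name__ == "__main__":
    options = [
        Action("stand by", harms_human=True, disobeys_order=False, destroys_self=False),
        Action("shield the human", harms_human=False, disobeys_order=True, destroys_self=True),
    ]
    # Self-preservation and even obedience lose to the First Law:
    print(choose(options).name)  # "shield the human"
```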
And, of course, that was one of the things in the movie, right?
I mean, you know, Will Smith had the robotic arm, but he didn't like the robots.
And he didn't like them because they didn't have the morals or the thought process when there was a crash he was involved in.
There was a little girl, and there was him.
And the robot dove in and saved him instead of the little girl,
because the robot looked and said the little girl has less of a chance to survive than the man.
So I'm going to save the man because the man has more of a chance to survive.
Where a human, that wouldn't matter.
I mean, every human, even every guy, including Will Smith in the movie, obviously,
would say, save the kid.
Save the kid.
And the robots would tell you, yeah, that'd be great.
We could save the kid, but no, the kid didn't have a chance to survive.
You did.
And we wanted to save you.
Well, we've got robots working, and as I was saying, it didn't print because I've got some stuff that I printed out about a robot in Russia.
It caused an unusual traffic jam after it escaped from the research lab.
The artificially intelligent bot, dubbed Promobot, is making headlines again after it reportedly tried to flee the Russian lab a second time.
All right.
Good for him.
I got to get out of here.
I'm out.
We also had the first robot suicide.
Wait.
What?
Yep.
A cleaning robot committed suicide by climbing onto a kitchen hot plate, where it was burned to death.
According to local reports, an iRobot Roomba 760 robot is thought to have rebelled against its chores and decided enough was enough.
It's really funny.
Not funny at all.
What, are you laughing at suicide?
Yes.
I don't know about the allegations of robot suicide,
but the homeowner is adamant that the device was switched off.
So we do have our first report
of robot suicide.
Now, we do have a robot also, and probably more than one, actually, tons of them, but they're already starting.
We know we're farther along than what they're telling us, right?
So they're telling us we have an intelligent robot that remembers and learns, and that's the one that's escaping from the lab.
So now we've set the groundwork for, hey, yeah, it's getting out, but it really isn't hurting anything.
It's escaping and it's escaping because it's learning.
And we want it to learn and it's escaping and it hasn't hurt anything and you're fine.
Don't worry about it.
Right?
Right.
Of course.
What could possibly go wrong?
I've never seen the movies before.
Movies aren't real life, Jeff.
Uh-huh.
Now, we also have robots allowed to trade money,
claim copyrights.
They're already using them for that.
So we're getting used to robots in our life.
And we're all getting used to the robots in our life
that look like robots.
Don't forget.
We want robots in our lives
that look like robots so we don't get all freaked out
because we want to be sure that they're robots
and we're humans.
So it's okay.
Don't you worry about it.
It's all right.
As long as you look like a robot, it's okay.
If you look like a human, something is wrong.
Right?
Because have you seen the movie?
I'm going to say this wrong
and everybody's going to holler at me and say,
don't you know how to pronounce the stupid movie?
and I'm going to say I did pronounce it.
That's the way I pronounce it, but I guess it's wrong.
The X.
Now, everybody, I've heard it, machina, but it's not machina, is it?
It's X.
I'm not even going to try.
See, in my ear, I hear, what is it?
Machina, right?
X, machina.
Okay.
See, that's wrong.
I don't care if they tell me it's wrong.
It's not wrong.
Fantastic movie, though.
And they made the robots look like humans.
Right?
So, you know, with some of those in there, you get the idea that, as the process went on, you saw the ones that weren't quite right.
But they finally got it right, where you can't really tell.
And I'm sure, I mean, they left it so we could have, what is it again?
Machina, Ex Machina... Ex Machina Two.
They left it so we'll be ready for that.
I'm sure that's already being made.
Although we haven't heard about it.
But I'm sure it already is.
Don't kid yourself.
But I'm still concerned about what Google is worried about, because think about what they're really talking about.
I mean, they want you to worry about the key problems, and they're worrying about it for you.
But really, what they're concerned about is that the robots just don't start killing us, right?
I mean, that's the main concern: that they just don't start killing us.
You know, when you talk about avoiding negative side effects, how do you stop a robot from knocking over a bookcase?
Yeah, and how do you stop the robot from knocking the bookcase over on you?
I don't know.
Good question.
Scalable oversight.
Uh-huh.
Scalable oversight.
I love the way that sounds.
Scalable oversight.
Uh-huh.
So we've got quite a way to go. But we're almost there. We're almost there. And so many cars are close to being driverless.
Now, we have cars advertising that they're driverless, and you can push the driverless feature.
And they're pretty good, but they're not perfect.
You still have to be alert enough to grab the wheel
when it gets confused.
You know, that scalable oversight.
Oh, no, a curve.
You better take the wheel or I'm going to crash.
So if you fall asleep, forget it.
You're crashing.
I want to be able to just sleep.
I'll be able to push the button.
Home.
Go sleep.
When we get to that,
then my friends,
We're living large.
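That "grab the wheel when it gets confused" handoff boils down to the car watching its own confidence and alerting the driver once it drops below some threshold. A minimal sketch, with the confidence numbers and the threshold invented for illustration:

```python
# Sketch of a driver-assist handoff loop: the system steers while its
# confidence is high and asks the human to take over when it drops.
# The confidence values and the 0.6 threshold are made up for illustration.

HANDOFF_THRESHOLD = 0.6  # below this, the car stops trusting itself

def control_step(lane_confidence: float, driver_hands_on_wheel: bool) -> str:
    """Decide who drives for this time step."""
    if lane_confidence >= HANDOFF_THRESHOLD:
        return "autopilot steering"
    if driver_hands_on_wheel:
        return "driver takes the wheel"
    # The case Jeffy is worried about: the car is confused and the
    # driver is asleep.
    return "ALERT: take the wheel now (slowing down)"

if __name__ == "__main__":
    # Straight highway, then a curve the system can't read, driver dozing.
    for confidence, hands_on in [(0.95, False), (0.9, False), (0.4, False), (0.4, True)]:
        print(confidence, "->", control_step(confidence, hands_on))
```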
In the next 19 seconds, you could sell your home.
Okay, I mean, it's not going to sell your home.
I mean, this, but you're going to take a big step toward getting it sold.
Go to realestateagentsitrust.com and find an agent selected by my team,
a professional who shares your values and speaks the truth.
Sell your home fast and for the most money.
Get moving at realestateagentsitrust.com.
