Big Compute - AI: Hollywood vs Reality (Part 1)
Episode Date: September 21, 2021
Beginning nearly a century ago, Hollywood movies have portrayed artificial intelligence on the big screen… er… at least what they thought artificial intelligence was. But just how much has cinema gotten right? We hear from AI expert Sam Altman, CEO of OpenAI, as he walks through his thoughts on where AI is really going, and what we need to do to prepare for it. Meanwhile, Ernest and Jolie explore a timeline of hilarious and fascinating AI-related blockbuster titles. From AI-gone-evil to AI learning to love, similar themes crop up again and again, demonstrating mankind’s obsession with the “what if.”
Transcript
I speak two languages and I can sit in a restaurant and I can hear both conversations happening in both languages at the same time and follow both of them.
But because my brain has to operate on different wavelengths.
That is not normal. What other language do you speak?
Spanish.
Oh, muy bien. Hablo español también, pero no mucho. Necesito estudiar mucho más. [Oh, very good. I speak Spanish too, but not much. I need to study a lot more.]
That was actually pretty good.
Hi, everyone. I'm Jolie Hales.
And I'm Ernest DeLeon.
And welcome to the Big Compute Podcast. Here we celebrate innovation in a world of virtually unlimited compute,
and we do it one important story at a time. We talk about the
stories behind scientists and engineers who are embracing the power of high-performance computing
to better the lives of all of us. From the products we use every day to the technology
of tomorrow, computational engineering plays a direct role in making it all happen,
whether people know it or not. Hello, everyone. Welcome to season three of the Big Compute podcast.
We are back after a little summer break, and I've got to say to our listeners, I missed
you guys.
I don't know who you are, but I feel like we'd get along.
On the contrary, I know exactly who all of our listeners are.
Just kidding. Do I see that you added a sound effect in here?
Like it says insert maniacal laugh sound effects.
I love that you're adding sound effects for me.
Already off to a good start.
So we do have some exciting episodes ahead about awesome innovation through computational science and engineering.
And it's everything from heart implants to flying cars.
And for today's episode, we're going to have a little bit of fun and we're going to talk about AI, which means I'm going to put Ernest on the spot here.
So, Ernest.
Yes.
How would you define artificial intelligence? Like
if you had a great aunt or something who knew nothing about technology and you had to explain
AI to her, what would you say? Oh, that's a tough one because a lot of people think of AI in terms
of what we've long called the singularity, right? The moment where artificial intelligence reaches parity with human
intelligence, right? We're still not there. Every 20 years or so, they say that we're about 20 years
away. And that's happened several times now. I'm banking on 2045, but that's just a random number
to throw out there. But I think I would say that it's the point where we can rely on AI to do just about
anything a human being could do. That also proves the point that AI can be pretty complex and have
a lot of different definitions associated with it in a way. Because I think artificial intelligence
or AI, as we always call it, is just one of those terms that we've kind of gotten used to and we know
in our minds what it is, but we haven't really sat down to narrow it down from the beginning.
And so I decided to actually look up the definition online. And it seems like most dictionaries list two different definitions for AI. And I'll quote from our good old friends at Merriam-Webster, who say AI has two definitions. The first is, quote, a branch
of computer science dealing with the simulation of intelligent behavior in computers. So that's
referring to the practice or study, you could say. And then the other definition is probably
what we're most familiar with, which is also pretty simplified. It's, quote,
the capability of a machine to imitate intelligent human behavior, just as you were talking about.
So that's the actual machine intelligence or the AI that pop culture tends to focus on. In fact,
I think that many in our generation were first introduced to the idea of AI through its portrayal in Hollywood movies.
And before you start naming AI movies, Ernest, because I know you know like a thousand of them,
hold that thought because I promise we will be voluntarily going down that rabbit hole in
just a few minutes. That's fine. I won't name any, but I will say that, you know, you will find traces of AI very far back to, you know, not the silent film era
for obvious reasons, but just after that, you know, early black and white science fiction films
will often have a component where there is an AI that is a computer of some kind.
Yeah. Even before that, and we're going to go down an actual timeline of pop culture and movies specifically and
when AI was first introduced.
And it is very early.
They built Colossus, supercomputer with a mind of its own.
Then they had to fight it for the world.
But first, I wanted to introduce you to someone who thinks about AI a lot and frankly knows a lot more about it than I do.
And this person is considered to be a thought leader in the AI space.
In fact, just before the pandemic lockdown, we at Big Compute had a live in-person conference event.
It is my pleasure to be here with you at Big Compute.
Where a bunch of thought leaders talked about the many ways
cloud high-performance computing was contributing to innovation, kind of like an in-person live
version of this podcast. And we were privileged to be able to hear from Sam Altman at the conference,
who defined intelligence, at least in the context of being separated from consciousness,
as more or less being... Something about the ability to learn new concepts based off of existing knowledge,
and maybe something about the ability to sort of learn them fairly quickly.
We talk about the right metrics here, but I think intelligence is deeply related to the
ability to learn, which is why I think we're going to get there, because we have algorithms that can learn. And in case you're not familiar with Sam Altman,
he's the CEO of OpenAI, which is basically a research lab founded in 2015 by Sam and by
Elon Musk. I think they're friends, actually. And they have this goal of promoting and developing
friendly AI in a way that benefits humanity as a whole. And it sounds like they believe that AI is really going to take off sooner than we might think.
And they want to get ahead of it and make sure it doesn't end up like in the hands of only a small number of people
who could then theoretically use it for nefarious purposes,
or at the very least, they would just have a lot more power than anyone else. So OpenAI is all about
collaboration and sharing information as AI is progressing. We have like a set of principles.
We try to write up in our charter what those are, and we'd like the public to hold us accountable
to them. I think people can disagree with the charter though. As the stakes get higher and
higher, no one organization and certainly no one person should be making decisions for what the new
social contract looks like and how this technology gets used and sort of how we share governance and
economics. I think a thing that we will move to in the coming years or decade is more and more of
our decisions will be influenced by an advisory board that we'll need to put in place of people
that can kind of represent different groups in the world, which right now we don't have. So by hearing Sam Altman describe AI as well as
good old Merriam-Webster, I think we can safely say that artificial intelligence is basically,
at least somewhat, a machine with the ability to learn. And it's good to throw down that
definition since these days the subject of AI is really hot
in the media and it's really hot in Silicon Valley and everyone seems to want a piece of it,
down to the point where we're seeing a lot of startups claim to work in AI, and many of them do,
but some of them probably just more wish that they did. Every few years there's like some buzzword,
you know. We're going to do this with social. We're going
to do this with podcasts. We're going to do this with crypto. We're going to do this with AI.
And I think by the time you get like, say, three buzzwords in the first two sentences of a startup
pitch, you can pretty safely ignore it. And even one, you should be like a little bit skeptical
unless they're clearly doing it. So the number of startups that say they're an AI-driven X and are actually AI-driven,
it's, I don't know, 1 in 20, 1 in 50, something like that. And the lesson here is like startups
pitch themselves however they think will work. VCs often fall for it. The good VCs dig in and
don't fall for it. And Sam speaks from personal observation because in addition to being CEO
of OpenAI, Sam is also an investor in many tech startups. And he used to see a lot of startups from their earliest stages when he was president
of Y Combinator.
And for those who don't know what Y Combinator is, it's basically a startup accelerator that
has launched more than 2,000 companies, which would include companies like Stripe, Airbnb,
DoorDash, Twitch, Reddit, and our very own presenting sponsor, Rescale, which graduated from the program in 2011.
In fact, full disclosure, Sam Altman is also a Rescale investor after having learned about Rescale through Y Combinator.
And it's funny because, personal story, so I was the emcee for the Big Compute conference in February 2020, and it was typical for me to meet each speaker before they spoke. Like, I would meet them either in the dressing room or just off stage so that I could make sure that I had their introduction and, like, the pronunciation of their names correct. But when Sam arrived to speak at the conference, like, I totally stood in his bubble
and said hello to him and just kind of stared at him. And maybe he didn't hear me or something,
but we didn't end up having any conversation. And thankfully, his name is very easy to pronounce.
So there wasn't any problem from that. Sam Altman, CEO OpenAI. I just kind of stood there right next to him until we went on stage like we were in some awkward elevator ride.
And I remember wondering if maybe he was just one of those people who thought he was like the shizzle and didn't want to talk to a lowly person like me.
But, you know, frankly speaking, I didn't know him at all.
Who am I to judge his entire character based off of one little moment where he may not have even noticed I was there?
So I didn't think negatively of him, but I did wonder about what he was like.
But then as he started speaking on stage, I became really glad that I didn't just pass negative judgment on Sam because I found that he spoke very deliberately on stage, but he also
spoke very sincerely. And it soon became apparent, at least to me, that rather than being some like
Silicon Valley hotshot, he's actually just a really brilliant introverted human whose brain
is always moving probably at a thousand miles an hour. And I do believe that he authentically
wants good for humanity. You know what I mean? Yeah. And I will jump to his defense here as well,
because my wife often accuses me of the same thing. I can be somewhere and just completely
not paying attention to what's going on around me. Right. And I'm the same way. There can be
someone trying to talk to me and I won't even notice it. Yeah. And my husband will tell an entire story
and I'll be finishing up a thought in my mind that I won't realize he's even talking until
one and a half minutes later. And then I have to say, hey, I'm so sorry. I didn't hear a word you
just said, but I just realized you were talking. Can you start over? You know, it happens all the
time, all the time. Yeah. It's the same thing my wife, you know, gets frustrated with me about. And I have to remind her, it's not that I'm ignoring you. It's not that I can't hear you or whatever the case is. It's that if I'm in the middle of a train of thought, nothing is going to get me out of that unless I hear someone screaming
because they're dying or something like that. Right. Yeah, exactly. And I think that at this
conference, it was like, Sam was you and I was your wife. Like, why aren't you paying attention to me? You know what I mean?
Yeah.
And Sam, I mean, he's definitely got his share of critics, so he's familiar with people probably judging him, whether it's fair or not.
Like, for instance, some people say that he's too altruistic in his thinking or that his ideas about societal AI goals are unrealistic.
And we'll get into some of what those are.
And I don't know that I agree with everything he believes.
I mean, for one thing, I'm not an expert in his field, so I don't feel like I can really
develop any really strong opinions because I just don't have the data.
But I will say I do believe he authentically is trying to do what he believes
is right and what he believes will help humanity in this new age of artificial intelligence.
Because with AI comes this whole new package of potential ethical dilemmas, right? Dilemmas that
have been represented in pop culture for decades. I mean, you could even say that ideas behind artificial intelligence morality
have been around even before machines really existed.
I mean, for instance, think of the novel Frankenstein, right, by Mary Shelley.
It's alive! It's alive! It's alive!
That book was written in 1818, which was, you know, a couple centuries ago.
But the story is about an artificial being that is capable of human thought.
The monster created by a man they called mad is turned loose to strike terror into the hearts of men.
Like it derived from a scientific experiment and it facilitates ethical questions about scientific betterment for society that are still very much applicable today.
Yeah, that novel raised a lot of questions about ethics and morality in general.
The spine-tingling, blood-chilling story that stunned your emotions.
Frankenstein.
But thinking about Frankenstein totally got me down the rabbit hole, as subjects on this podcast do all the time.
And I started looking up all the movies that have to do with artificial intelligence in one way or another.
And it's been really interesting to compare those movies to reality and see what they got right.
As we're watching this evolution of actual AI. In fact, Ernest, as the movie buff that you are, do you have any favorite AI movies that come to your mind?
Yeah, so my favorite of all time is The Matrix.
The Matrix is the world that has been pulled over your eyes to blind you from the truth.
Obviously the original, but I'm a fan of the entire series.
I think the Wachowskis are brilliant. And I've said many times that that series, you have to watch many, many, many times to get all the layers of depth that are in that story.
Human beings are a disease.
You are a cancer of this planet.
And we are the cure.
Here's what it boils down to.
There's a thought out there.
When the singularity happens,
it essentially is going to destroy humanity. And if you think about the Matrix movie,
that's exactly what they're trying to do. The AI is trying to destroy humanity, right? So there's
this negative depiction about AI. But then you also have another segment out there, different
people who are saying, well, that is one possible outcome. But there's also the
other possible outcome, which is because the AI is fundamentally being trained by humans,
you have humans that are very altruistic, and it's possible that the AI goes down the altruistic
route. I believe there's a third option. It is possible that the AI, even though it's trained by humans, will look at the
situation and because of its nature, will automatically try to reach the best or the
solution that is most in equilibrium. And it will turn out that it is a coexistence, right, between
an AI and humanity and that each side benefits from the other. That's so fascinating. You've been talking
about the singularity. I've actually never heard that. I mean, I come from the entertainment
industry, right? I've never heard the singularity. And so I looked it up just now and it looks like
the technological singularity, or the singularity, as you said, is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.
So in other words, it's a hypothetical future where technology growth goes out of control and can't be reversed.
And by the way, that's a very negative portrayal of what the singularity actually is.
The singularity is the point at which
this thing essentially is able to
not only think like a human,
but perhaps even surpass human ability.
Okay, so it doesn't necessarily have to be a point
where it destroys us all,
like in all the movies.
Well, I think that's what the chief arguments
are around this, right?
Like there are people who believe
that the singularity is in fact
that event. But as far as I'm concerned, it's
the point where the intelligence
reaches human parity or even exceeds it
somewhat. And so you cannot put it back in the
bag at that point because it's now smarter than we are.
Oh my gosh. The question is what happens
next, right? And that's
the hard part. That's crazy.
So most people believe this is a thing.
This is going to happen. The question is more when. I think most of us in this field believe that it's a when, not an if. And that's why I think there's a lot of effort around trying
to establish these ethical and legal frameworks ahead of time. Right. Before we get there.
Right. Because because if we don't, all that does is it leaves it open to abuse when that finally happens.
It's interesting as we talk about this, because looking at the list of movies that I've actually
compiled of AI movies, right, a lot of these themes that we're talking about are explored
extensively in pretty much all of these movies, which is fascinating. And so, yes, I have a list of AI movies that I've gathered after extensive research
on the interwebs because, you know, this is what I'm paid for. Of course. And let's go through this
list in order of release date, starting with the oldest movie about AI to the newest. Okay,
you ready for this? Yep.
So the oldest movie that I could find that had to do with AI was actually a black and
white feature called Metropolis that was made in 1927.
Have you seen this one?
I have never seen that one.
Yeah, so I had never seen it either, but I watched some clips and it was basically just
an epic music score over moving pictures with a few text slides here and there.
But apparently this movie has inspired everything from The Matrix to Blade Runner to Star Wars.
And that's because it shows this super stylized, futuristic, utopian city. And I mean, as a filmmaker, I gotta say the visuals for the time
this film was made are pretty impressive
because I mean, they had to be incredibly creative
with their limited resources.
So as far as AI goes with Metropolis,
apparently the plot of this movie
involves a robot character called False Maria,
who is this AI sort of being
that ends up unleashing chaos.
And I won't give away the ending in case people care to go back and watch it because the entire
movie is actually available to watch for free on YouTube. But this movie Metropolis showed the
first robot ever depicted in a film. And I want you to look at this robot, Ernest, and tell me: does it remind you of any other movie characters you've seen before?
Well, I mean, C-3PO.
Yes! C-3PO of Star Wars was clearly inspired by this movie, Metropolis. I mean,
the False Maria robot, I swear, looks like a female version of C-3PO.
And it's funny because I remember seeing this in many, many different films.
Yeah, which is so fascinating that it came from the mind of like one filmmaker, right?
And then it really did influence all of the robot movies after that.
Yeah, the other thing I love to see is like how in these older films,
whenever they're trying to portray something, like in this case, it's an
AI or robot, everything has human characteristics. It's true. Right. And I think that obviously it's
not the case with C-3PO, but most of the time they're using that form factor to scare people.
Yeah. And we're going to explore that with this list because it happens all the time.
Right. It's like connecting it so much closer to home.
Right. No one's scared of R2-D2.
No one. Right. Right. But if C-3PO went rogue, you'd be scared of him. Yeah. Because he has eyes.
He has a face. Yes. I've just about had enough of you. But this movie Metropolis came out more
than a decade before the first real robot was actually ever built. So these filmmakers were
ahead of their time, not only with ideas behind robotics, but also with ideas about AI.
They maybe didn't know what it was so much then, but they had the same ideas.
And in the 1950s, there was the black and white movie called The Day the Earth Stood Still in 1951.
I mean, have you heard of that one?
Yes, I actually saw that one.
And I also saw the remake of it.
Oh, was there a remake of The Day the Earth Stood Still? I didn't even know.
Are you aware of an impending attack?
Do you remember if you liked it or if it was any good?
You know, it was an interesting movie that could mean so many different things. It was interesting. Like I said, I don't remember what the remake was called.
Maybe this will refresh your memory.
So the main storyline with this movie, at least the 1951 version, is that a UFO apparently lands in Washington, D.C.
and introduces society to like a soft spoken robot named Gort.
He's a robot. Without you, what could he do?
There's no limit to what he could do. He could destroy the earth. And I'm not really sure if the robot turns out to be good or bad or whatever.
I mean, I have my guesses.
It's probably bad.
But apparently that robot has AI-ish qualities of sorts.
Yes.
And now I'm actually remembering.
The remake did happen and it had Keanu Reeves in it.
One of my favorite actors.
And the reason I remember the movie so well is because, at least for the remake, this was alien driven and it was essentially a Noah's Ark situation.
OK.
The Earth was obviously going to be destroyed.
And so this other alien species showed up and was selecting certain humans and various things and taking them to survive.
Right.
The event.
So what does Gort have to do with it?
I have no idea.
The next one I came across was made in 1968, and I have a feeling this one will ring a bell for you. It's called 2001: A Space Odyssey.
Absolutely. And this one, I think most people have seen.
A shrieking monolith, deliberately buried by an alien intelligence, starts man on a mission half a billion miles into space.
I think this is the one that actually set that entire thought train in motion of AIs being malevolent.
Yeah, I think this is where it starts. It is considered to be
essential viewing. For sure. Right. It's one of the more critically acclaimed films on the AI list. A lot of them are more of the Michael Bay-like action-adventure sci-fi kind of movies. But this one was something that seemed to set some really interesting precedents in terms of thought when it comes to AI.
Controlling the mission is a talking computer known as HAL.
HAL, you're the brain and central nervous system of the ship.
Does this ever cause you any lack of confidence?
Let me put it this way, Mr. Raymer.
No 9000 computer has ever made a mistake or distorted information.
It's also being hailed, to your point, as being one of the most realistic or accurate AI portrayals in film history, which is saying something given that it was made in the 1960s.
Open the pod bay doors, Hal.
I'm sorry, Dave. I'm afraid I can't do that.
What are you talking about, Hal?
This mission is too important for me to allow you to jeopardize it. And the synopsis is, quote, when astronauts are sent on a mysterious mission, their ship's computer system, HAL, begins to display increasingly strange behavior leading up to a tense showdown between man and machine that results in a mind bending trek through space and time.
Yep. It is a great movie.
And you said it's kind of odd that this is from 1968,
but back then when they didn't have the tools to do CG,
they had to rely a lot more on their storytelling ability.
Which, to be honest, I miss that.
Yes.
Because I feel like everything is so expensively cheap now.
No, that's exactly right.
They're spending millions of dollars in CG
and they're like truncating the story
to try to rush it through.
There's definitely an appreciation
for the older movies
because they had to do a lot more with less.
They had to rely on good storytelling
and characters.
And not just that,
but like film angles, right?
Because they didn't have CG.
Yeah, and I mean,
if anybody hasn't seen this movie,
it's at least worth watching the trailer, which will post on big compute dot org.
You can see just by the shots that are shown in the trailer itself why it is renowned like it is.
Exactly. And that's why it stands up with La Valencia and Velocipaster.
I would guess that it passes them quite easily.
And the thing is, this happens a lot right so there's
a lot of ai movies and this is the exact concept that the ai becomes sentient and for whatever
reason threatens to take out humans well not even threatens to take out humans the human becomes
scared that the ai is going to either do something bad fall fall into the wrong hands, whatever the case is,
and the human determines that they need to destroy the AI.
Yeah.
Well, the AI knows this, figures it out.
And then fights back.
And then fights back.
That's like everywhere.
That's like the classic premise for like half of these AI movies.
Yes.
And if you look at AI today and what it actually is, it's definitely a step away from what we see AI as being in a practical sense today, which is more of a bunch of programmers burning the midnight oil in some office building.
Honestly, there's no way I can make this sound exciting.
We show up every day and we bang on our computers and we try to get like algorithms to work.
And then we find out it was some stupid bug and we all get upset with each other.
That's my BFF, Sam Altman, again.
No, it's a little bit better than that.
We are trying our hardest to discover what makes intelligence work.
And we are trying to not think about how we get our applications
a little bit better next year, but over the long arc of history,
what it takes to make machines that truly think.
From supersonic jets to personalized medicine, industry leaders are turning to Rescale to power science and engineering breakthroughs.
Rescale is a full-stack automation solution for hybrid cloud that helps IT and HPC leaders
deliver intelligent computing as a service and enables the enterprise transformation
to digital R&D.
As a proud sponsor of the Big Compute Podcast, Rescale would especially like to say thank
you to all the scientists and engineers out there who are working to make a difference for all of us.
Rescale. Intelligent computing for digital R&D.
Learn more at rescale.com slash BC podcast. And going back to the theme of adventure AI,
1970 brought us the movie called...
Colossus: The Forbin Project.
Where a secret, massive, intelligent computer system called Colossus
is hidden away in the U.S. Rocky Mountains
to ensure the nation's safety against a nuclear attack.
The frightening story of the day man built himself out of existence.
But then Colossus ends up connecting with a similar computer in Russia called the Guardian
and chaos ensues.
So it's that plot again that we were talking about.
The choice is yours.
Obey me and live or disobey and die.
A lot of these were capitalizing on what we would now call the Red Scare.
Totally.
Which is Russia's the adversary.
And it wasn't just this.
I mean, look at...
It's all kinds of movies.
Right.
Like Rocky IV was all about beating the Russians, right?
So War Games is something similar.
War Games?
War Games, yeah.
Okay.
Yeah, that one's not on my list, but I know I left a few off.
War Games.
It was the wrong computer.
Shall we play a game?
AI is, I think, in NORAD, if I'm not mistaken, which is, I believe, in the Rocky Mountains.
It's like the exact same movie, just 10 years later.
And it's meant to protect us all from a nuclear attack from Russia.
What the heck?
I think it's the exact same movie.
I obviously have never seen Colossus: The Forbin Project, but now that I'm thinking about it, like, I need to go watch it and then re-watch War Games, because one of them ripped off the other one.
Well, Colossus came first, clearly. Now, War Games has a little bit of a different tack. It didn't connect to a computer in Russia. Anyway, I won't give away the... I mean, it's 1983. Like, if you haven't seen it by now, you know. But clearly there's a lot of people who haven't seen La Valencia and Velocipaster. So I don't want to ruin War Games for you, but I'll just tell you this.
It was a very young Matthew Broderick in it.
Oh, I don't think that I deserved it.
Do you?
So in 1970, we've established that Colossus: The Forbin Project was released.
And then three years after that, 1973 brought us the movie Westworld, where Western style theme park robots malfunction and then begin killing visitors.
Where nothing, nothing can possibly go wrong.
I'm shot.
Go wrong.
Raw.
Go wrong.
Oh my God.
Shut down. Shut down immediately.
So, Ernest, if you haven't seen that one, it sounds right down your alley.
Wasn't there a new thing called Westworld on HBO recently or something like that?
I feel like there was something like that.
And it had to do with, like, AI and robots?
I bet you anything it's a remake.
I think I made a mistake.
So, our creatures have been misbehaving.
This was like a series.
This wasn't a movie.
As far as I know.
It was probably inspired by this, is my guess.
Yeah, I'll definitely try to go back and watch this.
Whether it will make it into my Hall of Fame, that's another question.
Boy, do we have a vacation for you.
For you.
For you.
For you.
And of course, we couldn't talk about androids with the ability to think without mentioning
Star Wars. C-3PO and R2-D2, human-cyborg relations, and then a whole host of other droids are AI-style robots who are basically human characters in machine form running all over those movies,
whether it's the original 1977 Star Wars film, one of the political thriller prequels, or the critically debated
newer films, whether you like them or not.
If it's a Star Wars movie, it's got humanistic droids in it.
These aren't the droids we're looking for.
Which we now know were inspired by...
Metropolis.
Metropolis!
And far fewer people are aware of the other AI-related movie that came out the
same year as the original Star Wars. This movie was called Demon Seed, and it's about an AI system
that takes its creator's wife hostage and attempts to impregnate her in order to take on a human form.
So, you know, a feel-good movie. It is something more than human, more
than a computer. It is a murderously
intelligent, sensually self-
programmed non-being.
Yeah, I've never seen that one.
Unfortunately.
My child shall live as
man among others.
Child? Yes.
My child and yours.
It kind of takes on the concept of a smart home a few decades before smart homes actually became a thing. And I know that you, Ernest, I mean, being the cybersecurity
guru that you are, I mean, you don't have a Google Home or an Alexa or digital locks or
any of that stuff, right? None of that. I have a dumb home. Everything is analog.
See, whereas since I am ignorant of many of the cybersecurity information that you have on a daily basis,
I have a smart device in pretty much every room, and it's connected to every light bulb, every TV in the house. So probably a hacker's dream, frankly.
But a lot of times I still spend so much
time yelling at Google for getting my commands completely wrong and like playing grunge metal
when I'm trying to play music for my toddler or something. Play Cocomelon songs.
I want to make something clear. I'm not like a technology troglodyte. Obviously,
I work in like the cutting edge of technology. It's not that I don't think I'll ever have a smart anything.
It's that eventually this technology will get to the point where it is secure and able to be used in a context that I agree with.
But right now, the answer is no.
And that's just from the security perspective.
Everyone I know who has these things complains that they don't understand them or they do the wrong thing half the time or whatever the case is.
And even though it's been around for a while and people think like, oh, yeah, this smart stuff's been around for like a decade, it still hasn't even learned to crawl.
So it's going to take time.
But eventually, like anything else, maybe a decade or two from now, it will be fully fledged and it will be everywhere.
There will be no way to avoid it.
But it will also have security. And then you'll get one in your house.
I think by that point I'll be out in the ranch and not needing any of this.
So most people remember how bad it was five years ago. And people that use Siri or whatever
have noticed that it gets a little bit better every year. Actually, not a little, a lot better
every year. And now it basically doesn't mess up,
even in difficult environments,
or it doesn't mess up appreciably more than humans mess up
when they're trying to understand.
Do you at least use Siri?
If I do, it's usually for stupid stuff
like set a 15 minute timer
so I can come back and check on my roast
that's in the oven or,
you know, something like that.
I'm not having it tell me anything of importance.
It's just like reminders, alarms, and alerts.
I use Google on my phone, but I use it, sounds like, for a lot more than you do. But then I also have Bixby on my Samsung smartwatch, and no offense to Samsung, I think they make a lot of good stuff, but Bixby is dumb. I'm not a fan.
Well, Siri's dumb too, and that's actually... I appreciate that about Siri.
Like I know a lot of people, you know, would be like, I don't want a dumb assistant. I know I
want a dumb assistant. I want one that doesn't know what I'm saying half the time, but does
know when I tell it to set a timer. That's all I need. But smart devices, I mean, it is, it's good
to say that they are just the beginning, I think, of where AI will take us and where AI has already landed.
I think one of the most exciting developments in the field in the last few years has been how good AI for natural language is getting.
I think we are going to see an explosion in the next few years of systems that can really process, understand, interact with, generate language. And it will... I think it'll be the first way that people really feel powerful AI, because you'll be able to interact with systems like you do by talking to somebody else. You'll be able to have dialogue that actually makes sense. You'll be able to process, and computers will be able to process, huge volumes of text that are
sort of very unstructured, and you as interacting with that system in whatever way you want,
in whatever way you do, will get what you want. These quotes are all from Sam Altman in 2020,
and even since then, progress has been made and announced in AI in terms of language.
Specifically, OpenAI has this AI system that translates written language into code.
It's called Codex, which is kind of crazy to think about.
I mean, speaking simply, if I understand it right, when you talk to Codex, it then uses
your commands and your language to generate code for you so that you, the programmer, don't
have to do that. And it's basically like telling a computer what you want it to do and then having
it act on your behalf. That's super interesting. It's almost like they're using AI for NLP to feed
into an AI programming interface. So yeah, this is like Inception. That's awesome. I never even
heard of that. Yeah, it's new. I just barely saw some articles on it over the last few months. And apparently Codex is a descendant of GPT-3.
I'm not sure if you've heard of that. Yes, I have. Yeah, that was pre-Codex. And that's OpenAI's
vaunted natural language model. But Codex is trained on billions of lines of code in addition
to written text. So, I mean, obviously these are some interesting things being done in the world of language for AI. Whereas I would say 40 years ago in the 1980s,
computers were just starting to really become a household option. I mean, I remember our first
computer when I was really young, you know, it was a DOS controlled computer, black screen,
green text. And with this option of having a household computer came this new excitement in pop culture
around the idea of computers and robots.
And I don't know about you, Ernest, but for me, the 1980s and the 90s were totally my childhood years.
And there's this nostalgia around the movies from that particular time. And it surprised me going back how many of the AI related movies actually kind of took off in a big way.
For instance, Blade Runner in 1982, which everyone's probably heard of.
A Blade Runner's job is to hunt down replicants, manufactured humans you can't tell from the real thing.
That movie is about like bioengineered replicas of humans powered by AI living amongst real humans, but then they only live for four years.
That's a great movie.
The remake is great.
I did your job once.
I was good at it.
But I'll agree with you.
Like the 80s to me were the best decade.
They're awesome.
Of all the decades I've lived in.
I mean, you know, I'm sure somebody will say the 70s and 60s.
I wasn't alive.
I'm sorry.
You mentioned computers were just coming into the home.
My first one was a TI-99/4A.
You could not do very much with that thing.
It wasn't even DOS.
But I was fascinated with that thing.
And that's how I learned how to program in BASIC.
And then the toys of the era, right?
The toys, the action figures.
I had the entire Ninja Turtles set, right, with the Turtle Van and even, like, the turtle boat, and Rocksteady and Bebop.
It's the Turtles giving the old Foot Soldiers the boot with their latest invention, the Cheapskate.
To me, what made the movies in general in the 80s great was, I think, when the 80s hit and we started actually seeing computers in homes, storytellers started kind of focusing in.
They were looking at the technology and saying,
if this thing advanced to do X, Y, and Z,
what would that mean?
So I think it brought science fiction in general,
not down to earth, but it made it more tangible
as opposed to more like pie in the sky.
And that's why I think the 80s were the pinnacle of that
because by the time we got to the late 90s and the early 2000s, Michael Bay ruined the Transformers.
Oh, don't get me started on Michael Bay. He also ruined the Ninja Turtles, and that is sacred ground.
So they're aliens? No, that's stupid. They're ninjas.
But when it comes to Blade Runner, yeah, in the 1980s... What I didn't know was they were looking for me.
Blade Runner was pretty much renowned for its production design, which was futuristic and unique.
And again, I didn't know that it was also inspired heavily by the same movie from 1927.
Metropolis.
Metropolis.
And during the same year
there was also the movie
Tron
Finally, we get here.
When Kevin Flynn, a computer genius, unlocks the dimension beneath the screen, he becomes a prisoner in a world of his own making.
A talented computer engineer is transported to a digital world where he has to face off against the computerized likeness of his nemesis, as well as the Master Control Program. And I would venture to say... I know there's a remake. I liked the original Tron better than the remake.
You're probably not alone. But not only did I like the remake better, I loved it way more than the original. And that's not to say anything negative about the original. I love that one too.
Oh, man. He said he was about to change everything. Science, medicine, religion.
The thing I loved about the remake, so the movie is essentially like a two-hour-long Daft Punk music video, which was amazing.
Daft Punk, like... So I was like, man, it was amazing. And two is Jeff Bridges, right?
Yeah. Okay, Jeff Bridges. I'm a fan. I like him.
Jeff Bridges is awesome. And the fact that he brought the dude into the Tron universe was golden.
I created a program in my own image that could think. As much as he knew, there was far more that he didn't know. And he based an AI off of imperfect knowledge because he's an imperfect human.
And the whole thing just brings it all home in that there's this fear that because we are imperfect and we train the AI, it will also carry our imperfections, good and bad.
But there's an arc that ties across all of these AI stories
that we're talking about here. And it is the culmination of the human condition in AI. I think
that there's a story that really matters in all of this. Yeah, I totally agree. And there's even
more great stuff coming in the 1980s. In 1984, there was another one that I know that you appreciate, Ernest: The Terminator.
Absolutely. A total classic. In the 21st century, a weapon will be invented like no other. Where
Arnold Schwarzenegger plays the role of a cyborg assassin who travels to 1984 from 2029 to kill
the person who will eventually give birth to a son who will then, I guess, fight against Skynet, the artificial intelligence system that will ultimately spark a nuclear holocaust.
I always love when old movies try to portray the future, like 2029.
That's only like eight years away.
Eight years away.
Yeah, The Terminator is actually a great franchise.
But, you know, I go back to our biggest problem with AI is we are projecting our imperfections
and our faults onto it. And The Terminator was one of the first ones that I can remember
where they took a little bit of a different tack. They allowed the AI to just, again,
it took a negative view of AI, which most movies do. But the difference is they let it play out and then it was an attempt to correct the
mistake.
I'll be back.
And then after the Terminator, family friendly movies really started to jump into the AI
game.
Movies like Short Circuit.
Yep.
Number Five is alive.
In 1986.
I love that movie.
Do you?
That's awesome.
Artificial intelligence has gotten too smart.
No.
It's malfunctioning.
It might not do anything.
But it could decide to blow away anything that moves, couldn't it?
It's about an experimental military robot that's struck by lightning and comes to life.
And then also Flight of the Navigator.
I don't know if you remember that one in the same year.
I love that one.
And then that little spaceship that could reconfigure itself.
Yes.
About a 12-year-old boy who wakes up in a forest and apparently he discovers that eight years have passed without him aging.
And then he hops into that spaceship you're talking about.
And I think he talks to an AI sort of robot to try and unravel this mystery of the life he lost or something.
It sounds like you remember it.
The story of a spaceship. That flying saucer is first rate. Cool dudes, a friendship. I'm gonna miss you. I'm going to miss you too. And an experience beyond imagination.
I remember loving Flight of the Navigator as a kid, but I don't remember the plot very well. I just picture that spaceship thing, like, morphing into, like, a bullet-like shape and then traveling really fast.
And I don't know, I feel like there was like a big eyeball or light robot thing, which is probably the AI that we're talking about.
Yeah. And it would just kind of hover around or... Yeah, it was on an arm.
Yeah. On an arm. I'm not going to let you try this out on me. What if you fry my brain?
I will not fry your brain. How do you know?
I have been programmed with superior intelligence.
And then I remember like some Jim Henson type puppet creatures.
I don't know.
Am I in the ballpark here? No, you're right.
You're right.
And it's funny because this spaceship, it wasn't just that it could fly.
It essentially had like anti-gravity.
It could just hover.
Oh, yeah.
It had no, like, engines. It was able to manipulate gravity. But yet the eye that was, you know, representing the AI was on an arm, floating around inside of that thing. Like, you figure that thing would have hovered inside too, but they had to do that in order to make the effect work.
Right, right. Yeah, the technology of the time.
But you know, these movies inspired a whole bunch of future movies, and not just movies, but, like, concepts. So, like, one of the things that I remember, at least to me, the first place I had ever seen it was in Flight of the Navigator: when he was ready to get in there, essentially it took, like, the back of its shell and, like, morphed it into a staircase.
Yeah.
And he was able to climb in, and then it was just...
Oh my gosh, I remember that now.
It was almost like a liquid metal.
Yes.
It reminded me of more recently.
I don't know if you watched the newest Star Trek series, Discovery.
I have not.
Another one which I'm a huge fan of.
Shocker.
They just in last season introduced the concept of programmable matter, which essentially is that. Flight of the
Navigator, like morphing
mercury-looking liquid.
Right. In this case, it's not mercury, but
the concept is the same. There's matter
that you can program. You can
essentially tell the matter what shape to
take. Sit down.
I think there's been some sort of mistake.
Your brain contains data necessary
to get me and my friends home.
I'm just a kid.
You are the navigator.
I want to go back and see if that movie holds up.
Probably not.
I doubt it does.
I made the unfortunate mistake a couple of years ago of going back to watch The NeverEnding Story.
Oh, yeah.
I was devastated.
I was like, man, I loved this movie as a kid.
That says a lot, because you like all the garbage movies.
Yeah, so I try not to go back and watch 80s movies.
Yeah. The ones that I know, like, Short Circuit holds up, or Back to the Future.
Back to the Future, those are awesome.
Yeah, but like E.T., these like sci-fi fantasy type things didn't really hold up. Like, even the... was it The Dark Crystal?
Yeah, with the little Jim Henson, the little Muppet things.
Yeah, even that one, like, going back, it's like, ooh, this is terrible.
This is terrible.
In a place outside time, where good and evil struggle to possess the Dark Crystal.
There were two more classic AI movies that sprung up in 1987.
So RoboCop.
Your move, creep.
About a cyborg cop
who turns on his evil creators
when he learns of their nefarious plans.
You have the right to remain silent.
You have the right to an attorney.
What is this shit?
Anything you say may be used against you.
And the TV movie by Disney.
I don't know if you ever saw this,
but I was like obsessed.
Not quite human.
And then the sequels.
A scientist has just come up
with his greatest invention.
Hi, Dad.
Alan Thicke creates the ultimate kid,
but he's not quite human.
These movies were probably so bad,
but I loved them.
They were about an inventor
who sends his daughter to high school with his latest creation,
a robot teen named Chip. So, like,
you know, your classic teen Disney
TV stuff, but with, like,
a robot.
How do you feel? Feel?
Yeah, I mean, when Becky said she couldn't go to the dance
with you, what was your reaction?
Well, something heavy.
Right in my central computer. Yeah, I highly
doubt they would hold up.
Probably not.
And even RoboCop,
I wouldn't put it in the realm of AI
simply because the intelligence portion of that thing
was still a human brain.
Oh, see, I haven't seen RoboCop in so long.
I couldn't remember.
Yeah, it's still a cyborg.
Don't get me wrong.
Like, I guess if you want to make that connection
that the human brain was relying on electronics to do it,
then yes, you could,
you could make that argument.
There's a couple other movies on this list that you'll probably say the same thing about then.
Yeah.
But RoboCop was definitely,
they essentially shot off,
I think his arms and legs,
if I'm not mistaken,
but his torso and head were intact.
So that's what made it into the robot frame.
So it was him inside of there.
As a matter of fact,
that's a key part of the movie, because at a certain point he goes rogue, so to speak, and it's because his human brain is overpowering the...
Oh, the cyborg side.
The cyborg side, yeah.
So then does he turn into a good cop? He becomes a good RoboCop, right? Like, it's not like an AI gone bad. It's more of like a human hybrid cyborg that goes good.
Yeah. So what ends up happening is he essentially goes from, like, being told what to do and carrying out every instruction exactly as told, regardless of, like, what we would consider the morality or the ethics around it, to essentially like a Robin Hood type. Not, obviously, not stealing or anything, but just, he's, like, helping the unfortunate, right, against the powers that be. He's now just purely good, and he does what he does to do the right thing, not because someone told him to do it.
I see.
OK, that sounds like a good 80s movie plot.
Oh, yeah.
This guy is really good.
He's not a guy.
He's a machine.
So nowadays, the practical uses we're seeing of AI are smart speakers and email providers that can anticipate the next words that we plan to write.
And we haven't really seen these sorts of AI cyborgs that have been represented in these types of movies. And just going back to Sam Altman on what's actually happening in AI,
Sam believes that there is an AI revolution that is definitely coming,
and it's probably around the corner. Yeah, I would agree with him.
I think there have been
three great technological revolutions so far in human history: the agricultural revolution, the industrial revolution, the computer revolution. I think we are now in the early innings of the AI revolution, and I expect that one to be bigger than all three previous ones put together. Thinking, understanding, intelligence, like, that really is what makes humans humans, much more than our ability to get physical stuff done in the world. And so I think this is going to be a huge deal and impact life in a lot of ways.
We talk about cloud high performance computing and the simulations we run in there, right? And when you pair AI with that, you start getting predictive analysis. So traditional simulation is one where, like, I put parameters in and then there's an output.
I go look at the output and I'm like, this is not really what I expected or what I wanted or what I thought.
I'm going to put a different set of parameters and run it.
And where AI comes in is AI kind of sits in the middle.
But earlier in that process, you know, it's looking at the bulk of the simulations you've run before, all of them, right?
And what it's doing is it's saying, based on the 10,000 simulations you've run before
this, the parameters you're putting in for this are most likely going to result in this.
And that's not what you're looking for.
So let's not run that simulation.
And so what it's doing is it's trying to narrow down the number of simulations you have to
run to get to the stated goal.
So instead of running 100 or 1,000 simulations to get something that you want, it's trying
to narrow that down to like 10.
That's just an example.
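As a rough sketch of the idea Ernest is describing, a surrogate model trained on prior simulation runs that screens candidate parameter sets so only the most promising ones get submitted, it could look something like the following. The parameter count, the metric, and the target value here are made-up placeholders, not anything from an actual Rescale workflow:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical history: parameter vectors from 10,000 prior simulations
# and the output metric each one produced (placeholders for illustration).
X_history = rng.random((10_000, 4))   # e.g. 4 design parameters per run
y_history = rng.random(10_000)        # e.g. the measured quantity of interest

# Train a surrogate model that predicts the output from the parameters.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_history, y_history)

# Candidate parameter sets we are considering running next.
candidates = rng.random((1_000, 4))
predicted = surrogate.predict(candidates)

# Keep only the handful of candidates predicted to land near the target,
# instead of submitting all 1,000 full simulations.
target = 0.30
shortlist = candidates[np.argsort(np.abs(predicted - target))[:10]]
print(f"Submitting {len(shortlist)} of {len(candidates)} candidate runs")
```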
So it's putting predictive analytics against simulation. So everything we do in computing, period, but specifically in HPC and in AI, is bound by two things, right? Our capabilities around electrical
engineering and mechanical engineering, more specifically material science and semiconductor
design, right? Like these are the two areas that kind of drive all of this. And right now,
both of those are very much not AI driven.
There are some, you know, I would say low level AI things being done, but they're not quite to where I would consider it was like full AI.
Yeah, I agree with you.
And when we get to the point where we have solid AI, it's literally going to change the world.
Like it's not just a revolution in terms of like what AI can do.
But just imagine this, like the simulations, the high performance simulations we run in the cloud, right?
They can only run as fast as the hardware that we have behind them.
The hardware that we're designing,
the semiconductor design is happening on the current generation of semiconductors, right?
So current generation of processors, memory, RAM, whatever, all that stuff.
Once this cycle starts feeding into itself,
the speed at which our material science and semiconductor design accelerates results in a
pattern where the technology is actually advancing faster than we can use it. And that is where we
will truly change the world because we're now not hamstrung by our own ability as humans to design these
things. We now have the ability for AI to do it at a speed that we cannot even compare with.
Interesting. Yeah, I see what you're saying. And I imagine that that would come with an entire
package of challenges, you could say. I know that there's already a lot of talk
about fears that AI is going to take away jobs, they say. And I will
get on a little bit of a soapbox here. It's my personal belief that AI probably will take over
a lot of jobs, right? Like for instance, self-driving cargo transport might not need a
human driver anymore for obvious reasons. But I guess I tend to err on the optimistic side of this because I feel like humans will always adapt. And once you learn a skill, who's to say you can't transfer that skill to another area or even learn another skill? I mean, that's the nice thing about human beings, is that we're adaptable. So to me this change isn't something to be feared, but maybe more carefully planned for and ultimately embraced for a better future if we can continue moving in that direction kind of safely, if that makes sense.
I agree with you, but I would take it a little further.
I would say the vast majority of people's jobs are going to be taken by AI in the next 40, 50 years.
And I mean like over 90 percent.
Wow.
Yeah, because think about it.
Like the reason we needed
these large labor forces historically
was primarily for physical labor.
I realize that that's changed
and there's a lot of like other types of jobs now.
But the reality is like with just about any of those,
a machine that has the correct capabilities built into it
can do it better than a human, period.
There are a few out there that that's not the case, right?
And I acknowledge that.
But the vast majority of things, like you talked about transit, right?
Farming, even service industry jobs like serving food, all these kind of things can be done better by an AI, better by a robot.
We just don't have the technology to fully do that yet.
That technology is coming.
And not only is it going to be more effective,
it's going to be more efficient and it's also going to be safer. Right. So like these a lot
of the jobs that are dangerous out there. Yeah, I do like that. Humans won't have to do anymore.
Like just driving. Exactly. You know what I mean? Like a huge cause of death, car accidents,
because people make mistakes. And if you make a mistake while you're behind the wheel of a giant
machine that's going at 70 miles an hour, that could end up in a really bad result for you or some random person who isn't making a mistake.
Exactly.
So, I mean, there are definitely advantages to taking this technology to the next level.
And I feel like those advantages could be beneficial enough to make it worth it.
Yeah. And there are already people discussing this problem, right? Because we have known for at least a decade now, probably more like two,
that this was going to happen, that eventually we were going to automate just about every single
human out of a job. And it wasn't like it was an intentional malicious thing to do. We just knew
that technology on its own path was going to do that. Like once the science is there, you can't
put it back in the box. Right.
So this is going to happen.
It's one of those things where like all of us have to think about this.
Right.
And what it means and what we're going to do about it.
And I think that's a great thing that Sam is doing here is kind of like trying to put
this in the consciousness of as many people as possible and not trying to drive an agenda
one way or the other,
just saying you need to think about this and what's going to happen and figure out how you're
going to respond to it. Because if you get caught at the point where this stuff takes over and you
have not thought about it, it's going to be really hard to deal with it. I also believe that as hard
as it was at the time of the industrial revolution to imagine the jobs of computer programmers
working with big compute, it's hard for us to sit here and think about what the jobs
on the other side of this will be. But human demand, desire, creativity seems pretty
limitless. And I think we will find new things to do. Betting against that has always been a mistake.
And I think everybody who knows anything about AI has a certain place on kind of the viewpoint spectrum, be it more optimistic or
more pessimistic. And while we have some ideas of what AI is going to look like, we don't know
exactly what a world filled with AI will look like yet. I think it's very hard to think about
what the world definitively looks like when computers are more intelligent in some ways
than humans or when computers can
do most work that humans can. So the only prediction I can make with confidence is that
things will be very different. And anyone I think who says we're going to keep everything the same
is lying. But although change is inevitable, we can work really hard to make sure the future,
although it's guaranteed to be different, is better.
And I think that's a good cliffhanger to leave this episode on.
And then we'll pick up next time.
Yeah, we still have a lot more AI movies to go through.
Yeah, I don't know what I was thinking when I thought that we could talk about all of
them in less than an hour.
That's not going to be a thing.
No, after all, we're not AI.
And if you want to check out some of the movie trailers we've been talking about, visit bigcompute.org. Yes. And you can also see Sam Altman's full
talk there. And with that, tune in next time to hear more about where AI is going compared to
where Hollywood thinks it's going. I feel like I'm like a Saturday morning advertisement.
Tune in next time.
Yeah, we're selling, uh, weight loss pills on a radio station. That's what it comes down to.
And to help spread the word, you can leave a review for us on Apple Podcasts or wherever you get your podcasts, or tell a friend. And always remember to use MFA and 3-2-1 backup. Stay safe out there.