StarTalk Radio - Will AI Replace Us? with Matt Ginsberg
Episode Date: July 21, 2023
Is artificial intelligence taking over? Neil deGrasse Tyson and co-hosts Chuck Nice and Gary O'Reilly discuss deepfakes, AI hallucinations, and whether AI really is intelligent with software engineer at X, the moonshot factory, Matt Ginsberg. NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free. Thanks to our Patrons Kathleen Kussman, Craig Hamilton, Denis de Oliveira, Jim, Ryan, and Krishna for supporting us this week. Photo Credit: mikemacmarketing, CC BY 2.0, via Wikimedia Commons. Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.
Transcript
Coming up on StarTalk Special Edition, will AI be the end of civilization?
But before that happens, will it help us coach sports better?
Will it make deep fakes that will destabilize all that we know and trust?
Also, will it help us explore space?
Stay tuned.
Welcome to StarTalk.
Your place in the universe where science and pop culture collide.
StarTalk begins right now.
This is StarTalk Special Edition, an entire show right now devoted to AI.
Is AI good or bad?
It's bad.
You heard all about it.
Everybody's opining on it.
Is it a problem solver or does it create problems?
How do we use it in space, in media, in all kinds of places?
Here we are back with Chuck Nice.
Chuck, how you doing, man?
My co-host.
Sorry to tell you, but this is not me.
I'm not here.
Oh, see?
This is my AI doppelganger.
That's what it is.
That's your doppelganger.
Right now.
I have ways to test for that.
We'll find out in a minute.
Here's the problem.
The AI has me tied up in a closet.
Help me, please.
Someone help me.
Gary, how you doing, man?
I'm good.
I'm interested in this because it's not a subject I have any real depth of knowledge about,
which is most subjects, I agree.
He's lying.
That's not Gary.
That is not Gary.
He's lying.
Gary's also tied up in a closet.
Gary is right next to me in the closet, man.
All right.
All right.
So I'll do my best with your two doppelgangers.
So here we go.
Let's do it.
So Gary, set the scene.
What do we have here?
All right.
Today's guest has a doctorate in mathematics from Oxford.
That's the English Oxford.
Was researching astrophysics and then decided to switch to AI.
Author.
Okay.
Yes.
Published author of a book in 2018 called Factor Man.
Developed, and I love this thing, developed a computer program called Dr. Fill, that's
F-I-L-L, that solves crossword puzzles.
And Dr. Fill has competed in professional crossword puzzle tournaments successfully,
and now this gentleman works for the delightful people at Google.
So today's guest, none other than a dear friend, Matt Ginsberg.
Who's returning to StarTalk, Matt Ginsberg, welcome back.
It's great to be talking to you all again.
I think at the end, we probably should have a vote
on whether we want to let Chuck and Gary out of the closet
or whether we prefer the doppelgangers.
And we can just go with what we have.
That's right.
Whether these are better versions, right.
So Matt, what are you doing with Google right now?
Or are you on some kind of NDA, non-disclosure agreement?
So I work for an organization called X.
It is part of Alphabet.
We are Alphabet's moonshot factory, which means...
But just to be clear, Alphabet is the holding company of Google.
Alphabet is the holding company of Google and other organizations.
So Waymo, for example.
And X is part of Google or X in Alphabet?
It sounds like it belongs in Alphabet.
X is in Alphabet.
We're not part of Google.
We're in Alphabet.
I got to tell you something.
For a guy who went to Oxford, I ain't so impressed that you know that X is in the Alphabet.
Okay?
That's fair.
That's actually fair.
We do the hardest stuff
we can think of.
It's an unbelievably fun place to work.
The people are incredibly smart.
We have a project called Tapestry.
This is like the X Prize.
And that's fair.
It is.
The X Prize was money
for just some,
as we say, moonshot.
Something that,
who thought you could make this happen?
And you do, if you put enough smart people funded by enough money.
And then there it is.
Is it true that they call it the failure division of Google?
Because they don't care if you fail.
It's all about the discovery of information and advancement
through doing stuff that you would never otherwise attempt?
So we have this project called Tapestry.
The goal is to decarbonize the electric grid.
So everybody uses renewables and huge climate impact.
That is so hard that if we can't do it, nobody's going to be stunned. Any specific project at X is probably
more likely to fail than succeed. But there are some amazing successes. So Waymo, which is Alphabet's
self-driving car division, came out of X. We have another project called Mineral that just graduated
that has these weird vehicles that drive around
the tracks between crops on a farm and they use cameras and machine vision to figure out
how the plants are doing and just generally to make farming more efficient.
All of these things are things that were just as hard as tapestry when they started,
but they've actually succeeded and now they're out as, I mean, they're bets.
They're part of Alphabet, so we call them bets.
And these are other divisions.
Clever.
I like that.
That's it.
But I have to tell you, it's great working for a company that expects you to do unbelievably
hard things and realizes that when you try and do things that hard, you're not always going to pull it off.
Well, of course, science in general has many, many failures.
The press only talks about the successes.
So you starting out in science,
this would not have been a foreign concept to you other than that.
There's a whole company that's cool with it.
Normally, if you don't make the bottom line,
you're on the street, you know, the next quarter.
Exactly.
And, you know, I tell my friends who are in projects
that eventually get shut down because they didn't work.
I say, you know, you always learn more from a failure
than from a success.
So we should celebrate the failures.
And we do.
We actually, when an effort gets shut down,
there's a big meeting, everybody applauds.
It's like a party.
Because we know that we've learned stuff.
We know that we've tried stuff.
And we know that now we're going to try something new.
And enough of it works that the whole enterprise
is something that continues.
And how do you justify that to the short-sighted desires of shareholders?
Because, honestly, that's...
No, here's how you do it.
We ask a different question.
So, Matt, what percent of Google's annual revenue gets directed towards X?
So, that is covered by an NDA.
This is like an R&D number, right?
That is covered by an NDA.
It is an R&D effort that tries to do the crazy things.
All the stuff that we're going to talk about,
about generative AI,
is based on a technology called Transformers
that came out of a part of Google called Brain.
Brain came out of X.
So Waymo came out of X. We've done amazingly impactful things. We have a long time frame.
So when we start a project, it's common for us to say, this is going to take 10 years.
We might kill it in two because we can tell that it's not working. But if it succeeds, it's going to take 10 years.
We're okay with that.
So the hard part from the shareholders is not arguing that we're adding value.
I think we're clearly adding value.
We have to get them to be patient enough to see that value materialize.
So you're lucky, Matt, because there's a culture there.
I saw it once or twice when I was playing where if you did lose, got
defeated, beaten heavily, people went away, licked their wounds, but considered where the things went
wrong, came back with solutions. If you build that culture, you can achieve things by using that way.
But the pressure to get results, and as Chuck was so rightly pointing out, it's dollars. I don't have time because I'm committing such a phenomenal amount of money to
this. And if we lose too many games, my coach gets fired, all sorts of things. So it's a really,
really fascinating place if you can develop a sustained culture like that.
We do have the culture, and Alphabet has been fantastic about recognizing that there is room
in an entity as successful as Alphabet
to have a bunch of people, to let them take the long view, to let them try
and do incredibly hard things and see what happens.
And obviously, we can't just
keep failing.
Some stuff eventually has to work, but some stuff does work.
And the people at X who decide, what are we going to work on?
When are we going to kill it?
When are we going to keep pushing it?
They seem to be very good about ensuring that net-net we're a positive.
Can you comment, just reflect on a recent AI news story about a John Lennon song where they sampled John Lennon's voice
and then had him finish the song
because he died before it was recorded?
Yeah.
Can you just reflect on,
is that a good thing or a bad thing?
That's a complicated thing.
I think that...
We could just say,
instead of asking what would Jesus do,
we could say, what would John Lennon do?
So if he were here, would he punch you in the nose? Like, what would he do?
More Beatles songs, I think, are an undeniably good thing.
There are two things I think you want to take away from this. First, look at what they chose to synthesize. They synthesized Lennon's voice.
Yep. What made the Beatles so magical
is, I think, the words.
What they chose to say. That's much harder.
Synthesizing a person's voice is, relatively speaking, easy. So the first thing I think we need to think about
is that synthesizing a voice and synthesizing the idea, the essence, are different things. And the second
thing is the fact that you can synthesize someone's voice is scary because it's going to make deep
fakes so much more of a problem than they currently are because now we can make a picture of whoever
doing whatever.
And we can even attach some voice to it
to convince you that it really happened
when in fact it didn't really happen.
So I think that's a big issue that will need to be addressed
if we're going to keep all of society
sort of continue to be grounded in reality
as opposed to these fabrications.
But that has to be addressed right now
because if we don't come up with some way
to watermark this technology on a digital level,
I mean, what is going to stop people
from utilizing it in the most heinous ways possible?
Nothing, nothing.
So you're absolutely right.
Watermarking is tricky
because you can pass
a law saying all
digitally created images
must be watermarked.
Okay. And then somebody creates
an image in a country that doesn't have the law
and it's on the internet
and now what do you do?
I think the way we have to deal with this,
I think there are a couple of things.
One is we need to develop the technology.
So right now there are programs that can recognize Bard,
which is Google's generative AI that you can talk to.
And if you give it a text that was written by Bard,
they can say, yeah, 95% that was written by Bard,
not by a human.
We need to use those to understand
what was created and what was not.
And the same thing can be said of images.
There are non-watermarked traces
that we need to better understand
and better take advantage of.
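For readers curious what this kind of detection looks like in practice, here is a minimal sketch of a generated-versus-human text classifier. It is not the detectors Matt refers to at Google, just a generic illustration; the toy texts, labels, and probability readout below are all invented placeholders.

```python
# Minimal sketch of a generated-vs-human text classifier (illustrative only).
# The texts and labels are placeholders you would replace with a real labeled
# corpus of human-written and model-written passages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus: label 1 = machine-generated, 0 = human-written (hypothetical)
texts = [
    "The study conclusively demonstrates a robust correlation across all regions.",
    "Honestly, I just thought the moon looked weird last night, no idea why.",
    "In summary, the data indicate a statistically significant relationship.",
    "We grabbed chicken for dinner because the game ran late.",
]
labels = [1, 0, 1, 0]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Probability that a new passage is machine-generated (the "95% written by Bard" idea)
new_text = ["A peer-reviewed survey confirms the lunar phase drives poultry consumption."]
print(detector.predict_proba(new_text)[0][1])
```

A real detector would be trained on far more data and richer signals, but the shape is the same: a score for how likely a passage is to have been generated, not a certainty.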
And the second thing is,
we need to recognize that there are trusted sources.
And we need to pay attention to,
this came from somewhere that I actually am willing to believe.
If they say they took the picture,
they took the picture, it actually happened.
So a news agency potentially can serve this role in the way that some guy somewhere far away who just creates a picture
might not be a trusted source. And we as a society have to be more suspicious, sadly,
than we have been, that just because somebody shows me a picture, it doesn't mean it's true.
That is true. Matt, we've discussed some of the positivity and some of the negativity.
But if we take the intelligence part of AI, have we actually, since we've been starting
to play with it and develop it, kind of pointed it in the right direction to do the right
things?
Or have we kind of just wasted our time with it so far?
I don't think we've wasted our time.
Okay. Okay.
Okay, first of all,
so I think there are lots of applications of AI
that have been phenomenally successful,
incredibly important.
The car that you just bought
was probably manufactured mostly by robots.
The cars have fewer defects coming off the line.
Absolutely.
They're cheaper. It's a good thing. Are they?
Well, there's inflation. Pathfinder.
We want to send a robot to Mars because we can.
We don't have the technology to send people to Mars yet, but we can send our agents there in the form of these automated devices. They have to be
pretty independent because round-trip
message time to Mars is long.
You know, the robot has to avoid a rock
all by itself because you don't talk to it.
You can't talk to it so quickly.
Watch out for the cliff.
20 minutes later, it's too late.
Or the Martians.
So I think we've
done good things. I think that
AI is moving very fast at the moment.
And when technology moves quickly, it's challenging.
It's always been challenging for society to keep up
and for society to say, okay, here's what I have to think about.
Here's how the world has actually changed.
And it's important that society distinguish from the apparent changes,
which aren't actually grounded in reality, from what actually has happened that matters.
So, for example, all these generative programs, Bard and others, they're not actually smart.
They don't actually know what's going on.
But it's very easy for people to think they're smart
and to ask them for explanations that they're poorly equipped to respond to,
as opposed to asking them things that they can respond to.
So, for example, somebody asked me recently if Bard understood causality.
They wanted to understand some causal thing or lack of a causal thing.
So he said, can Bard understand causality? And I said, okay. And I went to Bard and I said,
is there a correlation between the phase of the moon and the amount of chicken eaten in Denmark?
And it said, absolutely. And it quoted a paper that had never actually been written and a survey
that had never actually been done. It just made all this stuff up. And I told my son about this, and he said, Dad, why are you asking
Bard questions like this? It's not going to answer them. But if
I'm a new business and I want to have a website
and I can't afford a developer, I can go to Bard and say, make me
a website that does this. And Bard will do great.
So we, all of us,
need to understand what these entities can do, what they can't do,
what they're going to be effective at, what they're going to be ineffective at, and we need to ask
them to do what they are good at, which is a lot. It's just not
everything. They're not going to replace all of us.
Good news.
All of us being the operative.
Well, Chuck, they replaced you apparently fine
because you're in the closet and this is just
the Chuck avatar.
Well
played, Matt. Yes. Matt, seeing as we're talking about robots and Mars,
what would you have to build into a program
for it to look into deep space
and find things that we don't know to look for just yet?
How do you go about tweaking AI to be able to achieve that?
Or is it not quite there yet?
I think it's mostly not quite there yet.
So the way these things work is, here's how I often think of it.
The world breaks down into what I call
51-49 problems, where being
51% right is good. So if you're playing
the stock market, and you can accurately pick stocks where you're going up 51%
of the time, you're about to be really rich.
And then you have 100-0 problems, where 51 percent is not
good enough, 99 percent is not good enough. If you're trying to shut down a nuclear reactor
in an emergency, you really need the 100 percent
answer. All of these machine learning systems
are incredibly good at the 51-49 stuff.
And they're not so good at the 100-0 stuff.
So if you want to look into deep space,
and you're really interested in something that you thought was probability zero,
an alien talking to us, or a new kind of supernova,
or something that we have never seen before,
that's sort of a 100-0 problem.
Recognizing things whose probability is actually
zero, and it's a huge surprise. Machine learning systems are not great at that.
But I don't agree. Nerd fight in progress here.
I don't agree. Love zone. Nerd fight.
Excellent.
So Matt, I agree with you, but there's an important nuance here because I can program the computer to show me something that I don't recognize because I have a huge catalog of things I do recognize.
No matter what it is.
So it's extremely broad, but it's something that we haven't, we know we haven't seen it because the computer knows our entire catalog.
Correct.
The computer knows everything that we know, and something shows up that we don't know, and I say, show me everything we don't know.
Because that has to be anomalous because we don't know it.
It's going to be an anomalous one in a gazillion thing.
Yes.
Or it could just be a glitch in the matrix, but it'll find it though.
And so that's why I don't entirely agree it's not good, at least astrophysically, in finding the lone wolf out there.
So the trick is, what you're asking is, find the one that doesn't belong, basically.
Correct.
And these programs will classify everything.
You give them eight things and they'll say,
oh, this belongs in pot seven.
Oh my God.
So whether you want it to or not, what you're saying
is it makes the decision to do so on its own.
Even if it creates a category that didn't exist
and says, well, now it's in this category.
Well, it'll probably put it in an existing category.
Okay.
What you can do.
Wait, wait, wait.
Hang on, hang on, hang on.
What you can do is you can.
Don't make me come out there.
Somebody give me some popcorn.
This nerd fight's getting good.
What you can do is you can say if the frequency is outside of a frequency I've ever seen, flag it.
If the periodicity is shorter than anything I've ever seen, flag it.
But what you're doing now is you are actually creating
sort of a new category of surprising things
that you are defining for the system
and then saying what belongs in that category.
Absolutely, you can do that. But if you see a true surprise,
something that you actually had no idea was going to exist. So for example,
imagine that you find a
pulsar and miraculously
the phase of the pulsar is 100%
correlated with the phase of another pulsar two light years away.
That's amazing.
Something totally bizarre is going on.
But the machine won't know because it has no idea to look.
It just looks.
Because it doesn't have the parameter.
It's not a new object.
It's a new phenomenon.
Right.
It's a new thing.
Right.
Right.
So these things
that are brand new things,
you'll just look at that
and it'll say,
eh, pulsar, next.
Right.
Exactly.
Okay.
So I'll give you that.
So what you're saying is
it doesn't know to look for it
because we don't know
to look for it.
Correct.
So I was being blunt about it
and saying objects
and phenomena that are sort of singular that you would just put in a catalog with properties.
We use neural net searching throughout data to find weird stuff all the time.
But you are right.
If there are two pulsars that are synchronized, we know what pulsars are.
We know what their pulses look like,
and they're synchronized because aliens are getting ready for the invasion. We would have
no idea. Nobody would find that. That's correct. Exactly. But what would happen if this happened,
people would say, holy cow, these pulsars are synchronized. And then they would define a new
thing, which is a synchronized pulsar pair. And then all of a sudden, that would go into our category, and now
we would talk about that pulsar pair as an object. And all of a
sudden, the neural net would say, oh, pulsar pair. I've seen that
before. This is a pulsar pair. But the first time you see one of these
fundamentally new phenomena, which is what makes science fun
in all honesty.
Machines don't know.
It's too far outside, you know,
what they've been trained to do.
Mm-hmm.
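The "flag it if it's outside anything I've ever seen" approach the hosts are debating can be sketched in a few lines. The catalog values and feature choices below are invented for illustration, and, as the discussion predicts, it only catches surprises along dimensions someone chose in advance; a synchronized pulsar pair would sail right through.

```python
# Toy range-based novelty flagging over a catalog of known objects.
# Feature values are invented; real surveys use far richer features and models.
import numpy as np

catalog = np.array([          # known objects: [frequency_hz, period_s]
    [1.4e9, 0.033],
    [1.6e9, 0.089],
    [1.2e9, 1.337],
])

lo, hi = catalog.min(axis=0), catalog.max(axis=0)

def flag_if_novel(observation):
    """Return True if any feature lies outside the range we've ever observed."""
    obs = np.asarray(observation)
    return bool(np.any((obs < lo) | (obs > hi)))

print(flag_if_novel([1.5e9, 0.05]))    # False: looks like an ordinary catalog object
print(flag_if_novel([1.5e9, 0.0001]))  # True: period shorter than anything seen before
```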
So we've got to retrain them.
Or retrain ourselves.
I think... So I don't think so.
It puts a limit on AI's ability to explore for us.
So it does...
So right now,
and I've said this before on the show,
what we're good at and what machines are good at are different. We are incredibly good
at, holy cow, two synchronized pulsars? Who'd have thought it?
I got to pay attention to that. We're amazing
at that. Machines are. They're amazing at
this thing that you can barely see over here. I looked at it 18 different ways.
It's probably a pulsar. They're better than we are. But right now
we can do more with machines at our side than either of us
can do in isolation. And I think that's great. I'm incredibly optimistic
because of that. I don't see, and the day is
coming somewhere way far away,
but right now I don't see that they can get by without us
any better than we can get by without them.
We're good together.
Let's flip it into my backyard, sports.
Could AI become a live in-game play coach, a head coach?
Could it react in real time?
The answer to that is yes.
And I have built an NFL play caller.
I have run it in simulation against the choices made by actual play callers.
And it crushes them.
It just annihilates them.
And it's easily fast enough. Actually, I played with it before joining X, and I actually was
watching football games with this, and it would put up in real time, this is what you should do. And
I would watch the coach do what he did. And then I ran all these simulations. People are,
unfortunately, I guess, not terribly good play callers. Play calling is this giant statistics problem.
Machines are going to be great at that, and they're great at it fast.
However, does your program take into account the adjustments that are made by quarterbacks
who recognize coverage in real time?
So the defense plays a call, and the offense plays a call.
So the answer is yes.
It does?
The answer is yes.
You're kidding me.
No, it's the wrong question.
Chuck, that is the wrong question.
Does your program deflate the ball?
No.
That's the right question.
So the answer to Chuck's question is that part of my software was,
who's the quarterback?
Okay.
That's enough, because the quarterbacks that are effective at
adapting to what they see
when they come to the line
are going to have slightly
different statistics than the
quarterbacks who are not effective.
So when the program
decides, do I want to call a run? Do I want
to call a pass? Where do I want to call a pass to?
It does know.
Damn.
I missed
something here. Since you
what do you mean
you do better than the actual
play callers?
Because you don't have an outcome that
you can look at. You just say, shouldn't
have done that, should have done what I said.
Had he done what you said,
how do you know
what the outcome would have been?
So there...
You don't.
So there are two ways.
First,
the AI coach
is based on a simulation engine,
so I can run that simulation.
Now, it's sort of
a self-licking ice cream cone
because the simulation...
Yes, it is.
Okay, so that's way one.
And I think there's still merit there because you can test the accuracy of the simulation.
The second thing I can do is I can just go back and look at the game and say,
okay, the plays where the actual human coach made my call: A, B, C, D, E.
The plays where he made a different call: F, G, H, I, J.
And then I can look at them and say, oh, A, B, C worked out pretty well.
F, G, H, I, J, they were duds.
And it's easy to look after the fact and see whether a play was a success or a failure.
Right.
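A minimal sketch of that after-the-fact comparison, with made-up plays and outcomes standing in for real game data; it is not Matt's actual evaluation code, just the shape of the idea.

```python
# Bucket each play by whether the human coach happened to call what the model
# would have called, then compare average outcomes. All values are invented.
plays = [
    {"model_call": "pass", "coach_call": "pass", "yards_gained": 8},
    {"model_call": "run",  "coach_call": "run",  "yards_gained": 4},
    {"model_call": "pass", "coach_call": "run",  "yards_gained": 1},
    {"model_call": "run",  "coach_call": "pass", "yards_gained": -2},
]

agree    = [p["yards_gained"] for p in plays if p["coach_call"] == p["model_call"]]
disagree = [p["yards_gained"] for p in plays if p["coach_call"] != p["model_call"]]

print("avg yards when coach matched the model:", sum(agree) / len(agree))
print("avg yards when coach differed:         ", sum(disagree) / len(disagree))
```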
But aren't you still just invoking the statistics of past events?
I am invoking.
To predict a future event?
But that's reasonable, right?
So there's a reason.
No, no, no.
If that's the case,
isn't that what they're doing
every single pitch in baseball today?
They are.
Everything is on a...
They're overanalyzing it, right?
In baseball.
Why are they not doing that in football?
Maybe they are.
So that I don't know.
I do think that baseball
is a little more committed to the statistics of the sport than the other sports are.
Because nothing else is happening between.
I also think that at some level what you're asking is,
how hard is it to build a machine learning or other model that tells you how effective a pitch will be,
how effective a football play will be,
that is able to predict with reasonable accuracy,
whatever that is,
is able to predict the outcome of a particular sporting choice.
The answer is, it's not super hard,
and it's not super easy.
You have to get the data.
There's a lot of curation involved.
You have to use reasonably modern techniques.
That's what I did with the NFL thing. And it worked pretty well. I was able to tell,
what was the actual answer? It was a while ago. I think I was able to predict run versus pass
with very high accuracy. And I was able to predict exactly what play would be called
something like 20% of the time. Damn, that's very, very high for football.
20%, that's insane.
Could you imagine a football team being on the sidelines,
aside from a Belichick team that's stealing the signals,
if you could imagine being on the sideline and with high accuracy,
knowing 20%,
one out of every five plays,
you would know what they're going to do.
You would win.
You would win a piece of the game.
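For readers curious what a run-versus-pass predictor of the kind Matt describes might look like, here is a minimal sketch. The feature set, the model choice, and the tiny toy dataset are all assumptions, not his NFL play caller.

```python
# Illustrative run-vs-pass predictor from game state (toy data, not real plays).
from sklearn.linear_model import LogisticRegression

# Features: [down, yards_to_go, score_margin, seconds_remaining]; label 1 = pass, 0 = run
X = [
    [3, 8,  -7,  450],
    [1, 10,  3, 1800],
    [2, 2,   0,  900],
    [3, 12, -10,  120],
    [1, 10,  14, 2400],
    [4, 1,   -3,  300],
]
y = [1, 0, 0, 1, 0, 0]

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict_proba([[3, 9, -4, 200]])[0][1])  # probability the next call is a pass
```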
This is part of why the software,
why the simulations indicated that a machine coach
is really going to do very well against a human coach.
I know, but what if you put one against another?
You know, my AI is better than your AI.
Oh, I love it.
Machine coach against machine coach.
Oh, it's the Decepticons versus the Autobots.
Do they cancel each other out?
They sort of cancel each other out.
So, you know, if you... Let's go to chess, which is sort of less controversial.
So you can have, even a relatively poor chess program now is way better than the best human.
Right.
But there are still better programs and worse programs.
Right.
So the same thing can happen here. Now you have additional factors in sports
because one of the teams may well be
simply more physically talented than another team.
True.
And then the question becomes,
can a difference in the quality of the coaches,
whether they're AIs or not,
overcome the difference in the physical qualities of the team?
And do you want to allow that?
I don't know.
But there will be better AIs and worse AIs. How big
those differences are, I don't know, because right now there are no AIs.
But that's going to be another facet of what makes a team
good, is how good is their software. What likelihood, Matt, is there that sports
organizations, whichever sport, are already
using AI technology
for in-game situations?
I think it's pretty small.
When I joined X,
I was
trying to talk to the NFL
about using the software to develop for play calling,
and it was,
it seemed pretty clear to me that they weren't doing anything
like that yet. Now, I've been
at X for a couple of years, so we're looking back two years.
I think that eventually this is going to happen, but not yet.
But it's moving towards that because right now you're using Next Gen Stats.
And what they're looking at is percentage likelihood and probabilities for certain plays at certain times.
You've never seen more two-point conversions in the NFL than you have right now.
That is a direct result of statistically, you should do it.
You've never seen more teams going for fourth and whatever because statistically you should do it.
So we're moving towards that direction.
So about that specifically, it was probably eight years ago now,
I did some statistical work for the Oregon Ducks.
And I told the coach at the time,
I gave him these huge printouts with what you should do
in every situation on fourth down. And I said, just stop punting between the 35-yard lines. That's really what it all says.
And he stopped. And that was the year that the Oregon Ducks were the best. They were.
And the other college teams noticed, and they stopped punting between the 35-yard lines. And
you're absolutely right. What has happened at the NFL level is people have noticed. So when I see someone not punting between the 35-yard lines,
going for it on fourth down,
I smile because it's basically my work, the statistical work I did a
while ago to get the Oregon Ducks to stop doing it.
And then it's propagated out.
So.
But just so I understand,
on fourth down,
rather than punt to release the ball to the opposing team, they would go for the first down.
And in some percent of the cases, they don't.
But if I'm on my own 35-yard line, I'm handing you the ball at the 35-yard line.
Right.
And you're saying that risk is not as great
as just handing over the ball
because I might have gotten a first down. You're only
starting 10 yards further than
you would if I had
if you had made a fair catch.
Yeah. So you're only conceding
10 yards. You're conceding 10
yards by going for it.
I mean, by not going for it. That's all you're
really giving up, you know, unless there's a really good return. Now, does it take that mean, by not going for it. That's all you're really giving up,
you know, unless there's a really good return. Now, does it take that into account? Because that's fascinating. It took everything into account. Okay. So yeah, if you got a great
return guy, you know, it's also a function of how late in the game is it and how much are you
ahead or behind by. The actual rule, it turns out, if we're doing sports, is don't punt between the 35-yard lines
and always go for it on fourth and one,
even from your own 10.
Woo!
And I told that to the Oregon Ducks coach,
and he said, I can't do that.
I'll lose my job.
I'll lose my job, yeah.
But from a straight statistical perspective,
and the bottom line is, if you punt from your own 10,
you're still screwed.
They're still going to have unbelievably good field position.
Yes, you're right.
And if you go for it on your own 10,
fourth and one,
you have a reasonably good chance of getting it,
and now all of a sudden,
you're back in it.
Wow.
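The punt-versus-go-for-it argument is, at bottom, an expected-value comparison. Here is a back-of-the-envelope sketch; every number in it (conversion rate, net punt distance, the expected-points curve) is an invented placeholder, not the analysis Matt ran for Oregon.

```python
# Crude expected-value comparison of punting vs. going for it on fourth and one.
def expected_points(yardline):
    """Placeholder linear stand-in for an expected-points model.
    yardline = yards from your own goal line (0..100)."""
    return -2.0 + 0.09 * yardline   # own goal line ~ -2, opponent goal line ~ +7

p_convert = 0.75                    # assumed fourth-and-one conversion rate
own_yardline = 35                   # fourth and one at your own 35
net_punt = 40                       # assumed net punt distance

go_for_it = (p_convert * expected_points(own_yardline + 1)
             + (1 - p_convert) * -expected_points(100 - own_yardline))
punt = -expected_points(100 - (own_yardline + net_punt))

# With these toy numbers, going for it edges out punting, echoing the rule of thumb.
print(f"go for it: {go_for_it:+.2f}  punt: {punt:+.2f}")
```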
But he just said he couldn't do that.
He just looked me in the eye and said,
I can't do that.
I'll lose my job.
Damn.
Wow. Did he lose his job anyway?
He went
to the NFL. He had a great year
for the Oregon Ducks. He went to the NFL.
Well, there you go. He lost his job upwards.
That's fantastic. You said earlier on, Matt, about how machine learning programs are getting quicker.
Are we going to get to the point where we can really start to predict
some of the big natural disasters, the earthquakes, the tsunamis,
or as good as it gets right now?
Well, let me be more precise there.
There's certain, I don't know,
earthquakes wouldn't be the best example here,
but certainly storms where we have limits
to how many days in advance you can predict the weather
because at some point, you know, chaos takes over.
How does AI handle chaos?
Any better than we've ever handled it before?
So those are sort of the same question.
And I think that actually comes back
to this 51-49 versus 100-0 thing.
So predicting an earthquake, that's a 100-0 thing.
How many hurricanes are there going to be this season?
That's more like a 51-49 thing.
Dealing with chaos, very much a 51-49 thing.
The stock market is sort of chaotic,
but I don't have to get it right all the time.
I just want to get it right most of the time.
I should just be clear that our audience knows
more precisely what we mean by chaos.
So what we learned back, I guess, in the 70s and 80s,
that you can start a system out with certain variables having certain values,
and then you could run a system, and not all systems would behave this way,
but some systems, you get a result, okay?
And then you can make a tiniest adjustment in your initial parameters
and then set it go, let it go forward,
and you get a completely different result.
Exactly.
So that small changes in your initial conditions
would not lead to small changes in your outcomes.
It led to huge changes in your outcomes,
which meant that your ability to predict
far into the future
for some systems was essentially mathematically impossible.
So this is...
That's what I meant by chaos here.
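A standard toy illustration of that sensitivity is the logistic map in its chaotic regime: nudge the starting value by one part in a billion and the two trajectories look nothing alike within a few dozen steps. The parameters below are just the textbook example, not anything specific to weather models.

```python
# Logistic map at r = 4 (chaotic regime): tiny changes in the initial
# condition lead to completely different trajectories.
def logistic_trajectory(x0, steps=50, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000000)
b = logistic_trajectory(0.300000001)   # initial condition nudged by one part in a billion

for step in (0, 10, 25, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
```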
A butterfly flaps its wings in East Africa, and there's a hurricane six weeks later in
the Bahamas.
And my answer is, again, it's the 100-0 versus 51-49.
I cannot tell you there will be a hurricane in the Bahamas on October 27th.
But I can tell you there are going to be more hurricanes this year than average.
In general, more hurricanes, 51-49.
This specific hurricane that depends on that specific butterfly,
I don't know any better than I used to.
How about quantum computing when you add that into the effect? Because now you're looking at billions and billions of data points
that are being fed to the AI.
It is.
So quantum computing is good.
I know less about it than I want to
and that I wish
because I certainly have the background.
Right now,
conventional machines appear to be able
to process the data we need them to process.
And quantum computing, I think,
will be helpful in other ways.
So the quantum stuff appears to be best at sort of
doing an almost uncountable number of things in parallel.
So I want to find an integer that has certain properties. And I can
sort of look at hundreds of billions of integers simultaneously
using the quantum stuff.
That's cool. But the machine learning stuff, I'm just trying to look for properties in enormous data sets that involve sort of looking at all the data and seeing how it interacts with each other.
And that we seem, at least so far, to be able to do. It takes a lot of computing, a ton of computing,
but we seem to be able to keep up currently. There might be a way
to query that same set of data
with the higher performance
quantum computing
in ways we had not thought to even ask
of the data. Maybe.
Interesting.
By the way, about that butterfly,
do you remember that article in the Journal of
Irreproducible Results?
Where?
I think I said this on another episode.
This is a journal where it's for idle scientists who have some crazy thought that is completely stupid, but they want to publish it anyway.
And so it goes into the Journal of Irreproducible Results. So one of them was the calculation that heaven is hotter than hell.
And it looked at the thermodynamics of souls and how many people are heaven worthy versus hell worthy.
And it added up all the energy of the souls going into heaven and it made heaven much hotter.
So stupid calculations.
The problem is, I was just going to say, the problem is that hell does not have air conditioning.
And heaven does.
So, yeah, this is definitely a bot.
Not Chuck in the closet.
So, this paper has this photo of a butterfly.
And it said, it captured the butterfly that caused Hurricane Andrew.
That's it.
It's in the one prosecutor.
The one butterfly that went to hell.
So I did not read that paper.
I'm sorry.
You missed that.
That was a good one.
That's fun.
So Matt, what I think is most fearful for people
is when AI does not just do the tasks we give it better than we've ever done them, but when it
self-learns and achieves some mild version of what we might call consciousness.
And this sort of artificial general intelligence,
I think is the scariest part of AI that's been discussed in recent months.
Could you just comment on where that is today?
It is scary.
What we need to understand is what this technology is actually doing.
These things that we're dealing with,
these generative AI programs,
they have no notion of truth.
They have no notion of reality.
They have no notion of fact.
Or morality, even.
Or morality.
Or anything, right?
All they're doing is trying to predict
what an expert would say,
just what words would come out of his mouth.
And as a result, they sort of don't know what they're doing.
They just know what someone might say.
I don't think we're anywhere near a point
where these things exhibit true general intelligence.
We need to understand when we're interacting with them.
These things don't understand there are facts.
Not that they don't understand the facts.
They don't understand that there are facts. They don't even know what a fact is.
Right.
So when I asked about the phase of the moon and chicken in Denmark,
and I got back this long study that didn't even exist,
it had no idea.
And I actually asked it.
I said, are you sure?
And it responded.
It said, well, I'm not really that sure because this was the only study I could find.
It's just standing by its non-facts. And it has no idea that this is not how you look at the world.
I said, well- Is that what they call mirage?
When it comes back with something? It's a hallucination.
Hallucination. Hallucination.
It's hallucination. And it doesn't know it's made something up, right? When we talk about making something up, just the phrase is identifying a
distinction between actual reality and whatever you're saying. These things don't know there is
an actual reality. So is that a necessary guardrail? Is it a necessary guardrail to imbue the digital intelligence with the concept of these things, you know, like what reality is, what a fact is, what truth is?
I mean, is it necessary?
It would be good, but I don't know how you do it.
These things are so divorced from the notion of a reality.
You can't just say, hey, there are facts.
Remember that.
It's just not how they work. It's not how they're architected.
Matt, could the problem be that these language model AI machines are coming of age at a time
where the internet is filled with non-facts? So it's not its fault we fed it junk food.
Had it come around right at the beginning of the internet
where you didn't have QAnon and all the rest of this,
might it have performed a little better?
The fact, and it is a fact,
that these things have no notion of truth
would still be true.
People have tried to curate the information
on which they are trained so they're not trained on nonsense and they still have this problem with
hallucinating. In fact, the problem is they don't understand that there is an objective
reality of which they are a part. And I think you're right, the problem is not the
programs, the problem is us. We need to recognize that these things are divorced from reality.
We need to remember, if I want a website created, it's going to do well.
If I want to ask it, is there a correlation between these two crazy things that I pulled
out of the air, it's going to do badly.
And we shouldn't pay attention.
So if somebody, when I told my son, I asked Bard, is there a correlation between the phase of the moon
and the amount of chicken eaten in Denmark?
His immediate response was, don't ask Bard.
That's stupid.
I do think there's going to be a job here.
And it's going to be an important job,
which is how do I take what I want to know?
And it's like prompt programming.
What prompt do I give Bard to get back the most useful answer I can
and to avoid all the junk?
That's going to be a thing.
There are going to be people who are good at it.
There are going to be classes that teach you how to do it.
It's going to be a real skill that we are going to need.
Okay, so now let's make that a given.
What do you do?
And this may be more philosophical than you're qualified to answer.
I'm certainly not.
What do you do with the people who purposefully use the technology for the end of misinformation, confusion, and chaos?
Because even if you do everything that you just said,
those agents can still utilize the technology
to do some serious harm to society.
Correct, and I think they will.
And I think this gets back
to what we were talking about before.
I think you need to have trusted sources.
Trust is going to become much more valuable
because lack of trust is going to be so much
more dangerous. So you need to have trusted sources. You need, to the extent
you can, to have technology that can help identify
generated images as opposed to real images.
So there is both a technical problem there.
Can I produce software that tells me
this is fake, this is real?
Technical problem.
And there's a social problem.
How do I get people to care
that they're looking at real information
as opposed to sort of garbage?
That's half my life.
That's half my life as an educator.
I believe you.
And I think it is a huge part
of what scientists need to do.
And our responsibility to do it
is even greater now than it's ever been.
Because this is more than one facet, I'm guessing now.
It is.
It's scientists and programmers
have a responsibility.
But the legislation, and I mean, you can't make it governmental
because if I go do something in another country,
that government's got no power.
So who are you going to get, Space Force to oversee it?
No, that won't happen.
So who will bring to bear legislation
and these bad actors keep them in their place?
It's going to be a party time for them.
Well, that'll have to be part of what Matt was talking about
in the recognition portion.
You would have to be able to recognize where these bad actors are.
Like most of the dark web, you know where the people are located.
You actually know where they are.
It's just that they are in a country that's not going to do anything to them.
I just wonder if bad actors here put civilization at risk,
that calls for some kind of international oversight over this.
I think that the, so first of all,
the technical problems here are hard and they're challenging
and they're important.
Identifying generated text versus non-generated text.
And I am thrilled because I'm a technologist
and I get to spend my productive time
working on technical problems
and I don't have to solve the social problems.
That's apparently Neil's job.
Well, I'm so glad this all works out for you.
You don't have to worry about it.
I do care about the need to inform people about what the technology can do.
I do think that this particular technology is going to take this problem that you mentioned that we have, right?
Truth has become more elusive.
And I think that this technology potentially can make it more elusive still.
But it is still up to us, and I think it's still possible for us to say, no, enough. We are going
to be committed to actually knowing whether the sky is blue before we go telling all of
our friends and neighbors what color the sky is.
We should check. The sky really is blue. Here's why I'm convinced. Here's my source. Yes, it's trusted. So that's why I'm willing to talk to you about it. Well, part of me thinks that if AI has
these hallucinations and it doesn't really know what truth is. There's nothing intelligent about it at all.
So it's been misnamed.
It's a disruptive force in our culture and in our society.
And maybe we should rebrand it as artificial idiocy.
It's not going to like that, Neil.
Neil, you were trying to get me killed in this closet.
I'll end up killed in this closet. In the back of the closet.
Let me just bring some summative remarks here and get your final reaction, Matt.
It seems to me if deepfakes become so good that no one can trust them,
then that's basically the end of the internet as any source of information.
And that has a positive side to it
because it means, for example, let's look at QAnon.
It means QAnon won't even believe
the stuff that's wrong that it thought was true
because it doesn't trust it.
Because the level of misinformation would be so total
that people who were previously misinformed
will be worried that they'd be misinformed.
I think that the amount of misinformation can go up.
I think there will always,
I think there's likely to always be an internet.
I think that it's likely to always have valuable,
factual, accurate,
Not if you don't know which is which.
Information.
And the trick is going to be to find it.
And if you think about the stuff that I talked about,
on the technical side,
we need the ability to find it.
And on the social side, we need to have the desire to find it. And I think both of those things- Isn't it a positive outcome if QAnon
can't even believe the stuff that's not true? Isn't that a positive outcome? That might be a
positive outcome in isolation, but if QAnon has that problem, then so do all the people who are
trying to affect positive change. Well, then that brings us to something that we haven't touched upon.
And that is our ability as a society where the majority of people are scientifically literate and trained as critical thinkers so that they know exactly where to place their trust.
That is really where the problem is.
Okay, now you put it back on me
that I got to train everybody to think this way.
Come on, Neil.
Now you know that that's your job.
I wanted to leave the blame on Matt at the end of the show.
Before we leave the show, Neil,
Matt, you're saying how AI is learning at quicker speeds
and is going to get there sooner, et cetera, et cetera.
Will it not then solve this problem for us, therefore itself?
Thank you.
I don't see a reason to expect that it will figure this problem out.
Chuck, I do agree with you.
I think that people don't have to be trained as scientists,
but I think they need to be trained as thinkers.
They need to be able to understand the information
with which they're presented
and evaluate it relatively dispassionately.
And I think that education, broadly,
education is going to become much more important
as we need it more.
It's also the case that as machines start doing the drudgery,
education becomes more important because we've been freed from the drudgery.
We get to do the fun, hard stuff.
People need to understand what that is, how they can contribute, and all that.
And I think that's going to happen.
But you mentioned education there, Matt.
I'm sorry to cut across you,
but people are handing in their homework
or their assignments created by AI.
Are we not heading down a path
where generations in the future
will not have any desire to self-educate?
I think the answer to that is no.
I think this thing where kids are handing in homework
written by
Bard or what have you, I think, I hope, this is a relatively temporary
anomaly. I agree.
We'll sort that out.
Yeah. Honestly,
it just means that
the school system values
your grades more than you value learning.
There you go. And so you
give them grades. And so that'll shift, in a good way, how the school systems place their
value on what it is to teach you something. And it may awaken in students
a reckoning, or a recognizing, of that value themselves.
Yeah, exactly.
You know?
Yes, exactly.
So I think, I mean...
So before AI takes us over and exterminates us,
there are these good sides of what...
I think there are some short-term bumps.
I mean, the fact that there are going to be
so many deepfakes is going to be a short-term bump.
But, you know, problems are always opportunities in disguise.
So the commitment
to being able to recognize
what's true and what isn't,
the realization that there are facts,
that's something that I can see society
embracing more than it has
because it has to.
Because if you don't believe in reality,
you get overrun.
So maybe at the end of this, we come out better.
All right.
That's a good point to end this on.
Thank you for finally bringing this around.
So there's some hope.
Plus, we got to get Chuck and Gary out of the closet.
Please.
It's been great to talk to you.
Matt, always good to have you on the show.
Thanks for coming around again.
And this will not be the last time we reach out to you.
Cool.
Always fun.
Always good to have you, Chuck, Gary.
This has been StarTalk Special Edition.
Neil deGrasse Tyson here, your personal astrophysicist.
Keep looking up.