a16z Podcast - a16z Podcast: The Dream of AI Is Alive in Go
Episode Date: March 11, 2016Why are people so fired up about a computer winning yet another game? Whether it's checkers, chess, Jeopardy, or the ancient Chinese game of Go, we get excited about the potential for more when we see... computers beat humans. But then nothing "big" -- in terms of generalized artificial intelligence -- seems to happen after that burst of excitement. Now, with the excitement (and other emotions) around Google DeepMind's "AlphaGo" using machine learning and other techniques to beat one of the world's top Go players, Lee Sedol, in Korea ... it's like the dream of the 1990s (and 1980s, and 1970s, and 1960s) is alive in Seoul right now. Is this time different? How do we know? a16z's head of research and deal team Frank Chen and board partner Steven Sinofsky -- who both suffered through the last “AI winter” -- share how everything old is new again; the triumph of data over algorithms; and the evergreen battle between purist vs. "practical" approaches. Ultimately, it's about how innovation in general plays out, at a scale both grand (cycles and gestation periods) and mundane (sometimes, the only way to make a product work is to hack together the old, the new, and everything in between). NOTE: The Super Mario World video referenced in this podcast is at https://www.youtube.com/watch?v=qv6UVOQ0F44
Transcript
Hi, everyone. Welcome to the a16z Podcast. I'm Sonal. And today we have two partners from
Andreessen Horowitz. We were just having an informal conversation in the hallway literally around
machine learning and AI. Steven Sinofsky, a board partner for a16z, gave a presentation
on the evolution of machine learning. And Frank Chen recently put out a tweet storm on why Google's
DeepMind algorithm beating Lee Sedol was so significant. And they were sort of talking about
like, oh my God, we've been here before. But it's not going to be all backward looking because
I think the point is that the evolution is what's, why now?
Right.
And also, you know me, I love to put things in context because, like, there's always
lessons to be learned and patterns to avoid and not avoid.
That's a key word for today.
Okay.
Well, let's start talking about those patterns.
Well, maybe let's start with the big go victory, which is it got people really fired up.
It dominated the press for a little while.
And you might be wondering, what is the big deal?
Computer program won another board game, a board game.
Like, what could be less relevant to everyday life, right?
And we've seen this before.
We started with Tic-Tac-Toe and we got to checkers and we got to chess.
And then Watson even won Jeopardy.
And now here we are talking about another board game.
So like it's kind of irrelevant to everyday life, isn't it?
And the surprising thing is, look, we've had a lot of false starts with artificial intelligence.
And the vector has always been, hey, look, now that we've won this very sophisticated board game, checkers, whatever, now we're on the verge of doing general purpose intelligence.
Right. And that has always been the promise and the false start of AI. And so the question is whether this victory with Go, which is an incredibly complex game, is different. You can't brute-force search all of the possible moves because there's just too many. Right. So people don't know. It's like a googol times a chess game.
So think of all the possible chess configurations. The number googol, not the actual search company. Right. So basically if you think of the total number of chess moves, and there's a lot of them, right — it's a big board, lots of pieces — you multiply by a googol. That's how many
possible Go board configurations there are. So you can't actually brute-force search all of the things, which is what you'd expect a computer to be able to do. It's got lots of processors. It's got this very reliable big memory. Let's just search all the possible spaces and then we'll figure out how to win because we know what all the winning games look like. And it turns out you can't do this for Go. And so now hope springs eternal again, which is: look at the very sophisticated set of techniques they used to win this game. There's deep learning and then there's decision trees and then there's supervised
learning — look at all of the techniques — and maybe now this time it really is the dawn of
generalized intelligence.
This is a massive, massive win in the world of computer science, but I knew right away,
just having kind of been around the block in academia, that people were going to start to
clamor to define its victory and to try to make it narrow.
So, like, well, it's not really AI because it used multiple techniques, some of which
aren't AI.
And the funniest thing about that is that that has, like, been the very nature of AI dialogue since the beginning.
Since the very, very beginning.
And in fact, like, and so this is where Frank and I started talking in the hallway because we were both at AI schools.
I was in graduate school in the late 80s at UMass and Frank was at Stanford.
And like, the whole thing about AI was always, if you actually could find a practical use for one of the techniques, you cross it off the list of AI techniques.
And it's like no longer AI.
And so it's just, it's part of the world of, like, defining these things that makes it exciting and interesting.
But it also is because there's been this long history of, like, promises that weren't quite made, you know, made real.
And so that's what's so interesting.
Yeah.
Great example of that is, so one of my summer internships when I was in college, actually right across the street from this building, I was at IBM Santa Teresa Labs.
And I was working on an expert system development tool.
IBM was so convinced that there were going to be so many expert systems that we needed to improve developer productivity in creating them.
And what an expert system is, it's a system that captures human knowledge.
So the overall process would be you find somebody really smart in an area, a doctor, an insurance adjuster, an oil gas exploration expert.
And you ask them, how do you find oil?
How do you diagnose a disease?
And you do hours and hours of interviews.
And you basically codify that in a decision tree, which is a classic machine learning algorithm.
And you hope that you've asked enough questions and you've captured enough of the decision tree that they can start emulating the expertise of that human.
This was 1990, and it was going to be 10 years and we'd basically interview every expert in every field and there it is.
Like the sum of those is artificial intelligence.
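The interview-and-codify loop described above can be sketched as a tiny hand-written decision tree (a toy illustration; the questions and rules below are invented, not from IBM's actual tool):

```python
# Toy "expert system": a hand-coded decision tree that mimics an
# interviewed expert's rules. All rules here are invented examples.

def diagnose(symptoms: dict) -> str:
    """Walk a fixed decision tree built from expert interviews."""
    if symptoms.get("fever"):
        if symptoms.get("rash"):
            return "suspect measles; refer to specialist"
        return "suspect flu; rest and fluids"
    if symptoms.get("cough"):
        return "suspect cold"
    return "no rule matched; ask the human expert"

print(diagnose({"fever": True, "rash": False}))  # suspect flu; rest and fluids
```

The brittleness Frank and Steven describe shows up immediately: any case the interviews didn't cover falls through to the "no rule matched" branch.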
So why didn't that reality come about then?
So basically they ran into a wall: it didn't work. It couldn't capture the knowledge; it couldn't behave nearly as effectively as a human.
So there's obvious things like there's always edge cases where it didn't, you know, it was sort of the exception that proved the rule.
The other thing was that keep in mind what computational power was back then.
We didn't have a lot of memory.
We didn't have a lot of disk.
We didn't have a lot of CPU cycles.
And so the rate at which we could do these calculations and the amount of storage we had, like, it just never worked.
It's also super interesting, though, because one that was a favorite of mine in the 80s in graduate school was an expert system for chemotherapy, which was actually done here at
Stanford back then. And the funny thing was, it turns out like there are areas where there are
a lot of rules and there are just too many for one person to keep in their head. So, you know,
with medicine, it's an obvious kind of thing. Like you go to medical school, you're a resident,
an intern, a fellow. And as long as you see like a million patients, you can figure out the right
rules, especially if you narrow it down to like, what's the chemotherapy regimen for a particular
form of cancer. But the problem is no one could remember all those things and get them
right. And, like, it's not a perfect classification, because wow, now you have to factor in
the age of the patients, the other things they do, and there's always, like, another data point that leads to
some reasoning-under-uncertainty need. And yet there are systems that worked even back then that
yielded better answers. And it's very easy to test, too, because you take all that data and then you go to
grand rounds and present your approach, and you have 50 doctors all at once saying that's a good
approach, not a good approach, or state of the art. And so immediately those systems were no longer
AI. They were just, like, computer programs that helped you do that particular thing. But it didn't
stop people from, like, you know, saying: oh, because we don't have enough
computing power, let's create a special computer that can interpret this Lisp program even better
and charge, like, $90,000 for it, and hope that we can get the performance out of it. Or
let's make better editors for Prolog so that, like, you could have more rules and encode them
even better. Yeah. I'm going to performance the shit out of this thing. Right. Right. Exactly.
And of course, what was so interesting about that — rhetorically, to Frank's point — is that it ran right up against the PC revolution.
Right. Exactly. It turned out that what we didn't need was the Lisp machine, which was this company called Symbolics.
And sort of the collapse of Symbolics, basically, was the first nuclear winter in AI.
Not only did venture funding completely dry up, it was embarrassing to be a professor in that field for a long, long time, because it just didn't work — right at the dawn of PCs and everything.
And so now we're on the upswing of that, which is PCs have led to data centers, right?
It's that supply chain.
Like if you look at an x86 server in a data center — exactly the servers that Google is using to compute DeepMind algorithms — they're PCs.
And so now we do have a ton of computation and a ton of bandwidth and a ton of memory.
And we have this innovation in algorithms.
So the old approach was expert systems, which is interview a human expert, try to codify that knowledge.
The exciting thing about the Go game is these
algorithms are the opposite of that, which is the algorithms are self-learning. So there's
these techniques called supervised learning algorithms, there's these progressively self-learning algorithms,
there's these generative algorithms — there's a whole class of algorithms where the computer
is teaching itself how to play a better game every game without interviewing a Go expert.
And if you look at some of the Hacker News comments, you can actually see this in the gameplay,
which is sophisticated Go players are looking at the style of play and going, that's
weird, they don't teach that at Go school. And so you can actually see the style of gameplay
is kind of from an alien intelligence, because it's learning how to play the game by itself by
playing lots and lots of games. And so I think the really big
breakthrough that's happening now is this: the history of AI was first a human writing an
imperative program to play a game — tic-tac-toe. And then everybody thought that's not going to work.
So let's write a program that simulates the way the human brain would play the game.
But it turns out, in hindsight, that if you don't know how the human brain works, you can't actually write a simulator for the human brain.
And that was like 20 years of work.
Yeah.
And then Frank just said a very important phrase that's worth diving into, which is the AI winter.
And so often what we now know in hindsight is that in technology revolutions, the things that happen very early don't yield results right away.
But that doesn't make them bad ideas or wrong; they were just early. Right. Actually, Chris Dixon
calls it, in a recent post he wrote, the gestation period. Oh, yeah. And also, one of
the other things is that most new big advances are not just, like, discrete changes in everything.
They are new, sort of primordial combinations of old things. And so what is so fascinating about
this Go innovation is that it's not like they just locked themselves in a lab and invented the way to play Go
using, like, the latest, newest deep learning technique that they created.
And I was watching a great video, which I tweeted out too.
We had another one of these talks from Geoff Hinton, who is clearly the pioneer of deep learning.
Yeah, people call him the father of deep learning.
Yeah.
And he actually spent a lot of effort in this interview on Canadian television,
explaining why the IBM Watson, Jeopardy playing machine was not deep learning.
because it used all of these other techniques.
And as if it's like pure deep learning.
And then I personally just start to twitch
because then it's like, is it purely object-oriented programming?
Is it purely client server?
Is it truly cloud?
And like all of those definitions?
Do we need an NIST standards definition of deep learning?
So we're all on the same page.
Let's take a step back then and share those definitions as they stand now.
And if it's relevant, sure, share the evolution of that definition.
But I actually do want us to clarify like, okay,
you're talking about expert systems, AI.
Deep learning, machine learning. How do we define each of these?
Yeah. So maybe let me take a whack at the taxonomy. So at the very highest level, you have artificial intelligence, which is the combination of all experiments that we've run to try to program intelligence. Some of them will be trying to imitate human intelligence. Some of them will be trying to do things that are mathematically just interesting and produce interesting results but aren't modeled on the brain or human thinking in any way.
Also, our senses are a good area — like vision or speech.
Exactly.
And then, so that's artificial intelligence.
And subsets of artificial intelligence include deep learning.
So deep learning is a specific algorithm and data structure that attacks a series of problems.
It's based on neural networks, which I was studying back in the Stanford days.
So you did, you know, CS221, intro to artificial intelligence.
Week one, expert systems.
Week two, neural networks.
And neural networks is modeled on the human brain?
Yes.
It's very loosely modeled on the human brain, although sophisticated researchers
will tell you there's a lot that's very different.
But, you know, the basic idea is that the brain is full of neurons that are connected by axons,
and they're signaling each other.
And a neural net, which is deep learning, is a mathematical abstraction of that.
We have nodes.
They're connected in a network.
The connections have strengths.
And we can build these very interesting behaviors by using that data structure and iterating
on the strengths between the nodes.
So that's deep learning.
It is a specific algorithm, technique, and data structure,
and it's on fire.
It is the heart of the Go algorithms,
although interesting to point out,
it's an ensemble of techniques that's working for Go.
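The abstraction Frank describes — nodes, connections with strengths, signals passing between them — can be sketched as a forward pass through a tiny network. This is a minimal sketch: the weights below are arbitrary made-up numbers, and real deep nets stack many such layers and learn the weights from data rather than hard-coding them:

```python
import math

def sigmoid(x: float) -> float:
    """Squashing nonlinearity: maps any signal into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weights, bias):
    """One artificial 'neuron': weighted sum of inputs, then squash."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# A tiny two-layer net: two hidden neurons feeding one output neuron.
# The connection "strengths" (weights) are invented for illustration.
x = [1.0, 0.5]
h1 = forward(x, [0.4, -0.6], 0.1)
h2 = forward(x, [0.9, 0.2], -0.3)
y = forward([h1, h2], [1.2, -0.8], 0.0)
print(round(y, 3))  # 0.531
```

"Deep" learning, in the sense discussed next, is piling many such layers on top of one another and iterating on the weights.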
Let me just dive in really quick
because I actually think there's an important distinction
happening right now in neural networks,
and it's a little bit of a split in the taxonomy.
Neural networks aren't new.
In fact, if you go back and read
the 1956 Dartmouth Summer AI Conference proposal,
they're actually mentioned in there
as one of the first things
by that group of people
who basically invented the field.
This is, like, Marvin Minsky and those people.
Neural nets were in that paper because they were a theory, a mathematical theory of the brain.
So then, for basically about 40 years, it was, like, intro computer science — you know, like, third-year undergraduate
computer science — to write your first neural net, to play tic-tac-toe, to guess a number between one
and 100. And it was a very simple neural net.
What's happening right now and since the innovations of Jeff Hinton have been the ability
to pile on a bunch of neural nets, one on top of another.
And so maybe dive into that, because that's the big math advance.
And that's why you hear all about how many GPUs do you use to compute because it's this massive amount of math.
Which is why GPUs are so important for AI in general.
So then let's break down the neural network taxonomy a little bit further.
So recurrent neural nets, like let's define each of them.
So basically all of these adjectives on top of neural nets — recurrent, long short-term memory networks —
they're all enhancements of the basic idea.
And so what a recurrent net will do is try to feed back previous learnings into your current state.
It's probably how the brain works.
When I parse sentences, I'm kind of keeping track of each word as I go along, as opposed to throwing away what I learned in previous time frames.
Long short-term memories are sort of the more sophisticated version of this, which is: I keep track of more of the history, as opposed to just recent history.
Again, probably how I parse sentences.
Very similar to how the human brain works.
I mean, cognitive psychologists have long talked about short-term memory, long-term memory, just kind of keeping that framework for helping learning.
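The recurrent idea — feed previous learnings back into the current state — can be sketched as a loop that carries a hidden state across time steps. A minimal sketch: the weights and inputs below are arbitrary placeholders, and a real recurrent net learns these weights and uses vectors, not single numbers:

```python
import math

def step(prev_state: float, x: float, w_in: float = 0.5, w_rec: float = 0.8) -> float:
    """One recurrent step: mix the new input with the carried state."""
    return math.tanh(w_in * x + w_rec * prev_state)

def run(sequence) -> float:
    """Fold a whole sequence into a single hidden state."""
    state = 0.0
    for x in sequence:      # earlier inputs keep influencing later states
        state = step(state, x)
    return state

print(run([1.0, 0.0, 0.0]))  # the first input still echoes in the final state
```

Note how the first input's influence decays but never fully disappears; LSTM-style gating, as mentioned above, is a more sophisticated way of deciding what history to keep.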
Okay, so that's some of the neural nets.
And then, parallel to that — adversarial. So one more: adversarial.
The one really interesting thing is if you feed pictures into a neural net and you tune it,
you can defeat the categorization fairly trivially by introducing noise in the data.
And the really interesting thing is you introduce the noise, you look at the resulting pictures,
humans still recognize the picture.
That's a dog.
That's a car.
That's a tree.
But the neural networks completely fail.
So trying to figure out exactly how the introduced noise defeats the categorization algorithms is a super active area of research.
Right.
These are called adversarial examples.
There's a funny image that makes the rounds on Twitter every so often about, like, can you tell the difference between this dog and the bagel? Have you guys seen that?
Yeah, yeah. It's like really funny. And I think it'd be really funny to try to like make a neural net figure that out because they look so...
Yeah, because they look very similar. I just coincidentally saw that yesterday.
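The dog-versus-bagel confusion above is close to what adversarial examples exploit. A toy sketch of the idea: the "classifier" below is just a thresholded weighted sum with invented weights and an invented four-pixel "image," and the perturbation mimics the sign trick used in real gradient-based attacks. A small nudge against the model's weights flips the label, while the pixels barely change:

```python
def classify(pixels, weights, threshold=0.7):
    """Toy brittle classifier: threshold on a weighted pixel sum."""
    score = sum(p * w for p, w in zip(pixels, weights))
    return "dog" if score > threshold else "bagel"

def sign(x: float) -> float:
    return 1.0 if x > 0 else -1.0

weights = [0.5, -0.2, 0.1, 0.4]   # invented model weights
image = [0.9, 0.1, 0.8, 0.7]      # invented "image"

# Nudge every pixel slightly against the model's weights (FGSM-style).
eps = 0.1
adversarial = [p - eps * sign(w) for p, w in zip(image, weights)]

print(classify(image, weights))        # dog
print(classify(adversarial, weights))  # bagel
```

Every pixel moved by at most 0.1 — a change a human would barely notice on a real image — yet the label flipped, which is exactly the categorization failure described above.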
I think the biggest innovation in computer science for me in the past 20 years has been the ability to look at all the pictures on the internet and find the cute kittens.
Because I personally believe that that is a very high priority to brighten your day.
That's right. Not having to search for cute kittens is super helpful.
But when I was in graduate school, it was the Cold War, and, like, we had a giant lab at UMass that was all about, like, computer vision, and it was about trying to pick the tank out of the desert and, like, figure out which tank.
And they had literally a hallway, like, 30 feet long, filled with MicroVAX minicomputers, you know, that would grind away day and night.
Literally, our hallway was hot from all of this.
It was like, you'd feed it the picture, and then, like, 12 hours later, you know, yes, there's a tank.
And, like, that was the one.
And what it was doing was, like, looking for the edges and doing this math to compare. And then, like, you just go, well, here's a book with a picture of a tank — take a picture of that. And then it would go, oh, look, a tank in the desert. And it was just this massive undertaking. And so now you've got the ability to just, like, use every photo filter, every infrared, every sensor, and overlay all of these different ways, which turned out to actually be closer to how you might go and recognize the
thing. Like your ability to tell the difference between a kitten and a picture of a kitten and
an image on a computer of a kitten is important. And that's why you can't fake out face recognition
anymore with, like, a photo and things like that. So by the way, computer vision is maybe the most
advanced of all those sensory aspects. Yeah. And it illustrates this one big
trend. The big trend is the triumph of data over algorithms, which is: you used to try to make more and
more sophisticated edge-detection algorithms, feature-recognition algorithms. The big advance with
deep learning was: screw all that. I'm not going to try to figure out what
cat-ness is — four legs and furry, right? I'm just going to feed you a million pictures of
cats. And so that's sort of the triumph, if you will, of deep learning. It's the triumph of
data over algorithms. Right. And what's interesting is it's not data in the way
that we had for about 20 years — like, if you have a big enough database, you then just use
better query languages and better things to look it up in the database. This is using the data
to sort of build out a model of what the answer would be. Right. It's emergent. Right. Which makes
for a super interesting challenge, which is just sort of, like: how do you debug all of this stuff?
And for me, that's, like, the most fascinating thing, because, you know, people get all worked up —
like all the hoopla over whether self-driving cars can replace drivers, because you never know.
And the interesting thing, like, is whether they're going to be safer or not.
And the interesting thing is it's a very odd comparison because basically the self-driving car
is going to use a bunch of machine learning techniques and other kinds of algorithms to
essentially learn how to drive and make the best guesses at any given point.
And that's exactly what we do every day when we drive somewhere.
And so it isn't going to be this: wow, we've now figured out the specifics and we have to
paint the street with different lines for the car to follow, which is how they thought
self-driving cars were going to be. Nor is it going to be:
I have a database of all of the highways and all of the cars on the highways,
so let me now look up when to change lanes, how fast to go, or anything.
It's not this rules thing nor this brute-force thing.
It's now this sort of emergent learning thing, which is the whole deep learning model in the
first place.
And it's sort of inherently undebuggable, because you don't understand how it is exactly that it's making those decisions.
So contrast that with another well-known machine learning data structure, which is decision trees.
In a decision tree, you can actually examine the decision tree and understand why a system made any single decision.
Very, very easy to debug.
The bummer is, decision trees don't get you very good results.
And so these deep networks get you much better results, but they're undebuggable.
You don't really know.
All you can do is kind of feed it more data and run the models and say,
statistically, how likely are you to drive correctly?
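The debuggability contrast can be made concrete: with a decision tree, every decision leaves an inspectable trail you can print and examine. A toy sketch, with invented features and split points (not any real driving system):

```python
# Toy decision tree where every decision leaves an inspectable trail —
# the debuggability Frank contrasts with opaque deep networks.

def decide(features, node, trail=None):
    """Walk the tree, recording each comparison along the way."""
    if trail is None:
        trail = []
    if isinstance(node, str):             # leaf: the decision itself
        return node, trail
    name, threshold, low_branch, high_branch = node
    value = features[name]
    trail.append(f"{name}={value} vs {threshold}")
    branch = low_branch if value <= threshold else high_branch
    return decide(features, branch, trail)

# (speed in mph, gap to next car in meters) -> driving action
tree = ("speed", 60,
        ("gap", 30, "brake", "cruise"),
        ("gap", 50, "brake hard", "cruise"))

action, trail = decide({"speed": 70, "gap": 40}, tree)
print(action)  # brake hard
print(trail)   # ['speed=70 vs 60', 'gap=40 vs 50'] — the full explanation
```

The trail is a complete answer to "why did it do that?" — exactly what a deep network can't give you; all you can do there is feed it more data and measure it statistically.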
One of the areas that I'm super interested always has been in text.
Like, I worked on a word processor for a long time.
Me too.
I love natural language.
Typing and grammar and things are all super important.
And that's a microcosm of the evolution of AI, it turns out.
Even if you just look at like autocorrect in Word, there's a whole history of AI behind that,
even though it's like ultimately 20 lines of code that we did.
Off a dictionary.
Right.
But what's interesting is it also points out almost this chasm
in the academic world
about how to describe a solution
because there's a very long history
in algorithmic decision tree-like history
in the world of natural language processing
where you look at a block of text.
You know, we know how to diagram sentences
to find parts of speech.
And so probably since about 1956,
like people have been working on
the ability to algorithmically figure out text
and they all thought
it would just be a couple extra years of work
to then take the diagram sentence
the data structure
and turn it from English into French
or to turn it into
from English into a concept
and it turns out
knowing the structure of a sentence
doesn't help you do either of those things
it doesn't get you there.
And so along comes deep learning,
and the idea is:
oh well, if you have enough text in French,
you can basically find a way
to turn it into English without knowing French
or algorithmically diagramming
the sentence. Exactly — the data takes over the algorithm.
so it actually works
but debugging it is really hard
which sort of freaks out the algorithmic people
because what if there's a mistake?
Oh, well, then just go get more French text
and start over again.
But then it turns out you can probably do a better job
if you apply some of the linguistics to it
and think about it in advance
because you're always going to get like a probability,
you know, 80% choice.
Well, exactly.
And I would think that's a case in natural language in particular
that is the only way to resolve the ambiguity problem
in an efficient way.
Like you have to have some sort of approach
that isn't just purely one or the other.
to get people an output that makes sense to them.
Because that is the whole point of natural language.
Let's be natural.
Yeah.
And I think this is where we're going to see the next big breakthrough.
It won't be one technique in isolation,
just like the Go algorithms won on a combination of techniques.
What were the other techniques, by the way?
You mentioned a few of them briefly.
So for natural language processing, a very natural thing to do would be to do parts-of-speech tagging.
Entity resolution is: when I see Sonal Chokshi in an email, is that a person?
Is that a place name?
Is that a store name?
Is that — like, what is that?
More precisely, I think entity resolution also is when you have variations of that name, like misspellings of Sonal Chokshi.
You all know who that is. And that part is absolutely essential to the whole problem.
So we did all that, right? We can sort of figure out what the root verb is — is, was, be, are — they all stem to the same root.
And so let's use those techniques in combination with deep learning. And I think that's where we'll see the next big wins.
And that's an important point about just innovation in general, which is there will be massive innovation.
And in fact, I fully expect to see pure deep learning approaches to
translation, to image recognition — which, you know, ImageNet already is that. But
the computer scientists will continue to push sort of this pure-play approach to innovating.
And there'll be new neural net algorithms, and they'll keep doing that. And actually,
systems will arise that are pure deep learning to solve all of these things.
Right. Because this is how you win a PhD. Right. But if you're a product manager and an engineer
building a product, you actually don't care if you win an award for the purely, most pure
algorithm. And that's actually been the history of all innovation in computer science has been
the products always represent a little bit of a combination of some known things, breaking those
rules of the new thing, and then the new things. And if you think about... To make it work.
The internet itself is not, like, the purest form of networking. It's actually kind of like a giant
series of hacks. And the way I always think of it is: in a perfect world, there are no caches, and so
therefore everything is so well architected that you don't have a cache of anything anywhere,
because caches are just hacks. And then you realize, wow, the internet is one giant cache of
everything. Created a couple multi-billion-dollar companies. Right. And then you come along and you say, like,
wow, to really make the internet work, we actually need cache companies. And I think that everything
is going to have like elements of deep learning. And then people building products that have to
solve problems are not going to be shy about hacking deep learning, taking the result. Like even my
favorite one was just that Google Inbox did this email reply feature. And I remember the cynical comments
about it. Like, what it does is machine-learn a bunch of mail, and then it basically suggests
what to use as a reply to a mail message, which is kind of a cool, stupid computer trick.
But it doesn't just reply. It gives you a choice of two. And the obvious cynical comment is,
well, that's dumb. Why doesn't it just pick the right choice? And it's like, well, because A, it doesn't
know and B, like, why not show you a couple choices? If you're just being practical about it,
there's just an opportunity to do a better job.
So you guys have definitely convinced me about why the product-driven approach to some of
these solutions is so practical, for lack of a better phrase.
But I'm still not convinced about why this time is different, because we started off
talking about how, oh, people talked about — you get to this point, some algorithm beats, like,
a game of some sort, and then the next wave of AI is about to happen.
How do we know that this time it truly is different, and how is that going to actually happen?
Look, we don't know.
But here's some reasons that people are excited.
about the Go victory. So one, as I pointed out, the search space is so big that traditional
techniques just couldn't work. So they broke through in terms of how to search that space.
They used existing techniques like Monte Carlo tree search to prune, sort of, candidate subtrees
that they weren't going to explore. Right. And to be clear, Monte Carlo tree search is not any kind
of deep learning technique. Right. It's not a deep learning technique. It's a traditional AI technique,
one of the many. I think it's an unsung hero in this equation. Yeah, I think that's right.
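The rollout idea at the heart of Monte Carlo tree search can be sketched on a toy game (take 1 or 2 stones, whoever takes the last stone wins): estimate each candidate move's win rate by finishing the game many times with random moves, and keep the best. This is just the Monte Carlo part, without the tree or the learned playout policies — a toy illustration, not AlphaGo's actual algorithm:

```python
import random

def rollout(stones: int, my_turn: bool) -> bool:
    """Finish the game with uniformly random moves; True if we win."""
    while True:
        take = random.choice([1, 2]) if stones >= 2 else 1
        stones -= take
        if stones == 0:
            return my_turn        # whoever just moved took the last stone
        my_turn = not my_turn

def best_move(stones: int, n_rollouts: int = 2000):
    """Monte Carlo move selection: pick the move whose random playouts
    win most often for us."""
    win_rate = {}
    for take in (1, 2):
        if take > stones:
            continue
        if take == stones:
            win_rate[take] = 1.0  # taking the last stone wins outright
            continue
        wins = sum(rollout(stones - take, my_turn=False)
                   for _ in range(n_rollouts))
        win_rate[take] = wins / n_rollouts
    return max(win_rate, key=win_rate.get), win_rate

random.seed(0)
move, rates = best_move(4)
print(move)   # 1 — leaving 3 stones is statistically better than leaving 2
```

No position is ever evaluated exactly; the statistics of the playouts stand in for exhaustive search, which is why the technique scales to spaces like Go that can't be brute-forced.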
So one, you couldn't brute-force the search space. Two, you've had all these successes
in deep learning that are frankly unanticipated — just search the internet for deep learning
systems. There are systems where robots are learning to cook food by watching YouTube videos
of people cooking food. There's systems that can take photos and paint them in the style
of Renoir or Van Gogh. There's algorithms that can create paintings that are indistinguishable
from human created paintings. The successes are many and varied and involve things that you
would think require creativity, a uniquely human thing. To be clear, the existence of those
successes is not the reason alone. It's the fact that those occurred because of the ubiquity of
data. Yeah. Actually, I think that that is, in a sense, the ultimate reason why all of these
things are working is basically because of cloud computing, the scale of the architecture of cloud
computing, and the internet that brings all that data in. Like when I was in college, my advisor was
sort of the father of information retrieval. In order to do research on search, you basically got a box
of tapes from the New York Times that had the contents of, you know, 150 years of New York Times
articles. And probably 25 people did PhDs out of that one lab, searching that one corpus.
And you think about that, and you're like, well, that's just stupid. Well, there were two problems.
One, most of the other data in the world wasn't on tapes like that that you could get to.
And two, even if you could, you know, no lab could afford, like, the storage to put it all on
for each student to be able to do their experiment. And now, like, anyone learning to do anything
in computer science has access to all of the world's information. Even if they just used Wikipedia
as their sole source, that is 10,000 times what the average student had back then. And you have
the compute power, that's essentially free, to do all of the work — you could keep retrying
deep learning, keep doing different things, and iterate, all in some finite, practical amount
of time. That makes this all — like, it's happening, it's here, it's now, it's real. It's not a
theory by one lab that can identify one tank in one picture.
Just on using Wikipedia to do entity resolution: you can disambiguate this tortured
sentence.
I can't remember what entrepreneur shared it with me, but it's a beautiful sentence if you're
trying to figure out, like, what are all these elements?
Paris Hilton was in Paris Hilton, at the Paris Hilton listening to Paris Hilton.
Oh, that's hilarious.
So there's a person, there's a city, there's a hotel, and then there's an album.
All of those are perfectly disambiguated in Wikipedia.
There are entries for every single one of those.
So think of the leg up we now have compared to when poor Steven was transcribing from tape.
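The disambiguation trick Frank describes — matching a mention's surrounding words against each candidate entry — can be sketched with a few context keywords standing in for Wikipedia entry text. A toy sketch: the sense names and keyword sets below are invented for illustration:

```python
# Toy entity disambiguation: score each candidate sense of "Paris Hilton"
# by overlap between the mention's surrounding words and a few context
# keywords per sense (stand-ins for real Wikipedia entry text).

SENSES = {
    "Paris Hilton (person)": {"celebrity", "heiress", "wore", "said"},
    "Paris Hilton (hotel)":  {"hotel", "stayed", "lobby", "suite"},
    "Paris (city)":          {"france", "visited", "city", "flew"},
    "Paris Hilton (album)":  {"listening", "album", "track", "played"},
}

def disambiguate(context_words):
    """Pick the sense whose keywords overlap the context the most."""
    overlap = {sense: len(keywords & set(context_words))
               for sense, keywords in SENSES.items()}
    return max(overlap, key=overlap.get)

print(disambiguate(["listening", "album"]))  # Paris Hilton (album)
print(disambiguate(["stayed", "hotel"]))     # Paris Hilton (hotel)
```

Real systems score against the full entry text with much richer statistics, but the principle is the same: the free, disambiguated entries give every mention something to match against.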
So huge enablers, data, the cloud computing, the scale microservices architecture, you know, the way applications are being built.
There's so many different things on that.
So then one last question: how is it going to leave the province of purely logical things?
Because at the end of the day, Go is a logical game,
and it's very codifiable.
How do you then make the next leap to intuitive decision-making,
and decision-making under uncertainty in general?
So let's go back to Go because this is how we all started.
So read Google's blog post on the Go algorithm.
And the blog post basically starts with why did we pick Go?
So one, it was huge search space.
But two, the best masters at Go have always been,
because you can't exhaustively search the space,
driven by intuition, leaps of intuition.
And so part of the promise of Go — why people are so excited about this victory — is maybe this is an example of that: because it wasn't mathematically searchable, you needed to develop strategies that were based on intuition, and that algorithm is playing Go in a way that is unrecognizable to humans.
It feels like alien intelligence.
So I think maybe this is the vector, which is these deep learning techniques, which we can't completely characterize and describe, much less debug, are leading to these flashes of insight.
and they might not be human insight.
It might be like artificial intelligence insight.
A new kind of intelligence.
Yeah, but just be careful like this Christmas.
Don't buy like a fluffy cute thing that shows up and says it can fix things in your house.
That's how Skynet's really going to start.
It's going to be like a really cool Christmas present that everybody wants to get for all their friends and dispatched all through the universe.
That's the secret plan.
Like the gremlin version of that.
Yes, be careful.
The funniest thing I read was someone making a joke that, no, Skynet's going to get started because it was pissed about having to
watch Super Mario Brothers all day, because Frank shared this awesome video.
Yeah, this is part of my tweet storm if you haven't seen it.
So go watch this awesome YouTube video of how a computer algorithm learned to play, and then
completely ace, Super Mario Brothers.
How do people find it?
So we'll put it up on the link with this podcast.
Okay.
All right, guys.
Well, thank you.
And for so many conversations to come — because I still do not quite understand the full taxonomy,
but I think that's part of the point here is that we have history crashing in with the present
and trying to figure out what's coming next.
Yes. Thank you. Thank you.