Lex Fridman Podcast - #74 – Michael I. Jordan: Machine Learning, Recommender Systems, and the Future of AI
Episode Date: February 24, 2020

Michael I. Jordan is a professor at Berkeley, and one of the most influential people in the history of machine learning, statistics, and artificial intelligence. He has been cited over 170,000 times and has mentored many of the world-class researchers defining the field of AI today, including Andrew Ng, Zoubin Ghahramani, Ben Taskar, and Yoshua Bengio.

EPISODE LINKS:
(Blog post) Artificial Intelligence—The Revolution Hasn't Happened Yet

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 - Introduction
03:02 - How far are we in development of AI?
08:25 - Neuralink and brain-computer interfaces
14:49 - The term "artificial intelligence"
19:00 - Does science progress by ideas or personalities?
19:55 - Disagreement with Yann LeCun
23:53 - Recommender systems and distributed decision-making at scale
43:34 - Facebook, privacy, and trust
1:01:11 - Are human beings fundamentally good?
1:02:32 - Can a human life and society be modeled as an optimization problem?
1:04:27 - Is the world deterministic?
1:04:59 - Role of optimization in multi-agent systems
1:09:52 - Optimization of neural networks
1:16:08 - Beautiful idea in optimization: Nesterov acceleration
1:19:02 - What is statistics?
1:29:21 - What is intelligence?
1:37:01 - Advice for students
1:39:57 - Which language is more beautiful: English or French?
Transcript
The following is a conversation with Michael I. Jordan, a professor at Berkeley and one
of the most influential people in the history of machine learning, statistics, and artificial
intelligence.
He has been cited over 170,000 times and has mentored many of the world-class researchers
defining the field of AI today, including Andrew Ng, Zoubin Ghahramani, Ben Taskar, and Yoshua Bengio.
All this, to me, is as impressive as the over 32,000 points in the 6 NBA championships
of the Michael J. Jordan of basketball fame.
There's a non-zero probability that I talk to the other Michael Jordan, given my
connection to and love of the Chicago Bulls of the 90s.
But if I had to pick one, I'm going with the Michael Jordan of statistics and computer science,
whom Yann LeCun calls the Miles Davis of machine learning.
In his blog post titled Artificial Intelligence: The Revolution Hasn't Happened Yet,
Michael argues for broadening the scope of the artificial intelligence field. In many ways, the underlying spirit of this podcast is the same, to see artificial
intelligence as a deeply human endeavor, to not only engineer algorithms and robots, but
to understand and empower human beings at all levels of abstraction, from the individual
to our civilization as a whole.
This is the Artificial Intelligence Podcast.
If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts,
support it on Patreon, or simply connect with me on Twitter,
at Lex Fridman, spelled F-R-I-D-M-A-N.
As usual, I'll do one or two minutes of ads now,
and never any ads in the middle
that can break the flow of the conversation. I hope that works for you and doesn't hurt
the listening experience. This show is presented by Cash App,
the number one finance app in the App Store. When you get it, use code LexPodcast. Cash App
lets you send money to friends, buy bitcoin, and invest in the stock market
with as little as $1.
Since Cash App does fractional share trading, let me mention that the order execution
algorithm that works behind the scenes to create the abstraction of the fractional orders
is to me an algorithmic marvel.
So big props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up to the next layer of abstraction over the stock market, making trading more accessible. Use the code LexPodcast, you'll get $10, and Cash App will also donate $10 to FIRST,
one of my favorite organizations that is helping to advance robotics and STEM education
for young people around the world.
And now here's my conversation with Michael I. Jordan.
Given that you're one of the greats in the field of AI, machine learning, computer science,
and so on, you're trivially called the Michael Jordan of machine learning.
Although, as you know, you were born first, so technically MJ is the Michael I Jordan
of basketball, but anyway, my favorite is Yann LeCun calling you the Miles Davis of machine learning.
Because as he says, you reinvent yourself periodically
and sometimes leave fans scratching their heads
after you change direction.
So can you put at first your historian hat on
and give a history of computer science and AI
as you saw it, as you experienced it,
including the four generations of AI success
that I've seen you talk about.
Sure. Yeah, first of all, I much prefer Yann's metaphor. Miles Davis was a real
explorer in jazz and he had a coherent story. So I think I have one, but it's
not just the one you lived. It's the one you think about later. What a good
historian does is they look back and they revisit.
I think what's happening right now is not AI.
That was an intellectual aspiration that's still alive today as an aspiration, but I think
this is akin to the development of chemical engineering from chemistry, or electrical
engineering from electromagnetism.
So if you go back to the 30s or 40s, there wasn't
yet chemical engineering. There was chemistry, there was fluid flow, there was mechanics
and so on. But people pretty clearly viewed interesting goals, tried to build factories
that make chemical products and do it viably, safely, make good ones, do it at scale.
So people started to try to do that, of course,
and some factories worked, some didn't,
some were not viable, some exploded.
But in parallel, a whole field developed
called chemical engineering.
And chemical engineering is a field.
It's no bones about it.
It has theoretical aspects to it.
It has practical aspects.
It's not just engineering, quote unquote.
It's the real thing, real concepts were needed.
Same thing with electrical engineering.
There was Maxwell's equations, which in some sense
were everything you know about electromagnetism,
but you needed to figure out how to build circuits,
how to build modules, how to put them together,
how to bring electricity from one point to another,
safely, and so on and so forth.
So a whole field developed called electrical engineering.
I think that's what's happening right now.
Is that what we have?
We have a proto-field, which is statistics, computing, more of the theoretical side of
algorithms, the computer science.
That was enough to start to build things. But what things? Systems that bring value to
human beings and use human data and mix in human decisions.
The engineering side of that is all ad hoc.
That's what's emerging.
In fact, if you want to call machine learning a field, I think that's what it is. That's
a proto-form of engineering based on statistical and computational ideas of previous generations.
But do you think there's something deeper about AI in its dreams and aspirations as compared
to chemical engineering and electrical engineering? Well, the dreams and aspirations maybe, but those
are 500 years from now. I think that's like the Greeks sitting there and saying it would be neat to get to
the moon someday. Right? I mean, we have no clue how the brain does computation. We just don't have a clue.
We're like, we're even worse than the Greeks on most anything interesting scientifically of our
era. Can you linger on that just for a moment because you stand not completely unique,
but a little bit unique in the clarity of that.
Can you elaborate your intuition of, like,
where we stand in our understanding
of the human brain?
And a lot of people,
you know, scientists, say
we're not very far in understanding the human brain,
but you're like, you're saying we're in the dark here.
Well, I know I'm not unique.
I don't even think I'm that clear,
but if you talk to real neuroscientists
that really study real synapses
or real neurons, they agree.
They agree.
It's a hundreds-of-years task
and they're building it up slowly, surely.
What the signal is there is not clear.
We have a whole lot of metaphors.
We think it's electrical.
Maybe it's chemical.
It's a whole soup.
It's ions and proteins
and it's a cell, and that's even around like a single synapse.
If you look at an electron micrograph of a single synapse, it's a city of its own.
And that's one little thing on a dendritic tree, which is extremely complicated, you know,
electrochemical thing, and it's doing these spikes and voltages that have been flying around
and then proteins are taking it down into the DNA and who knows what.
So it is the problem of the next few centuries.
It is fantastic.
But we have our metaphors about it.
Is it an economic device?
Is it like the immune system or is it like a layered set of, you know, arithmetic computations?
We have all these metaphors and they're fun.
But that's not real science, per se.
There is neuroscience; that's not neuroscience.
That's like the Greeks speculating about how to get to the moon. Fun. And I think that I like to say
this fairly strongly because I think a lot of young people think we're on the verge. Because a lot
of people who don't talk about it clearly let it be understood that, yes, we kind of, this is
brain-inspired or kind of close, breakthroughs are on the horizon.
And unscrupulous people sometimes,
who need money for their labs,
not that I'm saying unscrupulous,
but people will sell:
I need money for my lab,
I'm studying computational neuroscience,
I'm gonna sell it.
And so there's been too much of that.
So, stepping into the slight,
the gray area between metaphor and engineering, I'm not sure
if you're familiar with brain-computer interfaces. A company like Elon Musk's Neuralink
is working on putting electrodes into the brain and trying to be able to both read
and send electrical signals. Just as you said, even
the basic mechanism of communication in the brain is not something we understand. But
do you hope without understanding the fundamental principles of how the brain works will be
able to do something interesting at that gray area of metaphor?
It's not my area. So I hope in the sense like anybody else hopes
for some interesting things to happen from research.
I would expect more something like Alzheimer's
will get figured out from modern neuroscience.
That, you know, a lot of, there's a lot of humans
suffering based on brain disease.
And we throw things like lithium at the brain.
It kind of works.
No one has a clue why.
That's not quite true, but, you know, mostly we don't know. And that's even just about the biochemistry of the brain and how
it leads to mood swings and so on. How thought emerges from that, we're really,
really completely dim on. So, that you might want to hook up electrodes and try to do some
signal processing, you know, on that and try to find patterns, fine. You know, by all means
go for it. It's just not scientific at this point.
It's just, it's, so it's like kind of sitting in a satellite and watching the emissions from
a city and trying to infer things about the microeconomy, even though you don't have
microeconomic concepts.
I mean, it's really that kind of thing.
And so yes, can you find some signals that do something interesting or useful?
Can you control a cursor or mouse with your brain?
Yeah, absolutely.
You know, and then I can imagine business models based on that.
And even medical applications of that.
But from there to understanding algorithms
that allow us to really tie in deeply
from the brain to computer, I just
don't agree with Elon Musk.
I don't think that's even, that's not
for our generation, it's not even for the century.
So just in hopes of getting you to dream,
you've mentioned Kolmogorov and Turing might pop up.
Do you think that there might be breakthroughs
that'll get you to sit back in five, ten years and say, wow?
I'm sure there will be,
but I don't think that there'll be demos that impress me.
I don't think that having a computer call a restaurant
and pretend to be a human is breakthrough.
And people, you know, some people present it as such.
It's imitating human intelligence.
It's even putting coughs in the thing
to make a bit of a PR stunt.
And so fine, the world runs on those things too.
And I don't want to diminish all the hard work
and engineering that goes behind things like that,
and the ultimate value to the human race.
But that's not scientific understanding.
And I know the people who work on these things,
they are after scientific understanding.
In the meantime, the trains have got to run,
and they've got mouths to feed and they've got things to
do.
There's nothing wrong with all that.
I would call that just engineering.
And I want to distinguish that from an engineering field like electrical engineering or chemistry
as it originally emerged, that had real principles and you really knew what you were doing and
you had scientific understanding, maybe not even complete.
It became more predictable and it really gave value to human life because it was understood.
And so we don't want to muddle too much these waters of what we're able to do versus
what we really can do, in a way that's going to impress the next generation.
So I don't need to be wowed, but I think that someone comes along in 20 years, a younger
person who's absorbed all the technology.
And for them to be wowed, I think they have
to be more deeply impressed. A young Kolmogorov would not be wowed by some of the stunts that
you see right now coming from the big companies.
The demos, but do you think the breakthroughs from Kolmogorov would be, and give this question
a chance, do you think they'll be in the scientific fundamental principles arena, or do
you think it's possible to have fundamental breakthroughs in engineering,
meaning, I would say some of the things that Elon Musk is working with SpaceX and then
others, sort of trying to revolutionize the fundamentals of engineering, of manufacturing,
of saying, here's a problem, we know how to do a demo of, and actually taking it to scale.
Yeah, so there's going to be all kinds of breakthroughs.
I just don't like that terminology.
I'm a scientist and I work on things day in and day out
and things move along and eventually say,
well, something happened, but I don't like that language
very much.
Also, I don't like to prize theoretical breakthroughs
over practical ones.
I tend to be more of a theoretician
and I think there's lots to do in that arena right now.
And so I wouldn't point to the Kolmogorovs, I might point to the Edisons of the era, and
maybe Musk is a bit more like that.
But Musk, God bless him, also will say things about AI that he knows very little about.
And he leads people astray when he talks about things he doesn't
know anything about.
Trying to program a computer to understand natural language, to be involved in a dialogue
like we're having right now, that won't happen in our lifetime. You could fake it, you can mimic, sort of take old
sentences that humans use and retread them, but the deep understanding of language, no, it's not
going to happen. And so from that, I hope you can perceive that deeper and deeper kinds of aspects
of intelligence are not going to happen. Now, there'll be breakthroughs. I think that Google was a breakthrough.
I think Amazon is a breakthrough.
I think Uber is a breakthrough.
They're bringing value to human beings at scale
in brand new ways based on data flows and so on.
A lot of these things are slightly broken
because there's not an engineering field
that takes economic value in context of data
and planetary scale and worries about all the externalities, the
privacy. We don't have that field, so we don't think these things through very well. But
I see that as emerging, and looking back from 100 years, that will be considered a breakthrough
of this era. Just like electrical engineering was a breakthrough in the early
part of the last century. And chemical engineering was a breakthrough.
So the scale, the markets that you talk about and so on will be seen as sort of the breakthrough.
And we're in very early days of really doing interesting stuff there.
And we'll get to that, but just taking a quick step back,
can you, kind of, throw off the historian hat?
I mean, you briefly said that the history of AI kind of mimics the history of chemical engineering.
But I keep saying machine learning, you keep wanting to say AI, just to let you know, I
don't, you know, I resist that.
I don't think this is about AI. AI really was John McCarthy, almost as a philosopher, saying,
wouldn't it be cool if we could put thought in a computer?
If we could mimic the human capability to think, or put intelligence, in some sense, into a computer.
That's an interesting philosophical question, and he wanted to make it more than philosophy. He wanted to actually write down logical formulas and algorithms that would do that.
And that is a perfectly valid, reasonable thing to do. That's not what's happening in this era.
So the reason I keep saying AI, actually, and I'd love to hear what you think about it: machine learning
has a very particular set of methods and tools.
Maybe your version of it is, as is mine, you know, very, very open.
It does optimization, it does sampling, it does...
So, systems that learn is what machine learning is: systems that learn and make decisions.
And make decisions.
So, not just pattern recognition and finding patterns;
it's all about making decisions in real worlds
and having closed feedback loops.
So something like symbolic AI, expert systems,
reasoning systems, knowledge-based representation,
all of those kinds of things, search,
does that even fit into what you think
of as machine learning?
So I don't even like the word machine learning.
I think that what the field you're talking about
is all about making large collections of decisions
under uncertainty by large collections of entities.
Yes.
And there are principles for that at that scale.
You don't have to say the principles are for a single entity
that's making decisions, single agent or single human.
It really immediately goes to the network of decisions.
Is it a good word for that or not?
No, there's no good words for any of this.
That's kind of part of the problem.
So we can continue the conversation and use AI for all of that.
I just want to kind of raise our flag here that this is not about, we don't know what
intelligence is and real intelligence.
We don't know much about abstraction and reasoning at the level of humans.
We don't have a clue.
We're not trying to build that because we don't have a clue.
Eventually, it may emerge.
I don't know if they'll be breakthroughs, but eventually we'll start to get glimmers of that.
It's not what's happening right now.
We're taking data, we're trying to make good decisions
based on that, we're trying to scale,
we're trying to do it economically, viably,
we're trying to build markets,
we're trying to create value at that scale.
And aspects of this will look intelligent.
They will look, computers were so dumb before,
they will seem more intelligent.
We will use that buzzword of intelligence. So we can use it in that sense. But you know,
so machine learning, you can scope it narrowly as just learning from data and pattern recognition.
But when I talk about these topics, maybe data science is another word you
could throw in the mix. It really is important that the decisions are part of it. It's
consequential decisions in the real world.
I have a medical operation.
Am I going to drive down this street?
Things where there's scarcity, things that impact other human beings or other environments
and so on.
How do I do that based on data?
How do I do that?
How do I use computers to help those kind of things go forward?
Whatever you want to call that, so let's call it AI.
Let's agree to call it AI, but let's not say that what the goal of that is is intelligence.
The goal of that is really good working systems at planetary scale that we've never seen before.
So reclaim the word AI from the Dartmouth conference from many decades ago of the dream of humans.
I don't want to reclaim it. I want a new word. I think it was a bad choice. I mean,
if you read one of my little things, the history was basically that McCarthy needed a new name
because cybernetics already existed.
And he didn't like, you know,
no one really liked Norbert Wiener.
Norbert Wiener was kind of an island to himself
and he felt that he had accomplished all this.
And in some sense, he did.
You look at the language of cybernetics,
it was everything we're talking about.
It was control theory and signal processing
and some notions of intelligence
and closed feedback loops and data. It was all there.
It's just not a word that lived on, partly because of, maybe, the personalities.
But McCarthy needed a new word to say, I'm different from you, I'm not part of your show, I've got my own.
He invented this word, and,
again, as a kind of
thinking forward about the movies that would be made about it,
it was a great choice,
but thinking forward about creating a sober academic and real-world discipline, it was
a terrible choice because it led to promises that are not true. We understand artificial
perhaps, but we don't understand intelligence.
It's a small tangent because you're one of the great personalities of machine learning,
whatever the heck you call the field. Do you think science progresses by personalities
or by the fundamental principles and theories
and research that's outside of personality?
Both.
And I wouldn't say there should be one kind of personality.
I have mine and I have my preferences
and I have a kind of network around me that feeds me
and some of them agree with me and some of them disagree,
but all kinds of personalities are needed.
Right now, I think the personality that's a little too exuberant, a little bit too ready to promise the moon, is a little bit too much in ascendance.
And I do think that there's some good to that. It certainly attracts lots of young people to our field.
But a lot of people come in with strong misconceptions, and they have to then unlearn those and then find something to do.
And so I think there's just got to be some multiple voices.
I wasn't hearing enough of the more sober voice.
So as a continuation of a fun tangent and speaking of vibrant personalities,
what would you say is the most interesting disagreement you have with Yann LeCun?
So, Yann's an old friend, and I'd just say that I don't think we disagree about very much, really.
He and I both kind of have a let's-build-it kind of mentality, a does-it-work kind of mentality, kind of concrete.
We both speak French and we speak French together, and we have a lot in common. And so if one wanted to highlight a disagreement,
it's not really a fundamental one,
I think it's just kind of where we're emphasizing.
Yann has emphasized pattern recognition
and has emphasized prediction.
And it's interesting to try to take that as far as you can.
If you could do perfect prediction, what would that give you, kind of, as a thought experiment?
And I think that's way too limited.
We cannot do perfect prediction.
We will never have the data sets that allow me to figure out
what you're about ready to do,
what question you're gonna ask next.
I have no clue.
I will never know such things.
Moreover, most of us find ourselves during the day
in all kinds of situations we had no anticipation of, that are kind of,
they're novel in various ways. And in that moment we want to think through what we want. And also there's going to be market forces acting on
us. I'd like to go down that street, but now it's full because there's a crane in the street. I've got to think about that.
I got to think about what I might really want here. And I got to sort of think about how much it costs me to do this action versus this action.
I got to think about the risks involved.
You know, a lot of our current pattern recognition and prediction systems don't do any risk
evaluation.
They have no error bars, right?
I got to think about other people's decisions around me.
I got to think about a collection of my decisions.
Even just thinking about like a medical treatment, you know, I'm not going to take the prediction
of a neural net
about my health, about something consequential.
I'm not gonna have a heart attack
because some number is over .7.
Even if you had all the data in the world
that's ever been collected about heart attacks,
Better than any doctor ever had.
I'm not gonna trust the output of that neural net
to predict my heart attack.
I'm gonna wanna ask what-if questions around that.
I'm gonna wanna look at some other possible data
I didn't have, causal things.
I'm gonna wanna have a dialogue with a doctor
about things we didn't think about,
where you gather the data.
I could go on and on, I hope you can see.
And I think that if you say prediction
is everything, then you're missing all of this stuff.
And so prediction plus decision making is everything,
but both of them are equally important.
And so the field has emphasized prediction.
Yann, rightly so, has seen how powerful that is.
But at the cost of people not being aware that decision
making is where the rubber really hits the road,
where human lives are at stake, where risks are being taken,
where you gotta gather more data,
you gotta think about the error bars,
you gotta think about the consequences of your decision
and others, you gotta talk about the economy
around your decisions, blah, blah, blah, blah.
I'm not the only one working on those, but we're a smaller tribe, and right now we're
not the one that people talk about the most.
But if you go out of the real world in industry, at Amazon, I'd say half the people there
are working on decision-making and the other half are doing the pattern recognition.
It's important.
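(To make the error-bars point above concrete, here is a minimal sketch of the difference between acting on a bare predicted probability and acting only when the uncertainty around it is also acceptable. The 0.7 threshold echoes the number mentioned in the conversation; the ensemble-of-scores setup and all other numbers are hypothetical, added purely as an illustration, not as anything from the episode or any real medical system.)

```python
# Sketch: deciding from a point estimate vs. from an estimate with an error bar.
# All numbers, including the 0.7 threshold, are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are risk scores for one patient from an ensemble of models
# (or from bootstrap resamples of the training data).
risk_samples = rng.normal(loc=0.72, scale=0.15, size=200).clip(0, 1)

point_estimate = risk_samples.mean()
lo, hi = np.percentile(risk_samples, [5, 95])  # a crude 90% interval

THRESHOLD = 0.7

# Rule that ignores uncertainty: act as soon as the number crosses 0.7.
naive_act = point_estimate > THRESHOLD

# Rule that looks at the error bar: act only if the whole interval clears
# the threshold; otherwise gather more data, ask what-if questions, consult.
cautious_act = lo > THRESHOLD

print(f"point estimate {point_estimate:.2f}, 90% interval [{lo:.2f}, {hi:.2f}]")
print("naive rule acts:", naive_act)
print("cautious rule acts:", cautious_act)
```

(The same point estimate near 0.72 looks very different once the width of the interval around it is visible, which is the gather-more-data, ask-what-if posture described above.)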
The words of pattern recognition and prediction, I think the distinction there, not to
linger on words, but the distinction there is that prediction is more constrained, sort of an in-the-lab data set thing,
versus decision making is talking about consequential decisions in the real world under the messiness
and the uncertainty of the real world and just the whole of it, the whole mess of it that
actually touches human beings at scale,
and, like you said, market forces. That's the distinction.
It helps to add that perspective, that broader perspective.
You're right, I totally agree.
On the other hand, if you're a real prediction person, of course you want it to be in the
real world, you want to predict real world events, I'm just saying that's not possible with
just data sets, that it has to be in the context of strategic things that someone's doing,
data they might gather,
or things they could have gathered,
the reasoning process around data.
It's not just taking data
and making predictions based on the data.
So one of the things that you're working on,
I'm sure there's others working on it,
but I don't hear it often
talked about, especially in the clarity
that you talk about it,
and I think it's both the most exciting
and the most concerning area of AI
in terms of decision making.
So you've talked about AI systems
that help make decisions at scale
in a distributed way, millions, billions of decisions,
as sort of markets of decisions.
Can you, as a starting point,
sort of give an example of a system
that you think about when you're thinking
about these kinds of systems?
Yeah, so first of all, you're absolutely getting into some territory which will be beyond
my expertise, and there are lots of things that are going to be very non-obvious to think
about.
Just like, again, I like to think about history a little bit, but think about, put yourself
back in the 60s, there was kind of a banking system that wasn't computerized, really.
There was database theory emerging, and database people had to think about, how do I actually not just move data around, but actual money, and have it be valid and have
transactions at ATMs happen that are actually all valid and so on and so forth. So that's
the kind of issues you get into when you start to get serious about things like this.
I like to think about, as kind of almost a thought experiment to help me think something simpler, which is a music market.
Because, to first order, there is no music market in the world right now, in our country,
for sure.
There are these things called record companies, and they make money, and they
prop up a few really good musicians and make them superstars, and they all make huge amounts of money.
But there's a long tail of huge numbers of people that make lots and lots of really good music that is actually listened to by more people than the famous people's.
They are not in a market. They cannot have a career. They do not make money. The creators, the so-called influencers or whatever, that word diminishes who they are, right?
So there are people who make extremely good music,
especially in the hip hop or Latin world these days.
They do it on their laptop, that's what they do.
On the weekend, and they have another job during the week,
and they put it up on SoundCloud or other sites.
Eventually, it gets streamed, it now gets turned into bits.
It's not economically valuable, the information is lost. It gets put up there. People stream it. You walk around in a big city,
you see people with headphones, all, you know, especially young kids, listening to music all the time.
If you look at the data, very little of the music they're listening to is the famous
people's music, and none of it's old music. It's all the latest stuff. But the people who made that
latest stuff are like some 16-year-old somewhere who will never make a career out of this, who will never make money.
Of course, there will be a few counterexamples. The record companies are incentivized to pick out
a few and highlight them. Long story short, there's a missing market there. There is not a
consumer producer relationship at the level of the actual creative acts. The pipelines and
Spotifys of the world that take this stuff and stream it along, they
make money off of subscriptions or advertising and those things.
They're making the money, right?
And then they will offer bits and pieces of it to a few people, again, to highlight that,
you know, they're simulating a market.
Anyway, a real market would be: if you're a creator of music, if you actually are somebody
who's good enough that people want to listen to you, you should have the data available
to you.
There should be a dashboard showing a map of the United States:
so, in the last week, here's all the places your songs were listened to.
It should be transparent,
vettable, so that if someone down in Providence
sees that you're being listened to 10,000 times in Providence,
they know that's real data, you know it's real data,
they will have you come give a show down there.
They will broadcast to the people who've been listening to you that you're coming. If you do this right,
you could, you know, go down there and make $20,000. You do that three times, and you're at the start of a career.
So in this sense, AI creates jobs.
It's not about taking away human jobs. It's creating new jobs, because it creates a new market.
Once you've created a market, you've now connected up producers and consumers.
You know, the musician making the music can say to someone who comes to their shows,
like, hey, I'll play your daughter's wedding for $10,000.
You'll say $8,000.
They'll say $9,000.
Then you can now get an income up to $100,000.
You're not going to be a millionaire.
And now, even think about, really, the value of music is in these personal connections, even so much so
that a young kid wants to wear a t-shirt with their favorite musician's signature
on it, right? So if they listen to the music on the internet, the internet should
be able to provide them with a button that they push and the merchandise arrives
the next day. We can do that, right? And now, why should we do that? Well, because the
kid who bought the shirt will be happy, but more, the person who made the music
will get the money.
There's no advertising needed, right?
So you can create markets between producers and consumers,
take a 5% cut, and your company will be perfectly sound.
It'll go forward into the future
and it will create new markets
and that raises human happiness.
Now, this seems like it was easy,
just create this dashboard, kind of create
some connections and all that. But if you think about Uber or whatever, you think about
the challenges in the real world of doing things like this, there are actually new principles
going to be needed. You're trying to create a new kind of two-way market at a different
scale than has ever been done before. There's going to be unwanted aspects of the market;
there'll be bad people, the data will get used in the wrong ways, you know, it'll fail in some ways, it won't deliver value. You have to think that through,
just like anyone who ran a big auction or, you know, ran a big matching service in economics
will think these things through. And so that maybe didn't get at all the huge issues that can arise
when you start creating markets, but it starts, at least for me, to solidify my thoughts and lets me move forward in my own thinking.
Yeah, so I talked to the head of research at Spotify, actually.
I think their long-term goal,
they've said, is to have at least one million creators make
a comfortable living putting up on Spotify.
And I think you articulate a really nice vision of the world and the digital,
the cyberspace, of markets.
What do you think companies like Spotify or YouTube or Netflix can do to create such markets?
Is it an AI problem? Is it an interface problem, so, interface design? Is it
some other kind of problem, an economics problem? Who should they hire to solve these problems?
Well, part of it's not just top-down. So Silicon Valley has this attitude that they know how
to do it. They will create the system, just like Google did with a search box, that will be so
good that everyone will just adopt it. It's not; it's everything you said, but really, I think, what's missing is kind of the culture.
It's literally that 16-year-old who's able to create the songs.
You don't create that as a Silicon Valley entity.
You don't hire them per se.
You have to create an ecosystem in which they are wanted and in which they belong.
And so you have to have some cultural credibility to do things like this.
Netflix, to their credit, wanted some of that sort of
credibility.
And they created shows, content.
They call it content.
It's such a terrible word, but it's culture.
And so with movies, you can kind of go give a large sum
of money to somebody who graduated from the USC film school.
It's a whole thing of its own, but it's kind of like
a rich white people's thing to do.
American culture has not been so much about rich white people.
It's been about all the immigrants, all the Africans who came and brought that culture
and those rhythms to this world and created this whole new thing, American culture.
So companies can't artificially create that.
They can't just say, hey, we're here, we're going to buy it up. You've got to partner. And so, but anyway, you know,
not to denigrate, these companies are all trying and they should, and I'm sure they're asking
these questions and some of them are even making an effort. But it is partly a respect-the-culture thing:
as you are a technology person, you've got to blend your technology with cultural meaning.
How much of a role do you think the algorithm, some machine learning has in connecting the
consumer to the creator, sort of the recommender system aspect of this?
Yeah, it's a great question.
I think pretty high.
There's no magic in the algorithms, but a good recommender system is way better than
a bad recommender system.
And recommender systems was a billion dollar industry back even, you know, 10, 20 years
ago.
And it continues to be extremely important going forward.
What's your favorite recommender system, just so we can put something concrete to it?
Well, just historically, I was one of the, you know, when I first went to Amazon, you
know, I first didn't like Amazon because they put the book people out of business,
you know, the local booksellers went out of business. I've come to accept that there are, you know,
there are probably more books being sold now and more people reading them than ever before,
and the local bookstores are, sort of, coming back. So, you know, that's how economics
sometimes works. You go up and you go down. But anyway, when I finally started going there and I
bought a few books, I was really pleased
to see another few books being recommended to me that I never would have thought of.
And I bought a bunch of them, so they obviously had a good business model.
But I learned things, and I still, to this day, kind of browse using that service.
And I think lots of people get a lot out of it; you know, that is a good aspect of a recommendation
system.
I'm learning from my peers in an indirect way.
And the algorithms are not meant to impose what we learn.
It really is trying to find out what's in the data.
It doesn't work so well for other kinds of entities, but that's just the complexity of human
life. Like shirts, you know, I'm not going to get recommendations on shirts.
But that's interesting.
If you try to recommend restaurants, it's hard to do
it at scale. But a blend of recommendation systems with other economic ideas, matchings
and so on, is really, really still very open, research-wise, and there are new companies going to emerge that do that well.
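(As a concrete illustration of the people-who-bought-this-also-bought style of recommender discussed above, here is a tiny item-to-item similarity sketch over toy co-purchase data. It is a generic, textbook-style example with made-up data, not Amazon's or anyone else's actual algorithm.)

```python
# Tiny "people who bought this also bought" sketch: item-to-item similarity
# computed from co-purchase counts. Toy data; not any company's real system.
import numpy as np

# Rows = users, columns = books; 1 means that user bought that book.
purchases = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 0, 0, 1, 1],
], dtype=float)
books = ["A", "B", "C", "D", "E"]

# Cosine similarity between the item columns.
norms = np.linalg.norm(purchases, axis=0)
sim = (purchases.T @ purchases) / np.outer(norms, norms)
np.fill_diagonal(sim, 0.0)  # a book shouldn't recommend itself

def recommend(user_row, k=2):
    """Score unseen books by their similarity to the books this user bought."""
    scores = sim @ user_row
    scores[user_row == 1] = -np.inf  # drop books already purchased
    top = np.argsort(scores)[::-1][:k]
    return [books[i] for i in top]

# The first user bought A, B, and E; the scores surface C and D, weighted by
# how often they co-occur with that user's books across other users.
print(recommend(purchases[0]))
```

(Blending a similarity score like this with the matching and market ideas mentioned above, prices, two-sided constraints, scarcity, is the open research direction the conversation points to.)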
What do you think about going into the messy, difficult land of, say, politics and things like that, that YouTube and Twitter have to deal with in terms of recommendation systems,
being able to suggest, I think Facebook just launched Facebook News, so having it
recommend the kind of news that is most likely to be interesting.
Do you think this is AI-solvable, again, whatever term one uses?
Do you think it's a solvable problem for machines, or is it a deeply human problem that's unsolvable?
So I don't even think about it at that level.
I think that what's broken with some of these companies is that it's all monetization by advertising.
At least Facebook,
I don't want to critique them,
but they didn't really try to connect
a producer and a consumer in an economic way.
Right?
No one wants to pay for anything.
And so they all, you know,
starting with Google, then Facebook,
they went back to the playbook
of, you know, the television companies back in the day.
No one wanted to pay for this signal.
They will pay for the TV box,
but not for the signal,
at least back in the day.
And so advertising kind of filled that gap,
and advertising was new and interesting,
and it somehow didn't quite take over our lives.
All right, fast forward, Google provides a service
that people don't want to pay for.
And so, somewhat surprisingly, in the 90s,
they ended up making huge amounts;
they cornered the advertising market.
It didn't seem like that was gonna happen, at least to me. These little things on the right-hand side of the screen just did not seem
all that economically interesting, but that company had maybe no other choice.
The TV market was going away, and billboards and so on.
So they got it.
And I think that, sadly, Google just, it was doing so well with that and making
so much money,
they didn't think much more about, how, wait a minute, is there a producer-
consumer relationship to be set up here,
not just between us and the advertisers? Is there a market to be created?
Is there an actual market between the producer and consumer?
There, the producers, the person who created that video clip,
the person that made that website,
the person who could make more such things,
the person who could adjust it as a function of demand.
The person on the other side who's asking
for different kinds of things.
So you see glimmers of that now.
There's influencers and there's kind of a little glimmering of a market.
But it should have been done 20 years ago. It should have been thought about; it should have been created in parallel with the advertising ecosystem.
And then Facebook inherited that, and I think they also didn't think very much about that.
So fast forward, and now they are making huge amounts of money off of advertising, and the news thing and all these clicks
is just feeding the advertising, and it's all connected up to the advertiser.
So you want more people to click on certain things, because that money flows to you, Facebook.
You're very much incentivized to do that, and we've started to find it's breaking.
So people are telling you, well, we're getting into some troubles.
You try to adjust it with your smart AI algorithms, right, and figure out what are bad clicks
that maybe shouldn't be in the click-through rate,
as you say. I find that pretty much hopeless.
It does get into all the complexities of human life.
And you can try to fix it, you should,
but you could also fix the whole business model.
And the business model is, really,
are there some human producers and consumers
out there?
Is there some economic value to be liberated
by connecting them directly?
Is it such that it's so valuable
that people will be able to pay for it?
All right.
And it can be, like, a small thing,
micro, but not even that micro.
So I like the example,
suppose I'm going, next week I'm going to India,
never been to India before, right?
I have a couple of days in Mumbai.
I have no idea what to do there, right?
And I could go on the web right now and search.
It's going to be kind of hopeless.
I'm not going to find much.
I have lots of advertisers in my face.
What I really want to do is broadcast to the world
that I am going to Mumbai and have someone
on the other side of a market look at me
and there's a recommendation system there.
So I'm not looking at all possible people coming to Mumbai.
They're looking at the people who are relevant to them.
So someone in my age group, someone who kind of knows me at some level,
I give up a little privacy by that, but I'm happy because what I'm going to get
back is this person can make a little video for me.
Or they're going to write a little two-page paper on, here's the cool things that
you want to do in Mumbai this week, especially.
Right, I'm going to look at that.
I'm not going to pay a micropayment;
I'm going to pay, you know, $100 or whatever, for that.
It's real value.
It's like journalism.
As opposed to a subscription, it's that I'm going to pay that person in that moment.
The company is going to take 5% of that.
That person has now got, it's a gig economy, if you will. But thinking
a little bit behind YouTube, there were actually people who could make more of those things.
If they were connected into a market, they would make more of those things independently. You don't have to tell them what to do. You
don't have to incentivize them in any other way. And so, yeah, these companies, I don't think,
have thought long and hard about that. So, I do distinguish Facebook on the one side,
who have just not thought about these things at all, I think, thinking that AI will fix everything.
And Amazon thinks about them all the time because they were already out in the real world.
They were delivering packages to people's doors.
They were worried about a market.
They were worrying about sellers,
and some things they do are great.
Some things maybe not so great,
but they're in that business model.
And then I'd say Google sort of hovers
somewhere in between.
I don't think for a long, long time they got it.
I think they probably see that YouTube
is more pregnant with possibility
than they might have thought
and they're probably
heading that direction.
But Silicon Valley has been dominated by the Google, Facebook kind of mentality and the
advertising, not subscription, model.
And that's the core problem.
The fake news actually rides on top of that because it means that you're monetizing
with click-through rate.
And that is the core problem.
You've got to remove that.
So advertisement, if we're going to linger on that,
I mean, that's an interesting thesis.
I don't know if everyone really deeply thinks about that.
So, you're right,
the thought is the advertisement model
is the only thing we have, the only thing we'll ever have,
so we have to fix it, we have to build algorithms
that, despite that business model, you know, find the
better angels of our nature and do good by society and by the individual. But
you think we can slowly, you think, first of all, there's a difference between
should and could. So you're saying we should slowly move away from the
advertisement model and have a direct connection between the consumer and the creator.
The question I also have is, can we? Because the advertisement model is so successful now
in terms of just making a huge amount of money, and therefore being able to build a big
company that has really smart people working there that create a good service.
Do you think it's possible, and just to clarify, you think we should move away.
Well, I think we should, yeah, but we is the, you know, me.
So society.
Yeah, well, the companies.
I mean, so first of all, full disclosure,
I'm doing a day a week at Amazon
because I kind of want to learn more about how they do things.
So, you know, I'm not speaking for Amazon in any way,
but, you know, I did go there because I actually believe
they get a little bit of this
or trying to create these markets.
And they don't really use advertisement; it's not a crucial part of it.
That's a good question.
So, it has not become crucial, but it's become more and more present if you go to the Amazon
website.
And without revealing too many deep secrets about Amazon, I can tell you that a lot of people
in the company question this and there's a huge questioning going on.
You do not want a world where there's zero advertising, and that actually is a bad
world.
Okay. So here's a way to think about it. You're a company that, like Amazon, is trying to bring
products to customers, right? And the customer, at any given moment, you want to buy a vacuum cleaner,
say. You want to know what's available for me, and it's not going to be that obvious. You have to
do a little bit of work at it. The recommendation system will sort of help. But now suppose this
other person over here has just made, you know, they spent a huge amount of energy, they had a great idea, they made a great vacuum
cleaner. They really did it, they nailed it. It's an MIT, you know, whiz kid that made
a great new vacuum cleaner. All right, it's not going to be in the recommendation system. No one will
know about it. The algorithms will not find it, and AI will not fix that, okay, at all. All right, how do
you allow that vacuum cleaner to start to get in front of people, be sold?
Well, advertising. And here, what advertising is, is a signal that you believe in your product
enough that you're willing to pay some real money for it. And to me as a consumer, I look
at that signal and I say, well, first of all, I know these are not just the cheap ads we have
right now; I know that those are super cheap, pennies.
If I see an ad where, it's actually, I know the company is only doing a few of these
and they're paying real money, real money is kind of flowing,
and I see an ad, I may pay more attention to it.
I actually might want that, because I see,
hey, that guy spent money on his vacuum cleaner.
Oh, maybe there's something good there,
so I will look at it.
And so that's part of the overall information
flowing in a good market.
So advertising has a role. But the problem is, of course, that signal is now
completely gone, because it's just, you know, dominated by these tiny little things
that add up to big money for the company. You know, so I think it will just, I
think it will change, because societies just don't, you know, stick with
things that annoy a lot of people. And advertising currently annoys people
more than it provides information.
And I think that Google probably is smart enough to figure out that this is a dead end, this is a bad model, even though it's a huge amount of money, and they'll have to figure out how
to pull away from it slowly. And I'm sure the CEO there will figure it out, but they need to do it.
So if you reduce advertising, not to zero, but you reduce it, and at the same time you bring up producer-consumer, actual real value being delivered, so real money
is being paid, and they take a 5% cut, that 5% could start to get big enough to cancel
out the lost revenue from the poorer kind of advertising, and I think that a good
company will do that, will realize that.
And they're a company, you know, Facebook, you know, again, God bless them. They bring, you know,
grandmothers, you know, they bring children's pictures into grandmothers' lives. It's fantastic.
But they need to think of a new business model. And that's, that's the core problem there:
until they start to connect producer and consumer, I think they will just, just continue to make money
and then buy the next social network company and then buy the next one, and the innovation level
will not be high and the health issues will not go away.
So I apologize that we kind of keep returning to words; I don't think the exact terms matter,
but in sort of defense of advertisement, don't you think the kind of direct connection between consumer and creator,
producer, is the best, like, is what advertisement strives to do, right? So it is the best
advertisement. It's literally, now, Facebook is listening to our conversation and heard
that you're going to India and will be able to actually
start automatically making these connections for you and start giving you this offer. So, like,
I apologize if it's just a matter of terms,
but just to draw a distinction, is it possible to make advertisement just better and better and better,
algorithmically, to where it actually becomes a connection, almost?
It's a good question.
So let's put it this way. First of all,
what we just talked about, I was defending advertising.
Okay, so I was defending it as a way
to get signals into a market that don't come any other way,
and especially algorithmically.
It's a sign that someone's put money on it,
it's a sign they think it's valuable.
And if I think that someone else thinks it's valuable,
and if I trust other people,
I might be willing to listen.
I don't trust Facebook, though, who's the intermediary between this. I don't think they
care about me. Okay, I don't think they do. And I find it creepy that they know I'm going to
India next week because of our conversation.
Why do you think that is? Could you just
put your PR hat on? Why do you think you find Facebook creepy
and don't trust them, as does the majority of the population?
So, out of the Silicon Valley companies,
I saw, like, not approval ratings,
but there's a ranking of how much people trust companies,
and Facebook is in the gutter.
In the gutter, including people inside of Facebook.
So what do you attribute that to?
Because when I-
You don't find it creepy that, right now as we're talking,
I might walk out on the street right now
and some unknown person who I don't know
kind of comes up to me and says,
I hear you're going to India?
I mean, that's not even Facebook.
That's just, I want transparency in human society.
I want to have, if you know something about me,
there's actually some reason you know something about me
that's something that if I look at it later
and audit it kind of, I approve.
You know something about me because you care in some way,
there's a caring relationship, even an economic one
or something, not just that you're someone
who could exploit it in ways I don't know about
or care about or I'm troubled by or whatever.
And right now, we're at a point where that happens way too much.
And that Facebook knows things about a lot of people
and could exploit it and does exploit it at times.
I think most people do find that creepy.
It's not for them.
Facebook does not do it
because they care about them, right, in a real sense.
And they shouldn't.
They should not be a big brother caring about us.
That is not the role of a company like that.
Well, wait, not the big brother part, but the caring, the trust thing. I mean, don't those companies,
just to linger on it, because a lot of companies have a lot of information about us. I would argue that there are companies like
Microsoft that have more information about us than Facebook does, and yet we trust Microsoft more. Well, Microsoft is pivoting.
Microsoft, you know, under Satya Nadella has decided,
this is really important.
We don't want to do creepy things.
Really want people to trust us to actually only use
information in ways that they really would approve of,
that we don't decide, right?
And I'm just kind of adding that the health of a market
is that when I connect, as a consumer, to someone who's a producer,
it's not just a random producer and consumer; it's people who see each other. They don't
know each other, but they sense that if they transact, some happiness will go up on both
sides.
If a company helps me to do that, in moments of my choosing, then fine.
So, and also think about the difference between, you know, browsing versus buying, right?
There are moments in my life I just want to buy, you know, a gadget or something.
I need something for that moment. I need some ammonia for my house or something because I've got a spill.
I want to just go in. I don't want to be advertised at that moment.
I don't want to be led down various paths. You know, that's annoying. I want to just go and have it
be extremely easy to do what I want.
Other moments I might say no, it's like today I'm going to the shopping mall. I want to walk around and see things and see people and be exposed to stuff.
So I want control over that though. I don't want the company's algorithms to decide for me.
Right? I think that's the thing.
It's a total loss of control if Facebook thinks they should take the control from us of deciding when we want to have certain kinds of information and when we don't,
what information that is, how much it relates
to what they know about us that we didn't
really want them to know about us.
I don't want them to be helping me in that way.
I don't want them helping where they decide, where they have control over what I want
and when.
I totally agree.
So, Facebook, by the way, I have this optimistic thing where I think Facebook has the kind of personal information about us that could create a beautiful thing.
So, I'm really optimistic of what Facebook could do.
It's not what it's doing, but what it could do.
So, I don't see that. I think that optimism is misplaced.
Because you have to have a business model behind these things.
Yes, really.
Create a beautiful thing is really, let's be clear,
about something people would value.
And I don't think they have that business model.
And I don't think they will suddenly discover it
in, you know, a long, hot shower.
I disagree.
I disagree in terms of, you can discover
a lot of amazing things in a shower.
So I didn't say that. I said they won't. They won't do it. But in the shower, I think a lot of other people will discover it.
I think that this guy, so I should also full disclosure,
there's a company called United Masters,
which I'm on their board, and they've created
this music market.
They have a hundred thousand artists now signed on,
and they've done things like,
gone to the NBA, and the music you find behind
in the NBA clips right now is their music.
That's a company that had the right business model
in mind from the get go, executed on that.
And from day one, there was value brought to,
so here you can have a kid who made some songs,
who suddenly their songs are on the NBA website.
That's really economic value to people.
And so, you know.
So you and I differ on the optimism of being able to sort of change the direction of the Titanic, right?
Yeah, I'm older than you, so I think the Titanic's crashed.
Got it. But just to linger on it, because I totally agree with you and I just
want to know how difficult you think this problem is. So for example, I want to read some news and there's a lot of times in the day where
something makes me either smile or think in a way where I consciously think this really
gave me value.
I sometimes listen to The Daily podcast by The New York Times — way better than The New York Times themselves, by the way, for people listening.
That's like real journalism is happening
for some reason in the podcast space.
It doesn't make sense to me.
But often I listen to it 20 minutes,
and I would be willing to pay for that,
like $5, $10 for that experience.
No, absolutely.
And how difficult — that's kind of what you're getting at — is that little transaction. How difficult is it to create a frictionless system like Uber has, for example, for other things?
What's your intuition there?
So first of all, I pay a little bit of money to, you know — there's something called Quartz that does financial news. I like Medium as a site; I don't pay there, but I would.
You had a great post on medium.
I would have loved to pay you a dollar.
But I wouldn't have wanted it.
I wouldn't have wanted it per se because there should be also sites where that's not actually
the goal.
The goal is to actually have a broadcast channel that I monetize in some other way if I chose
to.
I mean, I could now.
People know about it.
I could.
I'm not doing it.
But that's fine with me.
Also, the musicians who are making all this music — I don't think the right model is that you pay a little subscription fee to them, because people can copy the bits too easily. And that's just not where the value is.
The value is that a connection was made between real human beings, then you can follow up
on that and create yet more value.
So no, I think-
There's a lot of open questions here.
Lots of open questions. But also, yeah, I do want good recommendation systems that recommend cool stuff to me.
But it's pretty hard, right? I don't want them to recommend stuff just based on my browsing history. I don't want that based on stuff they quote-unquote know about me. What's unknown about me is the most interesting.
So this is the really interesting question. We may disagree. Maybe not.
I think that I love recommender systems. and I want to give them everything about me in
a way that I trust.
But you don't, because so for example this morning I clicked on, I was pretty sleepy this morning,
I clicked on a story about the Queen of England.
I do not give a damn about the Queen of England, I really do not.
But it was clickbait.
It kind of looked funny and I had to say what the heck are they talking about?
I don't want to have my life, you know, heading in that direction. Now that's in my browsing history. The system — any reasonable system — will think that's what I'm interested in.
Right, but you're saying all the traces, all the digital exhaust or whatever — that's been kind of the model: if you collect all this stuff, you're going to figure all of us out.
Well, if you're trying to figure out,
like kind of one person, like Trump or something,
maybe you could figure him out.
But if you're trying to figure out, you know,
500 million people, you know, no way, no way.
You think so?
No, I think so.
I think we are humans are just amazing, rich, and complicated.
Every one of us has our little quirks,
everyone else has our little things that could intrig us,
that we don't even know and will intrigue us.
And there's no sign of it in our past,
but by God there it comes, and you fall in love with it.
And I don't want a company trying to figure that out for me
and anticipate that.
I want them to provide a forum, a market, a place that I kind of go to, and by hook or by crook this happens. I'm walking down the street and I hear some Chilean music being played, and I never knew I liked Chilean music.
Wow.
So there is that side, and I want them to provide a limited but interesting place to go.
And so don't try to use your AI to kind of figure me out and then put me in a world where
you figured me out.
No — create spaces for human beings where our creativity and our style will be enriched and come forward, and there will be a lot more transparency. I won't have people randomly, anonymously putting comments up, especially based on things they know about me, facts about me. You know, we are so broken right now — especially if you're a celebrity, but it's really about anybody — anonymous people are hurting lots and lots of people right now. That's part of this thing of Silicon Valley thinking that, you know, you just collect all this information and use it in a great way.
So, you know, I'm not a pessimist — I'm very much an optimist by nature — but I think that's just been the wrong path for the whole technology to take.
Be more limited, create, let humans rise up.
Don't try to replace them.
That's the AI mantra. Don't try to anticipate them, don't try to predict them, because you're not going to be able to do those things — you're going to make things worse.
Okay, so, right now, just give this a chance.
Right now, the recommender systems
are the creepy people in the shadow watching your every move.
So they're looking at traces of you. They're not
directly interacting with you. Sort of your close friends and family the way
they know you is by having conversations, by actually having interactions back
and forth. Do you think there's a place for recommender systems — sort of the next step — because you just emphasized the value of human-to-human connection, but, yeah, give a chance to AI-to-human connection.
Is there a role for an AI system to have conversations
with you in terms of, to try to figure out
what kind of music you like, not by just watching
what you listen to, but actually having a conversation,
natural language or otherwise.
Yeah, no, so I'm not against it. I just wanted to push back against it. Maybe you were saying you have optimism for Facebook — there I think it's misplaced.
I'm not defending Facebook.
Yeah, no, so good for you.
I'm rooting for it. That's a hard spot to be.
Yeah, no, good.
Human interaction on a daily basis — the context around me in my own home — is something that I don't want some big company to know about at all, but I would be more than happy to have technology help me with it.
Which kind of technology?
Well, you know, just Alexa.
Amazon Alexa?
Well, a good Alexa — Alexa done right. I think Alexa is a research platform right now more than anything else.
But Alexa done right, you know, could do things like,
I leave the water running in my garden
and I say, hey, Alexa, the water's running in my garden.
And even have Alexa figure out that that means
when my wife comes home that she should be told about that,
that's a little bit of reasoning. I wouldn't call that AI by any kind of stretch — it's a little bit of reasoning — and it actually would make my life a little easier and better.
And I wouldn't call this a wow moment, but I kind of think that overall rises human happiness
up to have that kind of thing.
But not when you're lonely, Alexa knowing loneliness.
No, no.
I don't want Alexa to be feeling intrusive and I don't want just the designer of the system
to kind of work all this out.
I really want to have a lot of control,
and I want transparency and control.
And if a company can stand up and give me that in the context of new technology, I think that it could, first of all, be way more successful than our current generation. And like I said, I mentioned Microsoft, and I really think that they're pivoting to kind of be the trusted old uncle. I think that they get that this is the way to go — that if you let people find technology that empowers them to have more control, and have control not just over privacy but over this rich set of interactions, people will like that a lot more, and that's the right business model going forward.
What does control over privacy look like? Do you think you should be able to just view all the data that—
No, it's much more than that. I mean, first of all, it should be an individual decision.
Some people don't want privacy.
They want their whole life out there.
Other people want it. Privacy is not a zero-one thing. It's not a legal thing. It's not just about which data is available and which is not.
I like to remind people that, you know, a couple of hundred years ago, there were not really big cities. Everyone lived in the countryside and villages. And in villages, everybody knew everything about you. You never had any privacy.
Is that bad?
Are we better off now?
Well, arguably no, because what did you get for that loss of at least certain kinds of
privacy?
Well, people helped each other.
Because they know everything about you, they know something's bad happening, and they will
help you with that.
Right?
And now you live in a big city, no one knows them out.
You get no help.
So it kind of depends, the answer.
I want certain people who I trust
and there should be relationships.
I should kind of manage all those.
But who knows what about me?
I should have some agency there.
It shouldn't be a drift and a sea of technology
where I know I just said,
I don't want to go reading things and checking boxes.
So I don't know how to do that.
And I'm not a privacy researcher per se.
I just, I recognize the vast complexity of this.
It's not just technology, it's not just legal scholars
meeting technologists.
There's got to be kind of a whole layers around it.
And so when I alluded to this emerging engineering field, this is a big part of it. When electrical engineering came, I wasn't around at the time, but you didn't just plug electricity into walls and it all kind of worked. You know, you have things like Underwriters Laboratories that reassure you that that plug's not going to burn up your house and that that machine will do this and that and everything.
They'll be whole people who can install things. They'll be people who can watch the installers.
There'll be a whole layers, you know, an onion of these kind of things.
And for things as deep and interesting as privacy, which is as least as
interesting as electricity, that's going to take decades to kind of work out,
but it's going to require a lot of new structures that we don't have right now.
So it's kind of hard to talk about it.
And you're saying there's a lot of money to be made if you get it right.
Absolutely.
Absolutely.
You should look at it.
A lot of money to be made in all these things that provide
human services and people recognize them as useful parts of their lives. So yeah.
So yeah, the dialogue sometimes goes from the exuberant technologists to the no-technology-is-good kind of people. And, you know, in public discourse, in newsrooms, you see too much of this kind of thing. And the sober discussions in the middle, which are the challenging ones you want to have, are where we need to be having our conversations. And, you know, actually, there's not many forums for those.
You know, that's kind of what I would look for.
Maybe I could go and I could read a comment section of something, and it would actually
be this kind of dialogue going back and forth.
You don't see much of this, right?
Which is why, actually, there's a resurgence of podcasts, of all things, because people are really hungry for conversation.
Yeah. But the technology is not helping much.
So comment sections of anything, including YouTube—
Yeah, it's not helping.
Not helping, yeah. And you think, technically speaking, it's possible to help?
I don't know the answers, but it's a less anonymity,
a little more locality, you know,
worlds that you kind of enter in
and you trust the people there in those worlds
so that when you start having a discussion, you know,
not only is that people are not gonna hurt you,
but it's not gonna be a total waste of your time
because there's a lot of wasting of time that, you know,
a lot of us, I pulled out of Facebook early on
because it was clearly gonna waste a lot of of my time even though there was some value.
And so, yeah, worlds that you're somehow in, where you know what you're getting and it kind of appeals to you — new things might happen, but you have some trust in that world.
And there's some deep interesting complex psychological aspects around anonymity
how that changes human behavior.
And indeed, quite dark.
And quite dark.
Yeah, I think a lot of us, especially those of us who really loved the advent of technology,
I love social networks when they came out.
I didn't say any negatives there at all.
But then I started seeing comment sections.
I think it was maybe, you know, one of the CNN or something.
And I started to go, wow, this darkness I just did not know about
and our technology is now amplifying it.
So sorry for the big philosophical question,
but on that topic, do you think human beings,
because you've also, out of all things,
I put in psychology too,
do you think human beings are fundamentally good?
Like all of us have good intent that could be mined or is it depending on context and environment
everybody could be evil. So my answer is fundamentally good, but fundamentally limited. All of us have very,
you know, blinkers on. We don't see the other person's pain that easily. We don't see the other
person's point of view that easily. We're very much in our own head in our own world.
And on my good days, I think the technology could open us up to more perspectives, less blinkered, and more understanding. You know, a lot of wars in human history happened because of just ignorance. They thought the other person was doing this, when that person wasn't doing this, and we have a huge amount of that. But in my lifetime, I've not seen technology really help in that way yet. And I do believe in that.
But, you know, no, I think fundamentally we're all good. People suffer, people have grievances, people have grudges, and those things cause them to do things they probably wouldn't want.
They regret it often.
So no, I think it's a, you know, part of the progress of technology is to indeed allow it to be easier
to be the real good person you actually are.
Well, but do you think individual human life or society could be modeled as an optimization
problem?
Not the way I think, typically. I mean, you're talking about one of the most complex phenomena in the whole universe.
Which, individual human life or society as a whole?
Both.
Both. I mean, the individual human life is amazingly complex.
And so, you know, optimization is kind of just one
branch of mathematics that talks about certain kind
of things, and it just feels way too limited
for the complexity of such things.
What properties of optimization problems do you think — so, do you think the most interesting problems could be solved through optimization? What kinds of properties would that surface have? Non-convexity, convexity, linearity, all those kinds of things, saddle points?
Well, so optimization is just one piece of mathematics. You know, even in our era we're aware that, say, sampling is another one — coming up with examples of something. What's optimization? What's sampling? Well, if you're in a certain kind of mindset, you can try to blend them and make them seem to be sort of the same thing, but optimization is, roughly speaking, trying to find a single point that is the optimum of a criterion function of some kind. And sampling is trying to, from that same surface, treat it as a distribution or density and find points that have high density. So I want the entire distribution in the sampling paradigm, and I want the single point that's
the best point in the optimization paradigm.
Now if you are optimizing in the space of probability measures,
the output of that could be a whole probability distribution.
So you can start to make these things the same.
But in mathematics, if you go too high up that abstraction hierarchy,
you start to lose the ability to do the interesting theorems,
so you don't try to overly over-abstract.
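A rough notational sketch of the distinction just described, with a generic criterion function f (the symbols are illustrative, not from the conversation):

```latex
% Optimization: seek the single best point of a criterion f
x^\star = \arg\min_{x} f(x)

% Sampling: treat the same surface as an (unnormalized) density and draw points
x \sim p(x), \qquad p(x) \propto \exp\{-f(x)\}
```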
So as a small tangent, what kind of world do you find more appealing — one that is deterministic or stochastic?
Well, that's easy. I mean, I'm a statistician. You know, the world is highly stochastic. I don't know what's going to happen in the next five minutes, right? What you're going to ask, what we're going to do.
Due to the uncertainty?
Due to the massive uncertainty. And so the best I can do is have some rough sense of a probability distribution on things and somehow use that in my reasoning about what to do now.
So what does the distributed-at-scale setting, when you have multi-agent systems, look like? So optimization — it makes a lot more sense, at least from my robotics perspective, for a single robot, for a single agent trying to optimize some objective function. When you start to enter the real world, this game-theoretic concept starts popping up. How do you see optimization in this — because you've talked about markets at scale — what does that look like? Do you see it as optimization? Do you see it as sampling? How should we think about it?
Yeah, so it's all blend together.
And a system designer thinking about how to build
an incentivized system will have a blend of all these things.
So, you know, a particle in a potential well is optimizing a functional called a Lagrangian. Right? Particle
doesn't know that. There's no algorithm running that does that. It just
happens. It's a description mathematically of something that helps us
understand as analysts what's happening. Right? And so the same will happen when
we talk about, you know, mixtures of humans and computers and markets and so
on so forth. There'll be certain principles that allow us to understand what's happening, whether or not the actual
algorithms are being used by any sense is not clear.
Now at some point I may have set up a multi-agent or market kind of system, and I'm now thinking
about an individual agent in that system, and they're asked to do some task and they're
incentivized in some way — they get certain signals and they have some utility — maybe at that point they just won't know the answer. They may have to optimize to find an answer. So an optimizer could be embedded inside of an overall market.
You know, game theory is very, very broad. It is often studied very narrowly for certain kinds
of problems, but it's roughly speaking. I don't know what you're going to do.
So I kind of anticipate that a little bit, and you anticipate what I'm going to
anticipate in, and we kind of go back and forth in our own minds.
We run kind of thought experiments.
You've talked about this interesting point in terms of game theory. So, you know, most optimization problems really hate saddle points. Maybe you can describe what saddle points are, but I've heard you mention that there's a branch of optimization where you could try to explicitly look for saddle points as a good thing.
Oh, not optimization. That's just game theory. There's all kinds of different equilibria in game theory, and some of them are highly explanatory of behavior.
They're not attempting to be algorithmic.
They're just trying to say,
if you happen to be at this equilibrium, you would see certain
kind of behavior, and we see that in real life.
That's what an economist wants to do, especially a behavioral economist.
In continuous differential game theory, you're in continuous spaces.
Some of the simplest equilibrium are saddle points.
A Nash equilibrium is a saddle point.
It's a special kind of saddle point. So, classically in game theory, you were trying to find Nash equilibrium.
And algorithmic game theory, you're trying to find algorithms that would find them.
And so, you're trying to find saddle points. I mean, so that's literally what you're trying to do.
But, you know, any economist knows that Nash equilibria have their limitations. They are definitely not that explanatory in many situations. They're not what you really want. There's other kinds of equilibria, and there's names associated with these because they came from history, with certain people working on them. But there will be new ones emerging.
So, you know, one example is a Stackelberg equilibrium. So, you know, in Nash, you and I are both playing this game against each other or for each other — maybe it's cooperative — and we're both going to think about it, then we're going to decide, and we're going to do our thing simultaneously. In a Stackelberg, no, I'm going to be the first mover. I'm going to make a move. You're going to look at my move and then you're going to make yours. Now, since I know you're going to look at my move, I anticipate what you're going to do, and so I don't do something stupid. But then I know that you are also anticipating me, so we're kind of going back and forth — but there is then a first-mover thing. And so there is a different equilibrium. So just mathematically, yeah, these things have certain topologies, certain shapes — they're like saddle points — and then dynamically, how do you move towards them, how do you move away from things? So some of these questions have answers; they've been studied.
Others do not, especially if it becomes stochastic, especially if there's large numbers
of decentralized things.
There's just young people getting in this field who kind of think it's all done because
we have TensorFlow.
Well, no, these are all open problems.
They're really important and interesting.
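A rough sketch of the two equilibrium notions just mentioned, for a two-player game with payoffs u_1(x, y) and u_2(x, y); the notation is illustrative rather than anything from the conversation. In the zero-sum case (u_2 = -u_1), a Nash equilibrium is exactly a saddle point of u_1, as noted above.

```latex
% Nash: players move simultaneously; neither can gain by deviating alone
x^\star \in \arg\max_x u_1(x, y^\star), \qquad
y^\star \in \arg\max_y u_2(x^\star, y)

% Stackelberg: the leader moves first, anticipating the follower's best response
y^\star(x) \in \arg\max_y u_2(x, y), \qquad
x^\star \in \arg\max_x u_1\bigl(x, y^\star(x)\bigr)
```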
It's about strategic settings.
How do I collect data?
Suppose I don't know what you're gonna do
because I don't know you very well, right?
Well, I got to collect data about you.
So maybe I want to push you in a part of the space
where I don't know much about you so I can get data.
And then later I'll realize that you'll never go there
because of the way the game is set up.
But that's part of the overall data analysis context.
Yeah, even the game of poker is a fascinating space.
Well, whenever there's an adversary and a lack of information, it's a super exciting space.
Yeah. Just to linger on optimization for a second: when we look at deep learning, it's essentially minimization of a complicated loss function. So is there something insightful or hopeful that you see
in the kinds of loss-function surfaces that deep learning in the real world is trying to optimize over? Is there something interesting, or is it just the usual kind of problems of optimization?
I think, from an optimization point of view, that surface, first of all, is pretty smooth. And secondly, if it's over-parameterized, there's kind of lots of paths down to a reasonable optimum. And so getting downhill to an optimum is viewed as not as hard as you might have expected in high dimensions. The fact that some optima tend to be really good ones and others not so good, and that you sometimes find the good ones, sort of still needs explanation. But the particular surfaces are coming from the particular generation of neural nets. I kind of suspect those will change. In 10 years it will not be exactly those surfaces; there'll be some others, and optimization theory will help contribute to why — why other surfaces, or why other
algorithms. Layers of arithmetic operations with a little bit of non-linearity,
that didn't come from neuroscience per se. I mean, maybe in the minds of some of the people
working on it, they were thinking even about brains, but they were arithmetic circuits
in all kinds of fields, computer science, control theory, and so on. And that layers of these
could transform things in certain ways, and that if it's smooth, maybe you could find good parameter values — you know, it's a big discovery that it's working, that it's able to work at this scale. But I don't think that we're stuck with that, and we're certainly not stuck with that because we're understanding the brain.
So in terms of on the algorithm side, sort of gradient descent, do you think we're stuck
with gradient descent?
Is variance of it, what variance do you find interesting?
Or do you think there'll be something else invented
that is able to walk all over these optimization spaces
in more interesting ways?
So there's a co-design of the surface
and the architecture and the algorithm.
So if you just ask if we stay with the kind of architectures
that we have now — not just neural nets, but phase retrieval architectures or matrix completion architectures and so on — I think we've kind of come to a place where stochastic gradient algorithms are dominant, and there are versions that are a little better than others. They have more guarantees, they're more robust, and so on. And there's ongoing research to kind of figure out which is the best one for which situation.
But I think that that'll start to co-evolve, that that'll put pressure on the actual architecture.
And so we shouldn't do it in this particular way, we should do it in a different way,
because this other algorithm is now available if you do it in a different way.
So that I can't really anticipate that co-evolution process.
But, you know, gradients are amazing mathematical objects. A lot of people who sort of study them more deeply mathematically are kind of shocked about what they are and what they can do. I mean, think about it this way. Suppose that I tell you, if
you move along the x-axis,
you go uphill in some objective by three units.
Whereas if you move along the y-axis,
you go uphill by seven units.
Now I'm going to only allow you to move a certain unit
distance.
What are you going to do?
Well, most people will say, I'm going to go along the y-axis.
I'm getting the biggest bang for my buck, you know, and my buck is only one unit, so I'm going to put
all of it in the y-axis, right? And why should I even take any of my strength, my step size,
and put any of it in the x-axis, because I'm getting less bang for my buck? That seems
like a completely clear argument, and it's wrong. Because the gradient direction is not to go along the y-axis — it's to take a little bit of the x-axis as well.
And to understand that, you have to know some math.
And so even a trivial, so-called operator like the gradient
is not trivial and so exploiting its properties
is still very, very important.
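A tiny numerical check of the example above — a unit step along the gradient direction beats putting the whole step on the steeper axis; this is only a sketch using the same made-up rates (3 and 7) from the conversation:

```python
import numpy as np

g = np.array([3.0, 7.0])                      # uphill rates along x and y

gain_y_only = 7.0                             # whole unit step on the steeper (y) axis
gain_gradient = g @ (g / np.linalg.norm(g))   # unit step along g: equals ||g|| = sqrt(58)

print(gain_y_only, gain_gradient)             # 7.0  vs  ~7.62
```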
Now, we know that just plain gradient descent has got all kinds of problems. It gets stuck in many ways, and it doesn't have, you know, good dimension dependence and so on. So my own line of work recently has been about what kinds of stochasticity — how can we get good dimension dependence, how can we do the theory of that. And we've come up with pretty favorable results with certain kinds of stochasticity. We have sufficient
conditions generally. We know if you do this, we
will give you a good guarantee.
We don't have necessary conditions that it must be done a certain way in general.
So, stochasticity — how much randomness to inject into the walking along the gradient? And what kind of randomness? Why is randomness good in this process? Why is stochasticity good?
Yeah, so I can give you simple answers, but in some sense, again, it's kind of amazing. Stochasticity just — you know, particular features of a surface that could have hurt you if you were doing one thing deterministically won't hurt you, because by chance there's very little chance that you would get hurt. And so here, stochasticity saves you from some of the particular features of surfaces. In fact, if you think about surfaces that are discontinuous in the first derivative, like the absolute value function, you will go down and hit that point where there's non-differentiability, and if you're running a deterministic algorithm, at that point you can really do something bad. Whereas stochasticity just means it's pretty unlikely that's going to happen, that you're going to hit that point. So it's, again, not trivial to analyze, especially in higher dimensions — our intuitions aren't ever that good about it — but it has properties that are very appealing in high dimensions, for a large number of reasons. So it's all part of the mathematics — that's what's fun to work on in the field, as you get to understand this mathematics. Long story short, partly empirically it was discovered that stochastic gradient is very effective, and theory followed, I'd say, but I don't see us moving clearly away from that.
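A minimal sketch of the contrast being described — plain gradient descent versus a stochastic (mini-batch) gradient method on a generic least-squares objective. The data, step size, and loss are illustrative assumptions, not anything specific from Jordan's work:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 20))
b = rng.normal(size=1000)

def full_grad(w):
    # exact gradient of the average squared error over the whole data set
    return A.T @ (A @ w - b) / len(b)

def stoch_grad(w, batch=32):
    # gradient on a random mini-batch: a noisy but cheap estimate of full_grad
    idx = rng.integers(0, len(b), size=batch)
    return A[idx].T @ (A[idx] @ w - b[idx]) / batch

w_gd, w_sgd = np.zeros(20), np.zeros(20)
for t in range(500):
    w_gd  -= 0.01 * full_grad(w_gd)    # deterministic step
    w_sgd -= 0.01 * stoch_grad(w_sgd)  # stochastic step; the injected noise averages out
```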
What's the most beautiful, mysterious, profound idea to you in optimization?
I don't know the most, but let me just say that, you know, Nesterov's work on Nesterov
acceleration to me is pretty surprising and pretty deep.
Can you elaborate?
Well, Nesterov acceleration is just that. Suppose that we are going to use gradients to move around in space, for the reasons I've alluded to — they're nice directions to move. And suppose that I tell you that you're only allowed to use gradients. You're a local person; you can only sense kind of the change in the surface.
But I'm going to give you kind of a computer that's able to store all your previous gradients.
And so you start to learn something about the surface.
And I'm going to restrict you to maybe move in the direction of like a linear span of all the gradients.
So you can't kind of just move in some arbitrary direction.
Right. So now we have a well-defined mathematical complexity model.
There's a certain classes of algorithms that can do that,
and others that can't.
And we can ask for certain kinds of surfaces,
how fast can you get down to the optimum?
So there's an answer to these.
So for a smooth convex function, there's an answer, which is 1 over the number of steps squared. You will be within a ball of that size after k steps. Gradient descent in particular has a slower rate: it's 1 over k, okay? So you could ask, is gradient descent actually, even though we know it's a good algorithm, the best algorithm? The answer is no — well, it wasn't clear yet, because 1 over k squared is a lower bound; that's presumably the best you can do. Gradient descent gets 1 over k, but is there something better? And so I think it was a surprise to most that Nesterov discovered a new algorithm that's got two pieces to it. It uses two gradients and puts those together in a certain kind of obscure way.
And the thing doesn't even move downhill all the time.
It sometimes goes back uphill.
And if you're a physicist,
that kind of makes some sense, you're building up some momentum.
And that is kind of the right intuition,
but that intuition is not enough to understand kind of how to do it,
and why it works.
But it does.
It achieves one over a K-squared,
and it has a mathematical structure,
and still, kind of to this day, a lot of us are writing papers and trying to explore that and understand it.
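A minimal sketch of Nesterov's accelerated gradient method in one standard form, next to plain gradient descent; the toy quadratic and step size are illustrative assumptions. On smooth convex problems the accelerated iterates attain the O(1/k^2) rate discussed above, versus O(1/k) for plain gradient descent:

```python
import numpy as np

def grad_descent(grad, x0, step, iters):
    x = x0.copy()
    for _ in range(iters):
        x = x - step * grad(x)        # plain gradient step: O(1/k) on smooth convex f
    return x

def nesterov(grad, x0, step, iters):
    x, y = x0.copy(), x0.copy()
    for k in range(1, iters + 1):
        x_next = y - step * grad(y)                    # gradient step from look-ahead point y
        y = x_next + (k - 1) / (k + 2) * (x_next - x)  # momentum: can even move uphill
        x = x_next
    return x                                           # O(1/k^2) on smooth convex f

Q = np.diag([1.0, 10.0])                               # toy quadratic with gradient Qx
grad = lambda x: Q @ x
print(grad_descent(grad, np.array([5.0, 5.0]), 0.05, 200))
print(nesterov(grad, np.array([5.0, 5.0]), 0.05, 200))
```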
So there's lots of cool ideas in optimization, but just kind of using gradients, I think, is number one — that goes back, you know, 150 years. And then Nesterov, I think, has made a major contribution with this idea.
So, like you said, gradients themselves are in some sense mysterious.
Yeah, they're not as trivial as they seem. Coordinate descent is more trivial: you just pick one of the coordinates and move along that one. That's how our human minds think. And gradients are not that easy for a human mind to grapple with.
An absurd question, but what is statistics?
So, here it's a little bit — it's somewhere between math and science and technology. It's somewhere in that convex hull. So it's a set of principles that allow you to make inferences that have got some reason to be believed, and also principles that allow you to make decisions where you can have some reason to believe you're not going to make errors. All that requires some assumptions: what do you mean by an error, what do you mean by the probabilities? But after you start making some assumptions, you're led to conclusions that, yes, I can guarantee that if you do this in this way, your probability of making an error will be small, your probability of continuing to make errors over time will be small, and the probability that you found something real will be high.
So decision making is a big part of it?
So, the original statistics — a short history is that it goes back, as a formal discipline, 250 years or so.
It was called inverse probability because around that era,
probability was developed
sort of especially to explain gambling situations.
Of course.
And interesting.
So you would say, well, given the state of nature is this,
there's a certain roulette board that has a certain mechanism
and it, what kind of outcomes do I expect to see?
And especially if I do things long, long amounts of time,
what outcomes do I see, and the physicists start to pay attention to this.
And then people said, well, let's turn the problem around: if I saw certain outcomes, could I infer what the underlying mechanism was?
That's an inverse problem.
And in fact, for quite a while,
statistics was called inverse probability.
That was the name of the field.
And I believe that it was Laplace, who was working in Napoleon's government, who needed to do a census of France, learn about the people there. So he went and gathered data, and he analyzed that data to determine policy, and said, let's call this field that does this kind of thing statistics, because the word state is in there — in French, that's état — it's the study of data for the state. Anyway, that caught on, and it's been called statistics ever since. But by the time it got formalized, it was sort of in the '30s.
And around that time, there was game theory and decision theory developed nearby.
People in that era didn't think of themselves as either computer science or statistics or control or econ.
They were all of the above. And so, you know, von Neumann is developing game theory, but also thinking of that as decision theory. Wald was an econometrician developing decision theory, and then, you know, turning that into statistics.
And so it's all about, here's not just data
and you analyze it, here's a loss function,
here's what you care about,
here's the question you're trying to ask.
Here is a probability model,
and here's the risk you will face
if you make certain decisions.
And to this day, in most advanced statistical curricula, you teach decision theory as the starting point, and then it branches out, and the two branches are Bayesian and frequentist. But it's all about decisions.
In statistics, what is the most beautiful, mysterious, maybe surprising idea that you've come across?
Yeah, good question.
I mean, there's a bunch of surprising ones. There's something that's way too technical for this setting, but something called James-Stein estimation, which is kind of surprising and really takes time to wrap your head around.
Can you try to maybe—
I don't even want to try. Let me just say, a colleague, Stephen Stigler at the University of Chicago, wrote a really beautiful paper on James-Stein estimation, which helps — it's viewed as a paradox; it kind of defeats the mind's attempts to understand it, but you can, and Steve has a nice perspective on that.
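For reference — and only as a pointer, since the details are deferred to Stigler's paper — the classical statement is roughly the following: for a normal mean in dimension p >= 3, shrinking the obvious estimate toward the origin strictly improves the total squared-error risk, which is the paradoxical part.

```latex
X \sim N(\theta, I_p),\; p \ge 3:
\qquad
\hat{\theta}_{\mathrm{JS}} = \Bigl(1 - \tfrac{p-2}{\lVert X \rVert^{2}}\Bigr) X,
\qquad
\mathbb{E}\lVert \hat{\theta}_{\mathrm{JS}} - \theta \rVert^{2}
< \mathbb{E}\lVert X - \theta \rVert^{2}
\;\; \text{for all } \theta .
```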
So, one of the troubles with statistics is that, like in physics, or in quantum physics, you have multiple interpretations. There's wave-particle duality in physics, and you get used to that over time, but it still kind of haunts you that you don't really, you know, quite understand the relationship. The electron's a wave; the electron's a particle. Well, the same thing happens here. There's Bayesian ways of thinking and frequentist, and they are different. They sometimes become sort of the same in practice, but they are philosophically different. And in some practice, they are not the same at all. They give you rather different answers. And so it is very much like wave-particle duality,
and that is something you have to kind of get used to in the field.
Can you define Bayesian and frequentist?
Yeah. In decision theory — I have a video that people could see, it's called "Are You a Bayesian or a Frequentist?", which kind of helps try to make it really clear.
It comes from decision theory. So, you know, decision theory, you're talking about loss functions,
which are a function of data, X, and parameter theta.
They are a function of two arguments. Okay? Neither one of those arguments is known. You don't know the data a priori — it's random. And the parameter is unknown.
All right, so you have this function of two things you don't know,
and you're trying to say, I want that function to be small.
I want small loss, all right?
Well, what are you going to do?
So you sort of say, well, I'm going to average over
these quantities or maximize over them or something so that,
you know, I turn that uncertainty into something certain.
So you could look at the first argument and average over it, or you could look at the second argument and average over it. That's Bayesian versus frequentist. So the frequentist says, I'm going to look at the X, the data, and I'm going to take that as random, and I'm going to average over its distribution. So I take the expected loss over X; theta is held fixed. Right, that's called the risk. And so it's looking at all the data sets you could get, and saying how well a certain procedure will do under all those data sets. That's called a frequentist guarantee, right?
So I think of this very appropriate when you're building a piece of software and you're
shipping it out there and people are to use it on all kinds of data sets.
You want to have a stamp of guarantee on it that as people run it on many, many data sets
that you never even thought about, that 95% of the time it will do the right thing. Perfectly reasonable.
The Bayesian perspective says, well, no, I'm going to look at the other argument of the loss function, the theta part. That's unknown, and I'm uncertain about it. So I could have my own personal probability for what it is — how tall are people out there, say, if I'm trying to infer the average height of the population? Well, I have an idea roughly what the height is. So I'm going to average over the theta. So now that loss function, again, one argument's gone; now it's a function of X. And that's what a Bayesian does: they say, well, let's just focus on the particular X we got, the data set we got — we condition on that. Conditioning on the X, I say something about my loss. That's the Bayesian approach to things.
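A compact notational sketch of the two averages just described, with loss L(x, theta) and, for the Bayesian, a prior pi and its posterior given the data; the notation is illustrative:

```latex
% Frequentist: hold theta fixed, average the loss over data sets (the risk)
R(\theta) = \mathbb{E}_{X \mid \theta}\bigl[\, L(X, \theta) \,\bigr]

% Bayesian: hold the observed data x fixed, average over theta under the posterior
\rho(x) = \mathbb{E}_{\theta \sim \pi(\theta \mid x)}\bigl[\, L(x, \theta) \,\bigr]
```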
And the Bayesian will argue that it's not relevant to look
at all the other data sets you could have gotten
and average over them — the frequentist approach. It's really only the data set you got, all right? And I do agree with that, especially in situations where you're working with a scientist: you can learn a lot about the domain, and you're really only focused on certain kinds of data, and you gathered your data, and you make inferences. I don't fully agree with it, though, in the sense that there are needs for frequentist guarantees. You're writing software, people are using it out there, you want to say something. So these two things fight each other a little bit, but they have to blend.
So long story short, there's a set of ideas that are right in the middle, they're called
empirical bays.
And empirical Bayes sort of starts with the Bayesian framework, which is kind of arguably philosophically more reasonable and kosher. You write down a bunch of the math that kind of flows from that, and then you realize there's a bunch of things you don't know — because it's the real world and you don't know everything — so you're uncertain about certain quantities. At that point, you ask, is there a reasonable way to plug in an estimate for those things? And in some cases there's quite a reasonable thing to plug in: there's a natural thing you can observe in the world that you can plug in, and then do a little bit more mathematics and assure yourself it's really good.
So is it the math, or is it based on human expertise? Which are the good things?
Well, they're both going in.
The Bayesian framework allows you
to put a lot of human expertise in.
Yeah.
But the math kind of guides you along that path
and then kind of reassures you at the end.
You could put that stamp of approval.
Under certain assumptions, this thing will work.
So you asked question, what's my favorite,
you know, what's the most surprising nice idea?
So one that is more accessible
is something called
false discovery rate, which is, you know,
you're making not just one hypothesis test
or making one decision, you're making a whole bag of them.
And in that bag of decisions, you look at the ones
where you made a discovery, you announced
that something interesting had happened, all right?
That's gonna be some subset of your big bag.
In the ones you made a discovery,
which subset of those are bad? They are false discoveries. You'd like the fraction of false discoveries among your discoveries to be small. That's a different criterion than accuracy or precision or recall or sensitivity and specificity — it's a different quantity. Those latter ones, almost all of them, have more of a frequentist flavor. They say, given that the null hypothesis is true, here's what I would expect to get. Or given that the alternative is true, here's what I would get.
So it's kind of going forward from the state of nature to the data.
The Bayesian goes the other direction from the data back to the state of nature.
And that's actually what false discovery rate is. It says, given you made a discovery — okay, that's conditioned on your data — what's the probability of the hypothesis? That's going the other direction. And so the classical frequentist would look at that and say, I can't know that; there's some prior needed in that. And the empirical Bayesian goes ahead and plows forward and starts writing down these formulas, and realizes at some point that some of those things can actually be estimated in a reasonable way. And so it's kind of a beautiful set of ideas. So this kind of line of argument has come out — it's certainly not mine; it sort of came out from Robbins around 1960. Brad Efron has written beautifully about this in various papers and books. And FDR is, you know, Benjamini and Hochberg in Israel; John Storey did the Bayesian interpretation, and so on.
So I've just absorbed these things over the years and find it a very healthy way to think
about statistics.
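A minimal sketch of the classical Benjamini–Hochberg procedure, the standard way of controlling the false discovery rate just described; the p-values below are made up for illustration:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of discoveries, controlling FDR at level alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    sorted_p = p[order]
    # largest k (1-indexed) with p_(k) <= (k / m) * alpha
    below = sorted_p <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True      # declare the k smallest p-values discoveries
    return reject

pvals = np.concatenate([[1e-4, 5e-4, 3e-3], np.random.uniform(size=50)])
print(benjamini_hochberg(pvals, alpha=0.05))
```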
Let me ask you about intelligence to jump slightly back out into philosophy, perhaps.
You said — maybe you can elaborate — but you said that even defining the question of what is intelligence is a very difficult one. Is it a useful question? Do you think we'll one day understand the fundamentals of human intelligence and what it means — have good benchmarks for general intelligence that we put before our machines?
So I don't work on these topics so much.
You're really asking a question for a psychologist, really, and I studied some, but I don't consider
myself, at least an expert at this point.
You know, a psychologist aims to understand human intelligence.
I think the psychologists I know are fairly humble about this. They might try to understand how a baby understands whether something is a solid or a liquid, or whether something is hidden or not; how the child starts to learn the meaning of certain words, what's a verb, what's a noun, and slowly but surely figures out things. But humans' ability to take a really complicated environment, reason about it, abstract about it, find the right abstractions, communicate about it, interact, and so on, is just, you know, really staggeringly rich and complicated. And so, I think in all humility,
we don't think we're kind of aiming for that
in the near future.
And certainly psychologists doing experiments
with babies in the lab or with people talking
has a much more limited aspiration.
And, you know, Kahneman and Tversky would look at our reasoning patterns. They're not deeply understanding all of how we do our reasoning, but they're sort of saying, here's some oddities about the reasoning, and some things you should think about.
But also, as I emphasize in some things I've been writing about — AI, the revolution hasn't happened yet.
Quite a blog post.
I've been emphasizing that if you step back and look at intelligent systems of any kind
in whatever you mean by intelligence, it's not just the humans or the animals or the plants or whatever.
So a market that brings goods into a city, food to restaurants or something every day,
is a system.
It's a decentralized set of decisions, looking at it from far enough away, it's just like
a collection of neurons.
Every neuron is making its own little decisions, presumably in some way. If you step back enough, every little part of an economic system is making its own decisions. And just like with a brain — who knows whether any individual neuron knows what the overall goal is, right? But something happens at some aggregate level. Same thing with the economy: people eat in a city. And it's robust. It works at all scales, from small villages to big cities. It's been working for thousands of years. It works rain or shine, so it's adaptive. So all of those are, you know, adjectives one tends to apply to intelligent systems: robust, adaptive, you don't need to keep adjusting it, self-healing and whatever.
Plus not perfect, you know,
intelligences are never perfect and markets are not perfect.
But I do not believe, in this era, that you can say, well, our computers or our humans are smart, but no, markets are not. No, markets are. So they are intelligent. Now, we humans didn't evolve to be markets. We've been participating in them, but we are not ourselves a market per se. The neurons could be viewed as a market — there's an economic neuroscience kind of perspective.
That's interesting to pursue all that.
The point though is that if you were to study humans and really be the world's best psychologists
study for thousands of years and come up with the theory of human intelligence, you might
have never discovered principles of markets, you know, supply demand curves and matching
and auctions and all that.
Those are real principles, and they lead to a form of intelligence that's maybe not human intelligence. It's arguably another kind of intelligence. There are probably third kinds of intelligence, or fourth, that none of us are really thinking too much about right now. And all of those are relevant to computer systems in the future — certainly the market one is relevant right now — whereas understanding human intelligence, it's not so clear that it's relevant right now. Probably not.
So if you want general intelligence, whatever one means by that, or understanding intelligence
and a deep sense and all that, it definitely has to be not just human intelligence.
It's got to be this broader thing.
That's not a mystery — markets are intelligent. So it's not just in a philosophical sense to say we've got to move beyond human intelligence. That sounds ridiculous, but it's not.
And in that blog post, you define different kinds, like intelligent infrastructure, II, which I really like — it's some of the concepts you've just been describing.
Do you see, if we see Earth, human civilization, as a single organism, do you think the intelligence of that organism — when you think from the perspective of markets and intelligent infrastructure — is increasing? Is it increasing linearly? Is it increasing exponentially? What do you think the future of that intelligence is?
Yeah, I don't know.
I don't tend to answer questions like that because, you know, that's science fiction.
I was hoping to catch you off guard. Well, again, because it's so far in the future, it's fun to ask — and, like you said, predicting the future is really nearly impossible. But say, as an axiom, one day we create a human-level or superhuman-level intelligence, not at the scale of markets, but at the scale of an individual. What do you think it would take to do that, or maybe to ask another
question, is how would that system be different than the biological human beings that we see
around us today?
Is it possible to say anything interesting to that question, or is it just a stupid question?
It's not a stupid question, but it is science fiction.
Science fiction.
And so I'm totally happy to read science fiction and think about it from time to time in my own life. I love that there was this brain-in-a-vat kind of little thing that people were talking about when I was a student.
I remember, imagine that between your brain and your body, there's a bunch of wires, right?
And suppose that every one of them was replaced
with a literal wire.
And then suppose that each wire was turned into actually a little wireless link — there's a receiver and a sender. So the brain has got all the senders and receivers on all of its exiting axons, and all the dendrites down in the body are replaced with senders and receivers.
Now you could move the body off somewhere
and put the brain in a vat.
Right?
And then you could do things like start killing off those senders and receivers one by one. And after you've killed off all of them, where is that person? They thought they were out in the body, walking around in the world, and they've moved on.
So those are science-fiction things.
Those are fun to think about.
It's just intriguing about where is, what is thought, where is it, and all that.
And I think every 18-year-old, it's to take philosophy classes and think about these things.
And I think that everyone should think about what could happen in society that's kind of
bad and all that.
But I really don't think that's the right thing for most of us that are my age group to
be doing and thinking about.
I really think that we have so many more pressing, you know, present challenges and dangers and real things to build and all that, such that, you know, spending too much time on science fiction, at least in public forums like this, I think is not what we should be doing.
Maybe over beers in private.
That's right. Well, I'm not going to broadcast where I have beers, because this is going to go on Facebook and there will be a whole lot of people showing up there.
But yeah, I love Facebook, Twitter, Amazon, YouTube. I'm optimistic and hopeful, but maybe, maybe I don't have grounds for such optimism and hope.
Let me ask — you've mentored some of the brightest, some of the seminal figures in the field. Can you give advice to people, to young undergrads, today? What does it take — you know, advice on their journey — if they're interested in machine learning and AI, and in the ideas of markets from economics and psychology and all the kinds of things that you're exploring? What steps should they take on that journey?
Well, yeah, first of all, the doors open
and second it's a journey.
I like your language there.
It is not that you're so brilliant
and you have great brilliant ideas
and therefore that's just, you know,
that's how you have success
or that's how you enter into the field.
It's that you apprentice yourself, you spend a lot of time, you work on hard things, you try and pull back and be as broad as you can, you talk to lots of people.
And it's like entering any kind of a creative community.
There's years that are needed and human connections are critical to it.
So think about being a musician or being an artist or something — it's not that immediately, from day one, you're a genius and therefore you do it. No, you, you know, practice really, really hard on the basics, and you're humble about where you are, and you realize you'll never be an expert on everything. So you kind of pick, and there's a lot of randomness and a lot of, um, luck. But luck just kind of picks out which branch of the tree you go down — you'll go down some branch.
So yeah, it's a community.
So the graduate school is, I still think,
is one of the wonderful phenomena that we have in our world.
It's very much about apprenticeship with an advisor.
It's very much about a group of people you belong to.
It's a four or five year process.
So there's plenty of time to start from kind of nothing and come up to something — you know, more expertise — and then have your own creativity start to flower, even surprise your own self. And it's a very cooperative endeavor. I think a lot of people think of science as highly competitive, and in some other fields it might be more so; here, it's way more cooperative than you might imagine. And people are always teaching each other something, and people are always more than happy to be clear about that. So, I feel I'm an expert on certain kinds of things, but I'm very much not an expert on lots of other things, and a lot of them are relevant, and a lot of them are things I should know — but "should", in some sense, I, you know, don't. So I'm always willing to reveal my ignorance to people around me so they can teach me things.
And I think a lot of us feel that way about our field.
So it's very cooperative.
I might add, it's also very international
because it's so cooperative.
We see no barriers.
And so the nationalism that you see, especially in the current era, is just at odds with the way that most of us think about what we're doing here. This is a human endeavor, and we cooperate and are very much trying to do it together for the benefit of everybody. So last question: where and how and why did you learn French, and which language is more beautiful, English or French? Great question.
So first of all, I think Italian is actually more beautiful than French and English, and I also speak that. I'm married to an Italian, and I have kids, and we speak Italian. Anyway, all kidding aside, every language allows you to express things a bit differently, and one of the great fun things to do in life is to explore those things.
So in fact, when kids or teens or college students ask me what to study, I say, well, follow your heart; certainly do a lot of math, math is good for everybody, but do some poetry and do some history and do some language too. You know, throughout your life you'll want to be a thinking person, and you'll want to have done that. For me, yeah, French I learned when I was, I'd say, a late teen.
I was living in the middle of the country in Kansas, and not much was going on in Kansas,
with all due respect to Kansas.
And so my parents happened to have some French books on the shelf, and just in my boredom, I pulled them down and found it was fun. And I kind of learned the language by reading. And when I first heard it spoken, I had no idea what was being said, but I realized I somehow knew it from some previous life. And so I made the connection.
But then I traveled, and I just loved to go beyond my own barriers and my comfort zone or whatever. And I found myself on trains in France, next to, say, older people who had lived a whole life of their own, and the ability to communicate with them was special. And the ability to also see myself in other people's shoes, and have empathy, and kind of work on the language as part of that.
So after that kind of experience, and also
embedding myself in French culture, which is you know quite quite amazing
You know languages are rich not just because there's something inherently beautiful about it
But it's all the creativity that went into it. So I learned a lot of songs red poems red books
And then I was here, actually at MIT, where we're doing the podcast today, as a young professor, you know, not yet married and not having a lot of friends in the area. So I was kind of a bored person. I heard a lot of Italian around; there happened to be a lot of Italians at MIT, like Italian professors, for some reason. And so I was kind of vaguely understanding what they were talking about. I said, well, I should learn this language too. So I did.
And then later I met my spouse, and, you know, Italian became even more important in my life. But I go to China a lot these days, I go to Asia, I go to Europe, and every time I go, I'm kind of amazed by the richness of human experience. If you haven't traveled, you don't have any idea how amazing and rich it is, and I love the diversity. It's not just a buzzword to me, it really means something. I love, you know, embedding myself in other people's experiences.
I think I've said in some interview at some point that if I had, you know, millions of dollars and infinite time or whatever, what would I really work on if I really wanted to do AI? And for me, that is natural language, and really doing it right, you know, a deep understanding of language. That's to me an amazingly interesting scientific challenge, and one we're very far away from. But good natural language people are really invested in that; I think a lot of them see that's where the core of AI is. If you understand that, you really help human communication; you understand something about the human mind, the semantics that come out of the human mind. And I agree. I think that will take such a long time. So I didn't do that in my career, just because I kind of was behind in the early days; I didn't know enough of that stuff. I was at MIT, and I didn't learn much about language, and at some point it was too late to spend a whole career doing that. But I admire that field. And so in my little way, by learning languages, that part of my brain has been trained up.
Yeah, and he was right. You truly are the Miles Davis of machine learning. I don't think there's a better place to end it. Michael, it's been a huge honor talking to you today. Merci beaucoup.
All right, it's been my pleasure. Thank you.
Thanks for listening to this conversation with Michael I. Jordan, and thank you to our presenting sponsor, Cash App. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcasts,
support it on Patreon, or simply connect with me on Twitter at Lex Friedman.
And now, let me leave you with some words of wisdom from Michael I. Jordan, from his blog post titled Artificial Intelligence: The Revolution Hasn't Happened Yet, calling for broadening the scope of the AI field.
We should embrace the fact that what we are witnessing is the creation of a new branch
of engineering.
The term engineering is often invoked in a narrow sense, in academia and beyond, with overtones of cold, affectless machinery and negative connotations of loss of control by humans. But an engineering discipline can be what we want it to be. In the current era, we have a real opportunity to conceive of something historically new: a human-centric engineering discipline.
I'll resist giving this emerging discipline a name, but if the acronym AI continues to be used, let's be aware of the very real limitations of this placeholder.
Let's broaden our scope, tone down the hype, and recognize the serious challenges ahead.
Thank you.