The Knowledge Project with Shane Parrish - Kenneth Stanley: Set The Right Objectives
Episode Date: October 4, 2022. Artificial intelligence researcher and author Kenneth Stanley has argued that “as soon as you create an objective, you ruin your ability to reach it.” So what should you consider when thinking about your objectives, and what will set you up for success? On this episode Stanley discusses how to set the right objectives for your life, why we’re too tied to accomplishments, what role accountability plays in our education system, the value of peer review, why transformative innovations are always counterintuitive, and so much more. Stanley is the co-author of Why Greatness Cannot Be Planned: The Myth of the Objective, as well as the former Head of Core AI Research at Uber AI and the Open-Endedness Team Leader at OpenAI. He has also served as the Charles Millican Professor in Computer Science at the University of Central Florida.
Transcript
You might like assessment and accountability and objectives and metrics because they make you feel better because it feels like we're making sure that nothing bad will happen.
But you have to recognize it's just a security blanket.
It doesn't work.
Take off the straitjacket, get rid of the security blanket, and just acknowledge how reality works.
And it would be better.
I prefer if reality worked where if I set an objective, like I could just get to it as long as I'm sufficiently determined to do it.
That would be really convenient, but it's just not the way reality works.
Welcome to the Knowledge Project podcast. I'm your host, Shane Parrish.
This podcast is about mastering the best of what other people have already figured out so that you can apply their insights in life and business.
If you're listening to this, you're missing out.
If you'd like special member-only episodes, access before anyone else, hand-edited transcripts, including my personal highlights, and other member-only content, you can join at fs.blog slash membership.
Check the show notes for a link.
Today I'm speaking with Ken Stanley.
Ken is the author of Why Greatness Cannot Be Planned, The Myth of the Objective, a book I read and loved, so I reached out and wanted to chat with him.
Ken is currently deciding his next adventure but recently led a research team at OpenAI.
In this episode, we're going to take a simple idea and take it seriously.
That idea is the question of objectives.
They seem good when they're modest, but things get a lot more complicated the more ambitious they get.
We also explore the growing obsession with metrification, why we accept failure in science
but not in things like politics or education, why trying harder doesn't always help you
achieve the outcomes you seek, why you can't be so tied to your destination that you're not
open to the unexpected and unplanned, and why you should avoid ideas that make too much sense.
It's time to listen and learn.
You wrote a book about questioning the value of objectives, which revealed a surprising
paradox, that objectives are good when they're modest, but things get more complicated
when they're ambitious. Can you expand on that? Yeah, this is something that people don't talk about
that often. People don't talk about the problem with setting objectives. There are some
controversies in society. This is currently not one of them. But yet, if you think about it,
this is something that we do all the time. Basically, one of the, like, I think, deepest facets of our
culture is that we think of accomplishment and achievement and discovery in terms of setting an
objective and then pursuing it. And I do research in artificial intelligence as my normal job. And
in the course of doing that research, we just started to see undeniable evidence that that approach
to achievement has some serious flaws. And that realization at first was mostly just a
kind of algorithmic realization, like, okay, well, this has applications and implications
for artificial intelligence. But it dawned on us over time that actually it has a lot more
implications than just for artificial intelligence because it's not just something people do in the
algorithms of artificial intelligence, but in basically in life and in our culture. It's what we do all
the time. And it started to seem to me maybe almost urgent that this is actually brought into
like a public conversation of some sort. So that's why myself and my co-author, Joel Lehman,
decided to kind of take the unconventional road of writing a book like this, which is not an AI book,
and kind of try to provoke at least a conversation, if not some kind of change in the way that
institutions and people structure what they're doing. I think we're going to get into sort of some of
the drawbacks to this approach and maybe some of the nuances around it. Before we do, you sort of
mentioned that some great ideas were never objectives for anyone, at least until they were discovered.
Rock and roll is one; I think penicillin is another. Can you say more on that? Yeah. I mean, it's related to the
idea of serendipity. And, you know, in serendipitous kinds of discoveries, you weren't expecting
to make it. And I think the insight is that this is something that is much more common than
the kind of narrative that we tell ourselves about how discoveries, inventions, innovations are
made. And actually, like, a lot of what we do that facilitates making these kinds of important
discoveries is to actually set us up, set ourselves up for having effective serendipity,
which is not the way that we, you know, talk about things when you talk about setting an
objective and just moving towards it. So of course, something like rock and roll, which is a
good example, it's not the kind of thing that you could set as an objective because it
doesn't even exist as an idea until you run into it. And yet somehow, you know, the pieces
were in place, like when Elvis was there on the scene, for him to kind of run into this.
And the question I think that's interesting is why that happens, like what kind of situation
leads to that and what kind of person will take advantage of that kind of situation.
Are all objectives the same?
No.
And it's an important caveat to what I'm saying that there are many objectives that I
would just characterize as modest.
Like, for example, you know, I want to be able to run for longer or I want to lose some
weight or maybe even like I want to get a degree in accounting.
Nothing against accounting.
But the reason I call those things modest is just because, well, they've been done many times.
Like, we know that these are achievable things.
And I think those need to be distinguished from what I would call ambitious objectives.
So by ambitious, I mean, these are things we don't know how to do.
We're not sure how they're going to get accomplished, even though we want to accomplish them.
Curing cancer or achieving artificial general intelligence or something like that, like those are really ambitious.
And so when I critique objectives or when the book does and it tries to make it clear, it's really the ambitious ones that we're talking about here.
Like having modest objectives, it shouldn't be affected by this critique.
And it would be verging on kind of nutty, cranky type of behavior to try to get rid of those.
Of course, you can set modest objectives.
But you have to remember, though, that like a lot of our society runs on the ambitious ones.
Like we are banking on innovation to save us from all kinds of problems and also to deliver us into like a new world of different types.
And so we depend on this kind of ambitious stuff to happen.
And the fact that we run things as if they actually happen through objectives is perhaps self-deception that is then really grinding down our efficiency, like ability to really take advantage of the resources that we have to make these kinds of discoveries by recognizing how they actually work.
So the core problem with the ambitious objectives then is that in many cases, trying harder won't
help you achieve the outcomes you're seeking and sort of like a follow-on to that.
You can't be so tied to your vision of accomplishment that you're not open to the unexpected
and unplanned.
Yeah.
So it is true that like one of the principles in the book is that you can actually block
your own ability to reach an objective by setting it, which is paradoxical. So grappling with that
is, you know, hard and important in something that the book tries to discuss, like, how to grapple
with that. But yes, we're actually causing ourselves to achieve less in these ambitious cases
by setting very ambitious objectives. And one of the things that we do when we set an ambitious
objective conventionally is that we would also set some metrics up to measure
progress towards that objective. And that's where I think things really get tripped up, with these metrics or
assessments. We love assessment in our culture. We have a very big assessment culture. And the assessment
is basically trying to give us a security blanket so we can feel like we're moving towards the
objective in that we're actually making progress. And the problem is that a lot of the time,
even if your score on a metric is going up in the short run, it doesn't mean it will get all
the way to the point that you want it to get to.
It's called deception, and it's a fundamental problem of all complex problems.
And so the fact that we rely so much on these assessments and metrics is very deceiving and can ultimately cause us to invest a lot in a deceptive path, which is actually going to lead to a dead end.
And that's why it can actually be, although it's counterintuitive, it can actually be bad for you to have a very strict objective that you're assessing movement towards.
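The deception Stanley is describing has a crisp algorithmic form. Here's a minimal, hypothetical sketch (the landscape and numbers are invented for illustration, not taken from the book): a greedy hill climber that only accepts moves that raise its metric stalls on a local peak, even though a far better point sits on the other side of a valley.

```python
# Toy "deceptive" landscape (invented for illustration): the metric
# rises toward a local peak at x = 3, dips through a valley, and the
# real goal sits at x = 9 with a higher score than the peak.

def score(x: int) -> float:
    if x <= 3:
        return float(x)            # rising slope toward the local peak
    elif x <= 6:
        return float(6 - x)        # the valley: the metric drops
    else:
        return float(2 * (x - 6))  # the real payoff, hidden past the valley

def hill_climb(start: int, steps: int = 50) -> int:
    """Greedy search: move to a neighbor only if the metric improves."""
    x = start
    for _ in range(steps):
        best = max((x - 1, x, x + 1), key=score)
        if score(best) <= score(x):
            break                  # stuck: no neighbor looks better
        x = best
    return x

print(hill_climb(0))  # ends at 3: the metric rose at every step taken
```

The metric genuinely improves at every accepted step, yet the climber ends at x = 3 and never reaches the higher-scoring goal at x = 9; following the metric is exactly what blocks it, which is the paradox in miniature.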
Let's sort of make that tangible.
One of the examples you give in the book around that is schools and improving student performance.
And I'm thinking here that's, you know, both hard to argue with and it also never seems to improve
despite billions, if not trillions of dollars. And so progress sort of can't be packed into a single
metric, but what should we do instead for something like that where we also need accountability
on behalf of sort of politicians or decision makers who have some sort of skin in
the game for their choices? That word accountability always comes up. Is it sort of
why we feel like we have to have these metrics and assessments? And education is a great example of this.
So the education system does have an objective. And it's a little bit fuzzy. It's not usually
stated explicitly, but I'll characterize it something like the objective of the education system
is for everyone that's in the education system, all of the students to score perfectly on a bunch
of assessment tests. That would be ideal if everybody was perfect. Of course, we're never going to get
quite to there, but that would be the ultimate perfect objective. What leads you to that outcome?
We don't come close to that outcome right now. We have many people whose performance is way below
what we want to see. So that's our problem. That's what the problem with the education system is.
And so we're trying to solve it by being objective. So we say, okay, what we need then is we need
some standardized tests. We're going to blanket like the entire country, basically, with these
standardized tests, and other countries do something similar. And then we have a universal
measure or metric that can be used, like to decide whether progress is going well in local
locations. Like if what's going on in your local school district is still being assessed
with this kind of like a global test that is used across the nation, you know, then we can
compare things and see if things are moving in the right direction in different locations. And
the problem is that this is subject to this deceptive problem or what I call this objective
paradox, which is that you can look like, you know, your metrics are going up a little bit from
year to year. But that has nothing to do with necessarily getting to a point where everybody is
scoring perfectly or even above the threshold that we would consider acceptable. And that's borne out
by history. Like it never happens. We keep on trying to revive it every decade or something,
like there's a whole new push to like, let's really do this seriously this time.
And it's just the same thing over and over again.
Like, we don't learn from this mistake.
It's not that the particular assessment is somehow flawed;
the problem is with assessment itself.
We cannot make progress in certain kinds of extremely complex problems,
which education is one of those,
simply by just laying out some assessment system
and then trying to follow it towards this global objective,
which is just incredibly complex to get to.
But what comes into play then is what you said,
which is this accountability issue.
If you get a critique like what I just said at first, I mean, people make critiques,
obviously against standardized tests.
And the response is usually like, okay, well, where's accountability going to come from?
This is the way to hold people responsible.
And the problem is that it's not necessarily a dichotomy the way it's being presented
where like you either have accountability or you have no assessment at all.
There is a possibility of having accountability with a different approach.
But we need to have an approach that recognizes how you actually make innovative progress in an extremely complex problem, which is what the book tries to get into in general how those kinds of discoveries are made.
And really what happens in these kinds of problems is that because we don't know the stepping stones, like this is the key thing, the stepping stones that we need to traverse through in order to get to the outcome that we finally want, which in this case is like this universal high achievement, we don't actually know what we need to cross through.
And what we can almost be sure of is that one of those things that's not a stepping stone is everybody getting just a tiny bit better next year.
Like that's probably not going to happen.
And even if it did happen, it's probably not going to lead to this kind of universal global high achievement.
And so it's very deceptive.
And so the stepping stones are likely things that are counterintuitive.
And this is where these metrics that we use across all these institutions start to break down.
If the actual stepping stones that lead to where we want to go are counterintuitive,
in other words, they're not what you would expect, then the metrics are useless, right?
Because they won't detect those stepping stones, because they don't look like what the metrics are trying to detect.
And if you think about it, of course they're going to be counterintuitive.
It's like a rule because the thing is if the stepping stones are not counterintuitive, then it's not a hard problem.
So we would have solved it already.
Like that's basically what makes a problem hard is you don't know what the stepping stones are.
If you do, then you don't have a problem.
You don't even need to do the assessments.
Just follow the stepping stones.
And so because we don't know the stepping stones, what we need to do is we need to proliferate stepping stones, like stepping stone candidates, things that could lead to something interesting, but we don't know which one.
This is a lot related in some ways to investing.
It's a lot like investing.
It's like you have this portfolio of ideas and you don't know which one is going to pay off, but you need to have that portfolio because you can't a priori make that kind of prediction.
And so we would need to have some stepping stones that we check into
that don't ultimately pay off.
But if we have a portfolio, then some will pay off and eventually branch off more stepping
stones, and some of those will lead to this ultimate holy grail.
But the thing is that obviously we're going to have some risk and we're going to have
some things that don't work out.
And we need to tolerate that.
And that's because that's what allows stepping stones to proliferate.
And generally, these kinds of assessment and accountability cultures don't allow that.
You know, if you have this accountability culture, you're not going to be willing to tolerate
having things that don't look like they're succeeding with respect to your naive metrics.
And so you need a completely different kind of a culture and you still want accountability
because people just can't live without accountability.
But I think accountability needs to be much more nuanced.
It needs to recognize that what makes something valuable is if it's an interesting stepping
stone, not whether a metric is going up. It's like if there's a teacher out there in some obscure
town in the middle of Alabama or something who does something interesting, like the key to getting
the education system overall to improve is to disseminate that through the social network.
Like what that teacher did, it's not necessarily the solution to everything, all the problems
you have in education, but it will be the solution for some people and they can follow up on that
and go to the next stepping stone and see where it leads. Maybe some of these things lead to dead ends,
but we can't find out if it doesn't disseminate through the social network, in this case,
the network of teachers. There's nothing set up whatsoever to facilitate that in our system.
Everything is centralized and globalized so that everything is assessed with respect to the same
kind of criteria. So if something interesting happens in some obscure place, nobody's going to know
about it. Nobody can follow up on it. Nobody can think about it, discuss it. But in the new version of
assessment, that should be recognized and rewarded in some way. I can think of ways, but I'll try not
to take up too much time with this; we could just
discuss what we want. But there are ways we can imagine that peer review and things like that could
allow us to recognize interesting things. We do still have assessment; like, we don't allow completely
crazy things to go on. You know, like if somebody proposes like let's just not do anything in
school and let the kids run around or something. Okay, this would get caught by something like
peer review. But we do need to be able to recognize things that are interesting, which means
things that are not objectively detectable through the usual assessment techniques.
I have four thoughts that came from that. Wouldn't peer review
necessarily push back on anything that's counterintuitive?
So there's a cultural issue actually with that.
Like if we live in a culture where you're basically under the gun all the time.
So people are basically, your boss is looking at you and saying if you don't walk the narrow line
that we consider to be, well, the accountable line, you're in trouble, like big trouble.
Like you could lose your job or something, you could lose funding, then the peer review system
will also suffer from that culture.
People will be like trying to kind of patrol the culture to make sure that it's being adhered to.
But I don't think that has to be the way it works.
I think what peer review is supposed to do is allow individuals to speak from their individual perspective.
So it basically disentangles the individual from this large kind of global monolithic view of what it means to be doing something good.
And that is not an impossible vision, because, you know, generally how innovation actually
happens is through individual connections. Most people see some idea somebody had and say,
that doesn't fit with the usual paradigm. Like, that's not actually a good idea. Like, we know
the right way. Now we're all very sure right now what we need to be doing. But there's someone,
you know, contrarian or something who sees that and is like, actually, I can see a lot of
potential here. And if you give them the space to express that,
where express means not just to write a critique, which they could, but also to follow up and actually try it themselves and build on it because they see the spark there of potential, which other people don't see,
then, yeah, I think that is how ideas percolate through networks.
Like, there's people, individuals make decisions that are somewhat disentangled from like the large mass of consensus around the field.
And I think that peer review can facilitate that, but you have to start giving people permission and
make it clear that this is a change.
Like, we're not in this universal assessment culture anymore.
Like, that's not how this is going to work.
But we're still going to use the peers as accountability.
Like, your peers will see what you do, so you can't just go crazy and do something stupid.
And they will, you know, report it if it's something absolutely intolerable.
But they have an opportunity to make unique assessments.
We're not telling them what they should like.
It's just like in academic publishing, you know, you do get peer review.
Like, that's a place where peer review does happen
as sort of like a standard aspect of the culture. You don't tell the reviewers how to think;
the reviewers are the experts in their field. So a paper has been submitted to a journal,
and we have some reviewers who are other scientists. You don't, you know, tell them, okay, well,
here's how you should think about this. Like they get to think about it how they want to think
about it. But that doesn't mean that there aren't like big problems with peer review. And in fact,
I think peer review in scientific publication, and also in the assessment of proposals for
getting grants, is flawed, also because of objective thinking. But it is a place where I think
we can, if we structure it right, which would be somewhat different than how it's structured
even there, I think we can start to escape this kind of a global thinking.
It's so interesting because, like, the peer review system has also sort of not caught the replication
crisis that we've had, and maybe crisis is an overuse of the term, but it hasn't caught that sort of
mistake. But it sounds like what you're really saying is that, you know, with ambitious goals,
they're far off in the future. We don't know what the next stepping stone is. And so it's better to
almost take like an evolutionary approach where we're creating these mutations or copying errors or
sort of, you know, we're trying all these little experiments. And then we see which of those
experiments leads to some interesting insights or conclusions. And then the idea being we take those
conclusions or that interesting insight and then we propagate it to all the other nodes almost
like nature sort of rewarding variations, and we do this blindly because we don't know sort of
what will yield the best results. Is that how you think of it? Yeah, that's a good characterization
actually. It's not a coincidence that you bring up evolution, because the background
behind the genesis of this discussion in the book is that, you know, I was working in
evolutionary algorithms. And this is a branch of artificial intelligence. And I was trying
to understand what actually allows evolution in nature to make the kinds of incredible innovations
that it makes. You know, if you think about it like from a computer science perspective, like
as opposed to biology perspective, like what evolution is, it's a very unique thing in the sense that
it's kind of like a search or like a learning algorithm that discovered everything that was
ever created in nature in a single run. This is very different from like what you see in
typical machine learning, which is like, we're going to try to solve a very hard problem and
just that one problem and all the resources go to that one problem and that's the objective
and that's the run. It's very, very unique that you would have a single run discover the
solution to every problem. And so what I mean by problems is, like, how do you get flight to work,
how do you get photosynthesis to work? How do you get human level intelligence? Like it's all one
run. And I was trying to understand this from an algorithmic perspective, not like a biology
textbook, but can we actually create algorithms that work in some analogous way to what nature
has done? And so it comes from some of the revelations that came from that work, like this
discussion which seems somewhat remote from that, but it's very connected actually
because it's based on a recognition of how nature was innovating. And one of
the keys that you mentioned, I just wanted to highlight, is the word interesting. You know,
when we talk about selection in nature, it's like, who survives? Like, usually the word
interesting isn't what comes up. Usually you hear the word fitness. But actually,
I think, like, we won't go into all the detail, but you could take an alternative
interpretation of nature, that the way it's set up is actually a way of detecting things that are
interesting in some sense. You know, the way I sometimes put it is that everybody has to be
like a walking Xerox machine. Like you have to have within your guts something that
will make a copy, which is extremely complex, or else that lineage will not persist. And that means
that like pretty much everything's going to be interesting in some sense. It's a very abstract and weird
sense. But like you can't degenerate into complete meaninglessness because everybody has to be a walking
Xerox machine. And this keeps things honest and kind of interesting. Now when we move to other paradigms
like, you know, education or something like that, or invention of another type,
like human invention, civilization, like how we progress through the space of possible inventions,
then it's different, because here we care about our view of what's interesting.
You know, evolution sort of has an arbitrary concept of what's interesting.
It just happens to be the way nature is structured.
But we care about things that we find interesting as people.
And so really the crux of the matter is to really delve into the issue of what is interesting.
You know, the education example is an example, you know, you could look at for reference.
It's like, what does it mean to have an interesting teaching technique independent from, like, the immediate assessment implication of it?
And we are afraid of that conversation, I think, as a culture.
We do not like talking about whether something's interesting because, again, it seems to somehow veer around this accountability issue.
We don't want to know your subjective view of why your little thing is interesting, your pet idea, because I can't assess it in some objective sense.
And so I don't like to hear about this.
This also, I think, is one of the problems with peer review, that you don't allow it
to really discuss interestingness.
But the truth is, interestingness is the magic sauce.
And that's what I think humans are really good at.
That is why civilization has actually created all of the amazing artifacts
and musical and artistic and literary genres that it's gone through over the eons,
is because, you know, I mentioned those just because I don't want this to only be about technology.
It's about everything.
And it's because we have a nose for the interesting.
We're super, super sensitive to what's interesting.
We're just, we've gotten to a point in our culture where we're not allowed to talk about it.
And I just want to point out that it's not unprincipled to let people talk about what's interesting
because we're talking about experts here. You know, we're not saying that like, okay, in the field
of education, we're just going to go up to some random person on the street and ask them whether
some random teacher somewhere did something interesting. We're talking about the people who are
experts in education who have a history who are like actually been in the field for years.
The idea that those people, that their opinions about what's interesting are invalid is, like, completely throwing away, like, I think decades of societal investment, like, all of the education that you put into that person from the time they went into kindergarten all the way up to the point that they, like, got out of graduate school, like, what was the point of all that if we don't trust their judgment on anything subjectively? It's actually the subjective judgments are the interesting ones because, like, the objective judgments are easy. You don't need a degree to just measure something.
You know, some kid takes some test. You get a score. You can average them across everybody
at the school. Who needs to have a degree for that? Like, anybody could look at that and just tell
you how it's going. It's the interestingness judgments that require education, experience,
like deep insight. And so the fact that we are paranoid and afraid and unable to engage with
the question, what's interesting, I think, is crippling, like, to our ability to innovate.
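The search-algorithm perspective Stanley describes here underlies his and Joel Lehman's novelty search, where candidates are selected for behaving differently from anything seen before rather than for scoring well on an objective. A minimal toy sketch, with a one-dimensional "behavior" and every parameter an arbitrary choice of mine:

```python
# Toy novelty-style search: keep the children that are most different
# from everything in the archive, regardless of any objective score.
# Behavior space, mutation size, and population size are illustrative.
import random

random.seed(0)

def novelty(x: float, archive: list[float]) -> float:
    """Novelty = distance to the nearest behavior seen so far."""
    return min(abs(x - a) for a in archive)

def novelty_search(generations: int = 100) -> list[float]:
    archive = [0.0]          # every behavior ever kept: the stepping stones
    population = [0.0] * 5
    for _ in range(generations):
        # evolutionary variation: each parent produces a few mutated children
        children = [x + random.uniform(-1.0, 1.0)
                    for x in population for _ in range(3)]
        # selection pressure is novelty, not fitness
        children.sort(key=lambda c: novelty(c, archive), reverse=True)
        population = children[:5]
        archive.extend(population)
    return archive

stepping_stones = novelty_search()
# With no objective at all, the archive still spreads far from the start.
print(len(stepping_stones), min(stepping_stones), max(stepping_stones))
```

Nothing here rewards reaching any particular goal, yet the archive keeps expanding into unvisited territory; that is the sense in which proliferating novel stepping stones can substitute for chasing a single objective.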
That's fascinating in a couple ways. One of the ways is that you mentioned the word judgment
and judgment subjective, and we're hesitant to say or do anything that's subjective,
and we grab onto these metrics, right?
So an example would be sort of like during COVID.
If we were mutating, we could have done something where, you know,
the best teacher in your state or your province or your country, the best grade five
teacher is now teaching all grade fives, right?
This could have been set up, and, you know, you could have had access
to the best teachers in the world at whatever level that you're at,
in whatever subject you're at or even the best teachers in your state or your country or even
your city. I mean, we could have done this at any of those levels and we didn't because that
would require, A, it would require sort of like trying something that might fail, which I want
to talk about in a second, but B, it would require us saying something subjective that this is
a better teacher than somebody else. And we're so hesitant to do that. Can you, what are your thoughts
when I say that? Yeah, I agree with the point. Like, we could, we could debate about whether we
should have one teacher, like teaching everybody in New York State or something like that.
I mean, I'm not saying we shouldn't, but we could debate about it, obviously.
But the point that you're making that, like, we can't even, like, consider it because of
the structure of the system, that is a problem.
Why can't we debate about it?
Like, in an official sense, I mean, you and I could debate about it and it will have no effect
on anything.
But, like, the actual official system, the formal system that actually has the gatekeepers inside
of it will not ever go through that debate, because, yeah, it's incompatible with the
assessment system, and it's just way too unwieldy given that bureaucratic system the way it's set
up. And I think that's an example of what is impeding our ability to collect stepping stones and
try different things and see their outcomes and really be flexible and innovative and sort of
proliferate ideas in the spirit of nature. But I think it's multifaceted, so
you can't point to just one reason for it.
It's like pervasive across all kinds of levels of the culture that we're afraid of this
kind of engagement and exploration, including what you alluded to briefly, just that we are
afraid to take risks.
I mean, that's part of it.
Like, what if it doesn't work?
You know, like this could be the end of the world or something.
And I think it depends on the domain whether risk is tolerable.
Like, we should acknowledge that risk sometimes isn't worth it.
You know, like there are some places you don't want to explore necessarily.
And so maybe there are some places you can't innovate because you just can't tolerate the risk.
This may be true for an individual.
Like, you know, you've got a family to take care of.
You can't really go try some crazy thing right now.
Or it could be like at a societal level, like we can't try a whole new economic system.
You know, it might be interesting.
Like it would cause potentially so much devastation that we just can't take the risk.
So we have to acknowledge that like some risks are too risky.
but I think that's orthogonal to the idea that we can't discuss what's interesting.
Like, I think we can still discuss what's interesting.
We can also talk about whether the risk is too high, but there are certainly systems where
risk is tolerable, and it won't be too devastating.
And, you know, we have to thread that needle depending on the system.
Like, we don't want, like, kids to go through a year of school and learn absolutely nothing.
But that doesn't mean we can't try anything interesting whatsoever either.
And so we have to thread that needle carefully depending on the domain. Some domains, there's lots of room for risk. Like,
for example, science funding. I mean, the whole point of it is risk as far as I can see.
Like, we don't expect most of these things to work out, I think. I don't know if that's the
view of the National Science Foundation, but my view is it's okay if lots of things don't work out.
So in a situation like that, there's not any real downside to really just doubling down on the way I'm advocating, which is being a lot less objective,
a lot more subjective and a lot more willing to talk about what's interesting.
And then in more kind of like brittle situations, like the national economy,
we have to be a little more cautious about that because there's only one national economy.
But we can do it, I think.
We just have to be clear-eyed about what the consequences might be.
It's like nature or evolution, if you want to call it that, has no concept of loss aversion.
So it'll just keep trying things.
And if it's fit, it'll reproduce.
And if it's, you know, not, it'll eventually weed itself out.
And then it'll keep trying the same things over and over again.
and it doesn't have a consciousness, so it doesn't think about being wrong or the consequences
of being wrong or, you know, moving backwards to move forward. It doesn't have any of those
concepts where we do. Like we accept failure in science, but we won't accept failure when it
comes to an education system. So anybody who comes in and tries something else, they have this
really weird equation, right, where you have a very linear, small upside and a very
exponential downside if things go wrong. And so it prevents us from trying anything new. And so
that's why everything always looks like what was before it with slight wording differences or nuances
around it. Yeah, yeah. It's true. That's a good point about nature, how there isn't this kind of assessment. There's nothing like, you know, should this mutation go forward? Like, let's have a committee look at this first before we check into it. It just happens. And, you know, sometimes mutations will be bad and that lineage won't persist. But what's interesting is, if you look at it in the aggregate, the system is obviously absolutely prolifically creative. I mean, nature is like the ultimate creative genius. And it's because of that that it's not afraid to try things and willing to invest resources into things. But of course we can't be that laissez-faire, like we can just let anything happen anywhere. I understand that and acknowledge, you know, that people's lives can be affected.
But we certainly can swing the pendulum a little bit away from the current assessment,
accountability, paranoia.
And especially in some domains, like science research, where it's like very natural to do that.
Like, there's no reason to be paranoid in these kinds of domains.
In other domains like education, you have to be a little more careful.
But I think it's not working anywhere.
Like, that's one of the points here, I think, is that, look, you might like assessment and
accountability and objectives and metrics,
because they make you feel better because it feels like we're making sure that nothing bad will happen.
But you have to recognize it's just a security blanket.
It doesn't work.
I mean, look at the education system.
Like, it never gets any better.
So, like, it's just making you feel better.
But it's not actually doing anything productive.
So maybe it would be worth it, and actually not any worse, if we did allow some more risk-taking, which is not just about risk, it's about actually following interesting things.
That's really what it's about on the positive side.
Risk sounds like it's all negative.
But we're talking about following the interest.
But maybe things would actually be better if we did that: take off the straitjacket, get rid of the security blanket, and just acknowledge how reality works.
And it's unfortunate that reality works this way.
You know, it'd be better.
I prefer if reality worked where if I set an objective, like I could just get to it as long
as I'm sufficiently determined to do it.
You know, that would be really convenient.
But it's just not the way reality works.
And so reality is inconvenient and difficult and scary.
How do you map that to sort of like Elon Musk saying something like crazy or what seems
like crazy, which is like, we're going to go to Mars in the next eight years, which is this
objective, and we don't have all those stepping stones. We might have a little bit, but I mean,
he's building custom engines and, you know, there's a little bit of technology that exists, maybe. But there's also a psychological value, I think, in some ways, of pulling
people toward a pursuit that might never even be reached, right? Mars might be one example,
but CEOs and politicians do this too. And at its best, it sort of unites us.
And it pulls us through sometimes when we're having hard times and we feel part of something larger than ourselves.
We feel part of something meaningful.
So there's a psychological angle to pursuing these big, large ambitions.
Yeah, that is an interesting question because we see these quests that are set up.
The self-driving cars is one of those, which from the perspective of this objective critique are really interesting
because they basically are very ambitious objectives being set.
So they're an example of what I'm critiquing, basically,
like just like completely, plainly that.
And they often are not successful, I would claim, for the reasons that are in the critique. So the self-driving car thing: back in 2016 or so, I don't know the exact year, people like Musk, but others too, not just him, were saying this is around the corner, like one or two years, you're going to start seeing these services. And it didn't happen. And, you know, the interpretation through what I'm saying would be that,
well, that's not a surprise. This isn't how innovation actually happens. Like, you don't set some
extremely ambitious objective where we don't yet know the stepping stones and then just like double
down and throw all your money at it and it's going to happen. That's not how things actually work.
And of course, the crux of the argument has to do with, well, are the stepping stones actually there? That's the real question. Like, I think often visionaries are interpreted
as people who make these statements. Like, we're going to go to Mars. Like, there's a visionary.
Like, let's, you know, let's put the halo on that person. Like, they're a visionary. But the thing
is that that's just speculation because we don't know what the stepping stones are to getting humans
on Mars yet. We don't know that. I think a visionary is somebody, in contrast, who has
recognized when the stepping stones actually have snapped into place.
Now, that's a person you should follow, and that's a very interesting and unique kind of person.
It's a different kind of a person. And I think that's more like a Steve Jobs type of a person.
Like the Elon Musk kind of thing is, I agree with you, though, on the point that it might rally interest in an area.
And that connects to the word interestingness.
Like, that could be a positive thing.
So it's not clear there's all negative, just somebody saying we're going to go to Mars.
Like, there could be a positive side because it basically moves resources and people's interest into an area,
which might matter.
And so we could still say, well, actually the predictions here are wrong,
but the social effect, like cultural effect, actually is positive.
I could see that.
And I think that makes things a little hard because, you know,
it's complicated because of that.
Like, it's not like we can just fully critique somebody like that and like dismiss them.
But they're not doing that. So I think, in terms of who you want to lionize, you have to be fair and at least acknowledge that that's not what they're actually doing.
They're not just saying, like, this is culturally, like, a good place for us to be interested.
They're making claims that are not really well-founded.
And so it's not really visionary to make these kinds of claims.
And so you don't want to go too far in kind of embracing and lionizing this kind of thing.
But you can say, well, they might have had a positive effect, and we can concede that.
But I think what's really interesting is the people who recognize when the stepping stones are there.
Because even the visionaries, the so-called visionaries are so bad at that.
Like, they're the ones who tell us, right around the corner, this is going to happen, that's going to happen. It never actually ends up being that way. But it's like this rare kind of a person
who's like, huh, you know, look at the things that we have in this iPad. Look at the technologies
that we have like screen technology. Right now, like we could actually make this iPhone concept
for real. Like it's actually possible to do it. And that's not just speculation. It's not like I'm
going to predict like 10 years from now there's going to be this phone thing that does all these
magical things. It's like, I actually realized now is the time. That's very hard.
Like, people have visions all the time of really cool stuff that might happen, like flying. People were trying to build planes for hundreds of years. They're not particularly impressive people just because they said we might be able to build a flying machine. And they were wrong about how long it was going to take and what was going to go into it.
But, you know, the Wright brothers were at the right place at the right time. So, you know, I view the Wright brothers more in that way. It's like they saw that the stepping stones had now snapped into view.
And so they actually saw this actually is the time where this can happen.
Those are the people to be impressed with and to follow, I think.
But hold on, like at a meta level, and I might be completely wrong here, isn't that a variation?
Like, we're trying this idea now and it doesn't work.
And just because it doesn't work doesn't mean that we shouldn't try it, because that in and of itself is a variation, you know, this is like how nature proceeds.
So, yeah, I think this gets into some subtlety.
Like, is it ultimately actually harmful to just try something, even if it's really unrealistic? I think it depends on what your motivation is. If you're going in that direction because it's interesting, it might not be harmful. I think the field of AI is kind of like that. At some level there's this really grandiose conception of some human-like computer, and that is, I think, a naive objective right now. We just don't know how to do that. We don't know what the stepping stones are that lead to that, although they're getting closer perhaps, but they're still not close, I would say. But at the same time, the point that investigating around the area of algorithms that have intelligent-like qualities, that's still valuable, I think, because it's interesting. One thing that happens if you do that is you're unearthing stepping stones that could lead to something else that isn't artificial general intelligence, but still really valuable. That's not usually how it's thought of.
I guess in some ways that would be disappointing if it's like, well, I made some progress, but it's never going to get to the AI that I'm envisioning, but it still caused something cool to
happen. But in effect, that's really what's happening. You know, like the effect on industry of
AI is significant, but it's not because human level intelligence has emerged, because it hasn't.
It's because these things have other implications that are in the short run quite useful and
interesting. And so it's sort of a side effect of the fact that people have this kind of grandiose
vision that a lot of interest has now focused in and like now caused a lot of stepping stones to be
uncovered. And we don't really know where those stepping stones lead. They may not lead to the human
level, but they do lead to interesting things. So you have to, I think you have to distinguish which
type of thing are we talking about. Like when somebody says something like AI, like a self-driving car,
flying car, you know, whatever it is, are we talking about like this is an interesting space to play
around in because we're going to find some interesting stuff, or are we literally believing that, okay, within two years we're going to be on Mars, or even 10 years? And, you know, the answer is extremely critical. I mean, you could be wrong and still have gotten a lot of people on some interesting stepping stones. But at least I would want to, for myself, try to disentangle that question and decide which type of visionary is this and what type of vision is this. Because I think it guides the realism, and also it takes out a lot of inefficiency to recognize why you're doing something.
Because when I have investigated AI, I think of myself as just basically looking at stepping
stones because they're interesting.
I don't necessarily think this leads to AI.
This doesn't lead to AI because I don't know.
It's like way far off.
I can't tell you.
So what do you tell your boss?
Not you, but like somebody listening to this.
How do I go to my boss and say, you know what?
These objectives, they're really destroying my creative thinking here.
How do I step back and, you know, what do I?
say? That is a serious problem. That actually is one of the big reasons that we wrote the book, because the book is, I think, a weapon. It becomes an argument. Yeah, it's trying to empower people to make an argument to their boss, because it's really scary, I think, to argue like this. Like, this sounds wacky, actually, if you don't have any context, you haven't listened to this show, read the book, or anything. If you just go to your boss and you're just like, look, I'm just doing this because it's interesting, and there's no assessment, you know, we're going to drop the assessment, just let me do it, though, because interestingness is really important, I mean, your boss is going to, yeah, you're risking your job. Your boss is going to freak. And it's not only because your boss is a jerk, because he or she is not necessarily a jerk. The real problem for your boss is they have a boss. How are they going to explain it to their boss? The problem is that this culture percolates through everything. So everybody's trapped in it, even if they believe in what I'm saying.
Like I've said, I mean, I've talked to a lot of people about this, a lot of different organizations.
Like, the book has caused me to come before, like, all kinds of audiences that I never would have
encountered. And a lot of the time, I meet people who are gatekeepers. And they're like,
you know, people who decide what should go forward, what should not go forward. And they love,
they love the principle of the argument. Like, they're like, this is so inspiring. Like,
I'd love to change things around here. But, and there's always this caveat. Like, I answer to
this, that and the other thing. And this is going to be really hard, like, to explain this to
these people. And so I'm not really sure what I can do. And then basically, it doesn't really lead
to any changes. And so I'm hoping that the book will actually empower through this kind of
discussion becoming more mainstream. Hopefully, I mean, that's idealistic. People being able to
make this argument without being ridiculous. Like, this is serious. Like, this is principled. It's based
on research. And there's a reason that justifies doing things like this. But isn't this sort of like
the destructive nature of capitalism, you know, that it just sort of feeds on itself? So you have an interesting idea at work, you can't pursue it, that becomes a startup.
So you find the next stepping stone, you find something interesting for you.
Your workplace won't necessarily allow you to explore it.
So you quit your job and you find a couple like-minded people and now you get to explore it.
Yeah.
I mean, I'm only talking about improving things, really.
It's not like nothing ever works at all.
Obviously, we see progress, like rock and roll was invented.
Cool stuff happens.
You know, people create startups that change the world.
It's not like nothing can happen at all.
At least we're not in this kind of like horrible entrenched situation.
And then there are, you know, dictatorships and things where that effectively is like that.
But like we at least have a certain amount of flexibility in our society.
But I'm basically saying it could work a lot better.
And in fact, it shouldn't be the case that in an organization or a business, for example, a corporation where there is a part of the corporation which is supposed to be facilitating innovation, the only way to actually do something innovative is to quit and start a startup company.
You know,
I don't feel bad that that happened
because like the startup ended up being really cool,
but something is wrong, though.
Like what is going on inside of that organization?
Like they could have captured that idea,
but they let it go.
And this is, I think,
pervasive all over the place
because of our objective culture.
And so things could be a lot better
and life could be a lot better
because like dropping out of your job
and having to start something new,
and risking your career and your finances, it kind of sucks.
And it really shouldn't be necessary because this is actually a principled thing to do,
especially if you're part of the innovative component.
Like, I just want to acknowledge there are parts of companies that shouldn't necessarily be about innovation. Not every single thing that's being done is about innovation.
Like some things need to be conservative.
Some things need to be conserved.
But there are the parts of companies explicitly, you know, supposedly assigned to innovate.
And they actually work in this objective way, which is just completely backwards. And so do huge agencies like the
National Science Foundation, which is, I think, very objectively run. Like, if you look at the
criteria for funding, it's very objective. It's like you have to propose to this committee by
telling them what your objectives are and then assessing whether you can get to those objectives
and being very clear on what the assessment will be. And it's completely objective. And then it's
consensus-driven by a committee, which is another thing I've talked about, which is not great for doing this kind of thing. And so, yeah, things still might happen. Like, I myself have pursued projects that were rejected by those
committees because I basically was pissed off and thought, screw them. I'm going to just do it.
But that's not ideal. Like, it should work better than that. It shouldn't be that you always have to be a rebel in order to do something which is principled. You had two sort of insights on decision making that you mentioned before we got on
here. And the first was that you told me one rule of thumb that you used for deciding on projects is to
try to avoid ideas that make too much sense. Can you double-click on that? Yeah, this is kind of fun, because when I say it, people look at me like I'm insane. It's like somebody says, look at this
great idea. Like it's usually something in science. It's like, isn't this exciting? Or maybe even
we should follow up on this or do something. And I'm just like, well, I do think it's a good idea,
but it makes too much sense. So I really don't find it that exciting. And so that's a little personal
heuristic, which is related to what we're discussing. And what it is for me, sort of a rule of thumb, is that I recognize, because of all this, that when you talk about
stepping stones that lead somewhere really revolutionary, they're going to be counterintuitive.
You know, and if you think about it, that makes a lot of sense, because if they were just
intuitive, like obvious, then we would have crossed them anyway, you know? Like, it's not a problem.
Getting to things that are really important or interesting
will cross through counterintuitive stepping stones.
And so what that means is that they won't make sense at first.
That's basically the definition of counterintuitive.
Like it's true that in hindsight, they'll make sense.
Because at some point you look back and you say,
oh, well, I see why this led to that.
But looking forward, they will be strange and counterintuitive
and basically seem like they don't make sense.
And even if you think about the theme that we're talking about here, which is in effect that to achieve our highest goals we should be willing to abandon them, it doesn't sound like it makes sense at first, which is why it's a good stepping stone,
because they're always going to be like that.
They don't make sense at first.
But in hindsight, you can look at it like what we've discussed if you're starting to be
convinced, and it does make sense, actually, but only in hindsight.
At first, when you hear a proposal like that, you're like, what the heck is that?
That's crazy.
And those are the ones I'd really rather pursue, you know, because those are the ones that are going to lead to something revolutionary.
And so if you give me something that makes a lot of sense, I basically think, well, someone's going to follow it.
It's guaranteed.
It makes sense.
But it's not exciting enough for me because, like, I know it's going to get followed.
Someone else can do it.
Like the ones that aren't going to get done are the ones that don't make sense.
So I'd rather think of something like that, and that would be more exciting.
That goes with your sort of second heuristic, which is trying to imagine the people you know.
And if they can predict what you would do next, then you probably shouldn't do that, because if it's predictable, somebody else can and probably will do it.
Yeah, yeah.
So that's another heuristic.
Yeah, like I'm at least I feel much more pleased if I do something next that isn't
what you would have expected me to do.
That means, yeah, trying to think about up front, like, what is the natural next thing
that you would think I would do?
And then not doing it.
It's trying to run away from it.
And this is related to the concept of novelty.
Like, basically, that was a large concept in the book, novelty, because novelty is a very large component of interestingness. It's not all of it, but just about anything that's interesting is novel. It's other things too, but it's at least novel. What do you mean when you say novel? Yeah, novel means it hasn't been tried before. It doesn't look like things that have come before. It's different in some very fundamental way from other things that have existed in the past. And so
think about things like, you know, what's not novel changes over time, of course. Like something like, you know, the idea of a little
room that's like on top of wheels and can move you from A to B, like, you know, a hundred years ago
or 120 years ago, that would be like a super interesting thing. Like if you actually could get
something like that. That would be like dinner conversation. Like, could you imagine how this will affect society, that this little thing on wheels can go anywhere? Now today, that's not
very novel. Like, it's completely not novel. And that's why it's not interesting. Like, this is
not good dinner conversation, like, because it's been done and it's been done for 100 years.
And so, like, it changes. And so generally speaking, like this, the next stepping stones,
the ones that are going to be interesting are also going to be novel. So that's generally a rule
of thumb. And so, like, running away, like novelty means running away from where you've been in the
past. You know, a lot of objective problem solving is the opposite. It's about converging along the
same path. Convergence is basically what it means to optimize. You converge towards that global
optimum if you're doing well. But it means you're basically sticking to the same path. You're not trying
to get away from it. Not with these exact opposites. Like, I'm getting away from where I've been in the
past. And that's uncomfortable, of course, by nature and risky and everything. I mean, because it's like
if I've been successful in some path that I've taken in the past and now I'm in a situation where I'm now
trying to take a different path, well, of course, that's going to be like super uncomfortable for me.
Like I'm leaving behind like everything that makes me feel comfortable and safe. But then again,
it's a heuristic for innovation, you know, because it will be through the novel and the
interesting that innovation actually happens. And so I think that's why, generally, if you can kind of predict what I would do next, if you look at a certain trajectory that I've taken, I guess the way I think is that then someone will do it, you know, it's kind of obvious. You don't need to be me anymore. Like, this path has been laid, and I've sort of pushed myself out of relevance because the path is now clear. I think this is why
luminaries kind of get stuck in a rut as they get older. Part of it, I think, is that it's really scary to leave that path, because that's what you're known for, that's what everybody respects you for, and you're comfortable in it. But the problem is it becomes predictable, and so you look less innovative the farther down the path you go and the less you deviate. And so it takes a lot of energy to actually intentionally deviate, but I think it's a good heuristic if you really want to continue to be innovative.
Yeah, it's a good place to end this conversation, Ken.
I want to thank you for your time today and, like, it was a fruitful discussion.
Thank you so much.
Yeah, it was really great to be here.
Thanks for listening and learning with us.
For a complete list of episodes, show notes, transcripts, and more, go to fs.blog/podcast.
Or just Google the Knowledge Project.
Until next time.