Making Sense with Sam Harris - #40 — Complexity & Stupidity

Episode Date: July 12, 2016

Sam Harris talks to biologist David Krakauer about information, complex systems, and the future of humanity. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

Transcript
Starting point is 00:00:00 To access full episodes of the Making Sense Podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one. Today I'm going to be speaking with David Krakauer, who runs the Santa Fe Institute, one of the most interesting organizations scientifically anywhere. And David is a mathematical biologist. He has a PhD in evolutionary theory from Oxford. But being at the Santa Fe Institute puts him at the crossroads of many different areas of inquiry. We talk a little bit about what the Institute is, but given that its focus is on complex systems, the people there attempt to
Starting point is 00:01:18 understand complexity using every scientific and intellectual tool available. So David knows a lot about many things, as you'll hear in this conversation. We start by covering some foundational concepts in science, like information and complexity and intelligence, then move on from there to talk about the implications for society and culture and the future. In any case, I love talking to David, and I hope you enjoy the ground we covered. And now I give you David Krakauer. I have David Krakauer on the line. David, thanks for joining me on the podcast. Pleasure to be with you. So I want to eventually get to the importance of culture and the importance of artifacts that we create for human intelligence
Starting point is 00:02:26 and resisting our slide into stupidity, which you talked about, which was the focus of your talk. But before we get there, let's just set the stage a little bit. Tell us a little bit about your scientific interests and background. So, well, it's great to be with you first. My scientific interests, as I've come to understand them, are essentially grappling with the problem of the evolution of intelligence and stupidity on Earth. And it's quite common for people to talk about intelligence. It's less common for people to talk about stupidity, even though arguably it's more common.
Starting point is 00:03:04 And so my background is in mathematical evolutionary theory. And I essentially work on information and computation in nature, which would include the nature that we've created, that we call technology: where it came from, what it's doing today, and where it's going in the future. And so would you describe yourself as a mathematical biologist? Is that the right category? Yeah, I think it's reasonable. I think, unfortunately, all of these categories are starting to strain a little.
Starting point is 00:03:39 Yeah, well, and now you're running the Santa Fe Institute, which, I think quite happily, seems to have its existence predicated on the porousness of these boundaries between disciplines, or even their non-existence. And so maybe describe the Institute for people who are not familiar with it. Yeah, so the Santa Fe Institute is in Santa Fe, New Mexico, as the name would suggest. It was founded in the mid-80s by a group of Nobel laureates from physics and economics and others who were interested in trying to do for the complex world what mathematical physics had done so successfully for the simple world. I should explain that. So the simple world would be the solar system, or inorganic chemistry, or black holes. They're not easy to understand, but you can encapsulate their fundamental
Starting point is 00:04:33 properties by writing down a system of equations. When you get to the complex world, which basically means networked adaptive systems. So that could be a brain, a network of neurons. It could be a society. It could even be the internet. And in those networked adaptive systems, complex systems, the kinds of formalisms that we had created historically to deal with simple systems failed. That's why we don't have Maxwell's equations of the brain, right? We have large textbooks with many anatomical descriptions, some schematic representations of function, and some very specialized models. And the question for us at SFI is, are there general principles that span the economy,
Starting point is 00:05:20 brains, the internet, and so on? And what is the most natural way of articulating them mathematically and computationally. And how is SFI different from the Institute for Advanced Study at Princeton, where I think you also were, if I'm not mistaken? Yes, that's right. So the IAS in Princeton is a lot older. It was founded in the 30s.
Starting point is 00:05:41 We were founded in the 80s. IAS is an extraordinary place, but the model, if you like, is much more traditional. IAS has tenure, it has departments, and it has schools. We do not have tenure, we do not have departments, and we do not have schools. They've created, in some sense they've replicated, I guess, a very successful model, that is, the university model. We decided to start again from a blank slate, and we asked the question: if you were now reinventing the research institute based on everything that we now know, post-scientific revolution, post-technological revolution, etc., what should it look like? And so it's a more radical model.
Starting point is 00:06:29 And so we decided very early just to discard any mention of disciplines and departments and focus as hard as we could on the common denominators of the complex systems that we were studying. And it's truly interdisciplinary. You have economists and mathematicians and biologists and physicists all throwing in their two cents on the same problems. Is that correct? Absolutely. I mean, just as an example, I mean, you know, there's all this debate now about the demise of the humanities. But we, from the very very beginning decided that that wasn't a worthwhile distinction between the natural sciences and the humanities.
Starting point is 00:07:09 We were working on the archeology of the Southwest and using computational and physical models since the 80s and have produced what is by now a very well-known series of theories for why, for example, some of the native civilizations of the American Southwest declined, the origin of ancient cities. And all of these are based on computational and energetic theories and close collaborations between archaeologists and, say, physicists. So the way we do it, I don't like to call it interdisciplinary because that's, in some sense, genuflecting in the direction of a superstition
Starting point is 00:07:44 that I don't want to take seriously. And so what happens when you ignore all of that and say, let's certainly use the skills that we've acquired in the disciplines, but let's leave them at the door and just be intelligent about complex problems. Yeah, really what you have there, it seems to me, is an institutional argument for the unity of knowledge, or consilience: that really the boundaries between disciplines are much more a matter of university architecture and just the kind of bandwidth issues of any individual life, where it takes a long time to get very good at one thing. And so by definition, you know, someone starts out in one area as opposed to another and spends a rather long time there in order to get competent. Anyway, I think what you're doing there is very exciting. Thank you. So before we get into your talk, there's a few things I just want you to enlighten me and our audience about, because there are some concepts here that you are going to use that I think are difficult to get one's head around. And the first is the concept of information. And I think there are many senses in which we use
Starting point is 00:08:50 this term and not all of them are commensurable. It seems to me that there is a root concept, however, that potentially unites fields like genetics and brain science and computer science and even physics. So how do you think about information? Yeah, so I should say, and we've talked about this before, Sam, and that is it's sometimes what I call the M-cubed mayhem. That is M raised to the power three mayhem. And the mayhem comes from not understanding the difference between mathematics, the first M, mathematical models, the second M, and metaphors, the third.
Starting point is 00:09:29 And there are terms, scientific terms, mathematical terms, that are also used idiomatically or have a colloquial meaning. And they very often get us into deep water, energy, fitness, utility, capacity, information, computation. And so we all use them in our daily lives, probably very effectively, but they also have a technical meaning. And what happens often is that arguments flare up because one person is using it mathematically and another person metaphorically, and they don't realize they're doing this. So that's the first point to make.
Starting point is 00:10:07 And they're all valuable. I don't mean to say that there is only a mathematical definition of information, but it's worth bearing in mind that when I talk about it, that's what I mean. So that's the first point. The second is that information has a storied scientific history, starting with essentially the birth of the field that we now call statistical mechanics. And this was essentially Boltzmann trying to understand the arrow of time in the physical world, the origin of irreversibility. Why is it that you can crack and break an egg, but the reverse almost never happens? Why is it that you can burn wood into ash and smoke, but the reverse almost never happens?
Starting point is 00:10:53 He created in the 1870s a theory called the H theorem, where he essentially had in mind lots of little billiard balls bumping into each other chaotically. He called it molecular chaos. And through repeated collisions, you start with a fairly ordered billiard table, but at the end, they're distributed rather randomly all over the table. And that was Boltzmann. And he thought maybe the underlying molecular structure of matter was like lots of little billiard balls. And the reason why we observe certain phenomena in nature as irreversible is because of molecular chaos. And that was formalized later by a very famous American physicist, Josiah Willard Gibbs. But many years later, the baton was picked up by
Starting point is 00:11:38 an engineer working at Bell Labs, Claude Shannon. He realized that there was a connection between physics and irreversibility and the arrow of time and information. It was a very deep insight that he had. Before explaining how that works, what did Claude Shannon do? He said, look, here's what information is. Let's say I want you to navigate from one part of a city to another, from A to B, in a car. I could just drive around randomly. It would take an awfully long time to get there, but I might eventually get there.
Starting point is 00:12:13 Alternatively, I could give you a map or driving directions, and you'd get there very efficiently. And the difference between the time taken to get there randomly and the time taken to get there with directions is a measure of information. And Shannon mathematized that concept and said that information is the reduction of uncertainty: you start off not knowing where to go, you get information in the form of a map or driving directions, and then you get there directly. And he formalized that, and he called that information. And it's the opposite of what Boltzmann and Gibbs were talking about.
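A minimal sketch of Shannon's idea in Python (the 1,024 equally likely routes are an invented illustration, not a figure from the conversation): uncertainty is measured as entropy in bits, and the information carried by the directions is the drop in that entropy.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Suppose there are 1,024 equally likely routes through the city (illustrative).
n_routes = 1024
before = entropy_bits([1 / n_routes] * n_routes)   # 10 bits of uncertainty

# A map or driving directions narrows you down to a single route.
after = entropy_bits([1.0])                         # 0 bits of uncertainty

print(f"Uncertainty before: {before:.1f} bits")
print(f"Uncertainty after:  {after:.1f} bits")
print(f"Information gained: {before - after:.1f} bits")  # the reduction of uncertainty
```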
Starting point is 00:12:53 It's a system going, instead of going from the ordered into the disordered state, the billiard balls on the table starting maybe in a lattice and ending up randomly distributed, it's going from a state of them being random because you don't know where to go to becoming ordered. And so it turns out that Shannon realized that information is in fact the negative of thermodynamic entropy. And it was a beautiful connection that he made between what we now think of as the science of information and what was the science of statistical physics. Well, so let's bring this into the domain of biology because I've been hearing now with increasing frequency this idea that biological systems and even brains do not process information and that the analogy of the brain as a computer
Starting point is 00:13:47 is no more valid than the analogy of it as a system of hydraulic pumps or a wheelwork powered by springs and gears or a telegraph. And these are all old analogies to the most current technology of the time. And there was an article in Aeon magazine, I think it's just an online journal, that probably a dozen people sent to me. And I thought it made this case very badly. And you and I talked about this briefly when we first met. Now, it seems to me, no one to my knowledge thinks that the brain is a computer in exactly the way our current computers are computers. We're not talking about von Neumann architecture in our brains. But the idea that it doesn't process information at all, and the idea that claiming it does is just as crazy as claiming that it's a mechanism of gears and springs,
Starting point is 00:14:35 strikes me as fairly delusional. But I keep meeting people who will argue this, and some of them are very high level in the sciences. So I was hoping we could talk a little bit about the ways in which biological systems, in particular brains, encode and transmit information. Yes. So this takes me right back to my M-cubed mayhem, because that's a beautiful example in that paper of the author not knowing the difference between a mathematical model and a metaphor. And so you gave a beautiful example. You talked about springs and levers, and they're physical
Starting point is 00:15:10 artifacts, right? And then there are mathematical models of springs and levers, which are actually used in understanding string theory. So, okay. So let's talk a little bit about the computer and the brain. It's very important because you mentioned von Neumann, and it spans elegantly that spectrum from mathematics to mathematical models to metaphors. The first real theory of computing that we have is due to Alan Turing in the 1930s. And he was a mathematician. Many people know him from the movie The Imitation Game and for his extraordinary work on Enigma and decoding German submarine codes in the
Starting point is 00:15:54 Second World War. But what he's most famous for in our world is answering a really deep mathematical question that was posed by the German mathematician David Hilbert in 1928. And Hilbert said, could I give a machine a mathematical question or proposition, and it would tell me in a reasonable amount of time whether it was true or whether it was false? And that's the question he posed. Could we, in some sense, automate mathematics? And in 1936, Turing, in answering that question, invented a mathematical model that we now know as the Turing machine. And it's a beautiful thing. I'm sure you've talked about it on your show before. And Turing did something remarkable. He said, you know, you can't answer that question. There are certain mathematical statements that are fundamentally uncomputable.
Starting point is 00:16:51 You could never answer them. And it was a really profound breakthrough in mathematics because it said there are certain things in the world that we could never know through computation. So years later, Turing himself in the 40s realized that in solving a mathematical problem, he had actually invented a mathematical model, the Turing machine. And he realized the Turing machine was actually not just a model for solving math problems, but it was actually the model of problem solving itself. And the model of problem solving itself is what we mean
Starting point is 00:17:25 by computation. And in the 1950s, actually '58, John von Neumann, who you mentioned, wrote a book, the famous book called The Computer and the Brain. He said, perhaps what Alan Turing had done in his paper on intelligent machinery has given us the mathematical machinery for understanding the brain itself. And at that point, it became a metaphor. And John von Neumann himself realized it was a metaphor, but he thought it was a very powerful one, as they so often are.
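To make "the model of problem solving itself" a little more concrete, here is a minimal Turing machine simulator in Python; the machine below (a toy that just flips the bits of its input) is our own illustrative invention, not anything from Turing's or von Neumann's papers.

```python
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, symbol_to_write, head_move),
    where head_move is -1 (left), 0 (stay), or +1 (right). Halts in state 'halt'.
    This minimal version only extends the tape to the right.
    """
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if 0 <= head < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if head >= len(tape):          # extend the tape with a blank cell as needed
            tape.append(blank)
        tape[head] = write
        head += move
    return "".join(tape)

# A toy machine: walk right, flipping 0s and 1s, and halt at the first blank cell.
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine(flipper, "010110"))  # prints 101001_
```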
Starting point is 00:17:57 So that's the history. And so now, up into the present. So as you point out, there's a tendency to be a bit epistemologically narcissistic. We tend to use whatever current model we use and project that onto the natural world as almost the best fitting template for how it operates. But let's weigh the utility and disutility of the concept. The value of what Turing and von Neumann did was give us a framework for starting to understand how a problem-solving machine could operate. We didn't really have in our mind's eye an understanding for how that could work,
Starting point is 00:18:39 and they gave us a model for how it could work. For many reasons, some of which you've mentioned, the model is highly imperfect. Computers are not robust. If I stick a pencil in your CPU, your machine will stop working, but I can sever the two hemispheres of the brain, and you can still function. You're very efficient. Your brain consumes about 20% of the energy of your body, which is like 20 watts. It's 20% of a light bulb. Your laptop consumes about that and has some tiny fraction of your power.
Starting point is 00:19:15 And they're highly connected. The neurons are densely wired, whereas that's not true of computer circuits, which are only locally wired. And most importantly, the brain is constantly rewiring and adapting based on inputs, and your computer is not. So we know the ways in which it's not the same. But as I say, it's useful as a thought experiment for how the brain might operate.
Starting point is 00:19:40 So that's the computer term. But now let's take the information term. That one, for me, is different. That magazine article you mentioned is criticizing the information concept, not just the computer concept; the computer concept is limited, and we all agree, but the information concept is not. So we've already determined what information is mathematically. It's the reduction of uncertainty. And if you think about your visual system,
Starting point is 00:20:03 when you open your eyes in the morning and you don't know what's out there in the world, electromagnetic energy, which is transduced by photoreceptors in your retina and then transmitted through to visual cortex, allows you to know something about the world that you did not know before. So it's like going from the billiard balls all over the table to the billiard balls in a particular configuration. Very formally speaking, you have reduced the uncertainty about the world. You've increased the information. And it turns out you can measure that mathematically. And the extent to which that's useful is proved by essentially neuroprosthetics. The information theory of the brain allows us to build cochlear
Starting point is 00:20:46 implants. It allows us to control robotic limbs with our brains. So it's not a metaphor. It's a deep mathematical principle. It's a principle that allows us to understand how the brain is operating and re-engineer it. And so it's one of those cases where I think the article is so utterly confused that it's almost not worth attending to. Now, that's information. Information processing, if that's synonymous in your vocabulary with computing in the Turing sense, then you and I have just agreed that it's not right. But if information processing is what you do with Shannon information, for example, to transduce electromagnetic impulses into electrical firing patterns in the brain,
Starting point is 00:21:34 then it's absolutely applicable. And then how you store it, and then how you combine information sources. So when I see an orange, it has an orange color and it's also a sphere. I have tactile, mechanical impulses. I have visual electromagnetic impulses. And in my brain, they're combined into a coherent representation of an object in the world. And the coherent representation is in the form of an informational language of spiking. And so, you know, it's extraordinarily useful. It's allowed us to engineer, you know, neurobiologically mimetic architectures.
Starting point is 00:22:23 And it's made a huge difference in the lives of many individuals who have been born with severe disabilities. So I think we can take that article and shred it. Yeah. As I was reading the article, again, this is, it was one of those almost not even wrong categories of error. But, you know, I was thinking of things like genes can be on or off, right? So there's a digital component going all the way down into the genome. And the genome itself is a kind of memory, right? It's a memory for structure and physiology and even certain behaviors that have proved adaptive in the past. And therefore, it's a template for producing those in future organisms.
Starting point is 00:23:02 That's exactly right. And so that's the great power of mathematical concepts because, and again, we have to be clear in making distinctions between the metaphor of memory, right, and the mathematical model of memory. And the beautiful thing, that's why mathematics is so extraordinary and powerful, is that once we move to the mathematical model of memory, exactly as you say,
Starting point is 00:23:23 you can demonstrate that there are memories stored in genes, there are memories stored in the brain, there are memories stored in culture, and they bear an extraordinary family resemblance through the resemblance in the mathematical equations. So you described it as consilience, in Ed Wilson's term; you could describe it as unification, in the language of physics. And they're totally legitimate. Where we run into trouble is if we don't move to mathematics, but we only remain in the world of metaphor. And there, of course, everyone has a slightly different matrix of associations, and you can never fully resolve the ambiguities. Right. Except though, even at the level, forget about the math for a second,
Starting point is 00:24:05 let's just talk about something that's perilously close to metaphor. We are simply talking about cause and effect relationships that, in this case, reliably link inputs and outputs, right? So there's, I mean, there is just a, even in that article, he was talking about the nervous system being changed by experience. He just didn't want to talk about the resulting changes in terms of memory or information storage or encoding or anything else that, that suggested an analogy to a computer. But there's just this, this fact that change in physical structure can produce reliable change in its capacities going forward. Yes. And whether we want to call that memory or not, or learning or not, biologically, physically, that's what we're talking about. Absolutely. It's what we're talking about. No,
Starting point is 00:24:57 you're right. You see, that's the point. It has to do with this legitimate fear of anthropomorphism. And I think that what we do in these sort of more exact sciences is try and pin down our definitions so as to eliminate some of the ambiguities. They never go away entirely. But my suspicion, Sam, is that the author of that article will simply find a language that doesn't have its roots, if you like, in the world of information and apply these new terms. But we would realize if we read it through thoroughly that they were in fact just synonyms. He would find himself having to use these terms because they are, to the best of our knowledge, the best terms we have to explain the regularities we observe. Right. And yet we don't have to use terms like hydraulic pumps or the four humors. We can grant
Starting point is 00:25:58 that there have been bad analogies in the past where the details are not actually conserved in any way going forward. Well, but look at a good example, you know, it's a beautiful example because where we have used that is if you're talking about your cardiac system or your urinogenital system, it is entirely appropriate to use Harvey's model, which was the pump, right? So the ones that worked have stuck. And I think it's just time that will tell us whether or not our use of the informational concept will be an anachronism or will have enduring value. Well, for those of you who are interested to read this paper that we are trashing, I will put the link on my blog beneath where I embed this podcast. So now moving on to
Starting point is 00:26:46 your core area of interest, we've dealt with information. What is complexity? Yes. And so that's a very, that's a wonderful example of one of these terms that we use in daily life, but also has mathematical meaning. So the simplest way to think about complexity is as follows. Imagine you had a very regular object like a cube. You could express it just by describing its linear dimensions. And that would tell you what a cube is. And imagine you want to explain something at the other end of the spectrum, like a gas in a room. You could articulate that very reliably by just giving the mean velocities of particles in air. So these two extremes, the very regular, a crystal, to the very random,
Starting point is 00:27:50 a gas, permit of a description, which is very short. And so over the phone or over Skype, as we're speaking, I could describe to you very reliably, a regular object or a very irregular object. But now let's imagine you said, can you please describe to me, David, a mouse? And I said, well, it's this sort of weird tubular thing. And it's got hairs at one end, it's got this long appendage at the other, etc. It would take an awfully long time to describe. And complexity is essentially proportional to the length of that description. So that's a metaphor. And it turns out mathematically, the complex phenomena live somewhere between the regular and the random. And their hallmark signature is that their mathematical descriptions are long.
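A rough way to play with the description-length idea in Python, using an off-the-shelf compressor as a crude stand-in for "how long a description of this exact data has to be" (an illustration only, not the formal measure being described):

```python
import os
import zlib

def compressed_len(data: bytes) -> int:
    """Bytes needed after zlib compression: a rough proxy for the length of a
    description that reproduces this exact data."""
    return len(zlib.compress(data))

regular = b"ab" * 5000          # highly ordered, like a crystal
noisy = os.urandom(10000)       # random bytes, like repeated coin flips

print("regular:", len(regular), "->", compressed_len(regular))  # compresses to a few dozen bytes
print("random: ", len(noisy), "->", compressed_len(noisy))      # stays close to 10,000 bytes

# The random data are essentially incompressible, even though their generating
# process ("draw 10,000 random bytes") has a very short description. That
# pattern-versus-process distinction comes up a little later in the conversation.
```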
Starting point is 00:28:48 And that's what's made complexity science so hard because Einstein could write down a beautiful equation like E equals MC squared that captures the equivalence between energy and mass and has all these beautiful implications in special relativity, you know, less than a line. But how would you write down a mouse, which seems like a much more boring thing than energy and matter? And you can't. And so that's one way, an intuitive way, of thinking about a complex phenomenon, which is: how long does the description have to be to reliably capture much of what you consider interesting about it? And one point to make immediately is that, you know, if you look at physical phenomena,
Starting point is 00:29:31 they started off long too, right? So before Kepler revolutionized our understanding of celestial mechanics, we had armillary spheres with all these epicycles and deferents, right, explaining incorrectly the circular motion of celestial masses. And it took a while for us to realize that there was a very compact, elegant way of describing them. And it could be that for many complex phenomena, there is a very elegant, compact way of describing them. But many others, I don't think that will be the case. So complex systems are, as I said, these networked adaptive systems. Complexity itself as a concept mathematically tries to capture how hard it is to describe a phenomenon. And as they get more complex, these descriptions get longer and longer and longer and
Starting point is 00:30:25 longer. Right, right. You said something about randomness there that caught my ear because I thought if I gave you a truly random string of digits, unless you're talking about there was some method by which to produce it reliably, let's say, you know, like the decimal expansion of pi, that can be compressed. But if it's just a truly random series of digits, that's not compressible, right? That's just... That's absolutely right. And so that's a very important distinction.
Starting point is 00:30:54 And that is, I can describe the process of generating heads and tails by describing the dynamics of a coin. And so that's very short, right? But if I was trying to describe the thing I observe, then you're saying it would be incompressible and the description would be as long as the sequence described. In all of these cases, you're always talking about the underlying causal process that generates the pattern and not the pattern itself. And that's a very important distinction. So now, I think this is the first time I've ever conducted a conversation or interview like this, which is just kind of stepping through definitions, but I think it's warranted in this case. So what is intelligence and how is it related to complexity? Yeah, so, you know, intelligence is, as I say to people,
Starting point is 00:31:52 one of the topics about which we have been most stupid. And in so many ways, and we probably shouldn't get into it, not least that it is the topic about which we are least evolutionary, right? Because all of our definitions of intelligence are based on measurements that can only be applied to humans, and by and large, humans that speak English or what have you. So it's one of those areas that's been extremely foolishly pursued. So I don't mean an IQ test, okay? Because the IQ test is not interesting if you're trying to calculate the intelligence of an octopus, which I would like to know because I believe in evolution. And I think that we need to understand where these things come from. And just having a definition
Starting point is 00:32:38 that applies to one particular species doesn't help us. So what is it? And we've talked about entropy and computation, and they're going to be the keys to understanding intelligence. And so let's go back to randomness. The example I like to give is the Rubik's Cube, because it's a beautiful little mental model, metaphor. If I gave you a cube and I asked you to solve it, and you just randomly manipulated it: since it has on the order of 10 quintillion configurations, which is a very large number, you basically, if you were immortal, would eventually solve it. But it would take the lifetime of several universes to do so.
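For a sense of scale, here is a back-of-the-envelope version of that claim in Python. The cube has about 4.3 x 10^19 reachable configurations, and treating random manipulation as uniform sampling of configurations, one per second, is our simplifying assumption for the sketch:

```python
CONFIGURATIONS = 43_252_003_274_489_856_000        # ~4.3e19 reachable states of a 3x3x3 cube
SECONDS_PER_UNIVERSE_AGE = 13.8e9 * 3.156e7        # age of the universe in seconds, ~4.4e17

# Sampling configurations uniformly at random, one per second, the expected
# number of draws needed to hit the single solved state equals the number of
# configurations, so the expected wait in seconds is just that count.
expected_seconds = CONFIGURATIONS
print(f"Expected wait: about {expected_seconds / SECONDS_PER_UNIVERSE_AGE:.0f} ages of the universe")
# Roughly a hundred universe lifetimes, i.e. "the lifetime of several universes."
```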
Starting point is 00:33:26 That is random performance. Stupid performance is if you took one face of the cube and you just manipulated that one face and turned it, rotated it forever. And as everyone knows, if you did that, you would never solve the cube if you weren't already at the solution. And it would be an infinite process that would never be resolved. That rule is, in my definition, stupid. It is significantly worse than chance. Now let's take someone who's learned how to manipulate a cube and is familiar with
Starting point is 00:34:00 various rules. And these rules allow you, from any initial configuration, to solve the cube in 20 moves or less. That is intelligent behavior. So significantly better than chance. And this sounds a little counterintuitive, perhaps, until you realize that's how we use the word in our daily lives. You know, if I sat down with an extraordinary mathematician, and I said, I can't solve that equation. And they say, well, no, it's easy. Here, this is what you do. And you look at it, you say, oh, yes, it is easy, right? You made that look easy. That's what we mean when we say someone is smart. They make things look easy. If, on the
Starting point is 00:34:41 other hand, I sat down with someone who was incapable, and they just kept dividing by two, for whatever reason, I'd say, what on earth are you doing? What a stupid thing to do. You'll never solve the problem. What a foolish thing to do. What an inefficient thing to do. So that is what we mean by intelligence. It's the thing that we do that ensures that the problem is very efficiently solved and in a way that makes it appear effortless. And stupidity is a set of rules that we use to ensure that the problem will be solved in longer than chance or never, right? And is nevertheless pursued with alacrity and enthusiasm. And so now we're getting closer to the actual substance of the lecture you gave that I want you to recapitulate part of here because I just found it fascinating. And I mean, you can
Starting point is 00:35:39 recapitulate as much as you want to of it, but I'm in particular interested in the boundary line you drew between biology and culture and the way in which culture is a machine really for increasing our intelligence. And then you at some point express some real fear that we are producing culture or stewarding our institutional intelligence in a way that is actually making us biologically or personally less intelligent, perhaps to a dangerous degree in certain circumstances. So if you could just get us there at this point. Yeah. So this is a little bit of a lengthy narrative. I'm going to try and compress it. I'll make it as least complex as possible. So, you know, most of us are brainwashed to believe that we're born with a certain innate intelligence and we learn things to solve problems, but our intelligence goes basically unchanged, right?
Starting point is 00:36:48 And so, and you hear this all the time in conversations. They'll say, you know, that person's really smart, it's just that they never worked very hard and they didn't learn very much. Whereas that person's not very smart, but they learned a great deal and it makes them look smarter, that sort of thing. And I think that's absolute rubbish. So I think there's a very real sense in which education and learning makes you smarter. So that's sort of, in some sense, my premise. But just to stop there for a second, you wouldn't dispute, though, that there are differences in what psychologists have come to call G, general intelligence, and that this is somehow
Starting point is 00:37:22 not necessarily predicated upon acquiring new information. I would dispute that. So you think the concept of IQ is just useless, not just in octopi, but in people? More or less. And I should explain why. And I think, you know, a lot of recent research is required to understand why. I mean, let's just take an example. There are just canonical examples. You know, the young Mozart, right?
Starting point is 00:37:52 People will say, well, look, wait a minute. This is a kid who, at the age of seven, you know, had absolute pitch. And, you know, in his teens, you could play him a symphony that he could recollect note for note and reproduce on a score, et cetera, right? And surely this is an individual who was born with it. And what we now understand, of course, is that his father was a tyrant who from an extraordinarily young age drilled him and his sister in acquiring perfect pitch, in the subtleties of musical notation. And consequently, he was able to acquire very young characteristics that normally you wouldn't acquire later because normally you wouldn't be drilled. And in fact, more and more studies are indicating that
Starting point is 00:38:40 if you subject individuals to deliberative practice regimes, they can acquire skills that seem almost extraordinary. Let's take G and IQ in general. We now know that what it really seems to be measuring is working memory. Many working memory tasks are correlated and they live on this low dimensional space that we call G. And now, one of the classic studies was the number of numbers that you could hold in your head. In other words, I recite off a number of numbers and I ask you to remember them. And 10 minutes later, I ask you, you're not allowed to write them down.
Starting point is 00:39:23 But what you do is you replay them in your mind. And people could do 10, maybe they could do 11, and this was considered to be some upper limit on our short-term memory for numbers. And yet a series of experiments have now been conducted where, through very intelligent and ingenious means of encoding numbers, we have people now who can remember up to 300. And these are individuals, by the way, who at no point in their lives ever showed any particular extraordinary memory capacity. And so the evidence is on the side of plasticity, not on innate aptitudes.
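A toy illustration of the kind of recoding involved, in Python; the real studies used much richer mnemonic schemes (for example, recoding digit groups as familiar running times), so this chunking sketch is only meant to show the principle: you hold a few large units instead of many raw digits.

```python
def chunk_digits(digits, size=4):
    """Recode a long digit string into a short list of larger chunks."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

digits = "141592653589793238462643"      # 24 separate items if held digit by digit
chunks = chunk_digits(digits)            # only 6 items once recoded into chunks
print(len(digits), "digits ->", len(chunks), "chunks:", chunks)
```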
Starting point is 00:40:09 And to the extent that IQ is fundamentally measuring working memory, we now know how to start extending it. So that's an important point. I wouldn't deny that there are innate variations. I mean, I am not six foot five. I'm not even six foot. And so I will never be a basketball player. And so there are functions in the world that are responsive to variation that looks as if it's somewhat inflexible. But in the world of the brain, given that it is not a computer and the wiring diagram is not fixed in the factory but actually adapts to inputs, there's much more hope that the variation is, and in fact evidence that the variation is, much greater than we had thought. So the plasticity and trainability can just ride atop
Starting point is 00:41:00 variation that exists that is innate, so you could have differences in aptitude with and without training. That's exactly right. And I think that's precisely true. And I think the open question for us is how much of that, if you like, innate Lego material is universal, right? Versus how many of those pieces had already been pre-assembled into little castles and cars, which we then could build upon. And I think whether some people are arriving on the stage with an advantage is actually not known. And I think what I'm reporting is that the current deliberative practice data suggests that that's less true. Right. Than we thought it was. That's the point. Right, which puts the onus to an even greater degree
Starting point is 00:41:51 than most people would expect on culture and on what you do with your time and on parenting and all of this machinery that is outside any individual brain, which is, in a very material sense, augmenting its intelligence. And so take us into that direction. Yeah, so that's a very important point. So that's why that connection is important to make.
Starting point is 00:42:18 So, okay, so now we've basically understood what intelligence is, what stupidity is. We understand that we are flexible to an extraordinary degree, maybe not infinitely so. And as you point out, the inputs then become much more important than we had thought in the past. And so let's now move into intelligent, or what sometimes gets called cognitive artifacts. So here's an example.
Starting point is 00:42:53 Your ability to do mathematics or perform mathematical reasoning is not something you were born with. You did not invent numbers. You did not invent geometry or topology or calculus or algebraic geometry or number theory or anything else for that matter. They were all given to you if you chose to study mathematics as a class, in a class. And what those things allow you to do is solve problems that other people cannot solve. And for all of us in our lives, numbers are the, you know, in some sense, the lowest hanging fruit in our mathematical education.
Starting point is 00:43:26 And so let's look at numbers. There are many number systems in the world. They're very ancient. Ancient Sumerian cuneiform numbers, about 5,000 years old. Ancient Egyptian numbers. And here's a good example of stupidity in culture. Western Europe, for 1,500 years, used Roman numbers, Roman numerals, from about the second century BC to about 1500 AD, which are fine for recording the number of objects, but terrible for performing calculation. So, adding: what's X plus V? You know, what's XII multiplied by IV? And so on. It just doesn't work. And yet, for 1500 years, the human brain opted to deliberate over arithmetic operations using Roman numerals
Starting point is 00:44:28 that don't work. And the consequence of that is that Europeans, for much of their history, could not divide and multiply. And it's an extraordinary thing, because it's unbelievably stupid. And it's unbelievably stupid when you realize that in India and Arabia, they had a number system started in India, moved to Arabia, that was available from about the second century, that is the one that we use today, that would effortlessly be able to multiply and divide numbers. And so that's a beautiful example of the interface between culture and our own reasoning.
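A small Python sketch of why the notation matters: with a place-value system the multiplication is a single arithmetic step, whereas with Roman numerals there is no column-by-column algorithm on the numerals themselves, so in practice you convert, compute, and convert back (the conversion rules below are standard; the example calculation is ours).

```python
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral):
    """Convert a Roman numeral to an integer, handling subtractive pairs like IV and IX."""
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN_VALUES[ch]
        # A smaller value written before a larger one (e.g. the I in IV) is subtracted.
        if i + 1 < len(numeral) and value < ROMAN_VALUES[numeral[i + 1]]:
            total -= value
        else:
            total += value
    return total

# "What's XII multiplied by IV?" The practical route is: convert, multiply, done.
print(roman_to_int("XII") * roman_to_int("IV"))  # prints 48
```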
Starting point is 00:45:02 And the reason it's so intriguing is because once I've taught you a number system, like the Indian-Arabic number system, the base 10 number system, you don't need the world anymore. You don't need paper anymore to write it down. You can do these operations in your mind's eye. And that's what makes them so fascinating. And I call that kind of object that was invented over the course of centuries by many, many minds complementary cognitive artifacts. And their unique characteristic is that not only do they augment your ability to reason, in the form, for example, of multiplying or dividing, but when I take them away from you, you have in your mind a trace of their attributes that you can deploy. And that, it's interesting, is probably what's new in thinking about the evolution of cultural intelligence. For a long time, psychologists, cognitive scientists, archaeologists have understood that there are
Starting point is 00:46:07 objects in the world that allow us to do things we couldn't do otherwise, right? I mean, a fork, right? Or a scythe, right? Or a wheel, you know, it's been understood. But there is a special kind of object in the world that not only does what the wheel and the scythe and the fork do, but it also changes the wiring of your brain, so that you can build in your brain a virtual fork or a virtual scythe or a virtual wheel. Of course, it's not that literal. And that is, I would claim, by the way, the unique characteristic of human evolution. Wouldn't you put language itself into this category? Absolutely, I would.
Starting point is 00:46:46 The reason I separate them, by the way, is that many people erroneously assume that the algorithm... If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org. Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content. Thank you.
