Making Sense with Sam Harris - #332 — Can We Contain Artificial Intelligence?

Episode Date: August 28, 2023

Sam Harris speaks with Mustafa Suleyman about his new book, “The Coming Wave: Technology, Power, and the 21st Century’s Greatest Dilemma.” They discuss the progress in artificial intelligence made at his company DeepMind, the acquisition of DeepMind by Google, Atari DQN, AlphaGo, AlphaZero, AlphaFold, the invention of new knowledge, the risks of our making progress in AI, “superintelligence” as a distraction from more pressing problems, the inevitable spread of general-purpose technology, the nature of intelligence, productivity growth and labor disruptions, “the containment problem,” the importance of scale, Moore’s law, Inflection AI, open-source LLMs, changing the norms of work and leisure, the redistribution of value, introducing friction into the deployment of AI, regulatory capture, a misinformation apocalypse, digital watermarks, asymmetric threats, conflict and cooperation with China, supply-chain monopolies, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe. Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

Transcript
Just a note to say that if you're hearing this, you're not currently on our subscriber feed, and will only be hearing the first part of this conversation. In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one. Welcome to the Making Sense Podcast. This is Sam Harris. Okay, just a reminder that subscribers to the podcast can now share full episodes by going to the episode page on my website and getting the link. And you can share one-to-one with friends and family,
or you can post to social media, whatever you like. Okay, today I'm speaking with Mustafa Suleyman. Mustafa is the co-founder and CEO of Inflection AI and a venture partner at Greylock, a venture capital firm. Before that, he co-founded DeepMind, which is one of the world's leading artificial intelligence companies, now part of Google. And he was vice president of AI product management and AI policy at Google. And he is also the author of a new book, The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma, which is the focus of today's conversation. We talk about the new book.
Starting point is 00:01:52 We talk about the progress that was made in AI by his company, DeepMind, various landmarks they achieved, Atari DQN, AlphaGo, AlphaZero, AlphaFold. We discussed the amazing fact that we now have technology that can invent new knowledge, the risks of our making progress in AI, superintelligence as a distraction from more pressing problems, the inevitable spread of general-purpose technology, the nature of intelligence, productivity growth and labor disruption, the containment problem, the importance of scale, open-source LLMs, changing norms of work and leisure, the redistribution of value, introducing friction into the deployment of AI, regulatory capture, the looming possibility of a misinformation apocalypse, digital watermarks,
asymmetric threats, conflict and cooperation with China, supply chain monopolies, and other topics. Anyway, it was great to get Mustafa here. He's one of the pioneers in this field, and as you'll hear, he shares many of my concerns, but with different points of emphasis. And now I bring you Mustafa Suleyman. I am here with Mustafa Suleyman. Mustafa, thanks for joining me. Great to be with you, Sam. Thanks for having me. So you have a new book, which the world needs, because this is the problem of our time. The title is The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma. And we will get into the book, because it's really quite a good read.
And we will talk about what that coming wave is. But you're especially concerned about AI, which is your wheelhouse, though you're also talking about synthetic biology and, to a lesser degree, robotics and some other technologies that are going to be more and more present if things don't run totally off the rails for us. But before we jump into the book, let's talk about your background. How would you describe the bona fides that have brought you to this conversation? Yeah. I mean, I started life, I guess, as an entrepreneur when I was 18. I started my first company, which sold point-of-sale systems. And we were sort of
installing these very early PDAs and networking equipment back in 2002, 2003. It wasn't successful, but that was my first attempt. I dropped out of Oxford at the end of my second year, where I was reading philosophy, to start a charity, and I helped two or three other people get a telephone counseling service off the ground. It was a secular service for young British Muslims. I had just become an atheist at the time, having been to Oxford, discovered human rights principles and the ideas of universal justice, and managed to move out of the faith. And I decided that I really wanted to dedicate my life to doing good, and studying philosophy and theory was too esoteric and too distant from action.
I'm a very practical, action-focused person. So I spent a couple of years doing that. A little after that, I spent a year or so working in local government as a human rights policy officer for the mayor of London at the time. I think I was 21 when I started that job. It was very big and exciting, but ultimately quite unsatisfying and frustrating. Who was the mayor? Was that Johnson? That was before Johnson. Yeah, quite a bit before. It was Ken Livingstone, back in 2004. Right. So quite a while back in London. And then from there, I wanted to see how I could scale up my
impact in the world. And I helped to start a conflict resolution firm. I was very lucky, at the age of 22, to be able to co-found this consultancy with a group of some of the most practiced negotiation experts in the world, some of the people who were involved in the peace and reconciliation process in South Africa post-apartheid. And it was a big group of us coming together with very different skills and backgrounds. And I had an incredible three years there, working all over the world: in Cyprus, for the Dutch government, on the Israel-Palestine question, many different places. And it was hugely inspiring and taught me a lot about the world. But I fundamentally realized from there that if I didn't get back to technology, I would miss the most important transition, you know, wave, if you like, happening in my lifetime.
And, you know, I set about this shortly after the climate negotiations that we were working on in 2009 in Copenhagen. Everyone left feeling frustrated and disappointed that we hadn't managed to reach agreement. And this was the year that Obama was coming over and everyone had a lot of hope. And it didn't happen, it turns out, for another 10 or 12 years. And I sort of had this aha moment. I was like, if I don't get back to technology, then I'm going to miss the most important thing happening. And so I set about on this quest, trying to find anyone who I knew, even tangentially, who was working in technology. One of my close friends from when we were teenagers had an older brother, Demis Hassabis, and Demis and I were playing poker together one night in the Victoria Casino in London. And we got chatting about the ways that, as we framed it at the time, robots were going to transform the world and deliver enormous productivity boosts and improve efficiency in every respect. And we were sort of debating,
how do you do good in the world? How do you get things done? You know, what is the real set of incentives and efforts that really makes a difference? We were both very passionate about science and technology and having a positive impact in the world. And, you know, one thing led to another, and eventually we ended up starting DeepMind. I did that for 10 years. Yeah, along with Shane Legg, right? Shane is our other co-founder, exactly. Shane was at the Gatsby Computational Neuroscience Unit in London at the time,
and he had just finished his PhD a few years earlier. He was doing postdoctoral research. And his PhD was on definitions of intelligence, which was super interesting: very obscure, and really, really relevant. He was trying to synthesize 60 or so different definitions of intelligence and to abstract that into an algorithmic construct, one that we could use to measure progress towards some defined goal. And his frame was that intelligence is the ability to perform well across a wide range of environments. So the core emphasis was that intelligence was about generality, right? And we can get into this.
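(For readers curious what such an algorithmic construct looks like: the definition Shane Legg later published with Marcus Hutter, stated here from their paper on universal intelligence rather than from this conversation, takes roughly the following form.)

```latex
% Legg & Hutter's "universal intelligence" measure: an agent \pi is scored
% by its expected cumulative reward V in each computable environment \mu,
% with simpler environments (lower Kolmogorov complexity K) weighted more.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

(Summing over the whole class of environments E is what encodes the emphasis on generality: scoring well requires performing well in many environments, not just one.)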
There are lots of different definitions of intelligence, which place emphasis on different aspects of our capabilities, but generality has become the core concept that has dominated the field for the last 12 to 15 years. And of course the term AGI predated Shane, but I think it was very much popularized by our mission. It was really the first time in a long while that a company had been founded to invent general intelligence, or AGI. And that was our mission: to try and build safe and ethical artificial general intelligence. So I'm trying to remember where we met. I know we were both at the Puerto Rico conference at the beginning of 2015. I don't know if it was the first of these meetings, but it was the first that I was aware of that really focused the conversation on AI safety and
risk. And I know I met Demis there. I think you and I met in LA subsequent to that. Is that right? Yeah, I think we met, I can't remember if we met before or after that, but we had a common interest in our LA conversation. It might have been just before that, talking about extremism and radicalization and terrorism. Oh, within Islam? Yeah, that's right. So, yeah, I don't think we met in Puerto Rico, but that conference was very formative for me. It really gave me my first impression of how big a deal this was going to be, ultimately. And then there was a subsequent conference in 2017 at Asilomar, where I think we met again. And I think I met Shane there as well. So before we jump into, again, the book and what you're doing currently, because you've since moved on from DeepMind and you have a new company that we'll talk about. But let's talk about DeepMind, because it really was... you know, it's been eclipsed in the popular consciousness by OpenAI of late, with the advent of ChatGPT and large language models.
But prior to that, DeepMind was the preeminent AI company, and may in fact still be, though it's now a branch of Google. Give us a little bit of the history there and tell us what was accomplished. Because at DeepMind, you had several breakthroughs that were just fundamental, and you really put AI back on the map. Prior to what you did there, we were in a so-called AI winter, where it was just common knowledge that this artificial intelligence thing wasn't really panning out. And then all of a sudden, everything changed. So I think pre-acquisition, which was in 2014, there were probably two principal
contributions that we made. I think the first is that we made a very early bet on deep learning. The company was founded in the summer of 2010, and it really wasn't for a couple of years that deep learning had even appeared on the scene, even academically, with the ImageNet challenge coming a few years after we were founded. So that was a very significant bet that we made early and that we got right. And the consequence of that was that we were able to hire some of the best PhDs and postdoctoral researchers in the world, you know, who at the time were working on this very obscure,
very uninteresting, largely not very valuable subject. In fact, Geoff Hinton was one of our consultants. So was his student at the time, Ilya Sutskever, who's now chief scientist and co-founder of OpenAI, along with many others from OpenAI and elsewhere who basically either worked with us full-time or worked with us as consultants. And that was largely reflective of the fact that we got the bet right early on deep learning. The second contribution, I would say, was the combination of deep learning and reinforcement learning. I mean, if deep learning was obscure, reinforcement learning was even more theoretical. And, you know,
we were actually quite careful to frame our mission among academics less around AGI and more around applied machine learning. Certainly in the very early days, we were a bit hush-hush about it. But as we got more traction in 2011, 2012, it became very attractive to people who were otherwise quite theoretical in their outlook to come and work on problems like reinforcement learning in a more engineering-focused setting, albeit still a research lab. And it was the combination of deep learning and reinforcement learning that led to our
first, I think, major contribution, which was the Atari DQN agent. DQN was a pretty incredible system. It essentially learned to play 50 or so of the old-school '80s Atari games to human-level performance, simply from the pixels: learning to correlate a set of rewarding moments in the game, via the score, with the frames and the actions taken in the run-up to that score. And that was a really significant achievement. It was actually that which caught Larry Page's attention and led him to email us and invite us to come and be part of Google.
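(As a rough illustration of the learning rule being described here, a minimal sketch of the temporal-difference update at the heart of DQN-style learning, not DeepMind's actual code: the value of a state-action pair is nudged toward the reward just observed plus the discounted value of the best next action. In the real system, Q was a deep convolutional network reading raw Atari pixels.)

```python
# Sketch of the core DQN-style update: Q(state, action) is pulled toward
# "reward now, plus discounted best value of the next state".
import numpy as np

GAMMA = 0.99  # discount factor: how much future score matters

def td_target(reward: float, next_q: np.ndarray, done: bool) -> float:
    """Target for Q(s, a): r + gamma * max over a' of Q(s', a')."""
    return reward + (0.0 if done else GAMMA * float(np.max(next_q)))

def update(q: np.ndarray, action: int, reward: float,
           next_q: np.ndarray, done: bool, lr: float = 0.1) -> None:
    """Move the current estimate a small step toward the target."""
    q[action] += lr * (td_target(reward, next_q, done) - q[action])
```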
And then Google acquired you. And what was the logic there? It was just good to have Google's resources to scale? I mean, Larry made a very simple claim, which was: I've spent the last 10 years or so building a platform with all the resources necessary to make a really big bet on AGI. Why should you guys go through all of that again? We'll give you the freedom you need to carry on operating as essentially an independent subsidiary, even though we were part of Google. Why wouldn't you just come and work with us and have all the resources you need to scale significantly? Which is what we did. And it was a very compelling proposition, because at the time, monetizing deep learning, back in 2014, was going to be really tough. But Google had its own AI division as well, that was just kind of
working in parallel with DeepMind. Did you guys merge at some point? I don't know if that happened after you left or before, but was there a firewall between the two divisions for a time that then came down? Or how did that work? Yeah. So the division you're referring to is Google Brain, which is run by Jeff Dean, and I think that started in 2015, with Andrew Ng actually as well. And in some ways, that's the beauty of Google's scale, right? It was able to run multiple huge, billion-dollar efforts in parallel. And the merger, which I think had been a long time coming, actually only happened this year. So Google Brain plus DeepMind is now Google DeepMind.
And most of the open-ended research on AI is now consolidated around Google DeepMind. And all of the more focused applied research that helps Google products more directly in the short term sits in a separate division, Google Research. Right. So you had the Atari game breakthrough, which caught everyone's attention because, if memory serves, you managed to build a system that achieved human-level competence and beyond, and also achieved novel strategies that many humans wouldn't come up with. But then the real breakthroughs that got everyone's attention were with AlphaGo and AlphaZero and AlphaFold. Perhaps you can run through those, because that's when, at least to my eye, things just became unignorable in the AI field. Yeah, that's exactly right. I mean,
it's pretty interesting, because after we got acquired, it was actually Sergey who was insisting that we tackle Go. I mean, his point was that Go is a massively complex space, and all the traditional methods that had previously been used for games before DQN essentially involved hand-crafting rule-based features, which is really what drove the work behind Deep Blue, IBM's model, a long time ago: '97, I think it was. Go has something like 10 to the power of 170 possible configurations of the board. It's a 19-by-19 board with black and white stones, and the rules are very simple. It's a turn-based game where each player simply places one stone on the board, and when you surround your opponent's
stones, you remove them from the board. And the goal is to surround your opponent. So it's a very simple rule set, but a massively complicated set of different configurations can emerge. And so you can't search all possible branches of that space, because it's so enormous. Yeah. 10 to the 170 is more than the number of atoms in the known universe, approximately. Yeah. I think it's something like 10 to the 80 that gets you all the protons in the universe. So yeah, it gets bigger still when you're talking about Go. Yeah. Right. So this needed a new suite of methods. And, you know, I think it was an incredible experience seeing AlphaGo progressively get
better and better. I mean, we already had an inkling of this when we saw it play the Atari games, but this was just seismically more complicated and vast. And yet it was using the same basic principle, actually the same principle that has subsequently been applied in protein folding too. I think that's what's really interesting about this: the generality of ideas that simply scale with more compute. Because a couple of years later, AlphaGo became AlphaZero, which essentially achieved superhuman performance without any learning from prior games. Part of the trick with AlphaGo is that it looked at hundreds of thousands of prior games, almost like the expert knowledge of existing players that
has been handed down over centuries of playing the game. Whereas AlphaZero was able to learn entirely through self-play. I think the intuition is that it spawns instances of itself in order to play against itself in simulated environments, many hundreds of millions, billions, of times. It turns out that's way more valuable than bootstrapping itself from the first principles of human knowledge, which, if you think about the size of the state space, represents a minor subset of all possible configurations of that board. And that was a kind of remarkable insight.
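(To make the self-play idea concrete, here is a toy analogue, nothing like AlphaZero's scale or its neural-network-guided tree search: a tabular agent that gets good at the simple game of Nim purely by playing against copies of itself, with no human game data.)

```python
# Toy self-play: Monte Carlo learning on Nim. Players alternately take 1-3
# sticks from a pile of 21; whoever takes the last stick wins. The agent
# improves only by playing against itself.
import random
from collections import defaultdict

Q = defaultdict(float)        # Q[(sticks_left, take)] -> value estimate
LR, EPSILON = 0.1, 0.2        # learning rate, exploration rate

def choose(sticks: int) -> int:
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < EPSILON:
        return random.choice(moves)                   # explore
    return max(moves, key=lambda m: Q[(sticks, m)])   # exploit

for _ in range(100_000):
    sticks, player, history = 21, 0, []
    while sticks > 0:
        move = choose(sticks)
        history.append((player, sticks, move))
        sticks -= move
        player = 1 - player
    winner = 1 - player                   # whoever took the last stick
    for p, s, m in history:               # credit every move in the game
        reward = 1.0 if p == winner else -1.0
        Q[(s, m)] += LR * (reward - Q[(s, m)])

# The learned policy rediscovers Nim's known strategy: leave the opponent
# a multiple of 4. From 21 sticks, that means taking 1.
print(max((1, 2, 3), key=lambda m: Q[(21, m)]))   # usually prints 1
```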
And actually, it did the same thing for other games, including chess and shogi and so on. Yeah, that's a really fascinating development, where it's now uncoupled from the repository of human knowledge. It plays itself. And over the course of, I think it was just a day of self-play, it was better than AlphaGo and any other system, right? Right. That's exactly right. And obviously that's partly a function of compute. But the basic principle gives an important intuition, which is that because these methods are so general, they can be parallelized and scaled up. And that means we can take advantage of all of the traditional assets of computing infrastructure, rather than relying on old-school methods:
perfect memory, parallelizable compute, Moore's law, daisy-chaining compute together just like we do with GPUs these days. So in some ways, that's the key intuition, because it means the barrier to applying a high-quality algorithm is lower: it's turbocharged by all these other underlying drivers, which are also improving the power and performance of these models. Mm-hmm. And famously, in its match against the world champion, AlphaGo came up with a move that all Go experts thought they immediately recognized as a mistake. But then, when the game played out, it turned out to be this brilliant, novel move that no human would have made, and it's just a piece of discovered Go knowledge.
Yeah. I mean, I remember sitting in the commentary room live, watching that unfold and listening to the commentator, who was himself a 9-dan expert, say that it was a mistake. He was like, oh no, we've lost. And it took 15 minutes for him to correct that and come back and reflect on it. It was a really remarkable moment. And actually, for me, it was a great inspiration, because this is why we started the company. The quest was to try to invent new knowledge. I mean, our goal here is to try to design algorithms that can teach us something that we don't know: not just reproduce existing knowledge and synthesize information in new ways, but genuinely discover new strategies or new molecules, new compounds, new ideas, and contribute to the well of human knowledge and capability. And this was a kind of first. Well, actually, it was the second indication, because the first instinct I got for that was watching the Atari games player learn new strategies from scratch. And this was
kind of the second, I think. And what about AlphaFold? Because this is a very different application of the same technology. What did you guys do there, and what was the project? Well, protein folding is a long-standing challenge. And we actually started working on this as a hackathon, which began in my group back in 2016. It was really just an experiment to see if some of the AlphaGo models could actually make progress here. And the basic idea is that if you can generate an example of the way a protein folds, this folding structure might tell you something about the value of that molecule in practice: what it can do, what its strengths and weaknesses are, and so on. The nice thing about it is that because it operated in a simulated environment, it was quite similar to some of the games that we had been teaching our models to play.
Previously, experimental methods had solved something like 190,000 proteins, which is about 0.1% of all the proteins in existence. But with AlphaFold 2, the team actually open-sourced something like 200 million protein structures all in one go, which is essentially all known proteins. This was a massive breakthrough that took four or five years of work and development. And I think it just gives an indication of the kinds of things that become possible with these sorts of methods. Yeah. Someone gave what purported to be a straightforward comparison between what AlphaFold did there and years of academic PhD work. And it was something like 200 million PhD theses' worth of work got accomplished in a few years there, in terms of
solving those protein folding problems. Yeah. Those kinds of compressions are similar across the board with many technologies. Another one like that: the amount of labor that once produced 50 minutes of light in the 18th century now produces 50 years' worth of light. And that just gives a sense of how technology has this massive compressive effect that is hugely leveraging in terms of what we can do.
Yeah, there's another crazy analogy in your book about the scale of these new large language models, which we'll get to. The comparison was something like this: if every floating-point operation were a drop of water, the largest large language models would execute as many calculations as would fit into the entire Pacific Ocean. The scale is astounding.
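(A back-of-envelope check of that image, using round figures of my own rather than the book's: the Pacific holds roughly 7 x 10^8 cubic kilometers of water, and a drop is about 0.05 milliliters.)

```python
# Rough sanity check of the droplet analogy (approximate figures, not the
# book's numbers).
pacific_km3 = 7e8                 # ~7 x 10^8 km^3 of water in the Pacific
liters = pacific_km3 * 1e12       # 1 km^3 = 10^12 liters
drops = liters / 5e-5             # one drop is ~0.05 mL = 5 x 10^-5 L
print(f"{drops:.1e} drops")       # ~1.4e+25
# Published estimates of total training compute for today's largest models
# are in the 10^24-10^25 floating-point-operation range, so the analogy is
# indeed the right ballpark.
```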
Right. So your book was a bit of a surprise for me, because you are more worried than I realized about how all of this can go wrong. You and I haven't spoken very much, but in talking to you and Demis and Shane (and these conversations are now several years old), I got the sense that you were more sanguine about our solving all of the relevant problems, alignment being chief among them, but also other concerns around bad incentives and arms-race conditions, etc. You all were putting a fairly brave face on a problem that was making many of us increasingly shrill, not to say hysterical. I guess the most hysterical voice of the moment is someone like Eliezer Yudkowsky. And there was obviously Nick Bostrom and others who were issuing fairly grave warnings about how
it was more likely than not that we were going to screw this up and build something that we really can't control, ultimately, and that could well destroy us. And on the way to the worst possible outcome, there are many bad, very likely outcomes, like a misinformation apocalypse and other risks. But in your book, you don't give the risks short shrift. You do seem to suggest, certainly when you add in the attendant risks of synthetic biology, that as worried as we are, there really is no brake to pull. The incentives are such that we're going to build this. And so we have to figure out how to repair the rocket and align it properly as it's taking off, because there's just no getting off this ride at the moment, despite the fact that people, or some people, are calling for a moratorium. So I guess before we jump into the
book, when did you get worried? Were you always worried, or are you among the newly worried? People like Geoff Hinton, who you mentioned, is really the godfather of this technology, and he just recently resigned from Google so that he could express his worries in public. And he seems to have become worried only in the presence of these large language models. It's quite inscrutable to me that he suddenly had this change of heart, because, in my view, the basis for this concern was always self-evident. So give me the memoir of your concerns here. Yeah, so this is not a new consideration for me. I've been worried about this from the very first days when we founded the company. In fact, the strapline on the business plan that we took to Silicon Valley in 2010 was: building artificial general intelligence safely and ethically for the benefit of everyone. That was something that was critical to me all the way through. When we sold the company, we made it a condition of the acquisition
that we have an ethics and safety board, with some independent members, overseeing the technology in the public interest, and that our technologies wouldn't be used for military purposes like lethal autonomous weapons, or for surveillance by the state. And since then, at Google, I went through lots and lots of different efforts to experiment with different kinds of oversight boards and charters and external scrutiny and independent audits and all kinds of things. So I'd say it's definitely been top of mind for me all the way through. I think where I diverge from the Bostrom camp a bit is that I think the language around superintelligence has actually been a bit of a distraction. And I think it was quite obviously a distraction from fairly early on. I think that the focus on this intelligence explosion, this AI that
recursively self-improves and suddenly takes over everybody and turns the world to paperclips, has consumed way more time than the idea justifies. And actually, I think there's a bunch of more near-term, very practical things that we should be concerned about. They shouldn't provoke shrill alarmism or panic, but they have real consequences, and if we don't take them seriously, they have the potential to cause serious harm. And if we continue down this path of complete openness, without any checks and balances on how this technology arrives in the world, then essentially it has the potential to cause a great deal of chaos. And I'm not talking about AIs running out of control, and robots and so on. I'm really talking about massively amplifying the spread of misinformation and, more generally, reducing the barrier to entry to being able to exercise power. That is fundamentally what this technology is. I mean,
in my book, I have a framing, which I think is more helpful, around a modern Turing test, one that evaluates capabilities: what can an AI do? I think we should be much more focused on what it can do rather than what it can say. What it can say is important and has huge influence, but increasingly it's going to have capabilities. And so an artificial capable intelligence, an ACI, is something that has the potential not just to influence and persuade, but also to learn to use APIs and initiate actions, queries, and calls in third-party environments. It'll be able to use browsers and parse the pixels on the browser to click buttons and take actions in those environments. It'll be able to phone up and speak to, communicate with, other AIs and other humans.
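(To give a feel for what "taking actions" means mechanically, here is a deliberately toy sketch of such a loop. Every name in it, the scripted call_model stub and the stand-in tool, is hypothetical, not a real API: a real ACI would put a hosted model and real services in those slots.)

```python
# Hypothetical sketch of an action-taking loop: a model proposes steps,
# external tools execute them, and the results are fed back to the model.
SCRIPT = iter([
    "search: cheapest flight LHR to JFK next Friday",
    "done: found a fare and drafted the booking",
])

def call_model(context: str) -> str:
    return next(SCRIPT)   # stand-in: a real system would query an LLM here

TOOLS = {
    "search": lambda q: f"(pretend search results for {q!r})",  # fake tool
}

def act(goal: str, max_steps: int = 5) -> str:
    context = f"Goal: {goal}"
    for _ in range(max_steps):
        step = call_model(context)          # model chooses the next action
        if step.startswith("done:"):
            return step[len("done:"):].strip()
        tool, _, arg = step.partition(":")
        context += f"\n{step}\n-> {TOOLS[tool.strip()](arg.strip())}"
    return context

print(act("book me a cheap flight to New York"))
```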
So these technologies are getting smaller and smaller and more and more capable, and they're getting cheaper to build. And so if you look out over a 10-to-20-year period, I think the story is one of a proliferation of power in the conventional sense, not so much an intelligence explosion, which, by the way, just for the record, I think is an important thing for us to think about. And I care very deeply about existential risk and AGI safety. But I think that the more practical risks are not getting enough consideration, and that's actually a big part of the book. In no way does that make me a pessimist. I mean, I'm absolutely an optimist. I'm hopeful and positive about technology. I want to build things to make people's lives better and to help us create more value in the world and reduce suffering. And I think that's the true upside of these technologies, and we will be able to deliver on that upside. But no technology comes without risk, and we have to consciously and proactively attend to the downsides. Otherwise,
we haven't really achieved our full objective. And that's the purpose of speaking up about it. Well, before we get into details about the downsides, let's talk about how this might go well. But I guess before we talk about the upside, let's just define the terms in the title of your book. The title is The Coming Wave. What is the coming wave? So when you look back over the millennia, there have been waves of general-purpose technologies, from fire to the invention of the wheel to electricity. And each of these waves, to the extent that they have been lasting and valuable, consists of general-purpose technologies that enable other technologies. And that's what makes them a wave.
They're enablers of other activity; they're general-purpose in nature. And as they get more useful, naturally, people experiment with them, they iterate, they invent, they adapt them, and they get cheaper and easier to use. And that's how they proliferate. So in the history of technologies, all technologies that have been useful, that are real general-purpose technologies, have spread far and wide and gotten cheaper. And almost universally, that is an incredibly good thing. It has transformed our world. And I think that's an important but very simple concept to grasp, because if that is a law of technology, if it is a fundamental property of the evolution of technology, which I'm arguing it is, then that has real consequences for the next wave, because the next wave is a wave of intelligence and of life itself.
Intelligence is the ability to take actions. It is the ability to synthesize information, make predictions, and affect the world around you. It's almost the definition of power. And everything that is in our visual sphere, everything in our world, if you look around you at this very minute today, has been affected in a very material way by intelligence. It is the thing that has produced all of the value and all of the products and affected the landscape that you can see around you in a huge way. And so the prospect of being able to distill what makes us unique as a species into an
algorithmic construct that can benefit from being scaled up and parallelized, that can benefit from perfect memory and compute and consuming vast amounts of data, trillions of words of data, is enormous. I mean, that in itself is almost like gold. It's like alchemy. It's like being able to capture the essence of what has made us capable and add more knowledge, and essentially science and technology, into the human ecosystem. So imagine that in the future, in 10 or 15 years, everybody will have access to the very best doctor in the world, the very best educator, the very best personal assistant and chief of staff. Any one of these roles, I think, is going to be
very, very widely available to billions of people. People often say to me, well, aren't the rich going to benefit first? Or is it going to be unfair in terms of access? Yes, for a period of time, that's true. But we're actually living in one of the most meritocratic moments in the history of our species. Every single one of us in the Western world, really the top 2 billion people on the planet, no matter how wealthy we are, has access to the same smartphone. No matter how much you earn, you cannot buy a smartphone or a laptop that is better than the very richest person's. That's an unbelievably meritocratic moment that is worth really meditating on. And that is largely a function of these exponentials. You know, the cost of chips
has exponentially declined over the last 70 years, and that's driven mass proliferation. And if intelligence and life are subject to those same exponentials over the next two to three decades, which I think they are, then the primary trend that we have to cope with, in terms of our culture and our politics and commerce, is this idea that intelligence, the ability to get stuff done, is about to proliferate. And that's going to produce a Cambrian explosion of productivity. Everybody is going to get access to a tool that enables them to pursue their agenda, to make us all smarter and more productive and more capable. So I think it might be one of the most productive periods in the history of humanity. And, of course, the challenge is that it may also be one of the most unstable over the next 20 years. Yeah. So that cornucopia image immediately begets the downside concern of massive labor disruption, which many people doubt in principle. They just think that we've learned
over the course of the last 200 years of technological advancement and economic thinking that there is no such thing as a true canceling of the need for human labor. And so people draw the obvious analogies from agriculture and other previous periods of labor disruption and conclude that this time is no different: the labor-canceling innovations born of AI will just open new lanes for human creativity, and there'll be better jobs. And just as we were happy to get rid of jobs in agriculture and coal mines and open them up in the service sector, we're going to do the same with AI. I remain quite skeptical that this time is the same, given the nature of the technology. As you just said, this is the first moment where we are envisioning a technology which is a true replacement for human intelligence. If we're talking about general intelligence, and we're talking about the competence that you just described, the ability to do things in
addition to saying things, we are talking about the cancellation of human work, at least in principle. And strangely, I mean, this is not a terrible surprise now, but it would have been a surprise probably 20 years ago: this is coming for the higher-cognitive, higher-status, white-collar jobs before it's coming for blue-collar jobs. How do you view the prospect of labor disruption here? And how confident are you that everyone can be retrained, with their nearly omniscient AI assistants and chiefs of staff, and find something worth doing that other people will pay them to do? I mean, I'm with you.
I've long been skeptical of people who've said that this will be just like the agricultural revolution, or this will be like the horse and cart and cars: people will have more wealth, the productivity will drive wealth creation, and then that wealth creation will drive demand for new products. And we couldn't possibly imagine
what people are going to want to consume and what people are going to create with this new wealth and new time. That's typically how the argument goes, and I've never found it compelling. I mean, if you look at it, it's been quite predictable over the last decade. These models are deliberately trying to replace human cognitive abilities. In fact, they have been slowly climbing the ladder of human cognitive abilities for many years. We started with image recognition and audio recognition, and then moved on to audio generation, image generation, and then text understanding, text recognition, and now text generation. And, you know, it's kind of interesting, because even just two or three years ago, people would have said, well, AIs will never be creative. That's not achievable. Creativity will always be the preserve of humans, and judgment is somehow unique and special to what it means to be human. Or, you know, AIs will never have empathy, humans will always be needed for care work, and, you know,
emotional care is something that's special; you can never replace that connection. I mean, both of those are now self-evidently not true, and I think that was quite predictable. So I think the honest way to look at this is that these systems are only temporarily augmenting human intelligence. If you think about the trajectory over 30 years, I mean, let's not quibble over whether it's 5 years, 10 years, or 15 years. Just think about it long-term. I think we can all agree about the long term. If these exponential trajectories continue,
then they're clearly only temporarily going to turbocharge an existing human. And so we have to really think: okay, long-term, what does it mean to have systems that are this powerful, this cheap, this widely proliferated? And that's where I think the broad concept I have in the book, of containment, comes in. Because you can start to get an intuition for the massive consequences of the spread of this kind of power, and then start to think about the sorts of things we would want to do about it. Because on the face of it, like you said earlier, the incentives are absolutely overwhelming. I mean, technology has always been a machine of statecraft.
It's been used by militaries and by nation-states to serve citizens and drive us forward. And now it is the fundamental driving force of nation-states: being commercially competitive, having the best companies, having the best labor market that drives our competitive edge. So from a nation-state perspective, and from an individual scientific perspective, there's the huge drive to explore and invent and discover. And of course, from a commercial perspective, the profit incentive is phenomenal. And all of these are good things, provided they can be well managed and provided we can mitigate the downsides. And I think we have to be focused on those downsides and not be afraid to talk about them. I mean, when I bring up these topics, I definitely experience, and have over the years, this thing I describe in the book as pessimism aversion. There are people who are just constitutionally unable to have a dark conversation about how things may go wrong, and I'll get accused of not being
an optimist or something, as though that's a sin, or as though being a pessimist or an optimist is somehow a good way of framing things. To me, both are biased. I'm just observing the facts as I see them. And I think that's an important misconception and an unhelpful framing, this pessimism versus optimism, because we have to start with our best assessment of the facts, try to reject those facts if they're inaccurate in some way, and then try to collectively predict what the consequences are going to be. And I think, you know, there's another trend over the last decade or so, post-financial crisis: I feel like
people, public intellectuals and elites, and everyone in general, have gotten a bit allergic to predictions, right? We've gotten a bit scared of being wrong. And I think that's another thing that we've got to shed. So we've got to focus on trying to make some of these predictions. They may be wrong. I may have gotten this completely wrong. But it's important to lay out a case for what might happen and start taking steps towards mitigation and adaptation. Well, you invoke the concept of containment, which does a lot of work in the book. And you have this phrase, the containment problem, that you use throughout. What is the containment problem? In its most basic form, the idea of containment is that we should be able to demonstrate to
ourselves that technologies that we invent should always be accountable to humans and within our control. So it's the ability to close down or constrain or limit a new technology at any stage of its development or deployment. And that's a grand claim, but put in the most simple terms, it basically says we shouldn't allow technologies to run out of our control, right? If we can't say what destiny we want for how a technology impacts our species, then we're at the mercy of it, right? And I think the idea is, if we don't have mechanisms to shape that and restrict its capabilities, then it potentially leads us into some quite catastrophic outcomes over a 30-year period. Do you think we've lost the moment already? I mean, it seems like the digital genie is
more or less out of the bottle. I mean, what, if anything, surprised me, and I know it certainly surprised the people who are more focused on AI safety, again, people like Yudkowsky, in recent developments around these LLMs, was that we missed a moment that many of us more or less expected, or were more or less sure was coming: there would be a breakthrough at some company like DeepMind, where the people building the technology would recognize that they had finally gotten into the end zone, or close enough to it,
so that they're now in the presence of something that's fundamentally different than anything that's come before. And there'd be this question: okay, is this safe to work with? Is this safe to release into the wild? Is this safe to create an API for? So the idea was that you'd have this digital oracle in a box
that would already have been air-gapped from the internet and incapable of doing anything until we let it out. And then the question would be, have we done enough safety testing to let it out? But now it's pretty clear that everything is already more or less out, and we're building our most powerful models already in the wild, right? They're already hooked up to things, and they already have millions of people playing with them,
and there are open-source versions of the next-best model. And so is containment even a dream at this point? So it's definitely not too late. We're a long, long way away. This is really just the beginning. We have plenty of time to address this. And the more that these models and these ideas happen in the open, the more they can be scrutinized and pressure-tested and held accountable. So I think it's great that
they're happening in open source at the moment. So you like Sam Altman's approach; this is what Sam has always said, that the philosophy behind OpenAI is to do this stuff out in the open, let people play with it, and we will learn a lot as we get closer and closer to building something that we have to worry about. I think that we have to be humble about the practical reality of how these things emerge, right? So the initial framing, that it was going to be possible to invent this oracle AI that stays in a box, and we'll just probe it and poke it and test it until we can prove that it's
going to be safe, and that it'll stay in the bunker, kept hidden from everybody: I mean, this is complete nonsense, and it's attached to the superintelligence framing. It was just a completely wrong metaphor that totally ignores the history of all technologies. And actually, this is one of the core motivations for me in the book: I had time during the pandemic to really sleep and reflect and deeply think, okay, what is actually happening here on a multi-century scale? And what are the patterns of history around how inventions end up proliferating? And it's really stating the obvious. It's almost ridiculously simplistic, but it needed to be said: as soon as an idea is invented, millions of other people have approximately the same idea within just weeks, months, years, especially in our modern digitized world. And so we should expect, and as we do see,
the open-source movement to be right on the heels of the absolute frontier. And so, just one small example of that, to give an intuition: GPT-3 was launched in the summer of 2020, so three years ago, at 175 billion parameters, and models with broadly those capabilities are now regularly being trained at 2 billion parameters. And so that is a massive reduction in serving cost, which now means that people can have open-source versions of GPT-3 that have broadly the same capabilities, right, but are actually extremely cheap to serve and indeed to train. So if that trajectory continues, then we should expect that what is cutting-edge today, frontier models like ours at Inflection, and like GPT-4, GPT-3.5 even, will be open source
in the next two to three years. And so what does it mean that those capabilities are available to everybody, right? And I think that is a great thing for where we are today. But if the trajectory of exponentially increasing compute and size of models continues for another three, four, five generations, which we all expect it to, then that's a different question. We have to step back and honestly ask ourselves: what does it mean that this kind of power is going to proliferate in open source, number one? And number two, how do we hold accountable those who are developing these mega models, even if they are centralized and closed, myself included, OpenAI, DeepMind, etc.? And if you just look at the amount of compute, it's predictable and breathtaking. And I think people forget how predictable this is.
So going back to Atari DQN: we developed that model in 2013, and it used two petaflops of computation. A petaflop is a billion million operations. So imagine a billion people, each holding one million calculators, all doing a complex calculation at the same time and pressing equals: that would be one petaflop. And Atari used two petaflops over several weeks of computation. A decade later, the cutting-edge models that we develop at Inflection for Pi, our AI, use five billion times the compute that was used to play Atari DQN. So 10 billion, billion, million. It's just like...
Now you're sounding like Ali G. Exactly. That's basically 10 orders of magnitude more compute in a decade. So one order of magnitude every year: 10x every year, for 10 years, which is way more than Moore's law. Everyone's familiar with Moore's law: 70 years of doubling, doubling every 18 months or whatever. I mean, that is minuscule by comparison.
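(The arithmetic in that exchange checks out, taking the figures as stated in the conversation; a quick sketch:)

```python
# Checking the compute comparison as stated above.
import math

atari = 2e15                  # "two petaflops" = 2 x 10^15 operations
pi = atari * 5e9              # "five billion times" Atari's compute
orders = math.log10(pi / atari)              # ~9.7, i.e. ~10 orders
per_year = orders / 10                       # over a decade: ~1 order/year
moore = math.log10(2 ** (10 / 1.5))          # 18-month doublings: ~2 orders
print(round(orders, 1), round(per_year, 2), round(moore, 1))
# -> 9.7 0.97 2.0  (roughly 10x per year, vs. ~1.6x per year for Moore's law)
```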
If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org. Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content, including bonus episodes and AMAs and the conversations I've been having on the Waking Up app. The Making Sense podcast is ad-free and relies entirely on listener support, and you can subscribe now at SamHarris.org.
