The Joe Walker Podcast - Larry Summers — AGI and the Next Industrial Revolution

Episode Date: October 21, 2024

Larry Summers is a former US Treasury Secretary (1999-2001), Chief Economist at the World Bank (1991-1993), and Director of the National Economic Council under President Obama (2009-2010). He also served as President of Harvard University (2001-2006). Currently, he is the Charles W. Eliot University Professor at Harvard University, and he sits on the board of directors at OpenAI, one of the fastest-growing companies in history. Full transcript and video available at: https://josephnoelwalker.com/larry-summers-159/

Transcript
Starting point is 00:00:00 Today, it's my great honor to be speaking with Larry Summers. Larry is arguably the preeminent American economic policymaker of his generation. He was a Secretary of the Treasury, and he's currently on the board at OpenAI, among many other roles. Larry, welcome to the podcast. Good to be with you. So in this conversation, I want to focus a lot on the economic implications of AI. And if, as many serious people think, AI is likely to induce a step function change in human economic growth,
Starting point is 00:00:31 getting to chat with you in 2024 feels a little bit like an interviewer getting to speak with Adam Smith in the early decades of the Industrial Revolution. Except I feel like I'm in a much more privileged position, because I think you know a lot more about what's happening in San Francisco than Smith knew about what was going on in Manchester and Birmingham. So first question: you joined the board at OpenAI about a year ago. And that means if OpenAI succeeds in creating artificial general intelligence in the next few years, as it's attempting to do, you'll be one of nine people in the room who deem whether that has happened. And I know you've been thinking at least about the economic implications of the technology for several years, but perhaps you hadn't thought so much about the technology itself,
Starting point is 00:01:09 about deep learning until you joined the board. So I'm just generally interested, how does someone like Larry Summers go about getting up to speed on a new topic with respect to the technology itself? What kind of things have you been reading? What kinds of people have you been speaking to? What kinds of learning strategies have you been employing?
Starting point is 00:01:28 Look, I think this is a fundamentally important thing. I think that the more I study history, the more I am struck that the major inflection points in history have to do with technology. I did a calculation not long ago, and I calculated that while only 7% of the people who've ever lived are alive right now, two-thirds of the GDP that's ever been produced by human beings was produced during my lifetime. And on reasonable projections, there could be three times as much produced in the next 50 years as there has been through all of human history to this point. And so technology, what it means for human productivity, that's the largest part of what drives history. So I've been learning about other technological revolutions. I had never been caused to think appreciably about the transition thousands of years ago from hunter-gatherer society to agricultural society. I've thought about the implications of the Renaissance,
Starting point is 00:02:59 the implications of the great turn away from a Malthusian dynamic that was represented by the Industrial Revolution. So the first part of it is thinking about technology and what it means in broad ways. The second is understanding, not at the level of a research contributor to the science, but at the level of billions of parameters, which is an entirely different new world for somebody who used to think that if he estimated a regression equation with 60 coefficients, that was a really large model. So I've been watching blogs,
Starting point is 00:04:02 listening to YouTubes with tutorials, spending my time talking with people at OpenAI to try to get an understanding of the technology and what's involved in the science of the technology. At one stage when I expressed this interest and Sam Altman asked me, do you want to learn to program these things? I said, no, I'm too old for that. I want to get to the kind of understanding that you can get to a physics if you're not willing to learn the mathematics of tensors,
Starting point is 00:04:50 the kind of understanding that you can get to here short of being a person who can actually execute. And then I've tried to read literature and talk to people who are engaged in application prepared to speculate about what kind of applications are likely to be possible at some point in the future. of understanding relevant historical moments, understanding the stuff of the technology, and thinking about people who are engaged in the relevant kind of application. I suppose it's a little bit like if you were present at the moment when nuclear became possible,
Starting point is 00:06:17 you'd want to understand previous moments of staggering new destructive technology. You'd want to talk a lot with the physicists who were involved, and you'd want to talk to military strategists, doctors who had potential uses for radiation, those involved in the energy industries who might want to think about the implications of inexpensive energy not coming from fossil fuels. Of course, I think that this technology potentially has implications greater than any past technology because fire doesn't make more fire. Electricity doesn't make more electricity, but AI has the capacity to be self-improving.
Starting point is 00:07:38 On the technology itself, so maybe you're not going to learn how to code up a transformer or whatever, but do you recall some of the specific videos you've watched or things you've read that were especially helpful? since I don't remember precisely which of them were more proprietary and which of them were not. But I think there are a number that have come out of OpenAI, but they've come out of other places as well. Tutorials that have been powerfully about these models in ways that are accessible to people like me, whose initial and early trainings were in econometrics and statistical inference. And so I would mention their writings as things that are particularly relevant. And just quickly, since you joined the board at OpenAI,
Starting point is 00:09:18 roughly how many hours per week have you been spending on OpenAI-related stuff? I think it varies, but, you know, a day a week would be, uh, a, would be in the range. And some of that has been trying to come up to speed with understanding, uh, the technology. Some of that has had to do with a company that has mushroomed in scale and that, you know, has developed large revenue streams and market value probably faster than any company in history, has all sorts of governance challenges and issues. And that has been part of my concern and remit as well. If you think of all the various bottlenecks to scaling AI,
Starting point is 00:10:20 data, chip production, capital, et cetera, energy, which one strikes you as the most underrated at the moment? Well, I would not underestimate the fact that there are substantial questions around imagination and things still happen that surprise people. And so ultimately, I suspect that when the history of this is written after it's been successful, new great insights about ways to strengthen reasoning capacity, ways to use compute more efficiently,
Starting point is 00:11:13 ways to generate information that can be a basis for training. I would emphasize ideas and having more of them more quickly that can come to application is, I think, something that's very important. is likely to be on compute and on access to chips that can be used both in training and inference in these models. I think if you take a somewhat longer run view, I suspect that energy is likely to be the larger constraint. But probably sophisticated chips is the nearer term limiting factor on which I'd focus. I want to elicit one more kind of factual premise before
Starting point is 00:12:26 we move on to talking about the economic implications. So approximately what share of time did today's AI researchers spend on tasks that AI will be doing for them in five years, based on your conversations with technologists? I don't know, but if the answer were less than 25%, I'd be quite surprised, but it's very hard for me to estimate in between. And in a way, it depends on how much, how you exactly define the tasks, you know. Right. Ordering lunch is part of our day. Managing our lives is part of our day. Managing routine corporate interactions. Scheduling is part of our day.
Starting point is 00:13:34 And that stuff will obviously be among the first stuff to for their where there will be where there will be substitution. But even in tasks that are closely defined as research, I think the capacity of AI to program and to create software is likely to be a very substantial augmenter of what software engineers do. So the range of opinions is 25% to 75% of AI research. I'm not sure whether that's the range. The range of opinions as to the best guess might be smaller than that. But I think the range of uncertainty about what the reality is is probably very, very wide at this point, but with a pretty high floor. Got it. So maybe about 50% of AI research itself might be automated in five years? I don't want to, I want to preserve the sense of very great uncertainty.
Starting point is 00:15:01 Fair enough. So let's talk about the economic implications of AI. First, a somewhat tangential question. If we take the last 150 years of US GDP per capita growth, real GDP per capita growth, it's grown at about 2% per year. It's been remarkably steady. And the biggest interruption to that was obviously the Great Depression, where GDP plunged about 20% in four years, but then it just quickly resumes its march of about 2% per year. What do you think is the best explanation for the remarkable steadiness of US growth? Well, I think it's been a little more complicated than that, because I think you have to start by thinking about growth as the sum of workforce growth and productivity growth. And there's been a little bit of fluctuation. There's been fluctuation in both of those things.
Starting point is 00:15:57 When I first started studying economics as a kid in the 1960s, people thought that the potential GDP growth of the United States was approaching 4% because they thought at that time that population and labor force growth would run at about 2%. And they thought that productivity growth would run at 2%. Today, we have rather more modest conceptions because labor force growth is likely to be much slower, given that women on average are now having less than two children, that immigration is somewhat limited, and that the very large wave of increased labor force growth that came about as it became presumptive for young and middle-aged women to be in the labor force, that was a one-time event. So labor force growth slower than it used to be.
Starting point is 00:17:08 Productivity growth was much faster from 1945 to 1973 than it was subsequently. There was a very good decade from the early mid-90s to the early mid-decade of the noughts. But other than that, productivity growth has been running distinctly south of 1%, at least as we measure it. So I'm not sure there's any God-given law that has explained why it has been relatively stable, because the things underneath it have been fluctuating a fair amount. But I suspect that if one was looking to theorize about it, it would be that for societies at the cutting edge, like the United States, there's only so much room for the creation and application of new technology
Starting point is 00:18:25 and that labor force growth and capital accumulation associated with labor force growth have an inherent stability to them. Okay, so to make sure I understand, for frontier economies, it's much more likely that there's a kind of endogenous story that's explaining why growth's been so steady relating to some population growth maybe counterbalancing ideas getting harder to find or something like that yeah i don't want to overdo i think your statement respectfully joe
Starting point is 00:19:01 probably overstated just how much stability there has been from period to period and from decade to decade. And of course, if you look at non-frontier economies, they often, or not usually, but in a number of highly prominent cases, the Asian countries, most of which are concentrated in Asia, have had periods of extremely rapid growth that came in part from integrating into the global economy and developing technological capacity as they did that. Right. So if we take the long view and look at gross world product over many thousands of years, growth rates have been increasing over time. How likely is it that AI initiates another, a new growth regime with average growth that's say 10x faster than today? I think it is.
Starting point is 00:20:10 I think the kind of growth that followed the Industrial Revolution was probably unimaginable to people before the Industrial Revolution. And I think even the kind of growth that followed the Renaissance that can perhaps be dated to the 1500s probably seemed implausible to people beforehand. So I hesitate to make definitive statements. My instinct is that substantial acceleration is possible. I find 10x to be, and growth at a level where productivity doubles every four years to be hard to imagine. There are certain things that seem to me to have some limits on how much they can be accelerated. It only takes so long. It takes so long to build a building. It takes so long to make a plan. But the idea of a qualitative acceleration in the rate of progress has to be regarded, it seems to me, as something that's very possible. Some people think that AI might not only deliver a regime of much faster economic growth,
Starting point is 00:21:44 but might actually instigate an economic singularity where growth rates are increasing every year. And the kind of mechanism there would be we sort of automate A in our production function. And so we have this feedback loop between output and R&D being increasingly automated. What do you think is the best economic argument for believing that ever-increasing growth rates
Starting point is 00:22:09 won't happen with AI? Is it some kind of like Baumol's cost disease argument where there are still going to be some bottlenecks in R&D that prevent us from getting those ever-increasing growth rates? I would put it slightly differently. I think I would put it that in a sense, sectors where there's activities where, and this is in a way related to your Baumol comment, activities where there is sufficiently rapid growth almost always see very rapidly falling prices. And unless there's highly elastic demand for them, that means they become a smaller and smaller share of the total economy. So we saw super rapid growth in agriculture, but because people only wanted so much food, the consequence of that was that it became a declining share of the economy.
Starting point is 00:23:10 And so even if it had fast or accelerating growth, that, where the share of GDP that is manufacturing is declining. But that's not a consequence of manufacturing's failure. It's a consequence of manufacturing's success. A classic example was provided by the Yale economist Bill Nordhaus with respect to illumination. The illumination sector has made vast progress, 8%, 10% a year for many decades. but the consequence of that has been that on the one hand, there's night little league games played all the time in a way that was not the case when I was a kid. On the other hand, candle making was a significant sector of the economy in the 19th century. And nobody thinks of the illumination sector as being
Starting point is 00:24:28 an important sector of the economy. So I think it's almost inevitable that whatever the residuum of activities that inherently involve the passage of time and inherently involve human interaction, it will always be the case that 20 minutes of intimacy between two individuals takes 20 minutes. And so that type of activity will inevitably become a larger and larger share by value of the economy. And then when the productivity growth of the overall economy is a weighted average of the growth in individual sectors, the sectors where there's the most rapid growth will come over time to get less and less weight. Right. So I want to talk about how AI might be applied to enable economic policymakers. And I want to speak first about developing countries. So assume that we do get AGI.
Starting point is 00:25:50 I wonder how much that might be able to help economic policymakers in developing countries. So maybe you could interpret the success of the Asian economies as, you know, where they were getting consistent 7.5% GDP growth per year. Maybe you could interpret that as existence proof that much better economic policymaking can translate into massive increases in GDP. But on the other hand, there are these constraints like social and political constraints, which might be more important. So how much do you think AI would be able to enable greater economic growth in developing countries through helping policymakers make better decisions? Well, I think the ability to import knowledge and apply that knowledge and expertise pervasively is something that is very important apart from economic policy. It was really hard for the United States to learn a lot of what was known in Britain about how to make a successful textile factory in the early 19th century. And with AI, what's known anywhere is likely to be known everywhere
Starting point is 00:27:13 to a much greater extent than is true today. And that more rapid transmission of knowledge is, I think, likely to be the most important positive in terms of accelerating development. Certainly, there are hugely consequential and difficult choices that developing country policymakers make, whether it's managing monetary policy or probably even more consequentially strategic sectoral policies about which sectors to promote. and a more accurate and full distillation of past human experience and extrapolation to a new case, I think it's likely to contribute to a wiser economic policy, which permits more rapid growth. Moving to the US, take the Fed, for example, how much better could monetary policy
Starting point is 00:28:26 be if the Fed had AGI? So could we massively reduce the incidence of financial and macro economic instability? Or are those things subject to kind of chaotic tipping points that just aren't really amenable to intelligence? I think it's a very important question. The weather and the equations that govern weather are susceptible to chaotic dynamics, and that places sort of inherent limits on weather forecasting. Nonetheless, we're able each decade to go one day longer and have the same quality forecast that we had in the previous decade. So the five-day forecast in this decade is like the four-day forecast was a decade ago, or the three-day forecast was two decades ago. So I suspect we are far short of some inherent limit with respect to economic forecasting. I'm not certain because there's a fundamental difference
Starting point is 00:29:46 between economic forecasting and weather forecasting, which is the weather forecast doesn't affect the weather, but the economic forecast does affect the economy. But my guess is that we will be able to forecast with more accuracy, which means we will be able to stabilize with more accuracy. And that should lead to better policies. And it may be that we will find that to take a different sort of natural world problem, that AI will improve the field of seismology, earthquake prediction, which involves predicting rare convulsive events.
Starting point is 00:30:41 And it may be that it will aid in predicting financial crashes and evaluating bubbles, and all of that would obviously also progress to come over time. I would caution as a very general rule, Joe, that things take longer to happen than you think they will. And then they happen faster than you thought they could. And so I would hesitate to assume that these benefits are going to be available to us immediately, just as I would hesitate to think that we're not going to make progress from where we are now. There'll probably be a J curve for AI. So retrospectively, how much would having AGI have helped economic policymakers
Starting point is 00:31:46 in the Obama administration during the financial crisis and Great Recession? Because if I think about that time, what was scarce wasn't so much intelligence, but what I would describe as constraints of human social organization. So two examples. Firstly, cram down legislation wasn't passed, not because people didn't know it would be helpful, but because the Obama administration couldn't muster the requisite 60 votes in the Senate. Or another example, policies to convert debt to equity weren't implemented, not because economists didn't realize that that wouldn't have helped, but because the administration lacked the sort of state capacity to negotiate and track those contracts over time. So how much would AGI have helped you during the financial
Starting point is 00:32:35 crisis and great recession or the constraints things that again, weren't really amenable to So, I'm not sure that in either of these cases it's quite as simple as you suggest. Depending on how crammed down legislation was structured, it could have set off a wave of bankruptcy-type events that would have had moral hazard consequences and exacerbated the seriousness of the financial crisis. And so that kind of uncertainty was one of the things that held back and slowed the movement of that legislation, and similarly with respect to various other schemes. But in general, it is easier to reach solutions where the epistemology is clear. And I would think that better knowledge of all the aspects of the financial crisis and better and more shared understandings of the causal mechanisms, which I think comes from promote better research would likely have been to lead to better solutions.
Starting point is 00:34:11 On the other hand, you know, I think in retrospect, most people feel that the fiscal stimulus provided by the Obama administration was too small. In my judgment, and I think the people who were closest to the events, that did not reflect a misguided analytical judgment by the Obama administration. It reflected the political constraints of working to get rapid progress through Congress. Now, if there had been better economic science and so it had been clear what the right size of stimulus was, and the argument was less arbitrary,
Starting point is 00:34:58 people would have been more likely to have been prepared to politically support the right thing. So I think there is a contribution. You know, I like to say that it's no accident, Joe, that there are quack cures for the common cold and some forms of cancer, but no quack cures for broken arms or strep throat. And that's because when there's clear and definitive knowledge and understanding, then people rally around behind that. But when there isn't a expert scientific solution that works, that's when you get more debate, more volatility of approach, perhaps more flaky solutions. And I think better artificial intelligence over time is likely to drive greater understanding, and that will contribute to better outcomes. Interesting. Some final questions on the geopolitical implications of AI and governance.
Starting point is 00:36:14 We don't have to spend too much time on this, but you drew the analogy earlier to the technology of nuclear energy and atomic weapons. I had an interview with Richard Rhodes last year and he mentioned that the Manhattan Project was infiltrated by Russian spies almost immediately. Stalin had about 20 to 30 people in the Manhattan Project over the course of the war. Klaus Fuchs was literally giving the blueprints for the implosion device to Stalin indirectly
Starting point is 00:36:39 and he was one of the scientists on the project. There's no way the CCP isn't already infiltrating major AI labs in the US and UK and stealing their IP, right? Look, I think that this is going to be an important area for us, for everybody to think about going forward. And I think the, and thinking about the security and thinking about the importance of American leadership is, I think, a very large issue. On the one hand, a certain amount of open flow of information is what drives our progress
Starting point is 00:37:21 and is what keeps us ahead. On the other hand, there is a tension between the preservation of secrecy and the open flow of information. What's pretty hard to judge is what kinds of things you can learn by spying on and what kinds of things you can't. And, you know, I use the example of the difficulties that the Americans had emulating British textile technology in the 1800s. It's not that they couldn't get blueprints of the British factories. It's that a blueprint wasn't really enough to figure out how to make a factory work effectively. And there are all sorts of things like that. So what the right way to manage the security aspects is, after all,
Starting point is 00:38:33 openness and the sense of our advantage in developing new technologies relative to what the more closed Soviet Union had that on most readings of history contributed to our winning the Cold War in the 1980s. So I would recognize the overwhelming importance of security issues, but what kinds of leaks we should do, how much to control, I think are very, very complex questions. And not all proposals that are directed at restricting the flow of information are necessarily desirable because they may so chill our own capacity to make progress. So I have many follow-up questions on that, but in the interest of time, I'll jump to my next question. Say we wanted to create a Larry Summers checklist of criteria or thresholds for when artificial intelligence should be nationalized, like
Starting point is 00:39:45 should become a government project, what would that checklist contain? You know, I'm not certain that I quite accept the premise of the question that at some point it should be nationalized. I mean, there have been immense implications of powerful computing. If you think about it, powerful computing over the 50 years, 60 years since the 1960s, has transformed everything. There's nothing military we do that doesn't depend upon computing. An automobile is a very complex computing device with hundreds and thousands of chips.
Starting point is 00:40:44 Computing is central to national security, but it never would have been a good idea to nationalize computing. So should there be some things that are nationalized Should the government have a capacity to produce in certain areas? Yes. But if you think about our history, if you think about how we put man on the moon, we didn control over how that project was going to take place. So I am open to the idea that there are certain things that government should nationalize, but I think framing the principal way that governments take responsibility or nurture the development for national security of technology is to nationalize them is, I think, an ahistoric view. Which parts of the production line for AI are the things that would be the biggest
Starting point is 00:42:00 candidates for nationalization? I think, I don't feel like I have a good sense of that. Again, I would come back to computing, where it doesn't feel like we've nationalized much of anything, but we've managed it in the fullness of it all really, very, very, really, very, very well. So I don't want to, I don't want to rule out that there would be things that should be nationalized at all, but I don't want to lean into that as a principal policy response either. Of all the US presidents, you've worked most closely with Bill Clinton. You probably have the best model of him. As we get closer, potentially, to artificial general intelligence, what's your model of, say Bill Clinton was president, how would he be thinking about the governance aspects of that problem? Well, I've worked very closely with both Bill Clinton and Barack Obama, and I think they both were enormously thoughtful,
Starting point is 00:43:11 and I think they both recognized that complicated problems required evolutionary evolutionary rather than revolutionary solutions that they needed to be approached through multiple channels. And then in some ways, solutions needed to, seeds needed to be planted, and then one needed to see what the best kind of solution was. But I think government needs to be very familiar with what is going on, have close relationships with the major actors. But I think you need to be very careful that establishing one particular structure to channel things in a particular direction, if that turns out not to have been the right direction,
Starting point is 00:44:22 can be very costly. So you want a portfolio approach. Penultimate question, if OpenAI changes its structure from a partnership between a nonprofit and a capped for-profit to a public benefit corporation, have all the incentives to responsible stewardship that a not-for-profit can. And that indeed, the history of not-for-profit hospitals and a variety of other not-for-profit structures suggests that they can be very much dominated by the commercial incentives of those who act within them. So I don't think of the possibility of moving to a corporation as reflecting any desire to move away from public interest type considerations, rather a way to reflect existing not-for-profit law, which reflect also the need to have vehicles that can be
Starting point is 00:46:13 credible capital raisers to pursue a public interest mission. Final question. So you operate in two relevant worlds. One is the world of technologists, which you have contact with through the board. The other is the world of academic economists who, on the whole, don't seem overly convinced of AI's extraordinary economic potential. For example, you know, Daron Acemoglu, who won the Nobel Prize a couple of days ago, has this paper where he predicts that AI will deliver productivity increases of only about 0.6% over 10 years.
Starting point is 00:46:46 How do you explain this discrepancy? And what does the economics profession seem to be missing about AI? I have huge respect for Daron, but I don't find his analysis convincing on this. He leaves out entirely in that analysis the possibility that we will have more rapid scientific progress, more rapid social scientific progress, or better decision-making because of artificial intelligence. So his analysis seems to me to have the character of the analysis that was done by IBM that concluded that the worldwide market for computers would be five mainframes, or the analysis that was done by AT&T at one stage that couldn't imagine a demand for as many as a million cell phones globally.
Starting point is 00:47:47 Larry, it's been a great honor speaking with you. I know you now have to go to another call, but thank you so much for being so generous with your time. Thank you. Done. I hope that was helpful. Hey, everyone. Thanks for listening to that episode. One important message before you go, I'm opening the podcast up to more sponsors. Currently, I'm not capturing enough of the value I'm creating with the show. For example, last episode was my biggest ever, but it wasn't
Starting point is 00:48:14 sponsored. Now I'm lucky to have built an incredibly smart audience and I want to work with great organizations who'd benefit from reaching this audience. If you'd like to sponsor the show, you can email me. My email address is joe at jnwpod.com. That's joe, J-O-E, at jnwpod.com. You can also contact me via my website, jnwpod.com. Thanks, and until next time, ciao. you
