Your Undivided Attention - AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

Episode Date: May 9, 2024

Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that's not what happened. Can we do better this time around?

RECOMMENDED MEDIA
Power and Progress by Daron Acemoglu and Simon Johnson: Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world
Can We Have Pro-Worker AI?: Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path
Rethinking Capitalism: In Conversation with Daron Acemoglu: The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

RECOMMENDED YUA EPISODES
The Three Rules of Humane Tech
The Tech We Need for 21st Century Democracy
Can We Govern AI?
An Alternative to Silicon Valley Unicorns

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Transcript
Starting point is 00:00:00 Hey everyone, it's Tristan. Welcome to Your Undivided Attention. When we think about the health of societies, whether we acknowledge it or not, a major part of what we're talking about is whether the economy is working or not. Because when the system is going well, we have confidence that working harder will lead to more rewards, more opportunities, that we have a stable and meaningful place in society. We don't have to worry about whether our bills are going to get paid or whether we can afford to send our kids to school or have enough money for retirement. People become more cooperative, violence falls. But when the system isn't going well, people start to feel pessimistic, resentful, and things start to feel zero-sum. The more you have, the less I have. In a lot of 2024 election polling,
Starting point is 00:00:47 a lot of voters are saying that what matters to them most is which president would translate into better economic prospects for me and my family. So how is the economy going? How do we even know? And how would we think about this? Do we think about the growth of societies and the amount of stuff we're producing? Do we think about the unemployment rate and who's working and who's not? Do we think about inequality and distribution and who's getting what? And that's Daniel Barcay, who's our new executive director at the Center for Humane Technology, and has a background in some of these questions.
Starting point is 00:01:17 And we're here to interview Dorone Asimoglu, who's a professor of economics at MIT. His most recent book with Simon Johnson is called Power and Progress, Our Thousand Year's Struggle Over Technology and Prosperity. Because if you add AI on top of this equation that Daniel's laying out, will adding AI help fix the economy and create a sense of renewed growth and opportunity and heal some of our political problems? Or will AI further exacerbate inequality, concentrate wealth, and make the world feel even more precarious?
Starting point is 00:01:48 So, Daron, welcome to the podcast. Thank you very much, Tristan and Daniel. I'm really happy to be here with you. So looking at this historically, Daron, in your book you wrote about how in early agriculture, we got better plows, smarter crop rotation, more uses of horses and improved mills, but that created almost no benefit for peasants. You talk about how textile factories of the early British Industrial Revolution generated wealth for a few, but they didn't raise worker incomes for almost 100 years. So how do we tell the difference between when a new technology and a new efficiency gain
Starting point is 00:02:18 is going to create those broad-based benefits or not? Well, thank you, Tristan. I think that's critical. And the reason why we actually take this historical perspective in the book, as you've very nicely summarized, is because there is this narrative that is common both in public intellectual circles and in the tech circles that says: let technology rip.
Starting point is 00:02:40 And at the end, there are automatic mechanisms that are going to make sure that we all benefit. And then the people who advocate that say, look, in the past we had all these disruptive technologies, and look, we're all so much better off. And that's just a misreading of history. if you go back you see yes of course we are enormously better off than people who lived in the stone age and we are much much healthier much more comfortable much more prosperous and people who lived in 1700
Starting point is 00:03:09 but it wasn't a linear steady process it was much more contested with very significant ups and downs from which we can learn a lot and you pointed out to two examples both of them are very interesting the agricultural economy of Europe, you know, up to 1600s, 1700s, is a perfect example of major productivity improvements that didn't lead to benefits for workers, the farm workers. Why not? Because that was based on a coercive system, and if you were a servile laborer, you are not going to get those benefits, not a big surprise. The British Industrial Revolution is even more important because that's the one that many of these debates actually invoke. And of course, today's comforts and improvements are unthinkable without that process which put industrial knowledge, scientific knowledge, into the use of humans.
Starting point is 00:04:04 But the first 100 years or so of the British Industrial Revolution were dark times for the working people. Real incomes did not increase. Their working hours intensified, you know, probably increased about 10 to 20%. So real hourly earnings probably declined for most workers. Workers in the most dynamic sector,
Starting point is 00:04:25 such as those in textiles, during some decades experienced 50% declines in their real earnings. And working conditions and health significantly worsened for workers. Why did that happen? Well, because, and you'll see the parallels to AI, new technologies were used for automating and sidelining workers,
Starting point is 00:04:46 and they were used for monitoring and imposing more discipline on workers. And how did we get better outcomes, not through any type of automatic process. That's such an interesting point, because so much of the AI discussion is around productivity and wealth creation and how it's going to make things better. But there's so many examples of where increased productivity from technology just made things worse. Yes. If you look at what happens in the United Kingdom, in the second half of the 19th century, it's as revolutionary as you can imagine. The political system completely changes from a system in which less than,
Starting point is 00:05:21 10% of the male adult population had the vote to one in which first all men and then all adults had the vote. Trade unions had been banned and very heavily prosecuted while employers could combine together and do all sorts of things against workers. Unions were recognized and started negotiating on behalf of their members. That was very important for banning child labor, working rights, safer work conditions, higher wages. But also the direction of technology changed. Using technology in more pro-worker ways was really at the center of what happened in Britain. And then later in the United States, all of those kinds of benefits that we see in terms of workers' lives getting better, slowly, never a steady process.
Starting point is 00:06:05 It's all very intimately linked to how we use technology and who has power over technology. When I hear this, I can hear a tension between two different kinds of theories about how the world gets better, how it improves, right? On one hand, people building things that help people become more productive, get more of what they want, the act of discovery, right, the learning rate of society. Increasing the learning rate of society means that you make the world more abundant. On the other hand, the story you just told is about making sure that those technologies that we build are spread across society in a way that we all benefit from. And there is a tension between these two views, and I wonder if you can...
Starting point is 00:06:50 Yeah, absolutely. There is. I don't think there is, but we have to be careful in how we place them together so that we avoid that seeming tension. We need both of them. We need discovery. We need knowledge.
Starting point is 00:07:05 We need scientific and practical knowledge to expand. But we also need to use that in the right way. And the tension comes from the narrative that whenever you criticize the unfettered entrepreneurial process, you are saying, let's stop the discovery. No, I don't think anybody today is saying, let's stop the discovery. I don't think even the most trenchant critics of the current AI are saying we should never do artificial intelligence.
Starting point is 00:07:37 I think the criticism are directed at, A, how we are using AI, be how we are conceptualizing AI and what we want from it, and see who controls AI. So I think we can have archaicenedia too. We can have science, we can have practical knowledge, we can have human reach expand, but we can do that in a more pro-human way. So in other words, let me put you an analogy,
Starting point is 00:08:02 perhaps not the right analogy, but I don't think what we're talking about is blocking a current. It's about redirecting the current in a more beneficial direction, away from creating floods, but towards creating greater fertility, greater nourishment for us. Can you tell us what changed starting around the 70s that made that sort of overall productivity,
Starting point is 00:08:25 everything's getting better, everything's increasing? What changed? That's a great question, Daniel. I wish I had a crisp answer. I think there are a number of things that happened at the same time that made us partly squander the great promise of digital technologies and direct progress of these technologies
Starting point is 00:08:48 in more pro-capital, anti-worker ways. So in the 1970s, we had a reaction against the regulatory state that was part and parcel of the post-war compact, where productivity would increase, businesses would invest and flourish, but at the same time, they would also be regulated and they would be compelled via a
Starting point is 00:09:12 variety of channels to share those gains with their workers. And some business people didn't like either part of this equation. Two related sorts of ideas became more and more popular. One is you want to lift the regulations because regulations are creating inefficiencies. So let the businesses do what they want. And second, you don't want managers to share the gains too much with their workers. You should just look after the shareholders' interests first and foremost. But at the same time, unions also started getting weaker. First, for structural reasons: industry, where blue-collar workers worked, started reducing its employment, and that was the stronghold of unions. So one of the big early acts that Ronald Reagan did was the firing of the professional air
Starting point is 00:10:02 traffic controllers who were striking, and that was something that many businesses emulated and took a very harsh line against their workers who were making demands. It also then conditioned how we would use digital technologies. You would know the history even better than I would, but in the 1950s, 60s, 70s, there were all these people who thought digital tools would be democratizing tools. They would be pro-worker tools. But once businesses became stronger and prioritized cost-cutting,
Starting point is 00:10:34 that created a sort of path of least resistance for monetizing digital tools, which was: let's use them for automation. Let's use them in large companies to keep labor costs down. And, of course, that was music to the ears of many managers. So, you know, productivity has these two legs. We need to reduce labor costs, but at the same time, we need to create new things where labor can be very productively deployed. So if you don't do the second leg, you're just on the one leg.
Starting point is 00:11:04 It's not going to work very well. Yeah, that reminds me of the economist John Maynard Keynes and his term technological unemployment, which he coined in the 1930s, and you have a version of him in your book defining this as unemployment where our economizing on the use of labor outruns the pace with which we can find
Starting point is 00:11:21 new uses of labor. And it's kind of a crisis of time in a certain sense. You can automate away certain things so long as there are new things to move to, but if there's not as many new things to move to, you're going to have some kind of crisis. And that's where the precarity comes in. 100%.
Starting point is 00:11:35 That's, I think, the story of our age, especially for low-education workers. You know, we've been through this before. But when we went through this, for example, in the first several decades of the 20th century, when 40% of the labor force moved out of agriculture in the course of about 40 years, we created a tremendous number of new jobs in industry and services for workers of all skill levels. But imagine we had just mechanized agriculture and we didn't do anything in industry and services. That would not be a very happy world, for either distributional reasons or productivity reasons.
Starting point is 00:12:11 So productivity is clearly important, but it sometimes is kind of used as an argument ender in itself. Like if something is really good for productivity, then it must be an inherently good thing, and it's silly to argue against it. But is it really an effective measure of how people's lives will actually improve? And is productivity experienced equally across sectors, across society? Well, I mean, I think that productivity is important for that, but it's not central, because the question that you've posed,
Starting point is 00:13:00 economic and social power, and the rest of society is going to fall behind. And if that's the central question, productivity is important, but it's not dispositive. So you can end up with a two-tiered society, with five, percent productivity growth and three percent productivity growth and two percent productivity growth. And what we have experienced in the United States over the last 40 years is a huge leap towards that two-tiered society. Inequality has skyrocketed, especially people at the very top have benefited, either
Starting point is 00:13:28 as capital income owners or entrepreneurs or very skilled workers. And a lot of other Americans have not. And the question is, is AI going to reverse that? Some people say it's going to reverse that They don't deny that Or AI is going to continue or even Accelerate that trend So I think that's central
Starting point is 00:13:48 Because even if we get 10% productivity growth Huge number by the way We're never going to get there That's a different conversation We can talk about why not But even if we get 10% productivity growth If the price of that is a truly two-tier society
Starting point is 00:14:01 I would be very unhappy Well and I really want to tell the story of the internet Because as someone who grew up In that time period where the internet was coming on And I became a technologist, largely because of the dogma of humanity being unleashed, right? Being able to use these tools, the bicycles of the mind, to become massively more productive, to be able to work from anywhere. And to be sure, you know, tasks that used to take us an hour, now take me five minutes, buying a plane flight, getting a cab.
Starting point is 00:14:29 And so in some sense, that kind of productivity gain seems massive. And it seems like, of course, our society should have gotten orders of magnitude better. And yet, when we look at the story of the Internet over the last decade or two, the productivity gains that we hoped we would get out of the Internet seem to be not reflected in our data in our society. Can we just talk a little bit about the hopes about the Internet in the last 10 or 20 years versus how this shows up for an economist?
Starting point is 00:14:54 Well, look, I'm a huge fan of the Internet as well, and we have truly revolutionized many things with the Internet. And look, I would be much less worried if I thought AI was going to replicate what we got with the Internet. But what I really lose sleepover is that we're going to do much worse with AI than the Internet for a variety of reasons that we can get into. But, you know, you put your finger on it. I think the reason why we did well with the Internet, as well as we later screwed up a few things,
Starting point is 00:15:29 but we did well with the Internet is, A, Internet is an amazing informational tool. and B, it enables us to generate a lot of new tasks and new products. I think those are the key aspects of the Internet that are on the positive side of the ledger. Then we've of course used the Internet for misinformation, for manipulation, for malicious attacks, and misleading people, and those are on the negative side. But I think the manipulation room is much greater with AI,
Starting point is 00:16:03 and I think the hype that AI is going to just solve everything, just sit back, is also going to make us not get quite the informational benefits from AI that we got with the Internet. One thing I want to just add before we go on is oftentimes people say, is the Internet been a good force or a bad force? And with our work, you know, with the social media issues that we focused on, we want to make sure we're distinguishing between actually a sub-economy inside the total economy, which is the engagement economy.
Starting point is 00:16:31 It's the psychological influence economy. it's the race to the bottom of the brainstem, and we can differentiate the engagement economy from the internet and say the internet that includes just ordering a Uber or booking a flight is very different than the internet in which a trillion dollar market cap company
Starting point is 00:16:46 has a supercomputer pointed at your brain to addict you, which is kind of an anti-product... Which is also an anti-productivity a system, right? Because an engagement economy distracts people, addicts people, doomskrolls people, makes them lonely, makes them spend hours not sleeping and it actually degrades global productivity in this invisible way.
Starting point is 00:17:04 And I think that's one of the things that's confronted economists is like we got some productivity from the Internet, but then maybe that got eaten away by the last, let's say, 15 years of this engagement economy as it started to grow stronger. I couldn't agree more. And the issue, of course, is where does the Internet stop and the algorithmic manipulation begin? Is that part of the same thing?
Starting point is 00:17:26 Or can we draw a line and say that was the good Internet? But to me, you hit the nail on the head. Any technology, of course, can be good or bad. It really is going to depend on how you use it. But certain technologies have more room for misuse and certain social environments exacerbate those problems. I see a lot of these issues with AI because of algorithmic decision making because that has the potential to intensify that engagement problem, the mental health problem, the misinformation problem, the personal data being exploited problem. So all of those, again, are the tail end of the internet or early algorithms that are going to perhaps develop in accelerated ways now. So maybe this is the right time to reintroduce artificial intelligence.
Starting point is 00:18:10 The reason I wanted us to get here is one of the solutions to the paradox of the internet productivity issue, why it's improved so many things in our world. And yet the productivity numbers don't show a skyrocketing worker productivity is that, as Tristan said, we have these massive gains, but we also have these massive drags on our society at the same time. 100%. And as we look at AI, we ask the question, is it going to be the same? I mean, there's obviously questions about who gets to do the work
Starting point is 00:18:37 and who gets the benefit. But even before we get there, as we think about how AI might make us more productive and unleash human society, what do you see as the kinds of the games we might see as well as the new kinds of drags we might see on society? Well, I think any technology, especially a broad-based or platform technology,
Starting point is 00:18:56 such as AI, is going to do many things at the same time. So it's a question of balance. One thing that many technologies do, and the Internet did to some extent as well, but it was to a minimal extent, electricity did, you know, computers did. They're going to automate some tasks. Nothing wrong with that. We're all glad that we're not, you know, shoveling heavy dirt 12 hours a day. Great.
Starting point is 00:19:18 It's better that, you know, automated machinery cranes do that job. Now, the issue is that if you approach, prioritize automation and don't do other things with technology, then you quickly run into, okay, fine, we've automated a bunch of things. The next bunch we're going to automate is not going to be so productive because, you know, humans were pretty good at what they were doing. And moreover, automation is going to create two layers of very major inequalities that look like they were very, very important from the 1980s onwards.
Starting point is 00:19:52 First of all, between capital and labor, automation helps capital, doesn't help labor. Second, between workers who experience the automation by being displaced from the tasks that they were performing and those that get to command the machines and design the new jobs and design the monitoring and run the companies. So both of these dimensions of inequality have skyrocketed since 1980s. Who are the people who have not done well in the United States? It's the people who used to be blue-color workers because we've introduced a lot of digital machinery that has automated welding and painting and assembly job. It's the robots, it's some of the office jobs.
Starting point is 00:20:34 So what else can we do with new technology? We can create more humane, more human jobs, jobs that have higher quality that use problem-solving and decision-making skills of workers and create new tasks, new capabilities for workers. And that's why I was saying Internet was great early on because it did exactly that, by providing better information, by providing platforms for bringing people together and getting them to do new things.
Starting point is 00:21:00 We can do a lot more of that with AI, and my concern is that that's not where we're going. So for those people who are saying that this is all going to be fine, and this is yet another sort of moral panic or something like that, and just wait, AI will just distribute all these benefits. I think it's important for us to say, look, there's a lot of people out there who want to believe, and it would be really important to know,
Starting point is 00:21:22 is this going to lead to abundance? For those who believe that, how would you steal man their argument? I think it comes down to three things. One is history. That viewpoint is based on a reading of history. And I just gave you a reading of history where I said there was no automatic correction mechanism
Starting point is 00:21:41 and things went really badly for a while. They may either dispute that or they may say, well, you're looking too short. It's not 100 years or it's not 80 years. You should look at 200 years. and look, things have turned out well. And why would they say that? They would say, in terms of mechanisms,
Starting point is 00:21:59 that there are two sorts of adjustments that would take place. One adjustment would be through productivity, and that's why Daniel was right in starting there, if we get huge productivity gains, and labor remains at least relevant for some tasks, it is possible that labor will also benefit and way labor of different types would benefit. And third, they may be hopeful that we have institutional adaptability, meaning that even if technology runs ahead of our norms, institutions, and our current social equilibrium, we will somehow adjust to it and make sure that we undertake the adjustments from which other people will benefit.
Starting point is 00:22:41 So it is these sort of optimistic read of history, optimistic read of how much productivity gains we will get and how those will be distributed, and optimistic read of how we will be distributed, an optimistic read of how we can smooth out the rough edges via institutional and norm-based adjustments. Well, I have to admit, I think I have more sympathy for this view, so maybe it would be useful. Okay, good, let's debate it. Yeah. So if we paint the picture a little bit,
Starting point is 00:23:04 it's hard for me to believe that having more or less a full-time assistant in my pocket to help me make things that I want to do real in the world, and having that for everyone on Earth, oh my god i haven't seen my friend in two months make sure that we always meet up oh like help me figure out what the what the most cost effective way is for me to repair my sink help me you know these tasks which as you know as as someone who owns a home i spend a lot of my time repairing things and figuring out the best way to repair things down to as an employee i spent a lot of my
Starting point is 00:23:36 time sorting through my my email inbox if this technology is at the verge of being able to help turn these tasks from taking hours and hours of our time to taking minutes or seconds of our time. On one hand, it's hard for me to believe that this wouldn't allow for a new age of abundance. Now, the distribution problem is real and making sure that that doesn't just accrue to capital owners is absolutely real. But the sympathy I have with the view is aren't these two separate problems? How do we improve the productivity is one problem? And how do we make sure that it's not captured by centralized interest seems like a separate problem. Well, I think you're raising some excellent points, Daniel, but let me take up each one of them.
Starting point is 00:24:20 So the first one is, yes, I agree with you, if we use these tools in the right way to help decision-making, for all kinds of workers, there are potential productivity benefits. How large they are going to be? I think they are substantial, but not revolutionary, but we can get into those debates. And part of it is because many of the things that AI is going to help you with, you are probably doing them anyway. So it's only certain types of tasks where your productivity can be boosted. Like, for example, if I want to repair the garage door, that I can't do right now.
Starting point is 00:24:55 So AI, if it gives me enough instructions to be able to do that, that would increase my productivity. But then again, what would I, today, if my garage door is broken, I get a repair person. So relative to the repair person, you know, how much productivity? productivity gain. So we have to do all of these calculations to know what the magnitude of that productivity gain is. Second, who is getting that productivity gain? So all of those AI assistant ideas are very manager, very highly educated worker focused. Are the immigrant workers working in farms going to get the productivity boost because of the assistant? Are blue color manual workers going to get that? Are those in Amazon factories or warehouses going to get that? So we have
Starting point is 00:25:38 worry about that. So in particular, taking the welfare state and the social safety net as given, if you want shared prosperity, it's not enough to say let productivity on the whole or in the aggregate increase. We have to also make sure that productivity and the capabilities of workers form very different backgrounds and very different skill levels increases as well. So it's very important to make that AI assistant or co-pilot, whatever you're going to call it, be available not just to Daniel, but also to electricians, plumbers, carpenters, and blue-color workers. So that's a whole different thing. And then finally, we can go the UBI way.
Starting point is 00:26:19 Well, let's not worry about any of that. We'll make our institutions change such that we'll distribute these gains. Just a note here. UBI is universal basic income, and there are several different versions of this idea, but the one promoted by technologists like Sam Altman is that AI will generate such incredible wealth, that we could redistribute some of that money, and that this would create a standardized equal income for everybody, or just for workers who've been displaced by AI. And I'm very skeptical that we would do that, and I'm very skeptical that that would be
Starting point is 00:26:50 enough, and I'm very skeptical that that would really solve the problem of a two-tiered society in which some people have high status and get all the rewards, all the just desserts, and others are, you know, takers of crumbs. So there are multiple layers that even a very capable AI assistant, even if it turned out to be a reality, wouldn't solve. There's some very deep questions here. In your book and in your work, you know, you really lay out the distinction between machine intelligence and machine usefulness. And I really like this distinction because usefulness says we're not building to potentially replace someone. We're building to augment. We're building to be useful, useful to whom, useful to a human agent.
Starting point is 00:27:35 So it privileges the human agent. And if we were having a global sort of race to pursue machine usefulness rather than machine intelligence, we would know that we would be entering into some phase of human augmentation. So we wanted to ask you what we need to change in order to get on what you call the human complementary path where AI complements worker skills and expertise. Well, I think that, thank you very much, Tristan. I think that's really central and it's something I'm very passionate about
Starting point is 00:28:03 because I think that has to be part of the solution to the current chondrum. And I think it requires different norms, different regulations, and different priorities in the tech industry. And let me start with the last one, because the history here is also very interesting, which is that going back to the very beginning of the computer age, there have been these two battling visions, or if you want to call them, ideologies, among technologists, computer scientists, AI, researchers, or AI enthusiasts. One of them, going back to Turing, elevates autonomous machine intelligence. You don't need the humans to tell AI what to do, of course, in practice.
Starting point is 00:28:42 Some humans do. Most humans don't. But that coexisted for a very long time with ideas that we put under this rubric of machine usefulness. People like Norbert Wiener from more or less around the same time worried very much about robots taking jobs and wages and said, well, you know, we want to have machines working with people to make them more productive. Douglas Engelbard, who sort of both practiced this and developed some of the philosophical ideas leading to things like the computer mouse, hyperlink, hypertext, menu-driven computers. J.C.R. Licklider, whose, you know, same philosophical approach led to ARPANET protocols and Internet protocols.
Starting point is 00:29:25 So we've done quite well with that machine usefulness perspective. But there are two things that have always stood in the way of machine usefulness. One is that it's much harder work to make money out of machine usefulness. You have to find ways of complementing humans and then find the right humans to sell to the right businesses, and it's not going to be mass-scale production. Whereas if you do at least a sort of simple imitation of machine intelligence, which is, okay, we're going to automate all these tasks, you can market that much more easily to business leaders who want
Starting point is 00:29:58 to cut costs. Second, and it pains me to say this, but machine intelligence has a better story. You know, machine usefulness doesn't make good Hollywood movies. Machine intelligence, killer robots, superintelligence does. So we are naturally drawn to these dreams of machines becoming amazing and then battling in this existential war against humans. So all of that, I think, has pushed us more and more into this machine intelligence direction. And we need to change that. And that's why I think emphasizing that the only thing we want from digital tools isn't to imitate us or replicate us,
Starting point is 00:30:37 if you want machine usefulness, that means you want machines to make workers' jobs better. Who's going to know the best way of doing that? Often it's going to be the workers themselves. At least they're going to have some input. But if we sideline worker voice, we're less likely to get there. Well, so I have a question about this, because on one hand, I love this vision, sometimes called the Centaur vision,
Starting point is 00:30:59 of how humans and AI would collaborate and be better together than separately. But it also feels a little rosy to me. For example, former chess champion Garry Kasparov came up with a concept called Advanced Chess, where a human and a computer team up to play against another human and a computer. And he thought that these would always beat just AI alone or just humans alone. And I question that because the truth seems to be that in more and more domains, machines are just outclassing people. Absolutely. That's not my vision. To me, machine usefulness doesn't mean that in every task, humans and machines together are going to do better.
Starting point is 00:31:38 You know, cranes, you don't need humans to be lifting alongside cranes. My vision is that we'll automate some tasks, but there are many other things where, with the right information provision and the creation of new tasks, we can boost human productivity. So in some sense, this is why the internet, and the through line from the internet to AI, is so important. If we can use these as tools for providing the right type of information
Starting point is 00:32:06 and expertise, boosting the capabilities of humans, that's the way, I think, to create the most transformative machine usefulness. But just as one follow-up to that, when I think about making sure that you're pairing workers with AI, sometimes it can feel to me like we're saying that there must always be drivers inside of cars, and that as an attempt to solve the redistribution problem, we're actually building an
Starting point is 00:32:34 architecture that may not be the best architecture, and we might be stifling innovation in the process. How do you think about pairing us? I completely agree with that, but I would just put the following caveat, which is, no, I don't think we always need automated machinery to be babysat by a human worker. But we also should be careful not to rush into automation. So there's nothing wrong, ultimately, with having self-driving cars that don't have drivers. But if you do that too soon,
Starting point is 00:33:05 you are replicating human capabilities while machines are not very productive and are not very capable. And so you're not going to get the productivity boost and you're going to get lots of side effects that you may not want, unnecessary accidents or other problems with self-driving vehicles.
Starting point is 00:33:20 So ultimately, I think we want humans in three key areas. We want human creativity. I think human creativity is not something we are going to be able to replace by machines, at least not in any foreseeable future. We can talk about why that is, but that's my sort of assessment. Second, we want human agency, meaning that humans have a useful purpose that makes their lives meaningful and adds to their flourishing. And third, we need human decision-making when it comes to the aims of society, through democratic means.
Starting point is 00:34:01 We don't want machines to set our preferences and set our objectives. So when we can combine sufficient new tasks and new activities boosted by the right type of information provision, the right type of tools, to achieve better flourishing within those three domains, I think that's the best from my point of view. But, of course, there are some philosophical questions here, and some people may take different positions on some of these issues. So you're outlining a vision of AI being adopted in a pro-human way. Can you say more about that?
Starting point is 00:34:33 What I just described, a pro-human direction, is not just socially desirable, but also technically feasible. So if we can agree that there is a way we can use AI that is consistent with some reasonable set of consensus social objectives, philosophical aims, and that it is technically feasible, I think that really changes the debate. Right now, we are in the dark
Starting point is 00:34:58 because we are told this technology is going to be doing great things. And don't worry about the details. There are some very smart people in some basements or in some very fancy offices who are going to decide it and they have your best interest in mind. Just articulating what is desirable
Starting point is 00:35:17 and what is technically feasible, I think, would be a big step in clarifying the debates. And then the hard work of, well, how do we make sure that we get there? Can we get there if we just clarify it and give these instructions to X, Y, and Z, who will then steer it for us? Probably not. Our experience in history of delegating unconstrained power to business leaders, ideological leaders, or political leaders has not been great. So democratic control is a very important part of it. Control comes both with the electoral process, but it also comes with civil society, media,
Starting point is 00:35:54 people being engaged in terms of sharing their perspectives and voice in workplaces. And I'm not suggesting, of course, that we should all get together and write new AI algorithms or design the next models, but we can have democratic input into this process, in the same way that democratic input into how we use nuclear technology is critical. That requires institutional changes. It requires much better regulation. It may require changes in the incentives of the business world. For example, right now in the United States and in much of the industrialized world,
Starting point is 00:36:31 we subsidize companies for automating. We put much higher taxes on them if they hire labor than when they introduce new machinery. We may also need to change the market structure, but also business models. I'm completely on board with the agenda of antitrust. But I'm not sure that antitrust by itself is going to be enough because, look, if you break up Meta and you have WhatsApp, Instagram, and Facebook, and they have the same business model, it's not going to change some of these issues that we're talking about. So one of the important ideas that I have been advocating is we really need to have regulatory
Starting point is 00:37:07 and fiscal means of changing business models. Right now, a lot of the future of digital technologies and AI is being shaped by this business model where you monetize individual data through digital ads and through other kinds of manipulation. We can deal with that. We can deal with that by, for example, having a digital ad tax, which is not meant to punish these companies per se,
Starting point is 00:37:33 but it's meant to create rule for alternative business models. Today, you cannot have Wikipedia anymore. Today, you cannot have a subscription-based social media because the digital ad-based business model dominates everything. but taxing digital ads both because they are manipulative but also in order to open up the space I think is very important and I think having labor voice here is very, very important.
Starting point is 00:37:56 I think one of the most inspirational moments for me last year was when the WGA had kind of a modern conversation about technology with the Hollywood studios. Their point wasn't to say, we're against AI. Their point was to say, we want workers to have a say on how to use AI. I think that's a great principle. Daron, you offer five principles
Starting point is 00:38:19 to shift incentives away from automation for its own sake and productivity at all costs, to instead reward better-paying and more rewarding work for more people. So can we go over each of them? You talk about the tax system, labor voice, government funding for the right kinds of research,
Starting point is 00:38:35 an AI division within government, and reforming business models. You want to just sort of talk about how these things can come together to change those incentives, just so you know in our work. we always reference the Charlie Munker quote. If you show me the incentives, I'll show you the outcome.
Starting point is 00:38:48 And if you want a different outcome, you'd have to change the incentives. And what you're talking about with these principles is the tools we can bring to change those incentives. 100%. And let me be extremely upfront and say, I have much greater confidence in what I think is a desirable outcome we can go to than in the ability of any given tool to effectively push us in that direction. And some of these tools may not work as intended, and some of these tools may be distorted by other things. But I believe there are several important principles that we have to hit here.
Starting point is 00:39:25 One is you have to fix the fiscal incentives of companies, and that's what the tax system is about. Right now the tax system is distorted. It rewards excessive automation. It penalizes firms when they hire labor or when they train labor. So we can change that. We can go to a tax system much more similar to what we had in the 60s and 70s, where capital and labor income are taxed much more similarly.
Starting point is 00:39:50 Second, as we talked about earlier on, labor voice is critical. AI should not be something that's done to labor, nor should it be something that's done to the rest of the world, by the way. Voice for the rest of the world is something we have to worry about as well. But I've been U.S.-focused here, so let's keep that. That means you need workers' input. They have good ideas that would benefit both themselves and the companies. So just general sort of balance of power in society between capital and labor
Starting point is 00:40:21 is going to be very useful, as well as the fiscal incentives. This by itself is not going to solve the problem that some very promising machine-useful technologies are not going to get enough investment because it's not clear how to monetize them. So that's why having a federal agency that provides seed funding and subsidies for the most promising human-complementary AI technologies, I think, is also an idea to consider. So, as in the area of energy, for example, I don't think we would be where we are today, not a great place, but not a horrible place in terms of solar technologies, wind technologies, if we did not subsidize the early development of these alternatives to fossil fuel. So we have to think about doing the same. We talked about digital ad taxes. That's about the business model.
Starting point is 00:41:15 So if the business model remains distorted, that's going to really put a limit on what you can do with these technologies, what the priorities of the industry are. And that is relevant both for wages and inequality, but also for democracy. Mental health, that's a great thing for social policy to target. Each of these things that you just mentioned is a way to try to steer this and steer the outcome of this. And again, steering it is important because letting the race just play out unfettered
Starting point is 00:41:45 gets you very suboptimal outcomes, gets you very bad outcomes. But can you talk for a second about how we're trying to steer something that we barely understand, that is hitting the world at breakneck speed, and we don't even understand what we're dealing with yet?
Starting point is 00:42:00 Can you speak about how we need to be careful as we begin to apply these steers, based on the fact that we're not even fully sure what we're dealing with? Yes, absolutely, Daniel, 100%, and I completely agree. If the price of steering is to end all progress,
Starting point is 00:42:16 that's too steep a price. So the hope is that you can steer it in a soft-touch way. You encourage private investment, entrepreneurship, the appropriate amount of risk-taking, innovation, but you give a helping hand to the more socially desirable areas. So in the area of energy, which I mentioned already, let me mention that again. The choice is not between ending all use of energy and new technologies.
Starting point is 00:42:43 The choice is, can we steer technology a little bit away from fossil fuels with new innovations in renewables, battery technology, geothermal, perhaps hydrogen? So I think that's the model. And we're going to make mistakes. No regulation is perfect. All regulation will be exploited. All regulation will create some inefficiencies. But I also think that 20th century governments have shown great skill in regulating very complex
Starting point is 00:43:12 activities. And the idea that AI is completely unregulatable, well, that's proven wrong by China. I'm not condoning what China is doing. They are doing it in a very non-democratic way with very non-democratic objectives. But the Chinese central government has been able to regulate AI, has been able to regulate technology companies: how much they engage, for example, young users, and what they work on. Unfortunately, they've used that to steer the direction towards more surveillance and facial recognition technologies rather than the human-complementary technologies, but
Starting point is 00:43:47 it shows it can be done. So we can take that lesson, but do it through democratic means and for better democratic objectives. And if the price of that is, you know, the next version of ChatGPT is three months delayed, I'm not too bothered. If the price of that is AI is completely killed, that's a huge price. But we're not talking about that. We're just talking about, well, let's put some regulatory barriers in place. And I think that dichotomy of, oh, you know, if you are asking for regulation, you must be against progress, I think that's just a false dichotomy. I love the metaphor in your book, where you cite sort of the climate movement as, you know, a need to reverse the trend of, if we're optimizing for
Starting point is 00:44:29 pure power in terms of energy, fossil fuels make the most sense. It's the most efficient, you know, it's the cheapest. We get the most bang for our buck, most mobile, transportable, storable, all these things. And yet we know, and we discover later, if you're just optimizing for that power, it's going to lead to this bad outcome. And we all have to coordinate something different. And you cite the importance of the muckrakers and the activists and the reformers and the film An Inconvenient Truth and the people who are putting attention on these things that then drive it to something eventually like the Inflation Reduction Act, where you're massively, you know, changing incentives on the order of hundreds of billions of dollars to go a different
Starting point is 00:45:03 direction. And I was just thinking metaphorically, I almost imagined you proposing, in my own mind and imagination, something like an AI Disruption Reduction Act. You know, you're proposing this kind of $100 billion. Right. And then in 10 years' time, we can also talk about The Social Dilemma having played some important role, just like An Inconvenient Truth. Well, I hope so. I hope so. And, you know, we can say things like, do we want generative AI to create virtual influencers, AI-generated characters, that are up for grabs? Of course we don't want these things. And people can see that the incentives are what's driving these things that no one wants, not because it's actually valuable or helpful to us.
Starting point is 00:45:38 And so I know we've taken your time more than we had expected. We're very, very grateful for you spending the time with us. It's been a fantastic conversation. My pleasure. It's so delightful to have this conversation. And I strongly recommend that our listeners read Daron's book, Power and Progress. You can find it online everywhere that the Internet will take you. Your Undivided Attention is produced by the Center for Humane Technology, a non-profit working to catalyze a humane future. Our senior producer is Julia Scott.
Starting point is 00:46:14 Josh Lash is our researcher and producer. Kirsten McMurray is our associate producer. And our executive producer is Sasha Fegan. Mixing on this episode by Jeff Sudaken, original music by Ryan and Hayes Holiday. And a special thanks to the whole Center for Humane Technology team. for making this podcast possible. You can find show notes, transcripts, and much more at humanetech.com.
Starting point is 00:46:35 And if you like the podcast, we'd be grateful if you could rate it on Apple Podcasts, because it helps other people find the show. And if you made it all the way here, let me give one more thank you to you for giving us your undivided attention.
