Moody's Talks - Inside Economics - Inside AI with Anthropic's Peter McCrory

Episode Date: May 1, 2026

Peter McCrory, the Head of Economics at AI juggernaut Anthropic, joins the Inside Economics team to consider all things AI and the economy. The discussion begins with how the group is using Claude in our work, then shifts to AI's current and expected lift to productivity, and to the underappreciated economic ramifications of AI. It turns out that Lancaster, PA, is turning out some great economists.

Guest: Peter McCrory, Head of Economics at Anthropic

For more from Peter McCrory: https://peter-mccrory.github.io/

Read The Macroeconomic Consequences of AI and Aging and the Productivity Puzzle

Email us at InsideEconomics@moodys.com for more info about the Moody's Summit '26 Conference in San Diego

Hosts: Mark Zandi – Chief Economist, Moody's Analytics; Cris deRitis – Deputy Chief Economist, Moody's Analytics; and Marisa DiNatale – Senior Director, Head of Global Forecasting, Moody's Analytics

Follow Mark Zandi on 'X' and BlueSky @MarkZandi, Cris deRitis on LinkedIn, and Marisa DiNatale on LinkedIn

Questions or comments? Email us at InsideEconomics@moodys.com. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Starting point is 00:00:13 Welcome to Inside Economics. I'm Mark Zandi, the chief economist of Moody's Analytics, and I'm joined by my two trusty co-hosts, Marisa DiNatale and Cris deRitis. Hi, guys. Hey, Mark. Good to see you. Hi, Cris. I should say right up front, we've got a great guest, Peter McCrory from Anthropic.
Starting point is 00:00:30 He's the head of economics at Anthropic. We're big Claude fans here. We're going to turn to him in just a few minutes. But before we do: no chit-chat, no banter. We're getting right down to brass tacks here, because we don't have a whole lot of time. We've got a great conversation with Peter. The economy, a lot going on this past week.
Starting point is 00:00:51 We got a lot of data: GDP, inflation, income, savings, lots of stuff, housing. And we had the Fed meeting. Let me turn right to you, Marisa. Where do you want to begin? Why don't we begin with the Fed? I think it's the – All right. Yeah.
Starting point is 00:01:07 Okay, so what did the Fed do? Well, the Fed met, and they kept interest rates unchanged. But apparently it was a pretty contentious meeting over how to communicate about future rate moves here. So there were a number of regional Fed presidents who wanted a less accommodative stance. The language had been that future Fed movements would probably be biased toward rate cuts, and they wanted to take that language out given the inflation we're facing because of the war with Iran. So there were, I think, three of them who argued for a change in language. And then, of course, you have Stephen Miran, who, as he has since he was put on the board by President Trump, advocated for a quarter point cut. And obviously that didn't happen. Also, this was Jerome Powell's last meeting as Fed chair. It seems Kevin Warsh will be confirmed and will become the next Fed chair on the 15th of May. But Jerome Powell announced that he is going to stay on indefinitely. He didn't give a time. He will stay on the Board of Governors because he is still concerned about Fed independence and the administration's attacks on the Fed. A lot going on there.
Starting point is 00:02:32 Very dramatic. Yeah, a lot of drama. Cris, what do you make of it all? Yeah, not terribly surprising given the volatility that we have here. But yeah, I think I called Powell staying on the board.
Starting point is 00:02:46 So that's... You did. I was wrong on that. Yeah. Yeah, I think in this time, I think it's, you know, he's certainly got his concerns and... Well, that's a tell, isn't it?
Starting point is 00:02:57 I mean, like a massive tell that he's very concerned about fed independence. Absolutely, right? I mean, he said it explicitly. Really. Did he use those words, Fed Independence?
Starting point is 00:03:05 He said that he's... not going anywhere until he's certain that the Justice Department has really dropped its investigation into the Fed. They dropped it, right? But there were comments that we could reopen it at any time and we're going to have the Fed's Inspector General look into this. And so the door has been left open for future judicial action. And he called that out specifically as a reason he's going to stay on. And it feels like if you take this all together, what Powell did and the dissents, it feels like the probability that the next move by the Fed is going to be a cut is now much lower. And there's even now a rising probability that the next move will be a rate hike. Is that appropriate interpretation, Marissa? Yeah, that's absolutely true. That's right.
Starting point is 00:03:55 I was just going to pull up the data. But yeah, at the next Fed meeting in June, the odds are only 7% for a rate cut. What's the probability of a rate hike? Actually, zero. Zero. Okay. Yeah, I don't expect them to move that quickly.
Starting point is 00:04:18 But it does feel like they're now increasingly focused on the inflationary effects of the war and everything else. I mean, inflation is, this gets to the data because we got new inflation data on the consumer expenditure inflator, the measure of inflation, the Fed targets the 2% target. That's now, they think core PC, isn't that over 3% year over year? Isn't it? 3.2? 3.2%. Is that the core excluding food and energy?
Starting point is 00:04:46 Yeah. 3.5 on the headline. It's 3.5 on the headline. And targets 2 and the direction of travel here is not good, right? I mean, it's going to accelerate, given what's going on with gasoline prices and everything else. So you can see why they're concerned. And inflation expectations, I keep looking at that five-year break-even.
Starting point is 00:05:05 That's the difference between the yield on the five-year treasury and the five-year tips, Treasury inflation-protective securities. That's kind of a window into what bond investors think inflation is going to be. That continues to be hovering at the very high end of the range that's prevailed over the last four or five years. And it feels like it might break through here pretty soon. So I expect that's what we're hearing from Fed officials and why they're saying, hey look, the next move might not be a cut. It might be an actual increase.
Starting point is 00:05:33 Cris, we're going to keep this short because we do want to get to Peter. What about the economic data? Anything you want to call out there? You mentioned PCE. I guess we got ECI. Wait, wait, wait, wait, wait, wait. Are you going to go to the ECI? Oh, sorry.
Starting point is 00:05:49 You know, in my mind, I'm already, I thought we already talked about GDP. GDP is the big one. GDP, yes. Why don't you talk about permits and, manvolia or something. Permits were up to. Anyway, GDP, that was the big one.
Starting point is 00:06:05 Yeah. That was the big test of your, your, my model. Your AI model. Yeah. Well, I'll have to say, it was at 3-1 a week ago with,
Starting point is 00:06:15 and then we got all the data this week. You got a lot of data. And the trade data, I ran and came in at 2.6, so it brought down the estimate. But still, still. I still lost out to Justin because,
Starting point is 00:06:26 you know, he runs our model here, with the current quarter model, and it was at two on the nose, I think, right? He was on two at the nose. Yeah, I got it exactly right. Yeah. Yeah. So what do you make of that 2%?
Starting point is 00:06:38 Chris? Not great. Not great, especially on the back of the 0.5% in Q4, right? So that was, you know, we, you've discussed in the past, some shutdown effects, right? Clearly Q4 was weak because of the government shutdown. We expected a boost in Q1 as a result of that. So really, I think we should be averaging that out.
Starting point is 00:06:58 And that's, you know, what, 1.25 percent. So not a great number. The one great number, the better number that I saw was the final demand for private consumption. So that's consumption plus investment, excluding government and imports and inventories. That actually was relatively strong. It was 2.5 percent. So maybe a little closer to. I mean, we're getting an investment boom, right?
Starting point is 00:07:22 Fixed investment. That's right. That's right. I mean, consumption kind of helped me. And you're saying that's a positive. That's for sure. Two and a half percent was. Yeah.
Starting point is 00:07:29 Was there. Yeah. Yeah. So, yeah, we got the little investment bump. And that offset some of the other weakness. And any other data you want to call out? I mean, I'd call out the saving rate. I don't know if you noticed that, but that fell again.
Starting point is 00:07:44 Maybe that will get revised, it often does. But we're down to 3.6% on the saving rate. That's about as low as it ever gets. So consumers are, consumer spending was pretty punk, I thought. You know, it wasn't really low. It slowed quite a bit. Slowed quite a bit. Yeah.
Starting point is 00:07:57 And that's before the Iran effects really kick into gear. And that's with a 3.6% saving rate. So that does, I don't know. That feels a little concerning to me, you know, as we move forward here, into this Q2 with the now what's going on with the war. And gasoline prices now in a new high. We're $4.40 for a gallon of ridder on leaded. So I don't know. I came away from the week thinking, you know, this isn't good.
Starting point is 00:08:21 Inflation's high and moving in the wrong direction and growth is weak and moving in the wrong direction. It's just not surprising, you know, higher oil prices and tariffs. You get this kind of effect. But here it is. It's not very good. Okay. Anything else before we turn to Peter? Chris, Marissa, anything?
Starting point is 00:08:42 Okay. All right. All right. Well, let's bring in our guest, Peter McCroar, the head of economics at Anthropic. Hi, Peter. It's a privilege to be here. Thanks for having me. I want to tell you, I don't know about those two other guys, but I'm a massive Claude fan.
Starting point is 00:08:57 I'm all over, Claude. Amazing. You too, Chris, I think you are too, aren't you? I was an early adopter. Absolutely. Oh, okay. What does that mean early adopter? Like last week or?
Starting point is 00:09:11 Last year. Last year. Yeah, last year. I really want to hear how you all are using Claude, which maybe is the question that you might be about to ask me, which I'm happy to answer as well. Well, you go first. Why don't you tell?
Starting point is 00:09:23 I'm very curious. How does one become the head of economics at Anthropic. Like, how does that happen? Yeah. Yeah, let us know. I just very curious how you got your career path, how you got where you are. Yeah, it's been kind of an unexpected journey over the year. So actually, I began college as a philosophy and English double major, not knowing anything about economics. When I went to college, I thought I was going to be done doing anything with math.
Starting point is 00:09:53 I even said that to my dad at one point. And he was like, well, maybe you should. consider taking some math-related course. And some friends really encouraged me to take an intro to microeconomics class, my sophomore year, just because the professor was great and the course was maybe a blend of the types of things I was thinking about from that philosophy in English perspective. And I haven't really looked back since. After college, I worked at the St. Louis Fed as a research assistant where I got exposed to doing really high quality economic research with policy orientation. And then I went and did my PhD at
Starting point is 00:10:33 Berkeley, focusing on applied macroeconomic questions, mostly like fiscal policy and how that works during times of crises. And I really thought I was going to go into academia. And as with many things, the pandemic was a very disruptive time. And I, for some personal and professional reasons, decided to take a different path. And I joined J.P. Morgan's U.S. economic research team right out of grad school, spent those days thinking about forecasting, analyzing basically all aspects of the U.S. economy, which was a very interesting time, as it has been since, to be paying attention to what's happening. I found myself intrigued by what economists do within tech and moved over to a role at LinkedIn on an applied science team doing a blend of
Starting point is 00:11:26 machine learning experimentation, causal inference around the jobs marketplace. How do you make the marketplace where job seekers and companies are trying to find one another? I then moved over to their research institute thinking about how do I use the data at LinkedIn to understand labor market questions that are of first-order importance. I spent a lot of time trying to measure labor market tightness. And I wasn't exactly looking to come to Anthropic, but this opportunity opened up, and I knew that they had begun this work to develop the Anthropic Economic Index, which is a measure of how Claude is being used by people and businesses around the world. And it was kind of a clear-eyed picture of how AI is already beginning to change the labor market and the broader economy.
Starting point is 00:12:16 And I had the opportunity to come and join and invest in and build out that effort. And I joined about a year ago in June. Oh, very cool. So were you raised in the Midwest? Is that how you got to the St. Louis Fed? No, I'm originally from Lancaster, Pennsylvania. Oh, cool. Yeah.
Starting point is 00:12:36 Do you have a connection? Have you been? Well, I'm We're all from Pennsylvania. Oh, In fact, In fact, you won't know You may know this person
Starting point is 00:12:46 Because he's probably this Adam Ozemeck I do know Adam You don't know Adam There are a few of us A hometown Lancaster economists And before I joined Anthropic, I was living in Lancaster But then moved out to the Bay Area
Starting point is 00:13:01 When I took this job And would meet up with Adam occasionally You know, Adam, it was a very he worked with us for many years. In fact, we've got a number of great papers we've written together, the group of us. And he was our very first podcast guest, external podcast guest. Wow. We started this.
Starting point is 00:13:21 How long ago did, I think it's been five years, right? Chris, Marcia, it's been five years since we started this. Yeah. Every week. And Adam was number one. He was just a great guy. He's a great guy, you know. I hope I can live up to the bar set by.
Starting point is 00:13:36 former economist from Lancaster, you know, former guests who are economists from Lancaster, Pennsylvania. Right. You know, were you in Bruce Kasman's World. I was, yeah, yeah, Bruce and then Mike Froly. Mike Foley. Yeah, exactly. Oh, very cool. Boy, you've done a lot.
Starting point is 00:13:55 You're a very young guy. I mean, you've accomplished, I mean, I'm presuming you, you're a lot younger than me, and you look like you've, you know, much younger, accomplished a lot in a very, short period of time. So very cool. Let me ask you this. We'll be happy to share what we're using Claude for, but can we ask you just how you're using Claude? Is it pervasive in all the work that you do? Absolutely. And I would say I would maybe like break the sound down into two different ways that I and the team use Claude for economic research. One is for doing this, to becoming more efficient at the type of work that economists already do. So this would be downloading data,
Starting point is 00:14:48 running regressions, visualizing, creating charts, and Claude code in particular is an incredible asset for that type of implementing of the tasks that economists do. because of that, it also means that you can iterate on ideas much more quickly. And I find myself tasking Claude with an exploratory question about some way that AI might be affecting the economy. And you can get almost an immediate signal on whether that's an idea worth digging into. So this notion of iterating on ideas as a way of sampling and maybe even finding the best idea to explore. But then the second way that I would say we use Claude is by broadening the scope of the sorts of things that economists maybe are expected to do in these sorts of roles. So I don't know much about developing front-end development of interactive dashboards.
Starting point is 00:15:50 But this is the sort of thing that Claude can do very well. And so I know the data very well. I know I have opinions on how to visualize the data and even how to present it to, my counterparts throughout the company. I can ask Claude to spin up an interactive dashboard, and then I can share that with my comms, partners, marketing partners, et cetera, as a way of illustrating. This is what we're seeing in the data. This is maybe how we might want to represent it in some of our future research. And that becoming more efficient at the work that you're already doing and the broadening out of the scope of what you're able to accomplish, I do
Starting point is 00:16:31 think are two key axes for thinking about the impact of AI. It's not just about doing what you're already doing faster, but also doing more in a different range of tasks. You know, one thing I find useful is I start with Claude. That's because I know that best and I feel comfortable with it. But sometimes I'll take what Claude produces and I'll give it to another LLM and I'll say, hey, you know, what do you think? What are we missing? Do you do that as well? Do you use other LLMs?
Starting point is 00:17:04 I mean, maybe use different Clod instances as opposed to using other LLMs. But, you know, one thing that that reminds me of is in a report we put out in February called Economic Permitives, where we introduce simple, basic ways of cutting into how people use Claude, and for the types of tasks
Starting point is 00:17:27 they're tackling. In general, Claude seems to succeed at the tasks that people give them, but the most complex tasks are where the model tends to struggle the most. And so you can see this negative gradient between task complexity and effectiveness. What this suggests is that this aspect of the sort of process of using these models still require human expertise, to evaluate, to assess whether the work actually is of sufficient high quality. I mean, relying on another LLM would be another way to kind of automate or streamline that process. But in my experience, in iterating very quickly, Claude can, in general, move in the right direction, but we'll make sometimes even very subtle mistakes,
Starting point is 00:18:26 you know, specifying a regression in an unwieldy way that doesn't actually answer the question that I want to answer or writing down a macroeconomic model and misunderstanding what one of the parameters in the model actually is. And that reliance on or the complementarity of human expertise to really get value out of the model is something that we see in our data. I think earlier, Chris, you mentioned about being an early adopter of Claude.
Starting point is 00:18:56 We put out a report called Learning Curves, where we asked this question, are people who have used Claude longer, more effective than those who are just starting? And even after six months, people who have been using Claude for six months are more likely to interact with Claude as a thought partner in a more collaborative way. And even when you control for the specific tasks that they're doing and a whole host of other factors, they seem to get more value. out of the model seem to be more effective. And so in that sense, it kind of points in the direction of skill expertise and complementarity to the pure capability of the model. Hey, Chris, you want to give an example of how are you using, Claude?
Starting point is 00:19:37 I will say Chris is very good. I mean, I'll ask a question. Yeah. He'll come back with an unbelievable answer. And at first, you know, six months ago, I go, wow, what's he doing? This guy's really smart. This guy's really smart. He's gotten a lot smarter, but he's really quite adept at it.
Starting point is 00:19:59 So you want to give an example, Chris, of how are you using it? In many ways, very similar to what Peter described, right? It's my research assistant, right? He just doesn't give me a research assistant, so Claude becomes my research assistant. It's not true. What does he talking about? But certainly, a lot of coding, right? That really was the breakthrough, right?
Starting point is 00:20:20 I didn't know Python very well, for example. I was an R-coder or a SaaS programmer, e-views, and the company was moving in the direction of Python, and I was really reluctant to learn another language. But with Claude, very easy, it's almost seamless. It's, you know, I picked up Python very easily. In fact, you know, Claude writes Python better than I certainly could, but I'm assuming than many other professional programs.
Starting point is 00:20:50 could even aspire to. So it just increases that learning curve tremendously. So when it comes to modeling or data wrangling, all the economics type of analysis we do, it just facilitates. And I certainly second the iteration, the iterative nature of that's where I see the real value. And I think from a, if the longer you use it, the more comfortable you become with that aspect or understanding, oh, you know, don't stop with the first question. You need to. refine it and redirect. One of my favorite things to do, similar to what Mark described, is to ask the system to critique itself, right?
Starting point is 00:21:30 When I see that the output is not right or it's not interpreting, I say, step back, pretend like you're a PhD economist, you know, looking at this from the outside, and review this and tell me what you think. And oftentimes it comes back and identifies its errors, right? It says, oh, well, you know, this is clearly a flaw. Let me go back and rework that for you. So it's, it's, it's just been a very, um, complimentary tool, right? I see this as really expanding my, my capabilities and not so much the substitution effects,
Starting point is 00:22:00 but I see much more of the complementary. Maybe, you know, we, um, the, a partner team at the, at Anthropic societal impacts did this large scale survey of Claude users, around 81,000 people where, uh, using this anthropic interviewer, Claude is asking people questions about how they feel about AI's potential impact on society and on the economy. We dug into some of the data that that survey produced to understand some of the economic implications of AI based on what people reveal to us. And on this question of what productivity effect do people get from the model, they emphasize that the top response was actually broadening of scope and not doing things faster.
Starting point is 00:22:57 And so that's partly informing why, you know, in my experience, I emphasize those two dimensions, but it's actually something that we see more broadly when we ask our users how they're experiencing and using AI. Marissa, do you have a good example? Well, I was thinking about how we used it when we came up with that vicious cycle labor market index, right? So we were playing around with the data. How could we improve on the SOM rule? And then we put it through Claude and basically said, are there other permutations of this that would yield more accurate results? Like if we took different moving averages or we looked back five years rather than one or we looked back, I don't know, just sort of different permutations of it. And that helped, right? that helped fact check what we were doing and gave us some other ideas to come up with. So I agree, it's that broadening. It's like, how can I improve this or how can I make this better? I used it yesterday. I was putting together a presentation that I'm giving in a couple of weeks. And I
Starting point is 00:24:07 asked it, how can I make this more succinct? Or what should my takeaways be here? And this is the audience I'm speaking to. It's a bunch of tech investors, like what's going to be the most relevant for them to their economic takeaways. So kind of just helping. me think about things that I hadn't thought about myself, I think is the best way to use it for me right now. You know, I got a call or an email from the staff of a congresswoman on last Friday a week ago. There's a piece of legislation, housing-related, that they wanted me to evaluate, but to have the evaluation by the end of day on Monday.
Starting point is 00:24:49 And of course, I had another paper due on private credit I was working on. So I said, fine. And it was a lot of moving parts, you know, in the proposal and a lot of estimates as to what the impact would be on housing supply. So, you know, I said, okay, I'm going to give it to Claude. I gave it, that did a bunch of other stuff. And I said, go look at everything Mark Zandi has written that's in the public domain, write the, write an assessment of this proposal in Mark Zandi's voice. And it came back, and I said, what else do you want to know?
Starting point is 00:25:29 And it came back and asked a bunch of questions. Like, the one I found most interesting was how critical should I be of the proposal. Should I be critical to what degree? And how long do you want it to be and all kinds of questions like that? It comes back, and I go back and forth a few times, and I got a product that was not exactly what I would use, but it was certainly where I would start. And then I took it and I was able to go with it. But here, Peter, I bring that up as a question for you, something that's bothering me. And you're a philosophy major, so maybe you can help me with this.
Starting point is 00:26:08 So if I keep going back to Claude and say, use Mark Zandi's voice and produce something for me, and then I put that on the public domain, Won't increasingly over time be less Mark Zandi and more clawed? And when does it become clawed and not Mark Sandy? Do you know what I'm saying? The ship of Theseus question in a way is like over time you're successfully replacing just one board. And at what point does it cease to be the old ship and become the new ship? New ship. I guess it's an interesting question. I mean, in some sense, you are still exercising oversight,
Starting point is 00:26:50 at least in this example, of what is put out into the public domain. I suppose if you had this be fully autonomous, that would raise some questions. Right. But I think that, like, I mean, in your experience, like the very first pass is, it can be very good at present, but is it doing, is it getting all the way to 100, 100% is it, does it have the, you know, there are things that you know that might not necessarily be in the public domain or aspect that you're thinking that feed into the types of arguments
Starting point is 00:27:28 that you want to make this sort of tacit knowledge that might not be as readily accessible to the model just based on those public domain writings. But maybe at the same time I would say this having access to the right information for the task at hand is, very important. It's not just capabilities alone. We see something along these lines when we look at enterprise data, like how businesses themselves embed cloud capabilities through the API
Starting point is 00:28:01 for new and existing workflows, where the very complex tasks that businesses are beginning to automate with Claude rely on disproportionately more contextual information relative to more straightforward tasks. And what this illustrates to me is, well, one, like, what are the tasks that might not actually be in our data yet? These would be things like things where Claude could be very capable, but maybe the business hasn't done the data modernization yet to get all the pieces of the puzzle, all the information centralized and codified and structured in a way that's available to the model. or tacit knowledge that's sort of dispersed within the organization.
Starting point is 00:28:49 If there's something that your coworker knows that's relevant to what Claude is trying to complete, unless that information is elicited, the model may struggle to complete that task. And so, I think this is like sort of how I think about the broader economic implications. It hinges on business adoption and the complementary investments that ultimately will be required. to unlock the productivity gains that we see in the microdata, but until those are made, it might not show up in the aggregate. Yeah, one thing I want to point out that document that came back from Claude,
Starting point is 00:29:29 it took me two hours to get it to a place where I was comfortable sending it along. If I had not had that, it would have taken me two days, no doubt about it. And it would have been worse, because the other thing had asked me for, can I, there's said, I'm looking at the supply estimate, There's really three key assumptions that are being used. Do you want me to create a sensitivity analysis? And I go, oh, wow. Yeah, sure. I go right ahead. Let me see what it looks like. You know, that that time saving estimate is like two hours relative to 48 or two hours relative, I guess, like working days. I don't, maybe it's like closer to 16 hours. It's like on the right order of magnitude of what we get when we ask Claude to estimate. look at, you know, Claude in a privacy-preserving way, looks at the conversations and sees what tasks people are completing. And then we ask Claude to estimate how long would it take someone to complete the task that Claude is doing
Starting point is 00:30:28 if they didn't have access to the AI and then how much time did it actually take them. And compiling information from reports, which in some sense is sort of in the vein of the exercise that you're describing, where you have to, like, go out and get information and think about how to, compile it and synthesize it for the objective at hand. That's, we see like time savings on the order of like 80 to 90%. What's interesting is we can see that across the full range of tasks that people are bringing to Claude. And then we can estimate what would this imply about the overall economy if we add those
Starting point is 00:31:07 task time savings up according to the structure of the U.S. economy. And what you get from that exercise is increasing labor productivity growth of 1.8 percentage points per year over the next decade, if that's how long it takes for these capabilities to diffuse. And of course, you all are paying very close attention to these sorts of data. That's a very big number. It would bring us back to late 90s, early 2000s levels of labor productivity. We don't see that yet in the aggregate data. But this is just current models and current usage patterns. The potential scale of impact, I think, is quite large. Well, let's go there next. That's a great segue into the macroeconomic consequences of AI. And there's demand-side consequences like the build-out of the infrastructure
Starting point is 00:32:00 and maybe wealth effects on consumption generated by the run-up in stock prices of these companies. Everyone involved is wealthier, and that drives a lot of spending. But on the supply side, in terms of productivity, I think you just said it, it's hard to, at least so far, it's hard to see, it's hard to connect those dots, isn't it between AI and the productivity? You would agree with that? I would agree with that. If I recall, the SFFFED's sort of business cycle adjusted TFP number is points more toward like capital deepening and the CAPEX belt out itself as being related to the, the higher productivity numbers that we've seen, I think it's too early to attribute this to AI per se. If you look in the cross-section across sectors, there are some people who have argued that
Starting point is 00:32:55 maybe we're beginning to see some signs that the places where AI could in principle provide more productivity lift has somewhat more elevated productivity growth, but I don't think it's dispositive at this point. So the exercise that we do is not to say, what is the real-time impact so far, but rather, how can we kind of bound the scale of impact based on the types of automated deployment of Claude that we see on our platform at present?
Starting point is 00:33:28 So if you look at economists' expectations of AI's contribution to future productivity growth, it's the add on top of, let's call it, business-as-usual productivity growth. Yep. There's a big range among the economists, not surprisingly. Everyone's using a different methodology and different historical analogs and using different data. But it ranges somewhere on average by my calculation about a half, in terms of total factor
Starting point is 00:34:04 productivity, you mentioned TFP, total factor productivity, somewhere around a half. half a percentage point per annum, you know, something like that over the next 10 years. Yep. Labor productivity would be a bit higher. It would be, what, 7, 8, 10 percent percent per annum. And that would be consistent roughly with what happened during the Internet period, mid-90s through the mid-2000s when the Internet came on and productivity improved. That's kind of sort of involved in line with that, which is not surprising because that's what economists like me would do. What we do do, we go back and look at history.
Starting point is 00:34:38 and say, okay, here's the analog, and we're going to use that here, because we don't, otherwise, we feel kind of rudderless. But it sounds like what you're saying is, based on what you're observing, so far, is it's going to be measurably stronger than that. It's not 7, 8, 10, 7, 1.8 percentage point per annum, something like that. So the number 1.8 is labor productivity, which assumes some capital deepening. I think it's closer to like one percentage point TFP. But we do that to benchmark it against this literature review put out by some researchers I think in 2024, 2025, where our number is not the largest estimate out there,
Starting point is 00:35:27 like based on current usage of current models, but it is not as sort of pessimistic as some of the very low numbers that have been, I mean, it's a bit above the median. Now, I would say that I think that AI has the potential to be more consequential for productivity than Internet, perhaps, for a number of reasons. One, you know, the, it's a general purpose technology. I mean, the Internet is as well, but it's, it has the potential to accelerate
Starting point is 00:36:05 so much aspect of cognitive work across the entire economy. You don't need to build out a new digital infrastructure to deploy it. You can access the model right away. You don't need specialized skills in order to get value from it. You just pull up your computer and you start firing away the questions that you have for Claude. And so we might expect diffusion to be much faster. and indeed, if you look at business and consumer surveys, it does look like adoption rates have moved higher more quickly
Starting point is 00:36:40 than did Internet and other past technologies. In our data, when we look at the diffusion of the technology within the U.S., so like how quickly do late-adopting regions catch up to early adopting regions in terms of usage per capita, that process appears to be moving at about five to ten times faster than past consequential technologies in the 20th century. And there's this paper in the QJE, quarterly journal of economics, like the diffusion of technologies.
Starting point is 00:37:14 I would recommend folks to dig into that paper to get a historical analog of what we might be seeing today. But then you add to that, that it's also an innovation in the method of innovation, where I think AI has this possibility that it could automate the innovation, process itself, loosening the bottleneck of the number of effective R&D researchers that you have within the economy. And we all know that that R&D aspect of the economy is crucially
Starting point is 00:37:46 important to long-run growth. And so this exercise that we do with our data sort of takes as given that this is a one-time lift in productivity spread out over 10 years. But if we have the automation of innovation at scale than perhaps were poised for much larger effects. I think there's a lot of uncertainty here, and so I don't want to only make the strong case for very large effects. There are unexpected bottlenecks in the process and how this plays out also is informed by business adoption. But given how quickly things are moving, I suppose, will get an answer before too long as to whether this lift and productivity is sustained by AI. You know, I don't want to apply too high level a degree of precision, but just so I,
Starting point is 00:38:39 because I just think in terms of, I'm a forecaster, I think in terms of numbers. So in our forecast for the next 10 years through 30, 2035, 2036, we're putting into our numbers. the ad from AI, only AI, is 8 tenths of a percent per annum. That's labor productivity growth. Labor productivity. Oh, labor productivity. Yeah, labor productivity. Yeah, labor productivity goes.
Starting point is 00:39:10 But TFP would be five tenths of a percent per annum, something like that. But let's just do labor productivity because that's what most people understand and get their mind around. So it sounds like what you're saying is if you had put, if you were asked to put pen to paper, put numbers down, you would say not 0.8% per ham, you'd say 1.8% per ann. I might just qualify that we're not, we didn't make a specific forecast. Right. For sure. So it's the exercise to say, assume that it takes 10 years for the efficiencies that we observe at the task level to spread throughout the economy. Right.
Starting point is 00:39:53 Taking as given the structure of the economy. But then in this economic primitives report, we did some assessment of like how robust is that prediction. Okay. And if there are bottlenecks that limit how much productivity you can get at the job level from just automating specific tasks, you can actually push it much closer to what you're describing, which is on the order of 0.8 percentage points, 0.5 percentage points, even labor productivity. I think the example that I have in mind here is, we see teachers using Claude for sort of developing course curricula. And you can save a lot of time using Claude to iterate on that content, but you still have to spend the entire day in front of a classroom teaching.
Starting point is 00:40:44 And so maybe there's a quality improvement, but that's a clear bottleneck to the effective production of sort of education. And if there are these sort of very crucial tasks that people are doing that can't be automated within different jobs, that would be a source by which the number that we produce could be pushed downward. At the same time, the models are improving very quickly. I'm sure you're familiar with the meter chart that sort of describes the task horizon that these models can rely on. liably complete that's doubling roughly every four to seven months. And there's some recent academic evidence arguing that this sort of doubling time is occurring across a wider range of tasks than what the meter study looks at. And so we're moving very quickly into territory that's
Starting point is 00:41:45 very hard to evaluate. And I love the point that you make that it's not just about point estimates, but sort of a rate, like think in terms of a distribution of potential outcomes, both for the productivity and also for how we think about how this could show up in terms of the labor market. Okay, and just to even put it, I know I'm pressing and I don't want to press two hours. I just want to get my mind around your perspective. So if it's 1.8% percentage points per annum, and that's the contribution to labor productivity growth from AI, then that would imply.
Starting point is 00:42:22 you know, productivity gains that are in aggregate, quite sizable, you know, three and a half, four percent, you know, something like that. And that means GDP growth is going to be in that ballpark. We won't get a lot of labor force growth, but we'll get some. So let's just say 4 percent, kind of real GDP growth. And we've been growing closer to two. Is that kind of sort of what you're thinking? Is the most likely scenario in the middle of the distribution of of possible outcomes here, something along those lines? Or am I just stretching it too far? You know, it's hard to, I think, assess in some sense
Starting point is 00:43:05 like what your baseline GDP growth would be in the absence of, you know, there's this, there's this, you know, I think, we saw the labor productivity slowdown of the last 20 to 30 years, and prominent arguments for why we were past the era of transformative general purpose, technologies, population growth being one of them, maybe just like it's hard to find new ideas that, like, we had,
Starting point is 00:43:41 electrification was so crucially important about 100 years ago. And so I think in that sense, it's hard to know what the benchmark would be in a counterfactual sense in the apps. of AI innovation and even our baseline forecasts prior to the advent of large language models might have implicitly, even without thinking about it, implicitly been incorporating the arrival of some new innovation that would allow us to sustain the type of economic growth that we had seen. And of course, historically in the U.S., at least, GDP per capita has just steadily grown at two percentage points per year over the long 20th century from 18,
Starting point is 00:44:23 the dawn of modern economic growth to the present. And during that period of time, we saw immense structural transformation, the decline in agriculture, the hump-shaped movement in manufacturing, the rise of services. And yet the engine just kind of hummed at this steady pace. And so that gives me a little bit of pause to maybe, even if I was going to say 4%, I would maybe say risks or it would be skewed to the downside on that outlook, given sort of that historical precedent. At the same time, and I'm sure that you have talked about this, I'm curious if you've seen it or have talked about it,
Starting point is 00:45:03 the figure from the Dallas Fed that creates the chart that I'm describing, but then articulates three potential scenarios. They've seen it, yeah. Of AI. One is sort of the singularity in an optimistic direction, in the singularity in a pessimistic direction, or an ever so slight lift in TFP growth overall. So a lot of uncertainty, that's a big motivation for the work that we're doing on my team,
Starting point is 00:45:32 which is just trying to measure how are people and businesses using these tools? Increasingly, how is this already affecting economic activity? And the hardest problem, which sounds like you're already making some headway on, is how do you assign likelihoods to the range of uncertain scenarios that could materialize in one to five years from now? And I view that as perhaps the most,
Starting point is 00:45:58 among the most urgent and pressing questions for my team and for society more broadly is to think about the range of scenarios before us and where we might be heading and also what actions that might demand of us today because we have a say over the matter of how this all goes. Well, I want to come back and explore the implications of that kind of rapid productivity growth. You know, go to this concern about the dystopic impacts that might potentially occur in the labor market.
Starting point is 00:46:29 But before I do that, first let me brag. My nephew is Nate Rush, and he's at MITR and was involved, does a lot of their research. And a very cool kid, a very smart guy. And but before I do, let me turn it back to Marissa and Chris to see if there's any Anything that I'll say want to push on here before we start talking about the job dystopian aspects of these, potentially of what's unfolding here? Marissa, anything you want to push on or ask about? So I was curious about, you know, the discussion you had about your teacher's example, right? And then Mark's example about evaluating this paper.
Starting point is 00:47:12 And then it came back and said, do you want to do a sensitivity analysis? And he wouldn't have done that, right? So I think there's an argument that you were making that perhaps in instances where it's not explicitly saving time and maybe increasing quality, which implies that the cost of things could fall as a result of AI, right? Like the cost of services, the cost of producing maybe falling if quality is rising. How do you factor that? into the outlook on productivity or growth or how it's going to have an impact on the larger economy.
Starting point is 00:47:52 Yeah, that's a great question. I think another way that I think about this particular question is whether or not the productivity gains actually show up in official statistics. Measured GDP. A great example is if you look at the history of light production, this paper by Nordhaus, where you look at what it took to produce light tens of thousands of years ago all the way up to the present. And you drew a very careful analysis, and you can measure light production based on sort of like lumine intensity.
Starting point is 00:48:37 And one of the broad punchlines of that paper is, yeah, maybe the way that we, measure the price of light in a market is not fully accounting for the productivity effects over the longer scope of economic history. And so I think for the reasons that you describe, for reasons around, you know, people are getting, using these tools, many of them for free, not necessarily paying for them. There's a lot of consumer surplus much in the way that The internet brought a lot of unmeasured productivity that failed to show up in measured GDP. I don't have a good answer on like the quantitative implications, but I do think that that's important to keep in mind.
Starting point is 00:49:26 Chris, anything you want to bring up? Yeah, quickly, just along the lines of your productivity assumptions going forward, how do you think about the improvements to the mall? You mentioned the models are improving very rapidly today. Are you assuming some type of plateau? Do you think we're going towards artificial general intelligence? And that's kind of built into your more rapid productivity assumption relative to ours? Or how do you think about that aspect? Yeah.
Starting point is 00:49:57 So the specific exercise that we did focused really on how people and businesses are using the current generation of model. So you look at specific tasks like compiling information from reports, checking diagnostic images, see how much time savings they have, look at the share of economic activity that those tasks represent, and then add it up using what's referred to as Hulton's theorem to sort of standard macro growth accounting techniques for adding up microlevel efficiency gains. and just that number that you get is about 1.8 percentage points, labor productivity, lift. In the primitives report, I forgot to mention earlier, we also introduced this notion of like,
Starting point is 00:50:47 did Claude succeed at what it was asked to do, yes or no? And so one reason why you might think that this number is too sanguine or optimistic is that it's like failing to account for the fact that sometimes Claude produces more work
Starting point is 00:51:03 or requires more human involvement to get the output. And so actually, if you take that into account, I believe that the number that you get is closer to one percentage point, labor productivity lift. And so I think you can push this model, and you can kind of assess the range of implications of what we see in our data in terms of sort of the anticipated lift to aggregate productivity,
Starting point is 00:51:32 more in the direction of the number that you've ballparked, and ballpark is probably unfair to the careful consideration of that number. So how you came up with that number. But at the same time, what I don't have a good handle on is how this rapid expansion of model capabilities alters this number. At the end of the day, it's mediated by business adoption, and I just think we don't have a good handle on both the determinants and the consequences of business adoption and how that is affected by model capability improvements. Got it. But at the end of the day, it's current technology, basically.
Starting point is 00:52:18 Yeah, yeah. So I think we're poised for a potentially very large effect. So whether it's my, our forecast of the benefits to productivity or your forecast, they're big. and taking at face value all else equal seems very positive. I mean, productivity growth is kind of what drives the train for people's living standards. We want the productivity gains. But there is a growing, there is a concern, I don't know if it's growing, but a concern that the productivity gains will come on so quickly. If we're going to get 1.8% per annum over the next 10 years, that's got to start happening
Starting point is 00:53:00 here pretty quickly. And that's a lot of productivity growth in addition to what we're getting. And the economy at this point is already not creating any jobs. I mean, if you look at the aggregate job statistics, it's basically going sideways here for the past year. What does it mean if we now start to see these AI productivity boosts really starting to kick into gear? And by everyone's expectation, that's got to happen pretty soon. soon. Maybe not 2026, but certainly by 27, 28, we're going to kick into a higher gear here.
Starting point is 00:53:38 And if we were not in a different world in terms of job creation, is that something that's on the radar screen? Am I thinking about that wrongly? Is that a worry? So one way that we have tried to answer this question of what impact AI might have in the labor market is by trying to focus on what might we see in the data today that would give us a clear signal that this is actually happening. So in early March, we put out this report that compared potential capabilities of large language models against where Cloud is being used for automated purposes and for work purposes. And there's actually quite a bit of a gap. between those two theoretical and observed exposure notions.
Starting point is 00:54:32 So computer and mathematical jobs have something like 95% of the time-weighted tasks that those sorts of workers do as being the sorts of things that large language models in principle could do. But in our data, we only see around a third of those tasks showing up. This includes things like, or jobs that have high observed exposure include data, entry workers, technical writers, some types of computer programming. What you can do is then ask how have unemployment rates evolved for those workers as compared to workers who are not exposed to AI even in this observed sense? And when you do that, there's no material impact yet in the
Starting point is 00:55:20 aggregate. I think this is consistent with the fact that the labor market, you know, has actually been reasonably resilient and healthy prime age employment to population ratios are at multi-decade highs. The unemployment rate is within striking distance to the Fed's full maximum employment mandate. But we think that this might be a sensible framework going forward that if there is displacement in a compressed amount of time for the most exposed workers, it should at the very at least show up here. Now, one thing that we did see in that report that was more suggestive than conclusive was that hiring rates for younger workers in jobs that have a high observed AI exposure seemed to have
Starting point is 00:56:09 weakened in the past year or so as compared to other young workers in unexposed roles. It hasn't shown up in terms of clear differences in unemployment rates. but there's some suggestive signal that maybe pockets of the labor market are already beginning to be affected. Just to push a little bit harder, though, on that. I mean, so far, yeah, we haven't seen anything. You've got a nice framework for trying to understand how things might play out going forward. But, I mean, if we start to see the productivity gains that we're forecasting, expecting, Yep.
Starting point is 00:56:49 What's the dynamic that will allow the economy to digest that without having a real difficulty with it? You know, without job loss and that short circuits the economy's ability to grow. Yep. How is that going to work out? So, I mean, I guess, you know, TFP is holding fixed input. How much output do you get? And so I think one question is, is that productivity predicated on substitution
Starting point is 00:57:22 or complementing the capabilities that people have? What we see in the data right now, and this might not hold, I mean, the models are improving very quickly, and the scope of what they can handle reliably and autonomously is improving. But right now it looks more like there will be uneven implications,
Starting point is 00:57:43 and perhaps it may even be a skill-biased technology that amplifies and reinforces certain types of expertise. We were talking earlier about it's broadening out the scope of what I'm able to do. My expertise helps me complete this memo in two hours, which would have taken me two days. That looks like there's greater productivity for those who have the requisite expertise. There are some jobs, however, where Claude is very good at handling the most central task, and we are already beginning to see Claude being used for that
Starting point is 00:58:23 by businesses in the API. Data entry-type tasks, for example, or customer support, for example, are jobs that might have a greater risk of displacement, even if the overall labor market is on shore footing. And I think the complementarity productivity lift, you know,
Starting point is 00:58:53 you could have an increase in GDP without an offsetting decline, without a decline in employment overall. But there could be a lot of turn under the surface. Got it. Got it. Okay. Can I ask a question about the sort of the demographics of what you looked at, Peter? You mentioned young workers, people just entering the labor market that there's some evidence that hiring has slowed. Curious if you looked at the opposite end of the labor market, so older workers in these highly exposed occupations, is there any evidence that maybe they're leaving the labor force or retiring quicker or anything like that? Oh, that's really interesting. We didn't look at that margin of labor market exit. I don't, so I don't have a strong view there. One thing I
Starting point is 00:59:44 will say is in this survey of 81,000 people that Claude sort of did, the people who are, well, two things. One, this observed exposure based on how people and businesses are using Claude in the data is correlated with worries about job displacement for people who took our survey based on the occupation that they report. So if they're, if they say that they're in a job where we see Claude being used for automated purposes, they seem to be more worried about losing their job in the next, sort of in the future. And then if you break that down by who's most concerned by how long they've been in the labor markets, so like early career workers versus later career workers, early career workers are much more concerned about labor market displacement. Okay. And this
Starting point is 01:00:41 actually is somewhat consistent with we compared our observed exposure by occupation against the BLS's forecasts of which jobs will grow or decline over the next decade. And a lot of other factors are at play in those sorts of forecasts, but nevertheless, they're negatively correlated. So higher observed exposure today seems to be correlated with the sorts of jobs that the BLS anticipates will wane as a share of employment. overall. And so perhaps we're picking up on some structural transformation, whether or not that shows up in terms of aggregate unemployment is, I think, a little less clear and partly depends on whether it's outright substitution and automation versus augmentation and enhancing the demand for
Starting point is 01:01:32 increasingly valuable labor. I want to just mention that one of the papers that, I wrote with Adam, Adam Osmek, and Dante De Antone, one of our colleagues, was assessing the impact of aging on productivity. And we found that the older the workforce, the bigger the weight on productivity gains. This was based on ADP data, so we had a lot of good granular data. And we explored two possible theories. One, we call the wise man theory, where if the older workers leave, they're taking institutional knowledge with them, and the organization that's left is diminished, and productivity is weaker. The other was what we call the albatross theory, and that is that the older workers are not,
Starting point is 01:02:26 don't adopt technology quickly, and they don't allow the rest of the organization to adopt that technology because they're sitting on top of the organization. So you want to guess which theory is more right, the wise man or the albatross? Oh, man, I should have read every paper you wrote so that I came prepared. Quick-ass-clod. You should be a quad to me. It was. It was, okay.
Starting point is 01:02:57 It was the albatross theory. Yeah. I mean, I guess that's kind of consistent with sort of who has historically been most adept at navigating technological change and transformation. It's younger workers. And the, yeah, I mean, this is an incredibly uncertain labor market to have graduated into. We had the largest non-recessionary labor market slowed down, a globally coordinated in some sense monetary tightening against the backdrop of all sorts of macroeconomic volatility.
Starting point is 01:03:31 When the economy has more uncertainty and firms are perhaps less eager to make certain types of investment, including investing in younger workers, that tends to hit those people harder. And then you throw AI into the mix. But I guess like the silver lining perhaps is what you're alluding to, which is younger workers are most capable at figuring out how to adapt and be creative with new technologies and maybe even be more willing to try new tools. Well, we've kept you an hour, hard to believe. The conversation could go on forever. I thought maybe that we could end this way, and I'm not sure if it's a fair way, and you tell me if it's not. So there's lots of different concerns about AI, you know, everything from overvaluation in the equity market,
Starting point is 01:04:25 overbuilding, leverage is accumulating for the buildout of the infrastructure to cyber issues, worries about terrorism, social media, impersonation. I mean, you can go on forever. You know, there's a lot of concerns. So two questions. One is, if you had to pick one of those things that we should be worried about, which one is it that we should be most worried about in your mind? And secondly, what is the one of the benefits of AI that we're just not, people who just aren't talking about or thinking about that, you know, should be on people's radar screens? Because there's a tremendous Men is, you know, don't get me wrong. I think AI is really very, very, it's critical.
Starting point is 01:05:08 We need it. Bring it on because we need those productivity gains. Labor Force growth is coming to a standstill given demographic, the aging and immigration policy and everything. So we need the productivity gains. But is there something we're some positive, potential positive out there that we're just not talking about? Again, these may be unfair, but I'll try anyway. Yeah. So on the first one, I'll not kind of keep it to something that I think is under-discussed,
Starting point is 01:05:37 but is an important aspect of how AI adoption sort of interacts or has implications for the overall economy, and that's how it interacts with the business cycle. So it might not be the case that AI, suppose AI does not cause a rise in unemployment. There are other things, other shock. that buffet the economy that could push us into a downturn. And we know some evidence from the Great Recession that businesses may take that as an opportunity to restructure operations
Starting point is 01:06:15 and invest in new technologies to be prepared for the other side of the downturn. So the classic paper here is the Hirsch-Bind and Khan do recessions cause skill-biased technological adoption? And so here I wonder if, suppose we do go into a recession, does the availability of AI, which can automate so much, so many types of cognitive work and the models will be improving, does this accelerate adoption and serve to prolong or even amplify a shock should one materialize? And this also has implications for who may end up struggling the most. in a downturn, should one materialize. On the other hand, if the economy is at full employment,
Starting point is 01:07:07 there are appropriate price signals. This is kind of a lesson that we learned during the pandemic, is that people will reallocate to where there is demand, and there will be opportunities. So even if AI ends up displacing some form of work, there are other opportunities elsewhere within the economy. And so in this sense, just having good macroeconomic policy can itself be a good guide to sort of getting us on the right path to this transition that we're on. The, no, the second thing was like an under-emphasized.
Starting point is 01:07:50 Yeah. Yeah, I mean, there's a lot of good that's coming out. Is there something out there that you want to call out? Yeah, maybe I would just end. emphasize that this like automation of innovation is not just about TFP growth. It's also about improvements in living standards, health and wellness and scientific innovation, things that will materially and measurably improve the quality of our lives, that the technology could help us arrive at solutions to problems that have proven vexing to us and would otherwise take
Starting point is 01:08:26 maybe a century to achieve. And so we shouldn't just think about the benefits in terms of this GDP, but also the other many great things that do correlate with GDP, which is improvements in human flourishing and health and science and so on. Well, Peter, we've taken up a lot of your time. I really do appreciate you spending it with us. And we're very much looking forward to all the good work And again, we're a huge fan, so keep it coming. Don't raise the prices, I'm just saying, you know. Just personal favor, you know, maybe you can put in a good word. But we love, I love what Claude, what Anthropics is doing, and really appreciate it.
Starting point is 01:09:16 And thank you again for coming on. It's been a real joy. Thanks for all the great questions and discussion. All right. Great. And with that, dear listener, we are going to call this a podcast. Take care now.
