Your Undivided Attention - AI and the Future of Work: What You Need to Know

Episode Date: December 4, 2025

No matter where you sit within the economy, whether you're a CEO or an entry-level worker, everyone's feeling uneasy about AI and the future of work. Uncertainty about career paths, job security, and life planning makes thinking about the future anxiety-inducing. In this episode, Daniel Barcay sits down with two experts on AI and work to examine what's actually happening in today's labor market and what's likely coming in the near term. We explore the crucial question: Can we create conditions for AI to enrich work and careers, or are we headed toward widespread economic instability?

Ethan Mollick is a professor at the Wharton School of the University of Pennsylvania, where he studies innovation, entrepreneurship, and the future of work. He's the author of Co-Intelligence: Living and Working with AI.

Molly Kinder is a senior fellow at the Brookings Institution, where she researches the intersection of AI, work, and economic opportunity. She recently led research with the Yale Budget Lab examining AI's real-time impact on the labor market.

RECOMMENDED MEDIA
Co-Intelligence: Living and Working with AI by Ethan Mollick
Further reading on Molly's study with the Yale Budget Lab
The "Canaries in the Coal Mine" study from Stanford's Digital Economy Lab
Ethan's Substack, One Useful Thing

RECOMMENDED YUA EPISODES
Is AI Productivity Worth Our Humanity? with Prof. Michael Sandel
"We Have to Get It Right": Gary Marcus on Untamed AI
AI Is Moving Fast. We Need Laws that Will Too.
Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins

CORRECTIONS
Ethan said that in 2022, experts believed there was a 2.5% chance that ChatGPT would be able to win the Math Olympiad. However, that was only among forecasters with more general knowledge (the exact number was 2.3%). Among domain expert forecasters, the odds were an 8.6% chance.
Ethan claimed that over 50% of Americans say that they're using AI at work. We weren't able to independently verify this claim, and most studies we found showed lower rates of reported AI use among American workers. There are reports from other countries, notably Denmark, which show higher rates of AI use.
Ethan indirectly quoted the Walmart CEO Doug McMillon as having a goal to "keep all 3 million employees and to figure out new ways to expand what they use." In fact, McMillon's language on AI has been much softer, saying that "AI is expected to create a number of jobs at Walmart, which will offset those that it replaces." Additionally, Walmart has 2.1 million employees, not 3 million.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Starting point is 00:00:00 Hey everyone, this is Daniel Barcay. Welcome to Your Undivided Attention. No matter where you sit within the economy, whether you're a CEO or an entry-level worker, a software engineer or a teacher, everyone's feeling pretty uneasy right now about AI and the future of work. Unease about our career progressions, about what our job might look like in a few years' time, or quite frankly, whether we're going to be able to find a job at all. You know, all of this unease, this fundamental uncertainty makes it really hard to plan for our future. What should I study in school?
Starting point is 00:00:35 What new skills do I really need to grow my career? Will my work be supercharged by AI, or will AI replace my job entirely? And do I have enough certainty to really buy that house or start a family, or should I be saving to weather the storm? Doing good work and ultimately living a happy life depends on having some predictability, some stable understanding of our place in the world, and AI has injected some serious uncertainty into that picture. And many of us feel caught in the middle of some strong narratives. On the one hand, rosy visions of our creativity being unleashed at work
Starting point is 00:01:08 and on the other, some pretty dire warnings of being replaced entirely. So today we're going to try to cut through some of that confusion. We're going to look at what's already happening in the labor market right now and talk about what's likely coming in the next few years as this technology becomes more capable and more embedded in the workforce. And we're going to ask the crucial question, how do we get this right? Can we create the conditions for an AI economy that really enriches our work and our careers, or are we headed towards a much more unstable economic future?
Starting point is 00:01:38 Our guests for today are two economists who've been paying very close attention to how AI is already changing the nature of work. Molly Kinder is a senior fellow at the Brookings Institution, where she researches the impact of AI on the labor market. And Ethan Mollick is a professor at the Wharton School at the University of Pennsylvania, where he studies innovation, entrepreneurship, and the future of work. He's also the author of Co-Intelligence: Living and Working with AI.
Starting point is 00:02:04 Ethan and Molly, thank you so much for coming on Your Undivided Attention. Thanks for having us. Glad to be here. So I want to start our conversation today with a snapshot of how AI is already impacting the labor market in the fall of 2025. Molly, you recently worked with the Budget Lab at Yale, and you put together a report to try to do exactly that. What did you find? Great. Well, first let me say why we took on this report. So I think we are in a
Starting point is 00:02:31 moment of national anxiety. People are very worried about the impact of AI on jobs. And because of a lot of the very sensational headlines, it can often feel like we are already in the midst of a jobs apocalypse, that already the labor market is being dramatically disrupted and people are losing their jobs left and right. That is often how we feel in the moment. So I teamed up with the Yale Budget Lab, Martha Gimbel, Joshua Kendall, and Maddie Lee. And we did a really deep dive into labor market data to ask the question: since ChatGPT's launch three years ago this month, have we seen economy-wide disruption to the labor force? And this was really trying to ground where we are today in the data.
Starting point is 00:03:21 And the headline is surprising to many people, given the sort of state of national anxiety we find ourselves in. Overall, we actually found a labor market more characterized by stability than disruption. So what we did was we looked at jobs based on exposure to AI. Are we really seeing the mix of jobs moving away from the sort of more exposed jobs to less exposed jobs? And the headline is we really aren't. We're not seeing evidence of true economy-wide major disruption. Now, it's important to note that doesn't mean that AI has had zero impact on jobs. Absolutely, there could be some creative jobs, some coding jobs, some customer service jobs that have been negatively impacted. Our methodology is not meant to look at very granular jobs. It's really looking at zooming out across the entire
Starting point is 00:04:19 economy to say, are we really seeing major disruption? And for us, our answer was no. With one very important potential caveat: our data did see some disruption to the youngest workers, so to early-career workers. It isn't clear from our data whether that's because of AI, and whether some of those disruption trends predate the launch of ChatGPT, but certainly we are seeing more occupational churn amongst the earliest-career workers, which resonates with some recent data out of Stanford that did find elevated unemployment amongst young people. And that was the "Canaries in the Coal Mine" study, right? That's correct, yeah.
Starting point is 00:05:06 And I'm really curious about your take on that, because when I read the Canaries in the Coal Mine study, I came across this very different picture that said there are some really strong early warnings of labor displacement. And yet you seem a little more muted in what you want to say about the economy. There actually isn't a lot of daylight between our paper and the Canaries paper, except maybe some of the newspaper headlines that framed the findings. If you look overall at that Canaries paper, they did not find any substantial labor market change with that ADP data since ChatGPT's launch for any age group
Starting point is 00:05:46 other than the earliest workers. So actually, if you zoom out and just take a snapshot of the overall labor market, and not just that segment of 25 and younger, their finding is the same as ours, which is we're really not seeing much of a discernible impact, broadly speaking, on the labor market. They found quite sizable impacts on young workers in AI-exposed careers. But our data does not counter that. What we can't say, though, is whether or not AI is causing that. I don't think economists have yet teased out exactly the isolated effect of AI versus other economic impacts, say the uncertainty in the economy, interest rates, tariffs, cyclical changes like the overhiring of coding workers during the pandemic. So there's a lot of factors that are playing into the weak job market for young people.
Starting point is 00:06:39 I believe AI is contributing to the picture. It's just we have to be a little careful about suggesting all of it is from AI when there are other factors. And Ethan, you know, your work looks at more of the nuts and bolts of, like, workers and organizations with AI. What do you make of this? I would be absolutely shocked if you saw a large-scale impact immediately. I think things have been changing very rapidly in the last four or five months. But I think that in terms of actual impact, that would be kind of surprising. Now, that being said, what we are finding from study after study, my co-authors' work and other people's,
Starting point is 00:07:15 is that AI has a broad impact on productivity and performance, on creativity and innovation. Basically, any job that you take that is a highly educated, highly creative, highly paid job, there is an overlap with AI, right? And overlap means transformation at a minimum. And we're starting to see that stuff happen, right? So we do have pretty strong beliefs that AI is going to be transformational. I don't think the macro patterns would pick up something very large yet. Even if you just take something like coding, for example,
Starting point is 00:07:45 it really took until a cursor introduced agenda coding in 2024, and we now have some data just came out from paper that there's 39% improvement in productivity from getting that. So, like, everything we have is that early AI models were much less impressive from a productivity impact and for all the economic impact, and I think we'll see that in the future, not today. And I would just add to Ethan's point, when you look at history, this is not surprising at all. In our paper, we actually compare these first nearly three years of occupational change since ChachyPT's launch to previous waves of technology. So the computer and the internet, they're on a very similar trajectory. And there's lots of reasons why there is a gap between the speed of the technology and how much it's really being adopted in the workplace, which I think that gap.
Starting point is 00:08:36 right there is responsible for a lot of the more muted early impacts. So I think that's really important for people to get. You're saying that, you know, people are much more exposed to this transformation than we're currently seeing happen in the job market. Yes. So there is a very large gap between the exposure of occupations and sectors to this technology and the actual usage in the workplace. So what we see when we look across sectors at usage,
Starting point is 00:09:06 is highly uneven. There's a handful of sectors that are way out in front with very widespread adoption. Ethan has been very thoughtful in reflecting on this. I mean, there are some sectors where there's not a lot of friction. It's really easy. I mean, in research, I can just turn to ChatGPT deep research. There's no friction. There's no regulation. It's easy for me to do. Coding, it's very easy to turn to Cursor. There are other sectors where there's a lot of friction, whether it is skittishness about privacy; in health care, even some in finance, a lot of companies are worried about their proprietary data. And so there's just very highly uneven usage. So even within the jobs that are, quote unquote, exposed, at least at a sort of medium-to-high level, we are really not yet seeing the potential of the disruption realized because of these lags in usage and also lags in sort of technological
Starting point is 00:10:03 quality. You know, Ethan, a few years ago, you introduced this concept, the jagged frontier, to kind of talk about this, about the sort of different capabilities that AI has. Can you walk us through what the jagged frontier is and how that helps us think about this? Sure. I mean, so the idea of the jagged frontier is that AI is good at some stuff and bad at some stuff, in the most basic way, right? And that's hard, a priori, to know, especially if you don't use the systems a lot, right? So in the early days, when we talked to GPT-4, we would have said math is a weak spot, right? The AI hallucinates math all the time, or citations is a weak spot. And so what are the implications of that that you've seen in how people are
Starting point is 00:10:38 using AI in their jobs? Well, I mean, one thing is the frontier is filling in and expanding. So Phil Tetlock and company have this forecasting group, and they get a bunch of experts together to forecast the future. In 2022, the year that ChatGPT came out, the forecast was that there was a 2.5% chance that AI would be able to win the International Math Olympiad by 2025. And not only did two models do this, OpenAI and Gemini, but they won it with pure LLMs, right? The thought was you would need a large language model using some math tool. Nope, it turns out now we've figured out how to make LLMs good at math. And so a whole frontier that used to be very bad at math, they're now PhD-level in many, though not all, cases at math.
Starting point is 00:11:22 I'd really love to make sure we ground people in how this is playing out in the world. So with that jagged frontier, how is that affecting the way people are using it now in corporate environments? So, AI still has some strengths and weaknesses. Some of those are the models themselves. Some of those are the interfaces we use to talk to those models. And as a result, there are these gaps of things AI can't do, right?
Starting point is 00:11:47 I mean, obviously some of that is it doesn't have legs and won't walk across the room. But also, there are capability gaps that appear in any job, right? That means that if you're highly exposed to AI, you still probably have a couple things that the AI cannot possibly do, because it is either not built for it or the models aren't good enough yet,
Starting point is 00:12:04 and that changes how it's used. The goal of the AI labs is to fill in those gaps or to push the frontier past the point where your error rate is lower than a human's, so who cares? So I have a strapline that if you can do your job locked in a closet with a computer, you're far more at risk in the future with AI than if you can't. It actually is kind of the opposite of the pandemic, where the jobs that had to be in person were sort of at risk from COVID, and those of us in white-collar jobs who could work from home were safe. It's kind of the reverse now. If your job
Starting point is 00:12:38 really can be done sitting in a closet with a computer with no human interaction, that's a much more problematic job. But we aren't there yet. I mean, I think a really major deterrent to widespread adoption in the workplace has been the fact that these models still mess up. Or the idea that you still need a human in the loop to oversee it. But I don't know if that's true. I don't know if that's true of the current models that are out in the last month or two. I don't know if it's true
Starting point is 00:13:07 so much of the pro-in-thinking level models that are out there. I think people talk about models messing up and then they're using CHAPT-5, which is a router and it often puts them to a dumber model, right? I don't actually think that that is well documented at this point, that the mess-up rate is that high compared to humans or the hallucination rate is still where it was. And I think when we say the models mess up,
Starting point is 00:13:29 we're making this assumption that it's like a year ago. If you're using a weaker model, absolutely, you're going to get hallucinations and mistakes. I'm not sure that that is present with the current generation of technology coming out right now. But regardless of the state of the conversation, Ethan, this has led you to write a lot about how people are hiding their use of AI, right?
Starting point is 00:13:48 I mean, people may be afraid that they're going to get risk slapped for it, maybe using AI in the workforce, but are actually, you know, writing it and saying, no, I'm not using AI. Can you talk about what you're finding there? Yeah, I mean, I think that there's a whole bunch of reasons, right? Let's go back to the main thing that people talk about using AI for and somewhat were to blame because we kicked off the discussion about productivity with our early research. But if you think about it, right, let's say that you are using AI at work. AI is very single player right now, right? Like, I work with an AI system.
Starting point is 00:14:17 We're just barely in the days of, like, how do we build a system for the entire organization? So it's very much an individual worker using it. Now, think about their incentives, right? First of all, they look like geniuses right now because they're using AI to fill their gaps. Do they want everyone to know they're not a genius, that it's the AI that's the genius? No. Second, there's an AI policy in place that usually is based on an old understanding of what AI could do, often data fears that aren't really an issue anymore, but it means that you get fired if you use AI wrong.
Starting point is 00:14:41 So no one's going to show using AI. Or they didn't even know who to talk to if they're using AI to. So who would they show they're using AI to? So AI use the sort of secret cyborg phenomenon I talk about is ubiquitous, right? We know over 50% of Americans say they're using AI at work. in at least in the survey data, which you can have doubts about one way or another, they're claiming that in the one-fifth of tasks they use AI for in these surveys, they're getting three times productivity gain.
Starting point is 00:15:04 And then even assuming you get that productivity gain, let's say I can now produce PowerPoints in one-tenth of the time, the bottleneck becomes process. What do I do with 10 times more PowerPoints? Or even more directly, coders are more productive. We have not built a replacement for agile development, which is what people still use to code. How do I have a two-week sprint where my coder is 100 times more productive?
Starting point is 00:15:23 Like, what do my daily stand-ups look like? How do I change our work operates? What are the barriers? So, like, I think the technology is being adopted very quickly. I think people are seeing very big productivity impacts individually. I think the question is, how do you translate this to organizational ones, is partially not just an economic and process one, but also motivation. Personally, I use AI all the time in my job.
Starting point is 00:15:45 Not because my employer told me to or even really encouraged it. I'm just finding so many ways it's enhancing my research, saving me time, making me more productive, and really enhancing my thinking, very much, you know, in the spirit of Ethan's book, sort of this co-intelligence. But, you know, when I look at my own institution, we haven't fundamentally re-engineered any of our workflows across any of our divisions because of AI. And it's very much up to individuals to adopt and find sort of individual tweaks from it. So my gut is that as organizations figure out how to really embed this technology and not just count on individuals using ChatGPT, but really embed the API and sort of re-engineer
Starting point is 00:16:30 their workflows. That's where you might see not only more productivity, but also, frankly, more labor displacement as well. I think you both seem to agree that we're not seeing massive transformations yet at the macro level, but you're also saying that we need to be watching for early signals of that transformation. So, like, what should we be looking at? Like, what would be the canaries in the coal mine that this transformation is starting? Yeah, I mean, we say in our paper that our methodology was very purposely broad and big. It could catch if the house was on fire, not if there was, like, an individual stove fire in a small room. The methodology of looking at
Starting point is 00:17:13 the labor market broadly is not going to pick up the early canaries. I think the headlines have been so sensational. They have instilled far more fear than is justified. And that could be its own self-fulfilling prophecy. Companies are looking over their shoulders. They're hearing all about these layoffs. They're thinking, should I be laying off employees? Should I stop hiring? And that can feed on itself. So I think we need to have a grounded sense of really where we are. But I think the reason why I do my job at Brookings is that I thoroughly believe in the transformative potential of this technology to reshape work. And I don't think that where we are today is necessarily where we're going to be tomorrow. I think it's imperative that we track very closely
Starting point is 00:17:53 the labor market impacts, especially in some of the sectors where we saw the greatest adoption. What about the early movers? What about customer service? What about coding? I'm looking at finance. I'm looking at marketing. What's happening with early career workers? That's where the greatest noise is right now. So I think the public should be reassured that we are not in the midst of a jobs apocalypse, but we should be very concerned that this is a technology that will reshape the workforce, and we have to stay vigilant about it. I would add something else, right? I think if I had a problem with this conversation,
Starting point is 00:18:27 which has been really interesting, the problem of the conversation is it makes the technology external, right? This is a thing that's being done to us, and its consequences are inevitable and destructive, and that's it. And I don't think that's necessarily the case. I think we have agency over how this stuff is used, and the AI labs are still trying to figure this stuff out. I talk to them all the time.
Starting point is 00:18:47 And you, by the way, look at the differences in announcements from, say, Walmart, where the CEO has said, my goal is to keep all three million employees and to figure out new ways to expand what they use, right? And you could say, are they going to do it or not? But that's the statement, versus Amazon, that might be like, we're going to get rid of as many people as possible. There is this chance to show a model that works, right? The fact that everybody has a consultant at their disposal might have an impact on consulting jobs. Maybe that actually superpowers all the jobs where management was lacking.
Starting point is 00:19:13 The fact that a product manager can now do coding and do some prototyping can expand what we do. The fact that this tool works for innovation makes a difference. And I think that it's up to people in organizations to figure out how this is used, and there's competing models of use. And I think that it would behoove us to spend more time thinking about what the twist is going to be, what we want this to be used for, rather than just inevitably talking about, important as it is, how job loss is inevitable, we haven't seen it yet, but don't worry, everyone's going to lose their job soon, so that's the direction we want to go.
Starting point is 00:19:46 I mean, you contrasted Walmart with Amazon, and you're saying, okay, we want to be in a world of much more creative management, of much more creative understanding about how we can all play a part. But I'm not convinced that that's the world we're going to end up in. My worry is that these sort of beautiful stories about AI unleashing our productivity
Starting point is 00:20:01 are going to actually feel relatively short-lived as eventually entire job functions get replaced and the pressure is to just do away with them. Are we pulling up the job ladder underneath us? Are we removing all these entry-level positions? In a lot of ways, this conversation really has the same story behind it as every other conversation about AI, which is that what we're really asking is how good will the models get, and how fast, right?
Starting point is 00:20:24 I think that the GPT-5 class models are good enough to transform all of work, but they will transform it gradually over the next, you know, 10 years as people figure stuff out, which is enough of a chance to say, you know, what should we do differently? And by the way, part of the reason why you might not want to just turn productivity gain into job loss is that if your productivity gain is the models doing the work, all your competitors and every person in the world have actually the exact same models as you. There's like nine AI models that matter in the world right now. And with these nine models, there's no sort of competitive advantage in the long term
Starting point is 00:20:53 in having the same AI as everyone else run your decision-making process. So there might be reasons you want to still have things done by humans or differently. But the bigger question that we're all asking is, how good do these models get, and how fast? And the goal of every AI company is AGI, artificial general intelligence, a machine smarter than a human at every intellectual task. They think they will get there in the next two years. Some people already think they're there. But you could see that may not transform jobs overnight.
Starting point is 00:21:19 But, like, that is the question. If models are better than humans across a wide variety of tasks, then it's a matter of time, and we have to figure out what everyone does with their lives. If that doesn't happen, if the technology stalls out
Starting point is 00:21:30 or the jagged frontier is too jagged, then we're in a world where we're going to see competition between people who use AI as augmentation and as automation. I think augmentation will often win over automation, but we don't know yet. And that's really the big question. So let me back up for a second, which is, you know, one of the things we often cover at the Center for Humane Technology is that people radically underestimate both how transformative, to the good and to the bad, a technology is. And they come with simple narratives about what this is going to do.
Starting point is 00:22:03 And then we're surprised five or ten years later, when the technology was so much more complex than we thought, that we drove the car into one or the other ditch by the side of the road, that we didn't stop to imagine what this would do to our world. And I guess what I'm trying to ask you is, like, if we look at the next few years, what are the transformations that are going to be surprising to our labor market, that you two will understand, but that people won't have thought about? On my end, I would say, I think that people are underestimating the level of quality of work that these systems can produce. And I'm partially at fault. Like, when I wrote Co-Intelligence, intern was the right analogy to use for AI. It is not working at intern level anymore.
Starting point is 00:22:44 And I think that one of the things that will blindside people a bit is how capable these systems are. I am now getting fully automated papers out of these systems that I would be impressed by a second-year graduate student producing. Right? We're not there yet in replacing me as a professor, but, like, if you had told me I could get a high-quality academic paper, or that if I throw something into GPT-5 Pro, it finds errors in my papers that 10 seminars and the review process
Starting point is 00:23:08 and a thousand citations since have never located before. Like, the changes to high-level, high-intellectual-level work, I think, are more than people are expecting. And then I think the big bet, the possibility one way and the other, is agents, which just in the last four months, for a variety of really interesting reasons, have just started to work. And the question is, are they going to get as good as people think? Because then it becomes very different when I can just say to the AI, hey, go through my email, figure out what my priorities are, email our top sales prospects that I haven't paid attention to, go back and forth with them, build the customized products and proposals they need, and just take care of stuff. Like, that is what the labs are aiming for. And if we're there, that's a very different change.
Starting point is 00:23:49 Then I turn to the AI and ask it to write the proposal. It's not a good proposal. I ask it to change the proposal again. And then I check my email, because the AI can't check my email. And then it misses some of the context of who the person is. I think people are not expecting models to get as good as I think they're already getting. But all that leads me back to the question we started with at the top, which is, I think I'm afraid of this notion of a gradual labor transition, that we're going to wake up one day and say it's at 10%, then it's at 20%, then it's at 30%.
Starting point is 00:24:15 That's not how this is going to be. We're going to wake up one day to realize that the connective tissue between doing these different tasks that make up our jobs, suddenly an AI can do it, and suddenly an entire function is automated. Aren't we likely to see these big punctuated changes where, you know, radiologists are safe right now because they're overseeing the AI, and all of a sudden you wake up next month, and you know what? We don't need radiologists anymore. Yeah, if the agent stuff works the way the AI labs want, and there's lots of ifs in that statement, right, that we could talk about, if it does, then, yes, it will be slowly and then all at once.
Starting point is 00:24:52 Because the problem with substitution is everything we're talking about with the process, right? Like, you know, if the system isn't very good, if you have to do a lot of work building custom solutions, if you have to ask career people to replace themselves with AI, you're going to have all sorts of forms of resistance. But if I can just go ask an AI agent, do this task, figure it out, then we have a very sudden change.
Starting point is 00:25:13 And that is the world that people are aiming for, right? And so, again, you know, we don't know. And Molly, how does that affect your work? Like, what do you think? I think this notion of a drop-in remote worker, via an AI agent, is what is driving fear in people. Because that is unbelievably disruptive. If the AI labs can truly create an agent that is literally just drop-in,
Starting point is 00:25:39 one that now is, like, covering certain functions and is basically my virtual teammate, that vision is extremely disruptive. Personally, I think we are overestimating how quickly that's going to come and how many bottlenecks there are that are very much about interpersonal systems. I mean, most of our jobs don't look just like coding. And I think there's a reason why coding is out in front. The real world is far messier. When I sit in Washington, D.C., I often work out of a Le Pain Quotidien on Capitol Hill. And I'm surrounded by lobbyists and people whose whole world is relationships and influence. And when I go to Silicon Valley, they live in a world of coding where, you know, it's just very different. There's many aspects of our jobs that I think are not going to be so easy to replace with a drop-in remote worker. So I don't have the same AI
Starting point is 00:26:21 2027 fear that we're staring down a year from now. But I agree with Ethan that, typically, I expect this to be more gradual than what you're hearing from Silicon Valley. But there could be pretty dramatic punctuations. If agents get really good, I think it will start moving a lot faster. You know, the other thing I would say is, I totally agree with Ethan and you, Daniel, that I think the public in many ways is underestimating how good these models are getting at certain very skilled, highly cognitive tasks. You know, when ChatGPT deep research came out,
Starting point is 00:27:11 that is my job. So I had that experience of this moment, what Ethan talked about in his book. I felt it. I mean, my hair is standing up on my arms right now, because I had that out-of-body experience when I got access to it. I asked it to write a paper I have wanted a famous economist to write for years, which is: what can we learn, positively, from the last few decades of technology, automation, and women, because women have been a lot more resilient than men? So I gave it a bunch of really high-quality papers and some people it should draw from. The paper that ChatGPT put out was so well done. I've shared it with lots of extremely influential economists as my example of how good this is.
Starting point is 00:27:51 And this is going to creep up in so many different, very expert, high-quality knowledge jobs. And that, for society, is dramatic change. Just a few years ago, if I had been on this podcast before ChatGPT's launch, which was three years ago this month, I never would have identified these highly skilled, highly cognitive roles as being susceptible. So I still think, in the real world, it's going to be slower, like to your point about radiologists. I actually think it's going to move slower to fully replace humans in some of these roles. But businesses are going to be disrupted, sectors are going to be disrupted, roles are going to be disrupted. It's going to be uneven, but it will happen. And I think what instills fear in the heads of
Starting point is 00:28:37 so many Americans is this sense of Russian roulette. Are you going to be the person that's going to wake up one day and there's a version of ChatGPT deep research that can do your job? And I think that's terrifying to people, this sense that these are careers people have spent a lot of money and a lot of time on, their education, years of experience. And I think people feel quite vulnerable. But again, the sort of caveat to that is, I don't think we are facing down, in two years, PhD-level drop-in remote workers that are going to substitute for most of us. But I have three kids. My oldest is 10. So when I look out, I think 10 years from now is when he's in college. Like, this is still in the lifetime of a lot of us, especially those of us with kids. Like, where this could go
Starting point is 00:29:22 could be mind-boggling. But I think we should feel some comfort that tomorrow our organizations are not going to be full of drop-in remote workers. And I agree. I mean, I feel like what ends up happening sometimes in these discussions, and I think, Molly, we're on the very same page, maybe more than with Daniel, is that there is this sort of view of, like, it's either all hype, or it's 2027 and there's superintelligent machines and we're all just going to be building machine pyramids or something like that for them. And I think that there's a tendency to swing to one side or another, and especially for people who are kind of rational people who study this field like us, to be on the side of, you know, the hype is overblown. The hype is off, almost certainly, but it's not off by as much as people think, and that doesn't mean things look normal in the near future. Right. Well, it's like the hype is overblown, but the skepticism is overblown, too. Right.
Starting point is 00:30:10 And the timeline is there. Like, there's enough value now in the models that people will figure out a way. Like, let's say there's a financial collapse of AI stuff. I'm not convinced that there's a bubble, but there could be a bubble; I don't have any idea. I don't think that matters very much, because I think a lot of people think that something is going to make this all go away, that we're going to hit some limit, and then AI is done for and we're going to work like before. So it's either you can ignore it, or you have to panic all the time.
Starting point is 00:30:33 And I think we are in the world's either best or worst place, which is: you have agency right now. Like, this is the time for policy intervention. This is the time for companies to show models of good use. But it is not a time where it's like either we're all doomed or, you know, we're all saved, right? I love that statement so much. And actually, that was partly the motivation of the research paper I put out with Yale. It was not to say there's nothing to see here. I very firmly believe that this technology has enormous capability, but it was to say, look, we have a moment to catch our breath and shape the way this is going to play out. I don't like the fearmongering coming from Silicon Valley in a way that strips us of our agency. This thing is coming tomorrow. There's nothing we can do
Starting point is 00:31:19 to stop it. It's this inevitable force. Every job loss is all about AI. This is coming for you, don't even go to college. I mean, this is sometimes the tenor of the conversation. Part of what we wanted to do with grounding the conversation, to say, today, we are not yet in a jobs apocalypse, is not to say it will never come. It's to say, let society catch its breath and let us steer this. Let us have agency, because this is not going away, and every day it's getting better. So we do have to make sure that we are steering it. And I think, again, a lot of incentives in the system are not steering us toward a sort of pro-worker vision. Earlier in this conversation, you said, Molly, that, you know, you're not worried about the
Starting point is 00:32:02 tech or organizations. You're worried about the wrong incentives. Pull us into that. What are the incentives that you're seeing? And why does it worry you? Yeah. So, first of all, I worry that we are spending an absolutely mind-boggling amount of money on investing in these systems. And one of my fundamental worries is, are investors expecting an economy with a lot of those drop-in remote workers? So just to be clear, you're saying that because trillions of dollars have been poured into this already, there's an expectation of getting a return on that capital, and that expectation could become turning the screws on business models, turning the screws on workers. Is that what you're saying? These are decisions that are going to be made at the employer level.
Starting point is 00:32:51 It is going to be the decision of employers to decide how much this is going to be used to get more out of your workers, to augment, to unleash new possibilities to grow, versus simply a cost-cutting exercise and a race to the bottom. And my worry with a lot of the sort of pressure on the C-suite is, we've got to show, in the short run, some return on our investment. And one of the quickest ways to get there is this kind of race to the bottom with labor savings. And then when you see, you know, Morgan Stanley coming out with, here's the potential return on all this investment, and it's a huge number, and a lot of it, over half of it, was coming from labor savings, it does make you question, what are the incentives of this? And are we operating in a world where, you know, if you take a long view, these employers are going to need to train up their future level threes and level fours, who are going to be able to do things that technology can't do? But are they just, you know, thinking about their short-run costs? So let's cut our entry level, and be damned if this means that in three years we're not going to have a pipeline of talent. So some of these incentives, I worry, are going to push us into a world that is not optimal for workers, and might steer us into a world where we see pretty phenomenal inequality. Who benefits from this technology
Starting point is 00:34:10 and who doesn't? I think this is really what keeps me up at night. Ethan, do you see the same picture? Yeah, I mean, I think that that's a wise point, which is what the incentives are, leaving aside bubbles or not bubbles. On the other hand, I do think that if you talk to the AI labs, they still view this as: scientific research gets accelerated, and it's abundance for everybody. We just don't have a path that leads from where we are now to abundance for all. There's a policy decision to make. What does that look like, right? There's just the fact that even if everything works out great, living through an industrial revolution historically sucks, right?
Starting point is 00:34:46 Like, you know, it was a tough time in the early parts of the Industrial Revolution. Lifespans fall before they go up again. And so I don't know the model there, but I do think that there is concern about a gentle pathway. Like, there's a lot of attention paid to hard takeoffs of technology. I think that one thing Molly's pointing out that we should be talking more about is sort of hard takeoffs of automation, versus having a period with more competing designs for how we approach using AI, more humane designs. I have a feeling some of those will win, right? I think that there are more solvable problems with output than people think.
Starting point is 00:35:20 Like, I think that the bitter lesson is that if you want a particular output, AI is really good; you can teach an AI to do that output. But what if process matters, right? And so the answer to the Brookings problem, right, of is everyone going to do these reports with AI, is that a better report would be one where everyone debated with each other during the writing of the report; you'll end up with a better report in the end. And so the question is, how do we reestablish the idea that process matters,
Starting point is 00:35:42 interaction matters. And I think giving us more time to decide would probably be helpful. So given all these powerful incentives, right, these powerful cost-cutting incentives, these labor-replacement incentives, how do we shift them towards that future, Ethan, that you're pointing out?
Starting point is 00:36:16 Like, if we could design something different in policy, in the way that we roll this out in companies, in the way that people use it, what are the levers to end up with a better outcome? My self-serving view, from being in a university, is that this is the time that universities actually could be extremely helpful, because we might need to bolt on an extra session that is apprenticeship, but for knowledge workers,
Starting point is 00:36:53 which we always trusted to happen inside organizations. Maybe we need to treat, like, level-two consultants as if they were welders and have more formal training with testing and other stuff built in. We do know how to do that, but we'd have to shift the incentives to make that happen, right? I think the other example of this is more R&D effort now going into use cases that are positive for AI. I mean, I do a lot of work on education AI. It baffles me that there has been no crash effort to build the universal tutor yet. You know, as somebody who's done education for, at this point, 20 years building technology for education, there's a lot of cynicism in the education community about how technology works. But we actually have some early evidence that AI
Starting point is 00:37:34 tutors are amazing. And certainly for people who don't have access to enough schooling or something similar, we need crash programs like that. What's the crash program for how humans can work with AI workers? And I think the incentives are misaligned in that direction. I think a lot of academia and policy institutes, Molly aside, aren't taking this very seriously, that this is actually a big disruption. And I think that there is actually some intellectual lift required right now, to incentivize people to actually show, here's a way humans can work with AI to be better than the AI alone. And that's not happening yet. Yeah, it's really hard to come up with sort of big, bold ideas that can change the incentives. That has been my express mission; 2025 is a year of solutions. So I've
Starting point is 00:38:19 been batting around some big ideas. First, I would say, at a very high level, and I want to acknowledge my friend Stephanie Bell at the Partnership on AI, who has several times shared this idea with me: starting with benchmarks, every time we are talking about measuring AI, it's whether or not it's better than a human. Right off the bat, that steers us in the wrong direction. Why are we trying to best humans? Why isn't the benchmark some kind of combined measure, like making the human better? So right off the bat, I think we have all the wrong incentives when we're measuring the thing that is actually probably not good for society. Then you can imagine funds. Like, we've got DARPA. We have all sorts of federal money going toward innovation. Why are we not steering
Starting point is 00:38:59 that toward a new benchmark, where you can prove that the sort of output that you're aiming for is leveling up humans in some way? So I think you could imagine tying some sort of innovation funding to that. And that could be somewhere where I think the public sector can really make a difference. I think another area that is really important: I'm really thinking a lot about employers. When we think about how AI is going to impact work and workers, it's going to happen in the workplace. And so I think the question becomes, like, what are the incentives of employers, and what levers do we have to nudge in a better direction for workers? And that could be everything from more of a focus on augmentation versus automation. It could be sharing the gains. What
Starting point is 00:39:40 happens when there's big productivity gains? Are workers going to get paid? Are they going to get more time off? I mean, there's big questions around that. What are levers where we can steer in a better direction, and can public policy play a role? I've been working for a few months on a big idea that I hope to be publishing soon, around how we can change the incentive structure of employers vis-a-vis these entry-level hires. I mean, Ethan was saying, yes, I think certainly we can imagine a world where, with universities, you know, you can take on more schooling to get that apprenticeship, but then those costs fall to the young person. And what happens when the employers are the ones getting the cost savings and the extra profit from cutting? What kind of
Starting point is 00:40:38 incentives can we push, what carrots and sticks can we give, to make employers still do some of those trainings? I'm searching for credible visions that paint a pro-social version of that incentive. But I have to say I'm not very optimistic on that front. Well, Daniel, one of the reasons why I feel some pessimism is that we have something that I've documented with colleagues, the great mismatch. If you look at the sectors in the economy that have the greatest exposure to AI, meaning this is where we expect the greatest disruption, they have the lowest union density across the entire economy, typically 4%, 3%, as low as 1% in finance, which means 90-plus percent, 95-plus percent of workers in these sectors have no collective bargaining. If we lived in a country with more collective bargaining, if there were gains, you know, so workers became more productive because of AI and could do far more and could almost level up to a new role, you could imagine a process by which workers can figure out some
Starting point is 00:41:13 gain sharing. We don't have that kind of power in the workplace. And so either it is going to be left to employers to voluntarily take a high-road approach, and I will say we have no definition in this country of what it looks like to be a high-road employer on AI; we do for things like wages, but we don't have that high road yet, and I think we should develop that and get a consensus on it. Or is there going to be some public policy that's going to force this? Is there going to be, at some point, and I don't think we're anywhere close to this right now because of where we are with the AI trajectory, but could you imagine some, you know,
Starting point is 00:41:57 legislation that imposes something like a four-day work week? Or, you know, what are the mechanisms by which there is gain sharing? That is one of the sort of questions. I think right now so much of the policy discussion is either reskilling or redistribution through something like UBI. Nothing is about the work itself and how we make sure workers benefit as they become more productive with AI. And I think these are some of the big North Star ideas that I think we as a policy community need to come up with. We spent so much time in this conversation talking about AI's effects on people just entering the labor market, the ladder being pulled up behind us and everything. If you both had one piece of advice for people entering the labor market,
Starting point is 00:42:26 I want to start with you, Ethan, because you're a professor at Wharton. Your advice for your students at Wharton in an age of AI, what is it? My first joke to everybody who asks this is that they should go into a regulated industry that can't be changed, because of too much government oversight, or enough government oversight. But outside of that, I think that jobs that are bundled, where the bundle of tasks is incredibly diverse, covering a lot of different kinds of interaction,
Starting point is 00:42:42 right, so I think about doctor being one of these, right? You wouldn't expect someone to be equally good at hand skills and empathy and administration and diagnosis and keeping up with the research. That's a nice example. Professor, right, is one where, like, my job is clearly going to be disrupted, but, like,
Starting point is 00:42:59 a professor does many things. Many of our jobs are very complicated. So I think a single-serving job where you're doing one narrow thing, you know, writing a press release every day, is a much more risky job than one with many interactions with many sets of people at different kinds of levels in the real world. So there are a lot of good jobs like that.
Starting point is 00:43:15 That's really interesting. That's like, the career you should be taking is one of breadth and... I think so, because, what does this help you with? Also, you know, I'm an entrepreneurship professor. When you think about it, entrepreneurship is all about you're really good at one thing, and you hope that none of the other stuff
Starting point is 00:43:43 you're terrible at destroys you, right? And this is a great time for entrepreneurship, too, because the AI stops you from being at the 0th percentile on a few things. You would have been at the 0th percentile, and now you're at the 80th percentile of everything you're not amazing at. So I think jobs where you're held back by one or two skills might actually be really interesting places for the future, too, where, I'm not a good writer, but I'm incredibly good at working with people; maybe a sales job that I couldn't do before is now doable in a way it couldn't be. So I actually think it's like bundled jobs, complex jobs, jobs that have
Starting point is 00:43:43 many sets of skills where I'd be focusing. So I think for young people, be good at being a human. I think relational skills, you know, being influential, being able to kind of get up and speak and motivate and influence and connect with people is definitely something that AI is not going to be able, anything embodied like that. I don't think the AI is going to be very good at right now. I would also say AI has so many superpowers. embrace AI, find your passion, and make sure, again, you're as much flexing your humanness as it is being a vessel by which AI is going to make you powerful. You know, I think neither of your jobs is under threat right now, but both of your jobs are going to change wildly over the next few years. And I look forward to keeping up with both of you as we ride this wave.
Starting point is 00:44:38 Thanks for coming on. Thanks, Daniel. I really appreciate it. Thanks for having me. Your undivided attention is produced by the Center for Humane Technology, a non-profit working to catalyze a humane future. Our senior producer is Julius Scott. Josh Lash is our researcher and producer. And our executive producer is Sasha Fegan,
Starting point is 00:45:00 mixing on this episode by Jeff Sudaken, original music by Ryan and Hayes Holiday, and a special thanks to the whole Center for Humane Technology team for making this podcast possible. You can find show notes, transcripts, and much more at humanetech.com. And if you like the podcast, we'd be grateful if you could rate it on Apple Podcasts, because it helps other people find the show.
Starting point is 00:45:20 And if you made it all the way here, thank you for giving us your undivided attention.
