The Weekly Show with Jon Stewart - AI & The Future of Work with Daron Acemoglu and David Autor
Episode Date: April 22, 2026

As artificial intelligence continues to integrate into the workforce, Jon is joined by MIT economists David Autor and Daron Acemoglu, recipient of the 2024 Nobel Prize in Economics, to understand what the future might hold for American workers. Together, they explore lessons from past waves of technological change, examine what pro-worker AI could look like, and discuss what policies could help workers navigate an increasingly uncertain economic future -- and whether the incentives exist to achieve them.

This episode is brought to you by:

GROUND NEWS - Go to https://groundnews.com/stewart to see all sides of every story. Subscribe for 40% off the Vantage Subscription, only for a limited time, through my link https://groundnews.com/stewart

AVOCADO GREEN MATTRESS - Go to https://AvocadoGreenMattress.com/TWS and check out their mattress and bedding sale!

BOLL AND BRANCH - Go to https://BollAndBranch.com/tws with code TWS to unlock 15% off.

BOMBAS - Head over to https://Bombas.com/WEEKLY and use code WEEKLY for 20% off your first purchase.

Follow The Weekly Show with Jon Stewart on social media for more:
> YouTube: https://www.youtube.com/@weeklyshowpodcast
> Instagram: https://www.instagram.com/weeklyshowpodcast
> TikTok: https://tiktok.com/@weeklyshowpodcast
> X: https://x.com/weeklyshowpod
> BlueSky: https://bsky.app/profile/theweeklyshowpodcast.com

Host/Executive Producer – Jon Stewart
Executive Producer – James Dixon
Executive Producer – Chris McShane
Executive Producer – Caity Gray
Lead Producer – Lauren Walker
Producer – Brittany Mehmedovic
Producer – Gillian Spear
Video Editor & Engineer – Rob Vitolo
Audio Editor & Engineer – Nicole Boyce
Music by Hansdle Hsu

Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Need a vehicle that isn't afraid to make a splash?
That's the Volkswagen Taos.
Capable and confident, the Volkswagen Taos is fit for everyday life.
Nimble in traffic, agile in tight spots, and still spacious enough for weekend getaways.
While available 4Motion all-wheel drive gives confidence in rain and snow.
The capable Taos, you deserve more confidence.
Visit vw.ca to learn more.
Volkswagen, German engineered for all.
Ladies and gentlemen, welcome.
My name is Jon Stewart.
It's another weekly show podcast on this Earth Day Eve.
Is that?
Do you celebrate?
I love Earth.
I can't wait.
The pitter-patter of little feet at 6 in the morning running downstairs to open up
the Earth Day presents.
And as this glorious Earth is being celebrated while simultaneously being destroyed on the back end of it,
I thought it would be appropriate not to worry about Iran, not to worry about climate change,
but to worry about a third existential threat, which is AI, artificial intelligence. It is happening, people. And it's about time that we had a sober conversation about its deleterious effects, but also its opportunities. And so we're going to
go straight to the source. We're going to go to two brilliant, brilliant MIT economists.
They're going to talk to us a little bit about the possibilities of AI,
the collateral damage of AI and the various ways we might be able to mitigate that.
So we're just going to get right into it with those cats right now.
Here they are.
Folks, we're going to break it down today in terms of the AI revolution
and what will be the repercussions for the American people, the American worker,
the world writ large.
Who do you go to for this kind of thing?
You go to the experts.
You go to the brilliant people.
You go to Daron Acemoglu, Nobel laureate.
I don't throw that around.
Nobel laureate in economics, MIT Institute Professor, and David Autor, Rubinfeld Professor of Economics at MIT.
Guys, thank you so much for joining us today.
Oh, our pleasure.
Absolutely.
Thanks for having us on.
David and Daron, I am beginning to get increasingly discomfited by the speed at which AI seems to be infiltrating not just sort of the popular consensus in culture, but the workforce.
So I want to ask you guys, what is our time frame? When are we going to really feel the full effect of this new technology?
Just beginning to get worried about it now, John?
Don't.
You know me.
You know me.
We know each other.
No, I've been, you know, I'm worried about everything.
So am I.
And I'm very worried about this too.
Yeah.
Not about the timeline because the timeline is so uncertain.
It's hard for me to worry about something that's so uncertain.
But with all of the consequences, I think we are definitely not ready for AI.
The workforce isn't ready for AI.
We don't know what it's going to do.
I think the people who are really not ready for AI are the students whose learning is going to be affected in so many different ways.
And we don't know.
We have no guardrails, no ways of ensuring that students are actually learning how to learn and that they can actually become experts in anything in the age of AI, when they can get a lot of answers from AI. So there's just so many things to be concerned with.
Now, David, will they need to learn anything? Because
won't AI, what will they need to learn? Won't we all just be?
If they don't need to learn anything, then they're just not needed as workers.
And we don't want to be in that scenario. Right. So we do need people to have
expertise and mastery. And I do think AI has both potential and risk, right? And I think
Daron will talk more about the risk, so I'll probably talk more about the potential. And let me point
out that although I do not have a Nobel Prize, around here at MIT, it's more distinguished to
not have one than to have one. David, can I tell you, I love how you've set yourself apart from your
colleagues. Exactly. By not getting a Nobel Prize. Exactly. Someone's got to, you know,
someone's got to stand out. You know what? The idea that you have that rebellious spirit,
at MIT to go against the grain and not get a Nobel Prize.
Well, then let's start with that.
David, the real concern is, look, and let's step back for a moment.
We talk about disruptions for workers over time, you know, industrial revolution,
globalization.
Those were sort of the dynamics that really impacted workers, but those took place over time.
So, David, you're going to talk more about the potential.
Talk us through the previous disruptions and how AI fits into those paradigms.
Sure.
So let me first say, just to bring it to the present first, right, what we should be concerned about is not running out of jobs, per se, but having jobs where expert labor is not needed.
So a future in which everyone is like carrying the box from the UPS truck to the front door is
very different from a future in which everyone is doing medical care, right? So it's not the
quantity per se, but whether specialized human labor is still needed. I think it will be, but it
really matters whether we are replaceable, whether we are all kind of redundant versions of one
another, or rather we have real added value in this economy. Now, we've been through lots of
technological transitions. Some have been much more traumatic than others. The Industrial Revolution
was very much so. There's a 60-year period that people refer to as the Engels' Pause in the first industrial revolution, where productivity was rising rapidly and yet working-class wages were not.
And artisanal labor, these people who had spent their lives developing expertise in weaving and
so on, they were just wiped out. And it took decades before there was actually need for specialized
labor again. A lot of what, you know, who worked in those dark satanic mills? It was basically
unmarried women and indentured children doing dirty, dangerous, unskilled work. And it took decades
really into the late 1700s, I'm sorry, excuse me, late 1800s, I'm sorry, until we started
to- See, this is why you don't have the Nobel. See, David, you got to hold me back.
You've got to know the right century. That's right. Until we actually started to use specialized
skills again, where people needed to follow rules and need to master tools and their expertise
was really needed. And so that was a very traumatic technological transition. And eventually we came
through it okay, but most of the people who were there at the outset did not. And a lot of these
transitions, they, you know, young people adapt them usually more successfully by choosing different
careers. People don't make big career transitions in mid-adulthood. They don't go from being, you know, a steel worker to a doctor or a programmer to a nurse. They just don't.
And so those transitions are kind of generational.
And so when it moves really fast, as it did in the era of the China trade shock, for example, people just get left behind.
Places eventually recover, but individuals much less so.
And, you know, you talk about it's very interesting.
And Daron, maybe we'll ask you.
We're talking about specialized labor, you know, and David is talking about the craftspeople who knew weaving and those things.
And they're replaced by automation and these kinds of things.
The manufacturing jobs that were replaced in the China shock maybe weren't considered as specialized,
but still blue collar. Is AI going to bring about those same disruptions, but in what you
would call, I guess, white collar labor or less specialized knowledge and more administrative knowledge?
I think it certainly will. The time frame is unclear. Just to add to what David said,
you know, this kind of experience is not a distant one.
As David's own work shows, the China shock when it led to cheap imports coming and destroying parts of manufacturing had the same effect.
You're talking about in the 2000s when China was admitted to the WTO and then.
Yeah, starting in 1990s, but especially after 2000.
But really after 2000.
Right.
Okay.
And robots, at a much smaller scale, had exactly the same effects: huge increase in productivity for steel, electronics, cars, but blue-collar workers lost their jobs.
Many communities, just like with the Chinese imports shock, were thrown into recession.
And the same thing can happen if there is very rapid displacement of white-collar jobs.
Now, the timing is very unclear.
There is a lot of hype and a lot of reality to the capabilities of AI models. So far, we're not seeing mass layoffs. We may be seeing some slowdown in hiring. It's unclear. And white-collar jobs are less concentrated geographically compared to, say, textiles or toys, the things that were affected by Chinese imports, or cars, definitely, or steel. But the number of jobs in white-collar occupations is high. So there could be
a lot of people who lose their jobs.
Now, the thing is that despite the tremendous advances in AI over the last eight months or so,
these models are not yet able to do the whole occupation for many of the white-collar jobs.
Yet, that may be to come, or it may take a while.
That's why there is so much uncertainty.
But uncertainty is a very bad reason to be complacent.
David, you know, the story that those that are behind AI tell us is very different. You know, when the people that are creating these AI models talk, they talk in utopian terms. We will be freed from the burden of toil.
We will paint and write poetry, even though AI is probably going to do that as well. But when
they talk to their investors, they speak very differently. And I want to ask you about a quote
that I heard. There was a gentleman who was talking to his investors about AI and he said,
it will allow you the benefit of productivity without the tax of human labor. He referred to human
labor, us, as a tax, as something that a company wants to avoid paying to retain productivity.
That's what worries me is that, you know, we talk a lot about this and it's always framed in
terms of productivity.
So wouldn't you like to be freed from your podcasting job, John?
Listen, man, I've been toiling in the podcast mines. I'm getting podcast lung.
It's a terrible, it is a terrible crippling addiction.
Yeah.
So, you know, most of us are, you know, both workers and consumers, and we're not going to be
able to consume if we're not working.
And, but of course, from the perspective of a firm, right, they want their customers.
They'd rather not have their workers, right?
With labor, you know, economists will tell you this.
Labor demand is derived demand, right?
It's not, it's not that firms want labor.
Explain that, derived demand.
What is, what is that?
Yeah, they want to make stuff, right?
And usually making stuff requires, you know, space and people and, you know, electricity
and stuff and people.
But if they could make it without the people, they would be just as happy.
It's, you know, it's like Spinal Tap, you know.
If they had the sex and the drugs, they could do without the rock and roll, right?
Right.
You know, but of course, people have always been necessary.
So although firms have always had this fantasy that, you know, that, you know,
they could fully automate, they've never been able to do so. And often it's kind of turned out
not how they expected, right? So during the, you know, the era of numerically controlled machines,
they thought they would deskill and replace workers. Actually, they turned, you know, manufacturing
workers into programmers. So it doesn't always work out the way that firms expect it to. But it may,
this time, there's certainly many, many more things that are subject to AI automation than were subject
to the previous era because AI has a whole new set of capabilities, right? Previous computers could do
routine tasks. They could follow rules. Rules specified so tightly that a non-sentient, non-improvisational, non-creative machine could just carry them out without having to understand what it's doing. That really limited the set of activities that we could subject to computer programming.
But now AI learns inductively, right? It learns from unstructured information. It infers rules.
It, you know, solves problems without our even understanding how it's solving them. That allows it to
enter many, many new realms. Now, let's, you know, to make this very concrete, it's useful, I think,
to contrast like two occupations that one that people talk about all the time and one they should
be talking about. Okay. So the one they talk about all the time is long-haul truck drivers, right? And
there are about three and a half million of them in the United States, and they say, you know,
they're going to be replaced by autonomous vehicles. That is a problem we can handle because it's
going to go very slowly, right? The day that, let's say Elon Musk announces tomorrow, he has
a self-driving truck and let's just pretend we believe him. And it's, you know, it's, you know,
That's how I've been operating for years.
And so it totally works.
We're not going to throw all our trucks into the Atlantic Ocean and buy new ones tomorrow.
It's going to take decades to replace all of that capital and all the infrastructure.
So that's going to be a slow transition.
And labor markets can deal with transitions that happen in a couple of percentage points a year because people retire, new people don't enter.
That's manageable.
You're saying if it takes place over a generation, then that's something that even though it will be disruptive, it won't be catastrophic.
Exactly.
Now let's think of call center workers. There are about as many of them in the United States as there are long-haul truckers. They're paid less. They're primarily women, but there are just as many. Those jobs can go very, very quickly, right? Because there, you know, automation can encroach rapidly.
I don't think they'll all go. The ones that remain will actually be more specialized.
They'll be at the top of the queue, right? When the AI says, I give up, you will be handed over to,
you know, the last 20 people standing. So rather than 20 people, five people will handle what's left of
the human tasks that need to be handled.
And let's just say that's a mixed bag, right?
Those will be better jobs.
They'll be higher paid.
They'll be more expertise-intensive, but there'll be fewer of them, right?
And that will, we'll see this in language translation.
We'll see this in call centers.
We may see this in software as well, right?
Software will bifurcate.
We'll have, you know, a small number of people who, you know, build AI models,
who run data centers, who run enterprise software, and they'll be highly paid and
highly specialized.
And then we'll kind of have infinity vibe coders, right?
And they'll be like Uber drivers, right?
You'll call them up to write an app for you.
There'll be a lot of them, and they won't be highly paid.
So we're going to see a variety of impacts,
but the work that is fully cognitive, right, is much, much more vulnerable.
It can change much more quickly.
Eventually, robotics will also, you know,
more and more enter the physical realm,
but that's still, you know, some ways off.
Ground News, it's this website and app.
It's designed to give readers a better way,
an easier way to navigate the news.
You know, if you go on the algorithmic, the Twitters and the things,
or the weaponized news organizations or the websites,
you don't even understand how they're manipulating your worldview
and how they're getting past the reptilian barriers that you have
towards polarization and all those different things.
Ground news gives you the information you need to be able to battle that.
It pulls together every article about the same news story from all outlets all over the world and puts them in one place, not incentivized for, like, the worst, most hostile, most partisan take.
It tells you where it's coming from.
They show you how reliable the source is and who's funding it.
Who's funding it?
Follow the money.
Know who's behind the headline.
I'm telling you, man.
The Nobel Peace Center has even mentioned that Ground News is an excellent way to stay informed.
Nobel Peace Center.
That's, I think, the one that Trump started.
I think it 3D prints Nobel Peace Prizes.
It just hands them out.
The platform's independently operated, supported by its subscribers, so they stay independent,
and they stay mission-driven.
They don't get sucked into this slop.
If you want to see the full picture, go to Ground News.
They can help you through the noise and get to the heart of the news.
go to groundnews.com slash stewart.
Subscribe for 40% off
the unlimited access vantage subscription.
Discount available only for a limited time.
And this brings the price down to like $5 a month.
That's groundnews.com slash stewart
or scan the QR code on the screen.
But so let's talk about that, Daron.
You know, when we talk about
these sort of two areas of work,
which is the human expertise that needs to be done,
and then physical work, where robotics come in.
Everything is moving in that direction.
AI feels like it's strip-mined the entirety of human accomplishment.
You know, the 10,000 years that we have spent developing these areas of expertise,
these areas of knowledge, the kinds of things that made us feel relevant to the progress of the human condition,
AI comes in and six months later goes, okay, what else you got?
What else are you going to feed me?
And then it starts to move forward.
So what David's talking about is already a reduction of the human workforce.
Is that the thing that you are most concerned about?
Or is it the eradication?
Yeah, reduction is first and eradication is later.
And in the process, wages will be stagnating or even declining.
And, you know, everything David said, I agree with.
But there's one other thing to add.
Again, it's a wildcard because we don't know how quickly these AI capabilities will develop
and how quickly they will be adopted.
But all of our earlier examples of displacement, which, as I said and David said, haven't been so good for workers, such as during the first 80 years or so of the British Industrial Revolution or during the China and robot shocks, they were confined to a few occupations. Even then it was very hard for people to relocate and get jobs and for newcomers to
find jobs. But you know, weavers during the British Industrial Revolution, once power looms
came in, they lost about two-thirds of their earnings. But they could then become unskilled factory
operators. Blue-collar workers went to construction or other things, or some of them withdrew from
the labor force. If Dario Amodei or some of the other people who are most vocal about the
capabilities of these models and what they will do to the workforce are correct, there are going to be
many sectors at the same time being hit. So yes, if the rest of the economy was booming and
3.5 million customer service representatives were laid off, we could find other jobs for them,
perhaps for a somewhat lower pay. But what if all occupations are going in the same direction?
That is Armageddon. Now, I don't think that's going to happen anytime soon.
David just sighed. You said Armageddon and David sighed.
I will let David. I mean, that's not going to happen anytime soon. But I think we have to
be prepared for it because some people are saying that's going to happen in the next two, three,
four, five years. Either those claims are leading trillions of dollars of investment, which are going
to come to nothing, or there is going to be a grain of truth in some aspect of it. But either way,
we have to be prepared for that. Now, displacement is real. So you're talking about either this is a
financial bubble where an incredible amount of capital is being poured into a technology that
ultimately will be a bubble that, you know, resolves nothing and is not worth the investment,
which causes a kind of financial catastrophe or it's real and it causes a personal human labor
catastrophe. Is that? I would say I'm somewhere in between. I think.
The speed at which it happens will be much slower, which will then lead to a lot of money being lost, because the investments need to be monetized, and they need to be monetized soon if these investments are going to pay off.
So I am in the middle.
I think that these capabilities will come at some point, but not as soon as the timelines these investments are being motivated by.
But I am uncertain enough that either all of it being a bubble or all of it happening.
within the next five years, you know, can I say with good conscience, that's a zero probability
event? I cannot. I mean, so many technologists are saying, look, in our labs, we have these
even more amazing models. I don't believe it. I don't believe it, but I can't say, oh,
necessarily it's wrong. How do you, how do you means test these hypotheses? I cannot. We cannot. Nobody
can. We can't do it. Because they're all based on what's going to come next year and we don't have
access to it. So everything we're doing is we're looking backwards. We're looking backwards.
But not forwards. David, you were going to say something. Okay. So first I don't think that the success
of AI companies and the value of their investments entirely depends on them displacing labor.
If we just got much more productive, that would also pay off, right? So if we got more efficient
in health care, if we got, you know, better at transportation, if we did education better. So it doesn't
all have to come from just throwing people out of work. And it's also important to remember that
although these transitions have been wrenching, we're infinitely more wealthy than we were 200 years ago.
We are much better off.
None of us wants to live.
On the main.
On the main.
But obviously, if you look in certain places, you know, I don't think the Rust Belt would say, yeah, globalization was great for us.
No, no.
They're not starving, right?
Right.
They're not starving.
They're not.
Look, I don't mean to be unsympathetic.
Yeah, I know.
The standard of living almost anywhere in America, including in the least privileged places...
People have indoor plumbing.
They are not food deprived by and large.
They have some access to education.
They have some safety.
It's much better than conditions in pre-industrial England, you know, 250 years ago.
So, although there are always costs, and I don't mean to minimize them, I think they're real and the transitional costs are enormous.
And the beneficiaries are not the same as those who are harmed.
so it's not like they just make these.
But I think we should recognize
there's enormous upside potential here as well.
We shouldn't only be sentimental
about what would be lost.
We should also recognize the opportunity
to accelerate science,
to improve our adaptation to climate change
and energy generation,
to improve medicine,
to do education better.
We might do it worse.
We can do it better.
And distribute more of the world's wealth
to more of the people in the world.
I actually think artificial intelligence
like mobile telephony can be potentially beneficial to the developing world in a way by increasing self-sufficiency,
by giving access to expertise in engineering and medicine, you know, that is not readily available.
So can I just jump in there?
Please.
Because David and I have been studying these things together and separately for the last 30 years.
And almost everything you'll hear from David, I agree with.
And most things you hear from me, well, David probably would disagree with them.
But anyway.
But there is one place of disagreement between me and David and David put his finger on it.
So let me expand because I think this just again underscores the uncertainty.
So David and I completely agree that there is a potential to use AI in what we call a pro-worker way.
Okay.
Meaning you make workers more productive.
They become better at their jobs.
They gain additional expertise.
They start performing new and more important and interesting problem-solving tasks.
The place of disagreement between me and David is that I think that direction requires a complete change in the focus of the industry, and we won't get it on their current path.
The current path is very automation focused, whereas I think David thinks, well, whatever the companies do, somehow better things might come out.
So I think he's more optimistic about those productivity gains.
that could then create meaningful jobs,
I think we really are squandering that opportunity.
That opportunity is there, but we're squandering.
And that's the most important reason why I love being on shows like yours, where people listen to what I say,
because I think we need to change the conversation.
The conversation shouldn't just be about the doom and the gloom
or the amazing promise of AI.
It should be about, are we actually using these models,
these capabilities for the right thing or the wrong thing?
That's the main conversation we need to have.
Well, let me mediate the dispute between you and David.
Oh, I think. We've tried.
Many people have tried.
Before it turns physical.
I don't want to get there.
I don't know how close you are to each other.
I know I've seen a lot of fistfights on this podcast.
That's exactly right.
And things do get out of control.
Yeah.
And if we need to take it to the octagon, we'll take it to the octagon.
I don't have a problem.
I don't have a problem with any of that.
But I think what we're talking about are sort of two separate things.
So I want to see if we can tease those out a little bit.
You know, you said a phrase, Daron, that I think is interesting, which is you want to make it...
You said worker.
Pro-worker.
You said pro-worker.
What David is talking about, I think, is sort of the patina over society, that these advances allow us to fight diseases that we couldn't before.
Sure.
But it's pro-human to a certain extent, but not necessarily pro-worker.
So I guess, David, what I would say to you is, generally those that are deploying these new things are not concerned about being pro worker in any way.
Now, the increase in productivity may have it.
You know, they always say a rising tide lifts all boats.
And I always say, unless you don't have a boat.
Right.
And then really, you're just, then it's just water and you're treading it.
But so the people that run, it's sort of like globalization.
What they learned was capital travels and labor doesn't.
So if I can find ways to pay workers less or to give them less safe working conditions.
So globalization was by no means pro-worker for workers that were accustomed to more first-world conditions.
But if you were a worker in.
the global south, those investments were wildly pro-worker, because your conditions improved.
So how do we tease out what we mean by pro-worker and the standards of society that we're
talking about raising?
So, you know, Daron and I, along with our colleague Simon Johnson, also a Nobel laureate, further increasing my distinction in not having one, just wrote a paper on pro-worker AI. And, you know, what we mean is, you know, tools that extend the usefulness of human expertise and the range of things that we can do, give people new things to do, things that they didn't have. And let me, you know, you say, what do we mean by new things
to do? I don't mean, you know, sort blocks. But like, there are a quarter million data scientists
in the United States right now. They earn about $120,000 a year at the median. Like, those didn't
exist 20 years ago. Now, what does a data scientist do? A data scientist is someone who basically deals with,
we have enormous amounts of data,
we have enormous amounts of computing power.
How do we process that?
How do we organize that and make it accessible?
The data that we have on the internet is so complex.
It's video, it's text, it's images,
and data science is all about how you use that constructively.
We had no tools.
We had statistics, we had no tools for doing anything like that.
And now there's tons of expert work.
And a lot of where the value of human work comes from
is demand for new forms of expertise.
So we've had electricians and plumbers for a while. Now we have solar electricians and solar plumbers. They're people who do
those fields, but they're specialized even further. Much of our medical work, right? You know, we didn't have
pediatric oncologists 50 years ago, right? Or even, you know, people who do like, you know,
someone who's a, you know, a fitness coach. That's also a new form of work. And often that creates
demand. It creates specialization. People earn a premium for that. It needs to keep moving, right? And so
expertise is always being devalued by automation and then reinstated by new ideas, new creativity, and new opportunity.
And so both of those things happen,
but we have much less control and predictability about the new work.
It's easy to predict what will be automated.
It's hard to predict how much new work will be and where it will occur.
And most important, who will do it?
Most of the new work of the last 40 years has been for people with high levels of education.
And the majority of American adults do not have a college degree.
It's only about 40%.
And so we really, and college graduates have done fine for the last 40 years.
It's the majority of people who are not college graduates that we should be concerned about.
And so in our view, pro-worker AI in particular is AI that enables people without as many elite credentials to do more valuable medical care, to do more programming, to do more legal services, to do contracting, skilled repair.
And we think there's opportunity there.
But I agree with Daron.
There's no guarantee that that's where we're going, that where tech firms or even where the market is pointing.
Now, honestly, I don't think, with some exceptions that I won't name, I don't think most of the tech bros are evil.
I don't think they mean to do harm.
All right, now you and I are going to have a problem.
Right, right.
But I don't think they, they don't really know how to control this, right?
They don't, they don't, if you told them, if you said, you know, Dario, this is how you make pro worker AI.
I think he would be very interested in that.
I honestly don't think he knows.
I thought we said that.
I don't think he knows what that means precisely.
But are they even interested in that?
You know, I'm curious what you guys think, you know.
No, they're not interested, John.
They're not interested.
Right.
They're not interested because they've been locked into this AGI, artificial general
intelligence craze.
And your chops in this industry are measured by how close you can argue or you really go
towards this sort of AGI.
And AGI, if you take it seriously, hopefully I don't think we have to take it seriously
anytime soon.
But if you do take it seriously, it means that these models can do
everything, everything better than the very, very best experts. And then once combined with
advanced robotics that are flexible enough, then they can do all the work better. So a lot of
economic intuitions are based on what David Ricardo introduced, which is comparative advantage.
If you have an advantage in winemaking, fine, you'll make the wine and I'll do the podcasting.
You won't do both podcasting and winemaking because you have a limited amount of time.
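Ricardo's logic can be made concrete with a toy example (the producers, goods, and rates here are invented purely for illustration):

```python
# Toy illustration of Ricardo's comparative advantage; all numbers invented.
hours = 10  # each producer has 10 hours

# Output per hour for each producer and good:
output = {
    "podcaster": {"wine": 1, "podcasts": 4},   # better at podcasting
    "winemaker": {"wine": 3, "podcasts": 2},   # better at winemaking
}

# If each splits time evenly across both goods:
split = {good: sum(rates[good] * hours / 2 for rates in output.values())
         for good in ("wine", "podcasts")}

# If each specializes where they hold the comparative advantage:
specialized = {"wine": output["winemaker"]["wine"] * hours,
               "podcasts": output["podcaster"]["podcasts"] * hours}

print(split)        # {'wine': 20.0, 'podcasts': 30.0}
print(specialized)  # {'wine': 30, 'podcasts': 40}
```

Specialization raises output of both goods, which is why trade makes sense even when one party is better at everything. The framework leans on each party's time being scarce, which is exactly the assumption the conversation goes on to question.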
Now, if indeed we get to AGI, that framework is out
the window because these models can operate very cheaply and they'll have an advantage over all human
work. I don't believe we're getting there anytime soon. But that is the agenda and that's the
agenda that's driving the industry. That's the problem. Is the agenda AGI in the industry or is the
agenda to own the operating system of our society? That's where I'm more concerned. You know,
we're bringing up where it may go. But some of it does have to do with those that are
the owners, Palantir, OpenAI, the owners of these new technologies, and how exploitative they want to be toward workers.
And also, ideologically, what are they going to do if they own, you know, when the companies were laying fiber optic cables or the companies were laying electricity or any of those kinds of things, there was not an ideological
component. But when you listen to the guys that are laying the new pipelines for whatever this
society is going to be, they are ideological. 100%, John, you nailed it. You nailed it. I think
there is an ideology of AI. AGI is part of it. It's very different. Just to try to illustrate that,
going back to what David said, which again, that part was based on our joint work. So I agree,
sort of, mostly. You're required to agree. You're disavowing your own work? Your name's
on it, buddy.
So the capability of using AI with non-expert workers to increase their expertise, to allow
them to do new things is definitely there.
And I think it's the most exciting part.
But fighting against that is the ideology and the practice of centralizing all information
in the hands of a few companies and a few people.
Yes.
And if they control that information and if they want to use it in the way of not make the novices
more expert, but get rid of the novices, get rid of the experts, then you have a very different
world. And that's the agenda. Now, can they achieve that agenda? Not necessarily true, because there
are technical barriers to it, but that's what they're trying to do. Yes, you're absolutely right.
So the avocado is one of nature's mysteries, as far as I'm concerned. I find it to be a very vexing,
I want to say vegetable. I think it's a fruit, right? You know what? You'll Google it. You'll Google it.
You probably don't even have to Google it.
You probably know it.
Avocado Green Mattress.
They sell mattresses, pillows, solid wood furniture.
What more do you need?
And no pits.
It's all made from materials designed to support healthier living and more restorative sleep.
Made without the harmful chemicals.
Can actual avocados say that?
Probably not.
They only use certified organic, non-toxic materials.
Their products are designed to support deep restorative sleep
so your body can properly recover, reset and wake up
and take on the day.
Avocado products are made, not manufactured,
and thoughtfully crafted with real materials
to deliver lasting comfort and support.
Go to avocadogreenmattress.com slash TWS
to check out their mattress and furniture sale.
That's avocadogreenmattress.com slash TWS.
Avocadogreenmattress.com slash TWS.
Okay.
So I would make three points.
First, you know, you shouldn't take Daron and me too seriously about, like, telling you about the future of AI, right?
We're not experts in this.
I don't think you should take Dario Amodei very seriously about projecting the future of the economy.
He means well, but, you know, it's like people have been telling us forever, we'll run out of work because we're automating stuff.
That hasn't happened.
It doesn't mean it can't happen, but just means thinking about it mechanically is not the right way to think about it.
Second of all, I don't even think when there's AGI that that will actually put all humans
out of work.
Many, many problems are not computational problems.
They're political and interpersonal problems about who has control, who has ownership rights,
who has the information.
If I say today, here's a better way to reorganize MIT.
I've got it.
You know, I've calculated it, I did it with my AGI. MIT will not be reorganized
tomorrow, right?
It's a political problem.
Depends on whether you have dictatorial powers or not.
If they also have the dictatorial powers, then it will be reorganized.
Okay, well, I mean, if we also throw democracy out, then we're in more trouble.
But David, so this is, let me talk about it in kind of, you know, you made some really good points about the historical precursors of the industrial revolution and globalization.
I just want to make a little bit of a point about human nature.
When new technologies come along that are truly transformative, thinking of splitting the atom, right?
So you have brilliant people working on splitting the atom.
And if you split it one way, you can use it to power the world.
And if you split it another way, you can blow the world up.
Which one did we try first?
So when we talk about AI and we're talking about the technology, it doesn't necessarily have to be transformative in the way that we're talking theoretically.
We can talk about how powerful it is for the general tools that humans use to rule over other humans.
And I'll give you an example.
Palantir comes along with these incredibly powerful AI systems.
And what do they do?
They suck information out of the system and then they funnel information about people who are undocumented.
And the government then uses that information.
It's not just about what it might do.
It's about how governments or individuals will use these new powers.
to game the system and gain advantage over their competitors.
Isn't that a more realistic conversation?
Oh, you nailed it.
You nailed it exactly, John.
So I think for the next version of our paper with Simon Johnson.
When are we writing a paper together?
Exactly.
I was just going to say you have to become a co-author.
Where's my Nobel?
Where's my Nobel?
Yeah, the direction of technology is highly malleable.
And there is always a worse direction than the one you fear.
And sometimes we find it, the more dictatorial, authoritarian, less democratic we are,
the more likely we are to find that direction.
Nuclear weapons are much more likely under times of war or times of authoritarian control.
And nuclear energy becomes much more reasonable if it's subject to democratic oversight.
Exactly. The centralization of information, the ideology of AGI, and the sort of meeting of the minds between the surveillance state and the technology are very worrying precisely because they open those bad doors for us.
And anyway, many of the people in the industry would have no problem walking through those doors headfirst.
David, I want to ask you about that because, you know, you're making really good points about sort of the ways that these new technologies
can be used to uplift.
But in my mind, I'm thinking atomic, it's splitting the atom.
And are you concerned?
Because I think you're more optimistic about where this thing is going about what I'm raising here.
Oh, absolutely.
I'm very concerned.
And I think AI is, you know, God's gift to authoritarians, right?
It's great for centralized control.
It's great for monitoring, right?
It is, yeah, and I think we already see it if we want to see it.
You know, mass surveillance and censorship at scale, go to China and they're exporting
that model.
And we've privatized a lot of it.
We're still doing it.
I'm very concerned about that.
So I'm trying to emphasize that there's opportunity, not that we're destined to get there.
I think we're destined to have a range of outcomes, some of them quite terrible, some of them
quite good and very unevenly shared.
And the balance may be towards the bad.
It may be towards the good.
But I think if we don't bear in mind that we have an opportunity,
we'll certainly squander it. Understood.
Absolutely. But I think we also need to, and this is the most important observation
that David made, to have the public conversation that those opportunities
exist and we're not currently targeting them. Right. Right. We're currently targeting
something very different. Mass automation, surveillance state, a new sort of merger between
the security apparatus and tech companies. Those are the things we are contemplating
or practicing right now.
And there's another conversation we're not having.
I just want to loop back to a point you made, John, a little while ago
about, you know, all this stuff on the internet now kind of being monetized.
There's a really fascinating book by Maximilian Kasy,
who's an economist at Oxford, called The Means of Prediction, right?
So a play on the Marxian phrase, the means of production.
And he makes, I think, what is a brilliant analogy.
He says, look, you know, the enclosure movement in, like, medieval Europe, right?
It was when all the common land, all of a sudden, the lords said,
hey, we own that and we're just going to farm that ourselves.
And it may have been actually a more efficient way of farming, but the commoners were just wiped out by this, right?
Well, you could say that AI is in some sense enclosing the internet, right?
It's taking all this common property and monetizing it, right?
All of the stuff we put out there, all our photos and all of our writing and all of our movies.
And, you know, you say, oh, well, you know, they're not enclosing it.
I mean, it's still there, just where you left it.
But, of course, you never thought your artwork was going to compete with you, right?
You never thought the story you wrote would be regurgitated and sold and you couldn't sell your work anymore.
So I do think this unilateral transfer of property rights is a huge thing that is under-recognized, under-discussed.
Man.
Oh, yeah, that's so important.
But can I add one thing?
100% agree with David.
But it has an additional really bad effect, which is that.
He always wants to be the black swan.
I know, I know.
Really dark soul black swan.
Yes, exactly.
Yes, exactly.
Go for it.
But the kind of useful things that David and I are mentioning, that you can do
pro-worker AI, that really requires very high-quality data. If you're going to build
a tool for electricians that makes novice electricians perform the expert tasks that solar
electricians and the best seasoned ones can do, you require the data from those electricians dealing
with the hardest problems. That data will not be produced unless there are property rights over data
and there are data markets in which people can get the returns for the data that they create.
But this enclosure thing that David described is a data extraction economy.
So it's creating the opposite.
Guys, this is blowing my mind.
It's something that I had not thought of at all, but I think what you're bringing up is so interesting.
So as AI strip-mines the totality of human expertise and experience, right?
So let's look at it in terms of music.
You get royalties.
If you write a song and somebody uses that song, they pay you a royalty.
If somebody, you know, plagiarizes your lyrics or finds a way to take your melody and put it into their song, you're going to be paid for that.
AI is a human expertise laundering machine.
It's basically taking everything that we've got, training itself,
in some ways replacing us, but without that royalty payment.
Where the royalty payment goes is to OpenAI or to Palantir or to any of these other places.
And if you ask them what they're doing with it, they'll say, that's proprietary.
Yeah, we're in the Napster era of AI, right?
Remember Napster?
Just everybody's music and just burn it, rip it, and share it.
Right.
That was not viable.
We wouldn't have a music industry if we hadn't gotten control of that.
Right, you know, with Spotify, with Apple Music, where we pay royalties when we listen to those songs, meager royalties, but we do pay them.
But the difference is that in the Napster era, it was the consumers who were doing that replication.
Now it's the most powerful corporations humanity has seen who are doing it.
But this is a failure of property rights, a failure of legislation.
People say, oh, no, fair use allows that.
Well, fair use never envisioned this, right?
No.
And so, you know, who cares what the law, you know, said?
It's not applicable.
We should have been, we should be changing it, right?
You know, people should be compensated and not just once.
They should be compensated as their information is reused.
And that's actually a manageable problem.
Talk to people at Google who've worked on this.
They say, yeah, we know how to do that.
Right.
We just don't, you know, we don't have an incentive to do it, but we know how to do it.
And if the law required it, we would support it.
Right.
So I think by not recognizing that this enclosure is going on, that these property rights are being reallocated.
Yes.
Economics doesn't deal with that.
It's reverse socialism.
Exactly.
Exactly.
they're taking from the workers and they're funneling up to these five individuals.
And it comes back to, you know, to torture this atomic analogy.
You got the sense that people like Oppenheimer or Einstein were aware of the gravity.
Oh, yeah.
Of what was happening.
And through the crucible of war, maybe made some decisions they might not have made otherwise.
In this environment, I don't think Altman, Karp, Thiel.
Thiel was asked, should the human race flourish and continue to exist?
And he took like a five second pause.
Let me think about that first.
That's a tough one there, yeah.
So the nuance of what you're both bringing to the discussion seems utterly absent.
And, you know, you nailed it again, the war conditions.
You know, even Einstein, who was very pacifist, because he was worried about Germany, the Third Reich, supported the atomic weapons, and several others did.
And you know what? Silicon Valley is also creating war conditions.
The framing of AGI is either China gets there first and we become their vassal state or we have to go first.
And that's creating this warlike condition.
You know, you have to allow us to do anything we want, even the worst things.
because otherwise China is going to do them.
So that's creating the equivalent of the 21st century war condition.
And Oppenheimer, by the way, spent the rest of his career opposing the H-bomb
and eventually was stripped of his security clearance and, you know, died
a broken man, effectively, because he was persecuted for trying to control the invention
that he was so instrumental in creating.
But, I mean, maybe it makes sense to talk a little bit about what are some policies
that we could have.
Yeah, please do.
Okay.
So, I mean, I would put them in three buckets.
But let me start with one that, you know,
people call wage insurance.
And wage insurance is an idea that actually was experimented with during the presidential
administration that reigned from 2008 to 2016.
I'm not going to say which president, but you can guess.
I don't recall.
But I think I remember him in a tan suit.
Handsome guy.
Very handsome guy, tan suit.
Anyway, that's all I remember.
But, you know, the idea was, look, you lose a job in manufacturing.
Let's say you're making $50,000 a year, $25 an hour.
And you can find another job, but it's going to be like $15 an hour, right?
And not only is that a low wage, but you're like, hey, that's beneath my dignity, right?
Like, that's, I'm not going to take that job.
So wage insurance says, hey, look, we get that.
We're going to make up half the difference for up to, say, $8,000, up to two years.
Just take the $15 an hour job.
You'll make $20, right?
And then you can look for something better.
And it gets people back into the workforce more quickly.
It's like an earned income tax credit for returning workers.
This program was so effective in terms of saving unemployment insurance money and generating
additional payroll revenue that it paid for itself.
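The arithmetic David walks through can be sketched as a tiny function (the 50% share, the $25 and $15 wages, and the $8,000 cap are the round numbers from the conversation, not an actual program's parameters):

```python
# Sketch of the wage-insurance arithmetic from the conversation.
def wage_insurance_topup(old_wage, new_wage, share=0.5):
    """Hourly top-up: a share of the gap between the old and new wage."""
    return max(0.0, (old_wage - new_wage) * share)

old, new = 25.0, 15.0                   # lost a $25/hr job, found a $15/hr one
topup = wage_insurance_topup(old, new)  # 5.0
print(new + topup)                      # 20.0 effective hourly wage

# In the program described, the subsidy is capped (e.g. $8,000 over up to
# two years), so the full $5/hr top-up would not always be paid out.
```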
How is that different, David, than unemployment insurance?
Unemployment insurance, you get it while you're not working.
This you get it if you return to work.
I see.
Yeah, I see.
And now, this needs to be scaled.
And it makes up, so I get what you're saying.
It makes up in some ways the difference that you would have gotten from a job that was paying a little bit more.
That's right.
That makes sense.
And I think, by the way, this is very politically viable, right?
In America, we're not very friendly towards people who aren't working.
If you're working, that's okay with us.
us, right? And so an incentive to work, something that's subsidizing
work rather than subsidizing leisure, that's something many people can get behind, especially if it's
pretty cost-effective. Now, we need a bigger demonstration, right, of what was done. And people like
Brian Kovak at Carnegie Mellon University are trying to stand up a multi-state demonstration of this.
I've been speaking with funders trying to get it going. So that's like one really actionable
policy. Let me say, this is a no-regrets policy. It's not like, you know, if the Armageddon
doesn't come to pass, we go, oh, damn, why did we
do wage insurance? After all, you know, this is just a good idea. It was a good idea 10 years
ago. It's a good idea now. So let me pause there and turn it over to Daron for the next idea.
Yeah, yeah. Well, you know, that's a, that's a great policy. I am fully behind it.
But let me say before I talk about the next policies, I think the most important step,
even before the policies, is actually this conversation. This conversation that needs to just
take place much more widely, that there are many different
things we can do with AI. And it's a choice what we do with AI. That's what's lost in the current
media environment. For about 10 years, the entire mainstream media was so excited about the
tech barons, it was as if they could do no wrong. Now they're talking about, you know,
killer robots and doom. Okay, that's a useful corrective. But we're actually missing the most
important conversation. The most important conversation, AI is not one thing. AI is a whole
spectrum. And at the one end of the spectrum, as we've been emphasizing, there are some terrible
things. And at the other end of the spectrum, there are feasible things that we can do
that are much better. Who's going to decide that? Who are we going to empower to make those
civilization-changing decisions? Dario Amodei, Sam Altman, Peter Thiel? No, I think it should
be that the democratic process has a part in it, and people should become more informed about
it. I think that conversation comes first. And then all the policies
have to come on top of that.
Folks, I don't know if you can hear it in my voice.
I'm tired.
I didn't sleep well last night.
I need a good night's sleep.
I always need a good night's sleep.
And you know what I can do?
I could buy a new mattress,
a little maybe a princess bed,
maybe get a little four-poster thing,
throw some mosquito netting on there,
spend a ton of money.
Or I could do the only thing that matters.
Get some nice sheets,
some nice, clean,
freshly done comfortable sheets
That's what you need:
the Boll and Branch way.
The best way to get a better night's sleep
is the bedding. Get the nice bedding.
You don't want the chafing bedding. You don't want...
I sleep in corduroy.
Who would do that?
Makes no sense.
You can upgrade your sleep with Boll and Branch.
Get 15% off your first order
plus free shipping.
BollAndBranch.com
slash TWS with code TWS. Boll and Branch, B-O-L-L-A-N-D Branch
dot com slash TWS, code TWS to unlock 15% off.
Exclusions apply.
And then there are many policies that we can worry about.
Like, for example, in the United States, we tax labor heavily, we subsidize capital.
That's been that way for 50 years.
How does that change the incentive?
Well, it's gotten much worse over the last 25 years and much, much worse with the Trump administration.
And how do you think that changes firms and technologists' decisions?
It makes them more leaning towards automation because automation is being subsidized.
That's right.
So let's change that tax.
And we can raise more taxes also because we're just giving a pass to all capital income.
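Daron's point about the tax asymmetry can be sketched with a toy cost comparison (the rates are invented for illustration, not actual U.S. tax parameters):

```python
# Toy illustration of how taxing labor and subsidizing capital tilts firms
# toward automation. Rates are invented, not actual U.S. tax parameters.
def effective_cost(pretax_cost, net_tax_rate):
    """Cost to the firm after taxes (positive rate) or subsidies (negative)."""
    return pretax_cost * (1 + net_tax_rate)

labor_cost, machine_cost = 100.0, 105.0   # machine slightly pricier pre-tax

labor = effective_cost(labor_cost, 0.25)       # heavy payroll taxes -> 125.0
machine = effective_cost(machine_cost, -0.10)  # capital write-offs -> ~94.5

# The firm automates even though the machine is the costlier input pre-tax:
print(machine < labor)  # True
```

The point is not the particular rates but the sign of the wedge: whenever the tax system marks up labor and marks down capital, the break-even line for automating a task shifts in capital's favor.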
But it's kind of a perpetual motion machine because what happens is,
when these new technologies come along, capital flows towards it in such massive ways,
this giant, you know, trillions and trillions of dollars that flow in and building data centers
and sucking up water and electricity and money.
And then what they do with the profits is they reinvest not just in their technologies,
but in their political power.
Oh, 100%.
So they take their money and they bring it to bear on Washington.
You know, it was a shocking moment to me at the inauguration
of an American president to see in the front row of the swearing-in, not the people,
but the tech companies, that had the closest proximity and access to the president.
And you know what's worse? We don't even know who owns whom, whether they own the government
or Trump owns them. We don't know which is what. David, you were going to say something,
though. Well, I just want to talk about another policy. Oh, okay, great. I like Daron's, though, the changing of the
tax incentives that can even out how we value capital over
labor. And I think the pendulum needs to swing back. So I think that was a really important point.
But let me suggest another policy related to this, which is what people call universal basic
capital, right? So not universal basic income, right, which is like write people check every month,
but the notion that when people are born, we give them an endowment of capital with voting rights,
right, like shares. And what does this do? Well, one, it diversifies it. Most people, you know,
their entire income is bound up in their human capital, right?
Your income comes from your ability to produce valuable labor.
Well, that's a pretty risky bet, right, for anyone, right?
Because, you know, value of labor changes over time.
Specialized skills, sometimes they become more valuable, sometimes they become worthless.
Right.
So we distribute, and by the way, you can call them Trump accounts if you want, right?
They're already being done.
I think we're calling it Trump everything.
That's right.
This is actually the weekly show Trump podcast.
That's right.
We just add the word Trump to everything.
And Daron has the Trump Prize in
Economics.
That's right.
Just to return to our main theme.
But so what does this do?
Right?
One, it gives people a more diversified portfolio.
It's something they can invest in, right?
They can't spend it until they're 18.
Second, it gives them ownership rights.
What are they?
Basically, you're getting, you know, it's just like, it's like getting a bond when you're born.
Okay.
Like the Alaska fund for everybody.
Okay.
That's right.
That's right.
But it gives people a somewhat diversified income portfolio.
It also redistributes voting rights.
They have voting rights over capital, right?
And even you could even set it up.
So even if you sell your
stocks, you maintain the voting rights.
But what is the voting right?
Is it a...
So the way that I would think about it is it's reverse.
It's Benjamin Button Social Security.
So rather than it's a large fund and then when you're born...
That's why you're the comedian.
That's good.
You are a given.
I just watch a lot of movies.
So when you're born, you are invested into this larger fund that has been...
Now, then the questions come up.
Well, what is that fund invested?
in and how does it grow?
No, it's invested.
It owns shares of these tech firms, for example, right?
It owns a piece of the economy.
Okay.
And so then we all have some voting rights.
And that's really important because if labor, there's certainly a risk that labor will
become less valuable and capital more so.
And if so, we want more people to have ownership stakes.
Part of the brilliance of the labor market is that in a country without slavery and without
labor coercion, everyone owns at most one worker, themselves, right?
So it's intrinsically relatively equal, but capital is not like that.
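The diversification logic behind David's proposal can be sketched with toy numbers (the fund size, share count, and wages here are all invented for illustration):

```python
# Toy sketch of the diversification argument for universal basic capital.
# Every number here is invented for illustration.
def income(wage, fund_share, fund_payout):
    """Labor earnings plus this citizen's dividend from a shared capital fund."""
    return wage + fund_share * fund_payout

fund_payout = 500e9     # toy aggregate payout from a national fund
fund_share = 1 / 250e6  # one equal per-citizen share

# Automation scenario: this worker's wage halves while capital income triples.
before = income(40_000, fund_share, fund_payout)
after = income(20_000, fund_share, 3 * fund_payout)
print(round(before), round(after))  # the capital stake cushions the wage loss
```

The mechanism is the point: when income comes only from labor, a falling wage is a total loss; a citizen's stake in aggregate capital pays out more in exactly the scenario where labor pays out less.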
So the reason why I'm slightly dubious about that is, and I'll tell you why.
Companies won't even do that for their own employees.
No, the government has to do it.
It has to be done publicly.
Publicly, but the government is going to give away shares of privately owned companies.
Or buy them.
That's fine.
Or buy them.
Okay.
Sure.
All right.
Yeah.
All right.
Now I'm feeling a little better.
But here is the problem.
Here is the problem.
I completely agree with David's, you know, that would be a nice addition to a functioning
labor market.
Yes.
But here is what I want to put a pin on, which is the tech solution to these
problems: universal basic income.
I didn't say UBI. I hate UBI.
Exactly.
So, but, but yeah, I want to just underscore that.
Yeah.
Or other schemes where people are somehow given a handout so that they can just not work.
I think there are many problems with that.
First of all, I think we don't know what to do with millions of people who don't work.
That would be highly bad for their mental health, for social peace.
But even worse, I think if you create any system like that, based on dividends, based on income, based on other things,
as long as society knows, oh, these are the creators, the Peter Thiels, the Elon Musks, etc.,
and the rest are living off the income that they've created, that would create a horrible
two-tier society where there are those with very, very high status and all the rest.
We have a horrible, we have a horrible two-tiered society as it exists now.
I know.
I mean, look, in Norway, right, they have a sovereign wealth fund that's worth two GDPs, and it's
coming from oil, but people are public owners of that, right?
And they're doing okay.
And they're working, but they're working in Norway.
I'm in favor of work.
But I want to push back on just a couple of things within that.
So the system that's already been designed is a two-tiered system.
And there's already that sort of Randian philosophy that there are makers and takers.
But when you have an economic system that requires labor at its cheapest level and you have outside pressure of globalization that continues to drive those wages down and conditions down, well, we've created the conditions for that permanent underclass.
And then we blame those people as though their poverty is a function of vice, is a function of a lack of virtue.
And that's what I want to push back on.
I don't view money that goes into those communities as handouts.
I view them as investments.
And we have to find a way within this.
I love the idea of giving people some ownership over the industries that
drive the country. I think for too long, we have allowed these companies the providence of the stability
of this country, the subsidies of this country, the investments of this country, and asked for no vig.
And I do think the House should always win, and the House should be the American people, and there
should be a rake. Right. 100%. Yeah, 100%, John. Now you're definitely a co-author. Give me my prize.
But you also put your finger in passing on something that's very important.
And you might want to have Michael Sandel on the show to talk about this, sort of this ideology of meritocracy, that somehow all of those who are so successful are well-deserving and virtuous.
And all of those who have lost out from globalization, technological change, or social change are losers who deserve their fate.
I think that's been very, very pernicious.
I think you cannot understand the rise of Trump, the rise of anger in this country without
that form of meritocracy ideology.
And he's been the most eloquent describer of this.
And I think it's a very, very important thing you put your finger on it.
Not Trump.
Michael Sandel, I mean.
Who would have thought there's so much fun to be had at MIT?
No one would have thought that.
Please don't have Trump on your show, John.
Folks, I'm going to be honest.
A lot of times I'll be pitching you products.
Am I crazy about them?
I don't know.
I muster some enthusiasm.
But then every now and again, a product comes along where I'm like, oh, I actually use that.
I actually wear those.
Those are super comfortable.
And that's what we got now.
Folks, Bombas is in the house.
Bombas, baby!
That's the alliteration I'm looking for.
Bombas. I can't even tell you how excited I am that we got Bombas. Bombas is, first of all,
I don't know about you, but I'm a sock man. Like, I like a nice, comfortable sock. If you give me a sock,
every other part of my body is immune to discomfort. But my feet, you throw on a nice pair of socks, man,
and you can have yourself a fine day. And Bombas is the most comfortable socks in the world.
Man, just get rid of all your old socks. You know what happened to me recently? I had some socks in my drawer,
and I put them on, and it was as though the fabric had expired. Like, when you pulled it on, it made a noise,
like the universe was coming apart. Like, it went, like, almost crackling.
Needless to say, I didn't have a good day that day. Here's even the best part about Bombas.
For every item you purchase, an essential clothing item is donated to someone facing housing insecurity, a one-for-one model with over 200 million donations and counting.
Head over to bombas.com slash weekly and use code weekly for 20% off your first purchase.
That's B-O-M-B-A-S dot com slash weekly.
Code weekly at checkout.
These are really interesting, and I really do like them.
And what I love about it the most is these are actionable, specific ideas.
What so frustrates me about our political process in this moment, you know, we have this
incredibly powerful technology that sits just on the horizon, but we have a political
system that is unable to articulate mostly anything but platitudes.
We have to start talking about kitchen table issues.
Working families must get the thing.
So you think, you think, like, creating American AI dominance and cryptocurrency are not actionable issues?
Well, let me tell you something.
As a proud owner of Melania coin, I can tell you that my future is set.
But, you know, we are in this position.
What's so, I don't want to say ironic about it, is,
we could probably plug these questions into AI and come up with more specific and actionable
and interesting solutions than what are being offered by our political system.
Right. And that's the part I can't wrap my head around. Where do you guys see... why is that the case?
Well, I actually think... so the idea of wage insurance has currency. It's being discussed.
I've discussed it with people in the Trump administration. I've discussed it with people in the Democratic leadership.
I think there's enthusiasm for that.
Or, like, you know, there's also, I should say,
there's new efforts around modernizing training
in a way where we can measure it and monetize it
and return the revenues. You know, Raj Chetty
and the Opportunity Insights group at Harvard,
they're working on this in a really innovative way.
Harvard. Safety school.
Talking MIT, baby.
Yeah, so I do think there are a set of policies that, again, I call no-regrets policies. We won't be sorry
we did them, even if the worst doesn't come to pass, and we know how to do them well. They're not
totally out of reach. So I absolutely agree with Daron. We need to shape the conversation. We need to
deploy the technology constructively. But we also, we've got to recognize we are in for a rough ride,
even if it goes well, we're in for a rough ride because the transition is going to be so fast.
So we should have policies that support people, support their incomes, support job transitions,
right, and give them also an ownership stake, so they're on some of the
upside of this, not just the downside. And distributing capital more broadly would have that
effect. David, I can't tell you how much I love that, and how much I think that, in some ways, over the
last 50 years, that's what's gone wrong with the economic condition in this country: that
labor has never been offered an ownership stake in the value of their productivity. And Daron,
I want to ask you about that. And then, and I've so appreciated this conversation. But great.
You know, when we talk about productivity gains, because that's always how it's framed, it always outstrips wages.
Always.
And maybe that's just the way that this system is.
No, it's not how it was until the mid-1970s.
Exactly.
But I'm saying since the 19- Yeah, for 50 years.
Since the Reagan revolution.
That's right.
But, you know, like people say that about like, oh, the capitalist system.
Well, it was a capitalist system in Europe and the United States from the 1940s to the mid-1970s.
That's right.
Wages grew faster than productivity.
Workers with less than a college degree had faster wage gains than managers.
That was feasible.
There's nothing in the laws of economics or in the laws of democracy against that.
We just chose a different path since 1980.
And do you think at this point those powerful corporations have... there's almost a... that they
kind of have us at an extortion point, where they say, you know, oh, if you try and do anything
to regulate us or you try and do anything to tax us, we'll leave.
Well, look, this is such an important point.
This is such an important point, Jon.
First of all, these corporations are absolutely enormous.
I mean, it's not a fair comparison, but I just did the calculation last week.
Each one of the largest seven tech companies has annual revenues in current dollars twice as
large as the entire British Empire's GDP in the middle of the 19th century.
These are enormous, enormous corporations.
They need to be regulated.
But the rhetoric that they cannot be regulated, AI cannot be regulated, that's false.
China proves it.
Okay, I don't approve of what China does.
I don't approve of what they intend to do.
But they show very clearly AI can be regulated.
Tech companies... Alibaba is now completely subservient to the interests of the Communist Party in China.
We could also make Google and OpenAI and Anthropic
be much more in line with the democratic priorities in the United States.
There is nothing in the laws of economics, in the laws of physics that says these companies cannot be regulated.
They're not delicate flowers.
When Sam Altman says, oh, if you charge us for intellectual property, you know, we'll be put out of business...
Not only is that not true,
it's kind of pathetic, because they're saying, we don't produce anything of value;
if you actually make us pay for inputs, no one would buy it.
Right.
That's crazy.
And it's not true.
So, I mean, look, I think, yeah, there's a constructive path here.
We don't need to shut it down.
We don't need to like regulate it to death so it can't move, right?
The US is innovative and that's great.
We have a lot to be proud of in that we have led this technology.
We're building it out quickly.
You know, it's valuable.
But we need to, it's an opportunity and we could squander it.
We need to steer it.
If it's just left to its own devices,
it's not going to be pro-worker.
What you're hearing both from me and David is that AI is a very promising technology.
But that's precisely the reason why we've got to take the
care to make sure that we use it for the right things.
Gentlemen, you have done the impossible.
You have done the impossible, which is you have somehow... not allayed my fears,
but you've given me hope that the future has actually not yet been written.
And what that does is it creates opportunity.
And when you have those opportunities, you can write it in the proper way.
But I think what you've done really well today is you've given
specifics, that none of this is platitude. This is all the specificity of here's what it could do,
here's the damage it's going to do, here's a way to mitigate it, and here's some ways to give us
a shared prosperity for it. And I think that's truly, I think that's the conversation,
the two of you, have you thought about having a podcast?
We were hoping we would join you after this.
What?
Oh, yes.
Unfortunately, what I've done is I had my data scientists.
They've been strip mining this conversation.
I don't need you.
We're done.
I've created AI avatars of the two of you.
And now we're done.
Fantastic.
But guys, that frees some time.
Man, thank you so much for this conversation.
I've truly appreciated.
Daron Acemoglu, Nobel laureate in economics and MIT Institute Professor,
and David Autor, Rubinfeld Professor of Economics at MIT.
Guys, fantastic and really appreciate it.
And I hope to continue the conversation with both of you.
Thank you so much for having us on.
It's superb.
We love what you're doing.
And it's great to have this conversation.
This was fantastic.
It's a lot of fun.
Thanks, Jon.
Holy smokes.
I'm feeling something.
Are you feeling something at home?
Are you listening to this?
Are you feeling something?
I'm feeling the possibility of futures unwritten.
the opportunity that it gives us to correct our path, to put us on a righteous path towards a more
positive, productive, equal future. My God. And I apologize, we don't have our normal staff chat
today because, as you can see, I'm on the road. So we weren't able to accomplish that. But man,
I so appreciated what those gentlemen were saying and the specificity of it. And I hope you did too.
and it's put me in something that I've needed for a little bit, which is a better mood.
I am now... and by the way, maybe I'm drinking the Kool-Aid too, but I am in a slightly better mood
than I was at the beginning of this whole schmegegge.
But man, that was, I enjoyed that conversation tremendously.
And thanks, as always, to our fantastic team:
lead producer Lauren Walker,
producer Brittany Mehmedovic,
producer Gillian Spear,
and video editor and engineer Rob Vitolo.
He and Nicole Boyce, our audio editor and engineer,
had to work today. Today
was a day when I
couldn't figure out how
to log into Riverside.
They had to do the extra work today.
And as always, our executive producers, Chris McShane
and Caity Gray. Very nice,
and we shall see you next week.
The Weekly Show with Jon Stewart is a Comedy Central
podcast. It's produced by Paramount
Audio and Busboy Productions.
