The Prof G Pod with Scott Galloway - Why CEOs Are Getting AI Wrong — with Ethan Mollick
Episode Date: February 12, 2026
Ethan Mollick, professor at the Wharton School and author of One Useful Thing, joins Scott Galloway to examine the biggest mistake companies are making about AI. They discuss why fears of mass job loss may be premature, how quiet productivity gains are already reshaping work, and why most organizations lack the imagination to redesign themselves around new technology. Ethan also explores AI in higher education and medicine, the rise of open-weight models, and what all of this means for young people entering the workforce. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Episode 383.
383 is the country code for Kosovo.
In 1983, Return of the Jedi hit theaters.
What do you call a brand new baby Yoda butt plug?
A Toyota Prias.
It's actually funnier, the more you think about it.
Go!
Welcome to the 383rd episode of the Prof G Pod.
What's happening?
The dog has been making the rounds across traditional media,
spreading the word on resist and unsubscribe.
A little bit of background.
Let's bring this back to me.
Came out of the gate strong.
Got between 60,000 and 100,000 uniques a day,
and I'll come back to that,
which is not easy with absolutely no paid marketing
to drive people to the site.
And then it hit a bit of a lull on Monday or Tuesday,
so I did some research on how to arrest or reverse the lull.
And what I found is that with many of the most successful movements or boycotts,
it's not the actual economic impact.
It's the media's coverage of potential economic impact and shaming.
What was interesting about the most recent successful movement, when Disney backed down and put Kimmel back on the air, is that the number of unsubs to Disney Plus was actually in decline when they made that decision.
But media coverage had increased.
And media coverage creates a lot of momentum: employees feeling bad, partners unable to get deals done, more and more distractions on earnings calls. So I thought, okay, I did this myself, got it up with the help of my outstanding team, had some initial success. Now I've got to go get traditional media. And so, I didn't just become a media whore this week. I became a media ho. Let's take a listen.
Resist and unsubscribe. Resist and unsubscribe. Explain to me why I should unsubscribe from
Amazon Prime.
If you really want to hurt or send a message to the president, what he does listen to is the following.
If you look at the times when he has immediately responded and pulled back,
it's been when one of two things has happened:
bond market yields have spiked or the S&P has gone down.
This is when he backed off of his plans to annex Greenland.
It's when he's backed off of tariffs.
When you go after big tech platforms with just a small decline in spending,
this is what moves the markets.
I think the string we can pull here is to go after the subscription revenues of big tech, which now represent 40% of the S&P. You're hitting them with a $10,000 decrease in market cap with just one subscription cancellation. So this is a chance to go after the soft tissue
of big tech whose leaders the president appears to be listening to.
Anyways, we've got literally millions of views from these and they get circulated. And there's something about traditional media that still has a halo effect. Jessica Yellin, by the way, who I was on with this week and who I adore, pointed something out that was really important: the economic model of traditional media is in collapse, but its relevance is still pretty substantial. If you look at the stuff that really gets broad distribution online, or a lot of clicks, it's usually a snippet from traditional media. So while traditional media's economic model is in serious decline, its relevance in some ways only grows. So what began as an idea has turned into measurable action: since February, almost 600,000 people have visited resistandunsubscribe.com, and the campaign
has generated over 16 million views across social platforms with 14.7 million on Instagram and
Facebook alone, plus over a million on Threads.
Thousands of people have publicly posted using our sticker template, signaling something important.
This isn't passive outrage. It's economic coordination. The question isn't whether this works.
It's how we scale it and what the metrics for success are. And to be blunt, this wasn't as much a
coordinated effort as it was an attempt to have action absorb anxiety. And my team was on board,
and I have a group of very talented people. But what we've done here is the following. I did not
want to coordinate with other groups. People have been pinging me, talk to these people at this union or this
activist group, and I'll talk to anybody. But the idea of getting on the phone, I've heard from a lot of
kind of celebs and journalists who said, I wish you'd called me. The idea of getting on the phone
with a bunch of activists and people wearing Birkenstocks, with viewpoints on which big tech platforms we should
unsubscribe from or not, and people masturbating over every word on the site, that sounds
like my worst fucking nightmare. So while I realize greatness is in the agency of others, the greatness
I'm leveraging is the people within our circle. And to give you a sense for the metrics,
we're getting upwards of or near 100,000 unique visits a day. Now, if you ask ChatGPT or Claude,
what would be required to put up a site and get 100,000 uniques?
What would the cost be? Say you were building an e-commerce site or a political action committee site,
and you were asking for an action, a call-to-action to drive people to the site, and then you were asking for another action at the site.
Both ChatGPT and Claude came back and said, all right, the site would be about 100 to 200 grand, right?
That's the cheap part. What's interesting is if you wanted sustained traffic of 100,000-plus unique visitors each day,
they estimate you would need, get this, a monthly budget of four to five million dollars across
Alphabet, Instagram, Facebook ads, earned media, et cetera. So the way I see it is the following,
and the metrics I'm tracking are the following. What would this cost? This is like a chaser effect.
I'm going to spend a lot of time, treasure, and talent trying to get Democrats elected in 26
and trying to find someone more reasonable to take on or to occupy Pennsylvania Avenue.
I'm going to spend a lot of money on Democratic politics. If one man can spend $300 million,
then we need a hundred of us at least to spend $3 million or more to push back on this.
The way I see it is this effort is kind of doubling or tripling every dollar that I'm going to commit to trying to get moderates back in the House.
In addition, the math I'm doing is the following.
You know, would I love it if all of a sudden Sam Altman and Tim Cook were saying, you know, no masked agents,
and it was a clear sign that this was working,
and the Trump White House had to respond, yeah, that has not happened.
As a matter of fact, I asked ChatGPT to summarize the effort so far,
and it said the product management teams are talking about it,
and that people and companies are talking about it,
and we've got a lot of media exposure,
but executives are not talking about it,
meaning that if the stated goal is some sort of action on the part of the companies
or the White House, that just hasn't happened so far.
But the way I see it is if I can sustain 100,000 uniques a day to a site. These are people not being driven by Facebook or Google ads; they're intentionally deciding to go to this site. I used to be in the world of e-commerce. You hope for a conversion rate of 2 to 4%. I think I'll get at least 3%, because these are people who are coming of their own volition, who've decided consciously to come to a URL. So let's walk through the math. 100,000 visitors a day, 3% unsubbing from an average of three platforms. So let's be generous and call it 10,000 unsubs each day, right? That's 300,000 unsubs through the month of February. Three hundred thousand unsubs, average dollar value $100, so that comes to $30 million in lost subscription revenue. The average multiple on revenues is 10x, so that is a $300 million market cap hit, a notional hit to these firms. Does that make any difference in the big picture? Probably not. But if we can get a bunch of people to figure out a way to ding big tech by a third of a trillion dollars, something is going to happen. And that's the whole point here. The signal we're trying to send is that one person with a footprint, and it could be your parish, it can be your sports league, it can be your friends, maybe you have a little bit of a following online, can take action with fairly little effort. And this has been an effort more so for my team than me.
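To make the back-of-the-envelope arithmetic above concrete, here is a minimal sketch in Python; the figures are simply the rough numbers quoted above, not independent estimates.

```python
# Back-of-the-envelope math from the discussion above (illustrative only;
# every figure is the rough number quoted in the episode).

daily_uniques = 100_000       # organic visitors per day
conversion_rate = 0.03        # ~3% of visitors actually unsubscribe
platforms_per_person = 3      # average platforms cancelled per converter
days_in_month = 30            # roughly the month of February
value_per_subscription = 100  # dollars of revenue per cancelled subscription
revenue_multiple = 10         # market cap valued at ~10x revenues

# 100,000 * 3% * 3 = 9,000 per day, which the episode rounds up to ~10,000.
daily_unsubs = daily_uniques * conversion_rate * platforms_per_person
monthly_unsubs = 10_000 * days_in_month                   # ~300,000 unsubs
lost_revenue = monthly_unsubs * value_per_subscription    # ~$30 million
market_cap_hit = lost_revenue * revenue_multiple          # ~$300 million notional

print(f"Unsubs per day (unrounded): {daily_unsubs:,.0f}")
print(f"Unsubs for the month: {monthly_unsubs:,}")
print(f"Lost subscription revenue: ${lost_revenue:,}")
print(f"Notional market-cap hit: ${market_cap_hit:,}")
```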
But more than anything, what is required to have a voice in a chorus of pushback?
It's the following.
An absence of fear of public failure.
Really, the only thing getting in the way of me doing this was the fear of public failure.
The fear that you were going to throw a party and no one would show up.
The fear that, oh, maybe I'd be a good sophomore class president, but I don't want to risk public failure.
The fear of reaching out to someone who you're impressed by and saying, let's get together for the game,
the fear that they wouldn't be friends with you because you think they're much cooler than you
are. The fear of applying for a job that you feel you're not qualified for. The fear of
living the life you want to live. Who do we respect the most? I'll shift that. Who do I really admire
at the end of the day? The best example I can use is occasionally I'll find myself in a situation
when I'm on vacation and people start getting drunk and someone gets up and starts dancing as if
no one's watching. Some dude who has no rhythm is just having a great time. And then
inevitably, and this is more fun, some exceptionally hot person gets on a table and starts dancing
as if no one's watching them. That's how you want to live your life. You want to live your life
as if what's important to me, how can I make a difference, and just pretend or just imagine
that no one's watching. Here's the bottom line. In a hundred years, nobody you care about
and nobody who cares about you is going to remember you or anyone you knew. So here's the key. Here's
the key to taking action. Here's the key to having an impact. Here's the key to living a self-actualized
life: recognizing that of every obstacle that is in your way, nothing is as big as the obstacle of the
following, and that is your fear of public failure. And your fear of public failure is a barrier,
but it's a two-inch-high curb in your brain. It just doesn't matter. And the people who
punch above their weight class, economically, psychologically, romantically, are the ones who have
decided that the risk of public failure is a much smaller risk than everybody else thinks.
If something had gone wrong, if I had started this movement and nobody showed up and it was a hit
to my credibility, okay, then everyone goes back to thinking about themselves.
So the fact that it's worked is really reinforcing, but more than anything, I wanted it to be a signal
to people to say, hey, take action, do something. And more than anything, if there's a lesson in
any of this, that I could communicate to young people, it's that the only thing or the biggest thing
between you and having relevance and meaning and living the life you want to live is the following.
Dancing as if nobody is watching you.
Moving on, in today's episode, we speak with Ethan Mollick, professor at the Wharton School and
author of Co-Intelligence. Ethan is a leading voice on how AI is changing work, creativity,
and education. He also writes the popular
Substack, One Useful Thing.
So with that, here's our conversation
with Ethan Mollick.
Where does this podcast find you, Ethan?
I'm outside of beautiful Philadelphia, Pennsylvania.
There you go. Are you at school? Or is that
your home? I'm at my home, yeah.
That's my game collection back there.
Oh, I like it. So let's bust right into it.
Anthropic CEO Dario Amodei
recently released a 38-page essay in which he delivers a very
ominous warning about AI
and the threat it poses to our society.
Why do you think the CEO of one of the largest AI companies
in the world seems to be so pessimistic about AI,
and what do you make of this view?
Is it more of this kind of virtue signaling
and not meaning it,
or do you think he's genuinely trying to build a better AI?
So I think that there's always debates, right?
There's like external facing,
but like when you talk to these people internally,
I think Anthropic is fairly sincere
about their views about how AI works.
You may or may not agree with them.
He actually has a pair of essays,
one on, like, the bright future ahead
of all of us, and the other about our potential doom,
and pointing out issues that may actually occur.
So, you know, it always is a question of weirdness
that you're building this thing if you're so worried about it,
but I think it is a sincere anxiety.
A quote from the essay,
Humanity is about to be handed almost unimaginable power,
and it's deeply unclear whether our social, political,
and technological systems possess the maturity to wield it.
Do you agree with that?
And also, what does Ethan Mollick think
are the biggest dangers of AI?
What are you worried about?
You know, I'm in kind of a weird boat here, which is I think that there's a lot of worry about the existential risks of AI.
And so people leap ahead five years and assume the current path continues.
And there's no sign yet, by the way, that AI is slowing down in development.
But there's a move towards the existential. You know, Dario in that essay talks about a group of geniuses in a data center that are smarter than any human: what would they do?
I'm actually much more concerned with thinking about how we guide the next few years to make AI help people
thrive and succeed rather than the negative consequences that could happen.
How do we mitigate those negative risks?
So I think there's a nitty-gritty path between here and some imagined future.
We don't know if AI is going to get there, to sort of super powerful and autonomous, but we do
know it's disruptive today.
So I worry a lot about how we model the right kinds of work so that when we start using
AI at work, we do it in ways that empower people rather than fire people.
How do we think about AI in education so that it helps students learn rather than undermines
learning?
How do we think about using it in society in ways that don't lead to deepfakes and dependencies?
I think there's two sides to each of these coins, and we need to get very nitty-gritty about which things we care about.
Well, I'll put forward a thesis, and you tell me where I've got it right or wrong.
I'm actually an AI optimist, and I think it's easy:
you just sound smarter when you catastrophize, and I do a lot of that.
But the existential risk of it turning into a sentient being and deciding in a millisecond
that we should no longer exist, or self-healing weapons,
I don't see any reason why AI couldn't be used as much for defensive measures as offensive.
Inequality, that's already here.
We've opted for that.
But what I see, I'm an investor in a company called Section AI that helps corporations
upskill the enterprise for AI.
And what we have seen, or what they have seen, is that adoption is woefully under-penetrated
within the actual organizations. At least in the enterprise, individuals are using
AI for therapy or to reduce their workload on a Friday. And I also wonder if the CEOs have a vested
interest in catastrophizing, because it makes
it sound like the technology is world-changing and that much more powerful, and please sign up for my
$350 billion round at Anthropic. Is some of the dread and doom, quite frankly,
just inflated? I mean, I'm sure some of it is. But there's also, they drink their own
Kool-Aid. Like, they believe this stuff, whether that serves marketing or not. But I do want to talk
like a business school professor, right? You know, like you. And so I've been doing a lot of work with my
colleagues on the impacts of AI at work. And there's a few things. One is, there are fairly large
impacts. In a randomized controlled trial, an early experiment I did with colleagues at Harvard,
MIT, and the University of Warwick at Boston Consulting Group, we found 40% improvements in quality using the
now obsolete GPT-4, with people who weren't even trained, and about 25% faster work. Penetration rates are up there.
It's interesting: at companies,
people are using AI, but they're not talking about it.
They're not using the corporate AI.
So about 50% of American workers use AI.
They report, by the way, three times productivity gains on the tasks they use AI for.
They're just not giving that to companies, right?
Because why would you?
Like, you're worried you'll get fired if AI shows that you're more efficient.
You're looking like a genius right now.
And maybe, you know, the AI is the genius.
You're doing less work.
So I think that there's a difference between what companies are seeing about adoption,
what's actually happening with adoption.
The CEO of Section says that right now it's being
used at work for therapy. And so someone who understands AI is giving themselves another day off,
so why sharpen the sword publicly that you cut your own head off with? What specific tasks at
work, in your studies and your research, have registered the greatest increases in
productivity? What's been overestimated and what's underestimated in terms of the disruption
or the improvements in productivity at the workplace? So I think the big-picture overestimation
and underestimation is that work is complicated and organizations are complicated, right?
So you can get lots of individual productivity gains, but if that's producing 10 times more
PowerPoints than you did before, that's not necessarily going to translate to any actual benefit
for the company.
So leadership needs to start thinking about how do you build organizations around this.
At the individual level, though, huge impacts.
And, you know, coding especially has taken this massive leap.
We have earlier evidence that showed about a 38% improvement in the amount of code people
were writing once they started using agentic coding tools, with no increase in error rates.
But that's even increased further.
With the newest coding tools, the people in charge at the research level at both OpenAI and Anthropic have said 100% of their code is now written by AI.
That's actually quite believable, given how good these tools have become.
We're seeing similar things across a range of tasks.
Medicine is an area we're seeing impacts in.
Scientific publication is a super interesting area.
People started using AI early to write scientific papers.
And we know this because there's a great study that looks at when they started using the word delve, which was a dead giveaway
that you were using AI back in 2023.
If you used delve a lot in 2023,
then you actually published about a third more papers
in higher-quality journals afterward.
Now, the question of whether
it's good for science
to have more AI writing is a separate issue.
So that's the sort of process versus detail problem, right?
People are becoming individually more productive.
The system isn't built to handle
a mix of high quality and low quality
and just more work.
And that's where the bottleneck often is.
You coined this great term to describe AI
called the jagged frontier.
I love that, which encapsulates how AI is really good at certain things, but really bad at others.
I'm the CEO of a Fortune 500 company.
I've just spent a bunch of money on an Anthropic site license, and, you know,
the music has to match the words in terms of my embracing AI on earnings calls.
If you were advising me, and I said, look, where should I be over-investing and under-investing,
what areas of the organization should I focus on to try and deploy AI for
meaningful productivity gains, and which areas should I avoid that aren't yielding the type of benefit
that was once advertised?
So I think that equation starts with a realization, which is nobody knows what's going on, right?
Like, I talk to all the AI labs on a regular basis.
I don't take money from them.
I talk to them all.
I can do research on this.
I talk to policymakers and CEOs.
And it's not like there's a playbook out there, right?
We're about a thousand days in after the release of ChatGPT.
Like, everyone's figuring this out at the same time.
I'm seeing companies getting incredible amounts of benefit
and other companies struggle,
and part of that is how much they're willing to embrace the fact
that they have to do R&D themselves.
So part of the value of giving people access to these tools
is experts figure out use cases, right?
If you're doing something in a field you know well,
it's very cheap to experiment with AI
and figure out what's good or bad at
because you're doing the job anyway,
and you instantly look at the results
and see whether they're good or bad results.
If you're paying someone to do R&D for you,
that's a very expensive process.
So people are inventing uses all the time.
So the most successful cases I'm seeing
are a combination of what I call leadership, lab, and crowd.
The leaders of the company have a clear direction,
set right incentives to make things happen,
think about process.
They give the crowd, everybody in the organization,
access to advanced tools
like Anthropic's tools or OpenAI or Gemini.
And then they have an internal team
that is actually thinking about what to build.
So they're harvesting ideas from other people.
So I'm seeing this happening everywhere from,
you know, there's certainly a lot of stuff happening
with internal processes, security, customer service,
like lots of stuff on analytics.
Like, the AIs are quite smart.
So if you let them do analysis work,
you can actually get really big impacts from that as well.
Just wide ranges, but very different across organizations
depending on where their expertise is
and how aggressive they are about trying to experiment.
I know Mark Benioff,
and it's borderline obnoxious
how many times he'll figure out a way
to insert the term agentic AI
or the agentic layer.
And to be blunt,
I'm not sure I entirely understand
the difference between AI and agentic AI.
Can you break it down for us
and why so many really smart people,
such as Mark Benioff,
seem to be talking about agentic AI?
It's a great question.
First of all, you know,
you started this off by talking about marketing.
Anytime a new phrase comes out,
there's a blur of confusing different interpretations of it
because everyone wants to sell AI product right now.
So it's really easy to get bogged down.
So an agent basically can be defined as an AI
that's given access to tools,
so it can do things like write code
and search the web, and,
when given a goal,
can autonomously try to accomplish that goal
on its own and correct its course if it needs to.
So an agent would typically be something
where you could say, hey, you know,
I've got to have Ethan on this podcast,
research everything about him,
come up with a pitch deck on
why we might have him on the cast,
talk about interesting things that he might have said before,
and then boil this down to five really good questions to ask.
And it would go out and do this,
the research, and 20 minutes later, you get kind of a complete result. That's an agent at work.
So agents are basically the chatbots that you use today when you go to ChatGPT, plus what we call
an agentic harness, a set of tools and capabilities they have, searching the web, writing code,
connecting to your data, that lets them do more work. When you combine those two together, that's where you get semi-autonomous AI.
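For readers who want to see the "model plus harness" idea in code, here is a minimal, hypothetical sketch of an agent loop in Python. The call_llm, search_web, and run_code helpers are placeholders, not any particular vendor's API; real agent frameworks wrap this same pattern with far more machinery.

```python
# Minimal sketch of an agentic loop: a chat model plus a "harness" of tools,
# looping until the goal is met. Every helper here is a hypothetical placeholder.

def call_llm(messages):
    """Placeholder for a call to any chat model; returns a dict such as
    {"tool": "search_web", "input": "..."} or {"tool": None, "content": "answer"}."""
    raise NotImplementedError

def search_web(query):
    """Placeholder tool: run a web search and return text results."""
    raise NotImplementedError

def run_code(source):
    """Placeholder tool: execute code in a sandbox and return its output."""
    raise NotImplementedError

TOOLS = {"search_web": search_web, "run_code": run_code}

def run_agent(goal, max_steps=20):
    """Give the model a goal and let it pick tools, observe results, and retry."""
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply.get("tool"):                          # model asked to use a tool
            result = TOOLS[reply["tool"]](reply["input"])
            messages.append({"role": "tool", "content": str(result)})
        else:                                          # no tool call: final answer
            return reply["content"]
    return "Stopped after max_steps without finishing."
```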
Give me, I'm a CEO, a student, a mid-level professional,
and I've done very little so far around AI, and I want to catch up, what is the
Ethan Mollick AI tech stack? What should I be downloading, subscribing to? How do I get started
here? What LLMs, agents, whatever the term is, would you recommend investing in right now?
The good thing about AI is it's very democratic, right? There's no better model than the ones
you have access to today. You or every kid in Mozambique has the exact same tools that are at Goldman Sachs
or Department of Defense or anywhere else.
There's no better models.
They're basically being released as soon as they come out.
That being said, the really good models
tend to cost you at least $20 a month.
So you are probably going to want to subscribe
to either Google's Gemini product,
Anthropic's Claude product,
or OpenAI's ChatGPT product for $20 a month.
And when you do any serious work, you're going to want to
pick their advanced thinking model.
So GPT-5.2 Thinking is the important one to use,
Anthropic's 4.5 Opus, and Gemini 3 Pro.
Those are the sort of starter pack of tools you can use.
They're all capable of doing agentic work.
You can access them through the chatbot.
And I always recommend people just start by trying to do stuff they do for their job.
Ask it for everything you do that day.
Just ask the AI also.
Generate some ideas for me.
Give me feedback on this.
Help me write this email.
Create the presentation.
That will help you map the jagged frontier of what AI is good or bad at.
And it's a really good starting point.
Like there's a lot of other complicated stuff.
If you want to do, you know, research, the deep research tools from Google are currently
better through this product called NotebookLM, and that's free and that's very good. If you want to do
coding, you probably want to use Claude Code, which you have to download. But the basics are,
pick one of the big three, pay the $20 a month, and then start using them. You need eight or
10 hours of just talking to it like a person and seeing what results you get. And give us the lay
of the land. My sense is that OpenAI was dominant, and it's still dominant, but the Empire
strikes back: specifically, Gemini is making inroads, capturing share, and Anthropic has made
real progress in the enterprise market. So that's the limit of my knowledge about the playing field.
Can you add color to that around the dynamics, the interplay here? If this were a league,
which teams are coming up and which are descending? So to take half a step back, right,
what drives the underlying dynamic is something called the scaling laws. And the scaling laws
basically tell you the larger your AI model is,
which means the more data you need to build it,
the more data centers, the more electricity,
the more chips, the better your AI model is.
And it's very hard to build a small model
to compete against the larger model.
They're just better at everything.
You can build, you know, once you have one of those,
you can do all kinds of variations,
but you have to build a big model.
And there's a bunch of other tricks
that you could do on top of that,
but that's pretty critical.
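For the curious, the scaling laws Ethan is referring to are usually written as a power law in model size and training data; the sketch below uses a Chinchilla-style form with illustrative constants, not fitted values.

```python
# Rough shape of an LLM scaling law (Chinchilla-style): predicted loss falls as
# a power law in parameter count N and training tokens D. Constants here are
# illustrative placeholders only.

def scaling_loss(N, D, E=1.7, A=400.0, B=400.0, alpha=0.34, beta=0.28):
    """Approximate training loss for a model with N parameters trained on D tokens."""
    return E + A / N**alpha + B / D**beta

# A bigger model trained on more data gets a lower loss (a "smarter" model),
# but with diminishing returns, which is why each step up costs so much more.
print(scaling_loss(N=7e9, D=1.4e12))    # a ~7B-parameter model
print(scaling_loss(N=70e9, D=1.4e13))   # a ~70B-parameter model on 10x the data
```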
And because of that, there's only a few companies
that can actually play in this space, right?
So in the U.S., we mentioned the big three,
which is Google, Anthropic, and OpenAI.
There's also Elon Musk's xAI, which has been scaling quite quickly,
and there's also Meta, which has been quiet recently,
but is spending a lot of money in this space.
Outside of that, there's a lot of people with smaller models,
but they're not really competitive.
Amazon, Apple, they don't really have their own models that compete.
There's also three or four big Chinese companies
that are producing very good models, or at least releasing them for free to the world,
and one French company in the same boat.
So within that dynamic, there's this competition about who could build the biggest data center,
who could train the biggest model because bigger models are smarter,
who could put the most research and tricks into them.
And it really is interpersonal in some ways.
Like the heads of these companies are really out to get each other, right?
Like they do care about winning this race.
They think they should be dominant.
And so there is a lot of resources being put into getting ahead of the other people in this space,
one way or another.
So right now, the sort of three most polished models are Google's, OpenAI's,
and Anthropic's.
And again, which one is better is changing on a day-by-day,
or at least week-by-week, basis,
as each one releases new approaches.
And then, you know, we're waiting to see
if anyone else kind of catches up to them.
But those three are in a very tight race.
As soon as one of them comes up with a product
that uses AI in a new way, the other two copy it, right?
So Claude Code is currently the very hot coding tool.
OpenAI has Codex, which is a very similar thing.
Gemini has its own set of tools.
Deep Research was invented, or first came out, from Google.
Now there are Deep Research products from Anthropic
and from OpenAI.
So you can kind of pick any of the three of them
and be in good shape, as long as they can keep growing and spending money and they don't hit a wall in development, which hasn't happened yet.
If I think of luxury brands, BMW, Mercedes, and Audi, I think I could do a reasonable job of attempting to outline how they're differentiated from one another and who is the right customer for each of those brands.
Can you do the same thing for those big three?
Are they all just kind of mostly the same?
I can, right?
What I worry about is trying to talk to all the various levels, right?
What do you do if you're just starting off?
Pick any of the three, you'll be fine.
But I think people who use them a lot, they have personalities, right?
Those personalities are shaped by the companies, the way they train.
I mean, it's amazing that they're all so similar to each other that things basically work across all three.
Like, you wouldn't expect Microsoft and Apple to produce a system that works exactly the same.
These are similar enough that for most people, it doesn't matter.
But if you care, right: Opus 4.5, Anthropic's models, tend to be known as the best writers of the bunch.
They're often quite good at sort of intellectual topics.
They're a little fussy in terms of, you know, they have high ethical standards relative to the other models.
ChatGPT is really two different flavors of models.
There's a set of chat models that are really optimized for you to have conversations with and role play and be friendly.
I don't tend to use those much because I tend to focus more on the work aspect.
And they have a series of very logical, very good at long task models that are very good at producing a lot of work.
And Gemini is an interesting case, a very smart overall
model, weirdly neurotic. Like, it gets self-flagellating. If you tell
it it did a bad job, it apologizes and kind of grovels. Weird kind of dynamic there. So they all have
their own sets of personalities and approaches. I find that Anthropic is more politically correct.
ChatGPT will give it to me straighter. And then when I go to xAI, it seems like it's purposely
trying to offend people, it's going the other way. It's interesting you say that they all take
on personalities. With respect to differentiation, the data I've seen is that most of these
models are converging towards parity. It is very hard to maintain any sort of substantial or
sustainable differentiation because AI just reverse engineers other AI. Do you see the same
regression to the mean that I'm seeing? I wouldn't call it regression to the mean. We're seeing a race,
right? There is huge impact. Each model generation is much more capable than the one before, right?
So we keep crossing these lines where, oh, the AI can't, you know, work with Excel.
And suddenly it works with Excel better than, you know, and does a discounted cash flow analysis better than most bankers, right?
Or, you know, the AI can't produce a PowerPoint, and suddenly it can do that, or it can't do math.
And suddenly, you know, last year, two models won gold at the International Math Olympiad.
So it's not a regression to the mean, because there's
no drop in ability level. The ability levels keep going up. But all of the companies in
the space are on roughly the same development curve, right? Their models keep leapfrogging
each other by a fairly predictable amount over time. And you can draw a pretty good curve
on any benchmark that you want that shows the same exponential gain in AI abilities. So which raises
the big question of like, so what happens in the long term? And I think that depends on what the
long-term of AI looks like. There's one version where we just keep having a race of capability
and you need to stay ahead and you pick a model maker.
As long as they stick with you, you keep paying them money.
There's a version where one of them achieves what's called takeoff.
Their AI models become self-improving,
and they build the smartest possible model
and no one can catch them, and they build artificial general intelligence,
machines smarter than a human at every intellectual task.
There's some apotheosis or endgame.
Or there's a version where everything sort of plateaus out,
and then the people who spent billions of dollars building models
find that eventually three Chinese models
or another company catches up,
and there's no money in it and it becomes commoditized.
I don't know which of those three scenarios dominates.
We'll be right back after a quick break.
This week on Net Worth and Chill, we're joined by Victoria Garrick Brown,
former Division I athlete turned body positivity advocate and entrepreneur
who's dismantling the lies we've been sold about our worth.
From battling eating disorders as a student athlete to building a platform that's reached
millions, Victoria's journey is a master class in turning personal pain into purpose and profit.
She opens up about the real financial cost of chasing beauty standards, why the skinny girl
industrial complex is designed to keep us broke and insecure, and how she's built a business
around authentic self-worth without selling out her values.
We dive deep into the economics of body image, the influencer money game, and what it actually
costs to love yourself in a world that profits from your insecurities.
Listen wherever you get your podcasts or watch on YouTube.com slash your rich BFF.
Support for today's show comes from Hungry Root.
Habits are hard to change, and oftentimes it's not about a lack of motivation, but more about not having the right options at your disposal.
Like, if you're looking to change up your diet, you can't expect to make the move if all you have are snacks and junk food.
That's why there's Hungry Root, for those of you looking to up your nutrition and eat healthier.
Hungry Root basically works like a personal nutrition coach and shopper in one by planning, recommending and shopping everything for you.
They take care of the weekly grocery shopping, recommending healthy groceries tailored to your taste, nutrition preference,
and health goals. We've gotten to try Hungry Root on the Prop G team, and people reported back
that it was surprisingly easy and good tasting. And people were able to spend less time worrying
about grocery shopping. Take advantage of this exclusive offer. For a limited time, get 40% off
your first box, plus get a free item in every box for life. Go to Hungryroot.com slash PropG
and use code PROPG. That's Hungryroot.com slash PropG to get 40% off your first box
and a free item of your choice for life.
One of the theses we had for '26, and what I see potential for, is similar to how the Chinese engaged in dumping of steel: predatory pricing, hoping to basically put American steel producers out of business, consolidate the market, and then turn on pricing power.
I wonder if the Chinese are now engaging in what I would refer to loosely as AI dumping, and that is some of these models appear to be really strong, sort of the Old Navy of AI:
80% of the best models for 10, 20, 40, 50% of the price.
And a lot of VCs and big firms have said,
we're using these models.
They're just a better value.
Do you see any sort of geopolitical chess here
around the Chinese engaging in some form
of what I would refer to as AI dumping?
I mean, there's something interesting going on
because an open weights model,
which is a model that you release publicly
to the world that anyone can run, right?
So if I want to use ChatGPT,
I have to go to OpenAI and use ChatGPT to do that.
If I want to use one of the Chinese models like Qwen, any company in the U.S. can download that model and run it themselves.
So that model, based on open source, made sense for software, because I could give away my core software for free but then sell you services.
It doesn't actually make a lot of sense for AI companies because they're building a model and giving it away for free.
There's no ancillary benefit to that.
They don't get a gain in the long term.
They're not selling other solutions.
They have no special prize or tool left behind,
in most cases. So there is a little bit of weirdness about how long Chinese companies will sustain
releasing free models. They're about eight months behind, you know, consistently eight months behind
the frontier of U.S. models. And, you know, what's driving that, right? Is this a state-sponsored
effort in the long term? Right now, it's not clear that it is, but it might be that there is some
sort of, you know, dumping kind of effort. On the other hand, there's the question of whether the degree of
intelligence is fungible. Like, if you are talking to a CEO and they're saying, we're going to use a Chinese
model because it's cheaper: the cost
of models has dropped 99.9%
for the same intelligence level in three years.
You actually,
for most applications, want the smartest model
that's most capable of doing tasks
as cheaply as possible. So fixating
on a model that's not as good may end up being
a problem. Like, this isn't an equation
where we're done yet, and we can pick among
roughly equivalent products, because we're
racing up a curve of ability
that's still changing over time.
When you look at the AI supply chain,
and my guess is you can articulate
the actual supply chain much more
articulately than me, but I think about the infrastructure layer, the chips, then I think about the
LLMs and the apps on top of it, and then services for adoption here. But I also think about
power and data centers. I'm not even sure where that comes into the stack. But if there's a
choke point here, and it might be just capital to fund all of this, what do you think are the
biggest choke points that stand in between these CEOs and the brave new world of
AI they talk about? And, you know, I heard that
it takes five years to hook up a data center in some parts of the nation to the power grid.
What do you see as the choke points that get in the way of this brave new world, so to speak?
Yeah, and there's a few of them, right, and they're kind of jockeying against each other.
So as you pointed out, data centers are the sort of choke point, right?
How fast can I build one, and especially how fast can I power one and can I get enough chips to put in one?
Right.
So the power and building and chips are all a big deal.
For a while, data was the bottleneck, but AI companies have increasingly found that they can make their own data.
So it turns out, as long as you have some human data, large language models can create their own data and other models can train on that and that you get good results.
So data is not the choke point it was, but it could be again.
There's also a research choke point.
There's a lot of things that LMs do really well, but there's some parts of the jagged frontier that are still very jagged, right?
LMs don't have memory.
They don't learn things over time.
So I have to instruct them every time.
It's like I'm talking to an amnesiac every time I speak with
one of them. So continual learning is a problem that gets in the way of building these amazing
models for the future. They don't keep learning, you know, the way that humans keep
learning. Otherwise, you have to train them every time. So there's research bottlenecks.
There are energy, power, and data-center-building bottlenecks. And those are sort of the big ones right now.
From a policy perspective, energy is the big one that all the AI labs are worried about.
They could reliably turn energy and chips into money right now. And the question is, how fast
can they build those data centers?
I look at these things, and you're at a business school, I'm at a business school.
I look at the valuations of these companies, and I see one of two things needs to happen.
The valuations need to be cut in half, or we're going to see an incredible destruction
in human capital or the labor force to justify the expense here through efficiencies,
because I don't see a lot of new AI cars or AI moisturizers.
What I see is opportunities for efficiencies, which is Latin for cost cutting.
But my thesis is you're either going to see a really significant destruction in the labor force
in more information-intensive industries, or we're going to see valuations come down dramatically.
I'm having a difficult time understanding how any of these valuations can be justified over the medium term, much less the long term, unless these companies begin to register massive efficiencies, again, layoffs.
What do you think of that thesis?
First of all, I think you've laid out the tradeoff really well, right?
Which is, I think that people tend to view valuations as either bubble or not.
But the truth is, valuations are justified if the revenues can be made to justify them, right?
And the revenue targets are potentially achievable in a world where AI actually gets as good as the AI labs say it's going to get.
And we can argue whether that's going to happen or not, or whether there'll be a financial bubble.
I can't tell you the answer to that.
But I think the real tradeoff is what you just articulated, which is what it means for an AI company to achieve that revenue, right?
Let's assume that they succeed at doing that.
And that's where I think the starkest problem is, because I do worry a lot when I talk to
CEOs of companies.
They're used to seeing technology as efficiency gains, right?
Which, as you said, it means layoffs, right?
They want to see this as, like, okay, if one person could do 40% more work, I need 40% fewer people.
My desperate desire is to try and communicate to companies, something I think the AI Labs try
and say, which is this is also about an expansion of capabilities, right?
If you could do more work and different kinds of work, the boundaries of what a firm could do
could change, the capabilities of what you expect for people can do, this could be a growth
opportunity. I mean, you know, whether or not you believe them, like Walmart, for example,
has publicly been stating that they want to keep all their current employees and figure out
new ways to expand what they do, right, as opposed to Amazon, which has been kind of saying
we have to cut because of AI. There are other models out there, and I do worry about the lack of
imagination in corporate America, where the model is, ah, great, we could just keep cutting down
our number of people because AI does the work, as opposed to, what happens if everyone works
as a manager? What happens if we get 10 times more code? That doesn't mean we should have 90%
fewer coders. Maybe that means we can do different things than we could do before. What happens if
everyone's an analyst? What happens if we can give a better experience to every customer? And the
failure of imagination there makes me very nervous. My first job out of UCLA was at Morgan Stanley.
I was an analyst in the fixed income department, and I look back on that. And I even found
some old PowerPoint decks I used to pull together to pitch companies on debt offerings.
And I don't think the two years I spent there could be distilled to two weeks,
but it could probably be distilled to three months
if I'd just learned the basics of AI.
Having said that,
I haven't seen a huge destruction in jobs
across those information industries, in my understanding.
Unless he's lying to me,
I spoke to David Solomon,
same levels of hiring.
Big law firms appear,
or at least they're saying,
same levels of hiring.
Where do you see the greatest threat,
especially amongst young people coming out of college?
I've seen all of this doom and gloom about
young people, but the reality is youth unemployment is at 10%, which is by no means alarming.
Do you think there's a wave of labor destruction coming at the entry level of information-intensive
industries?
I think that people overestimate the speed at which large companies change, right?
And so I think you're right.
Like, I'd be shocked.
When has there ever been a technology invented three years ago that affects the labor market
that quickly?
It just doesn't happen, right?
I think that there is change in the system.
I think it's baked in, but I don't think it's there yet.
As you say, companies are just adopting this now.
They're just telling their employees, everyone use AI for something with no centralized idea
about what that's doing or how it's valuable.
No one has been rebuilding their process in a serious way around AI.
They're all in their first AI projects.
There's no consultant you could hire who does this.
So I think that you're right, in that as far as we can tell, and there's some debate,
Erik Brynjolfsson argues that we're seeing canaries in the coal mine.
Other people disagree.
But there's no giant signal that AI is responsible for labor changes right now.
Companies are blaming AI everywhere.
But realistically, if you look inside organizations, there's no wave yet.
That doesn't mean there isn't going to be.
Like, it's very hard to see. For example, let's just take something that's very well understood,
which is coding, computer programming.
Like, it is very clear that AI is going to change how programming works.
You can talk to any coder or any elite coder and they know it's going to happen.
It privileges people who know what they're doing.
The experts become more expert.
You get a huge multiplier.
it becomes a management job, not a coding job.
And that's going to change the hiring market.
It just hasn't done it yet.
And I think it's going to take a while for companies to figure out what that looks like and what that means.
We'll be right back.
Support for Prop G comes from Leesa.
Your mornings can't get off to a good start if you don't get a good night's sleep.
And if you're tossing and turning because your mattress isn't up to par, it's time to reevaluate what you're sleeping on.
Leesa mattresses are designed to help you get the much-needed REM sleep at night
so you can tackle all the challenges thrown at you during the day.
Leesa has a lineup of beautifully crafted mattresses tailored to how you sleep.
Each mattress is designed with specific sleep positions and feel preferences in mind.
From night one, Leesa believes that you'll feel the difference
with their premium materials that deliver serious comfort and full-body support,
no matter how you sleep.
Plus, Leesa mattresses are meticulously designed and assembled in the USA for exceptional quality.
Go to Leesa.com for 30% off.
Plus, get an extra $50 off with promo code PROPG, exclusive
for our listeners. That's L-E-E-S-A dot com, promo code PROPG, for 30% off, plus an extra $50 off.
Support our show and let them know we sent you after checkout.
Leesa.com, promo code PROPG.
We're back with more from Ethan Mollick.
So let's shift to academia. All of these articles over the last two years, and I'll put forward,
this is a comment posing as a question. I hear people say, oh, you don't need,
we're not going to need college with AI.
And I find the people saying that are saying it because their kid didn't get into UM, scored a 22 on the ACT, and they're trying to make themselves feel better.
I see absolutely no evidence that AI is disrupting higher ed.
Applications are up.
Your school and my school are both still figuring out ways to raise tuition faster than inflation.
The whole AI-will-make-higher-education-obsolete thing,
I just don't see it happening.
Your thoughts?
So, I mean, a few things.
I think my personal feeling is education gets a boost from this, for reasons I can discuss
in a second.
But, I mean, I do think it's disrupting higher education in that everybody is cheating with AI and essays
are no longer a valuable way of assessing things.
There's a lot of disruption at the school level, at the teaching level.
We'll get through that.
We always do.
But I agree.
There isn't a sign that this is devastating higher education.
And I don't think that saying everyone's going to learn with AI and that's going to be the only
way you learn, or that you won't need skills anymore,
are viable outcomes in this world.
I think that education will change.
It's easy to imagine a world, given early evidence that AI, when used properly,
can be a good tutor.
I can imagine a flipped classroom setting where my students are engaging with AI
outside of class and inside of class we're doing more experiential, active learning-based,
case discussions, other things.
But that's in the margins.
I actually think that the value of education, especially professional education,
goes up, because I teach people to be generalists at Wharton, right?
I teach them to be really good at, like, you know, business.
And maybe they have a little bit of consulting or strategy focus or entrepreneurship focus.
And then I send them off into the world and they go like you did to work at Morgan Stanley or whatever.
And they learn how to do their job the same way we've taught people for 4,000 years, which is apprenticeship.
Right.
They work. If you're a middle manager, you get this advantage of a junior person who's desperate to prove themselves,
who is willing to work really hard, but isn't very good, but will be.
And they write deal memos over and over again and they get yelled at or given nice feedback.
and eventually they learn how to write a deal memo.
And that's how we teach people.
You don't have to be good at managing for them to learn.
Ideally, you're good at teaching, but you don't have to be.
But that's all broken down.
Like, it already broke down this summer, right?
If you were an intern at a company this last summer,
you absolutely were using Claude or, you know, ChatGPT
and just turning those answers in to people,
because it's better than you at your job.
And middle managers were increasingly turning to using AI instead of interns,
because it does the work and doesn't cry, right?
And so as a result, you saw this loop where nobody was learning
the sort of entry-level skills like before. So I actually think in a world where the skill destruction happens
at the intern level, we're going to need to think more about how we educate people formally in a
world where informal education becomes harder to do. How has it changed your role as an academic
in terms of research, how you prep for class, or, you know, quite frankly, how you make money
outside of the school or in the traditional confines of academia? How has AI impacted the way you approach
your job?
In tons of ways.
And I think by the way, that's indicative, right?
Because, you know, the way we tend to model jobs right now in academia is that they're
bundles of tasks, right?
So as a professor, I do a ton of things, right?
I am supposed to teach classes and design classes and grade, you know, assignments and be
emotionally available to my students and also be a good administrator and review papers and
do podcasts, write books, all of that stuff, right?
Tons of stuff.
And it's an impossible set of tasks.
I mean, most people's jobs have a ton of tasks that they're not getting to or doing badly.
So the AI has already taken some of these things from me, right?
Some of them I won't hand over for social reasons.
The AI is a better grader than me.
But as of yet, I haven't let it do grading, because my students expect me to grade the papers.
But maybe that will change.
There's a lot of administrative tasks I've handed over to AI to do.
When I do research, my research time is cut dramatically, because the AI can do all the code writing and everything else, and I can look at the answers.
It's like I have an RA.
And it's gotten better: I can throw a full academic paper that I wrote a couple
years ago into ChatGPT 5.2 Pro, which is the smartest model out there. It will find errors that
require it to have run its own Monte Carlo analysis on assumptions from a table, from table three
and table five put together, and it will say, actually, you should have done this, errors that I couldn't
have found otherwise. So, especially when used by a skilled human, I'm finding everything I do is more
efficient. Like, I write, you know, this One Useful Thing Substack, and there's a lot of readers.
I write all my own first drafts, right, because I want it to be my voice. But if I didn't
have Claude checking all the answers, you know, what I write, to make sure it makes sense,
it would take me days to put out a piece that takes me a few hours to write, because I now have
a good voice as a cross-checker to work with, and as a researcher. So in almost every aspect of what
I do, I mean, I use AI for everything. And sometimes it's huge efficiency gains of hours, and
sometimes it's, you know, a couple of minutes here or there.
I absolutely hear you. Everything right now: fact-check this, what additional data would be illuminating to my points, where am I redundant? And the idea of peer-reviewed research in academia, it feels like we're just going to need fewer peers to review. And one of those peers probably should be AI, no?
I mean, peer-reviewed research is in a bit of a crisis; it's always in crisis, right? Just like everything associated with the universities and academia. But the crisis is pretty bad right now because of all the signaling associated with papers, right? Peer review
depended on you being able to filter out the crap so that you could at least say,
okay, this paper is worth looking at more and worth a couple hours of my time.
The problem with AI, and there's a nice paper showing this,
the problem with AI-produced content is it scrambles our signals,
and it makes it very hard for you to tell whether it's crap or not without a lot of effort.
So human peer review is suffering under a flood of tons of papers being produced with AI help,
and it's harder to signal which papers are good or bad in advance,
so it's hard for us to spend the time doing this right.
And then, of course, who's reading all these papers now that AI is producing all of them?
So I think we're going to include AI in the peer review process, like you said.
But then the question is, is AI producing research for AI that gets published in AI journals and no human ever reads?
Like, you can sort of hear the creaking underneath the whole edifice of academic publishing as we try and figure out what comes next.
It feels like one spot, if you really wanted to be hopeful, would be medical research.
Granted, you're not at the med school, you're at the business school, but health care in America has basically been monetized; it's now about profits. Are you excited about the potential, the intersection between AI and drug discovery? And, you know, my friend Whitney Tilson said that basically ChatGPT, I'm sorry, Gemini, diagnosed his father and saved his life. Let's start there. The health industrial complex in America: how excited are you about the intersection of AI and that
industry, and what other industries do you also think really stand to see exceptional returns
with the advent of AI? So, you know, with the usual caveat that the more complicated
the industry and the more regulated, the slower adoption of AI tends to be, I think medicine is an
incredibly exciting area. So you talked about a few areas. Like one of them is Google, especially,
but other companies are deeply dedicated to how do we automate research or accelerate academic
research. And I think that there's a lot of value there. We're starting to see actual
reasonable scientific work being done by AIs.
And the hope is that agentic systems can autonomously do directed research in the near future,
which, because we're researcher-constrained, will lead to a flood of
new discoveries.
So there's hope there in that space.
I think there's also, you know, when you talk to drug companies, like, Moderna has been very open
about their use of AI, there are tons of
things that companies have to do that slow down the drug development, discovery, and testing
process that are administrative,
and the AI helps with all of those things. You get huge value legally and in building forms and
materials. On the doctor's side, you know, even just things like translation, it turns out that if
you use AI to give people a preoperative form that they understand, they actually are happier with
their surgery, have fewer issues, and are more likely to report success because they got the
information in a way they understood, right? Second opinions: you obviously should be using an LLM for a second
opinion. I can't say you should use it to replace your doctor, but they're good enough that in every
kind of controlled experiment, they're worthwhile, especially where imaging is not involved.
They're not as good at imaging. So I would not trust a radiology report from a large
language model, but in terms of giving a second opinion, or if you're stuck, it's amazing at that,
especially for people who wouldn't otherwise have access to good health care and good doctors. And then there's the
administrative breakthrough piece, right? If the forms get filled out by AI, if some of the processing
gets done by AI doing the grunt work behind the scenes, there are possibilities for efficiency gains
in administration. None of these things are automatic, though, right? They require actual
leadership and structural change to make happen. And that's, I think, the level where things
get stuck: not so much can AI do this, but how will organizations respond?
You brought up Moderna, and I think of vaccines as a technology where the big winners were all of us,
and yet Moderna stock, I think, is off 90%. I don't think a lot of companies have, you know,
built huge companies, or huge market cap companies, on the back of vaccines. And when I think about,
you know, I've been in four countries in the last five days, and the ability to skirt along
the surface of the atmosphere at seven-tenths of the speed of sound, I don't think there's any technology
that's changed my life more. And yet airlines and aircraft manufacturers, without government
subsidies, basically all of them would have, you know, either gone out of business or be going out of
business. And it feels like lately we've become used to believing that with any innovation in technology,
the market share or the stakeholder gains
get sequestered to a small number of companies.
Do you think there's any possibility
that the real winners of AI will be us
and that is the sense that
we're under this illusion
that a small number of companies
are going to build multi-trillion dollar market cap companies
but this technology
because of the inability to create
ring-fenced distribution or IP,
that the real value might be disseminated
to the general public,
and, quite frankly, that these current valuations
will not hold up,
which isn't to say
AI isn't going to change the world. It's just that the change isn't going to involve a small number of
companies with multi-trillion dollar market caps. Could we see a huge destruction in shareholder
value across these companies while seeing huge stakeholder value, similar to what happened
with, you know, vaccines or even PCs?
Well, any frontier company, frontier model company, can destroy the market anytime they want,
on the condition that they release their models as open weights, right? Which is what the Chinese
models are doing. So it all comes down to whether
Explain open weight.
So, AIs basically are a bunch of math, right?
And the weights inside these models are basically what determine how they operate.
So if you have the weights, this set of, you know, the mathematical equations the AI needs, you can run your own AI model, right?
And once they're out there, no one can claim them back.
There's no other piece to it.
You just need this piece of information.
So increasingly, the strategy for the also-rans, which are the Chinese companies and Mistral, which is a French-European company,
is to release all their AI models as open weights.
So you can find a ton of people in the United States
who run those models.
They can run them in their internal safe data centers.
They can have, you know, a third-party run them.
And the only money involved
is the money that you have to pay
for the power and electricity
and, you know, security and network access to run the models.
So you don't have to pay anyone a fee for using them.
And so right now, those models
are less capable
than what you get from OpenAI or Anthropic or Gemini.
But it's possible that at some point in the future, they catch up
because the development process slows down.
And at that point, a lot of value flows out of the system.
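To ground the open-weights idea, here is a minimal, hypothetical sketch of running an openly released model on your own hardware with the Hugging Face transformers library. The specific checkpoint named below is an illustrative assumption; the point is that once the weights are downloaded, inference runs locally, and the only costs are your own compute and power, not a per-use fee back to the model's creator.

```python
# A minimal sketch of running an open-weight model locally, assuming the
# Hugging Face transformers library (plus accelerate for device_map="auto").
# The model name below is illustrative; any openly released checkpoint,
# e.g. from Mistral or a Chinese lab, works the same way: download the
# weights once, then run them on your own or a third party's hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.3"  # hypothetical example checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "Explain what 'open weights' means in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a completion entirely on local (or self-hosted) hardware.
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```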
And Ethan, are you a father?
I am, yes.
How many kids?
I have two kids.
And when you're kind of the helm of the bobsled here
of seeing AI and the impact it's going to have on the next generation,
If and how has it changed your view of the future your kids are going to face,
and has it in any way changed your approach to parenting or what you'd like to see them
prepare for or what skills you think they need to acquire?
Looking at this through the lens of a dad who also really understands
and is probably going to guess more right than wrong about where this all heads,
has it changed your viewpoint of your kids' future?
I mean, yes, right?
I mean, there's more uncertainty.
There's always uncertainty.
As a parent you worry, right?
Are you making the right choices?
Are your kids making the right choices?
They're their own people.
They make their own decisions.
It certainly has changed my view on careers a little bit.
I think that thinking about jobs, I don't know what jobs are going to be in the future.
One thing we know about work, I'm a professor of entrepreneurship, is that, you know, jobs change.
People find all sorts of things to do.
I'm less certain that they pick one path and stick with it.
I want them to pick jobs that are diverse where they do many different tasks in case AI takes some of them.
But I also want them to do what they love.
So I don't know enough about what the future holds to discourage
them from being a lawyer or a doctor or whatever they want to be, because I don't know what that
future holds. In terms of actual parenting, I find, you know, AI useful in a cautious way.
I'm kind of lucky enough that my kids were old enough when LLMs came out that I wasn't worried
they'd build like a parasocial relationship with them. We've worked a lot on the internet and, you know,
how to work with these systems. And I don't worry that they're going to turn to these for,
you know, serious relationships. But we have spent a lot of time thinking about how you use them
for education. So when they were a little bit younger, I
would insist, if I used AI to help them, on actually asking the AI: help me explain this the way
I would to a ninth grader. And I'd take a picture of an assignment and be like, okay, now I can help
explain this to you. As they get older, they increasingly use the kind of quizzing mode.
They know the AI won't teach them unless they ask to be taught. So they use either the study modes
in the AI systems or they actually ask them, like, don't give me an answer, challenge me
and quiz me and prepare me and tell me what I don't know. So there are lots of little
ways to use it. Now, in terms of the wider future,
I don't know what happens. I mean, I grew
up in an age where, like, we thought nuclear war
would happen at any moment. I think now we have
new anxieties. I'm an anxious
parent. Who can't be? But I also
think that preparing resilient
kids who are self-reliant
and have some ability to
improvise is more important than ever.
When I first, when my parents got divorced, I moved to this new elementary school in Tarzana, I think it was Amelita. Anyways,
I walked in and the teacher introduced me, and then she started writing, and then she turned around and screamed, duck and cover.
And everyone dove under their desk. I'm like, what the fuck? And I'm sitting there, like, not knowing what to do. And she's like, we do this in case you see a nuclear flash. We were doing duck and cover drills. I mean, it was as if that was going to save us, as if this wooden desk was going to protect us from a nuclear blast. But we were doing it, and we even had films on
what to do when the Ruskies detonate a nuclear bomb.
My sense, well, do you think the catastrophizing around the potential offensive uses of this AI is overestimated?
And then a more personal question, you don't have to answer: do you have a go-bag?
Do you have a plan for if all of a sudden, you know, we lose control and it's, okay, Mollicks meet here and we're headed to the, you know, the Appalachian Mountains or whatever?
I want some people catastrophizing, because that's what governments
should be doing. Like, we need policies and procedures in place to think about catastrophic stuff, right?
Like, I don't stay up at night, which might be dumb, right? There's a lot of very smart people
who think AI is going to murder us all. There's a bunch of smart people who think it's going to,
you know, become a god and save us all. I, you know, maybe it's the business school professor
in me or something, but I tend to be really focused on, like, oh, there's actually a lot of, like,
humans are flexible. There are a lot of ways, like, we get used to doing many different things,
living many different lifestyles. Our goal should be to guide things in the best direction that we can,
now. I am not preparing for the
apocalypse on a regular basis.
Part of the reason is that I think catastrophizing
like that isn't that helpful. And
I don't know what kind of
catastrophe you'd be preparing for. There are
a thousand things that could
end the world.
But I understand and appreciate the anxiety
of other people and think it's valuable that they're there.
As long as we're channeling that into
stopgap measures, I mean,
I'd like to see the government think more
about catastrophic risk, not because
it's my giant concern, but because
very smart people are concerned about it, right?
And you don't just get through crises
and hope you muddle through, you make plans.
I don't think the plans to be made
are at the individual level. I think it's at the societal,
governmental level that we need
to be starting to think about how to shape AI.
And by the way, it's not just catastrophic.
Will AI murder us all or invent a chemical
weapon that kills everybody
or will a bad guy using AI do these things?
But it's all the other risks to worry about, too.
Deepfakes are a real problem.
I can create an image of anybody saying anything
I want. How do we respond to that as
a society, right, to everybody being able to do that stuff? How do we start responding to make sure that,
as we talked about earlier, that AI is not automatically translated into job loss, but there's a
period of exploration to try and figure out how to make it do something better? How do we think about
using this in education in a positive way? How do we think about avoiding parasocial relationships
with AI systems that are negative for us? I mean, these are policy decisions we can help make
that I think are really important. Do you think we should age-gate synthetic relationships?
I think we don't know enough.
So probably caution is warranted, right?
Like, there is mixed research right now.
There are some papers that suggest that AI lowers, you know, rates of suicidal ideation for the very lonely or decreases loneliness in the short term.
We have no idea what the long-term effects are.
I don't think that age-gating is a particularly bad idea for synthetic AI characters that try and act like people.
You know, because we don't know what the effects are.
I think it's easy to be alarmist and catastrophic about it.
The effects may end up being very good.
I don't know.
But neither does anyone else.
Just as we wrap up here and you've been generous with your time,
a lot of young people listening to the podcast,
you're kind of rounding third.
You've built a great career for yourself.
My sense is you have influence, you do something you enjoy,
you're at the right place at the right time,
you make a good living.
Talk a little bit about your career path
and what lessons you can provide to younger people
who might be thinking about
a career in academia, or just professional advice more generally.
You know, the first thing I'd say, and my colleague, Professor Matt Bidwell, talks about this
a lot, is, like, careers are long. Like, I've studied careers, and, like, there are many
different things. And mine's an example. I actually went, you know, grew up in Wisconsin and
lived my whole life there, and then went to the East Coast for school, did the mandatory
job of being a consultant for, like, you know, 18 months, and then launched a startup company
with a brilliant friend and roommate in 1998 or 1997,
where we invented the paywall.
I still feel a little bad about that.
But nobody really understood what the paywall was
because the internet was new.
But we were, you know, two 20-something people
trying to sell this product to everyone.
I personally made every possible mistake in this company.
It did well, but not that much thanks to me.
Decided to get an MBA to figure out how to do it right,
realized nobody knew how to do startups right,
got a PhD, and then started studying
games and education and AI and, you know, the whole thing. So like I've done many,
many things in my career. And my main advice to people is that careers are long and there's a tendency,
especially for young people today who come out of a very regimented system, to think that they have
to be completely prepped for the next thing. Like, I need to know
everything I need to know to, you know, do something. Entrepreneurship, I hear this all the time.
Like, I need to, you know, learn this and have worked at that company, and that's not how this works,
right? There's no perfect moment. There's no perfect skill set.
It's an evolutionary and exploratory process.
I don't think that'll change in the near term with AI.
And I think the idea of being flexible, of trying different things, of experimenting, of getting your own skills out there and using your own agency to try and find a path forward, is the way to go.
It's never easy.
And I've been lucky in a lot of these choices.
But I think that, you know, thinking about how you want to take your next step on your own, rather than following a predefined path, can be very useful.
Ethan Mollick is a professor at the Wharton School and a leading voice on how AI is changing work, creativity, and education.
He also writes the popular Substack, One Useful Thing, has coined terms including the jagged frontier and co-intelligence, and he joins us from his home outside of Philadelphia.
Ethan, I love seeing people such as yourself who've just put in a ton of work, be as successful and as influential as you are.
Congratulations on all your success.
I trust you're taking time to pause
and just register that you, you know, you have arrived, so to speak.
I haven't taken time to pause, but it is nice to know that I could do that at some point.
At some point. Thanks, Ethan.
This episode was produced by Jennifer Sanchez and Laura Jenaire.
Camryk is our social producer. Bianca Rosario Ramirez is our video editor.
And Drew Burroughs is our technical director.
Thank you for listening to the PropG pod from PropG Media.
