Dwarkesh Podcast - What will automated firms look like?
Episode Date: May 1, 2025
Based on my essay about AI firms. Huge thanks to Petr and his team for bringing this to life! Watch on YouTube.
Thanks to Google for sponsoring. We used their Veo 2 model to make this entire video: it generated everything from the photorealistic humans to the claymation octopuses. If you’re a Gemini Advanced user, you can try Veo 2 now in the Gemini app. Just select Veo 2 in the dropdown, and type your video idea in the prompt bar. Get started today by going to gemini.google.com.
To sponsor a future episode, visit dwarkesh.com/advertise. Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Transcript
When people think of AGI, they imagine what it would be like to have a personal assistant who answers all their questions and works 24-7.
But that just underestimates the real collective edge AIs will have, which has nothing to do with raw IQ, but rather with the fact that they are digital.
Currently, firms are extremely bottlenecked in hiring and training people.
But if your workers are AIs, then you can copy them millions of times with all their skills, judgment, and tacit knowledge intact.
This is a fundamentally transformational change
because for the first time in history,
you can just turn capital into compute,
and compute into labor.
You can turn trillions of dollars
into the electricity, chips, and data centers
needed to sustain populations of billions of digital employees.
Think about how limited a CEO's knowledge is today.
How much did the real Steve Jobs really know about what was happening
across Apple's vast empire?
He got filtered reports and dashboards, attended key meetings, and read strategic summaries.
But he couldn't possibly absorb the full context of every product launch, every customer interaction,
every technical decision made across hundreds of teams.
His mental model of Apple was necessarily incomplete.
Now, imagine Mega Steve, the central AI that will direct our future AI firm.
Just as Tesla's Full Self-Driving AI model can learn from the driving records of
millions of drivers,
Mega Steve might learn from everything seen by the millions of distilled Steve apparatchiks:
every customer conversation, every engineering decision, every market response.
I think it's hard to grapple with how different this will be from human companies and institutions.
You're going to have these blobs with millions of entities rapidly coming into and going out of
existence, each thinking at superhuman speeds.
It will be a change in social organization as big as the transition
from hunter-gatherer tribes to massive modern joint-stock corporations.
The boundary between different AI instances starts to blur.
Mega Steve will constantly be spawning specialized distilled copies
and reabsorbing what they've learned on their own.
Models will communicate directly through latent representations,
similar to how the hundreds of different layers in a neural network like GPT-4 already interact.
Merging will be a step change in how organizations
can accumulate and apply knowledge.
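To make the merging idea a bit more concrete, here is a minimal sketch of one way "reabsorbing" could work, assuming the distilled copies share a single architecture so their weights can be blended directly. Every name and number in it is a hypothetical placeholder, not how any real firm or model would actually do it.

```python
# Minimal sketch of folding distilled copies' learning back into a parent model,
# assuming all copies share one architecture so their parameters line up.
# Parameter names, shapes, and the simple averaging rule are hypothetical
# placeholders; real merging/distillation pipelines are far more involved.
import numpy as np

def merge_copies(parent: dict, copies: list[dict], alpha: float = 0.5) -> dict:
    """Blend the parent's weights with the mean of its specialized copies.

    alpha controls how strongly the copies' experience is folded back in.
    """
    merged = {}
    for name, weights in parent.items():
        copy_mean = np.mean([c[name] for c in copies], axis=0)
        merged[name] = (1 - alpha) * weights + alpha * copy_mean
    return merged

# Hypothetical example: a parent and two copies specialized on different work.
parent = {"layer1": np.zeros((2, 2)), "layer2": np.zeros(2)}
copy_a = {"layer1": np.ones((2, 2)), "layer2": np.ones(2)}          # e.g. customer support
copy_b = {"layer1": 2 * np.ones((2, 2)), "layer2": 2 * np.ones(2)}  # e.g. engineering

print(merge_copies(parent, [copy_a, copy_b]))
```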
Humanity's great advantage has been social learning,
our ability to pass knowledge across generations and build upon it.
But human social learning has a terrible handicap.
Biological brains don't allow information to be copy-pasted.
So you need to spend years and, in many cases, decades,
teaching people what they need to know in order to do their job.
Or consider how clustering talent in cities and top firms
produces such outsized benefits, simply because it lowers the friction of knowledge
flow between individuals.
Future AI firms will accelerate this cultural evolution.
With millions of AGIs, automated firms get so many more opportunities to produce innovations
and improvements, whether from lucky mistakes, deliberate experiments, de novo inventions,
or some combination.
Historical data going back thousands of years suggests that population size is
a key input for how fast a society comes up with new ideas.
AI firms will have population sizes that are orders of magnitude larger than today's biggest companies.
And each AI will be able to perfectly mind-meld with every other.
AI firms will look from the outside like a unified intelligence that can instantly propagate ideas across the organization,
preserving their full fidelity and context.
Every bit of tacit knowledge from millions of copies gets perfectly preserved,
shared, and given due consideration.
So what becomes expensive in this world?
Roles which justify massive amounts of inference compute.
The CEO function is perhaps the clearest example.
Would it be worth it for Apple to spend $100 billion annually
on inference compute for Mega Steve?
Sure.
Just consider what this buys you.
Millions of subjective hours of strategic planning,
Monte Carlo simulations of different five-year trajectories,
deep analysis of every line of code and technical system, and exhaustive scenario planning.
The cost to have an AI take a given role will become just the amount of compute the AI consumes.
This will change our understanding of which abilities are scarce.
Future AI firms won't be constrained by what's rare or abundant in human skill distributions.
They can optimize for whatever abilities are most valuable.
Want Steve Wozniak-level engineering talent?
Cool.
Once you've got one, the marginal copy costs pennies.
Need a thousand world-class researchers?
Just spin them up.
The limiting factor isn't finding or training rare talent.
It's just compute.
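To make "it's just compute" concrete, here is a toy back-of-envelope calculation of how a dollar budget might convert into hours of digital labor. All of the constants are invented placeholders, not real prices or benchmarks.

```python
# Back-of-envelope sketch of "the limiting factor is just compute."
# Every constant below is an invented placeholder, not a real price or benchmark.

DOLLARS_PER_GPU_HOUR = 2.00           # assumed cloud price for one accelerator-hour
TOKENS_PER_GPU_HOUR = 3_600_000       # assumed inference throughput on that accelerator
TOKENS_PER_SUBJECTIVE_HOUR = 50_000   # assumed tokens per hour of human-equivalent work

def subjective_hours(budget_dollars: float) -> float:
    """How many hours of digital labor a budget buys under these assumptions."""
    gpu_hours = budget_dollars / DOLLARS_PER_GPU_HOUR
    tokens = gpu_hours * TOKENS_PER_GPU_HOUR
    return tokens / TOKENS_PER_SUBJECTIVE_HOUR

for budget in (1_000, 1_000_000, 100_000_000_000):
    print(f"${budget:>15,} of compute -> {subjective_hours(budget):,.0f} subjective work-hours")
```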
Imagine Mega Steve contemplating...
Hmm.
How would the Federal Trade Commission respond if we acquired eBay to challenge Amazon?
Let me simulate the next three years of market dynamics.
Ah, I see the likely outcome.
I have five minutes of data center time left.
Let me evaluate 1,000 alternative strategies.
The more valuable the decisions, the more compute you'll want to throw at them.
A single strategic insight from Mega Steve could be worth billions.
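Here is a minimal sketch of that kind of Monte Carlo evaluation: score hypothetical candidate strategies by averaging simulated rollouts, and spend more rollouts when the decision is worth more. The payoff model and every number are invented purely for illustration.

```python
# Toy Monte Carlo sketch of "evaluate 1,000 alternative strategies."
# The payoff model and every number are invented purely for illustration.
import random

def evaluate(n_rollouts: int) -> float:
    """Average payoff of one candidate strategy over simulated rollouts."""
    true_quality = random.uniform(-1.0, 1.0)                 # hidden quality of this strategy
    rollouts = [true_quality + random.gauss(0, 0.5) for _ in range(n_rollouts)]
    return sum(rollouts) / n_rollouts

def pick_best(n_strategies: int = 1_000, rollouts_per_strategy: int = 200) -> tuple[int, float]:
    # More valuable decisions justify more rollouts per strategy, i.e. more compute.
    scores = {s: evaluate(rollouts_per_strategy) for s in range(n_strategies)}
    best = max(scores, key=scores.get)
    return best, scores[best]

strategy, score = pick_best()
print(f"Best of 1,000 candidate strategies: #{strategy} with estimated payoff {score:.2f}")
```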
One of the coolest things about this video is that we did not shoot a single frame of video for it.
Every single visual that you see, from the photorealistic humans to the claymation octopuses,
was generated by Veo 2, which is Google's state-of-the-art video generation model.
I wrote this essay a couple of months ago, and then I had this idea
that we should try to turn it into a video.
And so I worked with this wonderful director, Petr Salaba,
who was able to use Veo 2 to turn all of these ideas
into the kind of video that would have previously taken us
a full team of cinematographers and animators to make.
For example, one of the things I wanted to show
is what an AGI hive mind might look like.
And so Petr had this idea that you could have an FPV drone
fly through an ant-hill that's full of working ants.
Veo gave him a bunch of tasteful candidates for this
and a bunch of other prompts that we then stitched together into the final cut.
We literally could not have made a video like this without Veo.
And now Veo 2 is available in the Gemini app.
You can try it by going to gemini.google.com,
selecting it from the drop-down and typing your own idea into the prompt bar.
By the way, we made this whole video with Veo
before we'd even started chatting with Google,
so it was especially exciting that we could then have them as a sponsor.
All right, back to the essay.
The most profound difference between AI firms and human firms
will be their evolvability, as Gwern Branwen observes.
Why do we not see exceptional corporations
clone themselves and take over all market segments?
Why don't corporations evolve such that all corporations or businesses
are now the hyper-efficient descendants of a single corporation,
all other corporations having gone extinct in bankruptcy or been acquired?
Why is it so hard for corporations to keep their culture intact and retain their youthful lean efficiency,
or, if avoiding aging is impossible,
why not copy themselves or otherwise reproduce to create new corporations like themselves?
Corporations certainly undergo selection for kinds of fitness and do vary a lot.
The problem seems to be that corporations cannot replicate themselves.
Corporations are made of people, not interchangeable, easily copied widgets
or strands of DNA.
The corporation may not even be able to replicate itself over time,
leading to scleroticism and aging.
The scale of difference between currently existing human firms
and fully automated firms
will be like the gulf in complexity
between prokaryotes and eukaryotes.
Prokaryotic organisms, such as bacteria,
are relatively simple and have remained structurally similar
for over three billion years.
In contrast, the emergence of eukaryotic cells, which possess more complex internal structures like nuclei and organelles,
enabled a dramatic leap in biological complexity and gave rise to all the other astonishing organisms
with trillions of cells working together in tight-knit coordination.
This evolvability is also the key difference between AI and human firms.
As Gwern points out, human firms simply cannot replicate themselves effectively.
They're made of people, not code that can be copied.
So would a fully automated company simply become the last company standing?
Why would other firms even exist?
Could the first business to automate everything just form a massive conglomerate and take over the entire economy?
While internal planning can be more efficient than market competition in the short term,
it needs to be balanced by some slower but unbiased external feedback.
A company that grows too large risks having its internal goals drift away from market reality.
That said, the balance may shift as AI systems improve.
AI corporations will be more software-like,
with perfect replication of successful subdivisions and faster feedback loops.
This internal planning system still needs to be connected to some measure of real success or failure,
and that is exactly what the market provides.
