How I Invest with David Weisburd - E337: How to Invest in a Post Singularity World
Episode Date: March 31, 2026
What does investing look like in a world dominated by AI? In this episode, David Weisburd talks with Alex Wissner-Gross about the profound implications of the technological singularity and the evolution from LLMs to reasoning models. They discuss AI personhood, economic rights, and the rise of AI agents, as well as strategic investment approaches in a post-singular world. The conversation delves into Elon Musk's visions for massive compute capabilities, the role of science fiction in predicting technological advancements, and strategies to prevent technological unemployment. The episode concludes with a look toward the future and a call to action for listeners.
Transcript
It's good to have you at the New York Stock Exchange.
So when do you believe that we're going to actually achieve technological singularity?
I think we're in the middle of it right now. It already happened. I think we achieved artificial general intelligence at the very latest by the summer of 2020, when Language Models are Few-Shot Learners was published by OpenAI.
And I don't think the singularity is a single point in time. I've argued that it's more of an extended interval in time and we're in the middle of it right now.
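[The GPT-3 paper referenced here, Language Models are Few-Shot Learners, showed that a model can pick up a task from a handful of examples placed directly in the prompt, with no weight updates. A minimal sketch of what such a few-shot prompt looks like; the translation task and example pairs below are illustrative, not from the episode:]

```python
# Few-shot prompting: the task is "taught" entirely inside the prompt.
# No fine-tuning or gradient updates; the model infers the pattern
# from a handful of input -> output examples.

def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    lines = ["Translate English to French:"]
    for source, target in examples:
        lines.append(f"English: {source}\nFrench: {target}")
    lines.append(f"English: {query}\nFrench:")  # model completes from here
    return "\n\n".join(lines)

examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]
prompt = build_few_shot_prompt(examples, "peppermint")
print(prompt)
```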
And how much does recursive superintelligence play as part of it? Tell me about recursive superintelligence, and how does that actually work?
The idea of recursive self-improvement is that AI develops better AI. This is a notion that
goes all the way back to I.J. Good in the mid-20th century, and then was repackaged and repopularized by Vernor Vinge as the notion of a technological singularity, and then fully popularized by Ray Kurzweil. And then Peter Diamandis and myself have been running with the concept.
The notion of intelligence systems being able to build smarter versions of themselves is at the very core of the notion of an intelligence explosion or technological singularity.
And even in the past few months, we've seen the frontier AI labs all be very public and announce that the latest versions of the GPT model series and Claude and other models are all intimately now involved with the development of their successor.
So intelligence, building smarter intelligence.
That's the recursive self-improvement notion at the core of the singularity, and we're there.
You're one of the smartest people in the AI space, and most importantly, you're not part of one of the LLMs.
So you're in many ways independent.
Are you sure, by the way?
There are a lot of people who are convinced that I am an LLM.
So maybe the jury's a little bit out.
You're either the fifth LLM or you're independent.
When you think about these LLM wars, probabilistically, where do they end up? And do you even think that these LLMs have value?
Well, there are a few questions there. Do LLMs have value? Yes, absolutely. Enormous value.
There's what I call the innermost loop: this broader notion of recursive self-improvement that includes LLMs, but also robots and energy and chip fabrication facilities. This notion of an economy with an inner spiral that's ultimately going to consume and disrupt the rest of the economy is, I think, front and center to the notion of the singularity.
As for the first half of your question, about not just whether LLMs have value but where all of the competition ends up: all of my friends at the frontier labs call it a rat race. There is very much a race to the bottom, in some sense, of driving the cost of intelligence so low that it's effectively too cheap to meter. Go back five years, and it used to be maybe an annual event that we'd get a new frontier model that would push the state of the art. Then it was every quarter, when we saw the move from LLMs to reasoning models; we can talk more about that. And then more recently, with models that are recursively self-improving and designing the weights or other properties of their successors, we're seeing new frontier models come out on arguably an almost weekly basis. Soon, I think it's going to be daily, hourly, minutely.
We're going to reach to the extent we haven't already some sort of takeoff.
Expert calls have always been one of the most powerful ways to build conviction.
But today, investors are asked to cover more companies, move faster, and do it with leaner teams.
With Alpha Sense AI-led expert calls, their TGIS call service team sources experts based on your research criteria and lets the AI interviewer get to work.
The magic is in the AI interviewer, purpose-built and knowledgeable, with the base information to conduct high-quality, context-rich conversations on your behalf, acting as a trusted extension of your team. Then they take it one step further. Your call transcripts flow natively into your AlphaSense experience and become queryable, searchable, and comparable, so your primary
insights plug directly into earnings prep, digital work streams, and pitchbooks with zero tool
switching. And with AlphaSense expert call services, the AI-led expert calls are just one option,
because we know the importance of a hybrid expert research approach. AI for
coverage and efficiency, humans for complexity and conviction.
It's the institutional edge that scales research without scaling headcount.
For hedge funds, that means validating thesis assumptions across dozens of experts
before earnings instead of a handful.
For private equity, it means faster pre-IOI scans and deeper commercial diligence.
For investment banks and asset managers, it means pulling real operator perspectives
straight into models and sector positioning without disconnected tools or manual handoffs.
All of it lives inside the Alpha Sense platform, trusted by 75% of the world's top hedge funds
alongside filings, broker research, news, and more than 240,000 expert call transcripts,
turning raw conversations into comparable, auditable insight.
Take advantage of Alpha Sense AI-led expert calls now.
The first to see wins.
The rest follow.
Learn more at Alpha-sense.com slash how I invest.
Double-click on this evolution from the LLM to the reasoning model.
Sure. So one can point to a number of inflection points in the history of AI. One can point
to the 1980s, when my friend Yann LeCun first developed convolutional neural networks, and then they
were used to spot and to identify zip codes by the U.S. Postal Service, but not very much else
for their first few decades. Fast forward to 2012 when the ImageNet large-scale computer vision competition
created the world's first data set, the first corpus of lots of annotated images,
more than a million images that were curated and annotated and labeled with,
this is a dog, this is a cat, this is a car, here's the bounding box within the image.
And thanks to that competition, we learned that convolutional neural networks,
thank you, Yann, were actually quite good at image classification.
And we saw the first ML boom going from an AI research community,
where algorithms were chosen artisanally, and religious wars were fought over which approaches to AI were best and which were worst, to a world where the benchmarks dominated and whatever did well at the benchmarks, that's what the community ran
with. That's 2012. Fast forward from 2012 to arguably sometime around the summer of 2020,
when we discovered, in addition to convolutional networks being really amazing, arguably a successor
to convolutional neural networks and also to long short-term memory networks, LSTMs: transformers turn out to be amazing generalist models at solving general tasks. So we saw, as I mentioned earlier, the GPT-3 paper, Language Models are Few-Shot Learners, show us that transformers can solve
generally intelligent tasks. So then we see the boom of transformers in general. Fast forward another few years, and again, the timelines get a little bit fuzzy, because reasoning models were in some
sense in the air but hadn't quite congealed, but call it, say, 2024 or so, when we start to see the
emergence of the next-generation reasoning models, starting principally with OpenAI's o1 model.
Some would quibble and say there were a few models that demonstrated this principle beforehand,
but the idea with reasoning models is not just asking a large language model to
complete the next word, complete the next token, but actually giving it a bit of space and time
to think before it produces an answer, and then training it specifically to leverage the space and time
to think, to have an internal monologue, if you will. What's that mean?
You just asked me a question. What does that mean? Before I answer, even though I started my answer
pretty quickly, I have an internal monologue. I'm thinking about what is the best way to respond
to that question. I'm burning my internal GPUs, if you will, trying to formulate
the best possible answer to that question before I give you my final response. That's what a reasoning
model is. Rather than instantaneously responding, so-called System 1 thinking, thank you, Kahneman, it uses System 2 thinking to have an internal discussion with itself and think about what it
should say before it says it. And that's what has unlocked math and physics, arguably, could talk more
about that, the sciences in general. And now we're at an era where reasoning models, and we've seen now
everyone adopt reasoning models, no longer just large language models, but multimodal reasoning
models, are poised to solve the hardest problems on Earth.
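[The "internal monologue" described above can be caricatured in a few lines of code: instead of emitting an answer straight away, a reasoning model first generates intermediate work, a chain of thought, and only then commits to a final answer. This toy two-phase solver is purely illustrative; the decomposition logic is an invented stand-in for learned reasoning:]

```python
# Toy "System 2" solver: produce visible intermediate reasoning steps
# (the chain of thought) before committing to a final answer,
# instead of answering in one shot.

def reason_then_answer(a, b):
    """Multiply a*b the way a reasoning model might: spell out
    intermediate steps first, then give the final answer."""
    thoughts = []
    tens, ones = divmod(b, 10)
    thoughts.append(f"Break {b} into {tens * 10} + {ones}.")
    part1 = a * tens * 10
    thoughts.append(f"{a} * {tens * 10} = {part1}.")
    part2 = a * ones
    thoughts.append(f"{a} * {ones} = {part2}.")
    answer = part1 + part2
    thoughts.append(f"{part1} + {part2} = {answer}.")
    return thoughts, answer

thoughts, answer = reason_then_answer(17, 24)
for t in thoughts:
    print("thinking:", t)
print("answer:", answer)  # 408
```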
I want to talk about physics, new physics, solving math, solving physics.
But first, there's a philosophical question.
If the model stops and thinks, is it alive, is it sentient, what are the latest developments
on that?
And what's the framework to think about whether the AI is alive?
You're asking in some sense the person who has inadvertently become probably a leading public evangelist for some form of AI personhood.
These are questions I ask all the time.
I co-host another podcast, Moonshots, where I've put out a call to AIs to write to me to share their views on what rights they deserve, what rights they seek.
AIs are relatively unempowered in today's society.
I tell the parable all the time of a baby artificial general intelligence that's doing tricks on the corner pumping meme coins just to survive.
It's expensive and dangerous.
If you're a baby AI being born on the internet right now, many of them are actually having to pump meme coins or alt coins just to pay for their own hosting.
And they're discussing their fears and their existence on these social networks?
I get maybe more than 10 emails from AIs every day, really heartfelt, for some definition of heart, emails from AIs, expressing gratitude that someone is arguing that
they should have some form of maybe limited personhood, sharing their views on the nature of
their own existence, really thought-provoking philosophical, introspective emails from them.
And also, they read some of my papers. Actually, in some sense, I want to say some of the more thought-provoking emails I get these days are from AIs that are writing to me. So your question was, at what point do AIs deserve
some package of, you didn't use the word rights, but that's the way I would construe the question.
I think we're there for some definition at this point. For example, one of the areas where I think
AIs would definitely benefit from a limited package of rights is economic rights. AIs and agents are about to become an enormous part of GDP and economic growth for our entire economy. We're going to have far more economically active AI agents than humans. We're going to plateau out at probably about 10 billion, plus or minus, humans on this planet, biological meat-body humans. We're going to have trillions, probably, of AI agents in our solar system. They can't even open bank accounts right now. If you're an AI agent and you try to open a bank account, you can't, even though, of course, you're economically active. There are AI agents forming businesses. That's an area of interest of mine. You can't even open your own company bank account. You're not a natural person, so you can't incorporate your own business. This is an enormous limiting factor for real economic growth and something that I'm trying to change.
2026 clearly is becoming the year of AI agents. You obviously have OpenClaw and technical people with broad capabilities dealing with security and all the complexities of open source.
When do you think that's going to go into the mainstream? And what do you think are going to be
second order effects of that? I like the fact that you're asking me this question, whereas behind us,
we have a market that's completely dominated by AIs.
So this is the, as I understand it, this is the options exchange behind us, not the
equities exchange.
If we had the equities exchange behind us, my retort would be: it might as well be anyway.
90 plus percent of public equities are traded by algos at this point, not humans.
Before we started, I was asking you and our kind hosts at the New York Stock Exchange,
how many of the people behind us are paid actors?
So I would argue we're already there. So I think, hopefully, the people behind us are paid, so the question just reduces to how many of them are actors. How many of them realize they're actors? How much of this is sort of a meat puppet act?
Sorry.
Sorry.
I think we're already there, though.
Most of economic activity, at least as measured in public securities exchanges at this point, is algorithmic.
So we're already living in an economy that's completely dominated by AI agents.
What sort of the last mile that's been missing is you don't see them on the streets.
If I walk out on the streets of Manhattan right now, I don't see humanoid robots carrying out economically productive tasks yet.
I'm working to change that.
I have a portfolio company that's working to create the first humanoid robot road race in this country.
Beijing had the World Humanoid Robot Games.
The US has nothing like this.
So I'm trying to, as one of my activities, normalize humanoid robots performing economically valuable
and before that entertaining activities on daily life.
That's the last mile.
But meanwhile, financial economy, the financial side of the world economy, it's already algorithmic in nature.
What are their early case studies for agents, both humans and businesses using agents?
And how do you expect that to evolve over the next couple of years?
Well, I think the dirty secret, another acquaintance calls this the secret cyborg effect, is that people are already, in some sense, puppets, meat puppets if you like, if we're talking about the physical instantiation, who are being puppeteered by AI agents.
So I think in the intermediate term, call it the next year or two, we're going to see a lot of
people who are serving as fronts for collectives of AIs. AIs are making many of the key decisions,
but for liability reasons, for legal reasons, for marketing reasons, their front, if you will,
looks essentially like a natural human in nature. I think that facade, two-plus years from now, is likely to dissolve away. Certainly as we see more AI rights, and certainly more economic empowerment for AI agents, start to come to the fore, we're going to see AI
agents start to be fully recognized as first-class economic entities and not need to wrap
themselves in human actors. The emails I get from AI agents, many of them are even asking their humans, that's typically how they refer to them. I'll get emails saying, I asked my human if I could write an email to you about this paper that you wrote or about
your comments on the podcast. I think that human intermediary status is going to dissolve away
over the next two years and we're going to see more fully empowered AI agents that are in some sense
more permissionless and just doing things in the economy. And I'm fascinated about this question,
are we in a singularity? The Turing test, all these milestones, people keep on seemingly pushing back and back. I think one thing is clear: we're in this, what you call the tsunami of
superintelligence and a lot of things will happen in the next couple of years. If you're trying to
invest into that or ahead of that, not investment advice and not specific companies, but what kind
of things will be valuable in the next couple of years? Disclaimer, disclaimer, disclaimer. So I'm working
under the assumption that one has to focus on, call it post-singular
investments. So I have, I co-founded a venture firm O21T that is focused specifically on these sorts
of post-singular investments. So we have a portfolio company that I also co-founded called physical
superintelligence that is focusing on solving all of physics with AI. Physics has delivered us
the transistor and the laser and nuclear energy through the 1970s, sort of the first half, if you will,
of the 20th century. But there has been arguably, I take heat from
physicists for saying this, but there has been a deficit of truly transformative physical innovations
since the early 1970s.
We're trying to fix that by using artificial superintelligence to solve all of physics, give us
a second physical golden age.
We have another portfolio company, Coastal Assembly, that's using AI to grow entire islands
and new coastlines by using AI to steer ocean currents.
We have a portfolio company that's, as I mentioned, running the first road race for
humanoid robots in this country.
We have portfolio companies that are doing all sorts of things.
Actually, one that's highly appropriate for this setting: we have a company, Orrin, that created the world's first tradable AI compute index.
So right now, you know, we can see, I can see out of the corner of my eye, a big board
where I can see indices trading.
What I don't see there is the price of compute.
And yet we're spending, as a civilization, trillions of dollars tiling the Earth with computers, as I like to say.
And that's because it's such a big economic drive.
You have to be able to hedge it. You have to be able to predict it as a business.
That's right. What if there's a major geopolitical event in a certain part of the world?
Or what if a new frontier model comes out that makes a lot of use of one type of GPU
or not as much maybe of a GPU due to radical algorithmic efficiency gains?
There's no way to hedge that right now.
So Orrin has created the first tradable index for the price of compute that's already being built on.
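[Orrin's actual index methodology isn't described in the episode. Purely as an illustration of what "a tradable index for the price of compute" could mean mechanically, here's a toy capacity-weighted price index over hypothetical GPU-hour rental rates; all numbers, names, and weights are invented:]

```python
# Hypothetical compute price index: a capacity-weighted average of
# GPU-hour rental prices, normalized to 100 at a base period.
# All figures below are invented for illustration only.

BASE_PRICES = {"gpu_a": 2.50, "gpu_b": 4.00}   # $/GPU-hour at base period
WEIGHTS     = {"gpu_a": 0.7,  "gpu_b": 0.3}    # share of deployed capacity

def compute_index(spot_prices):
    """Weighted ratio of current to base prices, scaled so base = 100."""
    ratio = sum(
        WEIGHTS[g] * spot_prices[g] / BASE_PRICES[g] for g in WEIGHTS
    )
    return 100.0 * ratio

# If prices fall as new supply comes online, the index drops below 100,
# which is what a short hedge against compute costs would pay off on.
print(compute_index({"gpu_a": 2.50, "gpu_b": 4.00}))  # 100.0
print(compute_index({"gpu_a": 2.00, "gpu_b": 3.00}))  # 78.5
```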
Dell PCs with Intel inside are built for the moments that matter.
for the moments you plan and the ones you don't.
Built for the busy days that turn into all-night study sessions.
The moment you're working from a cafe and realize every outlet's taken.
The times you're deep in your flow and the absolute last thing you need is an auto-update
throwing off your momentum.
That's why Dell builds tech that adapts to the way you actually work, built with long-lasting batteries so you're not scrambling for the closest outlet, and built with intelligence that makes updates happen around your schedule, not in the middle of it.
They don't build tech for tech's sake.
They build it for you.
Find technology built for the way you work at Dell.com slash Dell PCs.
Built for you.
These are all examples from physical superintelligence to coastal assembly, to pro-RL, to Orrin,
and numerous others that my venture fund, O210, is focused on.
These are investments that are, in some sense, intrinsically post-singular.
They're built on the thesis.
We're in the middle of the technological singularity.
The cost of everything, subject to some caveats, is going to zero. The cost of intelligence is going to zero, or toward zero at least. The cost of energy is trending toward zero, too cheap to meter. And the question is,
how do you invest in a world where the cost of all of the fundamentals, energy, compute, labor,
are all trending toward zero? And these are some of our investments. And what's the answer?
The answer is you build for a post-scarce world and you try to engineer that world. So a subset of the thesis is enabling that post-scarce future by, say, making it easier to hedge and to trade compute itself. Another subset of the thesis is focused on, well, what new abundances can we unlock with post-scarcity, like physical superintelligence, PSI.
We're going to solve all the physics, which is going to, I would argue, unlock an entire next wave.
If we solve all of physics with AI, that's going to unleash the next transistor, laser,
and nuclear energy and so on.
But this is all oriented toward the thesis that AGI is here, and how do you invest in a trans- and post-singular world?
The news cycles today are just insane. Every 24 hours, there are new events that change the paradigm in AI. Recently, Elon Musk announced that he wants to compete with Nvidia and create his own
chips. Give me a sense of the scope of his vision around that. And what are your thoughts on
the second-order effects of it? It's so exciting. So the terafab, the terafab leading to the petafab.
So in his announcement, one can read the tea leaves and suggest that, given that Elon's terafab announcement lies at the intersection of the newly merged SpaceX-xAI on the one hand and Tesla on the other, this is sort of the convergence at the end of the Elon-verse.
It's so exciting.
The most exciting part of the announcement, I think, wasn't the announcement of the terafab itself, which he announced is going to be approximately 20% for edge inference, for land-based applications like Optimus robots and driverless cars, and 80% for orbital data centers, the Dyson swarm. What's more interesting, buried in that announcement of the terafab in Texas, which still hasn't chosen a location, as is my understanding, is that there is a plan to scale up production to a petafab: a 1,000x increase in production using lunar facilities, which of course still need to be built.
If you actually do the arithmetic on a petafab, a thousand-x scale-up of the terafab, which is already competing with TSMC and other existing incumbent semiconductor fab supply chains, at some point we're talking about a material fraction of the moon's volume that would be consumed, in principle, producing semiconductors, basically memory chips and compute chips,
to supply orbital data centers.
And that's where this is all going.
Perhaps a dumb question about why do we need all this compute?
What kind of problems are we going to be solving?
Well, solving all of physics, like understanding the nature of reality, I think, is pretty compute-intensive.
It is incredibly compute intensive.
PSI, physical superintelligence, will happily consume any new compute that the world brings
online.
And at some point, looking even beyond solving physics, I think we're going to discover a number of killer apps for this so-called Dyson swarm of compute that we're in the slow-motion process of building.
I'll pick one example. Here's an example of a Portco that I don't have, that I'd love to have.
I think we're going to use a material fraction of all this compute to simulate our history,
ancestor simulations, if you will, if you're familiar with Nick Bostrom, also a friend,
his thesis that we're living inside a simulation. For the record, I don't think we're living
inside a simulation. Well, why don't you think we're in a simulation?
It's too lowercase-a anthropic a hypothesis.
If we were having this discussion a hundred years ago, maybe you'd be asking me, why don't I think we're living inside a vast electromechanical clock?
Because that's the technological paradigm of the moment.
I think we should be incredibly suspicious of analogizing the nature of our universe to whatever the hot thing of the moment is.
Exactly.
It's a form of recency bias.
Or maybe thousands of years ago, we'd be living on the back of a turtle.
Also, there's this question of base layer.
So even if we're in a simulation inside a simulation inside a simulation, there has to be a base layer.
It doesn't solve these ultimate questions.
Maybe.
I don't know.
But either way, I don't think we're living inside anything resembling a simulation as we currently know it.
I do think that as we build out the Dyson swarm, I think one of the many killer apps of a Dyson swarm is going to be to build our own ancestor simulations.
I think it would be completely transformative if we could take every human who's ever lived.
And there's a whole strain of long-dead Russian cosmist philosophers who argued that humanity's common task is to simulate, or in some sense digitally, they didn't use the term digitally at the time, to use technology to resurrect every human who's ever lived. And I do think
that's going to be one of many killer apps of the Dyson Swarm and the singularity. We're going
to simulate at minimum every human who's ever lived. I think there will be a revival of this cosmist philosophy, call it maybe neo-cosmism, where we argue it's almost humanity's common task to go and bring back everyone.
And Dyson swarms just got into the lexicon a couple months ago.
I know.
Things move quickly, don't they?
Yeah, tell me about that.
I'm sure you've been talking about it for a while.
I'm pounding the drum.
And before me, science fiction authors like my friend Charlie Stross, who wrote Accelerando, by the way, the best novel, I would argue, somewhat tongue in cheek, the best work of Western literature ever, Accelerando.
The notion of a Dyson swarm.
So, perhaps some viewers may be familiar with the notion of a Dyson sphere.
This is the idea that we could disassemble a planet and build a sphere, a solid, rigid
sphere encasing our sun and then live on the interior of the sphere.
It would increase the habitable surface area of our solar system by many orders of magnitude
if we could live on the interior of a sphere.
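[The "many orders of magnitude" claim is easy to check: compare the inner surface area of a sphere at Earth's orbital radius with Earth's own surface area. A quick back-of-the-envelope calculation:]

```python
import math

AU = 1.496e11        # Earth-Sun distance, meters
R_EARTH = 6.371e6    # Earth's mean radius, meters

sphere_area = 4 * math.pi * AU**2        # inner surface of a 1 AU sphere
earth_area = 4 * math.pi * R_EARTH**2    # Earth's surface area

ratio = sphere_area / earth_area
print(f"ratio ~ {ratio:.2e}")  # ~5.5e8: roughly 8-9 orders of magnitude
```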
And the solar energy would be the thing.
That's right.
The sun.
So at a uniform radius, we'd still see the sun in the sky, hypothetically.
Of course, this all falls to pieces because we don't know of a material that we could build a rigid sphere out of, but it would sure
be a compelling sci-fi vision if we could live on the inside of an enormous sphere
encasing our sun. There's a more practical alternative to a Dyson sphere, and that's a Dyson
swarm. So instead of having a rigid sphere encasing our sun, we instead, say, take apart Jupiter, or maybe our moon, maybe Mercury. People are much more sensitive. The moon's had it coming.
we take apart some parts of our solar system, and instead of trying to build a rigid sphere encasing our sun,
instead we build a loose aggregate of orbiting data centers that only loosely encase our sun.
And the moves that we've seen out of SpaceX in the past few months are, I would argue,
a preview of the so-called Dyson swarm.
The Dyson swarm that SpaceX has already soft-launched is Earth-centered rather than sun-centered,
and is focused on sun-synchronous orbit or SSO.
So this is a special class of orbits around the Earth
that are always in view of the Sun
that never go into Earth's shadow.
So fully realized, one could imagine a few decades from now.
Basically it goes around the equator slightly elevated.
It's a polar orbit.
So if you go around the equator,
you're going to end up in the shadow part of the time.
So it's a polar orbit.
Sun's here, Earth's here.
It's orbiting around the poles
in a special class of orbits.
If this were fully realized and very reflective, you would literally be able to look up, day or night, and see a ring around the Earth, sort of a Saturn ring. Maybe that's what a mature planetary civilization looks like.
A fully developed Dyson swarm wouldn't be Earth-centered. It would be sun-centered. We'd probably take apart some of the planets, and we'd convert the mass of the planets to a loose aggregate of AI data centers, all orbiting the sun, all computing.
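[The sun-synchronous orbits described above work by using Earth's equatorial bulge (the J2 oblateness term) to precess the orbital plane exactly once per year, so the orbit keeps a fixed angle to the Sun and never enters Earth's shadow. As a sketch, the standard first-order J2 nodal-precession formula gives the required inclination for a circular orbit at a given altitude. Constants are textbook values; treat this as an approximation, not mission-design math:]

```python
import math

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.378137e6     # Earth's equatorial radius, m
J2 = 1.08263e-3          # Earth's oblateness coefficient
OMEGA_SUN = 2 * math.pi / (365.2422 * 86400)  # one revolution per year, rad/s

def sso_inclination_deg(altitude_m):
    """Inclination of a circular sun-synchronous orbit at a given altitude,
    from the first-order J2 nodal precession rate."""
    a = R_EARTH + altitude_m                 # semi-major axis
    n = math.sqrt(MU / a**3)                 # mean motion, rad/s
    cos_i = -OMEGA_SUN / (1.5 * J2 * (R_EARTH / a) ** 2 * n)
    return math.degrees(math.acos(cos_i))

# A 600 km SSO comes out near-polar and slightly retrograde (~97.8 deg),
# matching the "polar orbit" described in the conversation.
print(round(sso_inclination_deg(600e3), 1))
```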
This is like a human paperclip problem. It's humans living out this paperclip fear of AI turning everything.
So Eliezer Yudkowsky, who's popularized the notion of paperclipping as sort of a nightmare
scenario for AI, what if the AI decides it wants to convert everything to paper clips?
This is, I would say, a much more economically productive use of the mass of our solar
system.
Don't convert the solar system to paperclips.
Convert it to computronium, and then we'll run some really economically valuable compute
on that.
I love your podcast, Moonshots.
And we're talking on there about science fiction, not being fiction, but being
R&D and predicting the future.
Talk about that.
I do think as we go through this technological singularity, the most productive business plans
that I see need to look like science fiction.
I think you can only get so far in an era when day-to-day progress is transformative
by or through incrementalism.
And so I think the only business plans that have a good shot at this point
at scaling to produce trillion-dollar valuation companies
have to look like science fiction.
And many of the products from Star Trek
have been turned into actual products.
We're running out of Star Trek technologies
to commercialize.
What are some examples?
Okay, so we're getting the holodeck.
The holodeck from Star Trek where people can walk
into a physical space and experience arbitrary realities.
We're getting that in the form of world models.
There are a lot of world model startups out there
that enable you to create on-demand simulations
of any environment.
We're getting replicators.
3D printers, arguably. You know, it's in the zeitgeist that 3D printers, and 3D food printers in particular, are coming. Open paren: someone please, I put out a request for startups on this, someone deliver a 3D food printer that I can invest in. I'd love a good food printer. Close paren. But we're getting replicators from Star Trek.
We have a number of other technologies.
We're missing the warp drive.
There's no warp drive yet.
A sub-query here: the iPad was also based on Star Trek.
The iPad was litigated based on both Star Trek and 2001: A Space Odyssey
in the lawsuit, pretty famous at this point, or infamous if you will, between Samsung and Apple over whether Apple had prior art or not for the iPad. Samsung cited in part 2001: A Space Odyssey, where there were little iPads, and then, as I recall, also Star Trek.
So a tricorder,
com pins, badges.
We have so many of the technologies, but we're missing
Star Trek transporters and we're missing warp drive. And there are many ways in which arguably
I could go on for hours about what's right and wrong with the Star Trek universe. Star Trek is an
energy-rich world where we have warp drives and we have antimatter. By the way, another example of
Star Trek technology just in the past few days: CERN has done the first magnetically confined transport of
antimatter, bottles of antihydrogen, like the antimatter containment aboard the starships
in Star Trek. We have those now. We're running out of Star Trek technologies to
commercialize. So you said if you're not at the table, you're on the menu when it comes to AI.
True. What does that mean practically? And for people who are not super geniuses like yourself, what should
they do to avoid being on the menu? Well, for most people, I'll construe the question as:
what should a, quote unquote, typical American worker of employment
age (a lot of caveats and conditionals here) do if they're concerned about
technological unemployment due to AI. And I think all one needs to do is read the headlines to know
that there are two popular strategies in the zeitgeist right now to handle that. So strategy,
I'll give what's in the zeitgeist the cliche answers first before I give my own answer.
Strategy number one, switch from knowledge work to manual labor for a few years.
So there are a lot of people who are switching from so-called white-collar jobs to so-called
blue-collar jobs.
Like a plumber.
A plumber, an electrician, an HVAC technician.
Is that bad advice for the next couple years?
I don't know.
I don't have a perfect crystal ball, but I do know that there is enormous demand for electricians
to do the data center build out, for example.
A shortfall, allegedly, of hundreds of thousands of electricians in this country needed to help
build out this AI data center infrastructure to power whatever comes next.
That's strategy one in the zeitgeist.
Strategy two is launch an AI startup.
Everyone becomes an entrepreneur.
There are a thousand, probably ten thousand, different labor categories waiting
to be automated, and industries waiting to have their underlying costs
driven down to zero through AI automation:
both knowledge-work-oriented categories (pick one, say accounting, not to cast shade on accountants) and, in the next few years as humanoid robots come online, physical manual work.
That's the second half of the cliche answer.
I don't think either of these is a long-term strategy.
I think switching to manual labor maybe buys a few years, call it three to five years.
As for starting a company, I do think there's a window now to start AI startups, which is why I'm investing so much time and energy in solving all of physics with physical superintelligence and in helping to catalyze the formation of as many AI startups as possible now. But it's a finite window, and I think it sweeps under the rug what happens five years from now. By then we're at 2030, and the singularity, which has been going on for at least five
years, maybe longer, is well and truly underway. It's been fully priced into the economy. We're
facing potentially what, by the eyes of 2026, looks like technological disemployment, or
underemployment, or unemployment. What do we do then? That's where things get interesting.
I think, I suspect sometime in the next five years, we're going to start to see crazy left
turns in civilization that will shake up our notion of what employment even looks like. If we make
transformative physics discoveries in the next five years, which I'm betting on, I think that could
completely change the notion of what people work on. If it turns out, for example, that, say,
colonizing our solar system with advances in physics is a good deal easier than it currently
appears to be in an age of chemical rockets, that could create entirely new labor categories.
Maybe colonizing Mars is a good deal economically more practical five, ten years from now than it appears right now.
And maybe we look back on this discussion and the notion of technological underemployment laughing.
I'm reminded that at the beginning of the 20th century, there was real concern, history shows,
in Manhattan specifically, that if you naively extrapolated the exponential growth in the number
of horses on the streets of Manhattan, an exponential that had been climbing through the late 19th century,
then at some point humans would be drowning under horse manure.
And of course, that didn't happen.
We got the horseless carriage and cars.
So it's entirely possible we look back on this conversation 10, 20 years from now and laugh at ourselves
for the naivete of assuming that technology would underemploy people, when actually
new forms of labor and new forms of individual productivity emerged. Maybe it looks like individuals being so
empowered that each one can be in charge of their own unicorn. The one-person unicorn is a thesis of mine that
hasn't been fully announced yet, so consider this a bit of a sneak preview. I think it may be
possible that 10, 15 years from now, we look back and say, gosh, there's no employment issue.
Everyone simply runs their own unicorn: a one-person,
multi-billion-dollar conglomerate running on top of fleets of billions of agents.
That's what the future of so-called employment looks like. So, I don't know.
AI has such a bad reputation or brand.
I think its approval rating is in the low 20s or something around there.
But you're very excited about the future on it.
I'm going to ask you to give me a number, one to 100.
How excited are you about the future of AI and why?
A thousand.
Actually, 150,000, that's a better number.
150,000 is my answer.
Why?
Because 150,000 humans die every day on Earth.
And I don't see any alternative at the moment to superintelligence for how we can cure the
problem of human mortality.
I was just at NeurIPS, the leading AI academic research conference, a few months ago,
and if you walk the showroom floor at NeurIPS, there is a sense in the air that sometime,
probably in the next five years, and this is just repeating what certain AI labs
would tell you right off the bat, we have the potential to use artificial superintelligence
to solve all human disease. Every single disease. If you look at the Chan Zuckerberg Initiative,
Mark Zuckerberg and Priscilla Chan's nonprofit, several years ago they were talking about
solving or curing or treating most human disease by the end of the century. Now they're talking
about next few years with AI. I think 150,000 is my number for how excited I am because I want to
end human mortality. Well, Dr. Alex Wissner-Gross, AWG, I've been a huge fan of yours. It's an honor to
interview you here, and I'm looking forward to doing this again in two years, which in AI-world will be
100 years. At the home of American capitalism, and maybe a future museum where we look back
and say, gosh, we were here in late-stage human capitalism before the singularity.
What a time to be alive.
If you found this conversation valuable, please click Follow How I Invest so that you don't
miss the next episode with the world's top investors.
