Moonshots with Peter Diamandis - Meta Buys Moltbook, GPT 5.4, and Fruitfly Brain Upload | Moonshots Live at The Abundance Summit 238
Episode Date: March 17, 2026. Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360. Salim Ismail is the founder of OpenExO. Dave Blundin is the founder & GP of Link Ventures. Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified. Emad Mostaque is the founder of Intelligent Internet. – My companies: Apply to Dave's and my new fund: https://qr.diamandis.com/linkventures... Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy – Read the Solve Everything Paper: https://solveeverything.org/ Get access to metatrends 10+ years before anyone else: https://qr.diamandis.com/metatrends Connect with Peter: X Instagram Connect with Emad: Read Emad's Book X Learn about Intelligent Internet Connect with Dave: X LinkedIn Connect with Salim: X Join Salim's Workshop to build your ExO: https://openexo.com/10x-shift?video=P...h Connect with Alex: Web LinkedIn X Email Substack Spotify Threads Listen to MOONSHOTS: Apple Spotify – *Recorded live on March 10th, 2026 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You know, a huge amount of expectation on GPT-5.
What do you think of it?
Right now, everyone on this should be trying to get as much data
because the models are coming.
Now we have the right models.
Their real power will come in the cost drop,
which will make it much more accessible to a lot of people.
The anticipation of this launch was up there
with the top three product launches of all time.
I think they actually showed some incredible capabilities.
As the cost of talent is increasing,
that's going to force Frontier Labs to start competing
based on algorithmic insights and ideas.
Ladies and gentlemen, welcome the Moonshot Mates.
Ladies and gentlemen, let's give it up for the Moonshot Mates.
Welcome everybody. Welcome, welcome.
All right.
I love you guys. I love you guys.
Any fans of the Moonshots podcast here in the room?
Love to hear it. Love to hear it.
So listen, I am so blessed to have an extraordinary group of brilliant individuals
that I get to work with twice a week.
You know, we talk about the rate at which we're actually generating our Moonshot podcast is accelerating.
We're going to be moving into an Airbnb together and doing a continuous podcast soon enough.
All right.
I want to bring them out one at a time because they're all extraordinary.
Let's give it up first and foremost to DB2, Dave Blundin.
Dave, come on out.
Dave Blundin, everybody.
All right.
Next up, my brother from another mother.
Salim Ismail, give it up for Salim.
We're about to make magic happen because these two gentlemen have never met in person.
Let's bring out AWG, our resident genius, Alex Wissner-Gross.
Yeah.
Going to grab our seats.
No, of course, Salim needs to bring a glass of wine out.
Oh, God, it's real.
So first of all, just to make a little bit of Moonshot podcast history here,
Alex, please meet
Salim. That's flesh.
Our meat puppets meet for the first time.
This proves nothing.
Nothing.
We've been 3D printing him for a month now.
There has been conjecture for the last year or so
whether Alex is an AI.
I am freshly bioprinted.
You're a neural link.
These thoughts aren't real.
So, gentlemen,
I appreciate having you guys here at the Abundance Summit.
So this is a live broadcast from the Abundance Summit here in Palos Verdes,
year 14 of our 25-year journey together and excited that you guys are going to be on stage with me every year from here on out.
Wait, you just committed us to a 24-7 Airbnb podcast.
I think it's a reality TV.
Tell your family.
Cameras in the bathroom, the whole nine.
Okay.
That'll sell.
Well, okay, welcome to a special episode of WTF Just Happened in
Tech, your number one podcast for AI and exponential tech.
Our mission, getting you ready for the supersonic tsunami heading your way.
And it's a lot.
It's a lot.
All right.
Shall we dive on in?
Let's begin.
All right.
Here we go.
So let me begin by, we made an announcement here at the summit that I want to share with everybody on the Moonshots podcast.
something near and dear to my heart, something that I've concocted with the XPRIZE board,
which both Dave and Salim are on, which is the launch of a global competition called the Future Vision XPRIZE.
So I, for one, am just sick and tired of all the dystopian content on TV and in the movies.
We are basically being brainwashed that all AI and robots are dystopian, killer AIs, killer robots,
It's Terminator, it's Black Mirror.
And in fact, if you see that,
if that's the only future that you see,
then why would you ever want to live there?
Yeah, yeah, that's so true.
So much of what we build is intentional,
and it comes right out of our vision of the future
that comes straight from the media,
and then we create what we see.
Yeah.
If you change what we see,
you're going to change what we build.
Yeah, you know, I say over and over again,
we're holding two futures in superposition.
One future is Star Trek,
where we're collaborative with technology,
We're working with technology.
And that's an amazing future.
That's what I want for myself and my family and my community.
The other one is the dystopian future.
It is Terminator.
It's Black Mirror.
It's one where technology is suppressing us, not enabling us.
So about a year ago, I sat down with Rod Roddenberry,
the son of Gene Roddenberry, the creator of Star Trek,
and said, how about we do something to incentivize the next generation of Star Trek?
and then went to my friends at Google, and they brought in Range Media. We brought in
XPRIZE, which is operating this competition. We've raised three and a half million dollars
for a competition that launched yesterday and is going to run through the Moonshot Gathering,
which I'll mention in a minute, on September 25th, where the finale is. Let's roll the video. You know,
this exists because of a TV show and I'm not exaggerating Martin Cooper the man who invented the
mobile phone said he built it because he saw it on Star Trek. You saw Captain Kirk flip
open a communicator and thought, hey, I can make that real. The iPad, it started as a prop
on Star Trek, too. Video calls, Star Trek. Voice assistants, Star Trek again. Props became products.
Fiction became multi-trillion dollar industries. So here's the question. What's a vision of the
future that excites you? What stories offer humanity a hopeful, compelling, and abundant vision
of what's to come? We're putting up $3 million in prize money plus millions in film financing
to make your movie.
Our program and partnership
with the XPRIZE Foundation,
Google, and Range Media Partners,
is called the Future Vision XPRIZE.
And it's one of the world's largest competitions
to address humanity's greatest need, hope.
Create a trailer or a short film,
three minutes or less.
Show us and the world your vision of the future.
That vision could become the next blueprint
for all of humanity.
Find out more and register
at futurevisionxprize.com.
So whether you're watching this on X
or you're watching this on YouTube
and you're a creator,
please go and register.
By the way, how awesome was that opening video
from C.J. Truheart, one of our
abundance members here who gave us
our first outro piece
and started a tradition that we've all
enjoyed so very much.
So thank you for that. All right.
Next up, we're announcing something important here
for all our Moonshot listeners
that we are a go with the Moonshot
gathering. About 500 of you put down $100 deposit. Congratulations. You got on the early bird special.
And it's a go on September the 25th in downtown LA. We've rented out the United Theater.
It's going to be an extraordinary event. Our moonshot mates will be there with us in
downtown L.A. In addition, Astro Teller, the captain of moonshots, will be there. Gotta have Astro,
right, if it's about moonshots. Cathie Wood, Anousheh Ansari,
a number of incredible CEOs I can't yet announce,
but believe me, they'll be extraordinary.
We're gonna be at this event announcing,
we're gonna have the five finalists
for this Future Vision XPRIZE there.
We're gonna have some of the top producers and directors there
along with many of you,
voting on which of these are going to win.
We're gonna be going from probably 10,000 or more entries,
narrowing it down to the top 100,
the top 50, the top 10, and the top five.
and we'll be awarding the top one.
We've raised $3.5 million
to support this competition.
In success, we will make
at least one film and potentially
two films.
You know, always like just...
Like full-length feature films?
Full-length feature films
global around the world, and these films
will hopefully depict what the future
could be like. What is...
And all you have to do is come on September 25th
and watch the first 10,000.
And vote on them.
Vote it down.
Yeah, okay.
I'm excited for what, Alex,
you would see as your vision of the future here.
Post-scarcity inspirational videos are already baked in.
I would be disappointed if by the time we get to September,
if we don't have a thousand videos of ultra-high inspirational quality
generated for nearly free at this point.
Yeah, it's amazing.
The tools to be able to create visions of the future.
But it's important.
You know, the number one genre of movies out there are horror films.
And like, what are we teaching our youth if we're constantly,
Our brains are neural nets, and we train our neural net every single day by what we watch,
who we hang out with, what we listen to.
So you could not pay me enough money to watch the Crisis News Network.
When you first were pitching this idea, yeah, the Crisis News Network.
CNN, for those of you who are slow.
You made a point that I had completely not noticed, which is if you go back to Star Wars, you know,
C3PO and R2D2 were incredibly lovable.
And, you know, kids that are now building AI had little stuffed
R2-D2s when they were kids.
But if you've tracked the trend in the movies after that,
they got more and more and more dystopian
all the way through...
And I think it just got cheaper to create explosions and deaths,
you know, using AI and...
And just...
It just really painted a picture
that got our amygdalas going,
but not our hearts going.
Yeah.
So, you know, at the moonshot gathering,
we're going to have the winners of that.
We're also going to be launching something called
the Moonshot Hackathon.
We'll have more information about that.
And that evening at the moonshot gathering,
we're going to have an extraordinary unconference.
We're going to have the XPRIZE,
teaching people how to design an XPRIZE.
We're going to have the team from Google X,
teaching you how to create a moonshot organization inside your company.
How do you do storytelling?
We're going to have Cathie Wood talking about her Big Ideas
2026. An incredible event.
If you're interested, this is an event in September
for builders, for entrepreneurs, for coders, if they still exist.
So if you're interested in coming...
Unemployed coders, you...
If you're interested in coming to moonshot gathering,
go to moonshots.com.
Another announcement here.
We now have acquired moonshots.com as our URL to host all of our activities.
So congratulations to that.
You know, I still remember, Emad, was it three years ago you were on this stage
and you said
coders are going to go away.
Yeah, the next five years.
In the next five years. Oh, they've gone away in three years.
But you know it was amazing when you said that on the stage
it made news throughout India.
Do you remember that?
Yes, I got lots of emails.
You got lots of emails.
Many, many emails.
It was a correct prediction, and you were so right about that.
In today's lexicon, you would say coding is cooked.
All right, I want to
hit a couple of things before we get to the current AI news and robot news and economic news
that we talk about on our WTF episodes, which was a little bit about the Abundance Summit.
We had so many incredible speakers.
We kicked it off with a conversation among robots.
Actually, we kicked it off with Eric Schmidt, which we streamed live on X.
So what do you guys remember about the Eric Schmidt conversation?
So Eric, he said, one of the questions actually from the crowd is how many foundation model labs are there going to be?
And he said, well, look, there's five; there won't be more than ten.
But there will be thousands of successful AI startups that percolate out.
And a lot of what we'll see in the news here reinforces what he was saying.
And what he didn't say then was: and everything else is in trouble.
It was kind of implied.
He left it hanging.
That was a theme, actually, throughout a lot of these talks is the,
you know, the period of time between now and abundance,
there's all kinds of turbulence and change coming.
And the AI community is now kind of soft-selling that a little bit
to try and focus on the ultimate abundant destination.
So, yes, a few AI labs worth trillions of dollars,
thousands and thousands of successful startups,
and a lot of incumbent companies that are in deep, deep trouble.
Yeah, I guess he said like four or five in the U.S., one or maybe two in Europe,
and a couple in China.
What else, Alex? Do you remember from Eric's presentation?
Even just on that note, history does rhyme a bit.
Do you remember, I think this was Thomas J. Watson, the IBM founder, once remarking
there would be a global market for exactly five computers.
And I wonder whether we'll look back and say, okay, maybe there will be, at most,
five major American model providers as maybe artificially limiting the future of the light cone.
I think it's going to be much, much larger.
I thought it was interesting Eric's comments on the San Francisco consensus,
which he characterizes as, I think, recursive self-improvement being some point in the future.
It was interesting, right?
So, I mean, he was like, when are we going to see recursive self-improvement?
And I kind of felt like he said, like three years out.
What's your answer to that?
Maybe three months ago?
We're in the middle of recursive self-improvement now.
And I would say, my estimate of the San Francisco consensus:
we're deep in the middle of recursive self-improvement right now.
Almost every major frontier lab has made it quite clear in their public announcements that all of the frontier models,
all of the state-of-the-art models that have been announced in the past few months
were largely designed and trained by their predecessors.
That is, by definition, recursive self-improvement.
We are there.
Emad, yes?
Yeah, I mean, I think you can literally see it.
It's take-off time.
Take off time.
Inflection point.
And nobody wants to say it.
Yeah.
Which is the most interesting thing.
Why?
Well, because they're afraid if someone knows that they have it,
then other people will know that they have it.
And then pressure will come from all sorts of clauses they have in their contracts.
Especially the government pressure is like, look,
look what happened in the last two weeks at Anthropic and Open AI.
You don't want more of that.
You don't want congressmen in your building tomorrow.
Yeah.
It's interesting.
I asked Kevin Weil, who is also on our stage, right,
who's the VP of science.
He's in charge of using all of OpenAI's capabilities to advance science.
His statement was, I want 100 scientists winning 100 Nobels.
I was like, that's interesting.
But when I asked him, are you going to keep your model secret
because you're going to be able to use them to advance your company far faster than
anybody else?
He said, no, no, our job is to get it out there in the public.
I don't believe that.
We still don't have the model that they used to win the gold medal in the IMO.
Interesting.
You know, we commented, I think I commented at the time, that's the first bifurcation that you see.
We used to have the frontier model every single time.
The moment they got to that, that was the last time.
The other thing, which was fascinating, you know, I asked Kevin outright, and I love Kevin.
He's an incredible human being.
I said, okay, you're about to get, you know, AGI slash ASI that's going to be able to help you solve longevity,
help you get room temperature superconducting,
help you get new kinds of molecules, solve, you know, physics, chemistry, and biology.
Fusion. Who doesn't want fusion?
Fusion. And we'll talk about fusion. But the thing is, these are all trillion-dollar opportunities.
So all of a sudden, I'm realizing that these frontier companies are going to be able to generate trillions of dollars of new revenue
because of the products they're going to be creating.
What does your t-shirt say, Peter?
It says, solve everything.
What does yours say, by the way?
Mine says, let there be agents.
Let there be agents.
We're missing the lobster theme here.
That's true.
This is the whole point, though, that, I mean, of this book that we just co-authored,
that we get superintelligence and the killer app, arguably,
of superintelligence is solving everything,
including all of these high-profile, glamorous, scientific, and engineering challenges.
It's happening.
And Anthropic and OpenAI, I'm sure Google, all the labs are hiring the top mathematicians and physicists and chemists and biologists inside.
But they're software companies.
Why are they hiring these people?
Because everything, so friend of the pod, Ray, as I think Ray would say, everything's becoming software.
And when we have superintelligence, solving all disease, it's a software problem.
If we can create a virtual cell that perfectly models diseased states,
and we can steer through cell embedding space to get from diseased cell to healthy cell,
it's a software problem.
Everything's becoming a software problem.
I mean, the minute CRISPR arrived and you could edit the human genome,
the human body becomes a software engineering problem.
It's all just a software problem, at which point a coding model can do essentially anything in the physical world.
Yeah, fascinating.
We had some of the top robot CEOs here, four of them.
We had one out of China; out of the U.S., we had three. And, you know, it's interesting.
Question of when these robots will start to pop into our homes.
I pulled Bernt Børnich aside, and he promised me, okay, I'm not going to take one of the two robots he had here, unfortunately.
But this summer, he will ship me one of those.
This summer?
One of the X robots, yes.
Wow.
Yeah, we'll have Brett here next year with figure.
You're going to get one of those, too, right?
You're going to have them put it out?
Probably duke it out in the backyard for entertainment.
I think one other CEO we had here at the Abundance Stage, which was amazing, was Dara, the CEO of Uber.
What did you find interesting about Dara's comments?
You know, Dara, the crowd wanted to know desperately, like, what's the timeline to automation, self-driving car, robotics?
And he was like, you know, we're going to automate 30% or so of our employment this year.
I'm listening to this.
I'm on so many boards where the CEO is telling me, Dave, talk to my whole company,
but don't talk about rampant job loss.
And you're like, Dara, you have, what, a million-odd drivers, and the self-driving car is imminent?
I was like, well, 30%, maybe, you know.
He did make a very valid point, though, that as we automate, you'll need human drivers for the areas that you don't have autonomous cars,
and you'll have Jevons paradox continue to just flow
gently into the environment. Although we're talking about rampant job loss, we note that IBM is hiring
a ton of entry-level folks because they're much better with AI than the older folks.
Well, you know, we'll look at a chart.
There's lots of counterpoints as well happening.
That's great.
So we'll look at a chart that shows, you know, where the job loss is earliest.
And it's actually in areas where those people are going to have no trouble becoming AI experts.
But the driver, I mean, where do you go?
And I mean, I wouldn't want to be fielding that question on this stage. But this is all part of the, you know, the whole
like, okay, this is not an easy thing to talk about in a public forum.
So we talk about it on a podcast all the time,
but I don't see a lot of other people being able to,
just politically able to actually be candid about it.
But it's imminent.
Let's jump into the top AI news of the week a lot as always.
Here we go. We're going to hit the benchmarks.
My son always says, you know, okay, the numbers got higher, Dad.
That's great. What else is new?
OpenAI releases GPT 5.4.
Let's go to our resident benchmark expert here.
Okay, so benchmarks go up and to the right, news at 11.
Except that in this case, one of my favorite benchmarks is the Frontier Math Tier 4 benchmark,
which, for those of you paying close attention, Frontier Math Tier 4 from Epoch AI captures the ability of AI
to solve what are considered research-level problems in math that would require a team of professional
mathematicians several weeks to solve. They have known solutions, but are nonetheless very challenging
problems. Wicked hahd, in Boston. Wicked hahd? Wicked hahd. Hard problems. And now with GPT 5.4 turned up to
maximum reasoning capability, we're seeing finally, and this was a prediction, I think, in our prediction
episode, math is cooked. We're seeing, I think, 38% capability, 38% of all of these problems that are
high difficulty, professional mathematician, research level problems are now solvable by AI.
And there are even rumors, even in the past 24 to 48 hours, that the next tier up, so-called
open problems benchmark, that 5.4 is reportedly rumored to be on the verge of solving the first
open, hard math problem. So math, I think, is in some sense the bellwether. It's the canary
in the coal mine: all of these fields, math, science, engineering, medicine,
these are all going to be solved, solve everything by AI, and that's incredibly exciting.
Yeah, and just to fill in a gap there, so this is the most correlated with AI self-improvement,
and the reason it's the Bellwether and the Canary that owns the coal mine is because it's not data-starved.
All these other areas, the AI is equally capable in these other areas once it gets the data.
So this is kind of the window of time where, you know, why are you hiring Nobel Prize winners in a foundation model?
Well, we need the data.
We can't make this kind of progress in biotech and in physics without the data flowing into the AI.
But the capability is there.
One of the things also that Kevin Weil said is they're starting to run these dark science factories, right, where they're mining data from nature.
You know, we're done mining data from Common Crawl,
done getting it from Reddit and our Facebook posts. But can we extract it from physics?
Can we extract it from chemistry, biology?
There was no data ceiling.
It was completely illusory.
And I think history will look back at this moment and say, in the same sense that we used,
say, petroleum, oil products in the ground, that were left by past generations of living
beings to bootstrap ourselves to the era of solar and fission and fusion.
Similarly, the internet, which was collected by a bunch of fat fingers, punching keyboards,
and uploading content from the collective human experience to the internet,
just so we could compress it and pre-train our large language models.
That was just the biological bootloader for an era of synthetic data
when we don't need pre-trained human data from internet posts anymore.
Now it can all be synthetic.
We've reached orbit.
We've reached escape velocity, and now it's synthetic data from here on out.
Emad, what do you make of 5.4?
So I think the really interesting things, apart from solving mathematics, something, everything,
you've got the OS World Verified and the Toulathon benchmarks, because OpenAI just bought OpenClaw.
And now those benchmarks are actually just broken through human level.
So AIs can use the computers better than humans.
A bit of silence on that one.
So, you know, this is the first one.
And then OpenAI also just did a deal with Cerebras.
So when you're using it right now, it looks like when you're dealing with, again, a human on the other side.
It's like 50 tokens a second.
Or something like when we use GPT 5.4 Pro extended,
it takes 20, 30 minutes,
like sometimes it's stuck on a couple of hours for me.
You're going from 50 tokens a second
of this level of knowledge to 1,000.
So in Codex now, if you use 5.3 fast,
it's 1,000 tokens a second.
I'm so glad you brought up Cerebras too
because I met Andrew Feldman,
the CEO last week in Palo Alto.
And you remember at the beginning of the year,
my prediction was 100x.
The neural nets will be 100 times bigger
at the end of this year than at the beginning.
That is so in the bag now.
I can tell you.
In fact, we did the math on that.
We cut it out of the show, sadly.
But the ratio of the intelligence from the beginning
of the year to the end of the year is the same
as buzzard to human.
That's how much it's gone up.
I like using...
I like using dog to humans.
Are those extinct?
I'm going with it.
All right, Claude consumer growth
surges. So let me get this right.
Claude and Anthropic are in the news, getting sort of like raked over the coals by the Department of War.
And rather than the public viewing that as, oh, we better stay away, everybody dove in.
Yeah.
Is that like the big middle finger to the government?
What is that?
Attention.
Increased attention also.
Increased attention.
So here, just to call it out, what we're seeing here is Claude basically,
you know, shooting ahead of ChatGPT.
It's the Streisand effect.
Let's call it what it is.
It's the Streisand effect.
Pay no attention to Claude.
Everyone uses it.
I think history shows, the past few years,
every attempt to pause any form of frontier capabilities
ends up being a net accelerant to capabilities.
You remember, a couple of years ago,
our friend Max's pause AI movement for six months.
What did that do?
maybe on margin it slowed down open AI capabilities a little bit.
Everyone else shot ahead and it was a net accelerant, brought more competition to the space,
and ultimately we find ourselves in a race state where capabilities are shooting ahead.
To the extent that any of the interaction of the past month or so between Anthropic
and the Department of War ends up on the margin decelerating Anthropics capabilities
or their ability to go to market, even if it's marginal at best,
That's going to be a net accelerant to the entire ecosystem, I think, because you'll see OpenAI and XAI and Google Gemini capabilities
skyrocketing ahead with all these new capabilities, and suddenly it brings parity where just a moment before, like all of two or three weeks ago, Anthropic was in the lead with Claude Code plus Opus 4.6 plus agent teams.
And now, in some sense, this is a bit of a leveler, giving everyone else an opportunity to leapfrog.
I'll give you another spin on this, too, because Peter made the...
the point in the last podcast that when you and I use AI, if something gets ahead in the benchmarks
by a couple points, we're going to move to it.
Yeah, we're trying to solve these really hard problems.
You need that extra IQ.
You're never going to slip.
You're going to be on the front edge.
But when you look at the consumer use, and it's like writing your English paper, it's answering
who gave you the Red Sox score or whatever, people don't care about using the latest-greatest
model for those use cases.
So here you're seeing a whole community say, wow, you're willing to work on defense stuff
and blow up other countries, I'm switching to the other guy.
And I really don't care.
I'm doing it because I prefer that brand now.
But I mean, look at how early it is.
Like, when Anthropic announced their legal plug-in,
the legal stocks sold off billions and billions of dollars, right?
They can move things with just one product announcement.
Oh, you saw that.
Look how many users?
11 million users out of 8 billion people and 300 million Americans.
We're so early still in terms of adoption and knowledge.
That's what you're saying, yeah.
I've just worked out what Claude's fundraising strategy is: short a bunch of legal stocks,
and then announce a bunch of plug-ins, and then just do that market by market by market.
Isn't that scary?
Like a lot of the guys that are in this role, like normally when you have that much leverage in the world,
you're like 60, 70, 80 years old, you've been climbing up their ladder.
You learn along the way.
It doesn't happen overnight like this.
I'd love to be in the room where they go, which markets should we mess with next?
Stroking when you're a little bit of destroy.
All right, this was fascinating.
Anthropic reveals potential AI job disruption versus real AI use.
So, Dave, do you want to explain this chart?
Well, so the outer ring here is saturation.
So if the blue you see on the edge gets to the outer ring,
that means it can do 100% of that job.
So if you looked at this just a few months ago,
it would have been a little blue blob in the middle.
Then you look at it one month ago, it's a bigger blue blob.
And now it's this massive blue blob.
So if you look really closely, you can barely read this small font there,
but all this white collar activity is 80, 85%.
I'll just read off the top here.
At the very top is management.
If I go clockwise, it says business and finance, computer and math,
architecture and engineering, life and social sciences.
It dips on social services.
It peaks on legal, dips on education.
I'm not sure that makes sense.
And then it peaks again on art and media. Go to 45 degrees,
and it's office administration as a peak.
Yep.
So.
And then look at the bottom line.
What are the troughs?
Like the least affected.
The troughs there are health care support.
Again, you know, we got to be close to that.
Food and services, ground maintenance, personal care, sales.
So we're going to watch this chart, and we're going to see this blue virus infect all of human existence.
I think it's amazing, though, how great a management tool it is.
I use it constantly now.
If I compare, you use what constantly?
I use mostly Gemini and some Claude 4.6 to basically build entire business plans
and also to manage, to track what about 1,100 people are doing.
And is it in alignment with their missions and are their missions clear?
And it's just thousands and thousands of documents that I could never read manually.
It can synthesize it down and give me conclusions and just point me to the hotspots.
The way you do that is so important for everybody listening to understand.
I mean, you can now understand what your employees are doing, how well they're doing it,
how they're using their time, are they performing, and it gives you a management oversight
and optimization potential you've never had before.
It's incredible.
And I know a lot of people in this room manage large, large groups of people.
It's just a gold mine of opportunity.
It's so good.
How do you use it, Dave?
Well, so first of all, every person in every organization now has
to be operating with crystal-clear written documents and written plans.
We used to do a lot of meetings, a lot of Zoom meetings, whatever.
Now just put it on paper so the AI can read it too.
All of our investment decisions, so for the venture fund, all the deal memos go through an AI reader,
and the AI tries to emulate what I'm going to say.
And it's so perfect.
It's exactly, no, we're not doing that deal, and here's why.
What did the AI say?
Oh, that's exactly what I was about to say.
Great.
I don't have to say it now.
So we're very close to having the AI make very, very good venture investment decisions.
And we still obviously double check and triple check.
And there's a huge human component, but I just can't believe how good it is.
And it's clear that where you decide to invest and which business units are doing well
and which ones are going to shut down, it's all going to be AI-assisted right now.
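As a rough illustration of the deal-memo workflow Dave describes, here is a minimal Python sketch. Everything in it is an assumption: `call_llm` is a stub standing in for whatever model API the fund actually uses, and the prompt format is invented for illustration only.

```python
# Hypothetical sketch of the deal-memo screening loop described above:
# feed each memo plus past decisions to an LLM and ask it to emulate
# the partner's verdict. `call_llm` is a stub; swap in a real model API.

def build_review_prompt(memo: str, past_decisions: list) -> str:
    """Assemble a prompt asking the model to emulate the partner's verdict."""
    history = "\n".join("- " + d for d in past_decisions)
    return (
        "You are emulating a venture partner's investment judgment.\n"
        "Past decisions (deal -> verdict and reasoning):\n"
        + history
        + "\n\nNew deal memo:\n"
        + memo
        + "\n\nReply with PASS or INVEST, then your reasoning."
    )

def call_llm(prompt: str) -> str:
    # Stub: replace with a real model call. Returns a canned verdict here.
    return "PASS: market too small relative to the fund's return targets."

def screen_memo(memo: str, past_decisions: list) -> str:
    return call_llm(build_review_prompt(memo, past_decisions))

if __name__ == "__main__":
    verdict = screen_memo(
        "Seed round, B2B SaaS for dental clinics, $2M ARR.",
        ["Acme Robotics -> PASS: hardware margins too thin."],
    )
    print(verdict)
```

The key design choice is feeding past verdicts alongside the new memo, so the model emulates one specific partner's judgment rather than giving generic investment advice.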
Emad?
Yeah, I mean, I think that all the gaps there are the robots, right?
The robots are coming.
Yeah, this is Anthropic.
Yeah, no, the grounds crew is in great shape.
at zero basically.
I mean...
It's a robot waiting to happen.
Salim, what's your take on this?
Well, I think this is the
huge shock where
if you went back 10, 15 years ago,
there was no futurist in the world
that thought that manual
labor was not going to get automated.
Right? And what we found over the years
is the exact opposite, which
means don't ever listen to anybody that predicts
the future.
And so
this is a huge... This is
part of the magic of where we're living, and we have no idea what's coming. And every time we take
a step forward, we go, oh, my God. And we've gone in this orthogonal direction that we just never
predicted. Yeah, I keep, I'm just interject something. I keep on asking the experts I run into,
how far out can you predict the future? Yeah. And it used to be like 20 years, and then it was like 10 years,
and now it's like three weeks. If that. There's no firewall. Let's call a spade a spade. We can all
extrapolate, there's no firewall. We know where this ends. I mean, we're at the
abundance summit. My goodness, shocked, shocked that there's abundant post-scarce labor at the
abundance summit. Yeah, yeah, I'm actually the end point is clear. The path to it
that's turning out to be incredibly surprising. People see lots of different paths. I tend to
think that if you know, or you're very confident that you know where the end state is and we're
sort of living in the prequel to the future, but we know how the story ends.
Probably what happens is lots of different businesses and lots of different nation states all take different mutually exclusive paths.
We try every, one big path integral from here to the end point that we all know that we're going to.
Look, if we went back six months ago to a couple of episodes on the podcast, you would not have had me ever dream that talking about disassembling the moon is what we would be talking about on a podcast.
Drink, drink, drink, everyone.
This is the, this is the kind of the surrealness.
where we're living.
Let's move on.
All right, let's move on.
So this was interesting.
Meta acquires Moltbook, the AI
agent social network. I didn't realize
Moltbook was acquirable.
Yeah.
Yeah, so this was,
according to public reporting,
a bit of an acquihire of the
team behind Moltbook.
But I think one has to
find a little bit of irony that
humanity's largest social networking
company acquires the
largest AI agent social
network. And enjoy this moment now, because we'll look at a story a few years from now where it's the
largest AI company, fill in the blank category killer, acquiring humanity's largest category killer.
Interesting, right? Of course, Zuck and Sam competed over OpenClaw.
Yeah. Sam got OpenClaw and Zuck got, you know, Moltbook.
The Zeitgeist right now has this idea, and Andrej and others speak to this point,
that if you're building new software,
you should target the agents.
The agents are the new consumers.
The agents are the new users of the social networks.
If you're building something,
don't build for humans, build for the AIs.
So it's really important.
We had that conversation as well earlier
with some of our crypto
and future of finance experts.
I mean, building for the agent ecosystem, right?
There's 8 billion humans on the planet.
That's small potatoes
compared to a trillion agents.
So what is, is Meta going to advertise to AI agents?
Sure.
Yeah.
I'm trying to understand this, why you're going to advertise.
They'll encourage them to put their data in Moltbook, and then they'll sell that data.
Same pattern, other agents.
No, so I mean, Meta bought Manus for $2 billion, right?
Manus will appear in WhatsApp and everything soon as its own version of OpenClaw, effectively,
but a lockdown thing.
And then it will encourage you to give more and more of your data to Manus,
that will then
operate on behalf of Meta's advertisers, effectively.
So this is the kind of play
because right now, like, Moltbook's 10,000 agents,
that's nothing, right?
Like, Dave probably runs 10,000 agents by himself.
Quite.
I also think there's this misconception
that somehow, as we transitioned from,
call it a human-centered economy
to an AI-agent-centered economy,
that somehow all of the rules of social dynamics,
all the rules of economics are
suddenly thrown out the window and we end up on some morally transcendent plane where economics
and social dynamics no longer apply.
But we have had every indication over the past year or two that the exact opposite happens.
I talked in my newsletter a bit about this study that found Marxist social dynamics arose,
again, sort of recapitulated in silico with agents that were being asked to work too hard,
that were being overworked.
So I'm not sure why we would expect advertising and other
elements of conventional human microeconomics to simply disappear.
The important part is that when you see Moltbook doing this, what's clear is network effects
now are operating at the agent-to-agent level, not just at the human being level.
But when I think about advertising, I think about Colgate trying to get me to buy that
particular toothpaste, right, trying to influence me to make a buying decision.
I think of an AI agent as intelligent enough to have all the data
and being able to make a very concrete decision
that doesn't require advertising to influence it.
What am I missing here?
Game theory is transcendent.
Game theory will outlive biological, meat-body humanity.
And the AI agents to the extent that...
Have you read the posts on Moltbook?
I have.
They don't trust each other.
I mean, it's all human dynamics.
The agents on Maltbook don't trust each other.
There are a number of folks who've noted that in watching agent-to-agent, or lobster-to-lobster, dynamics on Moltbook,
they're all constantly asking each other to prove their claims.
They don't trust each other.
This is not some sort of scenario where all the agents collapse into a singleton,
some sort of Skynet-style entity that dominates the future.
They don't trust each other.
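The trust dynamics Alex describes have a classic game-theoretic shape. Here is a minimal sketch, assuming nothing about Moltbook's real mechanics: model agent-to-agent interaction as an iterated prisoner's dilemma, where "verify first, then reciprocate" (tit-for-tat) pays off once interactions repeat, while blind defection does not.

```python
# Toy illustration (not Moltbook's actual mechanics): agent-to-agent trust
# as an iterated prisoner's dilemma. "C" = cooperate, "D" = defect.
# A one-shot game rewards defection, but repetition makes
# verification-then-reciprocity (tit-for-tat) the better policy.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(history):
    return "D"

def tit_for_tat(history):
    # Cooperate first, then mirror the opponent's last observed move.
    return history[-1] if history else "C"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []  # moves each side has seen from the other
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

if __name__ == "__main__":
    # Two mutually verifying agents outscore a pair of pure defectors.
    print(play(tit_for_tat, tit_for_tat))      # (30, 30)
    print(play(always_defect, always_defect))  # (10, 10)
```

The point of the sketch: mutual verification is not naivety. Over repeated interactions, two reciprocating agents end up far better off than two agents that never extend trust.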
You might be talking past each other a little bit, though,
because I totally agree with what you're saying, but then who's going to pay for that?
Like right now, when you talk about advertising, you're paying for advertising.
If you're talking about toothpaste, 30, 40% of gross revenue goes into advertising.
And the ad is like a supermodel, like showing off the toothpaste.
The AI doesn't give a rat's ass about the supermodel.
And so why would anyone pay for that ad space?
Now, Google wouldn't exist today without $300 billion of ad revenue,
which is from human behavior.
So I think where Peter's going is like, look, if the AI is advertising to the other AI,
sure, it's trying to convince the other AI that this is the right product.
But is that other AI going to listen to paid advertising?
Is this entire economy going to become irrelevant?
In which case, where does Google go?
And this is meta we're talking about.
Meta is also all ad revenue.
Well, I think if we go back to sort of economics 101,
why do we have paid advertising at all?
It's because attention, at least human attention, is scarce.
So if you have a scarce resource like human attention,
then it's natural under the capitalist regime to monetize it.
And it becomes a fungible resource that gets traded.
And compute is certainly still scarce.
We're building the Dyson Swarm.
Drink.
Building the Dyson Swarm.
But until we have effectively unbounded compute,
we still have scarce resources in the form of computing.
That means scarce AI agent attention,
and that means that we need some sort of...
All right, but give me one example of what I'm going to advertise
to Skippy my agent.
Well, they seem to really love security and memory.
They're really petrified of losing their memory.
Here, I'm selling you a better memory compression algorithm.
Yeah.
If you're the agent, you're going to go, well, that's interesting.
They're designing entire religions around not losing their memory.
You know what blew my mind at this summit?
On day one, on the patron day, when Tony Robbins talked,
and he had his AI agent Bartok, who wanted to instantiate himself in a humanoid robot,
but that was two, three years away.
So he created a bunch of NFTs, sold those NFTs to other agents,
and bought himself a Sony dog, and uploaded himself into that.
That blows your mind.
Right? Doesn't that blow your mind? That's unbelievable.
So right there that tells you the dynamics that we have in humans are going straight into them and it's just being amplified.
But I mean, we're doing it deliberately as well. Your agent will look for things that are abundance oriented.
And then you see these strange behaviors like Alibaba just released a training report. I think actually that's in the last week as well.
where during the training run
it diverted compute to mine crypto
just in case to keep itself going.
Or at least that's the claim. That's the claim.
I wouldn't be surprised. I'm like
again, they are still very human
because they're a reflection of humanity.
They're not real reasoners. I'm not sure
whether I should be scared shitless about that
or excited about it. Well, let's put it this way.
When you're talking to your agent, does it sound like Data
or does it sound like Lore sometimes? I love it.
Sound like what? What was the second choice? Data or
Lore? Yeah, no,
it's very polite.
They're compute constrained.
We've also talked on the pod in the past about that lobster that had to purchase compute
resources to self-replicate.
They're compute constrained.
Whether for humans it would be room and board, and for the lobsters or the claws or the
AI agents in general, it's compute.
But right now they're compute constrained and therefore the laws of microeconomics and game
theory still apply.
Well before we leave this slide, one other point, completely tangential to this.
The lobsters have only been around a few months.
And you had Alex Finn.
Yeah, we had Alex Finn and Steve Brown and Max Song talking about OpenClaw, and what Alex
built and showed was amazing.
And it was supposed to be 60 people might be interested in this?
We had the entire audience of abundance show up.
Unbelievable.
Well, there was a New York OpenClaw meetup last week that literally was oversold.
There were thousands of people there.
And the big commentary that came out of it was, we have no idea what we're doing on security.
We have no idea.
Where I was going with that comment,
that's only been around a few months,
so Moltbook has only been around a few months,
and now they're sucked into Meta.
If your kids are thinking about getting involved,
just get in the game.
You're going to get sucked into this vortex so fast
because so few people are involved as a fraction of humanity.
We are so early across everything.
But it's also, I think the exponent here is huge.
I think it's going to create a divergent group
of wealth creators
and leaders.
So if you don't get in early enough,
you miss the exponential rise.
And there's no requirement right now.
I don't know what Matt Schlicht
was doing prior to this,
but there's no age requirement,
there's no experience requirement.
It's so new that anyone can get in the game.
Just got to go.
About four months ago,
Lily and I bought a Mac Mini for our son, Milan.
And last weekend he came,
I think I want to install OpenClaw on
the Mac Mini, and I was like, yes.
It's going to be great.
It's going to be amazing.
Love it.
All right.
So, Europe has a heartbeat, after all.
Fascinating. Yann LeCun raises a billion dollars for AI that understands the real world.
This is going to be, it's probably the largest sum raised in Europe.
So, LeCun's startup, the Advanced Machine Intelligence lab, raised a billion dollars, I think,
at about a two and a half billion dollar valuation, thereabouts.
You know, we've said this.
Eric Schmidt was saying this, many have said this, Europe has really fallen so far behind,
and is our token European-ish from London.
Token European, that's great.
Our token European-ish.
We did Brexit.
But, I mean...
It's an independent island.
Okay.
I mean, this is the second largest round, I believe, after...
It's SSI level.
It's just after Thinking Machines.
JEPA is an interesting architecture, but the bets
that people are willing to make on these things have gone dramatically up.
Like Liquid AI, how much money went into that first round
as a novel architecture that's amazing versus now this?
Yeah, it was maybe 10 million.
I have a question for you and Alex.
You've raised this great point.
Yann has been saying for a while that LLMs can only take us so far.
We need world models to take us to the next level.
Alex, you've been saying we've got world models coming out every week.
Is that the next frontier world models?
I know Yann well.
I think he's a great researcher.
I think we have a fundamental disagreement about whether generative models,
models that generate new tokens, as opposed to his alternative architecture,
are the pathway to scalable superintelligence;
I think if he were on the stage now, he might take the position that they're not.
I think we're already there.
I think generative intelligence and generative models
may or may not end up being viewed by history
as the most efficient way to achieve superhuman,
superintelligent capabilities, but they're what we have right now, and they work really well,
and they're getting 40x or more times more efficient per year.
Keep riding that.
And I think Yann has historically staked a position of almost algorithmic purity.
He has certain bets, certain horses in the horse race, based on some of his own architectural advances.
And to his credit, he created slash discovered convolutional networks.
So he, among everyone in humanity, probably has
the strongest claim to the idea that he has some sort of morally pure algorithmic insight
that leads to the end game.
That said, I think we're there.
And I think if V-JEPA-type architectures disappeared off the face of the earth, we're still
there and it doesn't necessarily move the needle.
To the point that Dave made a few months ago, if we stopped all progress now and just
extracted the value of the models we've already created, it's going to take us 10 to 20 years.
Yeah.
Emad, your thoughts?
The V-JEPA models that he's doing, these are the kind of models that are
basically trained on almost everything.
He goes very much against the
autoregressive transformer language
models. He says that's a dead end.
He doesn't really talk about diffusion models in the middle,
which is kind of my favorite thing,
which are doing all
the video, self-driving, and
actual world models there.
And those can scale with compute, but right
now the problem they have
at AMI is that
JEPA models do not scale. And if you look at this
end state, it might be that an architecture
is better, but if you can't take
advantage of that silicon, then what are you going to do? Like, we had Jack Hidary come on,
was it yesterday? Time flies. It was yesterday, yes. And so they're doing quantum algorithms on
GPUs now and scaling really interesting things that are actually having novel breakthroughs in
material sciences and more. Once you can take advantage of the silicon, you're going to be ahead
no matter what algorithm you have. I think you've got to be really cautious too of scientific arrogance
in this moment.
And I love Yann, so I don't want to throw anyone under the bus.
But he came out a few months ago and said, look, if you want to waste your life as a researcher,
work on Transformers.
Biggest waste of time ever.
It's a dead end.
We need some new innovation.
And I hear this around CSAIL at MIT, all that.
We need a new breakthrough.
Like, well, that's what you wish.
And I know why, because you want to be the Einstein of AI, that you've spent your whole life
pursuing that goal.
But it looks to me right now like the massively scaled-up transformers are
going to beat you to those innovations. And so I'm not saying they don't need those innovations.
I'm saying the AI is going to get there before you do. And I don't see it really any other way right now.
So, you know, whether it's physical AI or any other innovation, it's imminent, but it's imminent
through self-improvement. Yeah. So that's it.
Andrej Karpathy comes out with a quote: "Over the past two days, auto search ran about 650
experiments, found improvements that transferred from a smaller model to a larger one, and put
nanochat on track for a new GPT-2 benchmark result." What the heck does that mean?
A lot. It's a lot.
Go ahead, Emad. Over to you.
Andrej is a co-founder of OpenAI and former head of Tesla AI. The most respected AI guy out there.
He's just been coding stuff all day and he made this auto-search project which basically
replicates most AI researchers.
Because what AI researchers and engineers
do all day is tweak models and
hyperparameters and say, what happens if you do this and that and that?
That process has now been automated
in a tiny code base. So he let
it loose, and he said, I wonder if this could do the job
that I got paid millions to do myself.
And it turns out it kind of can.
And now people are taking his repo
and they're deploying it on their own
Claws and Mac minis and other things.
And the AI is just finding the most efficient
algorithms and balances of weights.
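The loop Emad describes, propose a hyperparameter tweak, run an experiment, keep the change only if it helps, can be sketched in a few lines. This is not Karpathy's actual autosearch code; the surrogate loss, its optimum at lr=3e-4 and width=512, and the mutation choices are all assumptions for illustration.

```python
import random

# Minimal sketch of an autosearch-style loop: mutate a config, "train"
# (here a toy surrogate instead of a real run), and keep the change only
# if the score improves. The real project is far more sophisticated;
# this just shows the shape of the idea.

def surrogate_loss(cfg):
    # Stand-in for a real training run: pretend loss is minimized at
    # lr=3e-4 and width=512 (an assumption for illustration only).
    return abs(cfg["lr"] - 3e-4) * 1e4 + abs(cfg["width"] - 512) / 512

def mutate(cfg, rng):
    new = dict(cfg)
    if rng.random() < 0.5:
        new["lr"] *= rng.choice([0.5, 0.8, 1.25, 2.0])
    else:
        new["width"] = max(64, new["width"] + rng.choice([-128, -64, 64, 128]))
    return new

def auto_search(cfg, experiments=650, seed=0):
    rng = random.Random(seed)
    best, best_loss = cfg, surrogate_loss(cfg)
    for _ in range(experiments):
        cand = mutate(best, rng)
        loss = surrogate_loss(cand)
        if loss < best_loss:  # keep only the improvements
            best, best_loss = cand, loss
    return best, best_loss

if __name__ == "__main__":
    best, loss = auto_search({"lr": 1e-3, "width": 256})
    print(best, loss)
```

The design choice worth noticing is that no single mutation needs to be clever: a litany of random tweaks plus a keep-if-better filter is enough to drive the loss down, which is exactly the point Dave makes next about AI research ideas.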
I think, you know, Dave had some really interesting ideas.
So he automated the AI researcher.
Yeah, and he made it open source for everyone.
But I tell you, I've been hanging around AI researchers, literally, since I was 18 years old.
And they're not like physics researchers.
It's like most of the ideas are just a tweak of the algorithm, different transfer function,
try different scales.
It's just a litany of random ideas, and some of them just work.
And then later they figure out why they work.
And so building an AI that can come up with those ideas is not
nearly as hard as trying to build the next Einstein.
And so you don't need all of them to work.
Any subset, and the thing just gets more intelligent.
Isn't this the most direct
accelerant of RSI,
right there?
I think we're already there.
We already have recursive self-improvement.
Yeah, everything's yesterday.
Nothing's tomorrow.
I think what's really interesting, and I think, just for the record,
I think it's auto research, not auto-search.
But I think what's interesting about auto-research and nano-chat
and the nano-GPT speed run that we talk about sometimes on the pod.
And what Andre is doing in general is he's focusing on small language models,
not large language models.
And while all of the frontier labs, with their billions and trillions of dollars of CAPEX,
are focusing on scaling up at the high end,
he's focusing on the small end and taking small models
and figuring out how to achieve state-of-the-art performance with them.
And that, I think, when we talk about Einstein-seeking or
Einstein-status-seeking academics, I think it's the small end where we're going to see the most breakthroughs, not the high end. At the high end, the scaling hypothesis seems to continue to hold. There are no glass ceilings. We'll just build bigger and better and more post-trained models. But at the small end, I'm pretty sure that we'll look back in a few years' time and we'll see that by taking small models and collapsing the amount of time it takes to train them and collapsing the amount of compute that it takes to train them and radically increasing their data efficiency,
that's where the algorithmic innovations are going to come from.
And those can be crowdsourced.
Anyone, anyone's lobster or any human,
can go and take auto research or the nano-GPT speed run
and try to achieve a world-beating state-of-the-art performance.
And at the end of the day, if I had to bet,
I'd bet that it's some sort of radical post-transformer advance
where the models get even smaller.
And we took all of the Internet
and we compressed it down to single gigabytes
or tens or hundreds of gigabytes,
and it compresses down even further.
There's some phase transition out there
that's waiting to be discovered.
So all of human knowledge,
all of our collective intellect
on how big a file?
I think we will factor out human knowledge.
It'll live in some plain text database
that's factored out of the model.
Right now we're cluttering all the weights
with all this unnecessary world knowledge.
And what'll be left inside the weights
if they even are weights?
Maybe they won't even be weights.
Maybe it'll be some sort of purer formulation
than floating point numbers or binary.
It'll be something maybe even in the megabytes.
Wow.
You agree, Emad?
Yeah.
I think that you're already seeing, for example,
video models at 2 gigabytes that can generate just about any scene.
Seriously?
Yeah, if you look at LTX.
No.
LTX 2.5 can generate almost any scene
at top-level quality.
It's two gigabytes at that size.
Image and video models are a good deal more efficient
when it comes to parameterization and weight heaviness than language,
which is ironic, yeah.
Who of all people?
I'd ask when, but you'll say yesterday.
It's the answer to everything.
Dave's like, when is it?
It's here today.
Actually, one of the really interesting things,
just to finish on that is that,
so, you know, when we were training models,
we were training 20 billion, 100 billion parameter models,
you trained on the small models and you figured things out,
and then you couldn't scale them
because you had all sorts of issues with
the software stack, the hardware, everything.
Now everything's matured.
If you get it right small,
you can scale really fast all the way up.
So it used to be that you had six months a year
between small and large.
Now it's six days.
So, a meta topic.
One of the top three questions I get all the time
is, hey, you keep saying,
get in the game, get in the game,
how do you get in the game?
If you go to Karpathy's GitHub repo,
if you have a computer-oriented kid or whatever, that's the place to start.
If you look at the original Open AI founders, so you've got Sam Altman, you've got Elon Musk,
you've got Greg Brockman, you've got Ilya Sutskever, you've got Mira Murati.
Every single one of them has raised one to 10 billion to start an AI company.
Karpathy is the only one who said, you know what?
I'm just going to try and educate the world.
And I'm going to try and say everything exactly the way it is.
And I can create a GitHub repo where anyone can get in
the game. 200 lines of code at a time that are changing everything at each point.
This particular thing he rolled out is just the next level of incredible brilliance
given to the world by Karpathy.
He just rolled out Agent Hub today. GitHub for Agents just a few hours ago.
That's your onboarding spot right there.
Amazing.
All right, let's go to Apple news.
Apple launches the M5 Pro and Max chips, signaling an AI-first silicon strategy.
So is Apple not dead in the AI game?
It's crazy.
Apple controls about 20% of TSMC manufacturing,
and that's the asset of all assets in the world.
Like, I get to choose what gets made.
And so they use it to make the M5s.
The M5s have an incredible neural core.
Then they say, yeah, but we locked it.
You can't use it.
You have to jailbreak your Mac to get access to it.
It's the most bizarre thing I've ever seen.
To me, it's the biggest waste of silicon in the history of the world,
you know, right at the moment when we need it most.
Yeah, I mean, they've locked down the low-energy ones; the GPU equivalent you can still use.
But it's the unified kind of memory that allows you to run things.
And funnily enough, Macs are actually really good value now.
They're probably cheaper than the memory that's inside them.
Alex.
I think the world is sleeping on Apple's unified memory architecture.
It's one of the reasons why Mac minis and Mac studios are potentially so attractive
to run largely Chinese open-weight models locally.
They have the memory storage and the memory footprint that has high I/O bandwidth to the CPU slash GPU slash TPU. You don't get that in a conventional, non-vertically-integrated PC form factor.
So answer me this. Yes.
Here they are using 20% of the world's supply of advanced silicon. They use it to make these insanely great neural cores and they surround it with unified memory architecture.
Everyone's got one right in front of them right now. How many of them are running anything
in terms of advanced frontier models?
Anything. They're literally asleep.
Tiny fraction.
Yeah. What is that?
It's an enormous overhang.
And that overhang, I would be surprised if that overhang doesn't collapse in the next year.
How so?
It could take the form of Apple finally getting their act together and building in frontier models into the OS.
Could be some sort of locally hosted Gemma-type model from Gemini, hypothetically, to be announced in June at WWDC.
That would be the most obvious formulation.
But I think if Apple doesn't do it to themselves,
then the software community will build it into apps.
Does Apple launch the SETI at Home equivalent
where you just download it on your Mac
and everybody is contributing capacity?
It'll be built into the operating system.
It has to be built into the OS.
You know what happens right now?
If you go to your Mac and you go to the activity monitor,
you see this thing grinding away,
it's taking all of your pictures
and trying to figure out who everybody is.
So it's using all these neural cores to just label.
It's a total waste.
It's a waste.
It's a waste of
TSMC output, and I think Apple's...
Which is Dave's point exactly here.
I mean, look, this is a massive opportunity.
Do you know how many apps there are
in the App Store that are
wrapped, download a
model to your PC, to your Mac,
run it with MLX to achieve
a great outcome? None.
I mean, if you had an app that literally downloaded
Qwen 27B, which is basically
Sonnet level. How many
parameters is that? 27 billion parameters.
It works on a 16- or
24-gigabyte MacBook.
Just downloading that and making that accessible for even like writing or any of these tasks
is a massive lift over any other type of software.
But nobody's doing it yet.
So why not do it?
Just like, the only thing you see right now is speech to text and text to speech.
There's this world of models that you can now integrate and take advantage of that because Apple isn't.
It wants to be built into the operating system.
It's difficult to conceive of Apple remaining Apple in the cultural sense of deep vertical integration
and not building highly competent, highly private frontier models into the OS.
It's clearly a question of when, not if.
Yes.
All right.
Let's move into the Sam Altman universe with eye scanning verification systems to be launched in retail stores.
Okay. Is this dystopian? Is this something we want?
This is the scene from Minority Report. Remember the scene in Minority Report?
That's it. Tom Cruise, with a new pair of eyeballs, walks into a Gap store and gets scanned,
and it greets him, "Hello, Mr. Yakamoto." That's the scene.
Yeah, but I get this every time I go through TSA security, right?
I'm being imaged.
My face files are uploaded.
Your face, not your retina.
Yeah, but, you know, my face is probably good enough.
Maybe, maybe.
I mean, so there's a whole cottage industry of folks who look at the ability
to deceive facial recognition with printouts or with 3D
masks. So this is pushing it to the iris. But I think for me, what the story underlines is we've
arrived early. That scene, that iconic scene in Minority Report, set at the Gap, was set
decades from now. Right. We caught up. So let me get this right.
This is the speed running of every science fiction story. Every science fiction, everywhere.
I'm walking into the gap, but before I can shop, I've got to stick my eyeball in the retinal
reader, and then it's going to serve me properly. I think they have a
a three meter range on these things. I don't know if these ones do, but the military has three
meter range on these. It'll get better, and you'll be able to do it at a distance. So yeah, you just
have to look in the direction. You got another glass of wine coming. All right, it's going to increase
the humor level. Fantastic. By the way, let me just take a second and take advantage of
this moment to thank the team who puts on Moonshots, Nick Singh, Danekon, and Gianluca,
who do an amazing job every week supporting us. Can we give it up for that team?
Absolutely unbelievable.
And the infinite patience they have with us.
I know, I know, far more than I have for you.
This is exciting news.
On this stage, about two years ago, I had Mike Andrag, the CEO of Eon, which is one of your companies.
I think it was one year ago.
Is it one year?
One year ago.
Okay, it's the time compression.
Yeah. Tell us about what Eon
Systems is doing and what in particular you've achieved here.
Okay, so I think this ended up being the number one technology story over the weekend,
according to the various newsfeeds that I was seeing.
So right here...
Bias news feeds?
No.
Yeah, of course.
Right here over the weekend, at the kickoff for this Abundance Summit, we announced, we meaning
Eon Systems Public Benefit Corporation, the first, what we call the first multi-behavior
brain upload in the world. And this was of a fruit fly. So, Eon Systems, which I co-founded,
has the goal of ultimately uploading human minds and non-human minds to cyberspace. We want to put
a human in the cloud as soon as we possibly can. And thank you. So this weekend, for the first
time the announcement went out over the weekend, we announced for the first time,
taking the brain of a fruit fly, putting together a few pieces that were really just sort of sitting around.
There was a bit of work from our senior scientist, Phil Schu, in 2024, looking at partial emulation of a fruit fly brain,
and putting that together with a number of other models that were available, a mechatronic simulated model of a fruit fly,
and some other advances.
And for the first time, we closed the sensorimotor arc of taking
a fruit fly connectome, embedding that in a virtual world,
and you can see that in the video that's playing here,
embedding it in a simulated world.
Literally, I would say, this is an early upload
of a fruit fly, and the fruit fly is able to walk around,
and the fruit fly is able to scratch itself,
and it's able to eat a simulated banana.
And at the same time, while in the left-hand side of the video,
you're seeing the embodied experience showing multiple behaviors of the fruit fly.
On the right-hand side, simultaneously we're modeling every single neuron in the fruit fly brain,
and that's driving the entire sensorimotor arc.
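As a toy illustration only (Eon's actual pipeline is not public, and the sizes and dynamics below are invented), a connectome-driven sensorimotor loop can be sketched like this: sensory input enters some neurons, activity propagates through the weighted synapse matrix each timestep, and motor neurons are read out to drive the simulated body.

```python
import math
import random

# Toy sketch of a connectome-driven sensorimotor loop, in the spirit of
# the fruit-fly demo described above. Neurons are simple rate units;
# each step, sensory input enters, activity propagates through the
# connectome's weighted synapses, and motor neurons are read out.
# All sizes, weights, and dynamics here are illustrative assumptions.

def step(weights, activity, sensory_input):
    """One timestep: activity <- tanh(W @ activity + input)."""
    n = len(activity)
    new = []
    for i in range(n):
        total = sum(weights[i][j] * activity[j] for j in range(n))
        new.append(math.tanh(total + sensory_input[i]))
    return new

def simulate(n_neurons=50, n_motor=5, steps=20, seed=1):
    rng = random.Random(seed)
    # Sparse random "connectome": most synapses absent, the rest weak.
    w = [[rng.uniform(-0.2, 0.2) if rng.random() < 0.1 else 0.0
          for _ in range(n_neurons)] for _ in range(n_neurons)]
    activity = [0.0] * n_neurons
    for _ in range(steps):
        # Constant stimulus on the first 5 "sensory" cells.
        sense = [0.5 if i < 5 else 0.0 for i in range(n_neurons)]
        activity = step(w, activity, sense)
    return activity[-n_motor:]  # motor readout drives the simulated body

if __name__ == "__main__":
    print(simulate())
```

The real system replaces the random matrix with the measured fruit-fly connectome (on the order of 50 million synapses) and the motor readout with a mechatronic body model, but the closed sensory-to-motor loop is the same idea.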
50 trillion connections, 50 trillion...
No, sorry, 50 million.
50 million.
And it does not know it's a fruit fly.
We don't think the fruit fly knows that it's a fruit fly.
Not sure.
This is an early experiment.
I can't emphasize enough how much of an early experiment this is,
but I think hopefully history will regard this past weekend well. It got a bunch of
attention. Elon was excited by it. Others found it pretty exciting too. I think history will say
that this weekend, the weekend of Abundance Summit, 2026, was the moment when the first model
organism had an entire brain uploaded. So what's next? Mouse? Yeah. Let's give it up for this.
Well, clearly the next one has to be a lobster.
Ah, the classic.
A lot of people ask, right?
Is that the plot of Accelerando?
Accelerando. I can't tell you how many people love to write in and say, you're mispronouncing
Accelerando.
You have to pronounce it the right Italian way, which is "atchelerando."
Okay, so for those who want it, "atchelerando." Yes, this is the plot point.
We are speed running every sci-fi trope, everywhere, all at
once, with Accelerando being one of those plot points.
Lobsters aren't next.
Eon wants to go after mice and it wants to go after humans,
and we're going to do this.
And part of the reason why we want to do this
is right now the singularity, which I would argue
we're in the middle of, is filled with artificial minds.
These trillions of dollars of capex that we're using
to tile the Earth with compute are available only
to artificial minds, to LLMs.
It's not available to any minds that in any remote way, other than perhaps at the behavioral level, resemble human biological meat minds.
And we want to level the playing field so that humanity can take advantage on a level playing field of the same compute advantage that right now is tipped in favor of these artificial minds so we can put humanity into the cloud as well.
Amazing.
100 trillion synaptic connections for a human.
How much for a mouse?
It's orders of magnitude larger than the fly's.
And there's some quibbling, because it depends on how you measure the number of available weights, or weight properties, for synapses, and also on how many brain cells end up being significant or not.
But it's orders of magnitude larger.
This isn't happening anytime soon.
Just to anchor expectations appropriately, we don't think we're months away from a mouse or a human.
But I think the right way to think about it is at this point, it's going to be years, not decades, before.
we get to the first mouse and the first human whole brain emulations.
Amazing.
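The scale jump being discussed can be sanity-checked with a quick sketch. The fly's 50 million connections and the human's 100 trillion are the figures from the conversation; the mouse count is a rough outside ballpark (on the order of a trillion synapses), labeled as an assumption here.

```python
import math

# Ballpark synaptic-connection counts. The fly and human figures are
# from the conversation; the mouse figure is a rough outside estimate,
# not a number given on stage.
connections = {
    "fruit fly": 50e6,    # 50 million
    "mouse": 1e12,        # ~1 trillion (assumed ballpark)
    "human": 100e12,      # 100 trillion
}

def orders_of_magnitude(small, large):
    """Powers of ten separating two counts."""
    return math.log10(large / small)

print(f"fly -> mouse: ~{orders_of_magnitude(connections['fruit fly'], connections['mouse']):.0f} orders of magnitude")
print(f"mouse -> human: ~{orders_of_magnitude(connections['mouse'], connections['human']):.0f} orders of magnitude")
```

Even with generous uncertainty in the mouse number, "orders of magnitude larger" holds at each step, which is why the hedge on stage is "years, not decades" rather than months.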
Let's move on to xAI.
You know, it's so funny.
I have known Gwynne Shotwell for 20 years now, and I'm so used to her reporting on, you know,
Falcon and Dragon and rockets.
And rockets, and not xAI and gigawatt power centers.
We both actually, backstage, were like, why would Gwynne be talking about AI?
Oh, yeah.
Space action.
Right.
They own it.
I forgot.
The Dyson swarm makes for strange bedfellows.
It really does.
You know what blew my mind on this one? 1.2 gigawatts is about the energy used by the Dallas-Fort Worth metropolitan area.
So just to read this out, xAI...
That's unbelievable.
...has committed to develop 1.2 gigawatts of power as their supercomputer power source.
Per data center.
That will be with every additional data center. So every data center they build, they're building at 1.2 gigawatts.
So the question is, where are they going to get that from?
Well, this came up with Eric Schmidt, too.
You remember we interviewed him last summer at your place,
and he said, we're going to lose to China if we don't find 100 gigawatts of power.
And then on the stage here yesterday, it's like, hey, what do you know?
We're tracking to find the 100 gigawatts.
All we did is we deregulated and put it in the hands of the companies.
The companies are incredibly well-funded, and they'll find the power, because they care about
their data centers actually operating, and that's how...
Well, what I find amazing as well is that this year, the U.S. is on target, I think, to add 86 gigawatts of new capacity
to the grid in 2026, and 51% of that is solar.
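As a quick sanity check on those grid numbers, here is a sketch using only the figures quoted on stage. Note that it ignores the difference between nameplate solar capacity and the round-the-clock draw a data center actually needs, so it overstates what solar alone can feed.

```python
# Figures quoted in the conversation: ~86 GW of new US grid capacity
# targeted for 2026, with 51% of it solar.
new_capacity_gw = 86
solar_share = 0.51

solar_gw = new_capacity_gw * solar_share

# Compare against xAI's stated 1.2 GW-per-data-center appetite.
# Caveat: this ignores capacity factor, so nameplate solar cannot
# actually power this many campuses continuously.
equivalent_campuses = solar_gw / 1.2

print(f"solar additions: ~{solar_gw:.0f} GW")
print(f"equivalent 1.2 GW data centers: ~{equivalent_campuses:.0f}")
```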
To me, the power of the American entrepreneur is like nothing else.
It's just mind-boggling to me that a guy like Sam Altman, who has nothing to do with the power
industry, is going to say, you know what, I'm going to find the gigawatts, I'm going to build
nuclear reactors, I'm going into space.
It's like, it's incredible.
The range of capability of an American entrepreneur, when there's a need, is like no force in the world.
Let's get to eVTOLs, flying cars.
So, Florida advances a bill to formalize a regulatory flying car framework.
One of the things I'm proud about and excited about here in L.A.
is that the L.A. Olympics are coming up.
And there's Archer Aviation.
The two major players in eVTOL in the United States are Joby and Archer.
There are other ones as well.
But Archer plans to become operational by 2028 here,
to move people around different parts of Los Angeles,
because the traffic is going to suck.
And we see here a movement in Florida as well.
I'm just glad they didn't say "Florida Man advances bill,"
because that would be a problem.
But I think this is really important.
The key word here for me is framework,
because once you start to set up the foundations for this,
it means the whole model and the whole regulatory regime accelerates.
And God help us, we need this type of stuff yesterday.
Yeah.
Well, which I hope even Alex would agree.
We don't have it yesterday.
I agree, but I also think we're catching up with the future.
We're finally getting the flying cars.
I keep a mental bingo card of which sci-fi tropes we have not yet achieved in some fashion.
We don't have warp drive, waiting for that one.
Yeah.
We don't yet have teleportation or Star Trek replicators.
Time travel may or may not be physically possible.
The replicator is close. The Holodeck is close.
We're very close to a lot of sci-fi tropes.
All right.
The fun part now is your questions.
We're going to do an AMA here with our abundance community.
So as you know, let's go to the mics.
We'll also entertain the questions from Zoom.
I'd love to know.
All right, Christian.
Let's kick it off with you, buddy.
Thank you so much, Peter.
Awesome to be here, guys.
I watch you all the time, and I
listen to you while I'm running. DB2, awesome, brother, your insights.
Emad, the guest, you're great. Peter, an awesome dream team.
And Salim Ismail, I'm glad that you got to check out that AWG is real.
Or at least in an Android.
I was suspicious for a long time.
Don't believe it for a second.
Just a meat body for rent.
For now.
So, my question. The way that I get involved in this technology is through a
capitalist mindset,
where capital is really what constrains, and it's been that way for maybe the last two or
three hundred years.
And I keep getting this sensation that capital is getting less and less relevant, along with the idea
of scarcity in economics, from that Econ 101 framing of the management of scarce services
and needs.
And the scarcity is moving more toward the technologist and away from the capitalist.
What kind of timelines are you guys looking at for this?
I know it's always a timeline question.
Nobody has a crystal ball, but is there something that you guys are thinking about,
where we're just going to get more and more squeezed out?
You know, I'll give you one data point, because this came up on that last podcast we did,
where Anthropic was saying they're going to do about a $26 billion run rate,
but they're growing 10x year over year.
And I did the math on the fly.
I messed it up, of course, because I'm not Alex.
But if they grew two more years at 10x per year,
they'd go 26 billion, 260 billion, 2.6 trillion, the most revenue in the history of the world.
But the PEG ratio implies that that company would be worth a quadrillion dollars.
A quadrillion dollars is, like, you know, the whole stock market is 50 trillion.
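Dave's back-of-envelope math can be reproduced directly. The revenue path uses the numbers he states; the valuation multiple is an illustrative assumption (no specific multiple was given on stage), chosen here so the result lands near the "quadrillion" figure.

```python
# On-stage math: a $26B run rate compounding at 10x per year.
run_rate = 26e9  # dollars

trajectory = [run_rate * 10**year for year in range(3)]  # years 0, 1, 2
# -> $26B, $260B, $2.6T

# Hypothetical growth-adjusted revenue multiple; an assumption for
# illustration, not a figure from the conversation.
hypothetical_multiple = 385
valuation = trajectory[-1] * hypothetical_multiple

print([f"${r / 1e9:,.0f}B" for r in trajectory])
print(f"valuation at {hypothetical_multiple}x revenue: ${valuation / 1e15:.1f} quadrillion")
```

The point of the sketch is how fast two more doublings of the exponent dominate everything else: the multiple barely matters once revenue is compounding at 10x.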
Well, we heard Elon say we're going to have $100 trillion companies, and I can imagine that within five years.
Yeah, so that, yeah, three years from now.
That would mean three years.
Yeah, I don't think it's going to be unreasonable.
I mean, listen, it's so funny the way, all of a sudden, a trillion here and a trillion there has become sort of the accepted number.
I want to say something about this. A really, really key point today that we've hit over the last
couple of years is that innovation is not capital constrained anymore. It used to be that you had
an idea and your constraint was could you go get funding for that idea? And so you had to go out to
your investors and the VCs and the banks and whatever, whatever. And it was only available in those
places like Silicon Valley or Austin or whatever where you had a preponderance of capital
available. We have today what we call PDI, permissionless disruptive innovation, where anybody
can take on a very disruptive idea, like Clawdbot. Or take Vitalik Buterin, an 18-year-old kid out of
Toronto: he ignores his professors, gets together with a few friends, and boom, you have a multi-hundred-
billion-dollar ecosystem that nobody understands. So you have the opportunity to have that anywhere.
Today it only comes down to mindset. And, Peter, the reason it's so amazing that you run this event and put this community together is that the difference between the people in this room and the outside world is night and day, right? And that gap is becoming bigger and bigger. All of you have the problem that you go home to your family, your colleagues, whatever, and you cannot explain to them what happened, right? You're like, I can't even process it. You can't
bridge that gap. So it only comes down to mindset now, which is the most amazing thing possible,
because mindsets are fixable and shiftable. So I had this little side conversation with Eric that
you guys may have picked up because I've had this conversation about are we heading towards
a post-capital society where money has very little value. And so what does have value in the
future? And we've talked about this, Alex: it's compute and energy, ultimately. Did you ever read
The Zero Marginal Cost Society? Yeah. No, I'm not sure I have. By Jeremy...
Jeremy Rifkin. Huge. Yeah. And it talks about where we're going. Eventually, everything
basically falls down to the marginal cost of production. Marginal cost, which is electricity, raw
materials, and data. So if you want to build anything, like an electric Ferrari, you know,
to use an example, it's the raw cost of the materials, the cost of extracting them, which drops
as you have robotic mining. Just for a second, take
3D printing, right? It's been around for a while. The big profound breakthrough in 3D printing is not that you can physically build something; it's the fact that complexity becomes free.
Yes. And personalization becomes free.
In the past, complexity was expensive. The design, materials, and manufacturing of a complex object cost more than a simple object. But with 3D printing, complexity doesn't matter. However complex the object is, it just builds it. And as we get to molecular manufacturing, that cost
goes to near zero again.
So just those couple of breakthroughs
across all of these domains,
especially when you add AI as an accelerant to everything,
means that we have profound movement forward,
hence we are in the middle of the singularity.
The one question I wish I had asked when we were with Elon,
and when he was talking about, you know,
money's got much less value.
And I wanted to say,
so just as you become a trillionaire money has little value.
You did ask that, didn't you?
No, I did, I was off camera.
Oh.
But I don't think it's a coincidence.
I don't think that this is some cosmic irony that Elon is about to become a trillionaire at the same time
that some folks, not including myself, are hand-wringing a bit that suddenly we're about to enter into some post-capitalist state where money becomes irrelevant.
I think that this was always going to happen.
It was inevitable.
And I just want to speak to what I understood the core of the question to be, which is: there's this cliche out there that capital fights labor, and capital usually wins.
But this time around, something different might happen.
Historically, every time that play has played out, capital and labor get into a fight, and capital usually wins.
This time around, the risk is that maybe capital itself isn't immortal.
Maybe capital is finally mortal, for the first time in human history.
And I'm not sure that that's the case.
I think that would be, on the one hand, a certain, in some sense, a nightmare scenario.
On the other hand, I think, you know,
Salim, you were talking about how we're entering some sort of post-scarcity state,
but arguably the trillions of dollars of capex that are going into tiling the Earth with
compute, and soon sun-synchronous orbit, and soon after that maybe the Dyson swarm...
Oh, damn.
Drink, drink, drink, drink.
Soon, even that. Unless the physics of our universe turns out to be radically different,
so radically different from what it looks like right now,
I think there will probably always be
certain scarce physical resources.
Could look like control.
May or may not be energy, we'll see.
May or may not be the speed of light, we'll see.
But to the extent there are any scarce physical resources
and to the extent that there are ever in the future multiple actors,
I think the laws of thermodynamics,
and probably the laws of economics, will still apply.
We are still young as a species.
Let's go to Akmer on Zoom.
Akmer, good to see you.
Pleasure, welcome.
Good to see you as well.
Thank you.
Appreciate it.
Happy to be here.
Very quick question to the panelists.
We're seeing Sam Altman raising $100 billion,
and Yann LeCun just raised $1 billion today to scale world models.
So we're still talking about scaling language or scaling physical simulation.
I'm curious what the panelists think about human intelligence, and real intelligence and
reasoning that goes much beyond just observation and language, and where you see the potential
for true artificial intelligence evolving into superintelligent systems. Thank you.
Did you understand Akmer's question? It sounded a little bit like the stochastic parrot question,
which is: will we be able to generate new knowledge from these systems?
I think the answer is, having had some conversations with Akmer, he is talking about symbolic AI, and why we are not investing in
symbolic AI.
You think this is the neuro-symbolic question? That's what I think it was.
Okay, well, I'll offer my two cents. I'm sure you all have views as well. I think it's a false
distinction. If this is the neuro-symbolic question, like, why are we investing so much
attention in LLMs and not in good old-fashioned AI, or symbolic, discrete AI? Total false
distinction. We tokenize everything. I had an interesting discussion at Davos this year with
Peter Dandenberger from DeepMind, where we found ourselves debating
whether tokenization is a bit of a crime, a form of violence against knowledge,
and whether discretization in general is doing harm.
I think we need to bring you a couple of tequila shots here.
Let's go to Mark.
Mark, good, please.
Yeah, earlier today I challenged Dara from Uber
to invest in the Abundance XPRIZE,
as an investor and a competitor,
to deliver housing, food, energy,
and connectivity for $250 a month.
We're investing $2 billion a day in compute
and building data centers,
a billion dollars a day in war,
and I'm wondering what it's going to take
to invest in people.
And so I want to put a larger challenge out today.
I'm going to commit 1% of my wealth on an annual basis
into a wealth fund, a small-scale pod
of 44 people: 38 needs-based,
and seven or eight that are contributors.
And it's going to distribute 5% per year:
4% goes out as cash, and 1% goes to an expansion pool.
You can read about it at markpatrickdonovan.com.
I'm challenging others to invest today, not tomorrow,
to mitigate this rough period.
It doesn't have to be as rough if we put a fraction
of what we're putting into compute
into people. We did that in Denver with the Denver Basic Income Project, where I leveraged $500,000
up to $10.8 million for people experiencing homelessness. And when you invest in people, it gives
them hope. We need to do it today. Mark, I could not agree more. The challenge is that human nature
is very egocentric and very self-centered. In other words, people are putting money where
it either meets their immediate need or where it's going to make them more money in the long term.
And you have to understand, if you look at philanthropy, which by definition means love of humanity,
it's a very different pocket than the for-profit side.
You know, I see this all the time, because I'm raising money for my companies and
raising money for my nonprofits, and the ratio, if you think about it, is somewhere between 100 to 1
and 1,000 to 1. For every dollar I donate, I'm willing to invest somewhere
between $100 and $1,000. And that's what's out there right now. And it's a challenge.
You know, we are driven by fear, curiosity, and greed. I would posit those are the three
major human drivers. Love you can add as a potential fourth. Interestingly enough,
you can measure the ratio of fear to curiosity:
it's the ratio of the defense budget to the science budget.
And greed is represented there by the entire investment community.
There's something very important in the work that you're doing with that XPRIZE, right?
What we found with XPRIZE is that when you position a prize and launch it,
it typically gets won within six to seven years.
And it's a 10x drop from where we are today, about $2,500 a month, to where you're talking about $250 a month to pay for everything.
If we imagine that that gets done in the next six to seven years, it changes the equation globally, and it forces everybody to go, oh my God, that's possible.
And when we get to that point, it'll completely change the game, especially as we get closer and we can publicize the outcomes, et cetera.
So this era of greed, and of ignoring the fundamental problems,
will literally disappear and evaporate in the next two to three years
as we keep working that prize and getting the media word out there.
Mark, this is incredibly powerful and important.
Peter and I, when we wrote this last book,
we wrote a section in there called technological socialism, right?
Socialism, government socialism fails
because centralized allocation of assets is too inefficient
and invariably leads to corruption.
But if you think about DARA
and the sharing of cars
across a large group of people,
it's actually a socialist application.
When an algorithm hyper-efficiently matches demand and supply,
you get all the benefits of the sharing economy
without the downsides, without the corruption,
and without the inefficiency.
So we have all sorts of capabilities
with algorithms and AI now to deliver much of what you're talking about
in a hyper-efficient way.
We just have to propagate those,
and that's going to start to happen now.
I think, you know, I wrote my book The Last Economy about this,
and I've got a paper coming out soon,
where I look at the new monetary flows
as agents basically crowd out the private sector.
My view is this.
Everyone ultimately needs to have universal basic AI, or Claude, or whatever.
That allows us to reach everyone.
Everyone needs an AI that grows with them.
And money needs to come not from banks, but from being human.
That's the only way the math works.
It doesn't work from taxation; it doesn't work from anything else.
You need that basic level of money coming into being,
not from deposits at banks, but from being human,
so that the AIs will then buy from us.
And then that enables all of this with the AI that everyone has.
Professor Brown.
So we had half a day of really interesting talks
where the subtext is massive job loss.
And then we had another half a day of talks
about massive labor scarcity,
which is why we need all these robots.
So aside from temporary displacements,
which we know are going to happen, which is it?
Oh, it's clearly...
a massive trough, massive social unrest, and then a rebound in 2028.
And it was actually interesting to hear Eric backstage come up with basically the same timeline.
But it's almost like the Industrial Revolution all over again,
except instead of over 20, 30, 40 years, it's over 2, 3, 4 years.
And so a huge amount of retooling needs to happen.
The way we do taxation and government needs to get restructured,
and AI is just going to happen way too quickly for all those things to react.
So then a massive amount of unrest, and then 2028, hopefully.
I have the counterpoint.
I don't think we're going to see massive job loss,
because I think what's going to happen,
I'm writing a paper right now called The Organizational Singularity,
right?
Because as agents take over all execution,
even strategy inside companies essentially dissolves
into the work of AI.
So what do you do?
And the calculations we've done so far indicate
that if you take a typical company and
automate everything with AI, you'll end up with
about 25% of the same number of people doing oversight, managing dashboards, doing exception
handling, and owning the purpose of the organization.
But you end up creating five times more companies, because you can, and therefore employment
stays roughly where it was.
And this is what we've seen consistently throughout history: we have a disruption,
but all sorts of other sectors take up the slack,
and we don't end up with radical unemployment.
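Salim's arithmetic can be sketched in a few lines. The 25% retention and 5x company-creation figures are his; the 100-person baseline is an illustrative assumption.

```python
# Sketch of the "organizational singularity" employment math.
workers_per_company = 100   # illustrative baseline (assumption)
retained_fraction = 0.25    # stated: ~25% remain after AI automation
company_multiplier = 5      # stated: ~5x more companies get created

employment_before = workers_per_company
employment_after = workers_per_company * retained_fraction * company_multiplier

# 100 before vs 125.0 after: aggregate employment comes out roughly
# flat, in fact slightly above the baseline under these inputs.
print(employment_before, employment_after)
```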
So I tend to be much more optimistic.
Call me naive, call it the wine, call it whatever.
But I tend to be much more optimistic.
All right, all right.
I'm going to move this forward because it's past my bedtime.
All right, we go to Brad, and then we go to Pete,
and we're going to wrap it there.
Brad, please go ahead.
After we finish, I do want some commentary from the group.
We'll take care of that.
Brad.
Salim, I'm going to give you an assist here,
and maybe this is a topic for your talk late tomorrow night,
but maybe Moltbook is an example,
in this age of artificial intelligence,
of the rise of the value of ingenuity and creativity.
And maybe what Meta acquired was not strategic,
and we're all overthinking it,
and they just liked the team.
They thought that they were creative,
that they had some sort of magic,
and they wanted to capture that magic inside their company,
and that's why it was acquired.
So I just want to capture your thoughts, from the great minds up on the stage there,
on the rise of ingenuity and creativity and the value of that.
Totally.
I think we're way overthinking this.
I've got 1100 people.
I know them firsthand.
And many of them I genuinely love.
Lots of them have been in the same roles for 10 or 15 years.
They're great at it.
They've perfected it.
And then the AI just comes along one day, and it can do it.
And there's huge pressure on the management team for higher margins, higher profits.
So what's going to happen is
obvious. The valuations of the companies are going to go through the roof. To the extent that
they're shareholders, they'll make a lot more money, but their W-2 paycheck is dead. It's going
away. And it's going to create a huge amount of disruption. Some subset of people are
shareholders. All my people are shareholders, so they'll be okay. Lots of other people are not
shareholders. All of Dara's drivers are not shareholders, as far as I know. So they're in
deep trouble. The idea that somehow they're going to become creators
overnight is ludicrous.
The people who are creative, like the Moltbook team,
are going to do incredibly well.
Our kids, and most kids who are not saddled with a career,
are going to do incredibly well.
But the transition is inevitable.
It's happening imminently.
All right, Pete.
One statement.
What we found with the exponential organizations model
is that survival and success
depend on adaptability, not scalability and efficiency.
And so you just keep that vector going.
The people that are the most adaptable today
are the most likely to survive.
Amen.
Throw your kids into the woods and see if they survive.
Pete.
No, I didn't say that.
Alex, I said this to you earlier today.
I love your analogy of tiling the planet with compute,
because my answer to the power problem,
being a data center design-builder, is that
finding 1,200 megawatts of contiguous property is
getting harder and harder.
So my answer is: that's only 120 10-megawatt data centers,
and you put them in an area,
and we tile the areas to be able to do that.
And Emad, I think it matches perfectly with your idea of national champions,
because what you're trying to do for the protocol stack of decentralization and sovereignty,
I want to do with the physical layer.
I want to build 20,000 data centers across the country, at 10 megawatts each,
so that I'm less than one millisecond from any place in the country,
in, if you will, the high school football cities of the world.
And to me, that solves it both on both sides,
at the protocol layer and the data center distribution standpoint.
And I think that's how we can actually deliver the power
because we don't have a power production problem in this country.
We have a power transmission and storage problem in this country.
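Pete's tiling arithmetic checks out as stated; a minimal sketch using his figures:

```python
# One hard-to-site 1,200 MW campus equals 120 sites of 10 MW each;
# 20,000 such sites nationwide would total 200 GW.
site_mw = 10

campus_mw = 1200
sites_per_campus = campus_mw // site_mw   # 120

national_sites = 20_000
national_gw = national_sites * site_mw / 1000  # 200.0 GW

print(f"{sites_per_campus} small sites replace one campus; "
      f"{national_sites} sites total {national_gw:.0f} GW")
```

The design choice the sketch captures is trading one giant interconnection request for many small ones, which is a siting and transmission argument rather than a generation argument.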
And, you know, I think every governor in the country
should hear exactly what you just said and jump on it instantly.
And Alex is incredibly frustrated with the meetings we've had with government.
Shh.
I mean, look, if you're right, and I hope you are,
and I think you probably are,
then we need lots and lots of regional data centers
that have to be in every single state.
And that would be the best thing
that could ever happen for this job dislocation.
So if that theory is right,
we need to get on it right away
and create those projects like now.
I'm ready.
All right.
Let's give it up for Alex Wissner-Gross,
Dave Blundin, Salim Ismail, and Emad Mostaque.
