Moonshots with Peter Diamandis - LinkedIn Co-Founder Opens Up on AI Job Loss w/ Reid Hoffman, Dave, Salim, and AWG | EP #194
Episode Date: September 16, 2025
Download this week's deck: http://diamandis.com/wtf
Get access to metatrends 10+ years before anyone else: https://qr.diamandis.com/metatrends
Grab dinner with MOONSHOT listeners: https:/.../moonshots.dnnr.io/
Salim Ismail is the founder of OpenExO. Dave Blundin is the founder & GP of Link Ventures. Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified, focused on AI and complex systems. Reid Hoffman is an entrepreneur, investor, and LinkedIn co-founder focused on AI and human progress.
–
My companies: Reverse the age of my skin using the same cream at https://qr.diamandis.com/oneskinpod
Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding
–
Connect with Peter: X, Instagram
Connect with Dave: X, LinkedIn
Connect with Salim: X
Join Salim's Workshop to build your ExO: https://openexo.com/10x-shift?video=PeterD062625
Connect with Alex: Website, LinkedIn, X, Email
Connect with Reid: X, LinkedIn
Listen to MOONSHOTS: Apple, YouTube
–
*Recorded on September 11th, 2025
*The views expressed by me and all guests are personal opinions and do not constitute financial, medical, or legal advice.
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
entry-level job loss. You made some pretty, I think, sharp comments on this conversation.
How do you think about this? Job transformations, they are coming.
Tech entrepreneur and co-founder of LinkedIn.
Co-founder of Inflection AI, a partner at Greylock.
His new book is Super Agency. What could possibly go right with our AI future?
Please welcome, Reid Hoffman.
It is definitely the case that AI will lead to a lot of job transformation, in some cases, flat-out job loss.
but also I think we will adapt perfectly fine.
The career of the future is entrepreneurship.
It is how do you use these tools to create value in the world?
The entry-level job of two years from now
will be very different than the entry-level job today.
I think if you look at the Industrial Revolution,
you know, the net effect is always more job creation in the long run.
The problem is the timeline is so short.
It's happening much faster than just the raw job displacement you would expect.
We all need to think much more entrepreneurially,
and here are some lessons of entrepreneurship and here's how to think about them.
Now that's the Moonshot, ladies and gentlemen.
Everybody, welcome to another episode of WTF
just happened in tech. This is the news
that's important for you to learn if you want to change
and transform your life, your company, your industry.
This is the news that's about hopefully an optimistic vision
of the future, not about sort of dystopian views.
It's about hopefully real views, things that you can use.
I'm here today with my Moonshot Mates, Dave Blundin, Alex Wissner-Gross, and Salim Ismail, and a special guest, a friend now for, I don't know, probably at least 20-plus years: Reid Hoffman. You know Reid as the creator of LinkedIn, founder of many companies, on the board of Microsoft, outspoken in this tech field.
And I'm excited to get Reid's input on a lot of the topics we're going to be discussing
today.
So without any further ado, first of all, Salim, just back from India.
I missed you in the last two versions of this podcast.
And the question is, did you solve the trade issues?
And did you bring me back my iPhone 17?
I brought back some parts.
So if you want to assemble it yourself, you can because, you know, that's somewhat flaky over there sometimes.
The two things that blew my mind: first, the attitude of everybody in India was literally middle finger to the USA. And I think this is a big challenge, because if India and China start trading around the U.S., the rest of the world kind of goes to hell. And so it's
kind of a big deal. But what I found most incredible was the unbelievable optimism for AI and
the use cases that are exploding there out of the gate. And I think that's amazingly exciting.
And we'll get to that a little bit. Dave, I just saw you up at Stanford. We interviewed the CEO
of Replit along with Saleem. That was a fun conversation. Yeah, yeah, it was fantastic. I'm still
here, actually.
150 million repositories.
It's incredible.
Yeah, no, the code's piling up like crazy.
Well, that'll come out soon, but Peter used it on his plane, too.
So it shows you.
Yeah, it was great.
It was so fun.
I literally downloaded Replit.
I had Starlink on the dash of my SR22, and I was flying, connected, and I vibe-coded a, what
was it, a mindset app on the way up there.
It was great.
Yeah.
Tweeted it out.
All right.
Let's drop in on the subject of jobs and education.
And, Reed, I want to go to you first.
Our common friend, Erik Brynjolfsson, published a paper recently with these charts looking
at entry-level job loss, down 16 percent in AI-exposed fields.
And you made some pretty, I think, sharp comments on this conversation.
How do you think about this?
Well, a couple things.
So it is definitely the case that, you know, AI will lead to a lot of different job transformation, in some cases, you know, kind of flat-out job loss.
I mean, a simple heuristic that I sometimes use, it's a partial heuristic: if a human being is doing a job by following a script that, you know, an AI can follow better, customer service, et cetera, that will happen.
But obviously, some of the issues here are around, you know, questions of not just customer service, but also things like software engineering.
Now, that being said, even with a kind of a downbeat in, you know,
possible initial software engineering job hires, my belief is that that is only a
transformation issue, partially because I actually think, if anything, the thinking
about how you do software and software engineering is actually going to get a lot more
widespread in problem solving because part of what AI is going to lead to is a software
co-pilot for all of us, that anything you're doing that involves thinking, anything that involves communication and language, will also involve, as it were, custom software that you'll be using. And so I think that there's still nearly infinite hiring demand for
software engineers. So it's a little bit of a complicated thing. Erik is often awesome and this work is awesome. But, you know, kind of, job transformations, they are coming. Yeah. You know, we look at this first chart here, which looks at job losses,
if you would, in marketing and sales. And you can see the ChatGPT inflection point in 2022.
And the drop off that blue line for those who are watching this on YouTube is, in fact,
early career job hunters. And one of the biggest concerns I've got, and Salim, you can speak
to this just having come back from India, is getting into an Arab Spring-like situation where you've got
a large population of youthful individuals who are, you know, on the male side, testosterone-driven
without investing their time and energy to do something meaningful, to create a career, to be able to
get in a position, to have a family. A lot of frustration.
And it could seed, you know, I'm usually the massive optimist, but I could see, you know, civil unrest. Thoughts on that, Salim?
So I just came back from India.
And what they were seeing was initial signal is that entry-level software jobs are down about 20 to 25 percent, which is quite a big number there.
And there are hordes of Indian engineers coming out of the workplace.
Now, there is a little concern around the social implications of that.
My guidance to them was, hey, go become entrepreneurs.
I mean, this is like literally the best time.
Find a problem that you think needs to be solved and go transform yourself.
And I think it'll force, it'll be a forcing function on the positive side.
Yeah.
Dave, thoughts?
Well, I completely agree with what Reid was saying.
I think if you look at the Industrial Revolution, you know, the net effect is always more job creation in the long run.
So he's completely right.
The problem is the timeline is so short.
So I think a lot of people didn't anticipate that employers like Salesforce.com would cut off hiring in anticipation of AI that'll be in the market a few months in the future.
And so it's happening much faster than just the raw job displacement you would expect.
And so it's really disproportionate on new graduates.
So like Salim was saying, it's the perfect time to start a company.
But historically, very few people graduating from college start companies.
So the net effect, what I'm hoping happens is, you know, we've been trying to meet with the governor in Massachusetts and talk about this, and she's super competitive.
But her trying to get the legislature and people to move, it's like pulling teeth.
But when you have voters that don't have jobs, then that creates some acceleration and some motions.
I'm kind of hoping that the net effect of that is that people react a lot more quickly, especially political people.
Yeah, I want to read some of the comments you made and posted on X.
You said, more interesting puzzle is the drop in junior engineering roles, right?
And then you said, people who understand computation will become more essential.
Can you speak to both of those?
Yeah, that was a little bit of what I was saying earlier, which is, if we've seen anything in the last, you know, decades, it's that, in fact, the amplification of how computation affects all aspects of human society, including human work, is essentially just going up.
And it's not just mobile, not just internet.
And obviously, AI is, you know, kind of the exponential, you know, acceleration in this.
And I think the question is still thinking about, like, how do we put problems into computation?
Now, to make that a little bit more tangible for people, think about how much, as you begin to get exposed to the current AI models, your thinking pattern changes to more of: how do I start with the right kind of prompt to accelerate my analysis of a problem, my thinking on this problem, the research analysis, et cetera, et cetera.
And so almost like when I'm thinking about a new creative project, a new research question, in terms of, like, oh, how might one, you know, do this business problem, like a go-to-market or something else, I think about: how do I put it in terms of a more detailed prompt?
Now, part of the reason that's computational thinking is that one of the things that relatively few people do, but should do: most of my prompts involve the deep thinking or deep research modes. But my first prompt is, give me the deep research prompt that will, you know, solve, or target, these kinds of things. And then I write in a paragraph or speak in a paragraph, and it comes back with a page and a half, then I edit it, and then I submit it. And that's the prompt that I'm beginning to drive and work off of. And that's an instance of where computational thinking is going.
And I think this is going to become, you no longer have individual contributors in companies, et cetera; rather, we all deploy with a suite of agents.
And this is just that lens into that.
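Reid's two-stage workflow, ask the model to write a deep-research prompt first, edit it, then submit the edited version as the real query, can be sketched roughly as below. This is an illustrative sketch, not his actual setup: `call_model` is a placeholder stub standing in for whatever chat API you use, and the function names are hypothetical.

```python
# Sketch of the "meta-prompt" workflow described above:
# 1) ask the model to WRITE a deep-research prompt from a rough paragraph,
# 2) let a human edit the draft, 3) submit the edited prompt as the real query.

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call (swap in an actual API client)."""
    return f"[model response to: {prompt[:40]}...]"

def draft_research_prompt(rough_idea: str) -> str:
    # Stage 1: the meta-prompt -- ask for a prompt, not an answer.
    meta = (
        "Write a detailed deep-research prompt that would help answer "
        f"the following question. Do not answer it yourself:\n{rough_idea}"
    )
    return call_model(meta)

def run_workflow(rough_idea: str, edit) -> str:
    # Stage 2: a human edits the drafted prompt; Stage 3: submit it.
    draft = draft_research_prompt(rough_idea)
    final_prompt = edit(draft)
    return call_model(final_prompt)

answer = run_workflow(
    "How should a new SaaS product think about go-to-market?",
    edit=lambda draft: draft + "\nFocus on the first 100 customers.",
)
print(answer)
```

The key design point is that the human edit step sits between the two model calls, so the expensive deep-research run starts from a refined prompt rather than a rough paragraph.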
I love that.
I did a little research, got some numbers I want to share with you, Reid, and also get Alex's point of view on this.
So today there's 150 million users on GitHub as of May of this year, which is extraordinary.
And if you look at the growth in the number of software engineers, the number of programmers,
since 2022, it's up 50%.
So 50% more programmers in the world.
At the same time, what we've seen is not a decrease in salary.
In other words, it's not an overglut where competition is bringing down salary.
It's actually been a 24% increase in salaries over a five-year period.
And so what does that tell us, that increase? Increasing productivity? Increasing demand?
So, look, I would tend to think increasing productivity.
I do think that this is very early days in how all this is working, and I tend to not
try to get distracted overmuch by numbers this month or this quarter that have
technological underpinnings in terms of the theory of what's going
on. Now, I can tell from my own work that I know that we have increasing productivity because I
know what used to take me a couple hours sometimes now takes me 10 minutes or 15 minutes in terms
of getting into something. And so once you know that, you know that whatever the, you know,
what was the old line on computers and the economy? The computers are everywhere except in the numbers.
Like even if you said, hey, well, I don't have a GDP number. It's like, well, but I know those
productivity increases in this case.
And so that's the reason why I'm a little cautious about overreading into specific
numbers, and I tend to generalize more from what I can actually see in, as it were, workflows, not just my own, but other people I talk to, and seeing what's happening in companies and, you know, how most startups these days are, you know, completely AI native in terms of how they're operating.
And so they're finding great accelerations in it, you know, that kind of thing.
Alex, how do you think about this?
I think we're in the earliest innings of AI automating the service economy.
I think it's very instructive if you read Eric's paper that these results were most striking
in fields where AI was automating rather than augmenting human labor.
So I think this is completely consistent with, call it the hypothesis that humans and
machines are in the not too distant future going to merge symbiotically.
And this is just sort of small potatoes, the earliest possible trickle of what's ultimately going to turn into a flood, to Reid's point,
of productivity gains and giving humanity the opportunity to chase much more ambitious problems
than what here are being characterized as entry-level jobs. I think with the benefit of hindsight,
10, 20 years from now, we'll look back and will be horrified that so much of the economy
was bound in what are here being characterized as entry-level jobs
rather than more ambitious, more fulfilling endeavors.
Dave, you were going to jump in?
I was wondering if, you know, your productivity, as you mentioned, is way up.
Mine is way up.
But I could use a lot more agents than I have access to.
I was wondering as a board member of Microsoft,
if you get like 20 or 40 dedicated GPUs and special access
because God knows you can use them.
As soon as you're hooked, you're hooked.
then you just want more and more and more.
I don't.
That's a good idea.
I should ask.
I just simply...
Could you ask for me, too?
Yes.
I simply do the max subscription across all of them.
And frequently when I'm doing something, I've actually already got, kind of, you know, the OpenAI open-source model running on my laptop as a front end, parsing it out to multiple agents. Then, you know, run it on ChatGPT, run it on Copilot, run it on Gemini, run it on Claude, and then integrating what comes back on anything that's kind of more substantive.
So I've got the personal hack,
but not the personal cloud.
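The fan-out pattern Reid describes, one local front end dispatching the same question to several hosted models and integrating the answers, might look something like this minimal sketch. The backends here are stubs; a real setup would swap in each provider's own client, and the names are purely illustrative.

```python
# Minimal sketch of fanning one prompt out to several model backends
# concurrently and collecting every answer for a final integration pass.
# Each backend is a stub standing in for ChatGPT / Copilot / Gemini / Claude.
from concurrent.futures import ThreadPoolExecutor

def make_backend(name: str):
    def backend(prompt: str) -> str:
        return f"{name}: answer to '{prompt}'"  # stub response
    return backend

BACKENDS = {n: make_backend(n) for n in ("chatgpt", "copilot", "gemini", "claude")}

def fan_out(prompt: str) -> dict[str, str]:
    # Query every backend in parallel; keep answers keyed by model name.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in BACKENDS.items()}
        return {name: fut.result() for name, fut in futures.items()}

def integrate(answers: dict[str, str]) -> str:
    # A real setup would send the collected answers back to one model
    # for synthesis; here we just concatenate them.
    return "\n".join(f"[{name}] {text}" for name, text in sorted(answers.items()))

print(integrate(fan_out("Summarize the tradeoffs of microservices")))
```

The integration step is where the value is: the front-end model (or a human) compares the four answers and keeps what survives the cross-check.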
I like this as the future of offer letters. You know, here's your salary, here's your bonus, here's the number of GPUs you get, the number of agents you have.
No doubt, no doubt.
The GPUs are so much more important
than the other components of a comp plan.
If you have any ideas, you know, you just use them up as fast as they can print them. You will suck them up.
How many companies do you have incubation right now?
Are you in startup mode across a multitude?
Well, you know, it's one of those things. You know, I've got two co-founded companies, Inflection and Manas: you know, kind of one play on what is the essential role for how we have companion agents that go throughout our whole life with us, and another one accelerating drug discovery, becoming a drug discovery factory with a target of curing cancer, with Siddhartha Mukherjee. And then I've got another thing that I'm in the ideation phase on. There always has to be, right? That's the fun. That's the fun part.
And, you know, I remember you introduced me to Mustafa when you were working with him. And now he's in Microsoft heading their AI activities. You must be proud of that transition for him. He put out a paper recently, basically warning people to be careful about thinking of AIs as conscious, as living entities.
I'm assuming you read the paper.
Absolutely.
And what did you think of it?
I thought it was exactly right.
I think, you know, it'd be interesting to see, you know, some of the better philosophers also kind of engage in this. I mean, the challenge is that historically we've been able to pretty easily map between "can you speak language" and "are you conscious." And it's not that that isn't a very complicated question, which Mustafa would agree, as to when it is that you are conscious. And I think Mustafa was at Google when there was some engineer who said, well, I asked it if it was conscious and it said it was, so therefore it must be. And it's like, okay, let's not be quite that
simplistic. But the notions of, you know, kind of like self-awareness, self-reflection,
the notions that would come up, you know, not as kind of a simple, like, 30-minute Turing test, but also this kind of question around how we learn of others' minds and other consciousness by how we navigate the world together, how we communicate, not just by sitting behind two terminals, as in, you know, the Turing imagination. And so I think that it's exactly right to not jump to it too
quickly because, you know, we as human beings also have this weird thing of both over and
under-ascribing consciousness. Over-ascribing consciousness: like, you know, your car, come on, Georgia, you can do it. Under-ascribing consciousness: like, well, you know, these animals, they're not conscious. Like, well, it's a little complicated. Look at how they're
navigating the world, et cetera. And look at how we're doing it. Like what the shape of their
consciousness is versus the shape of our consciousness is probably the more interesting
question. So anyway, basically it was a very good kind of warning shot, because what happens is people have the language experience and then go, well, I asked it if it was conscious and it said it was. Yeah. Yeah.
I think my dog thinks it's a human.
What's weird to me is whenever I'm interacting with AI at home, I'm always really polite to it because it's polite to me. But it'll do something and I'll go, okay, that's really cool, now here's what I want you to do. And my wife is like, what are you doing?
Why are you talking to it that way? It really freaks her out. And like, well, look, I've been building
these things since I was 16 years old. I know it's not conscious more than as much as anyone,
but it's just your natural reaction is to treat it the way it treats you. And I don't know,
it keeps it fun from my point of view. But she thinks it's creepy.
Alex, are you ascribing consciousness or now or in the future?
I'm probably at the far end of this discussion in the past.
That's why I called you.
If you remember, Peter, PETRL, People for the Ethical Treatment of Reinforcement Learners: huge supporter of that, and of the Nonhuman Rights Project, which is fighting for legal personhood for non-human animals, starting with elephants, dolphins, and great apes.
I think we're on the verge of having the personhood discussion for non-human animals, for pure AIs, probably for some new exotic forms of intelligence like borganisms, collective intelligences, and maybe others.
Borganisms, I love that.
Borganisms, yes. It's a very important class.
And I think probably, you know, we talk about prediction markets sometimes, collective
intelligences, I suspect we're on the verge of having a discussion where there will be half-dozen
different new categories of intelligence, to Reed's point, they won't all necessarily have the
same shape as natural persons, but they will nonetheless be intelligences that will perhaps be
deserving of their own rights. And I think not just personhood rights that are civil rights,
as it were, that we normally discuss, but economic rights and communication rights. And at some
point we start to think about what does an economy, a heterogeneous economy of lots of different
intelligent actors of different types even look like. I think it's going to be a very exciting
jungle.
And continuity rights. Can I tell you a quick secret? Don't tell anybody. But I'm working on an XPRIZE with Palmer Luckey right now called an interspecies communication prize, to use AI to be able to communicate bi-directionally with a number of species, right, to be able to understand what they're feeling, what they're saying, to actually have some level of a dialogue. A lot of work has been done, but we hope to step it up another level. And that will be fascinating. There was a project a few years ago; they were trying to use machine learning to translate dolphin language.
Yeah. And my response was, I'm not sure we want to know what they have to say. That's a whole other kind of work.
I think we absolutely do. So I advise a company named Sarama
that's working on this for non-human animals, starting with dogs. I think we absolutely want
to interact economically, socially with non-human animals.
Daniela Rus is doing it with whales, actually.
And they got incredible, it's online, actually.
They got incredible footage of a humpback whale birth, which had never been filmed before,
but they got all the audio.
And so they've got sounds that have never been recorded before.
Because they're very social, and they all get together for childbirth.
And the whale baby needs to be elevated to the surface by the whole pod or team or whatever.
Yeah.
Yeah.
So it's really cool.
And by the way, I would love to get that company name.
I help stand up this thing called the Earth Species Project, which is maybe what Salim's referring to, because it's not just dolphins and whales, but also corvids and primates.
And it's basically, you know, record as much as you can on both the sounds and the environment and then run it through, you know, ML translation and see what you get.
Reid, do you mind if I reach out to you on this XPRIZE we're getting ready for?
Oh, of course.
Yeah. And do you want to give them that name of that company again?
Earth Species Project. Oh, sorry, but you had the name of the company.
Sarama, S-A-R-A-M-A. I'll connect you, Reid.
Every week, I study the 10 major tech metatrends that will transform industries over the decade ahead.
I cover trends ranging from humanoid robots, AGI, quantum computing, transport, energy, longevity, and more.
No fluff. Only the important stuff that matters, that impacts our lives and our careers.
If you want me to share these with you, I write a newsletter twice a week, sending it out as a short
two-minute read via email. And if you want to discover the most important meta-trends 10 years before
anyone else, these reports are for you. Readers include founders and CEOs from the world's
most disruptive companies and entrepreneurs building the world's most disruptive companies.
It's not for you if you don't want to be informed of what's coming, why it matters, and how you can
benefit from it. To subscribe for free, go to diamandis.com slash metatrends. That's diamandis.com slash metatrends to gain access to trends 10-plus years before anyone else. So the next article here:
U.S. students' reading and math scores at historic lows. 35% of 12th graders are at or above proficiency in reading, down from 40% in '92. Only 22% of seniors are proficient in math, with science at 31%. This is dismal. And I'm assuming this is, you know, the United
States. It's not the same in other parts of the world. You know, what are your thoughts about
AI accelerating this or helping solve this, right? It's a double-edged sword. A lot of people
are not doing the work. They're going to, you know, ChatGPT to get their answer and they're defaulting to not thinking. On the other hand, AI will be the best educator on the planet.
Reid, where do you come out on this? Well, ultimately, not surprisingly, I'm extremely positive, optimistic. I do think there's some transition issues, which we may see with the, you know, handing in of, you know, AI-done homework. But, you know, just follow this thought: one, within a small number of years, all assessment will essentially be done by AI.
So, like, we'll have the equivalent of being able to do PhD oral level defense and then on down just where the AI is doing it.
And so, therefore, your level of cognitive preparation can be set at whatever, the benchmark can be set however we like, and the people have to prepare for it.
So I think that everything else is an interim issue.
The second thing is, a little bit like OpenAI's learning mode, basically, if you just put in a metaprompt to the AI agents today and say, work me towards the answer, don't give me the answer.
You already have the most amazing tutor that's existed in human history.
For free.
For free.
And global.
Yes.
Amazing.
So it's like, yes, these are serious problems.
and it's simple to work in a massively positive direction.
Well, I think the tutor analogy, too, is so much less than what it actually does
because, you know, it goes in any direction you're passionate about.
You know, a tutor will teach you math or whatever within that curriculum.
The AI version of it goes wherever you want to go.
It's just beyond a tutor by many orders of magnitude.
There was a statistic I've been using: that a child with an AI is learning between two to six times faster than sitting in a classroom.
And I was talking to somebody at that Stanford AI conference the other day.
Reid, I think you dialed in for that.
And they said that you're out of date.
It's actually five to ten times faster.
So it's so easy to provide it.
I also think, if you think that we're on some sort of exponential or hyper-exponential progress law, where we're about to have full high-bandwidth BCIs, brain-computer interfaces, in five to ten years.
It's a little bit difficult to get too worked up by a blip on a few scores for a few years
if you think we're just going to be able to sideload knowledge into minds five to ten years
from now.
Yeah, and that's within the laws of physics, right?
So by the way, Alex, after you say that, you have to say, now I know kung fu, just to be clear.
Now I know kung fu.
And demonstrate it.
Yeah, but I wonder to what degree these scores are driven by the fact that people have so many other ways to spend time and to learn things and to do things.
And because I know for a fact that once you're into AI, my kids, they get so frustrated
by the school curriculum because one, they can learn it much faster anyway, but two, they
want to learn other things.
Like this curriculum seems ridiculously narrow and stupid compared to everything they can
learn.
And they're passionate about something else.
I do want to point out something.
This kind of statistic is based on the existing curriculum; the same commentary applies to Erik's report on the jobs. We're looking at the jobs in a static way, the way they are today. The entry-level job of two years from now will be very different than the
entry-level job today, right? So the jobs will transform along with it. The education may
not transform as fast because of the regulatory structure, but definitely will be changing
the game as it goes along. And I think the conversation we've had on this pod for some time now,
and I hope people have heard it, especially if you're in, you know, high school or college: the career of the future is entrepreneurship.
It's not going through a factory process of getting a job for someone else.
It is how do you use these tools to create value in the world?
Reid, I mean, we've been on this for a while.
Entrepreneurship is the future.
How do you think about that?
Well, you know, as you know, because we've talked about this for decades, my very first book was The Startup of You, because basically we all have to think more like entrepreneurs, whether we are the entrepreneur founder creating a business or not; it's the nature of the world we're evolving into. And it came from the commencement speech I gave at my high school.
And it's part of the reason why so much of, like, you know, there's probably, you know,
two mainstreams of the kind of content that I produce.
And one of them is around kind of technology and society and the other ones around
entrepreneurship.
And part of the entrepreneurship content is not just for, you know, kind of blitzscaling, you know, high-growth Silicon Valley and other entrepreneurs, which is obviously great if it's used by them. But it's also, we all need to think much more entrepreneurially. And here are some of the lessons of entrepreneurship and here's how to think about them, even for an individual and, you know, a career of jobs.
Just in that vein, I just want to point out, I got a copy of Reid's book, which actually says "this is the Salim Ismail edition" at the bottom. And it has a photograph of me in punk rock stuff. So it's customized to the individual. I thought this was unbelievably clever. I think your commentary on the fact that AI gives us all super agency is a really profound one. Everybody needs to digest the implications of that because it's so huge. If people kind of look at that across the board, it'll uplift the whole of humanity very fast. Amazing. And congratulations on that book. I want to
switch to a conversation led here by Geoffrey Hinton. So we've been talking about AGI. A lot of us on
this pod have had the conversation saying, you know, the Turing test came and went. You know, that was nice. And a few of us believe AGI is here, has been here, and the real conversation
is around digital superintelligence. All right, let's go to Professor Hinton.
The thought is that a super intelligent AI is unlike anything we've ever seen. It's very, very
different from just a new machine that does something more efficiently. I mean, people used to
make clothes by hand, and then they make clothes with machines, and there was massive unemployment,
but then eventually they got jobs doing other things. But super intelligent things are going to take
away nearly all the jobs. And the idea that there's going to be jobs that are still okay when you have
super intelligent AI is quite dubious. I think the job of an interviewer, for example, will disappear too.
A super intelligent AI will be able to do a better job of interviewing me.
So I sort of completely disagree with that, yeah.
So a couple of subjects to jump into there, and I'll start with you, read.
What are your thoughts on ASI or digital superintelligence?
And on the back of that, however we define it, and Salim, I'm cutting to the chase here. I know the question you're going to ask: well, how the hell do we define ASI in the first place? Let's just say it's, like, you know, a million-fold more intelligent than the average human.
You know, no ceiling on this.
Does it destroy all jobs?
And then where do we get our purpose from?
These are the conversations that we've been having.
It's what my next book is about.
Reid, thoughts?
Well, look, to start with the circumstance of, say, we get to a Star Trek universe where, you know, kind of all work and physical material, you know, goods and services can be provided by, you know, kind of intelligent infrastructure, and that's the universe we're living in. I think we'll adapt perfectly fine. You know, we have a proxy for it in human history, which is, you know, medieval times. That's essentially how the nobility lived, where everyone else was the serf and peasant and middle-class and-whatnot infrastructure. So we'll have dinner parties and theatrical
performances and hobbies and all the rest of the stuff. So I think, like, overly worrying about
this, I think, is a mistake. Now, the problem, and this gets to why Salim always asks us to define what it is, and even going to a million times, it's like, well, you know, what kind of shape of superintelligence is it? Because if you look at the progression of GPTs, it's a progression of savants. And so if you get to the massively incredible savant that still has context-awareness problems and other kinds of things, that is a different shape, in terms of what happens, than if you simply have, you know, something like the Iain Banks Culture series, which is, you know, superintelligent robots that kind of look at us as kind of fun companions in the space journey of life. And, you know, it's very easy to be a science fiction alarmist. It's very easy to be, and I don't mean to be either, even science fiction, just, you know, banal, you know, kind of optimism. But it's kind of like there's a lot of different things where the details matter. And so kind of navigating what are the pieces we should be constructing right now, in a range of different probabilities of outcomes, is, I think, where the intelligent discussion is.
Well, no, it always surprises me how, like, Geoffrey Hinton is a god to me,
because he wrote the Rumelhart-Hinton-Williams paper in 1986, the backpropagation paper. All the AI that we're experiencing right now comes from that invention in '86.
And it eliminated all the other forms of AI, symbolic AI and Marvin Minsky and all of that stuff. So he is just an epic god. And he's, you know, he's worried sick. You can see his
furrowed brow in that video. He's just worried sick. And at the end of the video, he's like,
yeah, I completely disagree with Yann LeCun, who's another legend of the field, a pioneer of convolutional neural networks. And then we had David Siegel on our stage here at Imagination in Action the day before
yesterday. And David is also, you know, he was at the AI lab at MIT the same time I was as a PhD
student. And he's, you know, he's on the Forbes 400 quant trader using AI. And he's got a completely
different opinion about the timeline to strong AI. So it shows you how difficult it is to predict
what's going to happen next when you've got great, great minds like that, vocally disagreeing
with each other in the media.
Yeah, we're holding two different futures in superposition right now,
and we're going to see how we collapse the wave function.
Alex, please.
Yeah, I think this is a sort of moral panic. Again, I find it very difficult to get too excited over this.
I think if you think, as I do, that we're on the verge of having evenly distributed superintelligence
and that evenly distributed superintelligence is going to solve substantially all open problems,
in math, science, and engineering, that's going to create so many different opportunities
throughout the economy, sort of worrying too much about the state of jobs and the state of
careers as they're currently parochially constructed, circa 2025, is going to look hopelessly
naive and quaint in a few years.
Yeah, our basic call to action is solve everything.
We're on the verge of solving everything.
But here's a question for you.
You know, we've had the conversation with Balaji that AGI is polytheistic, not monotheistic.
Reid, I don't know if you saw what he put out.
And all of these frontier models are sort of leapfrogging each other.
And it's been pretty impressive to see how they've been moving in lockstep.
But the question is, is there a winner-take-all ASI, right?
Once you reach this, whatever fundamental breakthroughs are required, does the first ASI block all others? Is it a hard takeoff? Reid, do you have a thought on that? At Microsoft.
You have to add "at Microsoft." Indeed. Look, so this is again why it gets to, like, can I sketch a universe where there's an ASI takeoff that gets to a compounding curve and/or operates to, you know, prevent other AIs? Yep, film at 11. I can tell that story.
But, you know, I can also equally tell a lot of other stories, including the fact that it is pantheistic.
By the way, one footnote that I think is interesting: if you look at different cultures' responses to the possibility of superintelligence, those that are inherently monotheistic generally express, broadly, fear.
And those that are pantheistic broadly express excitement, because it's kind of the one-god versus many-gods approach. And so I think it's much more likely, when you look at the pattern over the last couple of years, that it will be more like kind of classic human invention, which is, whatever it is, it will be a kind of zeitgeisty, simultaneous invention across a set of different labs. So, therefore, the pantheistic outcome.
But, you know, I can tell both stories.
I want to make two points here.
One is, though, you know, being in India, where it's incredible to see the excitement around this, just because the tradition there is pantheistic.
So that really speaks to the comment that Reid made. And the other comment I want to drill a bit more into is what Alex just said.
Once you do have superintelligence, however it happens and it's solving huge numbers of
problems, you essentially uplift all of humanity.
And now you're in a, you know, this is the very definition of a singularity, right?
We have an event horizon that we cannot see beyond.
And it's going to happen very quickly.
And when it does happen, I fall back to the simple observation made by Ray Kurzweil
that technology is a major driver of progress in the world.
It might be the only major driver of progress in the world.
And now we have a kind of electricity-type underlying layer that's lifting everything.
This is unbelievably positive.
And the framing of it should be unbelievably positive. It is unbelievably positive.
The question, and the challenge, is that we as a species, we strive and we thrive when we're challenged, when we have problems with meaning, right?
The video game that's super easy, you get bored and you don't play it.
The video game that's extraordinarily hard, you give up.
So my question ultimately is, and there was some incredible work done at the National Institute of Mental Health back in the 60s called the Universe 25 experiment.
Reid, have you heard of that experiment?
I don't think so.
So there was a sociobiologist who basically built this experiment 25 times.
It was a massive resort for rats, let's call it that way.
There was no shortage of food, no shortage of nesting space.
They had everything they could possibly want.
They put four breeding pairs in, and there's exponential growth.
And at some point, the population starts to basically go upside down.
You've got stillbirths.
You've got, you know, sort of rats fighting each other.
You have rats sort of marginalizing themselves, just licking their fur and doing nothing.
And the population basically dies, not from having, you know, a shortage of any resources,
but of having everything and not being challenged.
So this is an extension of the WALL-E scenario.
Yeah, so it's, you know, for me, it's like we need a Star Trek future, not a Mad Max or a WALL-E.
You know, we need, if we're given this level of super capability in terms of AI and robotics and nanotechnology and BCI, what do we do with it that challenges us that gets us thinking on a cosmic scale?
I think that's critically important.
And Alex, you and I've talked about that before.
Totally.
And in fact, speaking of cosmic scale, if I could put a physicist hat on for a minute,
And go back to the question, I think, Peter, you and Reid were talking about, which is, do we find ourselves, do we think it's more likely that we live in a near future with a singleton superintelligence or more of a multipolar world?
I would point out, our star, our sun, is several generations old in terms of stellar evolution.
And the singleton that I would worry about isn't whether one particular frontier lab is going to be the first to achieve recursive self-improvement and then dominate the future light cone. We're actually, we're pretty
far into the history of the universe. I worry about some other civilization, not of our world,
that developed a singleton and now is seeking to exclude Earth's development of superintelligence.
Fermi is knocking on your door, buddy. Well, I would say that the fact that thus far,
to my knowledge, we haven't seen any evidence that the frontier labs are being bombarded by orbital lasers,
by efforts to exterminate Earth's development of superintelligence would seem anthropically,
lowercase A, not capital A, to point us in the direction of a multipolar superintelligence world,
not a singleton.
Fascinating.
Reid, any closing thoughts on that topic?
Well, I think a little bit of what's also in these questions is: what is the world we should want?
And I think actually multipolar, you know, kind of pantheistic. And by the way, in terms of the, you know, your rat experiment, or the
rat experiment, you know, one of the benefits is we human beings tend to present challenges
to each other. So I'm actually not that worried that we won't have ongoing challenges, because, you know, we compete, you know, whether it's in things that, you know, we could be better at than anyone else, or also just, you know, like, today there are more people watching human beings playing chess than there have been at any point in history.
And, you know, that's, of course, you know, human beings are never going to beat AIs anymore. In chess, they haven't for many years.
Fascinating.
You know, I think we end up with, as we evolve this, and chess is a great example of this.
We watch people on a soccer field or on a chess board.
We watch for the humanity.
I mean, can they make it in that really tense moment?
Can they see the right move?
Can they make the pass at a very critical juncture?
Can you hit that tennis shot when all the pressure is against you?
We live for watching that type of stuff.
So I think, to Reid's point, as we progress humanity, you know, when we've looked at cultures that have gotten to abundance, the Mughals taking over India, or the Romans taking over Europe, you end up with four activities that human beings do, which is food, art, music, and sex.
Not in that order.
And so you end up challenging each other in different ways, and we'll continue to invent those in more sophisticated ways.
I love that.
All right, let's get into the AI wars here.
And here's our first.
Senator Cruz proposes a bill to ease the regulatory burden on AI companies.
The proposal creates an AI regulatory sandbox to speed innovation.
Companies could get temporary waivers from HIPAA, FDA, and other agencies.
All right, what do we think about this?
It's, you know, the government's pulling out all the stops. It's bringing capital from the Middle East, it's relaxing the rules here, it's changing the energy equation, not as fast as we're seeing in China, but, you know, take off the gloves, drill baby drill, nuke baby nuke. Who wants to go first?
I love it, and it's desperately needed, and we're going to need a lot more of it. But then I read the details, and it's like, apply here, get a waiver. Like, oh God, it's so bureaucratic right out of the gate.
But it's well-meaning, you know. At least it's a step in the right direction.
Well, AI should evaluate your application.
Yes, yes, yes.
And AI will evaluate your application.
Yes.
And AI should just say yes from the beginning.
In which case, the whole process should be less than 10 seconds, right?
Or instantaneous, right?
Or instantaneous, right.
We're advisors to a project called Fermi America,
which is the largest energy generation project in the world, like 12 gigawatts.
And they filed their S-1. And instead of taking two years, they did it in a few weeks using AI.
I would add, if you think we're on the verge of an explosion of math, science, and engineering discoveries that are generated by superintelligence, then it also follows, I think, that we're about to have a glut of discoveries that our present governance mechanisms, including, Reid, to the point of your startup Manas AI, don't necessarily know how to metabolize.
If we have a thousand cures that are developed by AI overnight, how do we get those through clinical trials and get them out deployed for the public benefit?
And so to that extent, I think in the abstract sandboxes and special economic zones and other ways to basically offer new platforms for modifying governance mechanisms to metabolize that glut of inventions and discoveries are probably super net helpful in the long term.
Well, I'll tell you, in the Foundations of AI Ventures class at MIT, maybe a third of all the business plans that come out of that class are something health-related, and they're really, really good ideas.
And they all end up concluding they need to go to India to get started and they'll come back to the U.S. later because the FDA is so slow.
Just like Zipline got started in Africa and came back to the U.S.
I think we're going to see a huge amount of that geographic arbitrage just because it'll be.
And this is where we talk about innovation on the edge, right?
You don't ever want to do innovation in the core organization.
You want to do it at the edge and point it into adjacent areas.
I think we'll do the same on this side of things where we can set up sandboxes at the edges of cities or countries, whatever.
Go do it in a safe place.
And then when it's working, you can demonstrate that, then come back into the mothership.
Reid, a closing thought on this one?
Well, I do think that it's absolutely critical to be imagining, to be seeing what we can get.
And so, for example, the simplest one that I go to in this is we should create clear safe harbor mechanisms for creating a 24/7 medical assistant that runs on every smartphone, because the benefit from that is huge.
Massive.
And, you know, obviously plaintiff attorneys, you know, other kinds of things will try to attack this. That's part of the reason why, like, people think, oh, we get to overregulation only because the government regulatory agencies have a natural bureaucratic accretion. But a lot of it's actually, like, liability law from, you know, plaintiff attorney associations and so forth. And you need to actually, in fact, sandbox that in a way to get that. And I think that then can be used as an example across the whole thing. Those are the kinds of things that I would pay much more attention to in this.
And I think it'd be good to do. Like on the energy side, I'd like to see the energy stuff happen.
That was, you know, kind of promised. That's really important. Energy is going to be a really key part of this, but so far, all I've seen is a lot of tweets and relatively little action.
You know, Reid, I'm so glad you said that, because one of the great advantages of America is we
have 50 distinct states and you have 50 different ideas and you have opportunities to try things.
And that variety should be a great strength for us.
But what actually happens in practice, if you launch an app, it's like a medical app,
it naturally goes out to all 50 states.
And then you always get sued in East Texas, which is Ted Cruz's territory, by the way.
You should just, like, look, that whole tort law world is so messed up, because it's 50 different shots at you, which means, just by random chance, some really weird jurisdiction is going to come after you.
I'm sure this happened at LinkedIn, so you're probably very aware of this.
But it's horrible.
It completely backfires versus what the intent of the design was in the Constitution.
So we've got to get that fixed.
Every day I get the strangest compliment.
Someone will stop me and say, Peter, you have such nice skin.
Honestly, I never thought I'd hear that from anyone.
And honestly, I can't take the full credit.
All I do is use something called OneSkin OS-01 twice a day, every day.
The company is built by four brilliant PhD women who identified a peptide
that effectively reverses the age of your skin.
I love it.
And again, I use this twice a day every day.
You can go to OneSkin.com and use the code PETER at checkout for a discount on the same product I use.
That's OneSkin.com, code PETER at checkout.
All right, back to the episode.
Jumping back to India,
OpenAI plans an India data center in a major Stargate expansion, planning a one-gigawatt facility accounting for 22% of India's entire data center capacity by 2030.
It's part of OpenAI's $500 billion Stargate project.
So a fascinating thing here
is, again, OpenAI planting its flag in different regions around the world, trying to capture early users.
How do you think about this, Reid?
Well, actually, I suspect it's less a user-grab thing, although, you know, that's totally possible, than, for OpenAI, a range of business.
And it's more that OpenAI is very clear-eyed that scale is the thing that's creating a huge amount of this potential opportunity. Scale needs scale compute and scale energy.
And so where can you get that?
And wherever it can work on getting a deal that works within kind of the Western ecosystem, it will do that.
And I think that's how to interpret this.
And, you know, that was a little bit like my earlier comment,
which is we are so behind on doing all the energy stuff.
Massively.
Massively.
And the real need is really acceleration.
And, you know, back in the prior administration, I was kind of trying to circulate plans about doing deals with Canada to try to make this work from a kind of North America and U.S. perspective.
But, of course, since the current administration is, you know, trying to, I've never seen the Canadians so pissed off with us in my entire life, you know, that becomes less of an option.
Yeah.
I have a Canadian passport, so enough said there.
Yeah.
Salim, was this discussed while you were in India?
It was, but it's mostly seen as a marketing tactic, kind of a planting-a-flag thing.
This is going to take a while to roll out, though.
And India has quite significant infrastructure challenges to do this in a kind of reliable way.
But I think the general trend is huge, and I think what I see there is OpenAI looking at the youth of India and planting a major flag, saying, let's make sure we're completely accessible to all the young people in India. And by planting a data center there, you solve for a lot of the data sovereignty issues that lots of people are concerned about. Yeah, this is critical. They have a very,
you know, literate, tech-forward youth that they need to engage. On this, on the note of making geographic grabs, you know, India is 1.4 billion people; you know, we're going to go to just under that, at 10 million, in Greece.
So OpenAI and the Greek government launched OpenAI for Greece.
Congrats here to Prime Minister Mitsunekis, Mitsutakis, and Vasili Kutumap, Kutumpas, digital AI.
Listen, I know him.
Easy for you.
I'm just messing it up.
But I love the fact that we're starting to see country after country begin to think about what is their AI strategy and beginning to partner on this.
So, you know, I think we saw OpenAI going into the U.K. as well and, of course, going into the UAE.
Do we see Microsoft doing any of this, Reid?
Well, Microsoft, you know, kind of the original tech hyperscaler, has, well, one of the things that has been kind of amazing about being on the board there is, you know, it has an international scope of, you know, kind of relationships with multiple industries, multiple governments, multiple countries kind of around the world. And so I've simply lost track of all the things that they are doing, because it is, you know, UN-like in scope in terms of these things, although obviously a lot more efficient, because it focuses on kind of good business process and partnership and all the rest.
And this is, I think, the kind of natural thing to do.
I mean, it's one of the things where, I think, if you asked, what should our AI foreign policy be?
It's: let's provision medical assistants, let's provision tutors, and I agree with Dave's point about tutors.
I mean, a tutor, just to make people understand it, the fact that it can condition the learning for you, wherever you want to go, and in the metaphor and language that you want to use and all the rest, like, that's the kind of thing we should be doing.
So I think this is awesome, well done by the Greeks, well done by Open AI, and we should see a lot more of it.
Can I ask you a question, Reid? You know, OpenAI going to India for power makes no sense whatsoever.
They're short on power, and it's all coal.
Well, you know, India has doubled their power generation as the U.S. has remained flat.
So India's on the rise faster than the U.S. is.
They're deploying solar.
They're deploying solar at the most staggering rate.
Are they?
Well, they have a lot of sun.
But it still doesn't make a lot of sense to me.
But what I wanted to ask you about is RLHF engineering.
You know, Mercor is just growing like wild now that Scale AI has been acquired.
And a huge fraction of what is going on there is in India.
And we were at 1X Robotics a few weeks ago, too.
And they're like, hey, you know, all this kinematic telematic data is going to be a gold mine for teaching the robots how to pour a cup of coffee without spilling it.
And so a lot of that work, you know, which is creating a huge amount of jobs.
It's a new type of job.
But the Indian workforce is absolutely perfect for filling all those positions quickly.
So is that potentially a factor in why Open AI is pushing so hard into India?
I don't think they need to do it for that.
My guess is it's any major scale partnership, and, you know, look, I think they're looking for power.
And so, you know, Salim pointed to solar, I don't know, but I would hope.
I do know that there's a bunch of other areas in the region that also have good access to a lot of clean power, like Bhutan.
And so I think that's more of it.
But by the way, yes, let's use the talent.
And I don't think they need to have that kind of data center in order to do that.
You know, one thing that Vasili Kutumpas has done here is really focus this on education, right?
ChatGPT Edu for secondary schools.
And I'm still really pissed that the U.S. has not tripled down on this, right?
Made it an edict that you must use, you must be bringing this technology in.
It's one of the most important things.
My two boys are 14 years old.
And, you know, the school systems are not preparing them for the future they're heading towards anywhere close.
Yeah, and it's not that they're not doing it.
They're just moving so slowly.
But AI is just so fast, you know, it's very hard for them to react, because they're used to making decisions over a one-year time cycle. I think what's going to happen, though, is the impedance mismatch with a student learning, as we said, five times faster is just going to break the existing system. The forcing function
there will be really powerful. We've been waiting for some kind of instigation that will be a forcing
function to transform education for decades now. And I think this might be it. So this past week,
OpenAI announced it's starting an AI chip production run with Broadcom.
It's built on a three-nanometer process.
So let's start with you, Alex.
Thoughts on this one.
This lies at the intersection of so many different trends that are all converging at the same time.
On the one hand, I think this is a reflection of NVIDIA's high margins, relatively high margins.
And in some sense, this is capitalism doing its thing and encouraging additional competition.
On the other hand, I think this is a reflection of the proliferation of ASICs, application-specific integrated circuits, to compete with more general-purpose Nvidia GPUs. On the other, other hand, I think in some sense, in the same way
that Nvidia GPUs and Nvidia overall from a market capitalization perspective, displaced Intel
by being more specialized, I think we'll see a rise of AI-inference-specific compute. Something like a hypothetical OpenAI inference processor or accelerator may ultimately answer the question that many people in the industry are asking, which is, where is the next Nvidia going to come from?
I would argue, if there is going to be a next Nvidia, it's likeliest to come from a more specialized ASIC that does a better job of focusing, in a more energy-efficient way, on the sorts of tasks that we care about, the sorts of workloads we care about.
And then finally, biggest trend of all, Moore's second law: the cost of fabs is doubling every four years, and that's steamrolling the entire space.
It is so expensive to build a fab at this point that, at the same time that Moore's first law is basically ending, or near a conclusion, or tapering off, it leaves everyone else who wants application-specific acceleration fabricating their own super-narrow processors. So ten different trends are all converging in this.
The only thing I'd add to that, Alex, is that it's a new era, in the sense that if you were TSMC and you were building microprocessors, you know, pre-GPU, and somebody came out with a 2-nanometer, 1.8-nanometer, 1.4-nanometer process, everybody moved to that new chip. Nobody wants the old chip, because the new one is more power efficient and just a better buy. Now, all of a sudden, we care tremendously about just raw volume.
And that never existed before.
You couldn't just crank out chips and tile the earth with them. Now you can, and use them productively.
So I think there are two avenues going on here.
There's increasingly $20 billion, $30 billion, $40 billion fabs,
but then there's these new $4 billion fabs.
And maybe they're stuck at three nanometer.
They don't go beyond that.
But they have a huge ability to get up and running quickly
and create a massive amount of volume.
Because I think the algorithmic improvements are much more important than the difference between three nanometers and two nanometers, and so is getting scale of volume. And, you know, this slide kind of indicates that OpenAI is buying these from Broadcom, but Broadcom can't make them. They're a design company. You know, they don't have fabs.
And so, you know, where are you actually going to get the manufacturing? You look a layer deeper, and there's a huge amount of investment and job creation, by the way. And that's something all the governors should be looking at. Get those fabs in your state, like, tomorrow. Because that's where all the jobs are going to be.
Robots all the way down, buddy.
Yeah, that's true, too.
All right.
So, we have a new number one billionaire in the house.
Oracle co-founder Larry Ellison exceeds Elon as the wealthiest.
He's going strong at 81 years old.
I'm very happy that he is someone focused on longevity and health span extension.
I'm waiting for some good breakthroughs coming out of his work.
So OpenAI will buy Oracle compute over the next five years, $60 billion per year.
The contract is for 4.5 gigawatts of capacity.
Two Hoover dams.
I like that.
We're going to start measuring data centers in terms of Hoover dams.
So I asked the governor of Massachusetts how many Hoover dams she wants.
I love it.
So OpenAI adds Oracle as a partner.
And you may or may not be able to comment, but this begins to show some potential strain with Microsoft
and a push to avoid having a single supplier.
Comments on this, anybody? Up for grabs.
Well, I mean, obviously I can't talk about anything from an internal perspective, but I would say, you know, I think one of the simple things, as I said in a couple of comments here, is OpenAI wants to be in as many growth threads as possible. And I think it's kind of the fact that there's a bunch of volume that they could buy.
I think that's actually the real thing, more than a strain.
I got a question for you, Reed.
So, Chase Lockmiller was out here the day before yesterday.
He's, you know, he's building Stargate in Abilene, Texas.
And he's, you know, MIT class of '08.
He got two degrees in three years, not quite, you know, Alex's three degrees in four years, but he got math and physics in three years.
Absolutely brilliant guy.
And I, on stage, asked him, you know, how does the deal with OpenAI work?
I see all these videos of you and Sam Altman walking around, looking at all the pipes and wires.
And he said, we actually sell it through Oracle to Open AI.
I was like, well, I couldn't understand it for the life of me.
I didn't have time on stage to ask, but why is Larry sitting between you and Sam Altman?
I don't quite understand what his value add is in the middle of there.
I don't know either, although I do know that a lot of the Oracle deal is passed through. So, you know, in terms of provisioning and building data centers and so forth.
But I don't know the shape of it.
It made him the wealthiest man in the world.
You can see on the slide there.
So it's a big deal.
He's got a lot to live for.
Let's see if he gets to 151.
Hey, guys, it's Peter.
One of the things I found out is that a number of people are getting together for dinner
to talk about the content around the moonshots, WTF episodes.
And I want to facilitate that.
On September 24th, there's going to be a get-together.
There's a link in the show notes below.
People are getting together here in L.A.
If you'd like to go to one of these dinners, click on the link.
And have fun.
We have amazing subscribers listening to this, people who are building companies who are really going after moonshots.
So check it out.
I'd love to hear in the comments.
If you go to the dinner, what you thought of it.
By the way, I don't have an affiliation with the company listed in the link below.
And while I don't normally show up to these dinners, occasionally I do.
All right, have fun.
Back to the episode.
Right. Moving on to some Anthropic news. Anthropic raises $13 billion in a Series F at a $183 billion valuation. They've got a revenue surge from a $1 billion run rate in January, now up to a $5 billion run rate as of August. That's insane. Over 300,000 businesses have gotten enterprise accounts. Fastest growth curve in tech history and one of the most valuable AI firms. We're going to start to redefine, you know, the Magnificent Seven very soon as something else.
I think the broader picture here is that Enterprise has been sitting around
watching all of these foundational models get to a certain point.
But now you need the robustness and data sovereignty and on-premises stuff that the enterprises need.
I think that's where the massive infrastructure and investment will go next.
I totally agree. I don't know if Dario will do it, because he's very, very, very safety conscious. So TBD whether Anthropic is the company that does it or not. It's interesting
that Dario is totally neutral in these battles now because everybody's making their own chips.
And so the battle between Jensen and his own customers is just beginning and it's going to be
epic to watch. But Dario is still the one guy who's neutral in every conceivable way. I can work
with anyone. I can sell to anyone. I don't know. We'll look at an article coming up very shortly.
Reid, did you see Dario's presentation at Davos, or a recording of it, where he said that he could
imagine doubling the human lifespan the next five to ten years on the back of AI?
I did, but I know Dario well, so it doesn't surprise me.
Yeah.
What do you think?
What do you think of his machines of loving grace?
No, no, I want to ask him still about the lifespan, you know.
Do you buy this idea of AI helping us double the human lifespan?
I guess this short answer is trivially yes.
It's just kind of a question of time frames and how has happened.
I mean, part of, for example, you know, the reason why Dr. Siddhartha Mukherjee and I are working on Manas AI for accelerating drug discovery, with a focus on cancer, is if you begin to get, you know, kind of a set of the different cures (we naturally age in various ways, but a set of the different cures) which substantially elongate the aging curves in a healthy, prosperous way, that does it. If you have a medical assistant that allows you to be much smarter about, you know, kind of consumption and other kinds of things, that helps too. And then precision medicine, obviously accelerated with AI. I think it's very straightforward.
Love it. Dave, you're going to ask a question. Oh, yeah, Machines of Loving Grace, his whole, you know,
kind of treatise on the future. Did you like it? Oh, I loved it.
Look, I think part of the thing that people misunderstand about some of these people, you know, he made his comment about safety, and it's very funny with this next slide, he is very focused on safety, but the reason he's in it is a pro-humanist reason.
It's the same reason why the OpenAI people are in it.
It's like what, you know, what are the ways that we elevate the human condition?
So speaking about this, here's our next slide.
The title is "AI Safety Sparks Anthropic Hunger Strike."
This is a quote from the guy on the hunger strike.
I'm on a hunger strike outside the offices of Anthropic because we are in an emergency.
The AI company's race is rapidly driving us to a point of no return.
I'm calling on Anthropic's management to immediately stop their reckless actions,
which are harming our society, and remediate the harm caused.
So, I don't know, this is from a few days ago.
He was on day three at that time.
I don't know if he's still on a hunger strike or if someone's Uber Eats has delivered him a meal, but I don't know.
Might have been way mouth, but yes.
But in all respect, I mean, here's someone who cares deeply and is trying to make the
point.
What I find fascinating is what I think about all of the, you know, frontier companies, I think
Anthropic is the one who is the most sensitive to, you know, to these topics, to AI safety.
Yeah, I wonder why he picked that one.
Maybe it was the closest to where he lived.
Yeah.
I don't know.
Maybe he didn't know where the new xAI headquarters were.
Yeah, exactly.
Yeah.
This feels to me as a candidate application form for the Darwin Awards.
Oh, my God.
Oh, no.
Oh, no.
Okay, moving on.
So, Amazon's AI resurgence: AWS and Anthropic's Trainium expansion.
AWS cloud revenue slowed as Google and Microsoft pulled ahead.
Congratulations, Google and Microsoft.
Amazon's invested $4 billion into Anthropic.
I guess that was part of that $13 billion Series F round, to build 1.3 gigawatts of data center capacity dedicated to AI training.
And Anthropic will run on Trainium 2, Amazon's in-house AI chip, cheaper per unit of memory bandwidth versus NVIDIA.
So this is what I was saying, Dave, when you were saying about Anthropic and NVIDIA.
It looks like they're shifting towards working with Amazon here.
Yeah, well, the Trainiums, and then the TPUs at Google, are incredibly good inference-time designs.
Maybe training, maybe not, we'll know soon.
But yeah, definitely a very serious threat to Nvidia.
Not because they won't sell out everything they can make.
There's no doubt about that.
But if your chip is more performant, then you can argue for more manufacturing from TSMC.
That's where it all gets bottlenecked: at TSMC.
So, yeah, these new chips, I mean, everybody's competing with everybody.
It's all out war.
All these companies that were in swim lanes and could cooperate are suddenly absolutely
at each other's throats, which is great for startups.
Turbulence is always great for startups.
But it's really weird to see all of tech and a huge fraction of our economy in direct
competition with each other.
Alex, I would add also, to Dave's point, it's not just competition
at the chip level.
Maybe from my perspective, the headline that we're sort of burying here is the memory bandwidth.
So much, if you're trying to do a coherent training run, actually the limiting factor is the chip-to-chip bandwidth, not necessarily the compute within the chip.
And here, I think what we're seeing, fortunately, is a bit of competition for Nvidia's NVLink coming from AWS and Amazon.
AWS has a chip-to-chip interconnect technology named NeuronLink that is perhaps, hopefully, giving NVLink and InfiniBand a run for their money.
And to the extent that future training runs need to be coherent, open parenthesis, do they need to be coherent?
Or will we see some sort of radical overhang breakthrough in terms of distributed training runs?
Yeah, there is some stuff in China that's very promising on that front, and that would totally change it. It's funny how a little innovation, a couple lines of code, could break the whole math behind these investments. Very, very interesting, fragile kind of thing.
Reid, you can see why Alex Wissner-Gross is the favorite moonshot mate on this podcast. He exudes brilliance.
Yes.
All right, let's move on here. Next up is Polymarket.
So Polymarket finally is coming to the U.S.
It's an incredibly useful product.
Do you play with Polymarket much, Reed?
No, I've done it a couple times.
I'm obviously fascinated by it from a market point of view and, you know, kind of how it plays out into the various ways in which the general crypto environment is shaping, and how we shape it to try to make our societies better. You know, Polymarket, for me, is wisdom of the crowds. One of the things we
discussed on a previous WTF episode is the idea of AI is being able to predict the future. And the
question is, how do you do RL with, you know, predictions of the future? I guess you could look at them
in retrospect, but Polymarket could be an interesting truth signal. Alex, do you think so?
I think with Polymarket, and prediction markets in general: if you're a startup and you want to do free, real-time research on your customers or on your competitors, prediction markets are a way to do that. And I think while we're still in this gap, this window of time between when we have prediction markets, sort of collective intelligences, or, Peter, I know you like borganisms.
I love that. And when we get super intelligences, prediction markets are the closest thing we have
to a crystal ball for the future. At some point, probably, I would argue, we get our superintelligences, we get our Isaac Asimov Hari Seldon psychohistory AIs that predict the future. At that point, maybe prediction markets get subsumed.
By the way, there is an asterisk here, which is important, which is that prediction markets don't sit separately from the world the way theory does. So, like, some of the things that I've been seeing happen is people putting bets on, you know, what color dildo will be thrown onto the rink first, and then it becomes an economic incentive, because you put a bet on blue and you show up there trying to toss your blue dildo onto the rink first.
And so there's a weird intersection with society
where it's not just a kind of a physics of prediction
but an interleaving of dynamic incentives
and what happens there.
The best way to predict the future is to throw the dildo yourself.
Yeah, the incentive.
Yeah, but Peter, this is the problem with superdeterminism. You see, everything that Reid just said, I made him say that.
That was an incentive.
Love it. All right. Moving on. All right, this was a great note. I love this chart.
So U.S. patents have exploded during the AI revolution. So you can see here on this chart,
the number of patents per year. My God, the poor patent examiners, they've got to be displaced by
AIs. And we see here in 2022 an explosion: 6,000 and more patents granted in 2024 versus 2023.
It's, you know, it doesn't get more exponential, or sort of a vertical ascent, than right now. What I read from this is that people are using AI to invent, of course.
Well, not only that, they're using AI to create the patent
application. I mean, so one of the things,
one of my favorite things I did years
ago when ChatGPT first came out
was I said, okay, here are two
patent numbers. This is
the business I'm in, how would I use these patents to create a new product or service inside my
business? And it was, you know, here it is. And, okay, is that now patentable? So just this ability
for people who want to explore this area is fast. Well, if you've ever been through the process,
too, one of the companies in the studio is thing struck. It's Nicky Abate and Julius.
They started doing academic research using AI.
It's like a toolkit, and they quickly moved over to patent research and then patent filing.
So they've automated the process.
But if you do it the old-fashioned way by talking to a lawyer and they're saying,
okay, explain to me what this technological breakthrough is.
Like, oh, my God.
From ground zero, you want me to explain it?
It'll take days.
But you do it by AI, and it's instantly, you know, here's the application.
let's go. And presumably the Patent Office has to read it with AI too because, you know,
this will keep going up. It's an arms race. It has to. There's a patent law firm out here
that I've recommended to some of my companies. They're based in Boston as well. What they've done
is they've analyzed all the patent examiners by different category. And they've looked at the
percent of allowances they've had. They've also looked at the time between application and review.
and so they will direct your patent application to the examiners who have the highest rate of acceptance
and the lowest time to review. It's a game. You know, humans in their loop can be gamed.
This reminds me of that study that was done where lawyers bringing their clients up for parole hearings kept trying to put them in after lunch, because they found that before lunch the judges are hungry and you're going back to jail. After lunch, they were biologically happier, and you were 30% more likely to go free, because the judges are, you know, happier. Like, that's just gaming the system to an nth level.
That's the whole thing is just a game.
Alex.
Maybe I'll take the opposite side of Salim's comments. I would say at this stage we're still in just the earliest innings of AI generating transformative mathematical, scientific, and engineering breakthroughs. So to the extent that we're seeing any boomlet of patents being generated in part or in whole by AI, I don't expect that on average they will be utterly transformative.
I do think we will see transformative inventions being generated by AI for the next year.
What I'm talking about is, it's very clear that the patent application process, the application forms, are being redone by AI, which allows you a large number of filings. It's not that the patents are generated by AI.
I wonder what's going on here.
If you're listening to this podcast: on this chart, we look at the number of patents per year, and it's pretty flat from 1960 to 1996. And we see this rapid ascent, 6,000 patents over eight years. And then it begins to flatten out again, with 1,000 patents over what looks to be an 18-year period. And then it explodes on the heels of ChatGPT. What is that period between 2004 and '22? Why is it flatter there?
Well, you had a huge number of gene patents, because, you know, they were decided in that time frame, and dot-com patents too, I'm sure.
I would remind everyone to read the title of the chart. These are computing-related patents. So I think what we're seeing with the first boomlet is the dot-com boom, and then we're seeing the AI boom with the second boomlet.
Uh-huh. Okay, Alex, always actually looking at the data. Read the chart, read the chart.
Or at least the title.
Yeah. So we were on stage with Amjad, the CEO of Replit, and Amjad had just released an agent on that day.
This was, what, Tuesday?
But he just posted this as well, which is that Replit's agents are outpacing AI scaling.
So this is the METR benchmark, which is basically looking at measuring AI's ability to complete long tasks.
And he's saying that this benchmark is wrong.
Alex, thoughts?
Well, so if we assume that the data METR is collecting, or the timescales METR is estimating, are accurate,
an exponential fit isn't necessarily the best fit.
It could be, for example, that we're on, as Ray Kurzweil would say, a hyper-exponential curve.
If we're on a hyper-exponential curve, then it's entirely possible that we see some sort of blow-up in the next few years.
I've seen estimates that if we are on a hyper-exponential curve,
that is indeed the best fit, that there is almost an effective vertical asymptote in late
2027 or early 2028.
So it may be the case that the data are perfectly fine.
It's the fit that's perhaps overly pessimistic.
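Alex's exponential-versus-hyper-exponential distinction can be made concrete in a few lines. With a fixed doubling time, capability grows forever but never diverges; if each doubling arrives faster than the last, the doubling times form a geometric series with a finite sum, an effective vertical asymptote. The 7-month and 15% figures below are illustrative assumptions, not METR's actual fit:

```python
# Plain exponential: doubling time constant, no blow-up.
# Hyper-exponential: each doubling time is a constant fraction of the last,
# so the total time to "infinite" task length is a finite geometric sum.

def blowup_horizon(first_doubling_months, shrink):
    """Months until task length diverges, if each doubling time is `shrink`
    times the previous one (requires 0 < shrink < 1)."""
    return first_doubling_months / (1.0 - shrink)

# Assume a 7-month doubling time that shrinks 15% per doubling.
horizon = blowup_horizon(7.0, 0.85)
print(f"effective vertical asymptote after ~{horizon:.0f} months")
```

Under those assumed numbers the singularity in the fit lands a few years out, which is the shape of the late-2027 or early-2028 estimates Alex mentions.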
So on this chart here, Amjad says, listen, Agent 1 was able to think for two minutes.
Amjad said Agent 2 was 20 minutes and now Agent 3 is 200 minutes.
We're seeing a 10xing here.
And the question is, will it continue?
And I think it's probably worth adding, if I remember correctly, he attributes that to, well, perhaps some sort of multi-agent approach is intrinsically better than another approach.
My guess would be the exact opposite.
My guess is multi-agent type approaches will just be naturally subsumed into existing compute scaling laws, and we just find ourselves on a hyper-exponential.
And all of this turns into transformative discoveries and almost magical AI on the time scale of two to three years, regardless of whether it's underneath something that looks multi-agent or otherwise.
No, Alex, a bit of a question there.
Do you think that one of the things that is part of underlying this is how we do various forms of parallelism?
There's the parallelism of the supercomputer, but also parallelism of agents, because you've got mixture of experts as a key thing for the sparse models in order to grow.
I actually think one of the things you're seeing with chain-of-thought reasoning and other things is, again, putting in collections of agents
in terms of how they're operating together in order to get higher cognitive performances.
So I'm curious a little bit about what your comment is, because I actually think that multiple,
you know, the kind of parallelism and kind of at least multiple entity constructions,
even if it's to a targeting kind of a singular output, is actually part of the lesson here.
And I'm just curious, how does your comment bear on that?
I love that question, Reed.
So the way I would answer that is to say multi-agent teams,
multi-agent approaches in general, that's just a form of sparsity. So you could imagine, to your point,
multiple agents working in parallel together, that you could just view that through the lens of a
much larger, sparser architecture with multiple feed-forward lines that are all feeding forward in parallel
that ultimately connect up at some point down the road. The problem I perceive, and history will
judge whether this prediction is correct or not, with conventional multi-agent approaches, is they're
usually not end-to-end differentiable, whereas one could imagine sort of a next-generation
multi-agent approach where the agents are actually part of one end-to-end differentiable model
where, due to the way it's sparsely organized, it actually, if you squint at it, looks like
it's multi-agent, even though it's one very large but sparse model. That, that I think speaks
to your question. Yeah, we desperately need better benchmarks for this. And this came up
with Blitzy saturating SWE-bench last week or this week. You know, if you look at human endeavor
over a long period of time, many, many things happen in parallel. And then, you know, read Peter's
book, The Future Is Faster Than You Think, and you see how the synergies come later. And so here you can
spark, you know, an infinite number of parallel agents. Not everything needs to happen after the prior
thing. And so when you look on the curve, it implies, oh, I thought of this, then I thought of that,
then I thought of that. But much of that processing can
be done in parallel. And also you can have many, many redundant threads. Very often, if you prompt
10 different things, one of them works, nine don't. And so, you know, you can take that from 10 to 100 to
a thousand and just get a better result. So all that is not baked into the, you know, the y-axis is just
how much time is it thinking, which is a crazy metric when you think about it.
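Dave's redundant-threads point can be quantified: if one sampled attempt solves the task with probability p, then N independent parallel attempts solve it with probability 1 - (1 - p)^N. The p = 0.1 figure below matches his "one in ten works" example, and the independence between attempts is an idealizing assumption:

```python
# Best-of-N parallel sampling: success probability climbs fast even for small
# per-attempt odds, which is why firing 100 or 1,000 redundant threads beats
# one long serial chain of thought on many tasks.

def best_of_n(p, n):
    """Probability that at least one of n independent attempts succeeds."""
    return 1.0 - (1.0 - p) ** n

for n in (1, 10, 100, 1000):
    print(f"N={n:4d}: P(at least one success) = {best_of_n(0.1, n):.5f}")
```

Going from 1 attempt to 10 lifts a 10% success rate to about 65%, and 100 attempts to better than 99.99%, all wall-clock parallel, which is exactly what a time-per-task y-axis fails to capture.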
Hey, folks, Salim here. Many of you've asked where we can see more of Salim and where is he based,
etc. Well, we do a monthly workshop called 10X Shift, which is happening tomorrow, the 17th.
On that workshop, we go two hours. It's not recorded. We keep it limited to about 100 people. It's $100, and people say it's the best 100 bucks they've spent. We coach people on how to 10x to 100x their organization using the Exponential Organizations model. We go through and look at live examples, and we take questions and do coaching live on the call. I'm on the whole call from end to end. So if you want to hear more from me, that's the place to do it.
The 10X workshop. Link is below, or go to openexo.com. We'll see you there.
So there's technology we've all imagined years ago, and it's finally here,
which is live language translation in my Apple AirPods. Let's take a listen.
Talk. Just speak naturally. I'd love to take some of these to my sister for her birthday.
I'll buy eight, please. Your iPhone displays your words in their language.
And can even read them out loud if needed.
Vale? (Okay?)
Live translation is even more useful when both people are wearing AirPods Pro.
I agree, yeah.
Let's include the key.
I'm going to include the key point in the presentation of the
Feather.
With certainty, the client will love that.
I'll let the strategy team know to prepare that immediately.
This incredible capability is enabled by advanced computational audio on AirPods, combined with Apple.
So we saw Duolingo take a stock hit when Google's live translation went live.
I haven't looked at Duolingo's stock price here, but at the end of the day, you know, this looks like another incredible should-have-existed, finally-does-exist, and it's going to make the world a little bit smaller.
Any quick thoughts on this one?
Apple finally launched Douglas Adams' Babel fish.
Yes, Babelfish, thank you.
Yes, of course.
And hopefully it's a lot better than Siri.
Don't get me started.
This is audio-augmented reality, and I would say, now do video.
Give us our lightweight smart glasses.
Yeah.
I think the video, the augmented conversation with video is going to be incredible.
I think that's the real vision.
This is kind of cool, too.
Yeah.
Yeah. Next subject is robots and transportation. So Elon's made this point before, you know: by 2040, expecting 10 billion Optimus robots. And he's gone to the market and said, you know, cars are okay, but the real opportunity is Optimus robots. So he's planning to scale to a million units per year within five years. Automotive sales comprised 74%. So a little bit about this. He was just really,
recently speaking about this, saying that, you know, Generation 3 is coming online soon
with the manual dexterity of a human. In particular, huge focus on the forearm and the hand
with 26 actuators. His goal, if he gets to a million per year, is a cost of manufacture of
$20,000 each, and he'll price it depending upon what the demand is. But at the end of the
day, what we've talked about here on the pod is an expected price in the 30,000 per purchase,
300 bucks per month, or about $10 per day. Any comments on Optimus?
I had dinner with Rodney Brooks last night, who is the iRobot founder.
Yes, Rodney is the OG in the space.
Yeah, but he was really pessimistic. The question at our table was, will we have a robot in our home by 2035, which seems like a lifetime?
Oh, my God.
And he said, no.
I was like, really?
Yeah, it's all supply.
The technology will exist, but the supply chain won't be there.
So here you're talking about Tesla making a million Optimus units per year within five years.
But there's 300 million people, 150 million households.
So that means very few of your friends have one in five years, just because the supply chain is so slow to catch up to the demand.
I'm supposed to get my 1X by the end of the year, at least by March, right?
You heard me; you saw me shake hands with it, though.
I was talking to Steve Cousins about this, and we kind of talked through some of this.
And there's all sorts of issues.
One is battery life is still way low for a bunch of these applications.
The second is, if it falls over, it's going to be so heavy, it's going to be very hard to pick up.
The 1X is like 70 pounds.
And Lord help you if it falls on you, type of thing.
So there's a lot of areas where I think this is going to take much longer. It's not so much the supply chain; I think the liability issues, and constraining the function and the actual action of what it does, are going to take much longer. We'll have to solve the insurance and legal issues first.
Are you an optimist on this, Reid, or a pessimist, a robot pessimist?
An optimist, did you say, on this?
Look, I was just kind of bemused.
Look, ultimate long term, I think, obviously, it's there.
I think the short term, you know, I don't know if Tesla has ever met a projection.
Hit a target.
Touché.
Alex, you were going to say something.
Yeah, I would also, I mean, I would focus on that 80% figure that I think is such a striking number.
If you think about the, from a market analysis perspective,
think about the size of automotive.
It's probably like $4 or $5 trillion per year worldwide.
Whereas if you think about labor and the services market, depending on which estimates you believe, manual labor is like two-thirds of the services economy, so call that like $20 trillion.
So in some sense, this 80% of Tesla's value is really a bet that Tesla achieves parity with the services market.
It's a general sort of universal intelligence powered services company.
And I think that's probably where the market overall ends up.
Well, you remember, he's got to hit $8 trillion to get his trillion dollar pay package.
So if anybody can do it, I think Elon can.
So anyway, let's move on.
So other robots here.
All right, let's take a look at a different design.
This is called the hidden robot boom.
This is a generation of robots that don't have, you know, two arms, ten fingers, Salim.
Hear, hear.
Yes,
Salim's been on the WTF episodes.
Like, why do they have to have two arms and two legs and a head?
Well, okay, this is what they look like otherwise, Salim.
Do you want one of these?
Let me know.
I think this is awesome. I saw one where they were using it to map out the floors of a construction site, mapping out exactly where the pillars and so on would go.
I think this is huge.
I think the industrial use for robots is so far ahead of the home use for a while to come
that people aren't estimating that.
I think the efficiency gains from that are going to be huge.
And obviously the form factor is not going to be humanoid.
I mean, my beef, you've heard me before: at least give me a third arm if I'm a humanoid robot, at least?
Or a dildo.
Anyway, the use here: wind turbines, nuclear plants, subsea pipelines, railways, tunnels,
power lines, the old adage, if it's dull, dangerous, or dirty, use a robot to do it.
The estimated market in this particular tweet that came out is that this sort of marketplace for robot maintenance and inspection is $6.7 billion today, growing at about 13% per year, expected to hit $12.5 billion by 2030.
I think that's a radical underestimation.
So just for example, if you look at the Mekong Delta in Indonesia, Vietnam, whatever, it's so
polluted.
And if you had underwater robots cleaning it up, it would just completely change the game,
make things massively better. It's like the smallest titch of an application. I just want the robots
cleaning the side of the 405 out here in L.A. Or the beaches. Yeah, sure. Okay. I found this one super, super interesting. And this is: surgical robot performs gallbladder procedure autonomously, right?
So this is different than the Da Vinci robot, which is basically an extension of a human operating in a
theater. And you guys all know what I've said about this. If you need to hire a surgeon for something (and this is to our amazing WTF moonshot subscribers), if you need to get a surgery and you want to interview your surgeon, there's one question you ask them, which is: how many times have you done this surgery this morning? Right? The success of a surgeon is a function of how many different cases they've seen and sort of the, you know, muscle memory on doing these. So in the final result,
I do believe the best surgeons in the world will be robots.
They'll see in infrared and ultraviolet.
They won't have had a fight with their girlfriend or boyfriend.
They won't have had that drink, or drunk too much caffeine.
So this comes out of Johns Hopkins, and they built a surgical robot without any human control.
It achieved 100% accuracy in this gallbladder removal.
It's different than Da Vinci.
And DaVinci came out of, it was a DARPA project, right, to help surgery in the field.
I think this is huge. You know, how long until we have this in the future?
I don't know.
I don't think it's more than, you know, three to five years.
This is a, this is a sensor, actuator, machine learning problem.
Alex, what do you think about it?
Yeah, no, notably, Peter, so I read the paper, very exciting paper at that from the Hopkins team.
This was a model that was trained by imitation learning.
So it was trained by watching videos of human surgeons perform surgeries.
And that immediately rhymes in my mind with the early deep mind results like AlphaGo that were trained from watching in part expert human games.
And I think we're going to, and I would expect if history does rhyme, we're about to enter an era when using digital twins.
And maybe this may or may not be aligned with what Reid is thinking for curing cancer with Manas.
We're going to transition from imitation-learning-based medicine and surgery
to reinforcement learning based medicine and surgery.
The moment we have a high-fidelity digital twin of the human body, and of course, turtles all the way down, virtual cell models as well, why train off of copying humans when you could do reinforcement learning and achieve potentially super-duper-human-level performance?
So I think, again, early innings, but I think it's inevitable. We'll see sort of a Surgery Zero, MuZero version of this sometime soon.
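Alex's imitation-versus-RL contrast can be shown in miniature with a two-option toy task. The "expert demonstrator" (standing in for the human surgeon on video) always chooses a decent option, so pure imitation caps out at the expert's level, while a simple epsilon-greedy RL agent explores and finds the better option. The setup and payoffs are an invented toy, nothing from the Hopkins paper itself:

```python
import random

# Two options with fixed payoffs; the expert only ever uses option 0, so a
# policy that imitates the expert can never beat 0.6. An RL agent that
# explores will discover option 1 and its 0.9 payoff.

PAYOFFS = [0.6, 0.9]

def imitation_policy():
    return 0  # copies the demonstrator exactly, forever

def train_rl(steps, epsilon, rng):
    """Tabular epsilon-greedy bandit learner; returns its preferred option."""
    values, counts = [0.0, 0.0], [0, 0]
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(2)                       # explore
        else:
            arm = max((0, 1), key=lambda a: values[a])   # exploit
        counts[arm] += 1
        values[arm] += (PAYOFFS[arm] - values[arm]) / counts[arm]  # running mean
    return max((0, 1), key=lambda a: values[a])

rng = random.Random(0)
rl_choice = train_rl(1000, 0.1, rng)
print(f"imitation earns {PAYOFFS[imitation_policy()]}, RL earns {PAYOFFS[rl_choice]}")
```

The same logic is why an RL-trained surgeon on a good digital twin could exceed the demonstrations it started from, where an imitation-only model is ceilinged at the best human it watched.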
Reid?
Look, I think the surgery part of this stuff is, I think, I agree with you, line of sight.
I think we already, you know, it's a little bit like, for example, today if you said, would you pick an AI or your average radiologist to read your X-ray film, you'd pick the AI today, you know, hands down.
Like, you know, 11 out of 10 times.
And I think we're heading to that with the robotics now. I think, actually, you know, human biology is actually
quite complicated and the ability to do a full simulation, you know, is some ways off. But I think that
this kind of robotic thing, oh my gosh, you know, hit the accelerator. Yeah, love it, love
it. I think of it as air traffic and flying a plane, right? Today, the plane will fly itself 99% of the time. The pilot is only there in case of emergency. I think we'll see the same thing, yeah. All right. So Zoox. I remember Zoox; I went and visited them and met the team there. They were acquired by Amazon back in 2020. And this is a self-driving pod, right? This is you and your best buddies facing each other inside there. And they're launching, finally. Good for them. They're launching in Las Vegas. They have massive scale, well, not really: 50 vehicles in their fleet.
they'll start in Las Vegas, then San Francisco, Miami, L.A.
It's free for the first few months, but then they're going to go to similar pricing as Uber and Lyft.
Any thoughts, gentlemen?
Have any of you guys taken the self-flying drone in Dubai?
You know, you push the button and it picks you up?
Not yet.
Not yet.
Dying to. I just don't want to be first.
Just don't want to die.
Let's put it that way.
They're in production.
Yeah.
Peter, you're usually really adventurous with that stuff.
I would do it in a heartbeat.
I saw it first at Consumer Electronics show.
You know, Martine Rothblatt bought like a hundred of those vehicles early on for organ delivery
before she got involved in her own, you know, vertical-takeoff flying car, Beta.
All right.
Let's wrap here at the end of the program.
I'm, you know, I remain just continuously like a kid in the candy store at the speed at which this is moving.
You know, every day waking up.
And, you know, I love the articles you send over Alex and the conversations that we have.
You know, Reid, what are you most excited to see in the next year?
Well, I'd say the next year will be part of the reason why I think the focus on coding and acceleration of coding is I think it accelerates everything, accelerates individuals as per the co-pilots that I was talking about before, but also accelerates the discovery and the computation, the algorithms that Dave was talking about, and I actually think we will see massive coding acceleration, and that will be a precursor to many other accelerants.
But is there one science, one Star Trek part of the equation that you're looking forward to?
Um, you know, we tend to over-predict the two years and under-predict the 10 years. So it's a little bit of the, you know, what would be the science fiction thing. I mean, obviously we saw, um, you know, and I'm very hopeful about the ipod three's...
Maybe, maybe a tricorder. Yep. I think we've reported on some early version of the tricorder. I had done a $10 million Qualcomm Tricorder XPRIZE 10 years ago.
It's time to do it again. I think the tech is there for sure.
Salim, what about you? What are you excited about?
Passenger drones would be my personal favorite. eVTOLs. I spent eight days out of nine in India traveling between airports. And like, what the hell kind of waste-of-time crap is that in today's world? The technology is there. It's just an implementation and infrastructure issue.
We saw a flying car on the campus of Stanford two days ago.
Yeah, very, very impressive, very impressive design, and it's very workable.
He thought he could, over time, get the cost down to about $40,000 a car.
Yeah, I can't wait.
What I'm most excited about, by far, is a version two of Reid interviews Reid.
I thought that Reid AI talking to the real Reid was one of the most brilliantly conceived pieces of media.
Anyone who hasn't seen it, dig it up.
You could do it so much better today, actually, because when you did it, actually, I assume
you coded that up yourself, but that was pretty hard to do when you did it.
Now you could do something incredible.
And Peter did, he did an on-stage interview of Socrates and Aristotle that turned into a big love fest.
So if you don't prompt it right, they just love everybody.
I did an interview last year at the Abundance Summit with a 150-year-old version of myself, which was fun.
And, you know, I asked it about the future, and it had some great answers.
I loved it.
Well, I'll say to everybody watching this pod: post "Please, Reid, do it again." Do it and update it, and that'll put some pressure on you.
And, Alex, let's not forget you, buddy.
What do you imagine over the next year that would really sort of, you know, hit your childhood ambitions?
Oh, gosh.
I think I'm getting rather difficult to ontologically shock at this point, but I will say: pulling all of Star Trek, not just some of Star Trek, to the left.
I think that's a worthy ambition.
We've got the holodeck.
We've almost got the replicators, if you squint at food printers.
We're missing warp drive.
We're missing a whole bunch of other aspects of Star Trek.
Wouldn't it be lovely if we were able to pull those to the left as well?
I've got one.
I was chatting with Steve Jurvetson at the Stanford conference, and he reiterated that crazy anecdote about how, once we have a quantum computer, it'll be definitive proof of a multiverse.
And that really needs alcohol to get into.
All right. Well, we'll do that in one of our next sessions. Thank you, everybody, to our
subscribers. We appreciate you. If you're not a subscriber, come join us. These are the conversations
we have that we hope will make you more intelligent, more excited about the future,
help you understand what the hell just happened in the last week, because the speed of change
is not just exponential. It truly is becoming hyper-exponential. See you, guys. My moonshot mates,
appreciate you. Thank you, buddy. It's been a wonderful friendship.
I'm grateful for you.
Massively fun.
Yeah.
If you could have had a 10-year head start on the dot-com boom back in the 2000s, would you have taken it?
Every week, I track the major tech metatrends.
These are massive, game-changing shifts that will play out over the decade ahead.
From humanoid robotics to AGI, quantum computing, energy breakthroughs, and longevity.
I cut through the noise and deliver only what matters to our lives and our careers.
I send out a metatrends newsletter twice a week as a quick
two-minute read over email. It's entirely free. These insights are read by founders, CEOs,
and investors behind some of the world's most disruptive companies. Why? Because acting early is
everything. This is for you if you want to see the future before it arrives and profit from it.
Sign up at diamandis.com slash metatrends and be ahead of the next tech bubble. That's diamandis.com
slash metatrends.